Support setting tolerations, nodeSelector, and affinity during helm install #8
Conversation
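The change under review adds chart values for tolerations, nodeSelector, and affinity that are passed through to the controller's pod spec at install time. A minimal sketch of the usual wiring, assuming conventional value keys and a deployment template (the names below are illustrative, not necessarily this chart's exact ones):

```yaml
# values.yaml (illustrative keys)
nodeSelector:
  kubernetes.io/os: linux
tolerations:
  - key: dedicated
    operator: Equal
    value: infra
    effect: NoSchedule
affinity: {}
```

```yaml
# deploy/templates/deployment.yaml (sketch) -- passes the values into the pod spec
spec:
  template:
    spec:
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
```

With wiring like this, users can override the scheduling constraints at install time, e.g. via a custom values file passed with -f to helm install.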
```diff
     memory: 128Mi
   requests:
-    cpu: 10m
+    cpu: 20m
```
Have you checked how many resources the controller consumes when it is running without any provider YAML deployed?
deploy/templates/autoscaling.yaml (outdated)
```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: az-appconfig-k8s-provider-hpa
```
Use the full name variable instead of hard-coding the resource name.
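A sketch of what the suggestion might look like, assuming the chart defines a standard fullname helper in _helpers.tpl (the helper name "azappconfig.fullname" is an assumption, not confirmed by the thread):

```yaml
# deploy/templates/autoscaling.yaml (sketch)
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  # "azappconfig.fullname" is an assumed helper name; using the chart's
  # fullname template lets release-name overrides propagate to the HPA.
  name: {{ include "azappconfig.fullname" . }}-hpa
```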
deploy/parameter/helm-values.yaml (outdated)
```diff
@@ -41,18 +41,31 @@ securityContext:

 resources:
   limits:
-    cpu: 500m
+    cpu: 100m
     memory: 128Mi
```
Setting the limit to only double the requested memory seems too conservative. How about using 256Mi as the limit?
Updated
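For reference, a sketch of the resulting resources block after the review feedback. The memory request below is an inference from the "double the request" remark (128Mi limit implying a 64Mi request), not a value confirmed in the thread:

```yaml
# helm-values.yaml (sketch of the post-review values)
resources:
  limits:
    cpu: 100m
    memory: 256Mi   # raised from 128Mi per review feedback
  requests:
    cpu: 20m
    memory: 64Mi    # assumed; inferred, not shown in the diff
```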