Prevent Policy Server Crash in case of a maintenance in Kubernetes Nodepools #383
Conversation
Validate helm chart to have more than 1 replica for policy-server Signed-off-by: Ferhat Güneri <[email protected]>
increase replica count for policy server Signed-off-by: Ferhat Güneri <[email protected]>
Signed-off-by: Ferhat Güneri <[email protected]>
Create values.schema.json
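The commit adds a values.schema.json, which Helm validates against the supplied values on install and upgrade. A minimal schema enforcing more than one replica might look like the following sketch; the `policyServer.replicas` key is an assumption about the chart's values layout and may differ from the actual kubewarden-defaults chart:

```json
{
  "$schema": "https://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "policyServer": {
      "type": "object",
      "properties": {
        "replicas": {
          "type": "integer",
          "minimum": 2,
          "description": "Number of policy-server replicas; at least 2 so one pod survives a node drain"
        }
      }
    }
  }
}
```

With a schema like this in place, `helm install` or `helm upgrade` fails fast with a validation error instead of deploying a single-replica policy-server.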
Thanks for the contribution. This fixes only the
Yes, that would be great. Take a look at kubewarden/kubewarden-controller#564 (comment) and implement the "Pod Disruption Budget" section. Feel free to reach out if something is not clear or if you need help.
Hi @flavio, I'm aware of that issue, but there has been no progress on it for a long time. This is a very critical problem and I believe it needs to be fixed immediately. Do you have any idea how long it will take to get it fixed? It is not a good idea for us to patch these Helm charts internally and keep reconciling them with upcoming changes.
I think we can start working on this fix during the next sprint and make it part of the 1.11 release, but I have to discuss that with the other maintainers.
Hi, many thanks for bringing this forward!
I totally agree with providing a PodDisruptionBudget, yet our policy is that the default values are not production-ready: people deploy with the defaults to test in a local cluster, and there is a myriad of production deployment flavours that cannot be covered via the default values.
I would welcome an optional, configurable setting for the PodDisruptionBudget and the minimum replicaCount.
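An optional, values-gated PodDisruptionBudget could be sketched in the chart roughly as below. This is only an illustration: the values keys (`policyServer.podDisruptionBudget.*`), the template helper name, and the selector labels are assumptions, not the actual chart layout:

```yaml
{{- if .Values.policyServer.podDisruptionBudget.enabled }}
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: {{ include "kubewarden-defaults.fullname" . }}-policy-server
spec:
  # Keep at least this many policy-server pods running during voluntary
  # disruptions such as node drains in a nodepool maintenance.
  minAvailable: {{ .Values.policyServer.podDisruptionBudget.minAvailable | default 1 }}
  selector:
    matchLabels:
      app.kubernetes.io/component: policy-server
{{- end }}
```

Gating the resource behind an `enabled` flag keeps the defaults unchanged for local test clusters while letting production deployments opt in.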
Closing; we will fix this inside the controller with kubewarden/kubewarden-controller#564
Description
If the policy server crashes because of a wrong ClusterAdmissionPolicy, it blocks pods from being created and cannot evaluate resources correctly, which affects the control plane. Therefore the policy-server needs to be kept always available. These changes could also be made in the policy-server Deployment itself, but since that is hardcoded in Go, I opted to edit the Helm chart.
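For clusters that hit this during nodepool maintenance, a values override along these lines would keep the server available; the key names are hypothetical and must be adjusted to the actual chart:

```yaml
# production-values.yaml (key names assumed, not the chart's actual schema)
policyServer:
  replicas: 3            # spread across nodes so a single drain cannot take all pods down
  podDisruptionBudget:
    enabled: true
    minAvailable: 2      # the eviction API refuses drains that would drop below this
```

Applied with `helm upgrade --install kubewarden-defaults kubewarden/kubewarden-defaults -f production-values.yaml`, this keeps admission requests answerable while nodes are cycled one at a time.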