StatefulSet updates with forbidden field changes should re-create the StatefulSet #318
Can you list a few such operators? Also, the reason for not adding this was to respect the Kubernetes API errors for StatefulSets. The only edge case the druid operator implements, deleting an sts with non-cascading deletion, is for scaling storage vertically in StatefulSets. Can you give more use cases showing exactly which edge case this would address? The operator cannot/will not delete an sts to force the desired state unless it needs to.
Can you post a config? This might be a bug, though I haven't seen this issue.
I actually do not remember, sorry.
That should be easy to test:
@applike-ss yes, the steps you mentioned are not allowed by the StatefulSet. It allows only the pod spec, rolling upgrade strategy, and podManagementPolicy to be changed. I don't see any operator issue here.
In our case we had to adjust the labels on the pods so that our scaling provider would not scale down an instance where Druid is running.
Sometimes we have to make adjustments to our Druid configuration that involve updating fields on a StatefulSet that are not allowed to be updated (e.g. labels).
When updating the pod labels, the operator currently changes the Service's label selector before the StatefulSet has been rolled out (either because the update fails and the operator just proceeds, or because the order in which the Service resource is updated is not correct?).
This behaviour is not ideal, so I would suggest letting the druid operator re-create StatefulSets when they can't be updated, to ensure resources are in the desired state before proceeding. Ideally this would be done with the cascade=orphan option, so the running pods are kept and exchanged after the StatefulSet is re-created.
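The orphan-deletion idea can be illustrated with a toy in-memory "cluster": deleting the StatefulSet with an orphan propagation policy (the equivalent of `kubectl delete --cascade=orphan`) leaves its pods running, and the new revision is then created in its place. This is a hedged sketch of the suggested flow under simplified assumptions, not the druid-operator's implementation; `cluster` and `recreateWithOrphan` are hypothetical names:

```go
package main

import "fmt"

// cluster is a toy stand-in for the Kubernetes API: StatefulSets are
// tracked by name -> spec revision, pods by name -> running.
type cluster struct {
	statefulSets map[string]string
	pods         map[string]bool
}

// recreateWithOrphan mimics an orphan (non-cascading) deletion followed by
// re-creating the StatefulSet with the desired spec. The pods map is left
// untouched, which is what --cascade=orphan achieves on a real cluster.
func (c *cluster) recreateWithOrphan(name, desiredSpec string) {
	delete(c.statefulSets, name) // orphan deletion: pods keep running
	c.statefulSets[name] = desiredSpec
}

func main() {
	c := &cluster{
		statefulSets: map[string]string{"druid-broker": "rev1"},
		pods:         map[string]bool{"druid-broker-0": true},
	}
	c.recreateWithOrphan("druid-broker", "rev2")
	fmt.Println(c.statefulSets["druid-broker"], c.pods["druid-broker-0"])
	// prints: rev2 true — the pod survived the StatefulSet replacement
}
```

On a real cluster the same effect comes from deleting with `PropagationPolicy: Orphan` and re-creating, after which the controller adopts the still-running pods and rolls them to the new revision.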
I have seen this behaviour in other operators, but maybe there was a reason it was not implemented here?