# Design and Implement a Cluster Scale Up/Down Mechanism #58
## Comments
Another idea to consider is the case where a user manually recreates a replica (by deleting the pod and the PVC). In such cases we need to verify within the cluster that the old replica no longer exists.
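One way to frame that verification is to diff the etcd member list against the pods that actually exist. The sketch below is a minimal illustration of that check as a pure function; the member/pod names and the `stale_members` helper are hypothetical, and a real operator would obtain the inputs via the etcd client (`MemberList`) and the Kubernetes API.

```python
# Sketch: given the etcd member list and the pods that currently exist,
# find members whose backing pod (and PVC) was manually deleted, i.e.
# stale members that must be removed from the cluster before the
# recreated replica can rejoin cleanly.

def stale_members(etcd_members: dict[str, int], live_pods: set[str]) -> dict[str, int]:
    """Return name -> member ID for members with no backing pod."""
    return {name: mid for name, mid in etcd_members.items() if name not in live_pods}

members = {"etcd-0": 0x1111, "etcd-1": 0x2222, "etcd-2": 0x3333}
pods = {"etcd-0", "etcd-2"}  # etcd-1's pod and PVC were deleted by the user
print(stale_members(members, pods))  # {'etcd-1': 8738}
```

The returned member IDs are exactly what a subsequent `MemberRemove` call would need before re-adding the recreated replica.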
### Cluster rescaling proposal

The etcd operator should be able to scale the cluster up and down and react to pod deletion or PVC deletion.

#### Scaling procedure

There should be fields … . We should introduce a new status condition … . The cluster state configmap should contain … .

#### Status reconciliation

Field … . Field … .

#### Scaling up

When … , the process is the following: …

In case of errors, EtcdCluster will be stuck on … . If the user cancels (by updating EtcdCluster's …) … . If the user sets … .

#### Scaling down

When … , the process is the following: …
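The reconciliation the proposal describes boils down to comparing the desired size with the observed membership and taking one step at a time. The sketch below illustrates that decision step; the condition name `ScalingInProgress`, the `ClusterState` shape, and the one-member-at-a-time policy are assumptions for illustration, not names taken from the proposal.

```python
# Minimal sketch of the scale decision inside a reconcile loop.
from dataclasses import dataclass

@dataclass
class ClusterState:
    spec_replicas: int       # desired size from EtcdCluster spec.replicas
    observed_replicas: int   # members currently present in the etcd cluster

def next_action(state: ClusterState) -> str:
    """Decide one reconcile step, changing membership by one member at a time."""
    if state.observed_replicas < state.spec_replicas:
        return "add-member"      # scale up: register member, then create pod
    if state.observed_replicas > state.spec_replicas:
        return "remove-member"   # scale down: remove member, then delete pod/PVC
    return "steady"              # sizes match: clear the scaling condition

print(next_action(ClusterState(spec_replicas=5, observed_replicas=3)))  # add-member
```

Because each reconcile pass changes membership by at most one member, a user cancelling mid-scale (by updating `spec.replicas` back) simply makes the next pass return `steady` or reverse direction.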
We need to design a mechanism for scaling a cluster up and down.

When a user modifies `spec.replicas`, the cluster should scale to the required number of replicas accordingly. Currently, we are utilizing a StatefulSet, but we understand that we might have to move away from it in favor of a custom pod controller.

Scaling up should work out of the box, but scaling down might be more complex due to several considerations:
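One concrete ordering concern on scale-down can be sketched as follows. The sketch assumes StatefulSet-style ordinal pod names (`etcd-0` … `etcd-N`) and removes the etcd member before deleting its pod and PVC, which is the usual safe order; the `scale_down_plan` helper and the step labels are hypothetical.

```python
# Sketch of safe scale-down ordering: remove highest-ordinal replicas first,
# and for each replica remove the etcd member *before* deleting the pod/PVC,
# so the cluster never counts a dead member toward quorum.

def scale_down_plan(pods: list[str], target: int) -> list[tuple[str, str]]:
    """Return (step, pod) pairs to shrink the cluster to `target` replicas."""
    doomed = sorted(pods, key=lambda p: int(p.rsplit("-", 1)[1]))[target:]
    steps = []
    for pod in reversed(doomed):  # highest ordinal goes first
        steps.append(("remove-etcd-member", pod))
        steps.append(("delete-pod-and-pvc", pod))
    return steps

print(scale_down_plan(["etcd-0", "etcd-1", "etcd-2"], target=2))
# [('remove-etcd-member', 'etcd-2'), ('delete-pod-and-pvc', 'etcd-2')]
```

A StatefulSet already deletes highest-ordinal pods first, but it cannot interleave the etcd membership change, which is one reason a custom pod controller may be needed.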
We're open to suggestions on how to address these challenges and implement an efficient and reliable scaling mechanism.