
Restore topolvm support #108

Open
wants to merge 5 commits into base: datadog-master-12.0
Conversation

Fricounet

This PR adds back a previous commit (added in #52) that is needed to provide proper support for topolvm scaling in the autoscaler. This allows the autoscaler to scale node groups that have local data when a pod requests some.

This also renames the topolvm storage class from topolvm-provisioner to dynamic-local-data to provide a better UX by being more descriptive about what kind of volume the user can expect.
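For illustration, the renamed storage class might look like the following manifest. This is a hypothetical sketch, not the PR's actual diff: the provisioner name and parameters are assumptions based on TopoLVM's published defaults.

```yaml
# Hypothetical manifest; the PR may set different parameters.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: dynamic-local-data   # renamed from topolvm-provisioner
provisioner: topolvm.io/provisioner
# WaitForFirstConsumer lets the scheduler pick a node with enough
# local LVM capacity before the volume is provisioned.
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```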

dhenkel92 and others added 3 commits June 10, 2024 14:31
Rename the storage class from `topolvm-provisioner` to
`dynamic-local-data` in order to provide a better user experience.
Users don't need to know that the storage class uses topolvm
underneath, but they are interested in knowing what kind of volumes
will be provisioned.
We don't plan on supporting openEBS in our clusters in the foreseeable
future, so let's drop the code to autoscale nodes based on it for now.
Running TopoLVM for persistent local data, as we do with the local volume provisioner, would be a pain in terms of scheduling (how do we make sure a pod using shared local data on a node can always come back to it?). As a result, we will keep the current setup with the local volume provisioner for persistent storage while leveraging TopoLVM for ephemeral storage on the node. This way, we won't have to deal with TopoLVM scheduling issues.
Explicitly do not trigger a scale up if we request a persistent topolvm volume (or an ephemeral local data volume)
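For context, a workload would consume the renamed class through a generic ephemeral volume along these lines. This is a hypothetical sketch: the pod name, image, mount path, and requested size are all illustrative, not taken from the PR.

```yaml
# Hypothetical pod consuming ephemeral local storage via the
# dynamic-local-data storage class (generic ephemeral volume).
apiVersion: v1
kind: Pod
metadata:
  name: ephemeral-local-data-example   # illustrative name
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest   # placeholder image
      volumeMounts:
        - name: scratch
          mountPath: /data
  volumes:
    - name: scratch
      ephemeral:
        volumeClaimTemplate:
          spec:
            accessModes: ["ReadWriteOnce"]
            storageClassName: dynamic-local-data
            resources:
              requests:
                storage: 10Gi   # illustrative size
```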