chore: improve disruption for underutilization #992
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: Luke-Smartnews. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Welcome @Luke-Smartnews!
Hi @Luke-Smartnews. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with Once the patch is verified, the new status will be reflected by the I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
```diff
@@ -3,7 +3,8 @@ apiVersion: apiextensions.k8s.io/v1
 kind: CustomResourceDefinition
 metadata:
   annotations:
-    controller-gen.kubebuilder.io/version: v0.14.0
+    controller-gen.kubebuilder.io/version: v0.8.0
```
Please run `make toolchain` to update your controller-gen version.
Tested in our dev env, working without any issues. Will do more load tests.
Can you write a description of how you've implemented this? This seems to drastically change how "WhenUnderutilized" works. Previously, it took effect when the cluster as a whole was underutilized, but now it seems to just mean that the node is underutilized?
Excited to see a POC! Further than a description it should probably be first proposed as an RFC to get reviewed by the community, and to talk about alternative approaches or if this is behavior the community wants to accept. Seems interesting though :) |
Will do after I confirm it works at a large scale. Also, is there a template for the RFC?
Test Report
Hey @Luke-Smartnews, this is a core difference in how Karpenter considers underutilization. We've intentionally not surfaced a utilization threshold due to its edge cases and how it ends up driving overall lower utilization. This would be a huge change to our disruption logic, and would definitely require opening up a design + RFC. I'm not sure we would be accepting this feature (at least in its current state without a design), but I would love to hear about the use-cases you're trying to solve here.
@njtran is the current utilisation logic documented somewhere? From my brief reading of the source, it looks like Karpenter does a "fake" scheduling run, and if the node's pods can be scheduled elsewhere it consolidates the node? Our use case is Flink jobs performing data transfer. Those pods are bursty in nature, and we want to ensure they run for at least 10 minutes so they can reach a checkpoint before getting rescheduled. The current system is reclaiming nodes in under 90s, so the pods barely have time to get scheduled before Karpenter pulls the rug. Thank you!
@barryrobison I think you want this: #752. You're correct, that's how Karpenter considers consolidation.
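(Editor's aside, not raised in the thread above: for pods that must not be interrupted mid-run, Karpenter also honors a pod-level opt-out annotation, which may help the checkpointing use case. A minimal sketch, assuming the v1beta1 annotation key; the pod name and image are illustrative:)

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: flink-taskmanager  # hypothetical pod name
  annotations:
    # Karpenter will not voluntarily disrupt (consolidate/drift) a node
    # while a pod carrying this annotation is running on it.
    karpenter.sh/do-not-disrupt: "true"
spec:
  containers:
    - name: taskmanager
      image: flink:1.17  # illustrative image
```

Note this blocks disruption for the pod's whole lifetime rather than a fixed window, so it is a blunter tool than a per-node `consolidateAfter`.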
@njtran
Hey @Luke-Smartnews, even if this is just a change to the candidacy, we'd need to review a design/RFC before continuing on this. Would you be able to open a design PR first? |
This PR has been inactive for 14 days. StaleBot will close this stale PR after 14 more days of inactivity. |
PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
Will close this until there's an RFC for this. Please feel free to re-open when you cut the RFC! |
@njtran - Could you refer to the RFC process needed to unblock this PR? We are seeing worse bin-packing performance from Karpenter than cluster-autoscaler and are debating switching back. Being able to leverage advanced consolidation features (eval time + thresholds) would put Karpenter on an equal footing for us. Please advise. |
Background
We're trying to use Karpenter in our production environment. The scaling-out feature is working well, but scaling-in (disruption) is a problem for us. There are only two options:
`WhenUnderutilized` | `WhenEmpty`
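(For readers unfamiliar with the API, these two options live in the NodePool's disruption block. A minimal sketch, assuming the `karpenter.sh/v1beta1` schema; the NodePool name and values are illustrative:)

```yaml
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default  # illustrative name
spec:
  disruption:
    # The only two supported consolidation policies:
    consolidationPolicy: WhenUnderutilized  # or WhenEmpty
    # consolidateAfter is only honored with WhenEmpty (see #735):
    # consolidateAfter: 30s
```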
and `spec.disruption.consolidateAfter` cannot be combined with `WhenUnderutilized` (#735).

Proposals