Minor refactor to scale-up orchestrator for more re-usability #7649
base: master
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: kawych. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing …
@@ -188,7 +188,8 @@ func (e *scaleUpExecutor) executeScaleUp(
 	return nil
 }

-func combineConcurrentScaleUpErrors(errs []errors.AutoscalerError) errors.AutoscalerError {
+// CombineConcurrentScaleUpErrors returns a combined scale-up error to report after multiple concurrent scale-ups might have failed.
+func CombineConcurrentScaleUpErrors(errs []errors.AutoscalerError) errors.AutoscalerError {
Wouldn't it make more sense as a part of the errors package?
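For readers following the thread, here is a minimal, self-contained sketch of the error-combining pattern being discussed. It is illustrative only: the type and function names below are invented for the example, and the real CombineConcurrentScaleUpErrors in cluster-autoscaler operates on errors.AutoscalerError values with its own combining rules.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// toyError stands in for errors.AutoscalerError in this sketch.
type toyError struct {
	kind string // e.g. "cloudProviderError" or "internalError"
	msg  string
}

// combineConcurrentErrors folds the errors collected from parallel
// scale-ups into a single error: one error is returned as-is, duplicates
// are collapsed, and the remaining messages are joined so that no failure
// is silently dropped.
func combineConcurrentErrors(errs []toyError) *toyError {
	if len(errs) == 0 {
		return nil
	}
	if len(errs) == 1 {
		return &errs[0]
	}
	seen := map[string]bool{}
	var parts []string
	for _, e := range errs {
		key := e.kind + ": " + e.msg
		if !seen[key] {
			seen[key] = true
			parts = append(parts, key)
		}
	}
	sort.Strings(parts)
	return &toyError{
		kind: errs[0].kind,
		msg:  "concurrent scale-up failures: " + strings.Join(parts, "; "),
	}
}

func main() {
	combined := combineConcurrentErrors([]toyError{
		{kind: "cloudProviderError", msg: "out of quota in zone A"},
		{kind: "cloudProviderError", msg: "out of quota in zone A"}, // duplicate, collapsed
		{kind: "internalError", msg: "node group ng-2 not found"},
	})
	fmt.Println(combined.msg)
}
```

Exporting the helper (or moving it into the errors package, as suggested above) mainly matters for reuse: other callers that run scale-ups in parallel could then report one aggregated error instead of picking an arbitrary one.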
@@ -222,7 +222,9 @@ func (o *ScaleUpOrchestrator) ScaleUp(
 		return buildNoOptionsAvailableStatus(markedEquivalenceGroups, skippedNodeGroups, nodeGroups), nil
 	}
 	var scaleUpStatus *status.ScaleUpStatus
-	createNodeGroupResults, scaleUpStatus, aErr = o.CreateNodeGroup(bestOption, nodeInfos, schedulablePodGroups, podEquivalenceGroups, daemonSets, allOrNothing)
+	oldId := bestOption.NodeGroup.Id()
+	initializer := NewAsyncNodeGroupInitializer(bestOption.NodeGroup, nodeInfos[oldId], o.scaleUpExecutor, o.taintConfig, daemonSets, o.processors.ScaleUpStatusProcessor, o.autoscalingContext, allOrNothing)
Creation of the initializer used to be flag-guarded, and here that is no longer the case. Is that intentional? If not, can you keep the flag guard?
It may not be ideal, but I preferred this over the alternatives:
- passing around a nil
- creating a dummy initializer implementation for the case when the flag is not flipped

Overall, creation of the initializer doesn't really do anything yet. One obvious option that might make more sense (PLMK WDYT) is to split off the orchestrator's CreateNodeGroupAsync method.
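As a side note for readers, below is a small sketch of the "dummy (no-op) initializer" alternative mentioned above, contrasted with keeping the flag guard in a single constructor. The interface and names are hypothetical and deliberately simplified; the real AsyncNodeGroupInitializer and its wiring in cluster-autoscaler look different.

```go
package main

import "fmt"

// nodeGroupInitializer is a hypothetical stand-in for the real initializer
// interface; it exists only to illustrate the pattern.
type nodeGroupInitializer interface {
	InitializeNodeGroup(nodeGroupID string, targetSize int)
}

// noOpInitializer satisfies the interface but does nothing, so callers can
// invoke it unconditionally instead of checking for nil.
type noOpInitializer struct{}

func (noOpInitializer) InitializeNodeGroup(string, int) {}

// asyncInitializer models the behavior that sits behind the feature flag.
type asyncInitializer struct{}

func (asyncInitializer) InitializeNodeGroup(id string, size int) {
	fmt.Printf("asynchronously initializing node group %s to size %d\n", id, size)
}

// newInitializer keeps the flag guard in one place; everything downstream
// just receives a nodeGroupInitializer and never branches on the flag.
func newInitializer(asyncNodeGroupsEnabled bool) nodeGroupInitializer {
	if asyncNodeGroupsEnabled {
		return asyncInitializer{}
	}
	return noOpInitializer{}
}

func main() {
	initializer := newInitializer(false)
	initializer.InitializeNodeGroup("ng-1", 3) // safe no-op when the flag is off
}
```

The trade-off is the same one raised in the thread: a no-op implementation avoids nil checks but adds a type that does nothing, while passing a nil keeps the code smaller at the cost of nil-handling at every call site.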
What type of PR is this?
What this PR does / why we need it:
It's a minor refactor that makes it easier to re-use parts of the core scale-up logic while replacing other parts:
Special notes for your reviewer:
Does this PR introduce a user-facing change?