✨ ROSA: Cleanup #4850
Conversation
// ClusterComputeSpec defines the configuration for the required worker nodes provisioned as part of the cluster creation.
type ClusterComputeSpec struct {
Is this for the default workers pool configuration?
yes
Could we call this struct something different then, for example defaultNodePool, and specify why/when this configuration is needed? Are we also planning to surface this information as a MachinePool?
I didn't want to call it defaultNodePool since OCM can create multiple node pools (one per AZ). The rosa CLI flag for this is called computeMachineType, so I thought this would be less confusing for customers transitioning from the CLI, wdyt?
specify why/when this configuration is needed
InstanceType and Autoscaling are optional; only the availabilityZones field is required. We could probably infer this from the subnetIDs field in the future.
are we also planning to surface this information as a MachinePool?
At first I was planning to do that when I thought only one default node pool would be created, but given that multiple default node pools can be created, it didn't seem like a good idea anymore. The logic is more complicated, and the resulting ROSAMachinePool CRs would be different from CRs created manually, because OCM limits which fields you can edit on the default machine pools and also prevents you from deleting them.
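To make the shape being discussed concrete, here is a rough sketch assuming field names and types taken from the comments above; it is not copied from the PR diff, and the real definition in the ROSAControlPlane API may differ.

package sketch

// ClusterComputeSpec is a sketch only: fields are inferred from this
// conversation (availabilityZones required, InstanceType and Autoscaling
// optional); the actual definition in the PR may differ.
type ClusterComputeSpec struct {
	// AvailabilityZones is the only field described as required in this thread;
	// OCM creates one default node pool per zone listed here.
	AvailabilityZones []string `json:"availabilityZones"`

	// InstanceType is optional; the rosa CLI flag for it is computeMachineType.
	InstanceType string `json:"instanceType,omitempty"`

	// Autoscaling is optional; the AutoScaling type below is a placeholder.
	Autoscaling *AutoScaling `json:"autoscaling,omitempty"`
}

// AutoScaling is a placeholder type for the optional autoscaling settings.
type AutoScaling struct {
	MinReplicas int `json:"minReplicas,omitempty"`
	MaxReplicas int `json:"maxReplicas,omitempty"`
}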
so I thought this would be less confusing for customers transitioning from the CLI, wdyt?
What's the purpose of these configuration fields? My understanding was that ROSA uses these fields to create a default pool for each availability zone?
only the availabilityZones field is required
Do we want to move availabilityZones outside, given that it's a required field?
we could probably infer this from the subnetIDs field in the future
Is there an issue open for it?
CRs would be different from CRs created manually, because OCM limits which fields you can edit on the default machine pools and also prevents you from deleting them.
From a Cluster API user experience perspective, it seems like we might want to fix these behaviors relatively soon.
Are we planning to delete these machine pools eventually, once OCM APIs support creating them with zero replicas?
If we don't surface them, users would incur additional costs hidden away from their configuration, which isn't ideal.
What's the purpose of these configuration fields? My understanding was that ROSA uses these fields to create a default pool for each availability zone?
correct
Is there an issue open for it?
Do we want to move availabilityZones outside, given that it's a required field?
If we don't surface them, users would incur additional costs hidden away from their configuration, which isn't ideal.
Since it's difficult to surface those default machine pools today, my idea was to have this required field there, so you would always get the computeSpec field in your manifest and know that you are getting compute resources created per AZ you specify. This is not great, but the rosa CLI will create the same default machine pools when you do rosa create cluster --availability-zones az1,az2 ..., so users of the CLI are familiar with this behavior.
The plan is for OCM/ROSA to stop creating default machinePools altogether. Then we would just need to drop this clusterCompute field in CAPI.
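Purely as an illustration of the per-zone behavior described above, the fan-out looks roughly like the sketch below; the pool naming scheme is invented here and is not OCM's actual one.

package sketch

import "fmt"

// defaultNodePoolNames illustrates one default node pool being created per
// availability zone in the compute spec; the names are made up for this sketch.
func defaultNodePoolNames(azs []string) []string {
	names := make([]string, 0, len(azs))
	for i, az := range azs {
		names = append(names, fmt.Sprintf("workers-%d-%s", i, az))
	}
	return names
}

// Example: defaultNodePoolNames([]string{"us-east-1a", "us-east-1b"}) yields
// one pool name per zone: ["workers-0-us-east-1a", "workers-1-us-east-1b"].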
/retest-required
force-pushed from ccb113b to 53fd6f2
/retest-required
- group rosa clusterCompute config
- make force deletion optional through annotation
- report deleting status in condition
- fixed machinepool triggering an upgrade when upgrading controlPlane
force-pushed from 53fd6f2 to d60bc63
/approve
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: vincepri. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
What type of PR is this?
/kind cleanup
What this PR does / why we need it:
DefaultMachinePoolSpec
controlplane.cluster.x-k8s.io/rosacontrolplane-force-delete
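As a hedged example (not taken from the PR itself), applying the force-delete annotation named above to a ROSAControlPlane could look like the sketch below; the "true" value is an assumption, since only the annotation key is given here.

package sketch

// rosaForceDeleteAnnotation is the annotation key named in this PR description.
const rosaForceDeleteAnnotation = "controlplane.cluster.x-k8s.io/rosacontrolplane-force-delete"

// withForceDelete adds the annotation to an object's metadata.annotations map;
// the "true" value is assumed for illustration, since the PR only names the key.
func withForceDelete(annotations map[string]string) map[string]string {
	if annotations == nil {
		annotations = map[string]string{}
	}
	annotations[rosaForceDeleteAnnotation] = "true"
	return annotations
}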
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #
Special notes for your reviewer:
Checklist:
Release note: