render workers using cluster chart #282
Conversation
Note: As this is a draft PR, no triggers from the PR body will be handled. If you'd like to trigger them while the PR is a draft, please add them as a PR comment.
/run cluster-test-suites
cluster-test-suites
📋 View full results in Tekton Dashboard. Tip: to only re-run the failed test suites you can provide a `TARGET_SUITES` parameter.
@@ -79,6 +79,12 @@ Properties within the `.global.controlPlane` object
| `global.controlPlane.apiServerPort` | **API server port** - The API server Load Balancer port. This option sets the Spec.ClusterNetwork.APIServerPort field on the Cluster CR. In CAPI this field isn't currently used; it is instead used by providers. In CAPA it sets only the public-facing port of the Load Balancer. In CAPZ both the public-facing and the destination port are set to this value. CAPV and CAPVCD do not use it. | **Type:** `integer`<br/>**Default:** `6443` |
| `global.controlPlane.image` | **Node container image** | **Type:** `object` |
| `global.controlPlane.image.repository` | **Repository** | **Type:** `string`<br/>**Default:** `"gsoci.azurecr.io/giantswarm"` |
| `global.controlPlane.machineHealthCheck` | **Machine health check** | **Type:** `object` |
Apparently the machine health check was just there previously; now we have options. The defaults are the same as before.
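For illustration, the documented options could be set in a values file roughly like this. The `machineHealthCheck` sub-fields are not shown in the diff above, so the empty object here is an assumption (defaults matching the previous behaviour):

```yaml
# Hypothetical values snippet for the options documented above.
global:
  controlPlane:
    apiServerPort: 6443                         # default
    image:
      repository: "gsoci.azurecr.io/giantswarm" # default
    machineHealthCheck: {}                      # assumed: sub-options default to previous behaviour
```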
@@ -1,16 +0,0 @@
# DEPRECATED - remove once CP and workers are rendered with cluster chart
I got rid of everything marked DEPRECATED, but let's check if there's more old stuff.
{{/*
Generates template spec for worker machines.
*/}}
{{- define "worker-vspheremachinetemplate-spec" -}}
Nothing new here; we just need the spec separated out.
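As a sketch of how such a named template is typically consumed elsewhere in the chart (the surrounding resource layout here is an assumption, not the actual chart code):

```yaml
# Hypothetical consumer of the define above: the shared spec is rendered
# into the template resource via `include`.
apiVersion: {{ include "infrastructureApiVersion" . }}
kind: VSphereMachineTemplate
spec:
  template:
    spec:
      {{- include "worker-vspheremachinetemplate-spec" $ | nindent 6 }}
```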
@@ -17,7 +17,7 @@ spec:
 apiVersion: {{ include "infrastructureApiVersion" . }}
 kind: VSphereMachineTemplate
 metadata:
-  name: {{ include "resource.default.name" $ }}-{{ $nodePoolName }}-{{ include "mtRevision" $c }}
+  name: {{ include "resource.default.name" $ }}-{{ $nodePoolName }}-{{ include "machineTemplateSpec.hash" (dict "data" (include "worker-vspheremachinetemplate-spec" $) "salt" $.Values.cluster.providerIntegration.hashSalt) }}
This is the naming function that ensures machines have the same names as referenced inside the cluster chart resources.
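The idea behind the hash-based name can be illustrated outside Helm with a short sketch. `machine_template_name` is a hypothetical stand-in for the chart's `machineTemplateSpec.hash` helper, not its actual implementation:

```python
import hashlib


def machine_template_name(base: str, node_pool: str, spec: str, salt: str = "") -> str:
    """Derive a machine template name from the rendered spec content.

    Hashing the spec (plus a salt) means the resource name changes whenever
    the spec changes, so a new immutable template object is created on
    updates instead of mutating the existing one.
    """
    digest = hashlib.sha256((salt + spec).encode("utf-8")).hexdigest()[:8]
    return f"{base}-{node_pool}-{digest}"


# Same spec yields the same name; a changed spec yields a new name.
name_v1 = machine_template_name("mycluster", "worker", "spec: v1")
name_v2 = machine_template_name("mycluster", "worker", "spec: v2")
print(name_v1 != name_v2)  # True
```

Because both the cluster chart and this chart hash the same spec content with the same salt, they arrive at the same name independently.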
"machineHealthCheckResourceEnabled": true, | ||
"machinePoolResourcesEnabled": false, | ||
"machinePoolResourcesEnabled": true, |
This just enables rendering worker resources in the cluster chart.
Please correct me if I'm wrong, but if users have already migrated to v0.60.0, there should not be another breaking change, right?
There's quite a lot going on here, but nothing jumps out at me.
All looks good, aside from one small change (which I will try to explain). I templated the chart out from main and from this branch and diffed the output, and I can see that the machineHealthCheck config ends up in the VSphereMachineTemplate spec.
This won't work, as it's not a valid field in the CRD - the controller will (should?) refuse to create the VMs. However you can unset it in the … I was surprised that the tests passed despite this, but then I realised that the test suite values already provide a nodepool definition, which overrides the default definition in the chart (and therefore hides this problem when running in CI).
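A rough sketch of what the bad rendering would look like; the fields surrounding the leaked key are assumptions, not the actual diff output:

```yaml
# Hypothetical excerpt of the broken output: machineHealthCheck is not a
# field of the VSphereMachineTemplate CRD, so the API server / controller
# should reject the object.
kind: VSphereMachineTemplate
spec:
  template:
    spec:
      machineHealthCheck:   # leaked from the node pool values; must be unset
        enabled: true
```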
There were differences in the rendered Helm template, please check!
/run cluster-test-suites TARGET_SUITES=./providers/capv/standard |
cluster-test-suites
📋 View full results in Tekton Dashboard.
me likey
https://github.com/giantswarm/giantswarm/issues/31416
This PR migrates all the worker resources to be rendered from the cluster chart.
Trigger e2e tests
/run cluster-test-suites