Integrate CAPA China solution into migration-cli tool #3481

Closed
3 tasks
Tracked by #3012
T-Kukawka opened this issue Jun 5, 2024 · 5 comments

Comments


T-Kukawka commented Jun 5, 2024

With CAPA China available and functional, the last step is to integrate this China-specific AWS provider variant into the migration-cli for customers that run clusters in China.

Acceptance

  • adjust the migration-cli and ensure the tool works correctly for WC migration
  • make sure that migration from argali works
  • make sure that migration from akita works

calvix commented Jun 20, 2024

notes:

  • the connection to China can be very slow depending on the time of day, so make sure it is stable and fast enough (early morning to noon usually worked fine for me; afternoons were sometimes too slow)
  • for AWS China you need a different set of AWS credentials for opsctl credentials aws, so be sure to switch to the right profile via export OPSCTL_GET_CREDENTIALS_AWS_PROFILE=china (see the sketch after this list)
  • make sure the AMI used by the current release is available in China
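A minimal sketch of that credentials setup, assuming the usual opsctl workflow (the exact opsctl flags depend on your installation):

# point opsctl at the AWS China credential profile before requesting credentials
export OPSCTL_GET_CREDENTIALS_AWS_PROFILE=china
# then request credentials for the vintage MC as usual
opsctl credentials aws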


calvix commented Jun 20, 2024

The biggest issue so far was getting a working vintage cluster with the latest release in China, so be sure to check that all apps are deployed before proceeding with testing the capi-migration-cli. Ideally, just create the cluster, cancel the pipeline, and wait until all apps are in the deployed state, just to be sure. One way to check is sketched below.
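A hedged example of that check, assuming the vintage App CRs live in the cluster's own namespace on the vintage MC (the namespace name here is an assumption, adjust as needed):

# list all App CRs for the workload cluster and their release status;
# everything should report "deployed" before starting the migration
kubectl --context gs-argali -n vacl04 get apps.application.giantswarm.io \
  -o custom-columns=NAME:.metadata.name,STATUS:.status.release.status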


calvix commented Jun 21, 2024

I managed to successfully execute the migration CLI in China with the changes in this PR: https://github.com/giantswarm/capi-migration-cli/pull/106

I didn't do a complete check of the cluster after migration, so that should be verified as well.

 #: ./tests/tests -aws-region=cn-north-1 -cluster-name=vacl04 -enable-notifications -mc-vintage=argali -mc-capi=galaxy --cluster-organization=giantswarm -skip-vintage-cluster-creation
Vintage cluster vacl04 is in Created state
Vintage nodepool gktbsmwrdw is ready
All nodepools are ready
Security bundle is deployed

Vintage cluster vacl04 is ready for migration

=======================================
now its time to do any changes you want to do before the migration to CAPI starts.
Please type 'Yes' to confirm to proceed further:
> yes
Confirmed! Continuing.

Checking kubernetes client for Vintage MC argali.
Connected to gs-argali, k8s server version v1.25.15
Checking kubernetes client for the cluster vacl04.
Context for cluster [argali vacl04] not found, executing 'opsctl login', check your browser window.
Connected to gs-argali-vacl04-clientcert, k8s server version v1.25.16
Checking kubernetes client for CAPI MC galaxy.
Context for cluster [galaxy] not found, executing 'opsctl login', check your browser window.
Connected to gs-galaxy, k8s server version v1.25.16
Generating AWS credentials for cluster argali/vacl04.
Assume Role MFA token code: 859128
Generated AWS credentials for arn:aws-cn:sts::306934455918:assumed-role/GiantSwarmAdmin/vaclav.rozsypalek-opsctl-cli
Checking argali's vault connection
Connected to vault 1.7.10
Init phase finished.

Fetching vintage CRs
Starting migration of cluster vacl04 from argali to galaxy

Preparation Phase
Deleting vintage app etcd-kubernetes-resources-count-exporter.
Deleted vintage app etcd-kubernetes-resources-count-exporter
Deleting vintage app k8s-dns-node-cache-app.
Deleted vintage app k8s-dns-node-cache-app
Deleting vintage app chart-operator-extensions.
Deleted vintage app chart-operator-extensions
Deleting vintage app aws-ebs-csi-driver-servicemonitors.
Deleted vintage app aws-ebs-csi-driver-servicemonitors
Migrating secrets to CAPI MC
Migrated vacl04-cilium-user-values-extraconfig configmap
Migrated vacl04-external-dns-default-secrets secret
Migrated vacl04-external-dns-default-config configmap
Migrated vacl04-observability-bundle-logging-extraconfig configmap
Successfully migrated cluster AWS default apps values.
Successfully migrated cluster IAM account role to AWSClusterRoleIdentity.
Successfully disabled vintage machine health check.

Executing the following command to migrate non-default apps via external tool:
app-migration-cli prepare -s argali -d galaxy -n vacl04 -o org-giantswarm
Connected to gs-argali, k8s server version v1.25.15
Connected to gs-galaxy, k8s server version v1.25.16
⚠  Warning
⚠  No apps targeted for migration
⚠  The capi-migration will continue but no apps.application.giantswarm.io CRs will be transferred
⚠  Warning
Successfully prepared non-default apps migration.
Successfully scaled down Vintage app operator deployment.
Successfully scaled down kyverno-admission-controller in workload cluster.
Cleaned legacy chart cilium
Cleaned legacy chart aws-ebs-csi-driver
Cleaned legacy chart aws-cloud-controller-manager
Cleaned legacy chart coredns
Cleaned legacy chart vertical-pod-autoscaler-crd
Found 1 security groups for tag deletion
Deleted tag kubernetes.io/cluster/vacl04 from security group sg-067aeec14b2dcbf92
Deleted vintage "vacl04" node pool security group tag so aws-load-balancer-controller uses the CAPA-created security group.
Found 4 security groups for tag deletion
Deleted tag kubernetes.io/cluster/vacl04 from security group sg-0f1520346fe693ba5
Deleted tag kubernetes.io/cluster/vacl04 from security group sg-0320a7dcb3a76d028
Deleted tag kubernetes.io/cluster/vacl04 from security group sg-0ea6881c4b54adc37
Deleted tag kubernetes.io/cluster/vacl04 from security group sg-0fb262f1c1e2fdbe3
Deleted vintage "vacl04" cluster security group tags.
Preparation phase completed.

Skipping stopping reconciliation of CRs on vintage cluster
Generated CAPI cluster template manifest for cluster vacl04 in /home/calvix/giantswarm/src/github.com/giantswarm/capi-migration-cli/vacl04-cluster.yaml
Applying CAPI cluster APP CR to MC
CAPI Cluster app applied successfully.

Waiting for 1 CAPI node with status Ready
...........Registering instance i-0a17222facd10c6cf with ELB vacl04-api
Registering instance i-0a17222facd10c6cf with ELB vacl04-api-internal
..........................................Registering instance i-04bbea31bff5fd876 with ELB vacl04-api
Registering instance i-0a17222facd10c6cf with ELB vacl04-api
......Registering instance i-04bbea31bff5fd876 with ELB vacl04-api
Registering instance i-0a17222facd10c6cf with ELB vacl04-api
......Registering instance i-04bbea31bff5fd876 with ELB vacl04-api
Registering instance i-0a17222facd10c6cf with ELB vacl04-api
......Registering instance i-04bbea31bff5fd876 with ELB vacl04-api
Registering instance i-0a17222facd10c6cf with ELB vacl04-api
......Registering instance i-04bbea31bff5fd876 with ELB vacl04-api
Registering instance i-0a17222facd10c6cf with ELB vacl04-api
...............................
CAPI node ip-10-206-145-178.cn-north-1.compute.internal ready. 1/1

Found CAPI 1 nodes with status Ready, waited for 1080 sec.
Removed initial-cluster from kubeadm configmap
Removed initial-cluster from kubeadm configmap
Applying CAPI cluster APP CR to MC
CAPI Cluster app applied successfully.

Cluster-info ConfigMap found
Removing static manifests for control-plane components on all vintage nodes.
Applied PolicyExceptions for CP node cleanup job
Created job disable-cp-ip-10-206-145-100.cn-north-1.compute.internal
Created job disable-cp-ip-10-206-145-167.cn-north-1.compute.internal
Created job disable-cp-ip-10-206-145-44.cn-north-1.compute.internal
Waiting until all jobs finished
Job 1/3 - disable-cp-ip-10-206-145-100.cn-north-1.compute.internal not finished yet, waiting 5 sec.
faild to get Job, retrying in 5 spec (repeated 7 times)
Job 1/3 - disable-cp-ip-10-206-145-100.cn-north-1.compute.internal not finished yet, waiting 5 sec.
Job 1/3 - disable-cp-ip-10-206-145-100.cn-north-1.compute.internal finished.
Job 2/3 - disable-cp-ip-10-206-145-167.cn-north-1.compute.internal not finished yet, waiting 5 sec.
Job 2/3 - disable-cp-ip-10-206-145-167.cn-north-1.compute.internal finished.
Job 3/3 - disable-cp-ip-10-206-145-44.cn-north-1.compute.internal finished.

Deleted PolicyExceptions for CP node cleanup job
Deleted pod org-giantswarm/vacl04-app-operator-f979659c8-f2678 on CAPI MC to force reconcilation
Deleted pod giantswarm/chart-operator-948bb4c8f-mws6g to reschedule it on CAPI control plane node
Deleted pod kube-system/cilium-5gfm5
Deleted pod kube-system/cilium-5rw4w
Deleted pod kube-system/cilium-6fkl7
Deleted pod kube-system/cilium-7qtx8
Deleted pod kube-system/cilium-ccg9p
Deleted pod kube-system/cilium-hpk8f
Deleted pod kube-system/cilium-nsjvz
Deleted pod kube-system/cilium-operator-d55b66d55-m66w9
Deleted pod kube-system/cilium-operator-d55b66d55-m9lhm
Deleted pod kube-system/cilium-pn8sc
Deleted pod kube-system/cilium-s6kmw
Deleted pod kube-system/cilium-vlhbn
Deleted pod kube-system/cilium-xrdg5
Deleted pod kube-system/cilium-zbb45
Deleted pod kube-system/hubble-relay-685c94569-qfp7d
Deleted pod kube-system/hubble-ui-6dff67b9fc-m5wjw
Waiting for 3 CAPI node with status Ready

CAPI node ip-10-206-145-125.cn-north-1.compute.internal ready. 1/3

CAPI node ip-10-206-145-178.cn-north-1.compute.internal ready. 2/3
...
CAPI node ip-10-206-145-188.cn-north-1.compute.internal ready. 3/3

Found CAPI 3 nodes with status Ready, waited for 30 sec.
Draining all vintage control plane nodes.
Found 3 nodes for draining
Started draining node ip-10-206-145-100.cn-north-1.compute.internal
Started draining node ip-10-206-145-167.cn-north-1.compute.internal
Started draining node ip-10-206-145-44.cn-north-1.compute.internal
WARNING: ignoring DaemonSet-managed Pods: kube-system/aws-cloud-controller-manager-9drmx, kube-system/capi-node-labeler-djrxd, kube-system/cert-exporter-daemonset-d957w, kube-system/cilium-sw6mv, kube-system/k8s-audit-metrics-ff6hn, kube-system/net-exporter-4j4db, kube-system/node-exporter-node-exporter-rj8cw, kube-system/prometheus-blackbox-exporter-7c8g7, kube-system/promtail-d2zf2
WARNING: ignoring DaemonSet-managed Pods: kube-system/aws-cloud-controller-manager-4hjsc, kube-system/capi-node-labeler-jbc4q, kube-system/cert-exporter-daemonset-wmdtg, kube-system/cilium-hmrtn, kube-system/k8s-audit-metrics-gd4sf, kube-system/net-exporter-2wxzh, kube-system/node-exporter-node-exporter-mrpfj, kube-system/prometheus-blackbox-exporter-t98tl, kube-system/promtail-lc5f9
evicting pod kube-system/disable-cp-ip-10-206-145-167.cn-north-1.compute.internal-mhzgf
evicting pod kube-system/cluster-autoscaler-5d68686567-8jbvv
WARNING: ignoring DaemonSet-managed Pods: kube-system/aws-cloud-controller-manager-9654g, kube-system/capi-node-labeler-jkwf8, kube-system/cert-exporter-daemonset-xl4hx, kube-system/cilium-jlktx, kube-system/k8s-audit-metrics-hrt5t, kube-system/net-exporter-kshbc, kube-system/node-exporter-node-exporter-2ltbx, kube-system/prometheus-blackbox-exporter-ljdcj, kube-system/promtail-hlkfm
evicting pod kube-system/disable-cp-ip-10-206-145-100.cn-north-1.compute.internal-lrcxj
evicting pod kube-system/coredns-controlplane-68c5ffc5d6-wv787
evicting pod kube-system/ebs-csi-controller-598fb4b8dc-jbl4w
evicting pod giantswarm/chart-operator-948bb4c8f-fsmjp
evicting pod kube-system/disable-cp-ip-10-206-145-44.cn-north-1.compute.internal-kq66g
Finished draining node ip-10-206-145-167.cn-north-1.compute.internal
Finished draining node ip-10-206-145-44.cn-north-1.compute.internal
Finished draining node ip-10-206-145-100.cn-north-1.compute.internal
Deleting vintage control plane ASGs.
AWS Credential expired, need to generate new credentials
Assume Role MFA token code: 752617
refreshed AWS credentials
Found 3 ASG groups for deletion
Deleted ASG group cluster-vacl04-tccpn-ControlPlaneNodeAutoScalingGroup-Bjzl7f2wthgh
Terminated 1 instances in ASG group cluster-vacl04-tccpn-ControlPlaneNodeAutoScalingGroup-Bjzl7f2wthgh
Deleted ASG group cluster-vacl04-tccpn-ControlPlaneNodeAutoScalingGroup2-dRoQH9ysJTo4
Terminated 1 instances in ASG group cluster-vacl04-tccpn-ControlPlaneNodeAutoScalingGroup2-dRoQH9ysJTo4
Deleted ASG group cluster-vacl04-tccpn-ControlPlaneNodeAutoScalingGroup3-YMtXh8wabFCj
Terminated 1 instances in ASG group cluster-vacl04-tccpn-ControlPlaneNodeAutoScalingGroup3-YMtXh8wabFCj
Deleted vintage control plane ASGs.
Waiting KubeadmControlPlane to stabilise  (all replicas to be up to date and ready).
..........Registering instance i-04bbea31bff5fd876 with ELB vacl04-api
Registering instance i-0a17222facd10c6cf with ELB vacl04-api
Registering instance i-081fe811a356ae645 with ELB vacl04-api
........Deleted crashed pod kube-system/cilium-operator-d55b66d55-5p9tr
....Registering instance i-04bbea31bff5fd876 with ELB vacl04-api
Registering instance i-0a17222facd10c6cf with ELB vacl04-api
Registering instance i-081fe811a356ae645 with ELB vacl04-api
............Registering instance i-04bbea31bff5fd876 with ELB vacl04-api
Registering instance i-0a17222facd10c6cf with ELB vacl04-api
Registering instance i-081fe811a356ae645 with ELB vacl04-api
Registering instance i-068182b4910d26a16 with ELB vacl04-api
.
KubeadmControlPlane is stabilised.
Tainting vintage nodes.
Found 3 nodes for tainting
Waiting for all CAPI nodes in node pool gktbsmwrdw to be ready.
Waiting for 3 CAPI node with status Ready

CAPI node ip-10-206-146-102.cn-north-1.compute.internal ready. 1/3

CAPI node ip-10-206-146-179.cn-north-1.compute.internal ready. 2/3

CAPI node ip-10-206-146-60.cn-north-1.compute.internal ready. 3/3

Found CAPI 3 nodes with status Ready, waited for 0 sec.
Draining all vintage worker nodes for nodepool gktbsmwrdw.
Found 3 nodes for draining
Started draining node ip-10-206-146-157.cn-north-1.compute.internal
Started draining node ip-10-206-146-58.cn-north-1.compute.internal
Started draining node ip-10-206-146-75.cn-north-1.compute.internal
WARNING: ignoring DaemonSet-managed Pods: kube-system/capi-node-labeler-5r4xd, kube-system/cert-exporter-daemonset-mm24m, kube-system/cilium-vsl5c, kube-system/ebs-csi-node-vl8t7, kube-system/grafana-agent-nrfmq, kube-system/net-exporter-hrzmn, kube-system/node-exporter-node-exporter-qmsn6, kube-system/prometheus-blackbox-exporter-vl8wp, kube-system/promtail-slxl6
WARNING: ignoring DaemonSet-managed Pods: kube-system/capi-node-labeler-4pvmf, kube-system/cert-exporter-daemonset-9r9xp, kube-system/cilium-dzs82, kube-system/ebs-csi-node-2cfts, kube-system/grafana-agent-6zdfz, kube-system/net-exporter-h46nn, kube-system/node-exporter-node-exporter-l86gx, kube-system/prometheus-blackbox-exporter-7nmtf, kube-system/promtail-tp6m7
evicting pod kube-system/cert-manager-app-webhook-5966bbcf74-g22ng
evicting pod kube-system/kube-prometheus-stack-operator-7c77dd57f7-6hp22
evicting pod kube-system/vertical-pod-autoscaler-recommender-5b95c48fd9-xk9lp
evicting pod kube-system/aws-pod-identity-webhook-app-79cc6b5c5-ggcrp
evicting pod kube-system/vertical-pod-autoscaler-admission-controller-6579d587d5-kj227
evicting pod kyverno/kyverno-background-controller-6c86975567-zmswj
evicting pod kube-system/cert-exporter-deployment-7574bdc786-fgghm
evicting pod kube-system/coredns-workers-64569847cd-lwgj6
evicting pod kyverno/kyverno-reports-controller-6d4c6fdd87-hnpf5
evicting pod kube-system/cert-manager-app-7f74b8ddb6-sb4jb
WARNING: ignoring DaemonSet-managed Pods: kube-system/capi-node-labeler-v9ds9, kube-system/cert-exporter-daemonset-z7lkz, kube-system/cilium-xmpbf, kube-system/ebs-csi-node-7n64l, kube-system/grafana-agent-46knx, kube-system/net-exporter-m6mjv, kube-system/node-exporter-node-exporter-scmmh, kube-system/prometheus-blackbox-exporter-2g6d6, kube-system/promtail-kmdt8
evicting pod security-bundle/kyverno-policy-operator-7c985c9966-5npxp
evicting pod kube-system/aws-pod-identity-webhook-app-79cc6b5c5-jx9gz
evicting pod kube-system/prometheus-prometheus-agent-0
evicting pod kyverno/kyverno-cleanup-controller-7c879f9f97-r7mmc
evicting pod kube-system/vertical-pod-autoscaler-admission-controller-6579d587d5-zvq5r
evicting pod kube-system/external-dns-d7c56d856-24bjd
evicting pod kube-system/metrics-server-6d97f48957-zrp8s
evicting pod kube-system/coredns-workers-64569847cd-p96k7
evicting pod kube-system/kube-prometheus-stack-kube-state-metrics-7f85897869-qcnch
evicting pod kyverno/kyverno-ui-749bcb576f-fcprv
evicting pod kube-system/aws-pod-identity-webhook-app-79cc6b5c5-tk7w9
evicting pod kube-system/cert-manager-app-cainjector-745cdb9664-rlnq8
evicting pod kube-system/vertical-pod-autoscaler-updater-56447dd84-48nzd
evicting pod kube-system/cert-manager-app-webhook-5966bbcf74-t5gj9
evicting pod kube-system/metrics-server-6d97f48957-d8bns
evicting pod kyverno/kyverno-kyverno-plugin-57d4449684-snj9r
evicting pod kyverno/kyverno-policy-reporter-548b89fb5d-kvhpj
error when evicting pods/"vertical-pod-autoscaler-admission-controller-6579d587d5-zvq5r" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I0621 09:15:39.143130 1003720 request.go:697] Waited for 1.195879019s due to client-side throttling, not priority and fairness, request: POST:https://api.vacl04.k8s.argali.pek.aws.k8s.adidas.com.cn:443/api/v1/namespaces/kube-system/pods/metrics-server-6d97f48957-zrp8s/eviction
error when evicting pods/"coredns-workers-64569847cd-p96k7" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
error when evicting pods/"aws-pod-identity-webhook-app-79cc6b5c5-tk7w9" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
error when evicting pods/"cert-manager-app-webhook-5966bbcf74-t5gj9" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
error when evicting pods/"metrics-server-6d97f48957-d8bns" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod kube-system/vertical-pod-autoscaler-admission-controller-6579d587d5-zvq5r
error when evicting pods/"vertical-pod-autoscaler-admission-controller-6579d587d5-zvq5r" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod kube-system/coredns-workers-64569847cd-p96k7
error when evicting pods/"coredns-workers-64569847cd-p96k7" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod kube-system/aws-pod-identity-webhook-app-79cc6b5c5-tk7w9
evicting pod kube-system/cert-manager-app-webhook-5966bbcf74-t5gj9
error when evicting pods/"cert-manager-app-webhook-5966bbcf74-t5gj9" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod kube-system/metrics-server-6d97f48957-d8bns
error when evicting pods/"metrics-server-6d97f48957-d8bns" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod kube-system/vertical-pod-autoscaler-admission-controller-6579d587d5-zvq5r
evicting pod kube-system/coredns-workers-64569847cd-p96k7
error when evicting pods/"coredns-workers-64569847cd-p96k7" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod kube-system/cert-manager-app-webhook-5966bbcf74-t5gj9
error when evicting pods/"cert-manager-app-webhook-5966bbcf74-t5gj9" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod kube-system/metrics-server-6d97f48957-d8bns
error when evicting pods/"metrics-server-6d97f48957-d8bns" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod kube-system/coredns-workers-64569847cd-p96k7
error when evicting pods/"coredns-workers-64569847cd-p96k7" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod kube-system/cert-manager-app-webhook-5966bbcf74-t5gj9
evicting pod kube-system/metrics-server-6d97f48957-d8bns
error when evicting pods/"metrics-server-6d97f48957-d8bns" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod kube-system/coredns-workers-64569847cd-p96k7
evicting pod kube-system/metrics-server-6d97f48957-d8bns
error when evicting pods/"metrics-server-6d97f48957-d8bns" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod kube-system/metrics-server-6d97f48957-d8bns
error when evicting pods/"metrics-server-6d97f48957-d8bns" -n "kube-system" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod kube-system/metrics-server-6d97f48957-d8bns
Registering instance i-04bbea31bff5fd876 with ELB vacl04-api
Registering instance i-0a17222facd10c6cf with ELB vacl04-api
Registering instance i-0ec60ca96eb9d136c with ELB vacl04-api
Registering instance i-081fe811a356ae645 with ELB vacl04-api
Registering instance i-068182b4910d26a16 with ELB vacl04-api
Finished draining node ip-10-206-146-157.cn-north-1.compute.internal
Finished draining node ip-10-206-146-75.cn-north-1.compute.internal
Finished draining node ip-10-206-146-58.cn-north-1.compute.internal
Deleting vintage gktbsmwrdw node pool ASG.
AWS Credentials are still valid, no need to refresh
Found 1 ASG groups for deletion
Deleted ASG group cluster-vacl04-tcnp-gktbsmwrdw-NodePoolAutoScalingGroup-e7bdTDaEc1ti
Terminated 3 instances in ASG group cluster-vacl04-tcnp-gktbsmwrdw-NodePoolAutoScalingGroup-e7bdTDaEc1ti
Deleted vintage gktbsmwrdw node pool ASG.
Successfully scaled up kyverno-admission-controller in workload cluster.

Executing the following command to apply non-default apps to CAPI MC via external tool:
app-migration-cli apply -s argali -d galaxy -n vacl04 -o org-giantswarm
Connected to gs-argali, k8s server version v1.25.15
Connected to gs-galaxy, k8s server version v1.25.16
⚠  Warning
⚠  No apps targeted for migration
⚠  The given file was empty
⚠  Warning
Finished migrating cluster vacl04 to CAPI infrastructure
=======================================
Cluster was successfully migrated to CAPI
Now you can check the cluster and confirm  everything was properly migrated

Next step is cleanup of the cluster, type Yes to proceed with cleanup of the cluster once you are ready.

Please type 'Yes' to confirm to proceed further:
> yes
Confirmed! Continuing.

Executing kubectl --context gs-argali delete -f vacl04-cluster-vintage.yaml --wait=false

cluster.cluster.x-k8s.io "vacl04" deleted
awscluster.infrastructure.giantswarm.io "vacl04" deleted
g8scontrolplane.infrastructure.giantswarm.io "a2fwpqdfzc" deleted
awscontrolplane.infrastructure.giantswarm.io "a2fwpqdfzc" deleted
machinedeployment.cluster.x-k8s.io "gktbsmwrdw" deleted
awsmachinedeployment.infrastructure.giantswarm.io "gktbsmwrdw" deleted

Executing kubectl --context gs-galaxy delete -f vacl04-cluster.yaml --wait=false

configmap "vacl04-userconfig" deleted
app.application.giantswarm.io "vacl04" deleted

Executing kubectl --context gs-galaxy -n org-giantswarm delete secret/vacl04-ca secret/vacl04-sa secret/vacl04-etcd secret/vacl04-migration-custom-files --wait=false

secret "vacl04-ca" deleted
secret "vacl04-sa" deleted
secret "vacl04-etcd" deleted
secret "vacl04-migration-custom-files" deleted

calvix removed their assignment Jun 21, 2024
bdehri self-assigned this Jul 30, 2024

bdehri commented Aug 16, 2024

Migration looks fine other than one small hiccup:

Failed to clean vintage cluster
Error: Execution failed: could not find hosted zone ID for berk3.k8s.argali.pek.aws.k8s.adidas.com.cn.
	/Users/berkdehrioglu/gs-git/capi-migration-cli/pkg/migrator/aws.go:667
	/Users/berkdehrioglu/go/pkg/mod/github.com/giantswarm/[email protected]/retry.go:13
	/Users/berkdehrioglu/gs-git/capi-migration-cli/pkg/migrator/aws.go:673
	/Users/berkdehrioglu/gs-git/capi-migration-cli/pkg/migrator/aws.go:467
	/Users/berkdehrioglu/gs-git/capi-migration-cli/pkg/migrator/migrator.go:505
	/Users/berkdehrioglu/gs-git/capi-migration-cli/pkg/migrator/migrator.go:155
	/Users/berkdehrioglu/gs-git/capi-migration-cli/main.go:69

At the end of the vintage cleanup, we try to update the VintageApiLoadbalancerDNSRecord, but the DNS zones are not present in China, so we should skip that step when the region is a China region (see the sketch below). Other than that, the migration looks fine.
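The proposed guard, sketched here as shell pseudologic only (the actual fix would live in the tool's Go code around pkg/migrator/aws.go; the AWS_REGION variable and cn- prefix check are assumptions):

# skip the vintage API load balancer DNS record update when the cluster runs
# in the AWS China partition, where the public hosted zone does not exist
if [[ "${AWS_REGION}" == cn-* ]]; then
  echo "China region (${AWS_REGION}): skipping VintageApiLoadbalancerDNSRecord update"
else
  echo "updating VintageApiLoadbalancerDNSRecord as usual"
fi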

paurosello self-assigned this Oct 16, 2024
paurosello moved this from Inbox 📥 to In Progress ⛏️ in Roadmap Oct 16, 2024
@paurosello

We are good to merge the associated PR

github-project-automation bot moved this from In Progress ⛏️ to Done ✅ in Roadmap Oct 23, 2024