Cleanup catalystproject-latam's terraform state #4374

Closed
sgibson91 opened this issue Jul 8, 2024 · 0 comments · Fixed by #4400
Context

There are currently some changes in the Terraform state of catalystproject-latam that would force recreation of the node pools. These are mostly due to the disk size changing, and I wonder whether this is a remnant of #4215.

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create
  - destroy
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # google_container_node_pool.notebook["gpu-t4-highmem-16"] must be replaced
-/+ resource "google_container_node_pool" "notebook" {
      ~ id                          = "projects/catalystproject-392106/locations/southamerica-east1/clusters/latam-cluster/nodePools/nb-gpu-t4-highmem-16" -> (known after apply)
      ~ instance_group_urls         = [
          - "https://www.googleapis.com/compute/v1/projects/catalystproject-392106/zones/southamerica-east1-c/instanceGroupManagers/gke-latam-cluster-nb-gpu-t4-highmem-1-1990e2aa-grp",
        ] -> (known after apply)
      ~ managed_instance_group_urls = [
          - "https://www.googleapis.com/compute/v1/projects/catalystproject-392106/zones/southamerica-east1-c/instanceGroups/gke-latam-cluster-nb-gpu-t4-highmem-1-1990e2aa-grp",
        ] -> (known after apply)
      + max_pods_per_node           = (known after apply)
        name                        = "nb-gpu-t4-highmem-16"
      + name_prefix                 = (known after apply)
      ~ node_count                  = 0 -> (known after apply)
      ~ node_locations              = [
          - "southamerica-east1-c",
        ] -> (known after apply)
      + operation                   = (known after apply)
        # (5 unchanged attributes hidden)

      ~ autoscaling {
          ~ location_policy      = "BALANCED" -> (known after apply)
          - total_max_node_count = 0 -> null
          - total_min_node_count = 0 -> null
            # (2 unchanged attributes hidden)
        }

      - network_config {
          - create_pod_range     = false -> null
          - enable_private_nodes = false -> null
        }

      ~ node_config {
          ~ disk_size_gb      = 100 -> 400 # forces replacement
          ~ guest_accelerator = [
              ~ {
                  - gpu_partition_size             = ""
                  - gpu_sharing_config             = []
                    # (3 unchanged attributes hidden)
                },
            ]
          ~ image_type        = "COS_CONTAINERD" -> (known after apply)
          ~ local_ssd_count   = 0 -> (known after apply)
          ~ metadata          = {
              - "disable-legacy-endpoints" = "true"
            } -> (known after apply)
          + min_cpu_platform  = (known after apply)
            tags              = []
            # (10 unchanged attributes hidden)

          - shielded_instance_config {
              - enable_integrity_monitoring = true -> null
              - enable_secure_boot          = false -> null
            }

            # (1 unchanged block hidden)
        }

      - upgrade_settings {
          - max_surge       = 1 -> null
          - max_unavailable = 0 -> null
          - strategy        = "SURGE" -> null
        }

        # (1 unchanged block hidden)
    }

  # google_container_node_pool.notebook["gpu-t4-highmem-4"] must be replaced
-/+ resource "google_container_node_pool" "notebook" {
      ~ id                          = "projects/catalystproject-392106/locations/southamerica-east1/clusters/latam-cluster/nodePools/nb-gpu-t4-highmem-4" -> (known after apply)
      ~ instance_group_urls         = [
          - "https://www.googleapis.com/compute/v1/projects/catalystproject-392106/zones/southamerica-east1-c/instanceGroupManagers/gke-latam-cluster-nb-gpu-t4-highmem-4-1c46087a-grp",
        ] -> (known after apply)
      ~ managed_instance_group_urls = [
          - "https://www.googleapis.com/compute/v1/projects/catalystproject-392106/zones/southamerica-east1-c/instanceGroups/gke-latam-cluster-nb-gpu-t4-highmem-4-1c46087a-grp",
        ] -> (known after apply)
      + max_pods_per_node           = (known after apply)
        name                        = "nb-gpu-t4-highmem-4"
      + name_prefix                 = (known after apply)
      ~ node_count                  = 0 -> (known after apply)
      ~ node_locations              = [
          - "southamerica-east1-c",
        ] -> (known after apply)
      + operation                   = (known after apply)
        # (5 unchanged attributes hidden)

      ~ autoscaling {
          ~ location_policy      = "BALANCED" -> (known after apply)
          - total_max_node_count = 0 -> null
          - total_min_node_count = 0 -> null
            # (2 unchanged attributes hidden)
        }

      - network_config {
          - create_pod_range     = false -> null
          - enable_private_nodes = false -> null
        }

      ~ node_config {
          ~ disk_size_gb      = 100 -> 400 # forces replacement
          ~ guest_accelerator = [
              ~ {
                  - gpu_partition_size             = ""
                  - gpu_sharing_config             = []
                    # (3 unchanged attributes hidden)
                },
            ]
          ~ image_type        = "COS_CONTAINERD" -> (known after apply)
          ~ local_ssd_count   = 0 -> (known after apply)
          ~ metadata          = {
              - "disable-legacy-endpoints" = "true"
            } -> (known after apply)
          + min_cpu_platform  = (known after apply)
            tags              = []
            # (10 unchanged attributes hidden)

          - shielded_instance_config {
              - enable_integrity_monitoring = true -> null
              - enable_secure_boot          = false -> null
            }

            # (1 unchanged block hidden)
        }

      - upgrade_settings {
          - max_surge       = 1 -> null
          - max_unavailable = 0 -> null
          - strategy        = "SURGE" -> null
        }

        # (1 unchanged block hidden)
    }

  # google_container_node_pool.notebook["n2-highmem-16"] must be replaced
-/+ resource "google_container_node_pool" "notebook" {
      ~ id                          = "projects/catalystproject-392106/locations/southamerica-east1/clusters/latam-cluster/nodePools/nb-n2-highmem-16" -> (known after apply)
      ~ instance_group_urls         = [
          - "https://www.googleapis.com/compute/v1/projects/catalystproject-392106/zones/southamerica-east1-c/instanceGroupManagers/gke-latam-cluster-nb-n2-highmem-16-e7bf5700-grp",
        ] -> (known after apply)
      ~ managed_instance_group_urls = [
          - "https://www.googleapis.com/compute/v1/projects/catalystproject-392106/zones/southamerica-east1-c/instanceGroups/gke-latam-cluster-nb-n2-highmem-16-e7bf5700-grp",
        ] -> (known after apply)
      + max_pods_per_node           = (known after apply)
        name                        = "nb-n2-highmem-16"
      + name_prefix                 = (known after apply)
      ~ node_count                  = 0 -> (known after apply)
      ~ node_locations              = [
          - "southamerica-east1-c",
        ] -> (known after apply)
      + operation                   = (known after apply)
        # (5 unchanged attributes hidden)

      ~ autoscaling {
          ~ location_policy      = "BALANCED" -> (known after apply)
          - total_max_node_count = 0 -> null
          - total_min_node_count = 0 -> null
            # (2 unchanged attributes hidden)
        }

      - network_config {
          - create_pod_range     = false -> null
          - enable_private_nodes = false -> null
        }

      ~ node_config {
          ~ disk_size_gb      = 100 -> 400 # forces replacement
          ~ guest_accelerator = [] -> (known after apply)
          ~ image_type        = "COS_CONTAINERD" -> (known after apply)
          ~ local_ssd_count   = 0 -> (known after apply)
          ~ metadata          = {
              - "disable-legacy-endpoints" = "true"
            } -> (known after apply)
          + min_cpu_platform  = (known after apply)
            tags              = []
            # (10 unchanged attributes hidden)

          - shielded_instance_config {
              - enable_integrity_monitoring = true -> null
              - enable_secure_boot          = false -> null
            }

            # (1 unchanged block hidden)
        }

      - upgrade_settings {
          - max_surge       = 1 -> null
          - max_unavailable = 0 -> null
          - strategy        = "SURGE" -> null
        }

        # (1 unchanged block hidden)
    }

  # google_container_node_pool.notebook["n2-highmem-64"] must be replaced
-/+ resource "google_container_node_pool" "notebook" {
      ~ id                          = "projects/catalystproject-392106/locations/southamerica-east1/clusters/latam-cluster/nodePools/nb-n2-highmem-64" -> (known after apply)
      ~ instance_group_urls         = [
          - "https://www.googleapis.com/compute/v1/projects/catalystproject-392106/zones/southamerica-east1-c/instanceGroupManagers/gke-latam-cluster-nb-n2-highmem-64-58601ccf-grp",
        ] -> (known after apply)
      ~ managed_instance_group_urls = [
          - "https://www.googleapis.com/compute/v1/projects/catalystproject-392106/zones/southamerica-east1-c/instanceGroups/gke-latam-cluster-nb-n2-highmem-64-58601ccf-grp",
        ] -> (known after apply)
      + max_pods_per_node           = (known after apply)
        name                        = "nb-n2-highmem-64"
      + name_prefix                 = (known after apply)
      ~ node_count                  = 0 -> (known after apply)
      ~ node_locations              = [
          - "southamerica-east1-c",
        ] -> (known after apply)
      + operation                   = (known after apply)
        # (5 unchanged attributes hidden)

      ~ autoscaling {
          ~ location_policy      = "BALANCED" -> (known after apply)
          - total_max_node_count = 0 -> null
          - total_min_node_count = 0 -> null
            # (2 unchanged attributes hidden)
        }

      - network_config {
          - create_pod_range     = false -> null
          - enable_private_nodes = false -> null
        }

      ~ node_config {
          ~ disk_size_gb      = 100 -> 400 # forces replacement
          ~ guest_accelerator = [] -> (known after apply)
          ~ image_type        = "COS_CONTAINERD" -> (known after apply)
          ~ local_ssd_count   = 0 -> (known after apply)
          ~ metadata          = {
              - "disable-legacy-endpoints" = "true"
            } -> (known after apply)
          + min_cpu_platform  = (known after apply)
            tags              = []
            # (10 unchanged attributes hidden)

          - shielded_instance_config {
              - enable_integrity_monitoring = true -> null
              - enable_secure_boot          = false -> null
            }

            # (1 unchanged block hidden)
        }

      - upgrade_settings {
          - max_surge       = 1 -> null
          - max_unavailable = 0 -> null
          - strategy        = "SURGE" -> null
        }

        # (1 unchanged block hidden)
    }

  # google_container_node_pool.notebook["unam-n2-highmem-16"] will be destroyed
  # (because key ["unam-n2-highmem-16"] is not in for_each map)
  - resource "google_container_node_pool" "notebook" {
      - cluster                     = "latam-cluster" -> null
      - id                          = "projects/catalystproject-392106/locations/southamerica-east1/clusters/latam-cluster/nodePools/nb-unam-n2-highmem-16" -> null
      - initial_node_count          = 0 -> null
      - instance_group_urls         = [
          - "https://www.googleapis.com/compute/v1/projects/catalystproject-392106/zones/southamerica-east1-c/instanceGroupManagers/gke-latam-cluster-nb-unam-n2-highmem--614666dc-grp",
        ] -> null
      - location                    = "southamerica-east1" -> null
      - managed_instance_group_urls = [
          - "https://www.googleapis.com/compute/v1/projects/catalystproject-392106/zones/southamerica-east1-c/instanceGroups/gke-latam-cluster-nb-unam-n2-highmem--614666dc-grp",
        ] -> null
      - name                        = "nb-unam-n2-highmem-16" -> null
      - node_count                  = 0 -> null
      - node_locations              = [
          - "southamerica-east1-c",
        ] -> null
      - project                     = "catalystproject-392106" -> null
      - version                     = "1.29.1-gke.1589018" -> null

      - autoscaling {
          - location_policy      = "BALANCED" -> null
          - max_node_count       = 100 -> null
          - min_node_count       = 0 -> null
          - total_max_node_count = 0 -> null
          - total_min_node_count = 0 -> null
        }

      - management {
          - auto_repair  = true -> null
          - auto_upgrade = false -> null
        }

      - network_config {
          - create_pod_range     = false -> null
          - enable_private_nodes = false -> null
        }

      - node_config {
          - disk_size_gb      = 400 -> null
          - disk_type         = "pd-balanced" -> null
          - guest_accelerator = [] -> null
          - image_type        = "COS_CONTAINERD" -> null
          - labels            = {
              - "2i2c.org/community"           = "unam"
              - "hub.jupyter.org/node-purpose" = "user"
              - "k8s.dask.org/node-purpose"    = "scheduler"
            } -> null
          - local_ssd_count   = 0 -> null
          - logging_variant   = "DEFAULT" -> null
          - machine_type      = "n2-highmem-16" -> null
          - metadata          = {
              - "disable-legacy-endpoints" = "true"
            } -> null
          - oauth_scopes      = [
              - "https://www.googleapis.com/auth/cloud-platform",
            ] -> null
          - preemptible       = false -> null
          - resource_labels   = {
              - "community"    = "unam"
              - "node-purpose" = "notebook"
            } -> null
          - service_account   = "[email protected]" -> null
          - spot              = false -> null
          - tags              = [] -> null
          - taint             = [
              - {
                  - effect = "NO_SCHEDULE"
                  - key    = "hub.jupyter.org_dedicated"
                  - value  = "user"
                },
              - {
                  - effect = "NO_SCHEDULE"
                  - key    = "2i2c.org/community"
                  - value  = "unam"
                },
            ] -> null

          - shielded_instance_config {
              - enable_integrity_monitoring = true -> null
              - enable_secure_boot          = false -> null
            }

          - workload_metadata_config {
              - mode = "GKE_METADATA" -> null
            }
        }

      - upgrade_settings {
          - max_surge       = 1 -> null
          - max_unavailable = 0 -> null
          - strategy        = "SURGE" -> null
        }
    }

  # google_container_node_pool.notebook["unam-n2-highmem-4"] will be destroyed
  # (because key ["unam-n2-highmem-4"] is not in for_each map)
  - resource "google_container_node_pool" "notebook" {
      - cluster                     = "latam-cluster" -> null
      - id                          = "projects/catalystproject-392106/locations/southamerica-east1/clusters/latam-cluster/nodePools/nb-unam-n2-highmem-4" -> null
      - initial_node_count          = 0 -> null
      - instance_group_urls         = [
          - "https://www.googleapis.com/compute/v1/projects/catalystproject-392106/zones/southamerica-east1-c/instanceGroupManagers/gke-latam-cluster-nb-unam-n2-highmem--d9a9b81b-grp",
        ] -> null
      - location                    = "southamerica-east1" -> null
      - managed_instance_group_urls = [
          - "https://www.googleapis.com/compute/v1/projects/catalystproject-392106/zones/southamerica-east1-c/instanceGroups/gke-latam-cluster-nb-unam-n2-highmem--d9a9b81b-grp",
        ] -> null
      - name                        = "nb-unam-n2-highmem-4" -> null
      - node_count                  = 0 -> null
      - node_locations              = [
          - "southamerica-east1-c",
        ] -> null
      - project                     = "catalystproject-392106" -> null
      - version                     = "1.29.1-gke.1589018" -> null

      - autoscaling {
          - location_policy      = "BALANCED" -> null
          - max_node_count       = 100 -> null
          - min_node_count       = 0 -> null
          - total_max_node_count = 0 -> null
          - total_min_node_count = 0 -> null
        }

      - management {
          - auto_repair  = true -> null
          - auto_upgrade = false -> null
        }

      - network_config {
          - create_pod_range     = false -> null
          - enable_private_nodes = false -> null
        }

      - node_config {
          - disk_size_gb      = 400 -> null
          - disk_type         = "pd-balanced" -> null
          - guest_accelerator = [] -> null
          - image_type        = "COS_CONTAINERD" -> null
          - labels            = {
              - "2i2c.org/community"           = "unam"
              - "hub.jupyter.org/node-purpose" = "user"
              - "k8s.dask.org/node-purpose"    = "scheduler"
            } -> null
          - local_ssd_count   = 0 -> null
          - logging_variant   = "DEFAULT" -> null
          - machine_type      = "n2-highmem-4" -> null
          - metadata          = {
              - "disable-legacy-endpoints" = "true"
            } -> null
          - oauth_scopes      = [
              - "https://www.googleapis.com/auth/cloud-platform",
            ] -> null
          - preemptible       = false -> null
          - resource_labels   = {
              - "community"    = "unam"
              - "node-purpose" = "notebook"
            } -> null
          - service_account   = "[email protected]" -> null
          - spot              = false -> null
          - tags              = [] -> null
          - taint             = [
              - {
                  - effect = "NO_SCHEDULE"
                  - key    = "hub.jupyter.org_dedicated"
                  - value  = "user"
                },
              - {
                  - effect = "NO_SCHEDULE"
                  - key    = "2i2c.org/community"
                  - value  = "unam"
                },
            ] -> null

          - shielded_instance_config {
              - enable_integrity_monitoring = true -> null
              - enable_secure_boot          = false -> null
            }

          - workload_metadata_config {
              - mode = "GKE_METADATA" -> null
            }
        }

      - upgrade_settings {
          - max_surge       = 1 -> null
          - max_unavailable = 0 -> null
          - strategy        = "SURGE" -> null
        }
    }

  # google_container_node_pool.notebook["unam-n2-highmem-64"] will be destroyed
  # (because key ["unam-n2-highmem-64"] is not in for_each map)
  - resource "google_container_node_pool" "notebook" {
      - cluster                     = "latam-cluster" -> null
      - id                          = "projects/catalystproject-392106/locations/southamerica-east1/clusters/latam-cluster/nodePools/nb-unam-n2-highmem-64" -> null
      - initial_node_count          = 0 -> null
      - instance_group_urls         = [
          - "https://www.googleapis.com/compute/v1/projects/catalystproject-392106/zones/southamerica-east1-c/instanceGroupManagers/gke-latam-cluster-nb-unam-n2-highmem--081429e4-grp",
        ] -> null
      - location                    = "southamerica-east1" -> null
      - managed_instance_group_urls = [
          - "https://www.googleapis.com/compute/v1/projects/catalystproject-392106/zones/southamerica-east1-c/instanceGroups/gke-latam-cluster-nb-unam-n2-highmem--081429e4-grp",
        ] -> null
      - name                        = "nb-unam-n2-highmem-64" -> null
      - node_count                  = 0 -> null
      - node_locations              = [
          - "southamerica-east1-c",
        ] -> null
      - project                     = "catalystproject-392106" -> null
      - version                     = "1.29.1-gke.1589018" -> null

      - autoscaling {
          - location_policy      = "BALANCED" -> null
          - max_node_count       = 100 -> null
          - min_node_count       = 0 -> null
          - total_max_node_count = 0 -> null
          - total_min_node_count = 0 -> null
        }

      - management {
          - auto_repair  = true -> null
          - auto_upgrade = false -> null
        }

      - network_config {
          - create_pod_range     = false -> null
          - enable_private_nodes = false -> null
        }

      - node_config {
          - disk_size_gb      = 400 -> null
          - disk_type         = "pd-balanced" -> null
          - guest_accelerator = [] -> null
          - image_type        = "COS_CONTAINERD" -> null
          - labels            = {
              - "2i2c.org/community"           = "unam"
              - "hub.jupyter.org/node-purpose" = "user"
              - "k8s.dask.org/node-purpose"    = "scheduler"
            } -> null
          - local_ssd_count   = 0 -> null
          - logging_variant   = "DEFAULT" -> null
          - machine_type      = "n2-highmem-64" -> null
          - metadata          = {
              - "disable-legacy-endpoints" = "true"
            } -> null
          - oauth_scopes      = [
              - "https://www.googleapis.com/auth/cloud-platform",
            ] -> null
          - preemptible       = false -> null
          - resource_labels   = {
              - "community"    = "unam"
              - "node-purpose" = "notebook"
            } -> null
          - service_account   = "[email protected]" -> null
          - spot              = false -> null
          - tags              = [] -> null
          - taint             = [
              - {
                  - effect = "NO_SCHEDULE"
                  - key    = "hub.jupyter.org_dedicated"
                  - value  = "user"
                },
              - {
                  - effect = "NO_SCHEDULE"
                  - key    = "2i2c.org/community"
                  - value  = "unam"
                },
            ] -> null

          - shielded_instance_config {
              - enable_integrity_monitoring = true -> null
              - enable_secure_boot          = false -> null
            }

          - workload_metadata_config {
              - mode = "GKE_METADATA" -> null
            }
        }

      - upgrade_settings {
          - max_surge       = 1 -> null
          - max_unavailable = 0 -> null
          - strategy        = "SURGE" -> null
        }
    }
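
Before deciding how to handle this, it can help to confirm exactly which addresses Terraform is tracking and what it has recorded for each of them. A minimal sketch using the standard `terraform state` subcommands, with resource addresses taken from the plan above:

```shell
# List every node pool tracked in state for this resource
terraform state list | grep 'google_container_node_pool.notebook'

# Inspect the recorded attributes of one pool slated for replacement
terraform state show 'google_container_node_pool.notebook["gpu-t4-highmem-16"]'
```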

Should I run a refresh-only plan to accept these changes into state, or do they need to be dealt with some other way?
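
For reference, the refresh-only workflow would look roughly like this. One caveat: a refresh-only apply reconciles state with the real infrastructure, not with the configuration, so it may not silence the `disk_size_gb = 100 -> 400` diff if the divergence is between config and reality:

```shell
# Preview what a refresh-only run would change in state
# (makes no infrastructure changes)
terraform plan -refresh-only

# Accept the refreshed values into state if the preview looks right
terraform apply -refresh-only
```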

Currently, I am using `-target=` to target the filestore so that I'm not blocked (#4335)
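
The `-target=` workaround above can be sketched as follows. The resource address below is illustrative only (see #4335 for the actual filestore resource):

```shell
# Plan/apply only the filestore, leaving the node pool diffs untouched.
# 'google_filestore_instance.homedirs' is a hypothetical address --
# substitute the real one from the repository's terraform config.
terraform apply -target='google_filestore_instance.homedirs'
```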

Proposal

No response

Updates and actions

No response
