
deploy test cluster #3

Merged

splattner merged 2 commits into main from testcluster on Jul 10, 2023

Conversation

splattner
Member

No description provided.

@splattner temporarily deployed to production on July 10, 2023 at 10:03 with GitHub Actions (deployment now inactive)
@github-actions

Terraform Format and Style 🖌: success

Terraform Initialization ⚙️: success

Terraform Validation 🤖: success

Validation Output

Success! The configuration is valid.


Terraform Plan 📖success

Show Plan

module.training-cluster.tls_private_key.terraform: Refreshing state... [id=a7a9b4c6d77c308635e5eee1c5833c9d10997e3b]
module.training-cluster.random_password.gitea-pg-password: Refreshing state... [id=none]
module.training-cluster.hcloud_load_balancer.lb: Refreshing state... [id=1346050]
module.training-cluster.hcloud_network.network: Refreshing state... [id=3103931]
module.training-cluster.random_password.argocd-admin-password: Refreshing state... [id=none]
module.training-cluster.random_password.gitea-admin-password: Refreshing state... [id=none]
module.training-cluster.hcloud_placement_group.controlplane: Refreshing state... [id=182113]
module.training-cluster.random_password.rke2_cluster_secret: Refreshing state... [id=none]
module.training-cluster.random_password.student-passwords[0]: Refreshing state... [id=none]
module.training-cluster.hcloud_ssh_key.terraform: Refreshing state... [id=12675393]
module.training-cluster.hcloud_network_subnet.subnet: Refreshing state... [id=3103931-10.0.0.0/24]
module.training-cluster.restapi_object.api-a-record: Refreshing state... [id=2766359]
module.training-cluster.hcloud_load_balancer_service.rke2: Refreshing state... [id=1346050__9345]
module.training-cluster.hcloud_load_balancer_network.lb: Refreshing state... [id=1346050-3103931]
module.training-cluster.hcloud_load_balancer_service.api: Refreshing state... [id=1346050__6443]
module.training-cluster.restapi_object.api-aaaa-record: Refreshing state... [id=2766360]
module.training-cluster.hcloud_server.worker[1]: Refreshing state... [id=34694701]
module.training-cluster.hcloud_load_balancer_target.controlplane: Refreshing state... [id=lb-label-selector-tgt-a62253c532df170594d490612ecd8e1dcfd3e5e5ddd75c713a8888e4e7f12189-1346050]
module.training-cluster.hcloud_server.worker[0]: Refreshing state... [id=34694700]
module.training-cluster.hcloud_server_network.worker[0]: Refreshing state... [id=34694700-3103931]
module.training-cluster.hcloud_server_network.worker[1]: Refreshing state... [id=34694701-3103931]
module.training-cluster.hcloud_server.controlplane[2]: Refreshing state... [id=34694706]
module.training-cluster.hcloud_server.controlplane[0]: Refreshing state... [id=34694881]
module.training-cluster.hcloud_server.controlplane[1]: Refreshing state... [id=34694707]
module.training-cluster.hcloud_server_network.controlplane[0]: Refreshing state... [id=34694881-3103931]
module.training-cluster.hcloud_server_network.controlplane[1]: Refreshing state... [id=34694707-3103931]
module.training-cluster.hcloud_server_network.controlplane[2]: Refreshing state... [id=34694706-3103931]
module.training-cluster.hcloud_firewall.firewall: Refreshing state... [id=964636]
module.training-cluster.null_resource.wait_for_k8s_api: Refreshing state... [id=8711829371430069968]
module.training-cluster.ssh_resource.getkubeconfig: Refreshing state... [id=873266276485423219]
module.training-cluster.null_resource.cleanup-node-before-destroy[0]: Refreshing state... [id=8261093469454294782]
module.training-cluster.null_resource.cleanup-node-before-destroy[1]: Refreshing state... [id=5445725408253908594]
module.training-cluster.kubernetes_service.welcome: Refreshing state... [id=default/welcome]
module.training-cluster.kubernetes_ingress_v1.welcome: Refreshing state... [id=default/welcome]
module.training-cluster.kubernetes_namespace.longhorn-system: Refreshing state... [id=longhorn-system]
module.training-cluster.kubernetes_secret.hcloud: Refreshing state... [id=kube-system/hcloud]
module.training-cluster.kubernetes_secret.cloud-controller-manager: Refreshing state... [id=default/cloud-controller-manager]
module.training-cluster.kubernetes_service_account.cloud-controller-manager: Refreshing state... [id=kube-system/cloud-controller-manager]
module.training-cluster.kubernetes_cluster_role_binding.cloud-controller-manager: Refreshing state... [id=system:cloud-controller-manager]
module.training-cluster.kubernetes_deployment.cloud-controller-manager: Refreshing state... [id=kube-system/hcloud-cloud-controller-manager]
module.training-cluster.time_sleep.wait_for_cluster_ready: Refreshing state... [id=2023-07-10T08:43:59Z]
module.training-cluster.kubernetes_cluster_role_binding.cluster-admin[0]: Refreshing state... [id=webshell-cluster-admin]
module.training-cluster.helm_release.hcloud-csi-driver: Refreshing state... [id=hcloud-csi-driver]
module.training-cluster.kubernetes_secret.cloud_init_worker: Refreshing state... [id=kube-system/cloud-init-worker]
module.training-cluster.helm_release.longhorn: Refreshing state... [id=longhorn]
module.training-cluster.kubernetes_namespace.argocd: Refreshing state... [id=argocd]
module.training-cluster.kubernetes_config_map.welcome-content: Refreshing state... [id=default/welcome-content]
module.training-cluster.kubernetes_namespace.ingress-haproxy: Refreshing state... [id=ingress-haproxy]
module.training-cluster.kubernetes_namespace.gitea: Refreshing state... [id=gitea]
module.training-cluster.kubernetes_namespace.cert-manager: Refreshing state... [id=cert-manager]
module.training-cluster.helm_release.kubed: Refreshing state... [id=config-syncer]
module.training-cluster.kubernetes_cluster_role.argocd: Refreshing state... [id=argocd]
module.training-cluster.helm_release.ingress-haproxy: Refreshing state... [id=ingress-haproxy]
module.training-cluster.data.kubernetes_secret.acend-wildcard: Reading...
module.training-cluster.kubernetes_secret.hosttech-secret: Refreshing state... [id=cert-manager/hosttech-secret]
module.training-cluster.helm_release.certmanager: Refreshing state... [id=certmanager]
module.training-cluster.helm_release.argocd: Refreshing state... [id=argocd]
module.training-cluster.kubernetes_deployment.welcome: Refreshing state... [id=default/welcome]
module.training-cluster.data.kubernetes_secret.acend-wildcard: Read complete after 0s [id=cert-manager/acend-wildcard]
module.training-cluster.time_sleep.wait_for_ssl_ready: Refreshing state... [id=2023-07-10T09:07:03Z]
module.training-cluster.helm_release.gitea: Refreshing state... [id=gitea]
module.training-cluster.helm_release.argocd-training-project: Refreshing state... [id=argocd-apps]
module.training-cluster.null_resource.cleanup-argo-cr-before-destroy: Refreshing state... [id=8318085053169750327]
module.training-cluster.helm_release.appset-trainee-webshell[0]: Refreshing state... [id=trainee-webshell]
module.training-cluster.helm_release.appset-trainee-env[0]: Refreshing state... [id=trainee-env]
module.training-cluster.k8s_cert_manager_io_cluster_issuer_v1.clusterissuer-letsencrypt-prod: Refreshing state...
module.training-cluster.k8s_cert_manager_io_certificate_v1.certificate-acend-wildcard: Refreshing state...
module.training-cluster.helm_release.certmanager-webhook-hosttech: Refreshing state... [id=cert-manager-webhook-hosttech]
module.training-cluster.k8s_cert_manager_io_cluster_issuer_v1.clusterissuer-acend-hosttech: Refreshing state...
module.training-cluster.k8s_manifest.certificate-acend-wildcard: Refreshing state... [id=cert-manager::cert-manager.io/v1::Certificate::acend-wildcard]
module.training-cluster.k8s_manifest.clusterissuer-letsencrypt-prod: Refreshing state... [id=::cert-manager.io/v1::ClusterIssuer::letsencrypt-prod]
module.training-cluster.k8s_manifest.clusterissuer-acend-hosttech: Refreshing state... [id=::cert-manager.io/v1::ClusterIssuer::letsencrypt-prod-acend]
module.training-cluster.data.kubernetes_service.ingress-haproxy: Reading...
module.training-cluster.data.kubernetes_service.ingress-haproxy: Read complete after 0s [id=ingress-haproxy/ingress-haproxy-kubernetes-ingress]
module.training-cluster.restapi_object.labapp-a-record: Refreshing state... [id=2766384]
module.training-cluster.restapi_object.labapp-aaaa-record: Refreshing state... [id=2766383]
module.training-cluster.time_sleep.wait_30_seconds: Refreshing state... [id=2023-07-10T08:46:58Z]
module.training-cluster.restapi_object.gitea-user[0]: Refreshing state... [id=user1]
module.training-cluster.restapi_object.gitea-repo[0]: Refreshing state... [id=user1/argocd-training-examples]

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # module.training-cluster.helm_release.argocd will be updated in-place
  ~ resource "helm_release" "argocd" {
        id                         = "argocd"
      ~ metadata                   = [
          - {
              - app_version = "v2.7.7"
              - chart       = "argo-cd"
              - name        = "argocd"
              - namespace   = "argocd"
              - revision    = 8
              - values      = jsonencode(
                    {
                      - configs    = {
                          - cm     = {
                              - params = {
                                  - server = {
                                      - insecure = true
                                    }
                                }
                              - url    = "https://argocd.test.cluster.acend.ch"
                            }
                          - secret = {
                              - argocdServerAdminPassword = "$2a$10$Isf6TfUwpUvsXx24OrtY0u3Fz6/6fU5TgzqaMoSAzPThb2dicx6YO"
                              - extra                     = {
                                  - "accounts.user1.password"      = "$2a$10$aZ06ly31/22XnIWxREEp7.TekWPb9cFf/NqoQtq5oFuOXrx7gHLj6"
                                  - "accounts.user1.passwordMtime" = "2023-07-10T10:01:03Z"
                                }
                            }
                        }
                      - controller = {
                          - metrics = {
                              - enabled = true
                            }
                        }
                      - global     = {
                          - configs = {
                              - rbac = {
                                  - "policy.csv" = <<-EOT
                                        p, role:student, applications, *, */*, allow
                                        p, role:student, clusters, get, *, allow
                                        p, role:student, clusters, update, *, allow
                                        p, role:student, repositories, get, *, allow
                                        p, role:student, repositories, create, *, allow
                                        p, role:student, repositories, update, *, allow
                                        p, role:student, repositories, delete, *, allow
                                        
                                        p, role:student, projects, get, *, allow
                                        p, role:student, projects, create, *, allow
                                        p, role:student, projects, update, *, allow
                                        p, role:student, projects, delete, *, allow
                                        
                                        p, role:student, projects, get, argocd-training, allow
                                        
                                        g, user1, role:student
                                    EOT
                                }
                            }
                        }
                      - server     = {
                          - config      = {
                              - "accounts.user1"      = "apiKey, login"
                              - "resource.exclusions" = <<-EOT
                                    - kinds:
                                      - "ciliumidentities"
                                      - "ciliumendpoints"
                                      - "ciliumnodes"
                                EOT
                            }
                          - ingress     = {
                              - annotations = {
                                  - "ingress.kubernetes.io/server-ssl" = true
                                }
                              - enabled     = true
                              - hosts       = [
                                  - "argocd.test.cluster.acend.ch",
                                ]
                              - tls         = [
                                  - {
                                      - hosts      = [
                                          - "argocd.test.cluster.acend.ch",
                                        ]
                                      - secretName = "acend-wildcard"
                                    },
                                ]
                            }
                          - ingressGrpc = {
                              - enabled = true
                              - hosts   = [
                                  - "argocd-grpc.test.cluster.acend.ch",
                                ]
                              - tls     = [
                                  - {
                                      - hosts      = [
                                          - "argocd-grpc.test.cluster.acend.ch",
                                        ]
                                      - secretName = "acend-wildcard"
                                    },
                                ]
                            }
                        }
                    }
                )
              - version     = "5.37.1"
            },
        ] -> (known after apply)
        name                       = "argocd"
      ~ values                     = [
          - (sensitive value),
          - <<-EOT
                global:
                  configs:
                    rbac: 
                      policy.csv: |
                        p, role:student, applications, *, */*, allow
                        p, role:student, clusters, get, *, allow
                        p, role:student, clusters, update, *, allow
                        p, role:student, repositories, get, *, allow
                        p, role:student, repositories, create, *, allow
                        p, role:student, repositories, update, *, allow
                        p, role:student, repositories, delete, *, allow
                
                        p, role:student, projects, get, *, allow
                        p, role:student, projects, create, *, allow
                        p, role:student, projects, update, *, allow
                        p, role:student, projects, delete, *, allow
                
                        p, role:student, projects, get, argocd-training, allow
                
                        g, user1, role:student
            EOT,
          - <<-EOT
                server:
                  config:
                    resource.exclusions: |
                      - kinds:
                        - "ciliumidentities"
                        - "ciliumendpoints"
                        - "ciliumnodes"
            EOT,
        ] -> (known after apply)
        # (26 unchanged attributes hidden)

        # (13 unchanged blocks hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.
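For context, the `values` list shown in the diff above corresponds to multiple values documents passed to the `helm_release` resource. A minimal sketch of how such a resource is typically declared; the repository URL, file paths, and variable layout here are illustrative assumptions, not the module's actual code:

```terraform
# Hedged sketch only: repository URL and file paths are assumptions.
# Chart name, version, and namespace match the plan output above.
resource "helm_release" "argocd" {
  name       = "argocd"
  repository = "https://argoproj.github.io/argo-helm"
  chart      = "argo-cd"
  version    = "5.37.1"
  namespace  = "argocd"

  # Each list element becomes one values document, mirroring the
  # three-element values list shown in the plan diff.
  values = [
    file("${path.module}/values/argocd-base.yaml"),       # sensitive base values
    file("${path.module}/values/argocd-rbac.yaml"),       # global.configs.rbac policy.csv
    file("${path.module}/values/argocd-exclusions.yaml"), # server.config resource.exclusions
  ]
}
```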

─────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't
guarantee to take exactly these actions if you run "terraform apply" now.

Pusher: @splattner, Workflow: Deploy

@splattner splattner merged commit 793b0be into main Jul 10, 2023
1 check passed
@splattner splattner deleted the testcluster branch September 17, 2024 15:45