Unable to deploy nginx to consul-k8s when connect inject is enabled. #4241

Open
GeorgeJose7 opened this issue Aug 11, 2024 · 4 comments

@GeorgeJose7

Overview of the Issue

I am trying to deploy Consul to a DigitalOcean Kubernetes cluster using Helm. Once Consul is up and running, I try to deploy nginx into the cluster. The nginx pods are not starting up, and the following information was found in the logs:

Defaulted container "consul-dataplane" out of: consul-dataplane, nginx-deployment, consul-connect-inject-init (init)
Error from server (BadRequest): container "consul-dataplane" in pod "nginx-deployment-68d9dc9859-dgqbj" is waiting to start: PodInitializing

How can I resolve this issue?
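
Note: the "Defaulted container" message above is because kubectl logs picks the first container in the pod unless one is specified, so the output shown is from consul-dataplane rather than the init container that appears to be failing. I assume the relevant logs would come from something like the following (pod name taken from the error above; substitute your own):

# logs from the connect-inject init container
kubectl logs nginx-deployment-68d9dc9859-dgqbj -n default -c consul-connect-inject-init

# if the init container has already restarted, logs from its previous attempt
kubectl logs nginx-deployment-68d9dc9859-dgqbj -n default -c consul-connect-inject-init --previous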

Kubernetes information

Provider: DigitalOcean
Version: 1.30.2-do.0

Consul Helm chart

Chart information

annotations:
  artifacthub.io/images: |
    - name: consul
      image: hashicorp/consul:1.19.1
    - name: consul-k8s-control-plane
      image: hashicorp/consul-k8s-control-plane:1.5.1
    - name: consul-dataplane
      image: hashicorp/consul-dataplane:1.5.1
    - name: envoy
      image: envoyproxy/envoy:v1.25.11
  artifacthub.io/license: MPL-2.0
  artifacthub.io/links: |
    - name: Documentation
      url: https://www.consul.io/docs/k8s
    - name: hashicorp/consul
      url: https://github.com/hashicorp/consul
    - name: hashicorp/consul-k8s
      url: https://github.com/hashicorp/consul-k8s
  artifacthub.io/prerelease: "false"
  artifacthub.io/signKey: |
    fingerprint: C874011F0AB405110D02105534365D9472D7468F
    url: https://keybase.io/hashicorp/pgp_keys.asc
apiVersion: v2
appVersion: 1.19.1
description: Official HashiCorp Consul Chart
home: https://www.consul.io
icon: https://raw.githubusercontent.com/hashicorp/consul-k8s/main/assets/icon.png
kubeVersion: '>=1.22.0-0'
name: consul
sources:
- https://github.com/hashicorp/consul
- https://github.com/hashicorp/consul-k8s
version: 1.5.1

Helm chart values

global:
  name: consul
  enabled: true
  datacenter: dc1
  tls:
    enabled: true
    enableAutoEncrypt: true
    verify: true
  acls: 
    manageSystemACLs: true
    gossipEncryption:
      secretName: consul-bootstrap-secret
      secretKey: token

# Disable the expose server in production.
server:
  replicas: 1
  bootstrapExpect: 1
  exposeService:
    enabled: false
  #  type: LoadBalancer
  storage: 5Gi
  storageClass: do-block-storage
  resources:
    requests:
      memory: "100Mi"
      cpu: "100m"
    limits:
      memory: "100Mi"
      cpu: "100m"

connectInject:
  enabled: true
  default: true
  k8sAllowNamespaces: ['*']
#  aclInjectToken:
#    secretName: consul-bootstrap-secret
#    secretKey: token
  apiGateway:
    managedGatewayClass:
      serviceType: LoadBalancer
  

meshGateway:
  enabled: false
  replicas: 1

controller:
  enabled: true

ui:
  enabled: true
  service:
    enabled: true
    type: LoadBalancer

terminatingGateways:
  enabled: true

API Gateway

apiVersion: gateway.networking.k8s.io/v1beta1
# The Gateway is the main infrastructure resource that links API gateway components.
kind: Gateway
metadata:
 name: api-gateway
 namespace: consul
spec:
 gatewayClassName: consul
 # Configures the listener that is bound to the gateway's address.
 listeners:
   # Defines the listener protocol (HTTP, HTTPS, or TCP)
 - protocol: HTTPS
   port: 443
   name: https
   allowedRoutes:
     namespaces:
       from: All
   tls:
     # Defines the certificate to use for the HTTPS listener.
     certificateRefs:
       - name: consul-server-cert
         kind: Secret

Nginx manifest

# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-deployment
  name: nginx-deployment
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-deployment
  template:
    metadata:
      labels:
        app: nginx-deployment
      annotations:
        'consul.hashicorp.com/connect-inject': 'true'
    spec:
      serviceAccountName: nginx-sa
      containers:
      - image: k8s.gcr.io/ingressconformance/echoserver:v0.0.1
        name: nginx-deployment
        env:
        - name: SERVICE_NAME
          value: nginx-deployment
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - containerPort: 80
---
# service
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-service
  name: nginx-service
  namespace: default
spec:
  ports:
  - port: 443
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx-deployment
---
# service intention
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: api-gateway 
spec:
  destination:
    name: nginx-service
  sources:
    - name: api-gateway
      action: allow  
---
# service defaults
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: nginx-service-default
  namespace: default
spec:
  protocol: http
---
# service account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-sa
  namespace: default
automountServiceAccountToken: true
---
# http route
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: route-echo
  namespace: default
spec:
  parentRefs:
  - name: api-gateway
    namespace: consul
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /echo
    backendRefs:
    - kind: Service
      name: nginx-service


      
@GeorgeJose7 GeorgeJose7 added the type/bug Something isn't working label Aug 11, 2024
@blake
Member

blake commented Aug 13, 2024

Hi @GeorgeJose7, you will need to provide additional information to help us determine why the consul-dataplane container is not starting. Can you please review https://kubernetes.io/docs/tasks/debug/debug-application/debug-pods/#debugging-pods and provide the output of kubectl describe for the nginx-deployment pod?
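
Something along these lines should capture the pod state and any recent events (label and namespace taken from the manifests you posted):

# pod-level detail: init container state, restart counts, and events
kubectl describe pod -n default -l app=nginx-deployment

# namespace events, in case the pod's own events have already rotated out
kubectl get events -n default --sort-by=.lastTimestamp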

@GeorgeJose7
Author

@blake

This is the output of the kubectl describe command:

Name:             nginx-deployment-68d9dc9859-8dq6p
Namespace:        default
Priority:         0
Service Account:  nginx-sa
Node:             pool-03ffzmlif-bao7d/10.114.0.2
Start Time:       Tue, 13 Aug 2024 15:48:24 +0200
Labels:           app=nginx-deployment
                  consul.hashicorp.com/connect-inject-managed-by=consul-k8s-endpoints-controller
                  consul.hashicorp.com/connect-inject-status=injected
                  pod-template-hash=68d9dc9859
Annotations:      consul.hashicorp.com/connect-inject: true
                  consul.hashicorp.com/connect-inject-status: injected
                  consul.hashicorp.com/connect-k8s-version: v1.5.1
                  consul.hashicorp.com/connect-service-port: 80
                  consul.hashicorp.com/consul-k8s-version: v1.5.1
                  consul.hashicorp.com/original-pod:
                    {"kind":"Pod","apiVersion":"v1","metadata":{"generateName":"nginx-deployment-68d9dc9859-","namespace":"default","creationTimestamp":null,"...
                  consul.hashicorp.com/transparent-proxy-status: enabled
Status:           Pending
IP:               10.244.1.75
IPs:
  IP:           10.244.1.75
Controlled By:  ReplicaSet/nginx-deployment-68d9dc9859
Init Containers:
  consul-connect-inject-init:
    Container ID:  containerd://847fb6ce241d0d1527a79c2626527f620cd2c3974edc4379013347a923df4d0d
    Image:         hashicorp/consul-k8s-control-plane:1.5.1
    Image ID:      docker.io/hashicorp/consul-k8s-control-plane@sha256:eb342fa3f36093d3d30a1ff903595d8c1a19beaf8f4780899580494873467ad3
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -ec
      consul-k8s-control-plane connect-init -pod-name=${POD_NAME} -pod-namespace=${POD_NAMESPACE} \
        -log-level=info \
        -log-json=false \
        -service-account-name="nginx-sa" \
        -service-name="" \
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 13 Aug 2024 15:49:50 +0200
      Finished:     Tue, 13 Aug 2024 15:49:51 +0200
    Ready:          False
    Restart Count:  4
    Limits:
      cpu:     50m
      memory:  150Mi
    Requests:
      cpu:     50m
      memory:  25Mi
    Environment:
      POD_NAME:                        nginx-deployment-68d9dc9859-8dq6p (v1:metadata.name)
      POD_NAMESPACE:                   default (v1:metadata.namespace)
      NODE_NAME:                        (v1:spec.nodeName)
      CONSUL_ADDRESSES:                consul-server.consul.svc
      CONSUL_GRPC_PORT:                8502
      CONSUL_HTTP_PORT:                8501
      CONSUL_API_TIMEOUT:              5m0s
      CONSUL_NODE_NAME:                $(NODE_NAME)-virtual
      CONSUL_USE_TLS:                  true
      CONSUL_CACERT_PEM:               -----BEGIN CERTIFICATE-----
                                       MIIDQjCCAuigAwIBAgIUQfrb4XmWe283WZ9ntaFiuOqs9C4wCgYIKoZIzj0EAwIw
                                       gZExCzAJBgNVBAYTAlVTMQswCQYDVQQIEwJDQTEWMBQGA1UEBxMNU2FuIEZyYW5j
                                       aXNjbzEaMBgGA1UECRMRMTAxIFNlY29uZCBTdHJlZXQxDjAMBgNVBBETBTk0MTA1
                                       MRcwFQYDVQQKEw5IYXNoaUNvcnAgSW5jLjEYMBYGA1UEAxMPQ29uc3VsIEFnZW50
                                       IENBMB4XDTI0MDgxMzEzNDE0NFoXDTM0MDgxMTEzNDI0NFowgZExCzAJBgNVBAYT
                                       AlVTMQswCQYDVQQIEwJDQTEWMBQGA1UEBxMNU2FuIEZyYW5jaXNjbzEaMBgGA1UE
                                       CRMRMTAxIFNlY29uZCBTdHJlZXQxDjAMBgNVBBETBTk0MTA1MRcwFQYDVQQKEw5I
                                       YXNoaUNvcnAgSW5jLjEYMBYGA1UEAxMPQ29uc3VsIEFnZW50IENBMFkwEwYHKoZI
                                       zj0CAQYIKoZIzj0DAQcDQgAE6xlZxCqILae9+hOrSteqT9hWW+C2Gs9kPt9dTUMA
                                       PZUJ1Bl6JJXP6Mt/Uz5dqPfLAxqLg4LWHagD/aBea1XNiaOCARowggEWMA4GA1Ud
                                       DwEB/wQEAwIBhjAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDwYDVR0T
                                       AQH/BAUwAwEB/zBoBgNVHQ4EYQRfOGI6NzY6YTU6OTA6NzQ6NTI6ZTc6MjI6MDY6
                                       Yzc6N2Y6OWQ6YTM6MmU6NDc6ZTQ6ZWY6Mjc6NzE6Yzg6NzY6MDI6YzY6ODU6MjI6
                                       ZGU6NGY6ZmU6ODc6MTI6OGQ6ZmIwagYDVR0jBGMwYYBfOGI6NzY6YTU6OTA6NzQ6
                                       NTI6ZTc6MjI6MDY6Yzc6N2Y6OWQ6YTM6MmU6NDc6ZTQ6ZWY6Mjc6NzE6Yzg6NzY6
                                       MDI6YzY6ODU6MjI6ZGU6NGY6ZmU6ODc6MTI6OGQ6ZmIwCgYIKoZIzj0EAwIDSAAw
                                       RQIhALDcXK/S5ciVuZnb2cfHLTzZhMP4mg9yZbFGLXYD2cZfAiBEqJbbaDj4knoZ
                                       l3Uv9OqozTEEzFdNseShueI9LekI8A==
                                       -----END CERTIFICATE-----

      CONSUL_TLS_SERVER_NAME:
      CONSUL_LOGIN_AUTH_METHOD:        consul-k8s-auth-method
      CONSUL_LOGIN_BEARER_TOKEN_FILE:  /var/run/secrets/kubernetes.io/serviceaccount/token
      CONSUL_LOGIN_META:               pod=$(POD_NAMESPACE)/$(POD_NAME)
      CONSUL_REDIRECT_TRAFFIC_CONFIG:  {"ConsulDNSIP":"127.0.0.1","ConsulDNSPort":8600,"ProxyUserID":"5995","ProxyInboundPort":20000,"ProxyOutboundPort":15001,"ExcludeInboundPorts":null,"ExcludeOutboundPorts":null,"ExcludeOutboundCIDRs":null,"ExcludeUIDs":["5996"],"NetNS":"","IptablesProvider":null}
    Mounts:
      /consul/connect-inject from consul-connect-inject-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b4msk (ro)
Containers:
  consul-dataplane:
    Container ID:
    Image:         hashicorp/consul-dataplane:1.5.1
    Image ID:
    Port:          <none>
    Host Port:     <none>
    Args:
      -addresses
      consul-server.consul.svc
      -grpc-port=8502
      -proxy-service-id-path=/consul/connect-inject/proxyid
      -log-level=info
      -log-json=false
      -envoy-concurrency=2
      -credential-type=login
      -login-auth-method=consul-k8s-auth-method
      -login-bearer-token-path=/var/run/secrets/kubernetes.io/serviceaccount/token
      -ca-certs=/consul/connect-inject/consul-ca.pem
      -graceful-port=20600
      -shutdown-drain-listeners
      -shutdown-grace-period-seconds=30
      -graceful-shutdown-path=/graceful_shutdown
      -startup-grace-period-seconds=0
      -graceful-startup-path=/graceful_startup
      -telemetry-prom-scrape-path=/metrics
      -consul-dns-bind-port=8600
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Readiness:      tcp-socket :20000 delay=1s timeout=1s period=10s #success=1 #failure=3
    Environment:
      TMPDIR:                     /consul/connect-inject
      NODE_NAME:                   (v1:spec.nodeName)
      DP_SERVICE_NODE_NAME:       $(NODE_NAME)-virtual
      POD_NAME:                   nginx-deployment-68d9dc9859-8dq6p (v1:metadata.name)
      POD_NAMESPACE:              default (v1:metadata.namespace)
      POD_UID:                     (v1:metadata.uid)
      DP_CREDENTIAL_LOGIN_META:   pod=$(POD_NAMESPACE)/$(POD_NAME)
      DP_CREDENTIAL_LOGIN_META1:  pod=$(POD_NAMESPACE)/$(POD_NAME)
      DP_CREDENTIAL_LOGIN_META2:  pod-uid=$(POD_UID)
    Mounts:
      /consul/connect-inject from consul-connect-inject-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b4msk (ro)
  nginx-deployment:
    Container ID:
    Image:          k8s.gcr.io/ingressconformance/echoserver:v0.0.1
    Image ID:
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:
      SERVICE_NAME:  nginx-deployment
      POD_NAME:      nginx-deployment-68d9dc9859-8dq6p (v1:metadata.name)
      NAMESPACE:     default (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b4msk (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 False
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-b4msk:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  consul-connect-inject-data:
    Type:        EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:      Memory
    SizeLimit:   <unset>
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  2m6s                default-scheduler  Successfully assigned default/nginx-deployment-68d9dc9859-8dq6p to pool-03ffzmlif-bao7d
  Normal   Pulled     41s (x5 over 2m5s)  kubelet            Container image "hashicorp/consul-k8s-control-plane:1.5.1" already present on machine
  Normal   Created    40s (x5 over 2m5s)  kubelet            Created container consul-connect-inject-init
  Normal   Started    40s (x5 over 2m5s)  kubelet            Started container consul-connect-inject-init
  Warning  BackOff    3s (x10 over 2m1s)  kubelet            Back-off restarting failed container consul-connect-inject-init in pod nginx-deployment-68d9dc9859-8dq6p_default(88547d26-da1f-48ba-9c59-0a85af69dedf)

@lowtianwei

Same issue on my side.

@blake
Member

blake commented Nov 27, 2024

@GeorgeJose7 @lowtianwei it's still not clear from kubectl describe what is going on. It might also help to have the logs from the consul-connect-inject-init container.

One thing that I do notice is that the ServiceAccount name does not match the name of the Kubernetes Service. These are required to be the same when ACLs are enabled. https://developer.hashicorp.com/consul/docs/k8s/connect#service-names
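
In your manifests the pods run under the nginx-sa ServiceAccount but are selected by the nginx-service Service. One way to line these up, keeping the existing Service name, would be roughly the following (untested sketch based on the manifests you posted); the Deployment's serviceAccountName would also need to change from nginx-sa to nginx-service to match:

# ServiceAccount renamed to match the Kubernetes Service that selects the pods,
# which is what Consul checks when manageSystemACLs is enabled (see the docs linked above)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-service
  namespace: default
automountServiceAccountToken: true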
