Unable to deploy nginx to consul-k8s when connect inject is enabled. #4241
Labels: type/bug

Comments
Hi @GeorgeJose7, you will need to provide additional information to help us determine why the consul-dataplane container is not starting. Can you please review https://kubernetes.io/docs/tasks/debug/debug-application/debug-pods/#debugging-pods and provide additional output?

This is the output of `kubectl describe` on the pod:

Name: nginx-deployment-68d9dc9859-8dq6p
Namespace: default
Priority: 0
Service Account: nginx-sa
Node: pool-03ffzmlif-bao7d/10.114.0.2
Start Time: Tue, 13 Aug 2024 15:48:24 +0200
Labels: app=nginx-deployment
consul.hashicorp.com/connect-inject-managed-by=consul-k8s-endpoints-controller
consul.hashicorp.com/connect-inject-status=injected
pod-template-hash=68d9dc9859
Annotations: consul.hashicorp.com/connect-inject: true
consul.hashicorp.com/connect-inject-status: injected
consul.hashicorp.com/connect-k8s-version: v1.5.1
consul.hashicorp.com/connect-service-port: 80
consul.hashicorp.com/consul-k8s-version: v1.5.1
consul.hashicorp.com/original-pod:
{"kind":"Pod","apiVersion":"v1","metadata":{"generateName":"nginx-deployment-68d9dc9859-","namespace":"default","creationTimestamp":null,"...
consul.hashicorp.com/transparent-proxy-status: enabled
Status: Pending
IP: 10.244.1.75
IPs:
IP: 10.244.1.75
Controlled By: ReplicaSet/nginx-deployment-68d9dc9859
Init Containers:
consul-connect-inject-init:
Container ID: containerd://847fb6ce241d0d1527a79c2626527f620cd2c3974edc4379013347a923df4d0d
Image: hashicorp/consul-k8s-control-plane:1.5.1
Image ID: docker.io/hashicorp/consul-k8s-control-plane@sha256:eb342fa3f36093d3d30a1ff903595d8c1a19beaf8f4780899580494873467ad3
Port: <none>
Host Port: <none>
Command:
/bin/sh
-ec
consul-k8s-control-plane connect-init -pod-name=${POD_NAME} -pod-namespace=${POD_NAMESPACE} \
-log-level=info \
-log-json=false \
-service-account-name="nginx-sa" \
-service-name="" \
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 13 Aug 2024 15:49:50 +0200
Finished: Tue, 13 Aug 2024 15:49:51 +0200
Ready: False
Restart Count: 4
Limits:
cpu: 50m
memory: 150Mi
Requests:
cpu: 50m
memory: 25Mi
Environment:
POD_NAME: nginx-deployment-68d9dc9859-8dq6p (v1:metadata.name)
POD_NAMESPACE: default (v1:metadata.namespace)
NODE_NAME: (v1:spec.nodeName)
CONSUL_ADDRESSES: consul-server.consul.svc
CONSUL_GRPC_PORT: 8502
CONSUL_HTTP_PORT: 8501
CONSUL_API_TIMEOUT: 5m0s
CONSUL_NODE_NAME: $(NODE_NAME)-virtual
CONSUL_USE_TLS: true
CONSUL_CACERT_PEM: -----BEGIN CERTIFICATE-----
MIIDQjCCAuigAwIBAgIUQfrb4XmWe283WZ9ntaFiuOqs9C4wCgYIKoZIzj0EAwIw
gZExCzAJBgNVBAYTAlVTMQswCQYDVQQIEwJDQTEWMBQGA1UEBxMNU2FuIEZyYW5j
aXNjbzEaMBgGA1UECRMRMTAxIFNlY29uZCBTdHJlZXQxDjAMBgNVBBETBTk0MTA1
MRcwFQYDVQQKEw5IYXNoaUNvcnAgSW5jLjEYMBYGA1UEAxMPQ29uc3VsIEFnZW50
IENBMB4XDTI0MDgxMzEzNDE0NFoXDTM0MDgxMTEzNDI0NFowgZExCzAJBgNVBAYT
AlVTMQswCQYDVQQIEwJDQTEWMBQGA1UEBxMNU2FuIEZyYW5jaXNjbzEaMBgGA1UE
CRMRMTAxIFNlY29uZCBTdHJlZXQxDjAMBgNVBBETBTk0MTA1MRcwFQYDVQQKEw5I
YXNoaUNvcnAgSW5jLjEYMBYGA1UEAxMPQ29uc3VsIEFnZW50IENBMFkwEwYHKoZI
zj0CAQYIKoZIzj0DAQcDQgAE6xlZxCqILae9+hOrSteqT9hWW+C2Gs9kPt9dTUMA
PZUJ1Bl6JJXP6Mt/Uz5dqPfLAxqLg4LWHagD/aBea1XNiaOCARowggEWMA4GA1Ud
DwEB/wQEAwIBhjAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDwYDVR0T
AQH/BAUwAwEB/zBoBgNVHQ4EYQRfOGI6NzY6YTU6OTA6NzQ6NTI6ZTc6MjI6MDY6
Yzc6N2Y6OWQ6YTM6MmU6NDc6ZTQ6ZWY6Mjc6NzE6Yzg6NzY6MDI6YzY6ODU6MjI6
ZGU6NGY6ZmU6ODc6MTI6OGQ6ZmIwagYDVR0jBGMwYYBfOGI6NzY6YTU6OTA6NzQ6
NTI6ZTc6MjI6MDY6Yzc6N2Y6OWQ6YTM6MmU6NDc6ZTQ6ZWY6Mjc6NzE6Yzg6NzY6
MDI6YzY6ODU6MjI6ZGU6NGY6ZmU6ODc6MTI6OGQ6ZmIwCgYIKoZIzj0EAwIDSAAw
RQIhALDcXK/S5ciVuZnb2cfHLTzZhMP4mg9yZbFGLXYD2cZfAiBEqJbbaDj4knoZ
l3Uv9OqozTEEzFdNseShueI9LekI8A==
-----END CERTIFICATE-----
CONSUL_TLS_SERVER_NAME:
CONSUL_LOGIN_AUTH_METHOD: consul-k8s-auth-method
CONSUL_LOGIN_BEARER_TOKEN_FILE: /var/run/secrets/kubernetes.io/serviceaccount/token
CONSUL_LOGIN_META: pod=$(POD_NAMESPACE)/$(POD_NAME)
CONSUL_REDIRECT_TRAFFIC_CONFIG: {"ConsulDNSIP":"127.0.0.1","ConsulDNSPort":8600,"ProxyUserID":"5995","ProxyInboundPort":20000,"ProxyOutboundPort":15001,"ExcludeInboundPorts":null,"ExcludeOutboundPorts":null,"ExcludeOutboundCIDRs":null,"ExcludeUIDs":["5996"],"NetNS":"","IptablesProvider":null}
Mounts:
/consul/connect-inject from consul-connect-inject-data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b4msk (ro)
Containers:
consul-dataplane:
Container ID:
Image: hashicorp/consul-dataplane:1.5.1
Image ID:
Port: <none>
Host Port: <none>
Args:
-addresses
consul-server.consul.svc
-grpc-port=8502
-proxy-service-id-path=/consul/connect-inject/proxyid
-log-level=info
-log-json=false
-envoy-concurrency=2
-credential-type=login
-login-auth-method=consul-k8s-auth-method
-login-bearer-token-path=/var/run/secrets/kubernetes.io/serviceaccount/token
-ca-certs=/consul/connect-inject/consul-ca.pem
-graceful-port=20600
-shutdown-drain-listeners
-shutdown-grace-period-seconds=30
-graceful-shutdown-path=/graceful_shutdown
-startup-grace-period-seconds=0
-graceful-startup-path=/graceful_startup
-telemetry-prom-scrape-path=/metrics
-consul-dns-bind-port=8600
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Readiness: tcp-socket :20000 delay=1s timeout=1s period=10s #success=1 #failure=3
Environment:
TMPDIR: /consul/connect-inject
NODE_NAME: (v1:spec.nodeName)
DP_SERVICE_NODE_NAME: $(NODE_NAME)-virtual
POD_NAME: nginx-deployment-68d9dc9859-8dq6p (v1:metadata.name)
POD_NAMESPACE: default (v1:metadata.namespace)
POD_UID: (v1:metadata.uid)
DP_CREDENTIAL_LOGIN_META: pod=$(POD_NAMESPACE)/$(POD_NAME)
DP_CREDENTIAL_LOGIN_META1: pod=$(POD_NAMESPACE)/$(POD_NAME)
DP_CREDENTIAL_LOGIN_META2: pod-uid=$(POD_UID)
Mounts:
/consul/connect-inject from consul-connect-inject-data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b4msk (ro)
nginx-deployment:
Container ID:
Image: k8s.gcr.io/ingressconformance/echoserver:v0.0.1
Image ID:
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment:
SERVICE_NAME: nginx-deployment
POD_NAME: nginx-deployment-68d9dc9859-8dq6p (v1:metadata.name)
NAMESPACE: default (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b4msk (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-b4msk:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
consul-connect-inject-data:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium: Memory
SizeLimit: <unset>
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m6s default-scheduler Successfully assigned default/nginx-deployment-68d9dc9859-8dq6p to pool-03ffzmlif-bao7d
Normal Pulled 41s (x5 over 2m5s) kubelet Container image "hashicorp/consul-k8s-control-plane:1.5.1" already present on machine
Normal Created 40s (x5 over 2m5s) kubelet Created container consul-connect-inject-init
Normal Started 40s (x5 over 2m5s) kubelet Started container consul-connect-inject-init
Warning BackOff 3s (x10 over 2m1s) kubelet Back-off restarting failed container consul-connect-inject-init in pod nginx-deployment-68d9dc9859-8dq6p_default(88547d26-da1f-48ba-9c59-0a85af69dedf)
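The events above show consul-connect-inject-init exiting with code 1 and crash-looping, so the pod never gets past initialization and the consul-dataplane and nginx containers stay in PodInitializing. A reasonable next step (a sketch, with the pod name and namespace taken from the describe output above) is to pull the logs of the failing init container, including the previous crashed attempt:

```shell
# Logs from the crash-looping init container; --previous shows the last terminated attempt
kubectl logs nginx-deployment-68d9dc9859-8dq6p -n default -c consul-connect-inject-init --previous
```

The connect-init container's final log lines before exit usually state the actual failure (for example an ACL login error or a Consul server connectivity problem), which narrows the cause down much faster than the pod events alone.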
Overview of the Issue
I am trying to deploy Consul in a DigitalOcean Kubernetes cluster using Helm. Once Consul is up and running, I try to deploy nginx into the cluster. The nginx pods are not starting up, and the following information was found in the logs.
How can I resolve this issue?
Kubernetes information
Provider: DigitalOcean
Version: 1.30.2-do.0

Consul Helm chart information
Helm chart values
API Gateway
Nginx manifest