socket connection failed on windows node #491

Closed
himanshuz2 opened this issue Jun 2, 2022 · 10 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@himanshuz2

What happened:
The SMB driver keeps crashing and restarting on the Windows node.
E0602 15:05:03.340095 6944 connection.go:132] Lost connection to unix://C:\csi\csi.sock.

kubectl logs csi-smb-node-win-6npfj node-driver-registrar -n kube-system
I0602 15:01:56.103598 6944 main.go:166] Version: v2.5.0
I0602 15:01:56.477717 6944 main.go:167] Running node-driver-registrar in mode=registration
I0602 15:01:56.478467 6944 main.go:191] Attempting to open a gRPC connection with: "unix://C:\\csi\\csi.sock"
I0602 15:02:03.853302 6944 main.go:198] Calling CSI driver to discover driver name
I0602 15:02:03.864755 6944 main.go:208] CSI driver name: "smb.csi.k8s.io"
I0602 15:02:03.865660 6944 node_register.go:53] Starting Registration Server at: /registration/smb.csi.k8s.io-reg.sock
I0602 15:02:03.865988 6944 node_register.go:62] Registration Server started at: /registration/smb.csi.k8s.io-reg.sock
I0602 15:02:03.868307 6944 node_register.go:92] Skipping HTTP server because endpoint is set to: ""
I0602 15:02:04.049487 6944 main.go:102] Received GetInfo call: &InfoRequest{}
I0602 15:02:04.050576 6944 main.go:109] "Kubelet registration probe created" path="C:\var\lib\kubelet\plugins\smb.csi.k8s.io\registration"
I0602 15:02:04.070355 6944 main.go:120] Received NotifyRegistrationStatus call: &RegistrationStatus{PluginRegistered:true,Error:,}
E0602 15:05:03.340095 6944 connection.go:132] Lost connection to unix://C:\csi\csi.sock.

What you expected to happen:
The SMB driver on the Windows node should run without restarts.

How to reproduce it:
Create a workload cluster with CAPZ that includes a Windows node pool, and wait until the Windows node is Ready.
Install CSI Proxy and the SMB CSI driver, including on the Windows node.
Check the logs of the node-driver-registrar container in the csi-smb-node-win-xxxxx pod (a sketch of these steps follows below).
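For reference, a minimal sketch of the install and log-check steps as run from the Linux management machine. The Helm chart repo URL and the windows.enabled value are assumptions based on the upstream csi-driver-smb chart, not taken from this cluster:

# CSI Proxy may already be running as a Windows service on the node;
# if not, install csi-proxy.exe as a service on the node first.

# Install the SMB CSI driver with the Windows DaemonSet (chart location and value name assumed):
helm repo add csi-driver-smb https://raw.githubusercontent.com/kubernetes-csi/csi-driver-smb/master/charts
helm install csi-driver-smb csi-driver-smb/csi-driver-smb --namespace kube-system --set windows.enabled=true

# Find the Windows node pod and check the registrar logs:
kubectl get pods -n kube-system -l app=csi-smb-node-win -o wide
kubectl logs <csi-smb-node-win-pod> node-driver-registrar -n kube-system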

Anything else we need to know?:

Environment:

  • CSI Driver version: master

  • Kubernetes version (use kubectl version): 1.23.6

  • OS (e.g. from /etc/os-release): Jammy Jellyfish

  • Kernel (e.g. uname -a): Linux CAPZ-Management 5.15.0-27-generic #28-Ubuntu SMP Thu Apr 14 04:55:28 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

  • Install tools: CAPZ, kubectl, helm

  • Others:

@andyzhangx
Member

What are the smb container logs?

kubectl logs csi-smb-node-win-6npfj smb -n kube-system

Can you also run kubectl describe po csi-smb-node-win-6npfj -n kube-system?

@himanshuz2
Author

The log for the smb container is as follows. I am not able to understand which IP it is trying to reach; I have deleted all PVCs related to this driver.

It may be that I scaled the cluster down to one Windows node and this is the IP of the other node, which is no longer part of the scale set.

root@CAPZ-Management:/home/bmadministrator/.kube# kubectl --kubeconfig=config logs csi-smb-node-win-6npfj -n kube-system
error: a container name must be specified for pod csi-smb-node-win-6npfj, choose one of: [liveness-probe node-driver-registrar smb]
root@CAPZ-Management:/home/bmadministrator/.kube# kubectl --kubeconfig=config logs csi-smb-node-win-6npfj smb -n kube-system
I0606 13:04:00.686365 4260 main.go:90] set up prometheus server on [::]:29645
I0606 13:04:00.928714 4260 smb.go:80]
DRIVER INFORMATION:

Build Date: "2022-05-30T22:51:41Z"
Compiler: gc
Driver Name: smb.csi.k8s.io
Driver Version: v1.8.0
Git Commit: ""
Go Version: go1.17.3
Platform: windows/amd64

Streaming logs below:
I0606 13:04:00.928714 4260 safe_mounter_windows.go:311] using CSIProxyMounterV1, API Versions filesystem: v1, SMB: v1
I0606 13:04:00.928714 4260 driver.go:93] Enabling controller service capability: CREATE_DELETE_VOLUME
I0606 13:04:00.928714 4260 driver.go:93] Enabling controller service capability: SINGLE_NODE_MULTI_WRITER
I0606 13:04:00.928714 4260 driver.go:112] Enabling volume access mode: SINGLE_NODE_WRITER
I0606 13:04:00.928714 4260 driver.go:112] Enabling volume access mode: SINGLE_NODE_READER_ONLY
I0606 13:04:00.928714 4260 driver.go:112] Enabling volume access mode: SINGLE_NODE_SINGLE_WRITER
I0606 13:04:00.928714 4260 driver.go:112] Enabling volume access mode: SINGLE_NODE_MULTI_WRITER
I0606 13:04:00.928714 4260 driver.go:112] Enabling volume access mode: MULTI_NODE_READER_ONLY
I0606 13:04:00.928714 4260 driver.go:112] Enabling volume access mode: MULTI_NODE_SINGLE_WRITER
I0606 13:04:00.928714 4260 driver.go:112] Enabling volume access mode: MULTI_NODE_MULTI_WRITER
I0606 13:04:00.928714 4260 driver.go:103] Enabling node service capability: STAGE_UNSTAGE_VOLUME
I0606 13:04:00.928714 4260 driver.go:103] Enabling node service capability: SINGLE_NODE_MULTI_WRITER
I0606 13:04:00.928714 4260 driver.go:103] Enabling node service capability: VOLUME_MOUNT_GROUP
I0606 13:04:00.928714 4260 server.go:118] Listening for connections on address: &net.UnixAddr{Name:"C:\\csi\\csi.sock", Net:"unix"}
I0606 13:05:12.951014 4260 utils.go:76] GRPC call: /csi.v1.Node/NodeStageVolume
I0606 13:05:12.951552 4260 utils.go:77] GRPC request: {"secrets":"stripped","staging_target_path":"\var\lib\kubelet\plugins\kubernetes.io\csi\pv\pvc-399de223-2d51-44cc-85dd-558e5ac42ca7\globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["dir_mode=0777","file_mode=0777","uid=1001","gid=1001"]}},"access_mode":{"mode":5}},"volume_context":{"source":"//20.236.225.104/share","storage.kubernetes.io/csiProvisionerIdentity":"1654175169191-8081-smb.csi.k8s.io","subdir":"pvc-399de223-2d51-44cc-85dd-558e5ac42ca7"},"volume_id":"20.236.225.104/share#pvc-399de223-2d51-44cc-85dd-558e5ac42ca7#"}
I0606 13:05:12.956007 4260 nodeserver.go:194] NodeStageVolume: targetPath(\var\lib\kubelet\plugins\kubernetes.io\csi\pv\pvc-399de223-2d51-44cc-85dd-558e5ac42ca7\globalmount) volumeID(20.236.225.104/share#pvc-399de223-2d51-44cc-85dd-558e5ac42ca7#) context(map[source://20.236.225.104/share storage.kubernetes.io/csiProvisionerIdentity:1654175169191-8081-smb.csi.k8s.io subdir:pvc-399de223-2d51-44cc-85dd-558e5ac42ca7]) mountflags([dir_mode=0777 file_mode=0777 uid=1001 gid=1001]) mountOptions([AZURE\windows])
I0606 13:05:12.956007 4260 safe_mounter_windows.go:181] IsLikelyNotMountPoint: \var\lib\kubelet\plugins\kubernetes.io\csi\pv\pvc-399de223-2d51-44cc-85dd-558e5ac42ca7\globalmount
I0606 13:05:12.956007 4260 safe_mounter_windows.go:238] Exists path: \var\lib\kubelet\plugins\kubernetes.io\csi\pv\pvc-399de223-2d51-44cc-85dd-558e5ac42ca7\globalmount
I0606 13:05:12.957971 4260 safe_mounter_windows.go:238] Exists path: \var\lib\kubelet\plugins\kubernetes.io\csi\pv\pvc-399de223-2d51-44cc-85dd-558e5ac42ca7\globalmount
I0606 13:05:12.958539 4260 smb_common_windows.go:74] Removing path: \var\lib\kubelet\plugins\kubernetes.io\csi\pv\pvc-399de223-2d51-44cc-85dd-558e5ac42ca7\globalmount
I0606 13:05:12.958539 4260 safe_mounter_windows.go:151] Remove directory: \var\lib\kubelet\plugins\kubernetes.io\csi\pv\pvc-399de223-2d51-44cc-85dd-558e5ac42ca7\globalmount
I0606 13:05:12.959180 4260 safe_mounter_windows.go:71] SMBMount: remote path: //20.236.225.104/share/pvc-399de223-2d51-44cc-85dd-558e5ac42ca7 local path: \var\lib\kubelet\plugins\kubernetes.io\csi\pv\pvc-399de223-2d51-44cc-85dd-558e5ac42ca7\globalmount
I0606 13:05:12.959180 4260 safe_mounter_windows.go:238] Exists path: \var\lib\kubelet\plugins\kubernetes.io\csi\pv\pvc-399de223-2d51-44cc-85dd-558e5ac42ca7
I0606 13:05:12.959882 4260 safe_mounter_windows.go:111] begin to mount \20.236.225.104\share\pvc-399de223-2d51-44cc-85dd-558e5ac42ca7 on c:\var\lib\kubelet\plugins\kubernetes.io\csi\pv\pvc-399de223-2d51-44cc-85dd-558e5ac42ca7\globalmount
E0606 13:05:35.946864 4260 utils.go:81] GRPC error: rpc error: code = Internal desc = volume(20.236.225.104/share#pvc-399de223-2d51-44cc-85dd-558e5ac42ca7#) mount "//20.236.225.104/share/pvc-399de223-2d51-44cc-85dd-558e5ac42ca7" on "\var\lib\kubelet\plugins\kubernetes.io\csi\pv\pvc-399de223-2d51-44cc-85dd-558e5ac42ca7\globalmount" failed with smb mapping failed with error: rpc error: code = Unknown desc = NewSmbGlobalMapping failed. output: "New-SmbGlobalMapping : The network path was not found. \r\nAt line:1 char:190\r\n+ ... ser, $PWord;New-SmbGlobalMapping -RemotePath $Env:smbremotepath -Cred ...\r\n+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\r\n + CategoryInfo : NotSpecified: (MSFT_SmbGlobalMapping:ROOT/Microsoft/...mbGlobalMapping) [New-SmbGlobalMa \r\n pping], CimException\r\n + FullyQualifiedErrorId : Windows System Error 53,New-SmbGlobalMapping\r\n \r\n", err: exit status 1

The describe pod output is below.

Name: csi-smb-node-win-6npfj
Namespace: kube-system
Priority: 2000001000
Priority Class Name: system-node-critical
Node: win-p-win000000/10.1.0.6
Start Time: Wed, 01 Jun 2022 09:46:40 -0400
Labels: app=csi-smb-node-win
controller-revision-hash=6664874694
pod-template-generation=1
Annotations: cni.projectcalico.org/containerID: 66ff69a9ec4f0c6ad0849520893c44f5ab223e661e66a8fb2218f86958607a26
cni.projectcalico.org/podIP: 192.168.152.167/32
cni.projectcalico.org/podIPs: 192.168.152.167/32
Status: Running
IP: 192.168.152.167
IPs:
IP: 192.168.152.167
Controlled By: DaemonSet/csi-smb-node-win
Containers:
liveness-probe:
Container ID: containerd://f28c669e6644b2b8690da6ef686dc746d5d217fbc9adbf85e07833f4892ba3f9
Image: registry.k8s.io/sig-storage/livenessprobe:v2.6.0
Image ID: registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932
Port:
Host Port:
Args:
--csi-address=$(CSI_ENDPOINT)
--probe-timeout=3s
--health-port=29643
--v=2
State: Running
Started: Mon, 06 Jun 2022 08:57:50 -0400
Last State: Terminated
Reason: Unknown
Exit Code: 255
Started: Mon, 06 Jun 2022 06:41:53 -0400
Finished: Mon, 06 Jun 2022 08:57:12 -0400
Ready: True
Restart Count: 6
Limits:
memory: 100Mi
Requests:
cpu: 10m
memory: 40Mi
Environment:
CSI_ENDPOINT: unix://C:\csi\csi.sock
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9c8j6 (ro)
C:\csi from plugin-dir (rw)
node-driver-registrar:
Container ID: containerd://9736fbebe4c68a4c600d0c8b2841bff93bab99fadc9cb310cd3743cb4dbabfb5
Image: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.0
Image ID: registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:4fd21f36075b44d1a423dfb262ad79202ce54e95f5cbc4622a6c1c38ab287ad6
Port:
Host Port:
Args:
--v=2
--csi-address=$(CSI_ENDPOINT)
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)
State: Running
Started: Mon, 06 Jun 2022 08:57:59 -0400
Last State: Terminated
Reason: Unknown
Exit Code: 255
Started: Mon, 06 Jun 2022 06:42:02 -0400
Finished: Mon, 06 Jun 2022 08:57:12 -0400
Ready: True
Restart Count: 6
Limits:
memory: 100Mi
Requests:
cpu: 10m
memory: 40Mi
Liveness: exec [/csi-node-driver-registrar.exe --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH) --mode=kubelet-registration-probe] delay=60s timeout=30s period=10s #success=1 #failure=3
Environment:
CSI_ENDPOINT: unix://C:\csi\csi.sock
DRIVER_REG_SOCK_PATH: C:\var\lib\kubelet\plugins\smb.csi.k8s.io\csi.sock
KUBE_NODE_NAME: (v1:spec.nodeName)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9c8j6 (ro)
C:\csi from plugin-dir (rw)
C:\registration from registration-dir (rw)
C:\var\lib\kubelet from kubelet-dir (rw)
smb:
Container ID: containerd://5663fe1f77b367ffd59a62fdffa5077541c6c16c1f3df0054502ba0bc831582e
Image: gcr.io/k8s-staging-sig-storage/smbplugin:canary
Image ID: gcr.io/k8s-staging-sig-storage/smbplugin@sha256:842ca9b262e25a1a1915cbf5c0a45c1e27b52753c0f461e30a80f15030ee7194
Port: 29643/TCP
Host Port: 0/TCP
Args:
--v=5
--endpoint=$(CSI_ENDPOINT)
--nodeid=$(KUBE_NODE_NAME)
--metrics-address=0.0.0.0:29645
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: -1073741510
Started: Mon, 06 Jun 2022 09:13:00 -0400
Finished: Mon, 06 Jun 2022 09:15:58 -0400
Ready: False
Restart Count: 110
Limits:
memory: 200Mi
Requests:
cpu: 10m
memory: 40Mi
Liveness: http-get http://:healthz/healthz delay=30s timeout=10s period=30s #success=1 #failure=5
Environment:
CSI_ENDPOINT: unix://C:\csi\csi.sock
KUBE_NODE_NAME: (v1:spec.nodeName)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9c8j6 (ro)
C:\csi from plugin-dir (rw)
C:\var\lib\kubelet from kubelet-dir (rw)
\\.\pipe\csi-proxy-filesystem-v1 from csi-proxy-fs-pipe-v1 (rw)
\\.\pipe\csi-proxy-filesystem-v1beta1 from csi-proxy-fs-pipe-v1beta1 (rw)
\\.\pipe\csi-proxy-smb-v1 from csi-proxy-smb-pipe-v1 (rw)
\\.\pipe\csi-proxy-smb-v1beta1 from csi-proxy-smb-pipe-v1beta1 (rw)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
csi-proxy-fs-pipe-v1:
Type: HostPath (bare host directory volume)
Path: \\.\pipe\csi-proxy-filesystem-v1
HostPathType:
csi-proxy-smb-pipe-v1:
Type: HostPath (bare host directory volume)
Path: \\.\pipe\csi-proxy-smb-v1
HostPathType:
csi-proxy-fs-pipe-v1beta1:
Type: HostPath (bare host directory volume)
Path: \\.\pipe\csi-proxy-filesystem-v1beta1
HostPathType:
csi-proxy-smb-pipe-v1beta1:
Type: HostPath (bare host directory volume)
Path: \\.\pipe\csi-proxy-smb-v1beta1
HostPathType:
registration-dir:
Type: HostPath (bare host directory volume)
Path: C:\var\lib\kubelet\plugins_registry
HostPathType: Directory
kubelet-dir:
Type: HostPath (bare host directory volume)
Path: C:\var\lib\kubelet
HostPathType: Directory
plugin-dir:
Type: HostPath (bare host directory volume)
Path: C:\var\lib\kubelet\plugins\smb.csi.k8s.io
HostPathType: DirectoryOrCreate
kube-api-access-9c8j6:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional:
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=windows
Tolerations: node.kubernetes.io/disk-pressure:NoSchedule op=Exists
node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists
node.kubernetes.io/os:NoSchedule op=Exists
node.kubernetes.io/pid-pressure:NoSchedule op=Exists
node.kubernetes.io/unreachable:NoExecute op=Exists
node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
Type Reason Age From Message


Normal Started 3d20h (x19 over 3d22h) kubelet Started container smb
Warning BackOff 3d20h (x228 over 3d21h) kubelet Back-off restarting failed container
Warning Unhealthy 3d20h (x117 over 3d22h) kubelet Liveness probe failed: Get "http://192.168.152.137:29643/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Warning NodeNotReady 3d17h node-controller Node is not ready
Warning FailedMount 3d16h (x2 over 3d16h) kubelet MountVolume.SetUp failed for volume "kube-api-access-9c8j6" : object "kube-system"/"kube-root-ca.crt" not registered
Normal SandboxChanged 3d16h kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulled 3d16h kubelet Container image "registry.k8s.io/sig-storage/livenessprobe:v2.6.0" already present on machine
Normal Created 3d16h kubelet Created container liveness-probe
Normal Started 3d16h kubelet Started container liveness-probe
Normal Pulled 3d16h kubelet Container image "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.0" already present on machine
Normal Created 3d16h kubelet Created container node-driver-registrar
Normal Started 3d16h kubelet Started container node-driver-registrar
Normal Pulled 3d16h (x2 over 3d16h) kubelet Container image "gcr.io/k8s-staging-sig-storage/smbplugin:canary" already present on machine
Normal Created 3d16h (x2 over 3d16h) kubelet Created container smb
Normal Killing 3d16h kubelet Container smb failed liveness probe, will be restarted
Normal Started 3d16h (x2 over 3d16h) kubelet Started container smb
Warning Unhealthy 3d16h (x24 over 3d16h) kubelet Liveness probe failed: Get "http://192.168.152.150:29643/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Warning BackOff 3d16h (x8 over 3d16h) kubelet Back-off restarting failed container
Warning FailedMount 3h1m kubelet MountVolume.SetUp failed for volume "kube-api-access-9c8j6" : object "kube-system"/"kube-root-ca.crt" not registered
Normal SandboxChanged 3h (x2 over 3h1m) kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulled 3h kubelet Container image "registry.k8s.io/sig-storage/livenessprobe:v2.6.0" already present on machine
Normal Created 3h kubelet Created container liveness-probe
Normal Pulled 3h kubelet Container image "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.0" already present on machine
Normal Started 3h kubelet Started container liveness-probe
Normal Created 3h kubelet Created container node-driver-registrar
Normal Started 3h kubelet Started container node-driver-registrar
Normal Pulled 177m (x2 over 3h) kubelet Container image "gcr.io/k8s-staging-sig-storage/smbplugin:canary" already present on machine
Normal Created 177m (x2 over 3h) kubelet Created container smb
Normal Killing 177m kubelet Container smb failed liveness probe, will be restarted
Normal Started 177m (x2 over 3h) kubelet Started container smb
Warning Unhealthy 165m (x24 over 179m) kubelet Liveness probe failed: Get "http://192.168.152.152:29643/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Warning BackOff 160m (x9 over 162m) kubelet Back-off restarting failed container
Normal SandboxChanged 154m kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulled 154m kubelet Container image "registry.k8s.io/sig-storage/livenessprobe:v2.6.0" already present on machine
Normal Created 154m kubelet Created container liveness-probe
Normal Started 154m kubelet Started container liveness-probe
Normal Pulled 154m kubelet Container image "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.0" already present on machine
Normal Created 154m kubelet Created container node-driver-registrar
Normal Started 154m kubelet Started container node-driver-registrar
Normal Pulled 154m kubelet Container image "gcr.io/k8s-staging-sig-storage/smbplugin:canary" already present on machine
Normal Created 153m kubelet Created container smb
Normal Started 153m kubelet Started container smb
Warning Unhealthy 153m kubelet Liveness probe failed: Get "http://192.168.152.158:29643/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Warning FailedMount 18m kubelet MountVolume.SetUp failed for volume "kube-api-access-9c8j6" : object "kube-system"/"kube-root-ca.crt" not registered
Normal SandboxChanged 18m kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulled 18m kubelet Container image "registry.k8s.io/sig-storage/livenessprobe:v2.6.0" already present on machine
Normal Created 18m kubelet Created container liveness-probe
Normal Started 18m kubelet Started container liveness-probe
Normal Pulled 18m kubelet Container image "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.0" already present on machine
Normal Created 18m kubelet Created container node-driver-registrar
Normal Started 18m kubelet Started container node-driver-registrar
Normal Pulled 15m (x2 over 18m) kubelet Container image "gcr.io/k8s-staging-sig-storage/smbplugin:canary" already present on machine
Normal Created 15m (x2 over 18m) kubelet Created container smb
Normal Started 15m (x2 over 17m) kubelet Started container smb
Normal Killing 12m (x2 over 15m) kubelet Container smb failed liveness probe, will be restarted
Warning Unhealthy 3m36s (x24 over 17m) kubelet Liveness probe failed: Get "http://192.168.152.167:29643/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
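
Windows System Error 53 ("The network path was not found") in the smb container log above means the node could not reach //20.236.225.104/share at all, which fits the suspicion that this IP belongs to a node that has since left the scale set. A quick sanity check from the management box could be the following sketch (assumes nc is available there; the custom-columns key relies on the driver's source volume attribute shown in the log):

# List any leftover SMB PVs and the server/share they point at:
kubectl get pv -o custom-columns=NAME:.metadata.name,DRIVER:.spec.csi.driver,SOURCE:.spec.csi.volumeAttributes.source | grep smb.csi.k8s.io

# Check whether the SMB endpoint from the failing mount is still reachable on port 445:
nc -vz -w 5 20.236.225.104 445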

@himanshuz2
Author

himanshuz2 commented Jun 6, 2022

I can also more or less reproduce this in a local MicroK8s cluster with a Windows node running Calico.
kubectl logs csi-smb-node-win-q4cwq node-driver-registrar -n kube-system

Error from server: Get "https://win-7ivb3hcarnh:10250/containerLogs/kube-system/csi-smb-node-win-q4cwq/node-driver-registrar": dial tcp: lookup win-7ivb3hcarnh: Temporary failure in name resolution
Error from server: Get "https://win-7ivb3hcarnh:10250/containerLogs/kube-system/csi-smb-node-win-q4cwq/smb": dial tcp: lookup win-7ivb3hcarnh: Temporary failure in name resolution
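
The two errors above come from the log fetch itself: whatever dials the kubelet (the API server in this setup) cannot resolve the Windows node's hostname. A possible workaround sketch, assuming the lookup happens on the MicroK8s control-plane host, is to map the node name to its InternalIP there:

# Find the Windows node's InternalIP:
kubectl get node win-7ivb3hcarnh -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}'

# Add a hosts entry on the control-plane host so the name resolves (replace <internal-ip>):
echo "<internal-ip> win-7ivb3hcarnh" | sudo tee -a /etc/hosts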

@himanshuz2
Author

I still have the original issue on the Azure cluster with the Windows node pool that was created with CAPZ.

@andyzhangx
Member

I see this error; could you add automountServiceAccountToken: false to the pod spec by running kubectl edit ds -n kube-system csi-smb-node-win?

Warning FailedMount 3h1m kubelet MountVolume.SetUp failed for volume "kube-api-access-9c8j6" : object "kube-system"/"kube-root-ca.crt" not registered
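
For reference, the same change can be applied non-interactively with kubectl patch; this is a sketch equivalent to the kubectl edit suggested above:

# Set automountServiceAccountToken: false on the Windows DaemonSet's pod template:
kubectl patch ds csi-smb-node-win -n kube-system \
  -p '{"spec":{"template":{"spec":{"automountServiceAccountToken":false}}}}'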

@himanshuz2
Author

Thank you, I will check and update after 2 weeks.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 5, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 5, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot closed this as not planned Won't fix, can't repro, duplicate, stale Nov 4, 2022