This repository has been archived by the owner on Nov 3, 2023. It is now read-only.

kubectl run ends with ErrImagePull in Azure Kubernetes Cluster (AKS) #99

Open
Shaked opened this issue Aug 14, 2021 · 2 comments
Comments

@Shaked

Shaked commented Aug 14, 2021

What steps did you take and what happened

I ran the following commands:

/usr/local/bin/kubectl buildkit -n agents create citests-daemon --custom-config buildkitd-cm
/usr/local/bin/kubectl build -t service-bus-reporter:test-24633 ./service-bus-reporter/ -f service-bus-reporter/Dockerfile --builder citests-daemon
/usr/local/bin/kubectl run -n agents --restart=Never --rm -i --image service-bus-reporter:test-24633 ci-publish-24633 

It occasionally works, but most of the time it ends with:

ci-publish-24633                  0/1     ErrImagePull        0          2s   
ci-publish-24633                  0/1     ImagePullBackOff    0          15s  
ci-publish-24633                  0/1     ErrImagePull        0          30s  
ci-publish-24633                  0/1     ImagePullBackOff    0          41s  
ci-publish-24633                  0/1     ErrImagePull        0          55s  
ci-publish-24633                  0/1     Terminating         0          60s  

What did you expect to happen

I expect the newly built image to be in AKS and run.

Environment Details:

  • kubectl buildkit version (use kubectl buildkit version): v0.1.3
  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.0", GitCommit:"c2b5237ccd9c0f1d600d3072634ca66cefdf272f", GitTreeState:"clean", BuildDate:"2021-08-04T18:03:20Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"c86ad89b8715ed17fd55b87cbb2888ccc6fa9878", GitTreeState:"clean", BuildDate:"2020-09-25T01:53:27Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.22) and server (1.18) exceeds the supported minor version skew of +/-1
  • Where are you running kubernetes (e.g., bare metal, vSphere Tanzu, Cloud Provider xKS, etc.): AKS (Azure)
  • Container Runtime and version (e.g. containerd sudo ctr version or dockerd docker version on one of your kubernetes worker nodes):
sudo ctr version
Client:
  Version:  1.4.4+azure
  Revision: 05f951a3781f4f2c1911b05e61c160e9c30eaa8e
  Go version: go1.13.15

Server:
  Version:  1.4.4+azure
  Revision: 05f951a3781f4f2c1911b05e61c160e9c30eaa8e
  UUID: 5c6ac07c-7cd8-4b02-84e9-402d66fec664

--- 
docker version
Client:
 Version:           19.03.14+azure
 API version:       1.40
 Go version:        go1.13.15
 Git commit:        fd3371eb7df1adeceff5935cf3ade0576a0f48d5
 Built:             Sat Oct 24 07:44:17 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          19.03.14+azure
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       7d75c1d40d88ddef08653dbd611f41df42bdf087
  Built:            Mon Mar 12 00:00:00 2018
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.4+azure
  GitCommit:        05f951a3781f4f2c1911b05e61c160e9c30eaa8e
 runc:
  Version:          1.0.0-rc92
  GitCommit:        ff819c7e9184c13b7c2607fe6c30ae19403a7aff
 docker-init:
  Version:          0.18.0
  GitCommit:

Builder Logs
[If applicable, an excerpt from kubectl logs -l app=buildkit from around the time you hit the failure may be very helpful]

Dockerfile
[If applicable, please include your Dockerfile or excerpts related to the failure]

Vote on this request

This is an invitation to the community to vote on issues. Use the "smiley face" up to the right of this comment to vote.

  • 👍 "I would like to see this bug fixed as soon as possible"
  • 👎 "There are more important bugs to focus on right now"
@dhiltgen
Contributor

dhiltgen commented Sep 15, 2021

This feels like it might be a duplicate of #79. Try using fully qualified registry names on both your build tag and your run tag and see if that solves this.

If not, can you share what kubectl describe pod XXX shows for these failing pods, and perhaps copy the tail end of the build output? There may be something there that sheds light on what's going wrong.

Another thing to check is the image pull policy on the pods to make sure it's not getting set to always pull.
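To make the suggestions above concrete, here is a minimal pod spec sketch. The registry host myregistry.azurecr.io is a placeholder, not from the original report; the point is that the image name is fully qualified (registry host + repository + tag, matching the tag passed to kubectl build -t) and that imagePullPolicy is not Always:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ci-publish-24633
  namespace: agents
spec:
  restartPolicy: Never
  containers:
    - name: ci-publish
      # Fully qualified image name; must match the -t tag used at build time.
      # "myregistry.azurecr.io" is a hypothetical ACR host.
      image: myregistry.azurecr.io/service-bus-reporter:test-24633
      # IfNotPresent lets the kubelet use a locally present image
      # instead of forcing a pull from the registry on every start.
      imagePullPolicy: IfNotPresent
```

Note that kubectl run defaults imagePullPolicy to Always when the tag is not :latest-less, so checking the rendered pod spec with kubectl get pod -o yaml is worthwhile.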

@madclement

@Shaked The problem could be that the image is getting built on one node while the pod is being scheduled onto another node that doesn't contain the image. I am currently facing the same issue in Amazon EKS: sometimes it works, but most of the time it fails.
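If the node-locality theory holds, one way to test it is to pin the test pod to the node where the build ran. A sketch, with a hypothetical node name (find the builder's node with kubectl get pod -n agents -o wide):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ci-publish-24633
  namespace: agents
spec:
  restartPolicy: Never
  # Hypothetical node name: pin the pod to the node that built the image.
  nodeName: aks-nodepool1-12345678-vmss000000
  containers:
    - name: ci-publish
      image: service-bus-reporter:test-24633
      # Use the locally built image rather than pulling from a registry.
      imagePullPolicy: IfNotPresent
```

If the pod runs reliably when pinned and fails otherwise, that confirms the image only exists on the builder node.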
