This repository has been archived by the owner on Nov 3, 2023. It is now read-only.

Docker Hub default prefix not added to short image names making them unusable to run pods #79

Open
glehmann opened this issue Mar 22, 2021 · 2 comments

Comments

@glehmann

What steps did you take and what happened

I'm trying to use buildkit-cli-for-kubectl with kind, but unfortunately the image is not available after the build.
I've used:

kind create cluster
kubectl buildkit create --worker containerd
kubectl build -t test:kube .
kubectl run -it --rm --image test:kube foo

The last command fails because the image is not found on the node, so the kubelet tries to pull it from docker.io.

Also note that I had to force the worker type to containerd in order to have the expected mount points.
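The mismatch can be sketched in a few lines (names are taken from the commands above; the docker.io/library/ expansion is Docker Hub's default-prefix convention, applied by the kubelet/containerd at pull time):

```python
# Sketch of the mismatch: the builder exports the image under the
# literal short name, while the kubelet expands short names to the
# fully-qualified Docker Hub form before asking containerd to pull.
stored_in_node = "test:kube"
requested_by_kubelet = "docker.io/library/" + stored_in_node

# The fully-qualified name is not in the node's image store, so
# containerd falls back to pulling from docker.io, which fails.
print(requested_by_kubelet)
assert stored_in_node != requested_by_kubelet
```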

What did you expect to happen

I expected to get my image available in my kind cluster.

kind is not explicitly listed in https://github.com/vmware-tanzu/buildkit-cli-for-kubectl/blob/main/README.md#works-in-numerous-kubernetes-environments, but I think it's used in your tests, and it's mentioned in https://github.com/vmware-tanzu/buildkit-cli-for-kubectl/blob/main/docs/installing.md#vmware-fusion, so I was quite hopeful it would work :)

Environment Details:

  • kubectl buildkit version (use kubectl buildkit version)
❯ kubectl buildkit version
v0.1.2
  • Kubernetes version (use kubectl version)
❯ kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:28:09Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-21T01:11:42Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
  • Where are you running kubernetes (e.g., bare metal, vSphere Tanzu, Cloud Provider xKS, etc.)
❯ kind version
kind v0.10.0 go1.15.7 linux/amd64
  • Container Runtime and version (e.g. containerd sudo ctr version or dockerd docker version on one of your kubernetes worker nodes)
❯ docker exec -it kind-control-plane ctr version
Client:
  Version:  v1.4.0-106-gce4439a8
  Revision: ce4439a8151f77dc50adb655ab4852ee9c366589
  Go version: go1.13.15

Server:
  Version:  v1.4.0-106-gce4439a8
  Revision: ce4439a8151f77dc50adb655ab4852ee9c366589
  UUID: b18896e9-c396-4b3f-97ca-c681396dc76d

Builder Logs

❯ kubectl logs -l app=buildkit
time="2021-03-22T08:30:01Z" level=warning msg="using host network as the default"
time="2021-03-22T08:30:01Z" level=info msg="found worker \"q7aw1509sdjwnwbazmwhwm88q\", labels=map[org.mobyproject.buildkit.worker.executor:containerd org.mobyproject.buildkit.worker.hostname:buildkit-7674b967b9-hhm6f org.mobyproject.buildkit.worker.snapshotter:overlayfs], platforms=[linux/amd64]"
time="2021-03-22T08:30:01Z" level=info msg="found 1 workers, default=\"q7aw1509sdjwnwbazmwhwm88q\""
time="2021-03-22T08:30:01Z" level=warning msg="currently, only the default worker can be used."
time="2021-03-22T08:30:01Z" level=info msg="running server on /run/buildkit/buildkitd.sock"
❯ kubectl build -t test:kube .
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 65B done
#1 DONE 0.0s

#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.0s

#3 [internal] load metadata for docker.io/library/alpine:latest
#3 DONE 2.0s

#4 [1/2] FROM docker.io/library/alpine@sha256:a75afd8b57e7f34e4dad8d65e2c7b...
#4 resolve docker.io/library/alpine@sha256:a75afd8b57e7f34e4dad8d65e2c7ba2e1975c795ce1ee22fa34f8cf46f96a3be 0.0s done
#4 DONE 0.0s

#5 [2/2] RUN apk add git
#5 0.140 fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/main/x86_64/APKINDEX.tar.gz
#5 1.135 fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/community/x86_64/APKINDEX.tar.gz
#5 2.359 (1/7) Installing ca-certificates (20191127-r5)
#5 2.561 (2/7) Installing brotli-libs (1.0.9-r3)
#5 2.858 (3/7) Installing nghttp2-libs (1.42.0-r1)
#5 2.958 (4/7) Installing libcurl (7.74.0-r1)
#5 3.177 (5/7) Installing expat (2.2.10-r1)
#5 3.285 (6/7) Installing pcre2 (10.36-r0)
#5 3.519 (7/7) Installing git (2.30.2-r0)
#5 5.980 Executing busybox-1.32.1-r3.trigger
#5 5.988 Executing ca-certificates-20191127-r5.trigger
#5 6.060 OK: 19 MiB in 21 packages
#5 DONE 6.2s

#6 exporting to image
#6 exporting layers
#6 exporting layers 1.6s done
#6 exporting manifest sha256:e181c5c0df5cfd7bc8f3cfeca2a1b0c7327b160b35e50396b00687416a682602 done
#6 exporting config sha256:8176f7a97f3cdf4346a9a6faa029deeee91ac388320d499ab7b9efd75a644dc5 done
#6 naming to test:kube done
#6 DONE 1.6s
❯ kubectl describe pod/foo
Name:         foo
Namespace:    default
Priority:     0
Node:         kind-control-plane/172.26.0.2
Start Time:   Mon, 22 Mar 2021 09:41:41 +0100
Labels:       run=foo
Annotations:  <none>
Status:       Pending
IP:           10.244.0.11
IPs:
  IP:  10.244.0.11
Containers:
  foo:
    Container ID:   
    Image:          test:kube
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ErrImagePull
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rjmvd (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  default-token-rjmvd:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-rjmvd
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age   From               Message
  ----     ------     ----  ----               -------
  Normal   Scheduled  8s    default-scheduler  Successfully assigned default/foo to kind-control-plane
  Normal   Pulling    7s    kubelet            Pulling image "test:kube"
  Warning  Failed     4s    kubelet            Failed to pull image "test:kube": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/test:kube": failed to resolve reference "docker.io/library/test:kube": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
  Warning  Failed     4s    kubelet            Error: ErrImagePull
  Normal   BackOff    4s    kubelet            Back-off pulling image "test:kube"
  Warning  Failed     4s    kubelet            Error: ImagePullBackOff
❯ kubectl describe pod/buildkit-7674b967b9-hhm6f 
Name:         buildkit-7674b967b9-hhm6f
Namespace:    default
Priority:     0
Node:         kind-control-plane/172.26.0.2
Start Time:   Mon, 22 Mar 2021 09:30:00 +0100
Labels:       app=buildkit
              pod-template-hash=7674b967b9
              rootless=false
              runtime=containerd
              worker=containerd
Annotations:  <none>
Status:       Running
IP:           10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/buildkit-7674b967b9
Containers:
  buildkitd:
    Container ID:  containerd://39cb639b0370d788d23b20bab716ab94867c9b9cd6f85f4b85e15bf3988682c9
    Image:         moby/buildkit:buildx-stable-1
    Image ID:      docker.io/moby/buildkit@sha256:4a9629b0e1c3e9e8ed1856f42bc77dfd5bb6b09be23909fe92f78baad334cda2
    Port:          <none>
    Host Port:     <none>
    Args:
      --oci-worker=false
      --containerd-worker=true
    State:          Running
      Started:      Mon, 22 Mar 2021 09:30:01 +0100
    Ready:          True
    Restart Count:  0
    Readiness:      exec [buildctl debug workers] delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /etc/buildkit/ from buildkitd-config (rw)
      /run/containerd from run-containerd (rw)
      /run/containerd/containerd.sock from containerd-sock (rw)
      /tmp from tmp (rw)
      /var/lib/buildkit from var-lib-buildkit (rw)
      /var/lib/containerd from var-lib-containerd (rw)
      /var/log from var-log (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rjmvd (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  buildkitd-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      buildkit
    Optional:  false
  containerd-sock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/containerd/containerd.sock
    HostPathType:  Socket
  var-lib-buildkit:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/buildkit
    HostPathType:  DirectoryOrCreate
  var-lib-containerd:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/containerd
    HostPathType:  Directory
  run-containerd:
    Type:          HostPath (bare host directory volume)
    Path:          /run/containerd
    HostPathType:  Directory
  var-log:
    Type:          HostPath (bare host directory volume)
    Path:          /var/log
    HostPathType:  Directory
  tmp:
    Type:          HostPath (bare host directory volume)
    Path:          /tmp
    HostPathType:  Directory
  default-token-rjmvd:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-rjmvd
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  18m   default-scheduler  Successfully assigned default/buildkit-7674b967b9-hhm6f to kind-control-plane
  Normal  Pulled     18m   kubelet            Container image "moby/buildkit:buildx-stable-1" already present on machine
  Normal  Created    18m   kubelet            Created container buildkitd
  Normal  Started    18m   kubelet            Started container buildkitd

Dockerfile

FROM alpine
RUN apk add git

Vote on this request

This is an invitation to the community to vote on issues. Use the "smiley face" reaction at the top right of this comment to vote.

  • 👍 "I would like to see this bug fixed as soon as possible"
  • 👎 "There are more important bugs to focus on right now"
@dhiltgen
Contributor

Thanks for filing the issue.

I tried to reproduce with the latest release (0.1.3), and the images load correctly in my setup.

Docker: 20.10.1
Kind: 0.10.0-alpha
K8s: v1.19.1
kubectl-buildkit: 0.1.3

Let me look over your scenario a little more closely and see if I can find a quirk that might explain the differing behavior.

@dhiltgen
Contributor

dhiltgen commented Mar 30, 2021

It looks like the problem is with the auto-prefixing.

% kubectl build -t test:kube .
...
% docker exec -it kind-control-plane ctr --namespace k8s.io image ls | grep test:kube
test:kube                                                                                       application/vnd.docker.distribution.manifest.v2+json      sha256:f5977248d2d92915066aa2139a29da44410ffa07ea23feb14b971e929aadac2a 748.6 KiB linux/amd64

The "automagic" docker.io/ prefix is not being added to the image name when it is stored, while other code paths in Kubernetes do respect Docker Hub's default prefix pattern.

A quick workaround is to run kubectl build -t docker.io/library/test:kube . if that's the name you want for your image, but we should fix this so it's consistent with the rest of Kubernetes.
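The fix presumably needs to normalize short names at tag/export time the same way Kubernetes does at pull time. A minimal sketch of that normalization rule in Python (illustrative only; the project's real parsing would live in Go via the docker/distribution reference package):

```python
def normalize(ref: str) -> str:
    """Expand a short image reference to its fully-qualified Docker Hub
    form, mimicking (in simplified form) how Kubernetes resolves names."""
    name, sep, tag = ref.rpartition(":")
    if not sep or "/" in tag:            # no tag present (the ":" was a port)
        name, tag = ref, "latest"
    first = name.partition("/")[0]
    # The leading component counts as a registry only if it looks like a host.
    if "." not in first and ":" not in first and first != "localhost":
        name = "docker.io/" + (name if "/" in name else "library/" + name)
    return f"{name}:{tag}"

print(normalize("test:kube"))        # docker.io/library/test:kube
print(normalize("alpine"))           # docker.io/library/alpine:latest
print(normalize("quay.io/foo/bar"))  # quay.io/foo/bar:latest
```

With this applied before the export step, the name stored in containerd's k8s.io namespace would match what the kubelet requests, and the pod would start without a pull.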

@dhiltgen dhiltgen changed the title image not available in the kind node after the build Docker Hub default prefix not added to short image names making them unusable to run pods Mar 30, 2021