This repository has been archived by the owner on Nov 3, 2023. It is now read-only.

k3s support #107

Open
smerschjohann opened this issue Oct 16, 2021 · 5 comments

Comments

@smerschjohann

Describe the problem/challenge you have
Currently, k3s is not supported as a Kubernetes cluster because it deploys containerd at a different location, and not all of the relevant settings can be overridden at creation time.

Description of the solution you'd like
Either detect a k3s instance and deploy the buildkit pod/deployment with the correct settings, or allow enough customization that it can be made to work on k3s.

A working deployment looks like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: buildkit
[...]
        volumeMounts:
        - mountPath: /etc/buildkit/
          name: buildkitd-config
        - mountPath: /run/containerd/containerd.sock
          name: containerd-sock
        - mountPath: /var/lib/buildkit/buildkit
          mountPropagation: Bidirectional
          name: var-lib-buildkit
        - mountPath: /var/lib/containerd
          mountPropagation: Bidirectional
          name: var-lib-containerd
        - mountPath: /run/containerd
          mountPropagation: Bidirectional
          name: run-containerd
        - mountPath: /var/log
          mountPropagation: Bidirectional
          name: var-log
        - mountPath: /tmp
          mountPropagation: Bidirectional
          name: tmp
        - mountPath: /var/lib/rancher
          mountPropagation: Bidirectional
          name: rancher
[...]
      volumes:
      - configMap:
          defaultMode: 420
          name: buildkit
        name: buildkitd-config
      - hostPath:
          path: /run/k3s/containerd/containerd.sock
          type: Socket
        name: containerd-sock
      - hostPath:
          path: /var/lib/buildkit/buildkit
          type: DirectoryOrCreate
        name: var-lib-buildkit
      - hostPath:
          path: /var/lib/rancher/k3s/agent/containerd
          type: Directory
        name: var-lib-containerd
      - hostPath:
          path: /run/containerd
          type: Directory
        name: run-containerd
      - hostPath:
          path: /var/log
          type: Directory
        name: var-log
      - hostPath:
          path: /tmp
          type: Directory
        name: tmp
      - hostPath:
          path: /var/lib/rancher
          type: Directory
        name: rancher

This means (a patch sketch covering these changes follows below):

  • an additional hostPath mount must be added (/var/lib/rancher),
  • the var-lib-containerd hostPath must be set to /var/lib/rancher/k3s/agent/containerd, and
  • the containerd-sock hostPath must be set to /run/k3s/containerd/containerd.sock.
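Until the CLI exposes these settings, one possible workaround is to patch the Deployment after it has been created. The following strategic-merge patch is only a sketch: the Deployment and volume names are taken from the manifest above, but the container name buildkitd is an assumption and may need adjusting for your installation.

# k3s-patch.yaml -- sketch of a strategic-merge patch carrying the k3s-specific paths
apiVersion: apps/v1
kind: Deployment
metadata:
  name: buildkit
spec:
  template:
    spec:
      containers:
      - name: buildkitd                 # assumption: check the actual container name
        volumeMounts:
        - mountPath: /var/lib/rancher   # extra mount required on k3s
          mountPropagation: Bidirectional
          name: rancher
      volumes:
      - name: containerd-sock           # k3s socket location
        hostPath:
          path: /run/k3s/containerd/containerd.sock
          type: Socket
      - name: var-lib-containerd        # k3s containerd state directory
        hostPath:
          path: /var/lib/rancher/k3s/agent/containerd
          type: Directory
      - name: rancher
        hostPath:
          path: /var/lib/rancher
          type: Directory

A patch like this could be applied with kubectl patch deployment buildkit --patch-file k3s-patch.yaml (or used as a kustomize strategic-merge patch); volumes merge by name and volumeMounts by mountPath, so these entries should override the defaults and add the extra rancher mount.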

Design/Architecture Details
It would be enough to change the following (a hypothetical invocation is sketched after this list):

  • enable customization of the var-lib-containerd path in addition to the containerd-sock path, and
  • allow adding extra volumes / volumeMounts.
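To make the request concrete, creation could accept something along these lines. The flag names below are purely hypothetical, invented here to illustrate the shape of the requested customization; check kubectl buildkit create --help for what actually exists.

# Hypothetical flags -- none of these are confirmed to exist in the current CLI
kubectl buildkit create \
  --containerd-sock /run/k3s/containerd/containerd.sock \
  --containerd-root /var/lib/rancher/k3s/agent/containerd \
  --extra-hostpath-mount /var/lib/rancher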

Environment Details:

k3s v1.21.5+k3s2

Vote on this request

This is an invitation to the community to vote on issues. Use the "smiley face" reaction button at the top right of this comment to vote.

  • 👍 "This project will be more useful if this feature were added"
  • 👎 "This feature will not enhance the project in a meaningful way"
@sacesare

k0s too.

@spkane commented May 18, 2022

k3s does not appear to be the core problem here.

colima uses k3s and this can be made to work there without adjusting code. (see: #133)

So I think the issue may be specific to the particular implementation being used.

That being said, making this tool configurable would be a big win and would make it much more usable in general.

@Blackmamba23

It would be nice if k3s were supported; I am getting the following error on a k3s cluster:

FailedMount MountVolume.SetUp failed for volume "containerd-sock" : hostPath type check failed: /run/containerd/containerd.sock is not a socket file
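This error matches the default manifest looking for the socket at /run/containerd/containerd.sock; on k3s the containerd-sock hostPath has to point at the k3s-managed socket instead, as in the deployment shown earlier:

      - hostPath:
          path: /run/k3s/containerd/containerd.sock   # k3s ships its own containerd here
          type: Socket
        name: containerd-sock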

@sdemura commented Oct 13, 2022

For reference, buildkitd itself can be pointed directly at the k3s containerd socket:

./buildkitd --containerd-worker-addr /run/k3s/containerd/containerd.sock

@zcrisler commented May 3, 2023

Just encountered this issue on an RKE2 v1.24.9+rke2r2 cluster. The suggested changes to the buildkit Deployment worked.

Perhaps it would be simpler to work around these types of issues if kubectl buildkit create had a --dry-run option that output the Deployment and ConfigMap resources instead of creating them?
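Until something like --dry-run exists, one way to get at the generated resources is to export them after creation and edit from there. The resource names below ("buildkit" for both the Deployment and the ConfigMap) are taken from the manifest earlier in this issue and may differ in other installations.

# Export the generated resources for inspection/editing (names are assumptions)
kubectl get deployment buildkit -o yaml > buildkit-deployment.yaml
kubectl get configmap buildkit -o yaml > buildkit-configmap.yaml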
