Bug: Containerd.io 1.7.24-1 | unix opening TUN device file: operation not permitted #2606
@qdm12 is more or less the only maintainer of this project and works on it in his free time.
Same problem here: Seems to be related to a recent update - I installed containerd.io 1.7.24-1.
Interim fix - I downgraded containerd.io to 1.7.23-1, which resolved the issue.
Or just mount `/dev/net/tun`.
Related to containerd/containerd#11078.
I have another error:
Repro:
when running it in docker-in-docker mode (using sysbox-ce_0.6.4-0.linux). Found this issue, downgraded to 1.7.23-1 - it fixed the issue.
Was this comment meant for this repository? Doesn't seem like it at first glance at least.
If you are using docker compose, adding this to your gluetun service should fix the issue:
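Presumably the compose `devices:` mapping, as also suggested further down the thread (a sketch, not necessarily this commenter's exact snippet):

```yaml
# Sketch: pass the host TUN device through to the gluetun service
devices:
  - /dev/net/tun:/dev/net/tun
```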
Can confirm that this fix #2606 (comment) works.
Thanks for the advice, worked for me too.
Hi, it's not a bug in gluetun, see containerd/containerd#11078 (comment).
Only getting:
You are probably missing the `tun` kernel module. From https://wiki.alpinelinux.org/wiki/Setting_up_a_OpenVPN_server:
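The relevant step from that page is presumably loading the `tun` kernel module; a sketch (check the wiki page itself for the exact instructions):

```sh
# Load the tun module now...
modprobe tun
# ...and load it automatically on boot
echo tun >> /etc/modules
```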
THANKS! Works.
For me, adding this does NOT solve the issue:
Nor did this:
The only fix I found (other than rolling back) was this:
Ok so I had this issue too - thanks for the comments here. This apparently is an intended change in containerd and runc - see this comment for the details - so they closed their ticket as won't fix. I confirmed that downgrading containerd (as suggested) fixed the issue, but then figured out (following the comment) how to fix it on the newer version. If you are using docker compose, like I am, you need to add a line similar to this to your yml file for gluetun:
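A sketch of what that looks like in context, assuming a compose service named `gluetun` (the line in question is the `devices:` mapping suggested earlier in the thread):

```yaml
services:
  gluetun:
    # ...image, cap_add, environment, etc. unchanged...
    devices:
      # pass the host TUN device through to the container
      - /dev/net/tun:/dev/net/tun
```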
Adding that line does the trick for me and this now runs smoothly on the latest 1.7.24 version. Hope that helps others!
you the man, thank you
It does if you correctly assigned NET_ADMIN as per the docs.
I still wonder why mapping the device in is suddenly necessary? The container is already granted `NET_ADMIN`, so it should (like before) be able to handle the rest automatically.
Because of the change made in containerd (actually in the runc dependency 2 years ago, but only recently included in containerd), as per the comments in this issue.
This also appears to affect k3s v1.31.3+k3s1. Downgrading to v1.31.2+k3s1 fixes the issue. containerd/containerd#11078 (comment) handwaves at using a "generic device plugin", but I wasn't able to figure out how to draw the rest of the owl on that one over morning coffee.
That was addressed a few months ago; I even added code to suggest adding `--device /dev/net/tun`. Also, the wiki already contained the `--device /dev/net/tun` fix.

Please, if your issue is NOT resolved by adding --device /dev/net/tun in your configuration, open a separate issue from here. I'll leave this open for a few more days for users to read this comment but it will get closed then.
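For plain `docker run`, the equivalent looks roughly like this (a sketch; only `--device /dev/net/tun` and the `NET_ADMIN` capability are the parts this thread confirms):

```sh
docker run -d \
  --cap-add=NET_ADMIN \
  --device /dev/net/tun \
  qmcgaw/gluetun
```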
Thanks! This explained my unexplainable problem that I'd spent a good few days trying to track down. The downgrade caused other issues but now the cluster is back to normal and gluetun is running!
Glad that helped! Kubernetes does not appear to have an equivalent of the `--device` flag.

I may take another run at upgrading my cluster over the holiday. I'll post another update if I figure something out and don't get lost shaving yaks.
@Sharpie any idea why this wouldn't work on v1.29.11+k3s1?
@kldzj Looks like v1.29.11+k3s1 was released on the same day as v1.31.3+k3s1, so the likely explanation is that it contains the same bump to containerd/runc.
Yup, containerd/containerd#11078 (comment) states that the change of not sharing `/dev/net/tun` with containers by default is intended behavior.
I did some research and testing with K3D and came up with 3 ways to deal with the following error on Kubernetes: `unix opening TUN device file: operation not permitted`.
**TL/DR:** Hold off on upgrading Kubernetes until runc v1.2.4 is in use, or run the Gluetun container with `privileged: true`, or grant it `/dev/net/tun` access via a device plugin.

### Reproduction Case

I used the following Deployment which starts Gluetun as a sidecar container and then boots a netshoot container that retrieves connection info (NOTE: the `proton-wireguard` secret referenced by the env vars must be created separately):

```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: gluetun-example
name: gluetun-example
spec:
replicas: 1
selector:
matchLabels:
app: gluetun-example
template:
metadata:
labels:
app: gluetun-example
spec:
initContainers:
- name: gluetun
image: 'qmcgaw/gluetun:v3.39.1'
restartPolicy: Always
env:
- name: VPN_SERVICE_PROVIDER
value: custom
- name: VPN_TYPE
value: wireguard
- name: WIREGUARD_ADDRESSES
value: '10.2.0.2/32'
- name: VPN_ENDPOINT_PORT
value: '51820'
- name: WIREGUARD_PRIVATE_KEY
valueFrom:
secretKeyRef:
name: proton-wireguard
key: wireguard-privatekey
- name: VPN_ENDPOINT_IP
valueFrom:
secretKeyRef:
name: proton-wireguard
key: wireguard-peer-endpoint
- name: WIREGUARD_PUBLIC_KEY
valueFrom:
secretKeyRef:
name: proton-wireguard
key: wireguard-peer-publickey
securityContext:
capabilities:
add:
- NET_ADMIN
startupProbe:
exec:
command:
- /gluetun-entrypoint
- healthcheck
initialDelaySeconds: 10
timeoutSeconds: 5
periodSeconds: 5
failureThreshold: 3
livenessProbe:
exec:
command:
- /gluetun-entrypoint
- healthcheck
timeoutSeconds: 5
periodSeconds: 5
failureThreshold: 3
containers:
- name: netshoot
image: nicolaka/netshoot
command:
- /bin/sh
- '-c'
- |
while true; do
curl -sS https://am.i.mullvad.net/json | jq
sleep 60
          done
```

When added to a cluster running k3s v1.31.2-k3s1:

```sh
k3d cluster create --image rancher/k3s:v1.31.2-k3s1 dot-2
```

Everything works:
When added to a cluster running k3s v1.31.4-k3s1:
The pod fails to complete initialization due to TUN device permissions:
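One hedged way to observe the failure, reusing the `app=gluetun-example` label from the Deployment above:

```sh
# The pod stays stuck in Init while gluetun crash-loops
kubectl get pods -l app=gluetun-example
# The gluetun container logs should show the TUN permission error
kubectl logs deploy/gluetun-example -c gluetun
```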
### Solutions

#### Don't upgrade Kubernetes until runc v1.2.4 is in use

The runc maintainers have reverted the removal of tun/tap devices from the default device rules. Future Kubernetes releases that use runc v1.2.4 or newer should Just Work As They Used To™.
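To see which containerd build (and thus which bundled runc) your nodes are running, the runtime column of `kubectl get nodes` is a quick check:

```sh
# The CONTAINER-RUNTIME column shows e.g. containerd://1.7.x-k3s1
kubectl get nodes -o wide
```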
#### Run Gluetun in privileged mode

Update Gluetun containers to run in privileged mode (shoutout to @holysoles for adding this to the wiki):

```diff
diff --git a/gluetun-deployment.yaml b/gluetun-deployment.yaml
index c509daa..e1be491 100644
--- a/gluetun-deployment.yaml
+++ b/gluetun-deployment.yaml
@@ -43,6 +43,7 @@ spec:
name: proton-wireguard
key: wireguard-peer-publickey
securityContext:
+ privileged: true
capabilities:
add:
- NET_ADMIN
```

#### Use generic-device-plugin to manage access to /dev/net/tun

Deploy the Generic Device Plugin configured to manage access to `/dev/net/tun`:

```yaml
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: generic-device-plugin
namespace: kube-system
labels:
app.kubernetes.io/name: generic-device-plugin
spec:
selector:
matchLabels:
app.kubernetes.io/name: generic-device-plugin
template:
metadata:
labels:
app.kubernetes.io/name: generic-device-plugin
spec:
priorityClassName: system-node-critical
tolerations:
- operator: "Exists"
effect: "NoExecute"
- operator: "Exists"
effect: "NoSchedule"
containers:
- image: squat/generic-device-plugin
# count: 1024 is arbitrary, but will limit each k8s node
# to only running 1024 containers that use /dev/net/tun
args:
- --device
- |
name: tun
groups:
- count: 1024
paths:
- path: /dev/net/tun
name: generic-device-plugin
resources:
requests:
cpu: 50m
memory: 10Mi
limits:
cpu: 50m
memory: 20Mi
ports:
- containerPort: 8080
name: http
securityContext:
privileged: true
volumeMounts:
- name: device-plugin
mountPath: /var/lib/kubelet/device-plugins
- name: dev
mountPath: /dev
volumes:
- name: device-plugin
hostPath:
path: /var/lib/kubelet/device-plugins
- name: dev
hostPath:
path: /dev
updateStrategy:
    type: RollingUpdate
```

Then, update Gluetun containers to request a `squat.ai/tun` device:

```diff
diff --git a/gluetun-deployment.yaml b/gluetun-deployment.yaml
index c509daa..5826d22 100644
--- a/gluetun-deployment.yaml
+++ b/gluetun-deployment.yaml
@@ -42,6 +42,9 @@ spec:
secretKeyRef:
name: proton-wireguard
key: wireguard-peer-publickey
+ resources:
+ limits:
+ squat.ai/tun: "1"
securityContext:
capabilities:
add:
```
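To confirm the plugin is advertising the device, a hedged check is to look for `squat.ai/tun` in each node's allocatable resources:

```sh
kubectl describe nodes | grep -A 2 'squat.ai/tun'
```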
FYI v3.40.0 is released containing the warning mentioned. I'll leave this open especially for the comment above ⬆️ to be added to the wiki temporarily (btw thanks for the investigation and sharing!)
**Is this urgent?**
Yes

**Host OS**
Ubuntu 24.04

**CPU arch**
x86_64

**VPN service provider**
AirVPN

**What are you using to run the container**
docker-compose

**What is the version of Gluetun**
3.39.1

**What's the problem 🤔**
Hello,

Following an update of `containerd.io` from version `1.7.23-1` to `1.7.24-1`, Gluetun doesn't start anymore. Downgrading to `1.7.23-1` fixes the issue.

Someone opened a bug report for containerd, but I don't know if in the meantime you can circumvent the issue or post a warning somewhere.

containerd/containerd#11078
**Share your logs (at least 10 lines)**

**Share your configuration**
No response