
High CPU usage by gvfs-udisks2-vo caused by microk8s #500

Closed
khteh opened this issue Jun 10, 2019 · 34 comments

@khteh
Contributor

khteh commented Jun 10, 2019

Please run microk8s.inspect and attach the generated tarball to this issue.
inspection-report-20190610_165704.tar.gz

We appreciate your feedback. Thank you for using microk8s.
top shows:

 2293 khteh     20   0  270176  26568   7776 R  56.7   0.1 198:27.51 gvfs-udisks2-vo                                                                                                                               

udisksctl monitor shows it is mainly caused by microk8s. What happened?
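
For anyone trying to confirm the correlation, here is a rough way to watch the monitor's CPU alongside the udisks events (a sketch; pidstat comes from the sysstat package and may need installing):

# watch per-second CPU usage of the volume monitor
pidstat -u -p "$(pgrep -f gvfs-udisks2-volume-monitor)" 1
# in a second terminal, watch the property-change events it reacts to
udisksctl monitor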

@ktsakalozos
Member

Hi @khteh

I do not see this over here. Can you help me reproduce the issue you are seeing? Is it possible for you to share all the manifests you apply and the addons you enable?

I see that some hours ago you deployed a redis cluster. Could that be related to the spike in disk utilization?

@khteh
Contributor Author

khteh commented Jun 10, 2019

Hi @ktsakalozos, these are the add-ons that I have enabled:

$ microk8s.status
microk8s is running
addons:
linkerd: enabled
jaeger: enabled
rbac: disabled
prometheus: enabled
dns: enabled
fluentd: enabled
storage: enabled
gpu: disabled
registry: disabled
dashboard: enabled
ingress: enabled
metrics-server: enabled
istio: enabled

I deleted the statefulsets of the redis cluster and now gvfs-udisks2-vo is at 10% CPU consumption. Is it redis or something else?

@tdickman

I am also seeing this occur in my cluster. Interestingly, I have been using the same set of applications, so this must be caused either by a change in microk8s or a change in Ubuntu.

I'll see if I can get a simplified set of applications - right now I'm running a number of private applications.

@graste

graste commented Jul 1, 2019

Maybe this helps: disabling "istio" led to significantly less CPU usage for gvfs-udisks2… in my case.

@tellisnz-shift

I have this as well. I don't have a stateful set for redis, but I do for things like kafka, zookeeper, and elasticsearch.

udisksctl monitor shows a lot of this:

10:06:29.082: /org/freedesktop/UDisks2/block_devices/loop2: org.freedesktop.UDisks2.Filesystem: Properties Changed
  MountPoints:          /run/containerd/runc/k8s.io/ed49d3cf89a55d4c6501e207ba8aee230e1c379c2c696a7a35db77021f8f30ec/runc.BjKdmt
                        /snap/microk8s/671
10:06:29.107: /org/freedesktop/UDisks2/block_devices/loop2: org.freedesktop.UDisks2.Filesystem: Properties Changed
  MountPoints:          /snap/microk8s/671
10:06:32.095: /org/freedesktop/UDisks2/block_devices/loop2: org.freedesktop.UDisks2.Filesystem: Properties Changed
  MountPoints:          /run/containerd/runc/k8s.io/f456b4f044b2c99883107eba795ba6c396a2134c0d9e8bded53db9dd2bc18823/runc.TJpcMy
                        /snap/microk8s/671
10:06:32.108: /org/freedesktop/UDisks2/block_devices/loop2: org.freedesktop.UDisks2.Filesystem: Properties Changed
  MountPoints:          /snap/microk8s/671
10:06:35.934: /org/freedesktop/UDisks2/block_devices/loop2: org.freedesktop.UDisks2.Filesystem: Properties Changed
  MountPoints:          /run/containerd/runc/k8s.io/ed49d3cf89a55d4c6501e207ba8aee230e1c379c2c696a7a35db77021f8f30ec/runc.kAhFaH
                        /snap/microk8s/671
10:06:35.956: /org/freedesktop/UDisks2/block_devices/loop2: org.freedesktop.UDisks2.Filesystem: Properties Changed
  MountPoints:          /snap/microk8s/671

The loop2 mount is the snap mount for microk8s.

/dev/loop2 /snap/microk8s/671 squashfs ro,nodev,relatime 0 0

Reading up about gvfs-udisks2-volume-monitor, it seems there is an 'x-gvfs-hide' option you can provide when mounting, but I don't know whether it filters the events or just hides the mount in the GNOME UI. Does anyone have ideas on how I could test this by adding it to the snap mount?

Either way, losing half to one CPU core to the seemingly routine file operations the containers are doing, for a monitor that (in my naive understanding) does little more than expose volumes/filesystems to GNOME, is a heavy price to pay.
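
One way to try the x-gvfs-hide idea above (a sketch, using the /snap/microk8s/671 mount point shown earlier; the option may only affect what GNOME displays rather than the events themselves):

# remount only the microk8s snap squashfs with x-gvfs-hide
sudo mount -o remount,x-gvfs-hide /snap/microk8s/671
# then check whether udisks still reports Properties Changed events for loop2
udisksctl monitor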

@ktsakalozos
Member

@tellisnz-shift just ask on the snapcraft forum [1] about the 'x-gvfs-hide' option. Perhaps the good people at snapcraft will be able to add this option when mounting MicroK8s. I see, however, that the same issue is reported for K3s [2], and the investigation over there pointed to outdated GNOME packages. What distribution are you on? It would be great if we had a script that reproduces this behavior.

[1] #500 (comment)
[2] k3s-io/k3s#522

@tellisnz-shift

Thanks for asking on there @ktsakalozos.

This is me currently:

cat /etc/os-release

NAME="Linux Mint"
VERSION="19 (Tara)"
ID=linuxmint
ID_LIKE=ubuntu
PRETTY_NAME="Linux Mint 19"
VERSION_ID="19"
...
VERSION_CODENAME=tara
UBUNTU_CODENAME=bionic

I'm fully up to date as far as I am aware. As for scripting, I will try to find some time over the weekend.

@AdamIsrael
Contributor

I'm seeing the same behavior, on Ubuntu 19.04.

$ microk8s.status
microk8s is running
addons:
linkerd: disabled
jaeger: disabled
rbac: disabled
prometheus: enabled
dns: enabled
fluentd: disabled
storage: enabled
gpu: disabled
registry: disabled
knative: disabled
dashboard: enabled
ingress: disabled
metrics-server: disabled
istio: disabled

A few data points:

  • I don't have redis running, but I do have kafka and zookeeper, among others, using persistent volumes. I removed each application, one by one, observing only marginal decreases in gvfs-udisks2-volume-monitor CPU usage.

  • It still had spikes of high CPU usage even after I'd removed all applications. Disabling the storage plugin resolved it.

  • I then re-enabled the storage plugin. There was a brief spike in CPU as the initial storage was created, and then it remained idle.

It makes sense that it's related to the storage plugin. The fact that I saw high CPU usage after removing my services, and saw it return to low CPU usage after disabling and re-enabling the plugin, makes me think something in the storage plugin might be at fault.
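
For anyone who wants to repeat this diagnostic, a minimal sketch (assuming the 2019-era microk8s.* snap aliases; newer releases use "microk8s disable"/"microk8s enable", and note that disabling the storage add-on can affect existing persistent volumes):

# cycle the storage add-on, then watch whether the monitor settles down
microk8s.disable storage
microk8s.enable storage
top -p "$(pgrep -f gvfs-udisks2-volume-monitor)"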

@sourcecodes2

I'm seeing the same, Ubuntu 18.04 LTS.

$ microk8s.status
microk8s is running
addons:
knative: disabled
jaeger: disabled
fluentd: disabled
gpu: disabled
cilium: disabled
storage: enabled
registry: disabled
rbac: disabled
ingress: enabled
dns: enabled
metrics-server: disabled
linkerd: disabled
prometheus: disabled
istio: disabled
dashboard: enabled

This issue locks up my high-spec development machine for a few minutes. I/O becomes saturated. This is with a very basic cluster deployment with no "traffic".

Interestingly, this only happens if I change networks (i.e. connect to a Wi-Fi network) or resume from suspend. (Resuming from suspend may be triggering a "network change".)

Disabling and re-enabling the storage plugin appears to fix it for a while, as does completely purging microk8s and reinstalling it.

@Multiply

We're also seeing the same issue with Ubuntu.
The problem is less severe on computers with more cores, it seems.

@mariusstaicu

On an older 4-core i7 CPU the freezes are almost unbearable.

@tarun0

tarun0 commented Feb 17, 2020

I faced a similar issue after leaving the system idle. It froze almost every Monday. I switched to Minikube and it's better now.

@jeanlucmongrain

I experience the same, and there seems to be no way to disable it.

@knkski
Contributor

knkski commented Feb 19, 2020

I'm getting this issue when deploying Kubeflow, which has lots of pods that require storage. Since I'm not running the cluster full-time, I was able to work around this by running systemctl stop --user gvfs-udisks2-volume-monitor to disable the gvfs-udisks2-volume-monitor process.

Also, I tried adding the x-gvfs-hide option to all of the mounts, which didn't seem to help. udisksctl monitor showed very few events (unlike before adding the option), but gvfs-udisks2-volume-monitor still ate up CPU. It's possible that's due to me remounting with the flag instead of mounting with the flag. Here's what I ran to add the flag:

mount | grep /var/snap/microk8s | cut -d" " -f3 | xargs -I{} bash -c "echo {}; sudo mount -o remount,x-gvfs-hide {}"
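
One way to check whether the remount actually applied the option (a sketch; findmnt is part of util-linux):

# list mounts that now carry the x-gvfs-hide userspace option
findmnt -rn -o TARGET,OPTIONS | grep x-gvfs-hide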

@cameronbraid

cameronbraid commented Jun 11, 2020

FYI, I have this on an Ubuntu 20.04 system with Kubernetes installed using kubespray.

I did use microk8s at one point, but had already stopped it.

Interestingly, if I run nautilus on this machine, it freezes. The same happens with all browser file-save dialogs.

Running

systemctl stop --user gvfs-udisks2-volume-monitor

fixes both the 100% CPU issue and stops nautilus from freezing.

@Richard87
Contributor

Same here on Fedora 33... gvfs-udisks2-volume-monitor constantly uses between 50 and 75% CPU, and gsd-housekeeping is constantly between 25 and 50%.

Stopping the monitor stopped its CPU usage, but I have no idea what side effects that has.

@lhotari
Contributor

lhotari commented Apr 1, 2021

This seems to help in reducing the CPU usage of gvfs-udisks2-volume-monitor:

sudo tee /etc/udev/rules.d/90-loopback.rules <<EOF
# hide loopback devices from udisks
SUBSYSTEM=="block", DEVPATH=="/devices/virtual/block/loop*", ENV{UDISKS_PRESENTATION_HIDE}="1", ENV{UDISKS_IGNORE}="1"
EOF

For me, gvfs-udisks2-volume-monitor was constantly spiking to over 10% CPU consumption. After making the above change and rebooting, the spikes are at 1-2% CPU.

This solution was inspired by https://github.com/moby/moby/blob/b96a0909f0ebc683de817665ff090d57ced6f981/contrib/udev/80-docker.rules#L3
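
If a reboot is inconvenient, reloading the udev rules and re-triggering the block subsystem may be enough to apply the rule (a sketch; behavior can vary by distribution):

sudo udevadm control --reload-rules
sudo udevadm trigger --subsystem-match=block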

@lhotari
Contributor

lhotari commented Apr 1, 2021

It seems that the correct fix for this issue would be for snapd to add the x-gvfs-hide mount option to the loopback device mounts.
I created canonical/snapd#10104 against snapd for adding this.

@rdxmb

rdxmb commented Sep 29, 2021

I got here from searching "high cpu udisks2 ubuntu". Although my problem is in another environment, maybe my research can help here (?)

I am running Kubernetes installed with kubeadm. I've just had problems with high CPU caused by the package udisks2, which was installed as a recommended dependency of https://packages.ubuntu.com/focal-updates/fwupd when running apt full-upgrade on Ubuntu 20.04. It installed a daemon called udisks2, which cost me a quarter of my CPU. Uninstalling udisks2 solved my problem.

(Please feel free to hide this comment if it is not related or not helpful to this issue.)

@irhawks

irhawks commented Nov 22, 2021

I've met the same problem described by @Richard87. However, stopping the monitor with systemctl stop --user gvfs-udisks2-volume-monitor only shuts down the gvfs-udisks2-volume-monitor process started by the current login user; a process with the same name but owned by gdm still takes up about 70% CPU, and gsd-housekeeping is constantly between 25 and 50%.

   2005 gdm       20   0  327416  13312  10060 S  58.6   0.0  10:49.31 /usr/libexec/gvfs-udisks2-volume-monitor                                                
   1776 root      20   0 9812040 207184  65040 S  39.1   0.3   9:41.91 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubec+ 
 115413 hawk      20   0  324092  11008   9116 S  37.5   0.0   5:23.32 /usr/libexec/gsd-housekeeping                                                           
   2368 gdm       20   0  323904  10556   8728 S  33.6   0.0   6:16.14 /usr/libexec/gsd-housekeeping                                                           

hawk is the current login user in the above top output, and the solutions above do not work.

Here is my system configuration

$ cat /etc/os-release 
NAME="Ubuntu"
VERSION="20.04.3 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.3 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate:"2020-08-13T16:12:48Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate:"2020-08-13T16:04:18Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}

Furthermore, when I use strace -fvvp 2005 to watch /usr/libexec/gvfs-udisks2-volume-monitor, a large amount of output like this appears:

stat("/etc/fstab", {st_dev=makedev(0x103, 0x2), st_ino=23330818, st_mode=S_IFREG|0664, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=8, st_size=970, st_atime=1633308016 /* 2021-10-04T08:40:16.313422041+0800 */, st_atime_nsec=313422041, st_mtime=1633308016 /* 2021-10-04T08:40:16.213424478+0800 */, st_mtime_nsec=213424478, st_ctime=1633308016 /* 2021-10-04T08:40:16.217424380+0800 */, st_ctime_nsec=217424380}) = 0
openat(AT_FDCWD, "/etc/fstab", O_RDONLY|O_CLOEXEC) = 12
fstat(12, {st_dev=makedev(0x103, 0x2), st_ino=23330818, st_mode=S_IFREG|0664, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=8, st_size=970, st_atime=1633308016 /* 2021-10-04T08:40:16.313422041+0800 */, st_atime_nsec=313422041, st_mtime=1633308016 /* 2021-10-04T08:40:16.213424478+0800 */, st_mtime_nsec=213424478, st_ctime=1633308016 /* 2021-10-04T08:40:16.217424380+0800 */, st_ctime_nsec=217424380}) = 0
read(12, "# /etc/fstab: static file system"..., 4096) = 970
read(12, "", 4096)                      = 0
close(12)                               = 0
poll([{fd=3, events=POLLIN}, {fd=10, events=0}], 2, -1) = 1 ([{fd=10, revents=POLLERR}])
write(3, "\1\0\0\0\0\0\0\0", 8)         = 8
write(3, "\1\0\0\0\0\0\0\0", 8)         = 8
write(3, "\1\0\0\0\0\0\0\0", 8)         = 8
poll([{fd=3, events=POLLIN}, {fd=10, events=0}], 2, 0) = 2 ([{fd=3, revents=POLLIN}, {fd=10, revents=POLLERR}])
read(3, "\3\0\0\0\0\0\0\0", 16)         = 8
openat(AT_FDCWD, "/proc/self/mountinfo", O_RDONLY|O_CLOEXEC) = 12
fstat(12, {st_dev=makedev(0, 0x17), st_ino=82751, st_mode=S_IFREG|0444, st_nlink=1, st_uid=125, st_gid=130, st_blksize=1024, st_blocks=0, st_size=0, st_atime=1637586591 /* 2021-11-22T21:09:51.692000445+0800 */, st_atime_nsec=692000445, st_mtime=1637586591 /* 2021-11-22T21:09:51.692000445+0800 */, st_mtime_nsec=692000445, st_ctime=1637586591 /* 2021-11-22T21:09:51.692000445+0800 */, st_ctime_nsec=692000445}) = 0
read(12, "24 29 0:22 / /sys rw,nosuid,node"..., 1024) = 1024
lstat("/proc", {st_dev=makedev(0, 0x17), st_ino=1, st_mode=S_IFDIR|0555, st_nlink=1834, st_uid=0, st_gid=0, st_blksize=1024, st_blocks=0, st_size=0, st_atime=1637586578 /* 2021-11-22T21:09:38.700000006+0800 */, st_atime_nsec=700000006, st_mtime=1637586578 /* 2021-11-22T21:09:38.700000006+0800 */, st_mtime_nsec=700000006, st_ctime=1637586578 /* 2021-11-22T21:09:38.700000006+0800 */, st_ctime_nsec=700000006}) = 0
lstat("/proc/self", {st_dev=makedev(0, 0x17), st_ino=4026531841, st_mode=S_IFLNK|0777, st_nlink=1, st_uid=0, st_gid=0, st_blksize=1024, st_blocks=0, st_size=0, st_atime=1637586578 /* 2021-11-22T21:09:38.704000006+0800 */, st_atime_nsec=704000006, st_mtime=1637586578 /* 2021-11-22T21:09:38.700000006+0800 */, st_mtime_nsec=700000006, st_ctime=1637586578 /* 2021-11-22T21:09:38.700000006+0800 */, st_ctime_nsec=700000006}) = 0
readlink("/proc/self", "2005", 4095)    = 4
lstat("/proc/2005", {st_dev=makedev(0, 0x17), st_ino=31460, st_mode=S_IFDIR|0555, st_nlink=9, st_uid=125, st_gid=130, st_blksize=1024, st_blocks=0, st_size=0, st_atime=1637586591 /* 2021-11-22T21:09:51.660000444+0800 */, st_atime_nsec=660000444, st_mtime=1637586591 /* 2021-11-22T21:09:51.660000444+0800 */, st_mtime_nsec=660000444, st_ctime=1637586591 /* 2021-11-22T21:09:51.660000444+0800 */, st_ctime_nsec=660000444}) = 0
lstat("/proc/2005/mountinfo", {st_dev=makedev(0, 0x17), st_ino=82751, st_mode=S_IFREG|0444, st_nlink=1, st_uid=125, st_gid=130, st_blksize=1024, st_blocks=0, st_size=0, st_atime=1637586591 /* 2021-11-22T21:09:51.692000445+0800 */, st_atime_nsec=692000445, st_mtime=1637586591 /* 2021-11-22T21:09:51.692000445+0800 */, st_mtime_nsec=692000445, st_ctime=1637586591 /* 2021-11-22T21:09:51.692000445+0800 */, st_ctime_nsec=692000445}) = 0
read(12, "10 - cgroup2 cgroup2 rw\n35 33 0:"..., 1024) = 1024
read(12, "nodev,noexec,relatime shared:21 "..., 1024) = 1024
read(12, "nodev,noexec,relatime shared:31 "..., 1024) = 1024
read(12, "ap-store/547 ro,nodev,relatime s"..., 1024) = 1024
read(12, "dd1 rw,space_cache,subvolid=5,su"..., 1024) = 1024
read(12, "a79c0f7325c362b44befa88499cb6248"..., 1024) = 1024
read(12, "OCXHSWXIZXIHSUL65E4G:/var/lib/do"..., 1024) = 1024
read(12, "y rw,lowerdir=/var/lib/docker/ov"..., 1024) = 1024
read(12, "ar/lib/docker/overlay2/70f8ad8ee"..., 1024) = 1024
read(12, "PKEWVHEN2L7NM42H5U6YNM4:/var/lib"..., 1024) = 1024
read(12, "/kubelet/pods/9a241697-91d5-44f8"..., 1024) = 1024
read(12, "86793b5ba6868a518350b5ffa6c176c0"..., 1024) = 1024
read(12, "HEHUALQ3BHM,upperdir=/var/lib/do"..., 1024) = 1024
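
The trace suggests the monitor re-reads /etc/fstab and /proc/self/mountinfo every time the mount table changes. As a rough check (my assumption, not a definitive diagnosis), this shows how often the mount table actually churns, which is what triggers those rescans:

# print a counted line every time the mount table content changes (sampled once per second)
while true; do md5sum /proc/self/mountinfo; sleep 1; done | uniq -c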

@khteh
Contributor Author

khteh commented Nov 23, 2021

It has been two and a half years since my first post on this issue. I took another look to see whether I still see the same CPU load issue with the latest version of everything.

$ k version
Client Version: version.Info{Major:"1", Minor:"22+", GitVersion:"v1.22.3-3+9ec7c40ec93c73", GitCommit:"9ec7c40ec93c73c2281bdd2e4a75baf6247366a0", GitTreeState:"clean", BuildDate:"2021-11-03T10:20:42Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"22+", GitVersion:"v1.22.3-3+9ec7c40ec93c73", GitCommit:"9ec7c40ec93c73c2281bdd2e4a75baf6247366a0", GitTreeState:"clean", BuildDate:"2021-11-03T10:17:37Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/amd64"}

top shows gvfs-udisks2-vo CPU utilization of less than 20% when I run the "same" load with a redis cluster consisting of 3 masters and 3 slaves. Of course, my laptop has much more horsepower now, with an 11th Gen Intel(R) Core(TM) i9-11950H @ 2.60GHz in "Balanced" power mode.

@lhotari
Contributor

lhotari commented Nov 23, 2021

@khteh does the workaround described in #500 (comment) work for you?

@khteh
Contributor Author

khteh commented Nov 23, 2021

I don't understand that "fix" and therefore haven't tried it.

@amandahla

I tried the fix but it had no effect :-(
gvfs-udisks2-vo CPU utilization is still more than 30-40% from time to time...

@jamiechapmanbrn

Is there any progress here? It's annoying seeing an entire core on my laptop stuck at 100% usage for hours on end; it creates a huge amount of waste heat and kills battery life if I don't notice it.

@thgruiz

thgruiz commented Aug 7, 2023

Any news about this?
Same problem here...

I tried both:

systemctl disable --user gvfs-udisks2-volume-monitor
systemctl stop --user gvfs-udisks2-volume-monitor

and

mount | grep /var/snap/microk8s | cut -d" " -f3 | xargs -I{} bash -c "echo {}; sudo mount -o remount,x-gvfs-hide {}"

with no luck

I'm on Ubuntu 22.04:

cat /etc/lsb-release 
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=22.04
DISTRIB_CODENAME=jammy
DISTRIB_DESCRIPTION="Ubuntu 22.04.3 LTS"

uname -a
Linux ruiz-Latitude-5400 6.2.0-26-generic #26~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Jul 13 16:27:29 UTC 2 x86_64 x86_64 x86_64 GNU/Linux


kubectl version --output=json
{
  "clientVersion": {
    "major": "1",
    "minor": "27",
    "gitVersion": "v1.27.4",
    "gitCommit": "fa3d7990104d7c1f16943a67f11b154b71f6a132",
    "gitTreeState": "clean",
    "buildDate": "2023-07-19T12:20:54Z",
    "goVersion": "go1.20.6",
    "compiler": "gc",
    "platform": "linux/amd64"
  },
  "kustomizeVersion": "v5.0.1",
  "serverVersion": {
    "major": "1",
    "minor": "27",
    "gitVersion": "v1.27.4",
    "gitCommit": "fa3d7990104d7c1f16943a67f11b154b71f6a132",
    "gitTreeState": "clean",
    "buildDate": "2023-07-21T14:01:24Z",
    "goVersion": "go1.20.6",
    "compiler": "gc",
    "platform": "linux/amd64"
  }
}


and installed microk8s with:

sudo snap install microk8s --classic

microk8s enable dns
microk8s enable storage
microk8s enable hostpath-storage
microk8s enable metallb:192.168.1.240/24
microk8s enable ingress

@hemangjoshi37a

I can't believe this issue was created in 2019 and is still not solved. If anyone has found a robust solution, please let me know. Thanks.

@khteh
Contributor Author

khteh commented Jan 15, 2024

I haven't seen this issue in the past year using microk8s for both localhost and production workloads. Closing.

@khteh khteh closed this as completed Jan 15, 2024
@jamiechapmanbrn

This still happens on my system: Ubuntu 22.04 with microk8s 1.21. It happens any time I open a file browser while microk8s is running. Disabling the service doesn't fix the problem, and the fix from the above comment doesn't help either.

@khteh
Contributor Author

khteh commented Jan 17, 2024

The current version is 1.27.8; what is your hardware configuration? One thing, though: I don't have a redis cluster running on my local cluster at the moment, as I did when I initially experienced and posted this issue. That might be the reason I don't see it now.

@jamiechapmanbrn

Yeah, I definitely should be moving to a newer microk8s version. I'll post something once I upgrade.

I'm running on a Lenovo Thinkbook 14 G3 with an AMD Ryzen 7 5700U. Kernel 6.2.0-37-generic.

@lhotari
Contributor

lhotari commented Jan 17, 2024

I contributed an improvement to snapcore as mentioned in #500 (comment). However, there was a util-linux bug, which is fixed by util-linux/util-linux@d85f45d as of util-linux 2.38. Ubuntu 22.04 has util-linux 2.37.

There's now a bug report in gvfs at https://bugs.launchpad.net/ubuntu/+source/gvfs/+bug/2047356 too

@lhotari
Contributor

lhotari commented Jan 17, 2024

I've been using this workaround successfully for years:

sudo tee /etc/udev/rules.d/90-loopback.rules <<EOF
# hide loopback devices from udisks
SUBSYSTEM=="block", DEVPATH=="/devices/virtual/block/loop*", ENV{UDISKS_PRESENTATION_HIDE}="1", ENV{UDISKS_IGNORE}="1"
EOF

Reboot to ensure that it takes effect.

@strazto

strazto commented Jan 23, 2025

I've been using this workaround successfully for years:

sudo tee /etc/udev/rules.d/90-loopback.rules <<EOF
# hide loopback devices from udisks
SUBSYSTEM=="block", DEVPATH=="/devices/virtual/block/loop*", ENV{UDISKS_PRESENTATION_HIDE}="1", ENV{UDISKS_IGNORE}="1"
EOF

Reboot to ensure that it takes effect.

@lhotari 's workaround worked instantly for me

I'm not using microk8s on my cluster (it's a compose cluster), and I think it's probably a snap problem because of the prevalent loopback mounts (I only skimmed this thread), but it still worked.
