I've been running Impermanence for a long time now, but I've realised that the performance I'm getting doesn't match what my hardware is capable of.
For reference, my system flake can be found here; I'm on the nixe system, and the relevant mounts are /var/log, /persist and /home/racci/Games (the last is proxied to impermanence through this).
After creating files with the command `dd if=/dev/zero of=<path>/test.img count=10 bs=1G` on the directly mounted btrfs subvolume, on an impermanence mount created with the NixOS module, and on one created with the home-manager module, I ran a read speed test and saw a roughly 40-60% performance impact on both read and write speeds.
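(For anyone reproducing this: dd from /dev/zero largely measures the page cache unless you force a flush, so a variant of the same test along these lines may give more stable numbers; `<path>` stays a placeholder as above.)

```sh
# Write test: conv=fdatasync makes dd flush data to disk before reporting a rate.
dd if=/dev/zero of=<path>/test.img count=10 bs=1G conv=fdatasync

# Drop the page cache so the read test hits the filesystem, not RAM.
sync && echo 3 | sudo tee /proc/sys/vm/drop_caches

# Read the file back through the same mount.
dd if=<path>/test.img of=/dev/null bs=1M
```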
Is this simply a side effect of the mounting method used for user directories (presumably for permission handling), and is there any way to claw back some of the lost performance with a different mounting method?
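(My working assumption, not verified against the module source: system-level persistence such as /var/log is a plain kernel bind mount, while user directories go through bindfs, a FUSE filesystem, so every read and write crosses into userspace. Roughly the difference between the following two mounts; the paths and the -u/-g ownership mapping are illustrative:)

```sh
# Kernel bind mount: no userspace process in the I/O path.
mount --bind /persist/var/log /var/log

# bindfs: each operation is forwarded through a FUSE userspace daemon.
bindfs -u racci -g users /persist/home/racci/Games /home/racci/Games
```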
Write: (results not captured in this extract)
Read: (results not captured in this extract)
Mounts: (mount listing not captured in this extract)
I've been researching this further and found that the performance hit is caused by FUSE. Using the --direct-io flag with bindfs regains a good chunk of the write performance and a bit of the read performance; the --enable-ioctl flag also seems to recover a very slight amount.
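For reference, a manual remount illustrating those flags (the paths and the -u/-g ownership mapping are example values, not taken from my actual config):

```sh
# Unmount the existing FUSE mount, then remount with the extra flags.
fusermount -u /home/racci/Games
bindfs --direct-io --enable-ioctl -u racci -g users \
  /persist/home/racci/Games /home/racci/Games
```

As I understand it, --direct-io disables kernel page caching on the FUSE side, which is likely why it mostly helps writes; the trade-off is that repeated reads of the same file will no longer be served from cache.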