hey
I was also put off by Unraid's very closed-source solution and their proprietary FUSE driver, so I never used it. I went through a lot of the tools you have listed, but arrived at a different solution.
I use a simple ZFS SLOG device (https://docs.oracle.com/cd/E36784_01/html/E36845/gnjlj.html) with a plain ZFS mount, along with sshfs/SMB/NFS for remote mounts, as my alternative.
At a theoretical level, ZFS should keep newly written blocks in memory, flush them to my SLOG device (SSD) if memory runs low, and then asynchronously copy those blocks (from whichever of the two sources holds them) to the HDD array. This is all done transparently and removes the need for a cache filesystem and mergerfs.
Given your goal of reduced power consumption and complexity, this seems closer to optimal: it skips many disk operations, since there are fewer block operations overall compared to completely downloading a file and then copying the whole thing.
However, I haven't benchmarked this idea. I was wondering whether you had considered it, tried it, and had any results? It should work transparently in front of SnapRAID.
You can even create a ZFS pool on top of your XFS disks to avoid reformatting: ZFS can use a raw file on an existing filesystem as a vdev. You can build the pool across files on these disks, optionally add a log device backed by a file on your SSD, and test the performance/power without rebooting.
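To make that concrete, here is a rough sketch of the kind of throwaway test I mean, wrapped in Python so it's easy to tweak. The mountpoints, pool name, and sizes are assumptions for illustration, and file-backed vdevs are only sensible for a benchmark, not for production:

```python
#!/usr/bin/env python3
"""Rough sketch: build a throwaway ZFS pool out of file-backed vdevs that live
on the existing XFS mounts, plus a log (SLOG) file on the SSD, purely so the
write path can be benchmarked without reformatting anything.
All paths, sizes, and the pool name are assumptions -- adjust for your layout."""
import subprocess

XFS_MOUNTS = ["/mnt/disk1", "/mnt/disk2", "/mnt/disk3"]  # assumed XFS mountpoints
SSD_MOUNT = "/mnt/ssd"                                   # assumed SSD mountpoint
POOL = "testpool"

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Create a sparse backing file on each XFS disk (100G each here).
vdevs = []
for mount in XFS_MOUNTS:
    backing = f"{mount}/zfs-test.img"
    run(["truncate", "-s", "100G", backing])
    vdevs.append(backing)

# 2. Build the pool across the file vdevs (no raidz; SnapRAID still does parity).
run(["zpool", "create", "-o", "ashift=12", POOL, *vdevs])

# 3. Optionally add a file on the SSD as a separate log (SLOG) device.
slog = f"{SSD_MOUNT}/zfs-slog.img"
run(["truncate", "-s", "16G", slog])
run(["zpool", "add", POOL, "log", slog])

# 4. Inspect the layout before benchmarking.
run(["zpool", "status", POOL])
```

When you're done, `zpool destroy testpool` and deleting the backing files gets you straight back to your current setup.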
Some other things I noticed on my journey that agree with your findings:
Yes, Btrfs has tragic performance characteristics, and it becomes even worse once you have multiple drives. It's almost unusable.
Btrfs doesn't have usable parity RAID (raid5/6 is still considered unstable), so you need to use something like SnapRAID in front of it if you want real parity protection.
I could never get mergerfs over NFS working right. Maybe it's something to do with FUSE on FUSE; I have no clue.
Some things that might be interesting to you:
ZFS doesn't support online expansion of an existing raidz vdev yet; the feature is supposed to be merged into master this year (2024). That might reduce your need for SnapRAID.
You complain about striping causing reads to hit all of your disks; why not use a ZFS layout that doesn't stripe across every disk, i.e. mirror vdevs? I don't know how that would affect power consumption.
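For what it's worth, a mirror-based layout would look roughly like the sketch below (device paths are placeholders). One honest caveat: with several mirror pairs ZFS still spreads data across the vdevs, but any single block lives on one pair, so a read only touches that pair rather than every disk; whether that is enough to let the other drives spin down, I don't know.

```python
#!/usr/bin/env python3
"""Sketch of a mirror-vdev pool; pool name and device paths are placeholders.
Each block is stored on one mirror pair, so reads touch at most that pair."""
import subprocess

subprocess.run(
    ["zpool", "create", "tank",
     "mirror", "/dev/disk/by-id/ata-DISK_A", "/dev/disk/by-id/ata-DISK_B",
     "mirror", "/dev/disk/by-id/ata-DISK_C", "/dev/disk/by-id/ata-DISK_D"],
    check=True,
)
```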
Do you really need a cache drive? Assuming you have at least 32 GB of memory, that should be more than enough for most downloads to be dealt with comfortably by RAM and the ZIL. My cache drive is barely touched and only gets used during large local operations. SnapRAID managing separate ZFS pools, each with a ZIL/SLOG, might be all the performance you need; just make sure ZFS is handling the writes asynchronously. You could use one ZFS pool per disk and manage them at the top level with SnapRAID. Have you benchmarked this against your mergerfs solution?
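If you want to try the one-pool-per-disk idea, here is a sketch of what I mean. The device IDs, pool names, and snapraid.conf layout are made up for illustration; on the "async writes" point, ZFS already buffers asynchronous writes in RAM by default (`sync=standard`), and `sync=disabled` would additionally force synchronous requests down that path at the cost of losing the last few seconds of writes on a crash:

```python
#!/usr/bin/env python3
"""Sketch: one ZFS pool per data disk, with SnapRAID providing parity on top.
Device IDs, pool names, and the snapraid.conf layout are illustrative only."""
import subprocess

DISKS = {  # assumed pool name -> whole-disk device
    "d1": "/dev/disk/by-id/ata-DATA_DISK_1",
    "d2": "/dev/disk/by-id/ata-DATA_DISK_2",
}

def run(cmd):
    subprocess.run(cmd, check=True)

for pool, dev in DISKS.items():
    # One single-vdev pool per disk, mounted where SnapRAID expects it.
    run(["zpool", "create", "-m", f"/mnt/{pool}", pool, dev])
    # sync=standard keeps the default behaviour (async writes buffered in RAM);
    # sync=disabled would force even fsync'd writes down the async path.
    run(["zfs", "set", "sync=standard", pool])

# SnapRAID then treats each pool as an ordinary mountpoint (hypothetical config).
SNAPRAID_CONF = """\
parity /mnt/parity/snapraid.parity
content /var/snapraid/snapraid.content
data d1 /mnt/d1
data d2 /mnt/d2
"""
with open("/etc/snapraid.conf", "w") as f:
    f.write(SNAPRAID_CONF)
```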
Personally, my entire setup at this point is back to a single ZFS pool mounted at the /home/ directory of a server, which is then shared everywhere: with auth via SMB to other computers, and via NFS to Kubernetes. For a while it was more complicated, but I lost too much hair.
I never have issues with it, since ZFS is really the only moving part.
Automated backups with zfs send/recv to a service like rsync.net are the next piece of my puzzle.
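The plan is something along these lines; the dataset name, remote host, and remote dataset are placeholders, and it assumes the far end accepts `zfs recv` over ssh:

```python
#!/usr/bin/env python3
"""Sketch of an incremental zfs send/recv push over ssh.
The dataset, snapshot naming, remote host, and remote dataset are placeholders."""
import subprocess
from datetime import datetime, timezone

DATASET = "tank/home"                  # local dataset (assumed)
REMOTE = "user@rsync.net"              # placeholder remote host
REMOTE_DATASET = "data1/backup/home"   # placeholder remote dataset

def run(cmd, **kw):
    return subprocess.run(cmd, check=True, **kw)

# 1. Take a new snapshot named after the current UTC time.
snap = f"{DATASET}@auto-{datetime.now(timezone.utc):%Y%m%d%H%M%S}"
run(["zfs", "snapshot", snap])

# 2. Find the previous snapshot to send incrementally from (if any).
out = run(["zfs", "list", "-H", "-t", "snapshot", "-o", "name",
           "-s", "creation", DATASET], capture_output=True, text=True)
snaps = out.stdout.split()
prev = snaps[-2] if len(snaps) >= 2 else None

# 3. Pipe zfs send into zfs recv on the remote side.
send_cmd = ["zfs", "send", "-i", prev, snap] if prev else ["zfs", "send", snap]
send = subprocess.Popen(send_cmd, stdout=subprocess.PIPE)
run(["ssh", REMOTE, "zfs", "recv", "-F", REMOTE_DATASET], stdin=send.stdout)
send.stdout.close()
if send.wait() != 0:
    raise RuntimeError("zfs send failed")
```

Wire something like that into a cron job or systemd timer and the backup side should be largely hands-off.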