High-level squashfuse optimization opportunities #73
You would definitely expect overhead from FUSE over an in-kernel driver, yes.
Okay, then I'll stick to the kernel version for now. Thanks!
FWIW: when using squashfuse_ll instead, perf tells a different story. Is there any reason to keep the high-level version if the low-level version performs so much better?
The high-level version uses a simpler FUSE API, which has wider availability: a number of platforms (Minix, NetBSD) only support the high-level API. Unfortunately, the high-level API doesn't map one-to-one(-ish) onto kernel VFS operations; instead it talks to a library layer that manages things like inode allocation, which makes it inherently slower. Workloads that hit many different inodes, like du, should be particularly bad.

If squashfuse_ll works better for you, then I recommend sticking with it! But I'd like to keep squashfuse available on other platforms, and there's no harm in leaving the high-level version around.

Let me rename this ticket to something about optimizing high-level squashfuse, since that seems to be where this has landed. Please go ahead and explain how you did your testing, and share what results you got. Then we can use this as an opportunity for anybody who wants to spend time optimizing high-level squashfuse.
Using mount there are no caching effects. Perf shows squashfuse spends all its time decompressing.
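A profiling run like the one described can be reproduced along these lines. The process name, the 5-second sampling window, and the 20-line cutoff are illustrative choices, and the sketch bails out gracefully when no squashfuse process is running:

```shell
# Sample a running squashfuse mount with perf (Linux only).
profile_squashfuse() {
    # pgrep fails when no squashfuse process exists; report and return.
    pid=$(pgrep -x squashfuse 2>/dev/null) || { echo "no squashfuse process"; return 0; }
    # Record call-graph samples for 5 seconds while the workload runs...
    perf record -g -p "$pid" -- sleep 5
    # ...then print the hottest stacks; decompression should dominate.
    perf report --stdio | head -n 20
}
profile_squashfuse
```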
I've documented some performance issues with squashfuse here: https://github.com/haampie/squashfs-mount/. I'm observing about a 1.5x increase in LLVM compile time when mounting compilers from a squashfs file with squashfuse, compared to just using (lib)mount. Is this overhead expected?