
Feature request: Parallel file reading #239

Open
nh2 opened this issue Apr 1, 2023 · 16 comments

nh2 commented Apr 1, 2023

Currently mksquashfs seems to use a single reader thread.

Many current devices only achieve optimal throughput when files are read from them in parallel:

  • current SSDs (which require a high queue depth)
  • large RAID arrays (e.g. servers with 16 disks in them)
  • network file systems (parallelism hiding network latency)

Could mksquashfs add (configurable) threaded reading?

Thanks!
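
(For illustration only, not mksquashfs code: a minimal Python sketch of the pattern being requested, reading a set of files with a configurable pool of reader threads so the device sees several outstanding requests at once. The script name and thread count are made up.)

import sys
from concurrent.futures import ThreadPoolExecutor

def read_file(path):
    # Each worker issues its own read; with several workers in flight the
    # device sees a deeper queue than a single sequential reader produces.
    with open(path, 'rb') as f:
        return path, len(f.read())

def read_all(paths, threads=8):
    with ThreadPoolExecutor(max_workers=threads) as pool:
        return list(pool.map(read_file, paths))

if __name__ == '__main__':
    # Usage: python3 parallel_read.py FILE [FILE ...]
    for path, size in read_all(sys.argv[1:]):
        print(f"{path}: {size} bytes")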

@plougher plougher self-assigned this Apr 4, 2023
@plougher plougher added this to the Undecided milestone Apr 4, 2023
plougher (Owner) commented Apr 5, 2023

This is an interesting request (the second in one week). Back when I parallelised Mksquashfs for the first time in about 2006 I did extensive experiments reading the source filesystem using one thread and multiple threads. These experiments showed the maximum performance was obtained with a single read thread (and so you're right that there is only one reader thread). But this was in the days of mechanical hard drives with slow seeking, and the results were not that surprising. By and large anything which caused seeking (including parallel reading of files) produced worse performance.

Modern hardware including RAID (*) and SSD drives may have changed the situation. So I'll add this to the list of enhancements and see if priorities allow it to be looked at for the next release.

(*) RAID has been around since the late 1980s. In fact I implemented a block striping RAID system in 1991. But they have become more and more widespread in recent years.

As far as RAID is concerned, I assume these systems are using block striping rather than bit striping, otherwise there should not be an issue. Also, as readahead should kick in for large files, utilising all the disks with block striping, I assume the issue is with small files which do not benefit from readahead.

@plougher plougher modified the milestones: Undecided, 4.7 release Dec 13, 2024
@ptallada

I'm interested in this feature too :)

@plougher (Owner)

You may have noticed that I have pushed the parallel file reading improvements to the branch reader_improvements.

The code in that branch implements the ability to have up to 128 parallel reader threads (the 128 limit is arbitrary, and can be increased). By default the code uses a conservative six parallel reader threads, split into three "small file reader threads" and three "multi-block reader threads".

The default number of reader threads can be changed with the following (currently undocumented) options:

  1. -single-reader-thread - use a single reader thread, as in previous versions of Mksquashfs
  2. -small-reader-threads N - use N small reader threads. Current maximum 64.
  3. -block-reader-threads N - use N block reader threads. Current maximum 64.

The difference between a small-reader-thread and a block-reader-thread is that a small-reader-thread only reads files smaller than the block size, and a block-reader-thread only reads files that are a block size or larger. The split comes from performance testing, which showed that distinguishing reader threads in this way can increase performance. The rationale for this will be documented/discussed later.
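
(An editor's sketch in Python, not the actual Mksquashfs implementation: one way to express the small-reader / block-reader split described above, routing each file to one of two thread pools depending on whether it is smaller than the block size. The names and the 3:3 pool sizes simply mirror the defaults mentioned above.)

import os
from concurrent.futures import ThreadPoolExecutor

BLOCK_SIZE = 128 * 1024       # default Squashfs data block size
SMALL_READERS = 3             # default split described above is 3:3
BLOCK_READERS = 3

def read_file(path):
    with open(path, 'rb') as f:
        return path, f.read()

def read_with_split_pools(paths):
    small_pool = ThreadPoolExecutor(max_workers=SMALL_READERS)
    block_pool = ThreadPoolExecutor(max_workers=BLOCK_READERS)
    futures = []
    for path in paths:
        # Files below the block size go to the small-file readers,
        # everything else goes to the multi-block readers.
        pool = small_pool if os.path.getsize(path) < BLOCK_SIZE else block_pool
        futures.append(pool.submit(read_file, path))
    results = [f.result() for f in futures]
    small_pool.shutdown()
    block_pool.shutdown()
    return results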

Can these parallel reader threads increase performance? In testing, whether they do is entirely dependent on a combination of variables, such as the number of processors, the I/O speed of the media, and the mix of small and large files in the source files to be compressed. Generally speaking, if you do not have many processors but do have fast media and large files, then a single reader thread hasn't been your bottleneck; your bottleneck is the speed of compression, which won't change. On the other hand, if you have a fast machine, media that benefit from parallel reading, and small files, then a single reader thread has been a major bottleneck limiting the performance of Mksquashfs, and I have cases where Mksquashfs is more than six times faster.

In addition, in testing I have not found a single optimal setting for the above options; as you might expect, the optimal settings are highly dependent on the previously mentioned variables. The smaller the files and the more parallel the media, the greater the performance that can be obtained from a large number of parallel reader threads, especially small-reader-threads. Experimentation with the options seems to be necessary to get optimal performance.

The following test matrix was generated by running Mksquashfs over a source filesystem consisting of 128 byte files, then 256 byte files, 512 byte files and so on, with the number of parallel reader threads varied from 1 (a single reader thread) and 1:1 (one small reader thread and one block reader thread) up to 62:62 (62 small reader threads and 62 block reader threads). The amount of data was the same in all cases (i.e. the same total was split across the 128 byte files, 256 byte files etc.). The machine has 14 cores/20 threads. The media was a SanDisk Extreme 55AE SSD connected via USB 3. For each file size, the first row is the elapsed time (MM:SS) and the second row is the CPU utilisation of the run.

        1               1:1             2:2             4:4             8:8             12:12           16:16           20:20           30:30           40:40           50:50           62:62
128     56:07.72        56:19.13        30:15.96        17:38.77        12:43.23        10:12.51        9:09.48         8:34.55         8:34.23         7:55.01         7:21.74         7:07.87
        37%             36%             71%             116%            88%             94%             105%            108%            108%            108%            116%            123%
256     27:31.65        27:38.58        14:32.88        8:05.15         5:49.41         4:33.68         3:52.34         3:47.39         3:44.00         3:24.72         3:14.90         3:02.63
        40%             39%             79%             130%            90%             100%            114%            120%            119%            121%            128%            141%
512     13:37.79        13:40.29        7:07.47         3:52.94         2:43.34         2:06.58         1:48.12         1:45.19         1:44.34         1:34.15         1:29.35         1:22.20
        51%             50%             96%             139%            107%            124%            147%            153%            151%            157%            171%            188%
1K      6:46.23         6:48.34         3:31.96         1:53.98         1:18.67         1:00.65         0:51.99         0:50.60         0:50.70         0:44.55         0:41.96         0:39.04
        68%             67%             109%            150%            143%            176%            208%            219%            218%            232%            253%            277%
2K      3:22.83         3:23.73         1:45.44         0:56.10         0:38.24         0:29.31         0:25.55         0:24.76         0:24.68         0:22.28         0:20.69         0:19.25
        92%             91%             137%            180%            227%            296%            343%            360%            362%            385%            422%            466%
4K      1:41.59         1:41.48         0:52.75         0:28.21         0:19.81         0:15.05         0:12.95         0:12.50         0:12.43         0:11.21         0:10.51         0:09.85
        138%            139%            183%            265%            387%            535%            643%            676%            685%            745%            823%            906%
8K      0:51.53         0:51.40         0:28.22         0:16.97         0:10.51         0:08.37         0:08.16         0:08.19         0:08.14         0:07.58         0:07.17         0:06.81
        179%            177%            240%            424%            753%            986%            1024%           1015%           1020%           1098%           1196%           1267%
16K     0:25.69         0:25.85         0:14.19         0:10.28         0:06.38         0:06.05         0:06.08         0:06.41         0:06.49         0:06.56         0:06.49         0:06.40
        233%            229%            470%            724%            1286%           1359%           1363%           1297%           1386%           1516%           1561%           1600%
32K     0:15.63         0:15.18         0:09.47         0:07.08         0:05.30         0:06.11         0:05.59         0:06.16         0:06.35         0:05.91         0:06.01         0:05.59
        390%            400%            775%            1088%           1547%           1589%           1776%           1696%           1583%           1661%           1687%           1814%
64K     0:13.38         0:12.98         0:06.92         0:05.30         0:05.78         0:05.78         0:05.76         0:05.74         0:05.81         0:05.80         0:05.82         0:05.81
        461%            470%            1099%           1643%           1741%           1739%           1729%           1739%           1732%           1720%           1726%           1731%
128K    0:07.64         0:07.58         0:05.48         0:05.03         0:05.42         0:05.46         0:05.48         0:05.01         0:05.60         0:05.37         0:05.43         0:05.77
        866%            875%            1339%           1721%           1721%           1705%           1704%           1843%           1717%           1739%           1716%           1624%
256K    0:06.24         0:05.74         0:04.63         0:05.39         0:05.45         0:05.03         0:05.48         0:05.71         0:05.23         0:05.40         0:05.39         0:04.95
        1127%           1243%           1631%           1734%           1700%           1857%           1739%           1628%           1715%           1723%           1732%           1887%
512K    0:06.11         0:06.04         0:04.97         0:05.18         0:05.39         0:04.98         0:05.50         0:05.76         0:05.15         0:05.39         0:05.37         0:05.07
        1161%           1171%           1529%           1727%           1720%           1865%           1749%           1608%           1734%           1739%           1740%           1852%
1M      0:05.32         0:05.19         0:04.81         0:05.38         0:05.36         0:05.25         0:05.38         0:05.31         0:05.35         0:05.34         0:05.31         0:05.06
        1364%           1406%           1598%           1675%           1708%           1764%           1730%           1747%           1726%           1750%           1763%           1848%

For instance with 128 byte files, a single reader thread took 56 minutes, and 62 small reader threads took 7 minutes.

Also with 4K files, a single reader thread took 1 minute 42 seconds, and 62 small reader threads took 10 seconds.

The larger the files, the smaller the improvement obtained from parallel reader threads, but even 512K files still show an improvement.

Please test, and I'll be interested in what feedback you have.
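
(If you want to reproduce a similar matrix, a small helper along these lines creates a directory of fixed-size files; this is a Python sketch, not the script used for the table above, and the sizes, counts and paths are whatever you choose.)

import os
import sys

def make_test_files(dest, file_size, total_bytes):
    # Split total_bytes of data into files of file_size bytes each.
    os.makedirs(dest, exist_ok=True)
    payload = os.urandom(file_size)   # incompressible; use b'\0' * file_size for compressible data
    for i in range(total_bytes // file_size):
        with open(os.path.join(dest, f"file_{i:08d}"), 'wb') as f:
            f.write(payload)

if __name__ == '__main__':
    # Usage: python3 make_test_files.py DEST FILE_SIZE TOTAL_BYTES
    make_test_files(sys.argv[1], int(sys.argv[2]), int(sys.argv[3]))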

@ptallada

Hi,

Thanks a lot for your work! I've run a few quick tests.

My main use case is packaging datasets to be stored on magnetic tape. This medium heavily discourages small files (<2 GiB), so packing data into archives (zip/tar/iso/squashfs) is fundamental for efficient read/write operations.

Test dataset: ~50 GB, ~750 files. 480 of those files are ~100 MB, and 122 files are smaller than 128K (the block size).
Test machine: 48 cores, 512 GiB RAM. Reading from network storage, writing to RAM (/dev/shm).

First test: default mksquashfs, gzip compression

real 3m9.225s
user 24m52.409s
sys 0m34.520s

Second test: parallel mksquashfs, gzip compression, 64:64 readers. x3.4 TIMES FASTER

real 0m42.996s
user 29m0.185s
sys 1m8.281s

I think the improvement is pretty substantial. Are there any plans to merge this feature?

On another topic, I saw that the zstd compression is not supported (yet?). Is there an inherent limitation on which compression algorithms may be supported using the parallel readers?

Thanks again!

@plougher (Owner)

I think the improvement is pretty substantial. Are there any plans to merge this feature?

I hit and fixed a deadlock last week, and so I'm going to continue testing for a couple of days, and then hopefully merge it.

On another topic, I saw that the zstd compression is not supported (yet?). Is there an inherent limitation on which compression algorithms may be supported using the parallel readers?

Hmm, by default only gzip is enabled in the Makefile. To enable zstd (and the other compression algorithms) you can edit the Makefile and uncomment the lines, e.g.

#ZSTD_SUPPORT = 1

becomes

ZSTD_SUPPORT = 1

Or you can enable it on the Make command line, e.g.

CONFIG=1 ZSTD_SUPPORT=1 make

Historically the reason for this is that not all distros had support for the more modern compression algorithms. But such distros are probably quite rare now, and so it should default to building all the compression algorithms.

Thanks for testing and the feedback. A speedup of x3.4 in a real-world scenario was more than I was expecting!

@ptallada

I think the improvement is pretty substantial. Are there any plans to merge this feature?

I hit and fixed a deadlock last week, and so I'm going to continue testing for a couple of days, and then hopefully merge it.

That's great!

On another topic, I saw that the zstd compression is not supported (yet?). Is there an inherent limitation on which compression algorithms may be supported using the parallel readers?

Hmm, by default only gzip is enabled in the Makefile. To enable zstd (and the other compression algorithms) you can edit the Makefile and uncomment the lines, e.g.

#ZSTD_SUPPORT = 1

becomes

ZSTD_SUPPORT = 1

Or you can enable it on the Make command line, e.g.

CONFIG=1 ZSTD_SUPPORT=1 make

Historically the reason for this is that not all distros had support for the more modern compression algorithms. But such distros are probably quite rare now, and so it should default to building all the compression algorithms.

Oh, perfect, then I'll test tomorrow again with zstd.

Thanks for testing and the feedback. A speedup of x3.4 in a real-world scenario was more than I was expecting!

I'm more than happy to help test this. I have data to pack these days, and I'll do some more tests on other real life data.

Thanks again!

@ptallada

Hi,

Another real-world case. I attach a plot of the file size distribution. Output written to RAM as before.

[Attached image: file size distribution of the dataset]

In this case, because of the large number of files and the network latency, it takes a long time to read all the data.

Serial, zstd:

real 12m44.803s
user 29m58.880s
sys 0m28.390s

Parallel, 64:64, x5.77 TIMES FASTER

real 1m52.950s
user 37m12.439s
sys 0m31.375s

BTW, Could I test with even MORE reader threads? I'm pretty sure our storage system can cope with ~500 small reader threads.

lalten (Contributor) commented Feb 13, 2025

I'm using squashfs as part of rules_appimage. Contents are mostly executable code (lots of ELF and .py files). My squashfs archives look mostly like this:

Code to generate plot (thx ChatGPT)
import os
import sys
import matplotlib.pyplot as plt

# Check if directory is passed as an argument
if len(sys.argv) < 2:
    print("Usage: python3 file_size_distribution.py /path/to/directory")
    sys.exit(1)

# Get directory from command-line argument
directory = sys.argv[1]

# Check if the directory exists
if not os.path.isdir(directory):
    print(f"Error: Directory '{directory}' not found.")
    sys.exit(1)

# Define size buckets and initialize counters
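# NB: each label marks the lower bound of its bucket, e.g. '10 B' counts files of 10-99 bytes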
buckets = {
    '<10 B': 0,
    '10 B': 0,
    '100 B': 0,
    '1 KB': 0,
    '10 KB': 0,
    '100 KB': 0,
    '1 MB': 0,
    '10 MB': 0,
    '>100 MB': 0
}

total_files = 0
total_size = 0

# Scan directory and categorize file sizes
for root, dirs, files in os.walk(directory):
    for file in files:
        file_path = os.path.join(root, file)
        try:
            size = os.path.getsize(file_path)
            total_files += 1
            total_size += size
            
            if size < 10:
                buckets['<10 B'] += 1
            elif size < 100:
                buckets['10 B'] += 1
            elif size < 1000:
                buckets['100 B'] += 1
            elif size < 10000:
                buckets['1 KB'] += 1
            elif size < 100000:
                buckets['10 KB'] += 1
            elif size < 1000000:
                buckets['100 KB'] += 1
            elif size < 10000000:
                buckets['1 MB'] += 1
            elif size < 100000000:
                buckets['10 MB'] += 1
            else:
                buckets['>100 MB'] += 1
        except (FileNotFoundError, PermissionError):
            pass  # Skip files that can't be accessed

# Convert total size to human-readable format
def human_readable_size(size):
    for unit in ['B', 'KB', 'MB', 'GB', 'TB']:
        if size < 1024:
            return f"{size:.1f} {unit}"
        size /= 1024
    return f"{size:.1f} PB"

total_size_hr = human_readable_size(total_size)

# Prepare data for plotting
labels = list(buckets.keys())
counts = list(buckets.values())

# Plot the data
plt.figure(figsize=(10, 6))
plt.bar(labels, counts, color='blue')
plt.xlabel('File Size')
plt.ylabel('Number of Files')
plt.title('File Size Distribution')
plt.suptitle(f"Total Files: {total_files} | Total Size: {total_size_hr}", fontsize=10)
plt.grid(axis='y', linestyle='--', alpha=0.7)
plt.xticks(rotation=45)
plt.tight_layout()

# Save and show the plot
output_file = 'file_size_distribution.png'
plt.savefig(output_file)
plt.show()

print(f"Plot saved as {output_file}")

[Attached image: file size distribution plot produced by the script above]

I extracted and re-packed this squashfs on a Ryzen 7 3800X 8-Core Processor (16 threads) with the mksquashfs from this branch.

  • From SSD, with default gzip compression:

    Exportable Squashfs 4.0 filesystem, gzip compressed, data block size 131072
            compressed data, compressed metadata, compressed fragments,
            compressed xattrs, compressed ids
            duplicates are removed
    Filesystem size 365280.59 Kbytes (356.72 Mbytes)
            20.74% of uncompressed filesystem size (1761249.24 Kbytes)
    Inode table size 93443 bytes (91.25 Kbytes)
            35.09% of uncompressed inode table size (266284 bytes)
    Directory table size 72959 bytes (71.25 Kbytes)
            42.77% of uncompressed directory table size (170603 bytes)
    Number of duplicate files found 547
    Number of inodes 6458
    Number of files 5353
    Number of fragments 423
    Number of symbolic links 18
    Number of device nodes 0
    Number of fifo nodes 0
    Number of socket nodes 0
    Number of directories 1087
    Number of hard-links 789
    Number of ids (unique uids + gids) 2
    Number of uids 1
            laltenmueller (1004)
    Number of gids 1
            laltenmueller (1005)
    
    • -single-reader-thread
      142,72s user 1,76s system 1509% cpu 9,573 total
    • -small-reader-threads 1 -block-reader-threads 1
      143,11s user 1,77s system 1514% cpu 9,564 total
    • -small-reader-threads 8 -block-reader-threads 8
      142,71s user 1,96s system 1490% cpu 9,706 total
    • -small-reader-threads 64 -block-reader-threads 64
      143,30s user 2,26s system 1484% cpu 9,805 total
  • From /dev/shm, with default gzip compression:

    • -single-reader-thread
      169,80s user 2,13s system 1501% cpu 11,451 total
    • -small-reader-threads 1 -block-reader-threads 1
      169,49s user 2,09s system 1506% cpu 11,390 total
    • -small-reader-threads 8 -block-reader-threads 8
      170,30s user 2,32s system 1493% cpu 11,559 total
    • -small-reader-threads 64 -block-reader-threads 64
      169,89s user 2,85s system 1495% cpu 11,549 total
  • From SSD, with -noIdTableCompression -noDataCompression -noFragmentCompression -noXattrCompression:

    Exportable Squashfs 4.0 filesystem, gzip compressed, data block size 131072
            uncompressed data, compressed metadata, uncompressed fragments,
            uncompressed xattrs, uncompressed ids
            duplicates are removed
    Filesystem size 1694869.95 Kbytes (1655.15 Mbytes)
            96.23% of uncompressed filesystem size (1761249.24 Kbytes)
    Inode table size 60429 bytes (59.01 Kbytes)
            22.69% of uncompressed inode table size (266284 bytes)
    Directory table size 72951 bytes (71.24 Kbytes)
            42.76% of uncompressed directory table size (170603 bytes)
    Number of duplicate files found 547
    Number of inodes 6458
    Number of files 5353
    Number of fragments 423
    Number of symbolic links 18
    Number of device nodes 0
    Number of fifo nodes 0
    Number of socket nodes 0
    Number of directories 1087
    Number of hard-links 789
    Number of ids (unique uids + gids) 2
    Number of uids 1
            laltenmueller (1004)
    Number of gids 1
            laltenmueller (1005)
    
    • -single-reader-thread
      0,62s user 3,32s system 246% cpu 1,595 total
    • -small-reader-threads 1 -block-reader-threads 1
      0,61s user 3,11s system 249% cpu 1,493 total
    • -small-reader-threads 8 -block-reader-threads 8
      0,66s user 3,88s system 266% cpu 1,707 total
    • -small-reader-threads 64 -block-reader-threads 64
      0,75s user 6,06s system 305% cpu 2,226 total
  • From /dev/shm, with -noIdTableCompression -noDataCompression -noFragmentCompression -noXattrCompression:

    • -single-reader-thread
      0,83s user 3,43s system 251% cpu 1,692 total
    • -small-reader-threads 1 -block-reader-threads 1
      0,91s user 3,66s system 251% cpu 1,814 total
    • -small-reader-threads 8 -block-reader-threads 8
      0,90s user 4,27s system 253% cpu 2,036 total
    • -small-reader-threads 64 -block-reader-threads 64
      1,07s user 6,27s system 287% cpu 2,551 total

I'm not sure if I'm missing some important piece here or if my setup just doesn't benefit from parallel reads.

@ptallada

Hi @lalten,

I think there are a few factors that may explain why you don't get much difference.

  • Your dataset is too small.
  • It is also stored locally, so it is not affected by the network latency of remote storage (as in my case).
  • Being small and local, it is probably cached by the OS in memory, so it is not actually read from disk.

The use cases benefiting most from parallel readers should be (a) lots of small files, and (b) remote storage :)

@plougher (Owner)

BTW, Could I test with even MORE reader threads? I'm pretty sure our storage system can cope with ~500 small reader threads.

Np, I have increased the maximum to 8192 small reader threads (and 8192 block reader threads) in this commit

a1d261d

The commit message has some important information about maximum open file limits, and what happens when you hit them

    reader: increase maximum amount of readers to 8192:8192
    
    For now you should ensure that the maximum number of reader
    threads is below your maximum open file limit (run
    ulimit -n), which is usually 1024.  Or of course you can
    set the limit higher with ulimit -n.
    
    If you do specify more readers than your current open file
    limit, you'll get the mysterious error message
    
    libgcc_s.so.1 must be installed for pthread_exit to work
    
    The reason for this is described in this bug on the Python
    bug tracker, and it's rather silly.
    
    https://bugs.python.org/issue44434
    
    Basically glibc pthread_exit() loads an unwind function
    from libgcc_s.so.1 using dlopen().  If the process is out of
    file descriptors then this will fail, and pthread_exit()
    will abort.
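
(A quick way to check, and if possible raise, the soft open-file limit from a wrapper before launching Mksquashfs with many readers. This is a Python sketch using the standard resource module and assumes a POSIX system; it is not part of Mksquashfs.)

import resource

def ensure_nofile(minimum):
    # Raise the soft RLIMIT_NOFILE towards the hard limit if it is below `minimum`.
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    if soft < minimum:
        new_soft = minimum if hard == resource.RLIM_INFINITY else min(minimum, hard)
        resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))
    return resource.getrlimit(resource.RLIMIT_NOFILE)

print(ensure_nofile(8192))   # then exec mksquashfs with the chosen reader counts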

@plougher (Owner)

I'm not sure if I'm missing some important piece here or if my setup just doesn't benefit from parallel reads.

If the files are cached in memory (which is likely here) then what you're measuring is the speed of memory access, which is very fast even with a single reader thread. The CPU figures of around 1500% (93.75% of the 1600% available on 16 threads) show the processors are maxed out.

To test the actual speed from SSD, before each time you run Mksquashfs, you should flush the caches by running the following as root

% echo 3 > /proc/sys/vm/drop_caches
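
(For repeatable cold-cache measurements, a wrapper along these lines, run as root, drops the caches and times each run. This is a Python sketch; the mksquashfs arguments shown in the comment are placeholders.)

import subprocess
import time

def drop_caches():
    # Flush dirty pages first, then drop the page, dentry and inode caches.
    subprocess.run(['sync'], check=True)
    with open('/proc/sys/vm/drop_caches', 'w') as f:
        f.write('3\n')

def timed_run(cmd):
    drop_caches()
    start = time.monotonic()
    subprocess.run(cmd, check=True)
    return time.monotonic() - start

# Example (paths and options are placeholders):
# print(timed_run(['mksquashfs', 'source_dir', 'out.sqsh',
#                  '-small-reader-threads', '8', '-block-reader-threads', '8']))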

@plougher (Owner)

BTW, Could I test with even MORE reader threads? I'm pretty sure our storage system can cope with ~500 small reader threads.

Np, I have increased the maximum to 8192 small reader threads (and 8192 block reader threads) in this commit

Does the increase in threads improve performance? I won't be disappointed if it doesn't! But it will be interesting for me to know the results. Thanks.

@plougher (Owner)

I think the improvement is pretty substantial. Are there any plans to merge this feature?

I hit and fixed a deadlock last week, and so I'm going to continue testing for a couple of days, and then hopefully merge it.

Status update ... Further extensive pre-release testing brought up another unexpected issue. I have spent the last couple of days tracking down this new issue.

About 30% of the core of Mksquashfs has been rewritten to deal with parallel reader threads, and unfortunately as a result this does require extensive pre-release testing to pick up any unknown issues.

For your information the "new" issue turned out to be a "benign" bug that has been in Mksquashfs for 17 years without being discovered. The reason why this bug has cropped up now is because the parallel reader threads make it about 1000 times more likely to occur.

In short, testing a large filesystem (about 160 Gbytes, 4.2 million files and 1.5 million duplicates), occasionally the number of reported duplicates would be less than expected by between 1 and 5 files, that is at worst 1,625,263 duplicates rather than the correct 1,625,268. The effect of this is a correct filesystem, but one where 1 - 5 files have not been recognised as duplicates and so are stored twice, which means a loss of compression. This is why the bug is relatively benign: it doesn't result in an incorrect filesystem.

As I said this turns out to be a race condition introduced in 2008, which is almost impossible to hit with a single reader thread and so has never shown up before, but with about 10 or more parallel reader threads it does occur rarely (here about 5 in 4.2 million files).

The fix is relatively easy to do, and I'll be working on that tomorrow. After a week of testing this is the only issue to show up, and so hopefully I'll be able to merge it later this week.

@ptallada

Hi @plougher,

I think that in order to benefit from many, many threads you would need some kind of crappy storage setup but also a powerful machine to handle so many threads: massive storage mounted from a remote location with high latency, lots of disks to handle the parallelism, and millions of tiny files. I pity anyone who has to work with that scenario ;)

In my case, I could not measure much difference. Using other tools for parallel reading, our system can handle up to 700-800 threads, depending on file size. (If you are curious: WebDAV transfers Zürich-Barcelona using 700 threads reach about 500 MiB/s.) So I don't think I can benefit from more than 1k threads.

Thanks again!

Pau.

@plougher (Owner)

Hi @ptallada

In my case, I could not measure much difference. Using other tools for parallel reading, our system can handle up to 700-800 threads, depending on file size. (If you are curious: WebDAV transfers Zürich-Barcelona using 700 threads reach about 500 MiB/s.) So I don't think I can benefit from more than 1k threads.

Np, thanks for testing. It's good to know anything over 1k threads isn't going to be of benefit.

I spent a couple of weeks in rather lovely Zurich (sorry, I don't know how to do the umlaut) about 20 years ago. My brother was involved with ETH Zurich at the time and so we made it a vacation. Out of amusement we stayed at the Hotel Bristol; Bristol obviously sounds exotic in Zurich, but for us it was hardly exotic, being our nearest big city. The only memorable mishap was that I was learning German at the time and often confused words. So one lunchtime I meant to order two glasses of the local wine, instead ordered two bottles, and was too embarrassed to correct my mistake. I don't normally drink a bottle of wine at lunchtime.

I have stayed up all night working on this bug, and so I'm off to bed. Thanks for all your feedback.

@ptallada

I spent a couple of weeks in rather lovely Zurich (sorry I don't how to do the umlaut) about 20 years ago. My brother was involved with ETH Zurich at the time and so we made it a vacation.

Data comes from there, huge dark-matter simulations ;)

I don't normally drink a bottle of wine at lunchtime.

Hahahah, good to know :P

plougher added a commit that referenced this issue Feb 20, 2025
This merges the work to add parallel reader threads to
Mksquashfs.

This is discussed in issue #239

In particular for more information about the new options,
and the possible speed improvements see comment

#239 (comment)

Signed-off-by: Phillip Lougher <[email protected]>