I just tried compressing a big apitrace file (13 GB), which got compressed to around 1 GB. During compression all cores are used, but I noticed that during decompression only a small portion of the cores are used, and many remain idle. I'm running the test on a Ryzen 7 1700X, so it has 8 cores and 16 threads available (with hyperthreading).
Is decompression hard-limited in how many cores it uses?
That could be the case; I'm using an HDD. I can get an SSD to test with.
On a side note, maybe writing could be optimized with some buffering to avoid that bottleneck? I.e., let the parallel workers accumulate output in a larger RAM buffer and then write it out to disk in one big chunk; that would minimize concurrent disk access (rough sketch below).
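To illustrate the idea, here is a minimal sketch of such an output buffer: decompressed chunks are collected in memory and flushed to disk only in large sequential writes. The class name `ChunkedWriter` and the 256 MiB threshold are hypothetical choices for illustration, not part of any existing tool.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Hypothetical buffered writer: accumulate decompressed chunks in RAM and
// flush them to disk in one big sequential write, so an HDD sees few large
// writes instead of many small, interleaved ones.
class ChunkedWriter {
public:
    // Assumption: spare RAM is available, as suggested above; buffer ~256 MiB
    // before touching the disk.
    static constexpr std::size_t kFlushThreshold = 256 * 1024 * 1024;

    explicit ChunkedWriter(std::FILE *out) : out_(out) {
        buffer_.reserve(kFlushThreshold);
    }

    ~ChunkedWriter() { flush(); }

    // Append a decompressed chunk; flush only once the buffer is full.
    void write(const void *data, std::size_t size) {
        const auto *bytes = static_cast<const unsigned char *>(data);
        buffer_.insert(buffer_.end(), bytes, bytes + size);
        if (buffer_.size() >= kFlushThreshold) {
            flush();
        }
    }

    // Write the whole buffer out in one go and reset it.
    void flush() {
        if (!buffer_.empty()) {
            std::fwrite(buffer_.data(), 1, buffer_.size(), out_);
            buffer_.clear();
        }
    }

private:
    std::FILE *out_;
    std::vector<unsigned char> buffer_;
};
```

This doesn't change how many cores decompression uses, but it should keep slow disks from stalling the worker threads on small writes.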