I think that rather than having a compression module, the code could simply run an external compression program and read its output back.
This would let us use a wider range of tools.
I think we should use pbzip2 for this, since db servers have many cores.
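As a rough illustration, here is a minimal sketch of the piping, assuming the backup code is Python; the `compress_stream` helper name and the `pbzip2 -c` default are hypothetical, and any tool that compresses stdin to stdout would fit:

```python
import shutil
import subprocess

def compress_stream(src_path, dst_path, compressor=("pbzip2", "-c")):
    """Pipe src_path through an external compressor, writing dst_path.

    Hypothetical helper: the point is that no in-process compression
    module is involved; we just read the tool's stdout back.
    """
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        # The child reads the source file directly and streams its
        # compressed output back to us over a pipe.
        proc = subprocess.Popen(list(compressor), stdin=src,
                                stdout=subprocess.PIPE)
        shutil.copyfileobj(proc.stdout, dst)
        proc.stdout.close()
        if proc.wait() != 0:
            raise RuntimeError("%s exited with %d"
                               % (compressor[0], proc.returncode))
```

Swapping in gzip, pigz or xz would then be a one-line configuration change instead of a new compression module.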
I ran this experiment:
```
# asd is a file filled with random data, sized 149MiB
salvo@vulcano /tmp$ time (cat asd | pbzip2 > asd.bz2)

real    0m6.155s
user    0m40.856s
sys     0m0.660s

salvo@vulcano /tmp$ time bzip2 asd

real    0m21.327s
user    0m20.692s
sys     0m0.156s
```
As you can see, pbzip2 is clearly faster (roughly 3.5× lower wall-clock time), even on streamed input, not just on mappable files.
This would reduce backup times accordingly.
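The restore path could read the data back the same way; a hedged sketch, again assuming Python, where `pbzip2 -dc` (decompress to stdout) stands in for whichever tool produced the backup:

```python
import subprocess

# Stream a compressed backup back through the external tool.
with open("asd.bz2", "rb") as src:
    proc = subprocess.Popen(["pbzip2", "-dc"], stdin=src,
                            stdout=subprocess.PIPE)
    # Consume decompressed data in chunks, as the restore code would.
    for chunk in iter(lambda: proc.stdout.read(64 * 1024), b""):
        pass  # hand each chunk to the restore logic here
    proc.stdout.close()
    proc.wait()
```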