Use zstd compression in bottommost layer (#2582)
Tested up to block ~14m, zstd uses ~12% less space which seems to result
in a smallish (2-4%) performance improvement on block import speed -
this seems like a better baseline for more extensive testing in the
future.

Pre: 57383308 kb
Post: 50831236 kb
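As a sanity check, the pre/post sizes quoted above do work out to the ~12% figure (a minimal sketch; the two sizes are taken directly from this commit message):

```python
# Database sizes before and after switching the bottommost level to zstd,
# in kb, as reported in the commit message above.
pre_kb = 57_383_308
post_kb = 50_831_236

# Relative space saved by zstd compared to the LZ4 baseline.
savings = 1 - post_kb / pre_kb
print(f"zstd uses {savings:.1%} less space")  # ~11.4%, i.e. the quoted ~12%
```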
arnetheduck authored Aug 30, 2024
1 parent 42a08cf commit 84a72c8
Showing 1 changed file with 7 additions and 4 deletions.
11 changes: 7 additions & 4 deletions nimbus/db/core_db/backend/aristo_rocksdb.nim
@@ -93,13 +93,16 @@ proc toRocksDb*(
   cfOpts.memtableWholeKeyFiltering = true
   cfOpts.memtablePrefixBloomSizeRatio = 0.1
 
-  # LZ4 seems to cut database size to 2/3 roughly, at the time of writing
+  # ZSTD seems to cut database size to 2/3 roughly, at the time of writing
   # Using it for the bottom-most level means it applies to 90% of data but
   # delays compression until data has settled a bit, which seems like a
   # reasonable tradeoff.
-  # TODO evaluate zstd compression with a trained dictionary
-  # https://github.com/facebook/rocksdb/wiki/Compression
-  cfOpts.bottommostCompression = Compression.lz4Compression
+  # Compared to LZ4 that was tested earlier, the default ZSTD config results
+  # in 10% less space and similar or slightly better performance in some
+  # simple tests around mainnet block 14M.
+  # TODO evaluate zstd dictionary compression
+  # https://github.com/facebook/rocksdb/wiki/Dictionary-Compression
+  cfOpts.bottommostCompression = Compression.zstdCompression
 
   # TODO In the AriVtx table, we don't do lookups that are expected to result
   # in misses thus we could avoid the filter cost - this does not apply to
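The comment's "applies to 90% of data" follows from RocksDB's leveled compaction: with the default level size multiplier of 10, each level is roughly 10x the size of the previous one, so the bottommost level dominates. A rough illustration (assumes a 10x fanout and 5 populated levels; these numbers are not taken from this commit):

```python
# Relative sizes of LSM levels under leveled compaction with a 10x fanout.
fanout = 10
levels = 5
sizes = [fanout**i for i in range(levels)]  # 1, 10, 100, 1000, 10000

# Fraction of total data that sits in the bottommost level, and hence is
# covered by bottommostCompression.
bottom_share = sizes[-1] / sum(sizes)
print(f"bottommost level holds {bottom_share:.0%} of data")  # 90%
```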
