It looks like computing a chunk group hash is faster in abao than in bao-tree. To confirm this we should add a microbenchmark comparing the two, and then fix the regression.
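A criterion microbenchmark could time hashing one 16 KiB chunk group with each implementation. The sketch below is only an assumption-laden skeleton: it uses plain `blake3::hash` as a stand-in body in both benchmarks, since the actual abao and bao-tree chunk-group-hash entry points would need to be slotted into the closures.

```rust
// Hypothetical benches/chunk_group.rs; assumes `criterion` and `blake3`
// as dev-dependencies and a `[[bench]]` entry with `harness = false`.
// `blake3::hash` is a placeholder for the abao and bao-tree routines.
use criterion::{black_box, criterion_group, criterion_main, Criterion};

// A chunk group here: 16 BLAKE3 chunks of 1 KiB each.
const CHUNK_GROUP_SIZE: usize = 16 * 1024;

fn bench_chunk_group_hash(c: &mut Criterion) {
    let data = vec![0xABu8; CHUNK_GROUP_SIZE];
    let mut group = c.benchmark_group("chunk_group_hash");
    // Placeholder for the bao-tree implementation.
    group.bench_function("bao_tree", |b| {
        b.iter(|| blake3::hash(black_box(&data)))
    });
    // Placeholder for the abao implementation.
    group.bench_function("abao", |b| {
        b.iter(|| blake3::hash(black_box(&data)))
    });
    group.finish();
}

criterion_group!(benches, bench_chunk_group_hash);
criterion_main!(benches);
```

Running `cargo bench` would then report the two timings side by side, making any gap between the implementations directly visible.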
divagant-martian changed the title from "Add microbenchnarks for computing a chunk group hash" to "Add microbenchmarks for computing a chunk group hash" on Jul 29, 2023.
It looks like this was a false hope. The binary where the outboard computation is super fast is already using bao-tree, at least judging by the symbol table:
```shell
abao/bao_bin on master [$!?] is 📦 v0.12.1 via 🦀 v1.71.0
❯ nm /Users/rklaehn/bin/iroh | grep bao_tree | wc -l
221

abao/bao_bin on master [$!?] is 📦 v0.12.1 via 🦀 v1.71.0
❯ nm /Users/rklaehn/bin/iroh | grep abao | wc -l
0
```
The question of the fastest way to compute a chunk group hash is addressed in BLAKE3-team/BLAKE3#329 / the iroh-blake3 crate, and by whatever @oconnor663 comes up with to solve this long term.
See n0-computer/iroh#1288.