This repository has been archived by the owner on Nov 26, 2024. It is now read-only.
Conversation
It can be easy to look at small average step and hash times and miss that the total time is what we're really trying to reduce.
I definitely had some incorrect assumptions about this data structure, which made it more difficult to learn. So, I'm documenting how it works and adding some tests. The simple_merkle test is currently failing because the `set` method doesn't allow setting an index larger than the largest currently set leaf's index. There is some debate about whether this is the correct behavior. To run the test, use:

```
cargo test -- --include-ignored
```
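The disputed behavior can be illustrated with a toy stand-in (the real tree's API and types differ; `SimpleMerkle` and its fields here are purely illustrative):

```rust
// Toy stand-in for the real merkle tree; names and types are illustrative only.
struct SimpleMerkle {
    leaves: Vec<u64>,
}

impl SimpleMerkle {
    fn new(leaves: Vec<u64>) -> Self {
        SimpleMerkle { leaves }
    }

    // Mirrors the debated behavior: setting an index beyond the
    // largest currently set leaf's index is rejected.
    fn set(&mut self, index: usize, value: u64) -> Result<(), String> {
        if index >= self.leaves.len() {
            return Err(format!("index {} is past the last set leaf", index));
        }
        self.leaves[index] = value;
        Ok(())
    }
}

fn main() {
    let mut tree = SimpleMerkle::new(vec![0; 4]);
    assert!(tree.set(3, 7).is_ok()); // in range: allowed
    assert!(tree.set(4, 7).is_err()); // past the last leaf: rejected
    println!("ok");
}
```

Whether `set` should instead grow the tree on demand is exactly the open question the failing test probes.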
At this point, the new root hash is eagerly calculated after each call to `extend`.
If this happened frequently, it should really improve the performance of the machine. However, it looks like it doesn't happen at all with the benchmark inputs.
Previously, it could hit an index out of bounds if the new leaves caused any parent layer to grow beyond its current size.
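The parent-layer size follows from ceiling division over the child layer, which is how appending leaves can force a write past a parent layer's current length. A minimal illustration (not the repo's code):

```rust
// Number of nodes required in a parent layer for `n` children: ceil(n / 2).
fn parent_len(n: usize) -> usize {
    (n + 1) / 2
}

fn main() {
    // Growing layer 0 from 4 to 6 leaves forces layer 1 from 2 to 3 nodes;
    // writing parent index 2 into a Vec still sized for 2 nodes is the
    // out-of-bounds case, so each parent layer must be resized first.
    assert_eq!(parent_len(4), 2);
    assert_eq!(parent_len(6), 3);
    assert_eq!(parent_len(7), 4);
    println!("ok");
}
```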
Hopefully, this will allow us to compare this branch's implementation of a merkle tree to the one on merkle-perf-a.
The previous implementation was growing the same `layers` and `dirty_indices` arrays because the clone isn't deep (I guess).
There are a few different things going on in this commit:
1. I've added some counters for when methods get called on the Merkle tree.
2. I've added integration with gperftools for profiling specific areas of the code.
This allows me to profile CPU and Heap independently, and to enable and disable the call counters independently.
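A common pattern for such call counters is a set of global atomics bumped on each method entry; the counter names below are hypothetical, not the repo's actual identifiers:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Illustrative call counters; the real code's names may differ.
static SET_CALLS: AtomicUsize = AtomicUsize::new(0);
static EXTEND_CALLS: AtomicUsize = AtomicUsize::new(0);

fn count_set() {
    // Relaxed ordering is enough for a monotonically increasing counter.
    SET_CALLS.fetch_add(1, Ordering::Relaxed);
}

fn count_extend() {
    EXTEND_CALLS.fetch_add(1, Ordering::Relaxed);
}

fn main() {
    for _ in 0..3 {
        count_set();
    }
    count_extend();
    assert_eq!(SET_CALLS.load(Ordering::Relaxed), 3);
    assert_eq!(EXTEND_CALLS.load(Ordering::Relaxed), 1);
    println!("ok");
}
```

In practice such counters would be behind a cargo feature flag so they can be compiled out, which is what makes them independently toggleable from the profiler integration.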
This part of the code is obviously slow. Let's see if we can improve it.
This is why there were all those unexpected `new_advanced` calls on the memory merkle: the resizes were actually setting `self.merkle` back to `None`.
There was a bug where expanding the lowest layer and calling set on all of the new elements was not sufficient to grow the upper layers. This commit also fixes a warning about the package-level profile override being ineffective.
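The profile-override warning typically means a `[profile.*]` section sits in a member crate's Cargo.toml; Cargo only honors profiles declared in the workspace root. A sketch of the fix (the member path and setting here are illustrative, not taken from this repo):

```toml
# Workspace root Cargo.toml — profile sections must live here,
# not in a member crate's manifest, or Cargo ignores them with a warning.
[workspace]
members = ["prover"]

[profile.release]
debug = true  # example setting; carry over whatever the member crate declared
```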
I don't think it's being used.
I have no idea why this is needed. But, it makes `make docker` successful again.
The system tests are timing out because the implementation is still too slow for large steps with lots of store and resize memory calls.
I'm closing this PR. The implementation is only fast because it has a bug: the root hash will quite often be incorrect after the merkle tree is extended.
This change does two things:

1. Adds a `Merkle::extend` method on the merkle tree implementation, which extends the vectors that hold the leaves and their parents and then calls `set` on all of the hashes being appended to `layer[0]`.
2. Changes the `Memory::resize` method to use `Merkle::extend` instead of throwing away the merkle tree and creating a new one.

This makes the `always_merkleize` strategy much faster than it would be without these changes. Before this change, the `benchbin` binary ran with metrics like these:

After this change, the data is more like this:
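A sketch of the approach described above, under simplifying assumptions: the hash function is a stand-in for the real cryptographic one, and for clarity the parent layers are rebuilt wholesale rather than only the dirty suffix being rehashed as the description implies:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in hash for illustration; the real tree uses a cryptographic hash.
fn node_hash(left: u64, right: u64) -> u64 {
    let mut h = DefaultHasher::new();
    (left, right).hash(&mut h);
    h.finish()
}

struct Merkle {
    // layers[0] holds leaf hashes; each higher layer holds parent hashes.
    layers: Vec<Vec<u64>>,
}

impl Merkle {
    fn new(leaves: Vec<u64>) -> Self {
        let mut tree = Merkle { layers: vec![leaves] };
        tree.rebuild_parents();
        tree
    }

    // Append new leaf hashes to layer 0, then recompute the parents —
    // the key point being that extending reuses the existing tree
    // instead of throwing it away and building a new one.
    fn extend(&mut self, hashes: Vec<u64>) {
        self.layers[0].extend(hashes);
        self.rebuild_parents();
    }

    // Rebuild parent layers bottom-up, sizing each parent layer from its
    // child layer so growth never writes out of bounds.
    fn rebuild_parents(&mut self) {
        self.layers.truncate(1);
        while self.layers.last().unwrap().len() > 1 {
            let child = self.layers.last().unwrap();
            let mut parent = Vec::with_capacity((child.len() + 1) / 2);
            for pair in child.chunks(2) {
                // An odd node is paired with itself.
                let right = if pair.len() == 2 { pair[1] } else { pair[0] };
                parent.push(node_hash(pair[0], right));
            }
            self.layers.push(parent);
        }
    }

    fn root(&self) -> u64 {
        self.layers.last().unwrap()[0]
    }
}

fn main() {
    // Extending an existing tree must yield the same root as
    // rebuilding the tree from scratch over all the leaves.
    let mut incremental = Merkle::new(vec![1, 2, 3, 4]);
    incremental.extend(vec![5, 6]);
    let from_scratch = Merkle::new(vec![1, 2, 3, 4, 5, 6]);
    assert_eq!(incremental.root(), from_scratch.root());
    println!("ok");
}
```

The equality check in `main` is exactly the invariant the closing comment says the real change violated: if `extend` takes shortcuts in rehashing, the extended tree's root can silently diverge from the from-scratch root.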
NOTE: With the optimization, even step-size 16,777,216 (2^24) is able to run 100 iterations in just a little bit over a minute.
references https://linear.app/offchain-labs/issue/NIT-2411/arbitrator-optimizations