The first iteration of the `RemovalRecordsIntegrity` consensus program performs rather disappointingly, clocking in at roughly 358,000 cycles for an arbitrary 2-in-2-out transaction.
| clock cycle count | hash table height | u32 table height | op stack height | ram table height |
|---|---|---|---|---|
| 358251 | 49708 | 46564 | 252217 | 63020 |
Looking at the profile, it seems that more than two thirds of the clock cycles are spent verifying MMR authentication paths. (We need to do that because otherwise someone could produce a transaction with removal records that cannot be applied to the mutator set, but which still looks valid.)
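To make the dominant cost concrete, the per-leaf verification is the usual hash-the-leaf-up-to-the-root walk. Here is a minimal sketch in Rust; the `hash_pair` function is a toy, non-cryptographic stand-in for the real Tip5-based digest, and all names are illustrative, not the actual Neptune/Triton VM API.

```rust
// Toy stand-in for the cryptographic compression function; NOT Tip5.
fn hash_pair(left: u64, right: u64) -> u64 {
    left.wrapping_mul(31).wrapping_add(right).rotate_left(7)
}

/// Walk an authentication path from a leaf up to the root.
/// At each level, the parity of `leaf_index` decides whether the
/// sibling sits on the left or the right.
fn verify_auth_path(root: u64, mut leaf_index: usize, leaf: u64, path: &[u64]) -> bool {
    let mut node = leaf;
    for &sibling in path {
        node = if leaf_index % 2 == 0 {
            hash_pair(node, sibling)
        } else {
            hash_pair(sibling, node)
        };
        leaf_index /= 2;
    }
    node == root
}

fn main() {
    // Build a tiny 4-leaf tree and check leaf 2 against the root.
    let leaves = [1u64, 2, 3, 4];
    let l01 = hash_pair(leaves[0], leaves[1]);
    let l23 = hash_pair(leaves[2], leaves[3]);
    let root = hash_pair(l01, l23);
    assert!(verify_auth_path(root, 2, leaves[2], &[leaves[3], l01]));
    // A wrong path node must fail.
    assert!(!verify_auth_path(root, 2, leaves[2], &[leaves[3], l23]));
    println!("auth path checks passed");
}
```

The point of the sketch: the work is one hash invocation per tree level per removal record index, which is exactly where a dedicated instruction like `merkle_step_mem` or path deduplication would pay off.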
There are several ways in which we might address this performance issue.

 1. Wait for `merkle_step_mem`, which is in the pipeline over on Triton VM.
 2. Split `RemovalRecordsIntegrity` into two programs. The first establishes that the salted inputs hash corresponds to the listed removal records. The second establishes that the removal records can in fact be applied to the current mutator set accumulator. This option does not reduce the workload, but it shifts the workload around so that users on light devices can still produce the proofs relevant for ownership (the lock scripts) and privacy (the first program), and delegate the other parts to peers.
 3. Compress the `target_chunks` field of removal records. After all, the removal record indices are either (a) deep in the SWBF, in which case they live in the same tree with overwhelming probability and their authentication paths overlap substantially, or (b) shallow, in which case their authentication paths are short and there is no problem to begin with. In case (a), the authentication paths can be deduplicated into an authentication structure, which certainly saves memory. It is unclear, though, whether the benefit of running the computation over less data outweighs the increased arithmetic cost of computing with this memory-efficient data structure.
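To illustrate option 3, here is a sketch of computing which tree nodes an authentication structure must actually supply for a batch of leaves; siblings that are themselves given leaves, or that are computable from given leaves, are dropped. This is a generic illustration of the deduplication idea, not the actual `target_chunks` encoding or the Neptune codebase's authentication-structure type.

```rust
use std::collections::BTreeSet;

/// Which extra nodes must accompany the given leaves so that the root can
/// be recomputed? Nodes use 1-based heap indexing: root = 1, children of
/// node i are 2i and 2i+1, and leaf j (with `num_leaves` a power of two)
/// sits at index num_leaves + j.
fn auth_structure_indices(num_leaves: usize, leaf_indices: &[usize]) -> BTreeSet<usize> {
    assert!(!leaf_indices.is_empty(), "need at least one leaf");
    let mut layer: BTreeSet<usize> =
        leaf_indices.iter().map(|&j| num_leaves + j).collect();
    let mut needed = BTreeSet::new();
    // Walk level by level from the leaves up to the root.
    while layer != BTreeSet::from([1]) {
        let mut parents = BTreeSet::new();
        for &node in &layer {
            let sibling = node ^ 1;
            // A sibling that is itself known or computable need not be supplied.
            if !layer.contains(&sibling) {
                needed.insert(sibling);
            }
            parents.insert(node / 2);
        }
        layer = parents;
    }
    needed
}

fn main() {
    // Adjacent leaves (2 and 3 of 4) share their whole path: one node
    // suffices instead of two paths of length two each.
    assert_eq!(auth_structure_indices(4, &[2, 3]), BTreeSet::from([2]));
    // Even distant leaves (0 and 2 of 4) save: two nodes instead of four.
    assert_eq!(auth_structure_indices(4, &[0, 2]), BTreeSet::from([5, 7]));
    println!("dedup checks passed");
}
```

In case (a), where indices cluster in one tree, the shared upper levels of the paths collapse exactly as in the first assertion; the open question from the issue is whether recomputing shared nodes in-VM is cheaper than hashing the redundant path data.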