Send hashes in the echo round instead of the full messages #90
Conversation
Pull Request Test Coverage Report for Build 13193469924
💛 - Coveralls
Force-pushed from 3b6d310 to 6ed1729
LGTM
EDIT: Although I have a question. This optimization is CGGMP-specific, and I wonder how well it jibes with manul's goal of being a general-purpose "round manager" library. I'm fine merging CGGMP-specific stuff, but it's worth keeping track of where we stray from the general-purpose goal.
I think it's a pretty general thing: we don't want to send whole messages the second time. Even without quadratic scaling, echoes with full messages can get pretty large.
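To put rough numbers on that (assumed figures for illustration, not measurements from manul or CGGMP'24):

```rust
// Back-of-the-envelope comparison: with `n` parties, each broadcasting `s` bytes,
// the echo round re-sends the other parties' broadcasts, so per-party echo traffic
// is roughly (n - 1) * s with full messages versus (n - 1) * 32 with 32-byte hashes.
fn main() {
    let n: u64 = 100; // number of parties (assumed)
    let s: u64 = 10_000; // broadcast size in bytes (assumed; itself grows with n in CGGMP'24)
    let hash_len: u64 = 32;

    let full = (n - 1) * s; // 990_000 bytes per party per echo round
    let hashed = (n - 1) * hash_len; // 3_168 bytes per party per echo round
    println!("full echo: {full} bytes, hashed echo: {hashed} bytes");
}
```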
In addition to the requested changes, also added the forgotten length prefix when hashing variable-length bytestrings. Tend to forget this kind of stuff because in …
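For reference, a minimal sketch of the kind of length-prefixed hashing meant here, using the sha2 crate; the 8-byte little-endian prefix is an illustrative convention, not necessarily the one manul uses:

```rust
use sha2::{Digest, Sha256};

// Hash a sequence of variable-length bytestrings unambiguously by prefixing
// each one with its length. Without the prefix, ("ab", "c") and ("a", "bc")
// would feed identical bytes into the digest and collide.
fn hash_chunks(chunks: &[&[u8]]) -> [u8; 32] {
    let mut digest = Sha256::new();
    for chunk in chunks {
        // 8-byte little-endian length prefix (illustrative convention).
        digest.update((chunk.len() as u64).to_le_bytes());
        digest.update(chunk);
    }
    digest.finalize().into()
}

fn main() {
    // With the prefix, different chunkings of the same bytes hash differently.
    assert_ne!(
        hash_chunks(&[b"ab".as_slice(), b"c".as_slice()]),
        hash_chunks(&[b"a".as_slice(), b"bc".as_slice()])
    );
}
```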
Force-pushed from d67250d to 60fec79
Some echo broadcasts in CGGMP'24 include large structures that scale linearly with the number of nodes, so the corresponding echo round messages would scale quadratically. This PR replaces sending full payloads the second time with just sending payload hashes. The trade-offs are:
- Larger Evidence (need to keep the echo messages themselves and the signed hashes).

Also fixes a bug with the TestHasher OutputSize and buffer.len() mismatch. Before this PR the created digest was actually ignored when creating TestSignature, so the bug didn't fire.

This approach hashes in two stages when it's technically not necessary (that is, for non-echo messages), but I don't think it leads to any noticeable performance decrease: with most hashes it's the update part that's slow, not the finalize. So the only additional cost is an extra tag pushed into the digest.
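A minimal sketch of the idea, with hypothetical types and tag names rather than manul's actual API: the echo round sends a tagged hash of each received broadcast instead of the broadcast itself, and the tag gives the domain separation mentioned above.

```rust
use std::collections::BTreeMap;
use sha2::{Digest, Sha256};

// Hypothetical types for the sketch; not manul's actual API.
type PartyId = u8;
type SerializedMessage = Vec<u8>;

// Tags so an echo-round hash can never collide with the hash of a regular
// message ("an extra tag pushed into the digest"). Tag names are illustrative.
const TAG_MESSAGE: &[u8] = b"message";
const TAG_ECHO: &[u8] = b"echo";

fn hash_with_tag(tag: &[u8], bytes: &[u8]) -> [u8; 32] {
    let mut digest = Sha256::new();
    // Length-prefix both fields, as discussed above.
    digest.update((tag.len() as u64).to_le_bytes());
    digest.update(tag);
    digest.update((bytes.len() as u64).to_le_bytes());
    digest.update(bytes);
    digest.finalize().into()
}

// Echo payload before this PR: every received broadcast, re-sent in full, which
// is quadratic when the broadcasts themselves grow with the number of parties.
fn echo_full(received: &BTreeMap<PartyId, SerializedMessage>) -> BTreeMap<PartyId, SerializedMessage> {
    received.clone()
}

// Echo payload after this PR: only the hashes of the received broadcasts.
// The full broadcasts are kept locally so they can still go into evidence
// if a mismatch is detected later.
fn echo_hashes(received: &BTreeMap<PartyId, SerializedMessage>) -> BTreeMap<PartyId, [u8; 32]> {
    received
        .iter()
        .map(|(id, msg)| (*id, hash_with_tag(TAG_ECHO, msg)))
        .collect()
}

fn main() {
    let mut received: BTreeMap<PartyId, SerializedMessage> = BTreeMap::new();
    received.insert(1, b"large broadcast from party 1".to_vec());
    received.insert(2, b"large broadcast from party 2".to_vec());

    // The same tagged hashing can be applied to regular (non-echo) messages too;
    // the extra cost is just the tag bytes pushed into the digest.
    let _regular = hash_with_tag(TAG_MESSAGE, &received[&1]);

    let full = echo_full(&received);
    let hashed = echo_hashes(&received);
    // Same set of senders, but fixed 32-byte values instead of full messages.
    assert_eq!(full.len(), hashed.len());
}
```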