Block proposal size limits #2904
Conversation
Force-pushed from 97b19c0 to 2219d73.
👍 But personally I'd remove the check inside BlockProposal::new again.
maximum_bytes_read_per_block: 47,
maximum_bytes_written_per_block: 53,
(Let's continue the tradition!)
Suggested change:
- maximum_bytes_read_per_block: 47,
- maximum_bytes_written_per_block: 53,
+ maximum_bytes_read_per_block: 53,
+ maximum_bytes_written_per_block: 59,
WorkerError::BlockProposalTooLarge
);
Ok(())
}
Why not use the method BlockProposal::check_size?
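For context, a minimal sketch of what a check like BlockProposal::check_size presumably does: measure the proposal's serialized size and reject it if it exceeds the configured limit, returning WorkerError::BlockProposalTooLarge (the error name appears in the test above). The `payload` field and the `serialized_size` stand-in are made up here so the example is self-contained; the real code would measure the BCS-serialized size of the full proposal.

```rust
#[derive(Debug, PartialEq)]
enum WorkerError {
    BlockProposalTooLarge,
}

struct BlockProposal {
    // Stand-in for the real fields (content, owner, signature, blobs, ...).
    payload: Vec<u8>,
}

impl BlockProposal {
    // Stand-in for something like a BCS serialized-size computation.
    fn serialized_size(&self) -> usize {
        self.payload.len()
    }

    // Reject proposals whose serialized form exceeds the configured limit.
    fn check_size(&self, maximum_block_proposal_size: usize) -> Result<(), WorkerError> {
        if self.serialized_size() > maximum_block_proposal_size {
            return Err(WorkerError::BlockProposalTooLarge);
        }
        Ok(())
    }
}

fn main() {
    let proposal = BlockProposal { payload: vec![0u8; 100] };
    assert!(proposal.check_size(128).is_ok());
    assert_eq!(
        proposal.check_size(64),
        Err(WorkerError::BlockProposalTooLarge)
    );
}
```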
linera-chain/src/data_types.rs (Outdated)
content,
owner: secret.public().into(),
signature,
blobs,
validated_block_certificate: Some(lite_cert),
}
};
block_proposal.check_size(maximum_block_proposal_size)?;
That's not strictly necessary: We'd fail already in the local node anyway if it were too large, and this time we serialize it multiple times locally just to check the size. It would also keep the code simpler to remove this.
check_block_epoch(epoch, block)?;
let policy = committee.policy().clone();
Clone is not necessary if you're passing by reference?
We do an immutable borrow of self.0.chain on L178, but we also try to do a mutable borrow of it on L211. If we don't clone here, we have to clone the whole chain state view on L211, or something like that.
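The borrow conflict described above can be sketched in miniature. The type and method names below (ChainStateView, policy, advance) are illustrative, not the real linera-chain API: a getter that returns a reference keeps an immutable borrow of the whole view alive, so cloning the small Policy value is cheaper than cloning the whole view.

```rust
#[derive(Clone, Debug, PartialEq)]
struct Policy {
    maximum_block_proposal_size: u64,
}

struct ChainStateView {
    policy: Policy,
    height: u64,
}

impl ChainStateView {
    // Returning a reference keeps an immutable borrow of all of `self` alive.
    fn policy(&self) -> &Policy {
        &self.policy
    }

    fn advance(&mut self) {
        self.height += 1;
    }
}

fn with_clone(view: &mut ChainStateView) -> Policy {
    // Cloning ends the immutable borrow immediately, ...
    let policy = view.policy().clone();
    // ... so the mutable borrow below is allowed. Holding `view.policy()`
    // as a reference across this call would be rejected by the borrow checker.
    view.advance();
    policy
}

fn main() {
    let mut view = ChainStateView {
        policy: Policy { maximum_block_proposal_size: 1024 },
        height: 0,
    };
    let policy = with_clone(&mut view);
    assert_eq!(policy.maximum_block_proposal_size, 1024);
    assert_eq!(view.height, 1);
}
```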
I can see that we check whether an already-created BlockProposal is below the limit, but where's the code that takes that limit into account when creating a BlockProposal?
In …
It checks afterwards, but I was thinking about the code that makes sure of this during the construction of a BlockProposal. Also, how do we handle this error? When a client/local node ends up constructing a block that is too big, will it retry with a smaller set of inputs?
Force-pushed from 2219d73 to 669d44b.
Every block we construct has to go through the local node. Currently it just fails AFAIU; it doesn't retry.
That would be pretty complicated and a lot of code (you'd need to continuously partially serialize stuff, and take into account that in BCS e.g. array length encoding itself changes its length, etc.), and I'm not sure it's really worth it: In the vast majority of cases the intended block shouldn't be too big anyway, so I'd avoid adding any latency to the process just to make it fail a little bit earlier in the failure case. (Anyway, just my opinion; happy to be outvoted!)
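To illustrate the point about BCS length encoding: BCS encodes sequence lengths as ULEB128 varints, so the length prefix itself grows as the sequence grows, which is why incremental size accounting is fiddly. A minimal ULEB128-width calculation (written for this example, not taken from the codebase):

```rust
// Number of bytes a ULEB128 varint needs to encode `n`.
fn uleb128_width(mut n: u64) -> usize {
    let mut bytes = 1;
    while n >= 0x80 {
        n >>= 7;
        bytes += 1;
    }
    bytes
}

fn main() {
    // A 127-element sequence has a 1-byte length prefix;
    // a 128-element sequence needs a 2-byte prefix.
    assert_eq!(uleb128_width(127), 1);
    assert_eq!(uleb128_width(128), 2);
    // So appending one 1-byte element can grow the total encoding
    // by more than one byte.
}
```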
Can you point me to the code which makes a decision about what to include, so that the final block proposal doesn't exceed the limits?
We currently don't ensure that a block proposal always fits the limits by making these decisions. We include everything the client is trying to propose, and either succeed or fail depending on the resulting block proposal's size. Then the user of the client is responsible for retrying with a smaller proposal (fewer blobs, etc.).
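The retry responsibility described above could look something like the following on the caller's side. This is a hypothetical sketch, not code from the PR: `proposal_size` is a stand-in for the real serialized-size computation, and the 64-byte fixed overhead is an assumed constant.

```rust
// Stand-in for measuring a proposal's serialized size:
// an assumed 64-byte fixed overhead plus the blob bytes.
fn proposal_size(blobs: &[Vec<u8>]) -> usize {
    64 + blobs.iter().map(|b| b.len()).sum::<usize>()
}

/// Drop blobs until the proposal fits the limit.
/// Returns None if even an empty proposal would be too large.
fn shrink_until_fits(mut blobs: Vec<Vec<u8>>, limit: usize) -> Option<Vec<Vec<u8>>> {
    while proposal_size(&blobs) > limit {
        blobs.pop()?; // give up once there is nothing left to drop
    }
    Some(blobs)
}

fn main() {
    let blobs = vec![vec![0u8; 50], vec![0u8; 50], vec![0u8; 50]];
    // 64 + 150 exceeds 170, but 64 + 100 fits, so one blob is dropped.
    let kept = shrink_until_fits(blobs, 170).unwrap();
    assert_eq!(kept.len(), 2);
}
```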
Here we drop incoming failing messages until the proposal succeeds in the local node. We then handle it in the local node as well, which applies all the same checks as the validators do.
But this requires proposal re-execution, right? And we want to get rid of this (execution on the client), so the logic I mentioned earlier will have to be added.
Force-pushed from 669d44b to ce8e26e.
Partially, but maybe only for chains we don't own (and therefore don't propose blocks on). Otherwise the only way to be sure that the proposal really is valid is to execute it. E.g. if an operation execution returns an error, that makes the block invalid. So I don't see why we'd do other checks but not this one.
(We certainly want to deduplicate execution, so that we don't do it three times in the client. But one execution will still do all the checks.)
Because we have loops (at least two, but maybe more) in the client during block proposal where we re-execute a block with a different set of arguments. This cannot be good for latency.
I think this isn't a correct way of enforcing the block limit (see my comments above) but this implementation looks correct.
Absolutely! We do want to get rid of those loops, but that's beyond the scope of this PR.
Force-pushed from ce8e26e to affe1d0.
Force-pushed from affe1d0 to 248a0e8.
Force-pushed from 248a0e8 to dded250.
Motivation
Large block proposals can make messages containing them exceed the gRPC message limit, and require a lot of storage and bandwidth, as they may contain several blobs.
Proposal
Add configurable block proposal size limits. This is part of #2199.
Test Plan
CI, incremented one of the tests
Release Plan
devnet branch
testnet branch