
Block proposal size limits #2904

Merged: 1 commit into main on Nov 16, 2024

Conversation

@ndr-ds (Contributor) commented Nov 15, 2024

Motivation

Large block proposals can make the messages containing them exceed the gRPC message limit, and they can require a lot of storage and bandwidth, since they may contain several blobs.

Proposal

Add configurable block proposal size limits. This is part of #2199
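The review below refers to a `BlockProposal::check_size` method and a `WorkerError::BlockProposalTooLarge` error. A minimal sketch of what such a check might look like; the field and type shapes here are illustrative assumptions, not the actual linera types:

```rust
// Illustrative sketch only: the real BlockProposal carries content, an
// owner, a signature, and blobs, and measures its serialized (BCS) size.
#[derive(Debug, PartialEq)]
pub enum WorkerError {
    BlockProposalTooLarge,
}

pub struct BlockProposal {
    // Stand-in for the serialized proposal, including any blobs.
    pub bytes: Vec<u8>,
}

impl BlockProposal {
    /// Errors out if the proposal exceeds the configured size limit.
    pub fn check_size(&self, maximum_block_proposal_size: usize) -> Result<(), WorkerError> {
        if self.bytes.len() > maximum_block_proposal_size {
            Err(WorkerError::BlockProposalTooLarge)
        } else {
            Ok(())
        }
    }
}
```

Making the limit configurable means validators and clients can agree on it via chain policy rather than a hard-coded constant.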

Test Plan

CI, incremented one of the tests

Release Plan

  • These changes should be backported to the latest devnet branch
  • These changes should be backported to the latest testnet branch

@ndr-ds (Contributor, Author) commented Nov 15, 2024

This stack of pull requests is managed by Graphite.

@ndr-ds ndr-ds force-pushed the 11-15-block_proposal_size_limits branch 4 times, most recently from 97b19c0 to 2219d73 Compare November 16, 2024 05:34
@ndr-ds ndr-ds marked this pull request as ready for review November 16, 2024 05:34
@ndr-ds ndr-ds requested review from afck, deuszx, jvff and ma2bd November 16, 2024 05:35
@afck (Contributor) left a comment

👍
But personally I'd remove the check inside BlockProposal::new again.

Comment on lines 154 to 155
maximum_bytes_read_per_block: 47,
maximum_bytes_written_per_block: 53,
(Contributor) (Let's continue the tradition!)

Suggested change:
-maximum_bytes_read_per_block: 47,
-maximum_bytes_written_per_block: 53,
+maximum_bytes_read_per_block: 53,
+maximum_bytes_written_per_block: 59,

linera-core/src/unit_tests/client_tests.rs (review thread resolved)
WorkerError::BlockProposalTooLarge
);
Ok(())
}

Why not use the method BlockProposal::check_size?

content,
owner: secret.public().into(),
signature,
blobs,
validated_block_certificate: Some(lite_cert),
}
};
block_proposal.check_size(maximum_block_proposal_size)?;
That's not strictly necessary: we'd already fail in the local node anyway if it were too large, and this way we serialize it multiple times locally just to check the size. Removing this would also keep the code simpler.

check_block_epoch(epoch, block)?;
let policy = committee.policy().clone();
Clone is not necessary if you're passing by reference?

@ndr-ds (Contributor, Author) replied

We do an immutable borrow of self.0.chain on L178, but we also try to do a mutable borrow of it on L211. If we don't clone here, we'd have to clone the whole chain state view on L211, or something like that.
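The borrow conflict described here is a common Rust pattern: a `&self` method call borrows the whole struct, so holding its result across a later `&mut` borrow doesn't compile, while cloning releases the borrow immediately. A toy illustration; the types are made up, not the actual chain state view:

```rust
// Toy stand-in for a state view holding a policy and some mutable state.
struct ChainStateView {
    policy: String,
    inbox: Vec<String>,
}

impl ChainStateView {
    // Borrows all of `self`, just as `committee.policy()` borrows the committee.
    fn policy(&self) -> &String {
        &self.policy
    }

    fn apply(&mut self, policy: &str) {
        self.inbox.push(format!("applied {policy}"));
    }
}

fn process(view: &mut ChainStateView) {
    // Cloning ends the immutable borrow of `view` right away...
    let policy = view.policy().clone();
    // ...so this mutable borrow is fine. Keeping `view.policy()` as a live
    // reference across this call would be rejected by the borrow checker.
    view.apply(&policy);
}
```

The alternative, as the comment notes, would be to clone the much larger state view instead, which is why cloning the small policy value is the cheaper fix.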

@deuszx (Contributor) left a comment

I can see that we check whether an already-created BlockProposal is below the limit, but where's the code that takes that limit into account when creating a BlockProposal?

@afck commented Nov 16, 2024

> where's the code that takes that limit into account when creating a BlockProposal

In BlockProposal::new_… but as I wrote above I think it's redundant and somewhat expensive, since we handle and check it in the local node anyway.

@deuszx commented Nov 16, 2024

>> where's the code that takes that limit into account when creating a BlockProposal
>
> In BlockProposal::new_… but as I wrote above I think it's redundant and somewhat expensive, since we handle and check it in the local node anyway.

It checks afterwards, but I was thinking about the code that makes sure, during construction of a Block, that we don't exceed this limit, or that we never even try to construct a BlockProposal which would exceed the limit (for example by limiting the number of blobs or messages we put there).

Also, how do we handle this error? When a client/local node ends up constructing a block too big, will it retry with a smaller set of inputs?

@ndr-ds ndr-ds force-pushed the 11-15-block_proposal_size_limits branch from 2219d73 to 669d44b Compare November 16, 2024 12:06
@ndr-ds (Contributor, Author) commented Nov 16, 2024

>>> where's the code that takes that limit into account when creating a BlockProposal
>>
>> In BlockProposal::new_… but as I wrote above I think it's redundant and somewhat expensive, since we handle and check it in the local node anyway.
>
> It checks afterwards but was thinking about the code that makes sure during a construction of Block we don't exceed this limit or that we never even try to construct a BlockProposal which would exceed the limit (for example by limiting the number of blobs we put there or messages).
>
> Also, how do we handle this error? When a client/local node ends up constructing a block too big, will it retry with a smaller set of inputs?

Every block we construct has to go through the local handle_block_proposal call in the local_node, so I think this should be fine? Unless you build the block proposal but don't actually try to submit it, but then maybe we just don't care.

Currently it just fails, AFAIU; it doesn't retry.

@afck commented Nov 16, 2024

> during a construction

That would be pretty complicated and a lot of code (you'd need to continuously partially serialize things, and take into account that in BCS e.g. the array length encoding itself changes its length, etc.), and I'm not sure it's really worth it: in the vast majority of cases the intended block shouldn't be too big anyway, so I'd avoid adding latency to the process just to make it fail a little earlier in the failure case.

(Anyway, just my opinion; happy to be outvoted!)
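The BCS detail mentioned above is that sequence lengths are encoded as ULEB128 varints, so appending one element can also grow the length prefix itself (e.g. at 128 elements the prefix goes from 1 byte to 2). A self-contained sketch of that encoding; this is not the actual bcs crate:

```rust
// ULEB128 encoding of a length, as used for sequence lengths in BCS.
// Each byte carries 7 bits of the value; the high bit marks continuation.
fn uleb128(mut value: u64) -> Vec<u8> {
    let mut out = Vec::new();
    loop {
        let byte = (value & 0x7f) as u8;
        value >>= 7;
        if value == 0 {
            out.push(byte);
            break;
        }
        // More bits remain: set the continuation flag.
        out.push(byte | 0x80);
    }
    out
}
```

This is why incremental size accounting during block construction is fiddly: the delta from adding one element is not just the element's own serialized size.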

@deuszx commented Nov 16, 2024

Can you point me to the code which makes a decision about:

  1. Which operations to accept
  2. Which incoming messages to accept
  3. Which blobs to accept
  4. etc

so that the final block proposal doesn't exceed the limits?

@ndr-ds (Contributor, Author) commented Nov 16, 2024

> Can you point me to the code which makes a decision about:
>
> 1. Which operations to accept
> 2. Which incoming messages to accept
> 3. Which blobs to accept
> 4. etc
>
> So that the final block proposal doesn't exceed the limits

We currently don't ensure that a block proposal always fits the limits by making these decisions. We include everything the client is trying to propose, and either succeed or fail depending on the resulting block proposal's size. The user of the client is then responsible for retrying with a smaller proposal (fewer blobs, etc.).
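In other words, any size-aware retry currently lives with the caller. A hypothetical caller-side loop (none of these names exist in the actual codebase) might look like:

```rust
// Hypothetical caller-side retry: shrink the blob set until the proposal
// fits the limit. Purely illustrative; the real client exposes no such API,
// and `limit` here stands in for the configured maximum proposal size.
fn propose_with_retry(mut blobs: Vec<Vec<u8>>, limit: usize) -> usize {
    loop {
        let size: usize = blobs.iter().map(|b| b.len()).sum();
        if size <= limit {
            // "Submission" succeeds; report how many blobs made it in.
            return blobs.len();
        }
        // Proposal too large: retry with half as many blobs.
        blobs.truncate(blobs.len() / 2);
    }
}
```

A real caller would also have to carry the dropped blobs over into subsequent proposals rather than discarding them.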

@afck commented Nov 16, 2024

Here we drop failing incoming messages until the proposal succeeds in the local node. We then handle it in the local node as well, which applies all the same checks as the validators do.

@deuszx commented Nov 16, 2024

> Here we drop incoming failing messages until the proposal succeeds in the local node. We then handle it in the local node as well which applies all the same checks as the validators do.

But this requires proposal re-execution, right? And we want to get rid of this (execution on the client), so the logic I mentioned earlier will have to be added.

@ndr-ds ndr-ds force-pushed the 11-15-block_proposal_size_limits branch from 669d44b to ce8e26e Compare November 16, 2024 12:25
@afck commented Nov 16, 2024

> and we want to get rid of this (execution on the client)

Partially, but maybe only for chains we don't own (and therefore don't propose blocks on).

Otherwise the only way to be sure that the proposal really is valid is to execute it. E.g. if an operation execution returns an error, that makes the block invalid. So I don't see why we'd do other checks but not this one.

@afck commented Nov 16, 2024

(We certainly want to deduplicate execution, so that we don't do it three times in the client. But one execution will still do all the checks.)

@deuszx commented Nov 16, 2024

>> and we want to get rid of this (execution on the client)
>
> Partially, but maybe only for chains we don't own (and therefore don't propose blocks on).
>
> Otherwise the only way to be sure that the proposal really is valid is to execute it. E.g. if an operation execution returns an error, that makes the block invalid. So I don't see why we'd do other checks but not this one.

Because we have loops (at least two, maybe more) in the client during block proposal where we re-execute a block with a different set of arguments. That can't be good for latency.

@deuszx (Contributor) left a comment

I think this isn't the correct way of enforcing the block limit (see my comments above), but this implementation looks correct.

@afck commented Nov 16, 2024

> B/c we have loops (at least two, but maybe more) in the client during block proposal where we re-execute a block with a different set of arguments. This cannot be good for the latency.

Absolutely! We do want to get rid of those loops; but that's beyond the scope of this PR.

@ndr-ds ndr-ds force-pushed the 11-15-block_proposal_size_limits branch from ce8e26e to affe1d0 Compare November 16, 2024 12:32
@ndr-ds ndr-ds force-pushed the 11-15-block_proposal_size_limits branch from affe1d0 to 248a0e8 Compare November 16, 2024 13:11
@ndr-ds ndr-ds force-pushed the 11-15-block_proposal_size_limits branch from 248a0e8 to dded250 Compare November 16, 2024 13:13
@ndr-ds (Contributor, Author) commented Nov 16, 2024

Merge activity

  • Nov 16, 8:37 AM EST: A user started a stack merge that includes this pull request via Graphite.
  • Nov 16, 8:37 AM EST: A user merged this pull request with Graphite.

@ndr-ds ndr-ds merged commit bc4a8c8 into main Nov 16, 2024
22 checks passed
@ndr-ds ndr-ds deleted the 11-15-block_proposal_size_limits branch November 16, 2024 13:37