
max request size limitation without buffer #36946

Open
wbpcode opened this issue Nov 1, 2024 · 8 comments
Labels
area/buffer area/http enhancement Feature requests. Not bugs or questions. no stalebot Disables stalebot from closing an issue

Comments

@wbpcode
Member

wbpcode commented Nov 1, 2024


Now, all the existing ways to limit the max request body size in Envoy need to buffer the request body. But I wonder whether it makes sense to add a feature to limit the request body size without buffering the body.
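For context, one existing buffering approach is Envoy's `buffer` HTTP filter, which holds the entire body in memory and rejects oversized requests with a 413. A minimal sketch of such a config, assuming the v3 API (worth double-checking field names against the current docs):

```yaml
# Existing approach: the buffer filter holds the full request body in
# memory before forwarding, and replies 413 if it exceeds the limit.
http_filters:
  - name: envoy.filters.http.buffer
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.buffer.v3.Buffer
      max_request_bytes: 1048576   # 1 MiB cap, buffered at the proxy
  - name: envoy.filters.http.router
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
```

The proposal below avoids exactly this buffering cost.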

The request body would be proxied to the upstream as it arrives. But if the limit is reached, a 413 would be returned and the whole connection would be closed directly.

From the client's view, Envoy responded with a 413; from the server's view, Envoy cancelled the request by closing the connection. This would limit the max request body size without needing to keep the body in a memory buffer.
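The proposed behavior can be modeled with a short sketch (illustrative only, not Envoy code; all names are hypothetical): chunks are forwarded upstream as they arrive, only a running byte count is kept, and once the limit is exceeded a 413 is returned and the connection is marked for close.

```python
def stream_with_limit(chunks, max_bytes, forward):
    """Forward body chunks upstream until max_bytes is exceeded.

    Returns (status, close_connection). Nothing is buffered beyond
    the chunk currently being forwarded.
    """
    received = 0
    for chunk in chunks:
        received += len(chunk)
        if received > max_bytes:
            # Limit hit: reply 413 downstream and close the connection;
            # the upstream sees the request cancelled mid-body.
            return 413, True
        forward(chunk)  # proxy the chunk upstream immediately
    return 200, False
```

Note that any chunks already forwarded before the limit was hit cannot be recalled, which is why the upstream side of this design has to tolerate a cancelled request.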

🤔

@wbpcode wbpcode added enhancement Feature requests. Not bugs or questions. triage Issue requires triage labels Nov 1, 2024
@wufanqqfsc

It's reasonable to return a 413, but why do we need to close the connection? Reaching the max request body size for this request doesn't mean other requests via this connection will also be oversized, right?

@wbpcode
Member Author

wbpcode commented Nov 4, 2024

It's reasonable to return a 413, but why do we need to close the connection? Reaching the max request body size for this request doesn't mean other requests via this connection will also be oversized, right?

It depends on the protocol. For HTTP/1.1, the connection will be closed because the response completes before the request does. For HTTP/2, the connection may be kept.

@KBaichoo
Contributor

KBaichoo commented Nov 4, 2024

To confirm the problem: we want to have a max request payload size while streaming the request to avoid buffering the entire request in memory at the proxy?

@KBaichoo KBaichoo added area/http area/buffer and removed triage Issue requires triage labels Nov 4, 2024
@wbpcode
Member Author

wbpcode commented Nov 4, 2024

To confirm the problem: we want to have a max request payload size while streaming the request to avoid buffering the entire request in memory at the proxy?

yeah.

@wufanqqfsc

It's reasonable to return a 413, but why do we need to close the connection? Reaching the max request body size for this request doesn't mean other requests via this connection will also be oversized, right?

It depends on the protocol. For HTTP/1.1, the connection will be closed because the response completes before the request does. For HTTP/2, the connection may be kept.

What about HTTP/1.1 with keep-alive?

@wbpcode
Member Author

wbpcode commented Nov 5, 2024

It's reasonable to return a 413, but why do we need to close the connection? Reaching the max request body size for this request doesn't mean other requests via this connection will also be oversized, right?

It depends on the protocol. For HTTP/1.1, the connection will be closed because the response completes before the request does. For HTTP/2, the connection may be kept.

What about HTTP/1.1 with keep-alive?

HTTP/1.1 has no frames or streams like HTTP/2 on top of the connection. So even with keep-alive, if the request cannot end normally, the connection should be closed anyway to avoid possible dirty data.
Any sendLocalReply in Envoy that happens before the request or response has completed normally may result in the HTTP/1.1 connection being closed.
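A small illustration of the "dirty data" problem (a sketch, not Envoy code): on a kept-alive HTTP/1.1 connection, the bytes after one response are parsed as the start of the next request, so leftover body bytes from a rejected upload would corrupt parsing.

```python
def next_request_line(wire: bytes) -> bytes:
    """What an HTTP/1.1 parser would take as the next request line."""
    return wire.split(b"\r\n", 1)[0]

# Connection kept open: unread body bytes from the rejected request
# sit in front of the next request line on the wire.
leftover_body = b"...rest of the rejected upload..."
next_request = b"GET /next HTTP/1.1\r\nHost: example.com\r\n\r\n"
kept_open = leftover_body + next_request
print(next_request_line(kept_open))     # garbage, not a valid request line

# Connection closed and reopened: parsing starts cleanly.
print(next_request_line(next_request))  # b'GET /next HTTP/1.1'
```

Closing the connection discards the unread body bytes, which is why the early 413 forces a close on HTTP/1.1 but not on HTTP/2, where the stream can be reset independently.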

@wufanqqfsc

OK, the proposed solution here is:
1. For the downstream: Envoy counts the received body bytes and compares the running total with the target max body size. If it is over the size, Envoy sends a local reply (413) to the downstream.
2. For the upstream: you want to close the connection to avoid more dirty data being transferred to the upstream service, right? But some dirty data will already have been sent to the upstream anyway. So we want Envoy to close the connection, and the upstream service should handle the connection-lost event and clean up the buffered data.

I just wonder how this body limit is done by other proxies like nginx. Is it possible to have the same behavior? That would make proxy replacement much easier and transparent to the upstream service.


github-actions bot commented Dec 6, 2024

This issue has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in the next 7 days unless it is tagged "help wanted" or "no stalebot" or other activity occurs. Thank you for your contributions.

@github-actions github-actions bot added the stale stalebot believes this issue/PR has not been touched recently label Dec 6, 2024
@wbpcode wbpcode added no stalebot Disables stalebot from closing an issue and removed stale stalebot believes this issue/PR has not been touched recently labels Dec 6, 2024