`Bad_request` if request is more than 4152 bytes on OSX #18
Any guidance on where I can tweak httpaf to support this?
Updating the title, as I realized while trying to debug this that it happens on any request that's larger than 4152 bytes (Wireshark was handy for keeping track of this). This may be specific to OSX; still trying to track down the cause.
Try setting a larger read buffer size.
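The suggestion above can be sketched as follows. This is a hedged illustration, not the thread's actual code: `Httpaf.Config.default` and its `read_buffer_size` field are from httpaf's `Config` module, while the `Httpaf_lwt_unix.Server` module path and the handler names are assumptions that depend on your runtime binding.

```ocaml
(* Hedged sketch: enlarge httpaf's read buffer so that large request
   headers fit. The read_buffer_size field is from Httpaf.Config;
   the server module path below is an assumption (it varies by the
   Lwt/Async binding in use). *)
let config =
  { Httpaf.Config.default with read_buffer_size = 0x10000 (* 64 KiB *) }

let connection_handler =
  (* request_handler and error_handler are the application's own. *)
  Httpaf_lwt_unix.Server.create_connection_handler
    ~config
    ~request_handler
    ~error_handler
```

Note that this only raises the ceiling; any request whose headers exceed the new buffer size will still be rejected, as discussed below.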
Yeah, that does the trick: bumping it fixes the problem until the request gets larger still. Is that intended behavior (i.e. should we be carefully selecting the buffer size to allow for possibly-large requests), or a bug?
This isn't due to a large request body, but rather a large header. Request bodies that can't fit into the read buffer are delivered to the application in chunks of at most the read buffer size.

The same is not true for headers. That part of the parser does as little copying and buffering as possible. As a result, if any one component can't fit into the read buffer, that causes a "parser stall", which in turn results in the bad request response that you're observing. This is by design, both for efficiency and to minimize the effect of DoSing... which I guess is the same thing.

A few things can be done to ameliorate this:

- Right now there is an implicit requirement that the entire request message (besides the body) fit into the read buffer. That can be relaxed slightly so that only the first line of the request has to fit into the read buffer. That'll leave a bit more room for the headers.
- The requirement could be further relaxed to allow each header line to fit in the read buffer on its own.

But regardless, there is always going to be an upper bound on the length of the components of request messages (besides the body), and that's for sure by design. I think the best thing to do is to relax the parser in the first way I suggested, and then document this.
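The body-streaming behavior described above (chunks of at most the read buffer size) can be sketched with httpaf's pull-style body API. This is a hedged sketch: `Body.schedule_read` with `~on_eof`/`~on_read` callbacks is httpaf's reader interface, while the `read_all` helper and its `~on_done` parameter are hypothetical names introduced here for illustration.

```ocaml
(* Hedged sketch: a handler never sees a body chunk larger than the
   read buffer, so it loops over schedule_read until EOF. read_all
   and on_done are illustrative names, not httpaf API. *)
let rec read_all body ~on_done =
  Httpaf.Body.schedule_read body
    ~on_eof:on_done
    ~on_read:(fun buffer ~off ~len ->
      (* Each chunk is at most read_buffer_size bytes. *)
      ignore (Bigstringaf.substring buffer ~off ~len);
      read_all body ~on_done)
```

Headers get no such loop: the parser requires them to be resident in the buffer, which is why oversized headers stall it.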
I agree with the "best thing to do" part, but probably with one addition: make it easy to catch/handle this in a way that lets us return a more appropriate response (e.g. a `413`).
This refs #18 in that users can now set the buffer size they want. It also gets rid of a pending `TODO` in the code.
Upstream issue inhabitedtype/httpaf#18 details a case where the http/af parser is unable to parse a message if the read buffer size isn't as big as all the bytes that form the message headers. While it is desirable, and a design goal of the library, to be efficient in parsing and thus avoid DoS attacks, it is more sensible to allow each message header to fit within the read buffer size up to its limit. This diff makes that change, which better accommodates real world use cases for the default read buffer size.
Example curl to trigger the issue (assuming you have an httpaf server running):
(this came from a real-world case that we have to support, unfortunately)
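The original curl command was not preserved in this thread, but a reproduction along these lines should behave the same way. This is a hypothetical reconstruction: the header name `X-Big-Header`, the port `8080`, and the value length of 5000 bytes are illustrative choices, not the reporter's actual values; the only requirement is that the headers total more than the observed ~4152-byte limit.

```shell
# Hypothetical reproduction: build a single header value that alone
# exceeds the ~4152-byte limit observed in this issue.
long_value=$(printf 'a%.0s' $(seq 1 5000))
echo "header value length: ${#long_value}"

# Against a local httpaf server (port and header name are illustrative),
# this should provoke the 400 Bad request described above:
#   curl -v -H "X-Big-Header: ${long_value}" http://localhost:8080/
```

The `printf 'a%.0s' ...` idiom repeats the literal `a` once per argument, producing a 5000-byte string without any external tools beyond `seq`.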
The above snippet should trigger the `~error_handler` in `create_connection_handler`, with an error that matches `| #Status.standard as error => print_endline (Status.default_reason_phrase error)` and returns `Bad request`. I suspect this should be `413 Request_entity_too_large` instead.

I searched a bit, but I'm not sure if there's a knob we can tweak somewhere to allow for larger headers.