Address TODOs #383
base: main
Conversation
source/h2_decoder.c
Outdated
if (aws_string_eq_c_str(method_string, "CONNECT")) {
    if (scheme_string || path_string) {
        /* RFC-9113 8.5 The ":scheme" and ":path" pseudo-header fields MUST be omitted for CONNECT method */
        DECODER_LOG(ERROR, decoder, "CONNECT request has :path or :scheme, which must be omitted.");
do we need to check that these things aren't empty strings?
I see you checking the request_string, but not the others
I'm still wondering about this: should we be doing more validation on the pseudo-headers?
Like, are any of them allowed to be ""?
It was checked before, in s_process_header_field, here.
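For context, a minimal sketch of the kind of emptiness check being discussed. The helper name and signature are made up for illustration; it assumes the decoder holds the pseudo-headers as aws_string pointers, with NULL meaning "absent", as in the snippet above.

```c
#include <aws/common/string.h>
#include <stdbool.h>

/* Hypothetical helper: reject pseudo-headers that are present but empty. */
static bool s_pseudo_header_values_are_nonempty(
    const struct aws_string *method_string,
    const struct aws_string *scheme_string,
    const struct aws_string *path_string) {

    if (method_string == NULL || method_string->len == 0) {
        return false; /* :method is required and must not be "" */
    }
    if (scheme_string != NULL && scheme_string->len == 0) {
        return false; /* :scheme, when present, must not be "" */
    }
    if (path_string != NULL && path_string->len == 0) {
        return false; /* :path, when present, must not be "" */
    }
    return true;
}
```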
source/h2_frames.c
Outdated
void aws_h2_frame_encoder_set_setting_header_table_size(struct aws_h2_frame_encoder *encoder, uint32_t data) {
    /* Setting for dynamic table size changed from peer, we will update the dynamic table size when we encode the next
     * header block. The spec requires us to update the table if the updated size is smaller than the current size, but
     * we can still update it even if not */
    aws_hpack_set_max_table_size(encoder->hpack, data);
    aws_hpack_set_protocol_max_size_setting(encoder->hpack, data);
I just spent like an hour trying to make sense of whether or not we're doing the right thing with this hpack resize stuff. MAN this stuff is hard to understand. We just have a lot of variables with similar names, and the 1 hpack class that has both encoding and decoding logic. Can I make some suggestions?
- Don't have aws_h2_frame_encoder_set_setting_header_table_size() call aws_hpack_set_protocol_max_size_setting() at all. Turns out that function has no effect on an aws_hpack_context that is being used for encoding.
- Rename aws_hpack_set_max_table_size() ->
  /* Call this after receiving SETTINGS_HEADER_TABLE_SIZE from peer and sending the ACK.
   * The hpack-encoder remembers all size updates, and makes sure to encode the proper
   * number of Dynamic Table Size Updates the next time a header block is sent. */
  AWS_HTTP_API
  void aws_hpack_encoder_update_max_table_size(struct aws_hpack_context *context, uint32_t new_max_size);
- Rename aws_hpack_set_protocol_max_size_setting() ->
  /* Call this after sending SETTINGS_HEADER_TABLE_SIZE and receiving ACK from the peer.
   * The hpack-decoder remembers all size updates, and makes sure that the peer
   * sends the appropriate Dynamic Table Size Updates in the next header block we receive. */
  AWS_HTTP_API
  void aws_hpack_decoder_update_max_table_size(struct aws_hpack_context *context, uint32_t new_max_size);
- I still don't think aws_hpack_set_protocol_max_size_setting() aka aws_hpack_decoder_update_max_table_size() is doing the right thing. In the same way the encoder tracks a smallest and last size, I think we need to do similar tracking for the decoder so we can properly check that the peer sends the right number of updates?

Man, this is why we need to split aws_hpack_context into aws_hpack_encoder and aws_hpack_decoder. It's too confusing having everyone's variables and functions all mixed up together.
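For what it's worth, a rough sketch of what that split might look like. The type and field names here are hypothetical, not the current aws-c-http API; the point is just that each side owns its own dynamic_table_size_update bookkeeping instead of sharing one context.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical shape of the split -- illustration only. */
struct hpack_encoder_sketch {
    /* ...dynamic table, huffman encoder, etc... */
    struct {
        size_t latest_value;   /* most recent SETTINGS_HEADER_TABLE_SIZE received from the peer */
        size_t smallest_value; /* smallest value seen since the last header block we encoded */
        bool pending;          /* we still owe the peer Dynamic Table Size Update instruction(s) */
    } dynamic_table_size_update;
};

struct hpack_decoder_sketch {
    /* ...dynamic table, huffman decoder, etc... */
    struct {
        size_t latest_value;   /* last SETTINGS_HEADER_TABLE_SIZE we sent that the peer ACKed */
        size_t smallest_value; /* smallest ACKed value since the last header block we decoded */
        bool pending;          /* the peer still owes us Dynamic Table Size Update instruction(s) */
    } dynamic_table_size_update;
};
```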
@@ -54,6 +54,7 @@ enum aws_http_errors {
    AWS_ERROR_HTTP_STREAM_MANAGER_SHUTTING_DOWN,
    AWS_ERROR_HTTP_STREAM_MANAGER_CONNECTION_ACQUIRE_FAILURE,
    AWS_ERROR_HTTP_STREAM_MANAGER_UNEXPECTED_HTTP_VERSION,
    AWS_ERROR_HTTP_REQUIRED_PSEUDO_HEADER_MISSING,
I don't see this error used anywhere.
Remove it if you're not using it
it was used here
@@ -1253,6 +1288,21 @@ static struct aws_h2err s_process_header_field(
        goto malformed;
    }

    if (!aws_strutil_is_http_field_value(header_field->value)) {
I just found a new TODO to add to our code. From RFC-9110:
A field value does not include leading or trailing whitespace. When a specific version of HTTP allows such whitespace to appear in a message, a field parsing implementation MUST exclude such whitespace prior to evaluating the field value
I see the H1 encoder and decoder using aws_strutil_trim_http_whitespace(), but I don't see us doing it anywhere in our H2 code.
Re "When a specific version of HTTP allows such whitespace to appear in a message": for h2, "A field value MUST NOT start or end with an ASCII whitespace character (ASCII SP or HTAB, 0x20 or 0x09)" (here). Since h2 never allows such whitespace, I think we should not trim the whitespace for h2.
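A minimal standalone sketch of that policy: reject rather than trim values with leading or trailing SP/HTAB. The helper name and signature are made up for illustration; the diff above relies on aws_strutil_is_http_field_value() for the actual check.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative check: an HTTP/2 field value is malformed if it starts or ends
 * with SP (0x20) or HTAB (0x09), so we reject it instead of trimming. */
static bool s_h2_field_value_has_bad_edge_whitespace(const uint8_t *value, size_t len) {
    if (len == 0) {
        return false; /* empty value has no edges to check */
    }
    uint8_t first = value[0];
    uint8_t last = value[len - 1];
    return first == 0x20 || first == 0x09 || last == 0x20 || last == 0x09;
}
```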
source/hpack_decoder.c
Outdated
    decoder->progress_entry.u.literal.prefix_size = 4;
    decoder->progress_entry.state = HPACK_ENTRY_STATE_LITERAL_BEGIN;
}
if (decoder->dynamic_table_protocol_max_size_setting < decoder->context.dynamic_table.size) {
I don't think the logic here is correct.

The encoder has the correct logic. In the dynamic_table_size_update struct it tracks latest_value and smallest_value separately:

struct {
    size_t latest_value;
    size_t smallest_value;
    bool pending;
} dynamic_table_size_update;

The decoder needs to do something similar: if multiple SETTINGS packets are received back to back and change the table size to 10, then to 100, the decoder needs to confirm that it receives an update <= 10.
(RFC-7541 4.2)
Hmmm, I read it again, and I'm still confused about:

"An endpoint MUST treat a field block that follows an acknowledgment of the reduction to the maximum dynamic table size as a connection error (Section 5.4.1) of type COMPRESSION_ERROR if it does not start with a conformant Dynamic Table Size Update instruction."

That requirement here is on the decoder, especially the part about starting with a conformant Dynamic Table Size Update instruction. What is a conformant update instruction to start with, for the case you are pointing out? I don't think there is any clear statement about what kind of Dynamic Table Size Update instruction it must start with.

From the hpack spec here:

"the smallest maximum table size that occurs in that interval MUST be signaled in a dynamic table size update. The final maximum size is always signaled, resulting in at most two dynamic table size updates"

I guess the first one could be larger than the smallest value, as long as the second one is smaller than it. :(
Re "resulting in at most two dynamic table size updates": I don't see any place that enforces the dynamic table size updates to be no more than one in the normal case. But I guess they assume that's the case, from here:

"In HTTP/2, this limit is the last value of the SETTINGS_HEADER_TABLE_SIZE parameter"

Here is my understanding (see the sketch below):
- We can receive at most two consecutive Dynamic Table Size Update instructions after we ack settings that could result in reducing the size.
- One of those two MUST be smaller than the smallest max.
- Every one of them MUST be smaller than the latest max.
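A sketch of that understanding in code. The names are hypothetical, it uses <= where the bullets say "smaller than", and it deliberately simplifies the pending condition to "any ACKed change since the last header block" rather than only a reduction.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical decoder-side bookkeeping (illustration only). */
struct table_size_update_tracker {
    size_t latest_value;   /* last SETTINGS_HEADER_TABLE_SIZE we sent and the peer ACKed */
    size_t smallest_value; /* smallest ACKed value since the last header block */
    bool pending;          /* an ACKed change happened since the last header block */
};

/* Check the Dynamic Table Size Update values seen at the start of a header block:
 *  - at most two updates,
 *  - every update <= latest_value,
 *  - if a change is pending, at least one update <= smallest_value.
 * Simplification: the spec only demands the <= smallest_value update after a
 * reduction, whereas this sketch demands it after any pending change. */
static bool s_validate_table_size_updates(
    const struct table_size_update_tracker *tracker,
    const size_t *updates,
    size_t update_count) {

    if (update_count > 2) {
        return false;
    }

    bool signaled_smallest = false;
    for (size_t i = 0; i < update_count; ++i) {
        if (updates[i] > tracker->latest_value) {
            return false;
        }
        if (updates[i] <= tracker->smallest_value) {
            signaled_smallest = true;
        }
    }

    return !tracker->pending || signaled_smallest;
}
```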
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.