Add packet batch trigger for better packet handling #60
base: dev
Conversation
Force-pushed from b7af646 to 8a5b63b
include/quic/udp.hpp (outdated)
using receive_callback_t = std::function<void(const Packet& pkt)>;
using receive_batch_callback_t = std::function<void()>;
I have a knee-jerk reaction to suffixing with {}_t now, and you only have yourself to blame
include/quic/udp.hpp (outdated)
@@ -103,6 +113,8 @@ namespace oxen::quic

    event_ptr rev_ = nullptr;
    receive_callback_t receive_callback_;
    receive_batch_callback_t receive_callback_batch_;
Maybe batch_processing_cb?
src/udp.cpp (outdated)
if (self.pending_receive_batch_)
{
    self.pending_receive_batch_ = false;
    if (self.receive_callback_batch_)
        self.receive_callback_batch_();
}
I'm wondering about this boolean (self.pending_receive_batch_). A few questions/thoughts we can chat about later (rough sketch of the third idea below):

- I feel like there shouldn't be a case where self.pending_receive_batch_ would be true and self.receive_callback_batch_ would not be set? If the user doesn't provide a batch processing cb, I would guess that the batch processing logic should be avoided entirely? Though that boolean is flipped to true at line 206 in ::process_packet, implying it's flipped and checked regardless of whether batch processing is "active" for this endpoint or not.
- What if we used the set-ness of the callback as an indicator of whether batch processing is "active" or "inactive" for this endpoint? Then the boolean could be a "we have a full batch" signal of sorts.
- We could wrap the callback in a lightweight struct that has two members: a callback and an int/size_t specifying the size of the batch. The size parameter could even default to something sensible at or below the loop limit of 64 packets.

Follow-on... it would be cool to speed-test this with different batch sizes to see what works most optimally.

Follow-on follow-on... maybe even approximate grid search cross-validation across all our configurable parameters...
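Something like this minimal sketch of the wrapper idea (the struct name and default value are hypothetical, not part of this patch):

#include <cstddef>
#include <functional>

// Hypothetical wrapper: bundles the batch callback with a batch size.
// The set-ness of the callback doubles as the "batch processing is
// active for this endpoint" indicator.
struct batch_receive_handler
{
    std::function<void()> callback;

    // Desired batch size; defaults to the per-loop receive limit of 64.
    std::size_t batch_size = 64;

    explicit operator bool() const { return static_cast<bool>(callback); }
};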
tests/007-datagrams.cpp (outdated)
auto batch_counter_before_final = f.get();
REQUIRE(data_counter == 31);
REQUIRE(batch_counter_before_final > batches_before_flood);
REQUIRE(batch_counter == batch_counter_before_final + 1);
Interestingly, the last REQUIRE failed on a few debug CI builds.
Force-pushed from 8a5b63b to 025fe0e
This is dropped now.
This adds an optional packet post-receive callback to Endpoint that allows Endpoint to signal the application when it is done processing a set of incoming packets -- either because it has processed them all, or because it hit the max-recv-per-loop limit (64).

This is designed to allow an application to deal with packets, especially datagrams, more efficiently in batches: the application can use the datagram handler to collect incoming datagrams, then use the post-receive callback to process the accumulated datagrams in one go. Because the post-receive callback always fires immediately *before* libquic goes back to potentially blocking while waiting to read new packets, the application can do batch processing without resorting to timers or polling for packet handling. (A rough sketch of this usage pattern follows.)

This is particularly important for Lokinet, where we take the packet in the callback and transfer it to a job in the Lokinet main loop thread to process it there: doing this one packet at a time is not likely to scale well because of the sheer number of jobs that would have to be put on the logic thread event queue. By batching them into groups of up to 64 packets at a time we ought to be able to do considerably better, and by triggering the processing from the post-receive trigger we ensure we don't introduce unnecessary delays in when packets get processed.
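To illustrate that pattern, here is a minimal usage sketch; the setter names and the Datagram type are hypothetical stand-ins, not the actual libquic API:

#include <utility>
#include <vector>

std::vector<Datagram> pending;  // accumulated datagrams (hypothetical type)

// Per-datagram handler: do only cheap work, stash the datagram for later.
endpoint->set_datagram_handler([&](Datagram d) {
    pending.push_back(std::move(d));
});

// Post-receive callback: fires after each receive loop (all pending
// packets drained, or the 64-packet limit hit), just before libquic
// may block waiting for more packets.
endpoint->set_post_receive_callback([&] {
    if (pending.empty())
        return;
    process_batch(pending);  // application-defined batch handling
    pending.clear();
});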
Force-pushed from 025fe0e to 8318b81
Keeping as draft until I've tried making this fit into Lokinet.
This adds an optional packet batch callback to the endpoint that allows it to signal the application when it is done processing a set of incoming packets -- either because it has processed them all, or because it hit the max-recv-per-loop limit (64).

This is designed to allow an application to deal with packets, especially datagrams, more efficiently in batches: the application can use the datagram handler to collect incoming datagrams, then use the batch callback to process the accumulated datagrams in one go. Because the batch callback always fires before libquic goes back to potentially blocking while waiting to read new packets, the application can do batch processing without resorting to timers or polling for packet handling.

This is particularly important for Lokinet, where we take the packet in the callback and transfer it to a job in the Lokinet main loop thread to process it there: doing this one packet at a time is not likely to scale well because of the sheer number of jobs that would have to be put on the logic thread event queue; by batching them into groups of up to 64 packets at a time we ought to be able to do considerably better. (A rough sketch of this Lokinet-side batching follows.)
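As a rough sketch of that Lokinet-side pattern (the loop and function names here are hypothetical, assuming some call_soon-style way of posting jobs to the logic thread):

#include <memory>
#include <utility>
#include <vector>

std::vector<Packet> batch;  // filled by the per-packet receive callback

void on_post_receive()
{
    if (batch.empty())
        return;
    // Move the accumulated packets into a single job so the receive
    // path can immediately start filling a fresh batch.
    auto pkts = std::make_shared<std::vector<Packet>>(std::move(batch));
    logic_loop->call_soon([pkts] {
        for (auto& pkt : *pkts)
            handle_packet(pkt);  // application-defined per-packet logic
    });
}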