Response ordering must be guaranteed when streaming #397
Comments
edit: Never mind, I hadn't read the final sentence in the quote. I'm not sure that response ordering must be guaranteed. For instance, two different packets might follow different network paths at the network layer, which could still cause messages to arrive out of order if one of the paths suddenly becomes slower than expected. What's the problem that motivated this issue/PR?
This would be solved at the TCP level, no?
The problem I was trying to solve is guaranteeing order for a batch of records sent by another server. Server 1 requests a large number of records, and I can't fit the whole data set in memory. As I understand it, this is a single RPC call, so by the spec the order must be guaranteed. Packet ordering is a concern of lower-level networking (the OSI model, TCP layer).
Yeah, I agree. I just left a comment in the PR regarding the data structure change: if there's a large number of messages, this can become a significant problem. Other than that, we can move forward with this fix.
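Not from the PR itself, just a toy comparison of the two buffer shapes under discussion: appending to an Elixir list copies the left-hand list on every insert (roughly O(n) per message), while :queue.in/2 is amortized O(1), so a queue stays cheap even with many buffered responses.

```elixir
# List buffer: order is preserved, but each append walks the existing list.
list_buffer = ["Hello Luis"] ++ ["Hello Luit"]

# Erlang :queue buffer: order is preserved and each insert stays cheap.
queue_buffer = :queue.in("Hello Luit", :queue.in("Hello Luis", :queue.new()))

list_buffer                   # => ["Hello Luis", "Hello Luit"]
:queue.to_list(queue_buffer)  # => ["Hello Luis", "Hello Luit"]
```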
…esponse process handles ":consume_response" (#396)
* Simple bug demonstration and bugfix
* Migrate responses from list to erlang queue
* Use queue.to_list instead of queue.peek in tests
* Use factory to build message for ordering test
* Fix bye luis test and encoding
* Fix mint adapter connection process test after responses migration to erlang queue
* Update test/grpc/client/adapters/mint/connection_process_test.exs
---------
Co-authored-by: Paulo Valente <[email protected]>
Describe the bug
The stream response process GenServer prepends new responses when handling consume_response messages. This disrupts ordering when the stream is consumed by the application more slowly than new responses arrive.
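For context, a minimal sketch of the prepend pattern being described; the module, message, and function names are illustrative, not the adapter's actual API.

```elixir
defmodule BuggyResponseBuffer do
  # Illustrative only: a GenServer that prepends each incoming response, the
  # pattern this issue describes. A consumer that later reads the buffer
  # front-to-back sees the newest response first, not arrival order.
  use GenServer

  def start_link(opts \\ []), do: GenServer.start_link(__MODULE__, opts)

  def init(_opts), do: {:ok, %{responses: []}}

  def handle_cast({:consume_response, response}, state) do
    # Prepending puts the newest response at the head of the list.
    {:noreply, %{state | responses: [response | state.responses]}}
  end

  def handle_call(:take_all, _from, state) do
    # A front-to-back read now yields newest-first, i.e. reversed arrival order.
    {:reply, state.responses, %{state | responses: []}}
  end
end
```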
From the docs:
To Reproduce
Run this simple test in test/grpc/client/adapters/mint/stream_response_process_test.exs, under the build_stream/1 describe block. Here we expect the messages to be ordered as the server sent them: "Hello Luis" first and "Hello Luit" second. But the order has been modified by the Stream Response Process.
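The test body itself isn't reproduced above; as a stand-in, here is a self-contained sketch of the ordering assertion, using the message strings from the description (the test module name is hypothetical).

```elixir
defmodule OrderingSketchTest do
  use ExUnit.Case, async: true

  test "prepending reverses the order the consumer sees" do
    # Simulate the buggy buffer: each new response is prepended as it arrives.
    buffer =
      Enum.reduce(["Hello Luis", "Hello Luit"], [], fn msg, acc -> [msg | acc] end)

    # The consumer reads front-to-back and gets the messages in reverse order.
    assert buffer == ["Hello Luit", "Hello Luis"]
  end
end
```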
Expected behavior
The Stream Response Process must save new responses in the same order in which they arrived.
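A minimal sketch of the queue-based buffering the merged commit describes ("Migrate responses from list to erlang queue"); module, message, and function names are again illustrative, not the adapter's real API.

```elixir
defmodule OrderedResponseBuffer do
  # Illustrative only: buffering with an Erlang :queue preserves arrival order,
  # since :queue.in/2 appends at the rear and :queue.out/1 pops from the front.
  use GenServer

  def start_link(opts \\ []), do: GenServer.start_link(__MODULE__, opts)

  def init(_opts), do: {:ok, %{responses: :queue.new()}}

  def handle_cast({:consume_response, response}, state) do
    {:noreply, %{state | responses: :queue.in(response, state.responses)}}
  end

  def handle_call(:next, _from, state) do
    case :queue.out(state.responses) do
      {{:value, response}, rest} -> {:reply, {:ok, response}, %{state | responses: rest}}
      {:empty, rest} -> {:reply, :empty, %{state | responses: rest}}
    end
  end
end
```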
Logs
Protos
Versions:
Additional context
Attached a demonstration pull request: #396