Response ordering must be guaranteed when streaming #397

Closed
senconscious opened this issue Jan 9, 2025 · 4 comments · Fixed by #396

@senconscious
Contributor

Describe the bug

The stream response process GenServer prepends new responses to its internal list when handling :consume_response messages.
This disrupts ordering whenever the application consumes the stream more slowly than new responses arrive.

From the gRPC docs:

Client streaming RPCs where the client writes a sequence of messages and sends them to the server, again using a provided stream. Once the client has finished writing the messages, it waits for the server to read them and return its response. Again gRPC guarantees message ordering within an individual RPC call.
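To see how ordering can get lost inside the process, here is a minimal sketch of the buggy pattern (the module name and state shape are illustrative, not the actual implementation): prepending each incoming response to a list is O(1), but draining that list without reversing it hands the consumer the responses newest-first.

# Illustrative sketch only — not the real StreamResponseProcess.
defmodule OrderingSketch do
  # Buggy buffering: prepend each response to the accumulated list.
  def handle_consume(response, %{responses: responses} = state) do
    %{state | responses: [response | responses]}
  end

  # Draining without Enum.reverse/1 yields newest-first.
  def drain(%{responses: responses}), do: responses
end

state = %{responses: []}
state = OrderingSketch.handle_consume(:first, state)
state = OrderingSketch.handle_consume(:second, state)
OrderingSketch.drain(state)
# => [:second, :first] — arrival order lost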

To Reproduce
Run this test in test/grpc/client/adapters/mint/stream_response_process_test.exs, inside the build_stream/1 describe block:

test "ordering bug", %{pid: pid} do
      hello_luis =
        <<0, 0, 0, 0, 12, 10, 10, 72, 101, 108, 108, 111, 32, 76, 117, 105, 115>>

      hello_luit =
        <<0, 0, 0, 0, 12, 10, 10, 72, 101, 108, 108, 111, 32, 76, 117, 105, 116>>

      stream = StreamResponseProcess.build_stream(pid)
      StreamResponseProcess.consume(pid, :data, hello_luis)
      StreamResponseProcess.consume(pid, :data, hello_luit)
      StreamResponseProcess.done(pid)

      expected_elements =
        [
          ok: %Helloworld.HelloReply{message: "Hello Luis", __unknown_fields__: []},
          ok: %Helloworld.HelloReply{message: "Hello Luit", __unknown_fields__: []}
        ]

      assert Enum.to_list(stream) == expected_elements
    end

Here we expect the messages in the order the server sent them: "Hello Luis" first and "Hello Luit" second. But the order is reversed by the stream response process.

Expected behavior

The stream response process must store new responses in the same order they arrived.
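
For reference, the FIFO behavior the fix adopts (per #396, the buffer was migrated to an Erlang :queue) can be sketched like this: :queue.in/2 enqueues at the rear and :queue.out/1 dequeues from the front, so reads come back in arrival order.

queue = :queue.new()
queue = :queue.in(:first, queue)
queue = :queue.in(:second, queue)

{{:value, first}, queue} = :queue.out(queue)
{{:value, second}, _queue} = :queue.out(queue)
{first, second}
# => {:first, :second} — arrival order preserved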

Logs

  • N/A

Protos

  • N/A

Versions:

  • OS: Mac Sonoma
  • Erlang: 26.1.2
  • Elixir: 1.15.7-otp-26
  • mix.lock(grpc, gun, cowboy, cowlib): (Doesn't matter)

Additional context
See the demonstration pull request #396.

@senconscious changed the title from "Element ordering must be guaranteed when streaming" to "Response ordering must be guaranteed when streaming" on Jan 9, 2025
@polvalente
Contributor

polvalente commented Jan 9, 2025

edit: Never mind, I hadn't read the final sentence in the quote.

I'm not sure if the response ordering must be guaranteed.

For instance, two different packets might follow different paths at the network layer, which could still cause messages to arrive out of order if one path suddenly becomes slower than expected.

What's the problem that motivated this issue/PR?

@v0idpwn
Contributor

v0idpwn commented Jan 9, 2025

For instance, two different packets might follow different paths at the network layer, which could still cause messages to arrive out of order if one path suddenly becomes slower than expected.

This would be solved at the TCP level, no?

@senconscious
Contributor Author

senconscious commented Jan 9, 2025

The problem I was trying to solve is guaranteeing the order of a batch of records sent by another server.

Server 1: requests a large number of records.
Server 2: sends the records to Server 1 one by one using GRPC.Server.send_reply/2, streaming them from the database.

I can't fit the whole dataset in memory.

As I understand it, this is a single RPC call, so per the spec the order must be guaranteed.

Packet ordering is a concern of the lower network layers (TCP in the OSI model).
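
For context, the use case looks roughly like this sketch (Records.Service, Records.Record, Repo, and RecordReply are hypothetical names, not from the issue): GRPC.Server.send_reply/2 pushes one reply at a time on the same RPC stream, and Ecto's Repo.stream/1 keeps the dataset out of memory.

defmodule RecordsServer do
  # Hypothetical server-streaming handler for the scenario above.
  use GRPC.Server, service: Records.Service

  def list_records(_request, stream) do
    # Repo.stream/1 must run inside a transaction; it fetches rows lazily,
    # so the full result set never has to fit in memory.
    Repo.transaction(fn ->
      Records.Record
      |> Repo.stream()
      |> Enum.each(fn record ->
        # All replies go out on the same RPC, so gRPC guarantees the
        # client receives them in this order.
        GRPC.Server.send_reply(stream, %RecordReply{id: record.id})
      end)
    end)
  end
end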

@polvalente
Contributor

Yeah, I agree. I just left a comment in the PR regarding the data structure change. If there's a large number of messages this can become a significant problem. Other than that, we can move forward with this fix.
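
To illustrate the data-structure concern (an assumption about the trade-off, not a benchmark of the PR): naively appending to a list copies the whole accumulated list on every message, while an Erlang :queue keeps both FIFO order and amortized O(1) inserts.

# List append copies the accumulated list each time: O(n) per message,
# O(n²) over the whole stream.
buffer_with_list = fn msgs ->
  Enum.reduce(msgs, [], fn msg, acc -> acc ++ [msg] end)
end

# :queue.in/2 is amortized O(1), so buffering n messages stays O(n)
# while still reading back in FIFO order.
buffer_with_queue = fn msgs ->
  Enum.reduce(msgs, :queue.new(), &:queue.in/2)
end

msgs = Enum.to_list(1..5)
buffer_with_list.(msgs)                   # => [1, 2, 3, 4, 5], built in O(n²)
:queue.to_list(buffer_with_queue.(msgs))  # => [1, 2, 3, 4, 5], built in O(n)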

polvalente added a commit that referenced this issue Jan 9, 2025
…esponse process handles ":consume_response" (#396)

* Simple bug demonstration and bugfix

* Migrate responses from list to erlang queue

* Use queue.to_list instead of queue.peek in tests

* Use factory to build message for ordering test

* Fix bye luis test and encoding

* Fix mint adapter connection process test after responses migration to erlang queue

* Update test/grpc/client/adapters/mint/connection_process_test.exs

---------

Co-authored-by: Paulo Valente <[email protected]>