
signal queued and unqueued to the other party #237

Open
raulk opened this issue Aug 19, 2021 · 7 comments

Comments

@raulk
Member

raulk commented Aug 19, 2021

When a party queues a transfer due to throttling policies (e.g. Graphsync's limit on simultaneous transfers), the other party isn't notified. All they see is a stalled "ongoing" transfer.

We should add support for signalling to the other party when transfers get queued and unqueued for an improved UX.

There is already a ClientEventDataTransferQueued event in go-fil-markets. That event is better managed at a lower layer like go-data-transfer.

@raulk
Member Author

raulk commented Aug 19, 2021

So this would be a matter of enhancing the go-data-transfer protocol with an extra signalling message, potentially: https://github.com/filecoin-project/go-data-transfer/blob/ad43f2d453f12b8f32663c6e25c3fdb6c042aabf/message/message.go
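A minimal sketch of what such a signalling message might look like. The `TransferQueuedMessage` type, its fields, and the `describe` helper are all hypothetical illustrations, not part of the actual go-data-transfer message package:

```go
package main

import "fmt"

// Hypothetical wire message signalling a queue-state change for a transfer.
// Neither this type nor its fields exist in go-data-transfer today.
type TransferQueuedMessage struct {
	TransferID uint64 // identifies the transfer this update refers to
	Queued     bool   // true = queued by responder, false = unqueued (processing started)
}

// describe renders the message for logs / UX on the receiving side.
func describe(m TransferQueuedMessage) string {
	if m.Queued {
		return fmt.Sprintf("transfer %d queued by responder", m.TransferID)
	}
	return fmt.Sprintf("transfer %d unqueued, processing started", m.TransferID)
}

func main() {
	fmt.Println(describe(TransferQueuedMessage{TransferID: 7, Queued: true}))
	fmt.Println(describe(TransferQueuedMessage{TransferID: 7, Queued: false}))
}
```

The responder would send the message twice per throttled transfer: once when the request enters its queue, and again when a worker picks it up.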

@dirkmc
Contributor

dirkmc commented Aug 23, 2021

I agree that we should improve the UX for outputting the state of the transfer.

As I understand it the current behaviour is:

  1. Graphsync client requests data
    Transfer moves to "Ongoing" state
  2. Graphsync server can process a limited number of concurrent requests, so it queues up the request to be processed later
  3. Client reports the state as "Ongoing" even though it hasn't started yet

If possible it would be nice to keep the protocol simple and avoid adding implementation-specific messages about queueing.
I'd suggest instead that we just change the naming of our states in any output to the user:

  • When the client sends a request output the state as "Queued"
  • When the first byte is received output the state as "Ongoing"

Note that this would be a cosmetic change to the output, it doesn't require any logic changes in how we transition between states / fire events etc.
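The cosmetic change could be as small as a label mapping at the output layer. A sketch under that assumption (the `displayState` helper is hypothetical, not an existing go-data-transfer function):

```go
package main

import "fmt"

// Hypothetical display-name mapping: the underlying state machine is
// unchanged, only the label shown to the user differs depending on
// whether any bytes have been received yet.
func displayState(state string, bytesReceived uint64) string {
	if state == "Ongoing" && bytesReceived == 0 {
		return "Queued" // request sent, but no data has arrived yet
	}
	return state // "Ongoing" with data flowing, or any other state, is unchanged
}

func main() {
	fmt.Println(displayState("Ongoing", 0))    // before the first byte
	fmt.Println(displayState("Ongoing", 1024)) // data flowing
}
```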

@raulk
Member Author

raulk commented Aug 25, 2021

@dirkmc So while I agree that the simpler solution would immediately improve UX, from the correctness/safety perspective there is a difference between an explicit and implicit queued event.

  • The original post talks about an explicit event, which acts like an ACK, signalling to the requester that the transfer request has been accepted, but it's queued due to throttling directives.
  • The simplified solution infers the queued state implicitly: it guesses that a transfer with zero bytes received must still be queued, without any confirmation from the responder.

Also, I think the simplified solution would end up being confusing/insufficient in the face of restarts after a partial transfer. Here, the transferred amount will be non-zero, but the request may well end up queued again on the responder side, which means the peer will think the transfer is ongoing when it isn't :-(

There are probably other edge cases like this, which is why I prefer state machines not to make assumptions and instead rely on explicit signalling of states between peers.
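The restart edge case can be made concrete. Below, a hypothetical byte-count heuristic (the `inferredLabel` function is illustrative only) gives the right answer for a fresh transfer but the wrong one for a restarted, re-queued transfer:

```go
package main

import "fmt"

// Hypothetical heuristic that infers "queued" from zero bytes received.
// It breaks once a restarted transfer resumes with a non-zero count.
func inferredLabel(bytesReceived uint64) string {
	if bytesReceived == 0 {
		return "Queued"
	}
	return "Ongoing"
}

func main() {
	// Fresh transfer, genuinely queued: the heuristic is right.
	fmt.Println(inferredLabel(0)) // Queued

	// Restarted transfer: 4 MiB already moved before the restart, but the
	// responder has re-queued the request. The heuristic still reports
	// "Ongoing", which is exactly the ambiguity an explicit signal avoids.
	fmt.Println(inferredLabel(4 << 20)) // Ongoing (misleading)
}
```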

@dirkmc
Contributor

dirkmc commented Aug 25, 2021

I agree that it's nice for the caller to have feedback on the state of their request. I suggest we think about it in terms of the trade-off between keeping the user informed and keeping the protocol simple. Each addition to the protocol needs to be supported by all future implementations.

As an example when a client opens an HTTP request, the request is implicitly added to a queue. The server can signal that it's overloaded with the response code "too many requests", but I don't believe there's anything like "I've queued your request".

@raulk
Member Author

raulk commented Aug 25, 2021

HTTP is designed to be synchronous though, so the client has an expectation that the response will be almost immediate (and/or the server is starting work immediately). However, the graphsync/data-transfer protocol flows are asynchronous by design, and transfers could stay queued up for hours, which is one of the underlying reasons that this explicit state is necessary IMO.

If we do want to stick to the two-state approach, I would suggest different naming:

  • Queued => Requested (client doesn't make assumptions about queueing, all it knows is that it sent the request)
  • Ongoing => Receiving

However, with this two-state approach there is still the restart case, which will be relatively frequent and therefore should be non-ambiguous.

@dirkmc
Contributor

dirkmc commented Aug 25, 2021

Agree with changing the naming, I think it's confusing at the moment 👍

@raulk
Member Author

raulk commented Aug 25, 2021

@dirkmc On a restart of the transfer or the host, how hard would it be to reset the transfer back to Requested and then transition it to Receiving the moment that data starts flowing?
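The restart behaviour being asked about could be sketched as a small transition table. The state names follow the renaming proposed above; the `nextState` helper is hypothetical and not the actual go-data-transfer channel state machine:

```go
package main

import "fmt"

// Hypothetical transitions: on any restart the transfer is reset to
// "Requested", and only moves to "Receiving" once data actually flows again.
func nextState(current, event string) string {
	switch {
	case event == "restart":
		return "Requested" // forget any queueing assumptions after a restart
	case current == "Requested" && event == "data":
		return "Receiving" // first byte after (re)start
	default:
		return current
	}
}

func main() {
	s := "Receiving"
	s = nextState(s, "restart") // host or transfer restarted
	fmt.Println(s)
	s = nextState(s, "data") // bytes flowing again
	fmt.Println(s)
}
```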
