
Why force completely filling an internal buffer when reading data before doing network I/O? #509

Closed
gfriloux opened this issue Dec 19, 2017 · 2 comments

gfriloux commented Dec 19, 2017

Hello, I am looking at: https://github.com/SergioBenitez/Rocket/blob/a9c66c9426bec57ee958480ae4aa3d789f20488f/lib/src/rocket.rs#L162

I am trying to understand why this internal read_max() implementation was needed instead of the standard read().
If I understand it correctly, a read() call will read at most n bytes if you pass a buffer whose length is n bytes.
If it reads fewer, it means:

  1. you've reached end of file, or
  2. your app received a signal, which interrupted your blocking operation.

Signals aren't supposed to happen often (they're supposed to be rare). If one unblocks your read, it's only so that a function registered for that signal can run.
If you've reached end of file, read_max() doesn't add anything.
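
For context, here is a minimal sketch of what a fill-the-buffer helper in the spirit of read_max() typically looks like. This is an illustration, not the actual Rocket code; the retry on Interrupted errors is an assumed convention, not something confirmed by the linked source:

```rust
use std::io::{self, Read};

/// Keep calling `read()` until `buf` is completely full or the reader
/// reports EOF, retrying when a signal interrupts the call.
/// Returns how many bytes were actually read.
fn read_max<R: Read>(reader: &mut R, buf: &mut [u8]) -> io::Result<usize> {
    let mut filled = 0;
    while filled < buf.len() {
        match reader.read(&mut buf[filled..]) {
            Ok(0) => break, // EOF: the buffer may stay partially filled
            Ok(n) => filled += n,
            // A signal interrupted the read: retry instead of
            // surfacing a short read to the caller.
            Err(ref e) if e.kind() == io::ErrorKind::Interrupted => continue,
            Err(e) => return Err(e),
        }
    }
    Ok(filled)
}
```

Note that the loop also retries short reads that happen simply because fewer bytes were available at that moment; every short read is held back until the buffer is full or the stream ends.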

So I don't see what benefit it adds, while I do see a problem with this implementation in my use case: it prevents streaming that works with small amounts of data and delays (e.g., intentionally waiting a specific delay before sending additional data).
This will also be a problem for SSE (#33).

In this kind of code: https://gist.github.com/gfriloux/003af62ba722a8a52009d938898123d0
the app could manage the length of the buffer it wants to return through its Read impl.
I believe (I may be wrong, but I want to learn) that this is enough, as it is also the app that will deal with signals.
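
To illustrate that idea, here is a hypothetical Read impl in the spirit of the gist; ChunkedSource, its messages, and the 500 ms pause are invented for this sketch:

```rust
use std::io::{self, Read};
use std::thread::sleep;
use std::time::Duration;

/// Hypothetical streaming source: yields one small message at a time,
/// pausing between messages, as an SSE-like producer might.
struct ChunkedSource {
    messages: Vec<&'static str>,
    next: usize,
}

impl Read for ChunkedSource {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        if self.next >= self.messages.len() {
            return Ok(0); // EOF once all messages have been handed out
        }
        if self.next > 0 {
            // Intentional delay before producing the next piece of data.
            sleep(Duration::from_millis(500));
        }
        // Assumes each message fits in `buf`; a fuller impl would
        // track a partial offset into the current message.
        let msg = self.messages[self.next].as_bytes();
        let n = msg.len().min(buf.len());
        buf[..n].copy_from_slice(&msg[..n]);
        self.next += 1;
        Ok(n) // deliberately a short read: the app controls chunk size
    }
}
```

Wrapped in a read_max()-style loop, each of these small reads would be held back until the buffer is full or the stream ends, so the intentional pacing is lost; forwarding the short read directly lets the app control chunk size and timing.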

Can you explain to me what issue this read_max() function solves in the code I've linked?

@gfriloux gfriloux changed the title Why force completely filling an internal buffer when reading a file before doing network I/O? Why force completely filling an internal buffer when reading data before doing network I/O? Dec 19, 2017
@SergioBenitez SergioBenitez added the question A question (converts to discussion) label Dec 31, 2017
gfriloux added a commit to gfriloux/Rocket that referenced this issue Mar 29, 2018
This function is a bad idea.
See rwf2#509

This makes Chunked Transfer Encoding not work with
streaming of small sets of data.
@SergioBenitez (Member)
We'll do better when we move to async. Let's track in #17.

@gfriloux (Author) commented May 16, 2019

For information, I've been using this patch (along with one on hyper 0.10.13, see gfriloux/hyper@796f5d0) to do CTE without any problems for over a year.
