Hello, I am looking at: https://github.com/SergioBenitez/Rocket/blob/a9c66c9426bec57ee958480ae4aa3d789f20488f/lib/src/rocket.rs#L162

I am trying to understand why it was necessary to use this internal read_max() implementation instead of the standard read().
If I understand it correctly, a read() call will read at most n bytes if you pass a buffer whose length is n bytes.
If it reads less, it means:

- you've reached end of file, or
- your app received a signal, which interrupted your blocking operation.

Signals aren't supposed to happen often (they're supposed to be rare). If one unblocks your read, it's only so that the function registered as the signal handler can run.
If you've reached end of file, read_max() doesn't add anything.
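For reference, here is a minimal sketch of what I understand a read_max()-style loop to do (this is my own reconstruction for discussion, not Rocket's actual code): it keeps calling read() until the buffer is completely full or EOF is reached, which is exactly the behaviour I am questioning.

```rust
use std::io::{self, Read};

// Minimal sketch (assumption, not Rocket's code): fill `buf` completely
// unless EOF is hit, retrying reads that were interrupted by a signal.
fn read_max<R: Read>(reader: &mut R, buf: &mut [u8]) -> io::Result<usize> {
    let mut total = 0;
    while total < buf.len() {
        match reader.read(&mut buf[total..]) {
            Ok(0) => break,       // EOF: stop, the buffer may be only partially filled
            Ok(n) => total += n,  // short read: keep going until the buffer is full
            Err(ref e) if e.kind() == io::ErrorKind::Interrupted => continue, // signal: retry
            Err(e) => return Err(e),
        }
    }
    Ok(total)
}
```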
So I don't see what benefit it adds, while I do see a problem with this implementation in my use case: it prevents streaming that sends small chunks with intentional delays (e.g. deliberately waiting a specific amount of time before sending additional data).
This will also be a problem for SSE (#33).
In this kind of code: https://gist.github.com/gfriloux/003af62ba722a8a52009d938898123d0
the app could control the size of the chunks it returns through its Read impl (see the sketch below).
I believe (I may be wrong, but I want to learn) that this is enough, as it is also the app that deals with signals.
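To illustrate the kind of streaming I mean, here is a hypothetical Read impl (the DelayedStream name and the chunks are made up for this sketch; they are not from the gist): each call to read() hands back one small chunk and waits before producing the next one, so a consumer that forwards data as soon as read() returns would stream it progressively, while a read_max()-style loop would accumulate all the delays before sending anything.

```rust
use std::io::{self, Read};
use std::thread;
use std::time::Duration;

// Hypothetical streaming source: returns one small chunk per read() call,
// sleeping before every chunk after the first. Assumes each chunk fits in `buf`.
struct DelayedStream {
    chunks: Vec<&'static str>,
    pos: usize,
}

impl Read for DelayedStream {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        if self.pos >= self.chunks.len() {
            return Ok(0); // EOF: nothing left to stream
        }
        if self.pos > 0 {
            // Intentional delay before handing out the next chunk.
            thread::sleep(Duration::from_millis(500));
        }
        let chunk = self.chunks[self.pos].as_bytes();
        let n = chunk.len().min(buf.len());
        buf[..n].copy_from_slice(&chunk[..n]);
        self.pos += 1;
        Ok(n)
    }
}
```

With a plain read() loop on the sending side, each chunk could go out on the wire as soon as it is produced; with read_max(), nothing is sent until the whole internal buffer has been filled.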
Can you explain what issue this read_max() function solves in the code I've linked?
gfriloux changed the title from "Why forcing to completely fill an internal buffer when reading a file, before doing network I/O ?" to "Why forcing to completely fill an internal buffer when reading data, before doing network I/O ?" on Dec 19, 2017.
For information, I have been using this patch (along with one on hyper 0.10.13, see gfriloux/hyper@796f5d0) to do chunked transfer encoding (CTE) without any problem for over a year.