
Resending segments for lost packets in transmission #8

Open
mrsylerpowers opened this issue Nov 25, 2019 · 2 comments

@mrsylerpowers
Hello, I love this repository and it is very helpful for understanding TCP. While going over this project I discovered that if I loaded a very big page (e.g. a billion digits of pi), packets were sometimes lost between the internal system and the app (this shouldn't be possible, but Android ... ). From what I understand (correct me if I am wrong), there is supposed to be either resending of segments sent to the internal system for packets that have been lost, or a protocol like SACK implemented. If this goes unhandled, many packets pile up and crash the app, and infinite ACKs are resent to the app. But in this app the isAcked field inside the session isn't used (it is commented out), and as far as I can see no resend protocol runs when a packet's ACK is repeated. I am very new to TCP, so please correct me on any details, or tell me if this is just wrong in general. I would love to make a PR if I am led in the right direction.
The result is endless output that looks like:

D/SessionHandler: Prev sendUnack: 643849575
   Not Accepting ack# 643849575 , it should be: 677098551
   Prev sendUnack: 643849575
   Not Accepting ack# 643849575 , it should be: 677098551
   Prev sendUnack: 643849575
D/SessionHandler: Not Accepting ack# 643849575 , it should be: 677289687
   Prev sendUnack: 643849575
   Not Accepting ack# 643849575 , it should be: 677289687
   Prev sendUnack: 643849575
   Not Accepting ack# 643849575 , it should be: 677289687
   Prev sendUnack: 643849575
   Not Accepting ack# 643849575 , it should be: 677289687
D/SessionHandler: Prev sendUnack: 643849575
   Not Accepting ack# 643849575 , it should be: 677289687
   Prev sendUnack: 643849575

With no way of telling the system that a segment needs to be resent.
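One way to make resending possible would be a per-session buffer of unacked outgoing segments, along these lines (a minimal sketch in plain Java; `RetransmissionBuffer` and its methods are invented names for illustration, not classes from this repo):

```java
import java.util.SortedMap;
import java.util.TreeMap;

// Sketch of a per-session retransmission buffer: every segment we send is
// kept until the peer's ACK covers it, so a lost segment can be resent.
class RetransmissionBuffer {
    // Unacked segments keyed by their starting sequence number.
    private final SortedMap<Long, byte[]> unacked = new TreeMap<>();

    // Called whenever we send a data segment downstream.
    synchronized void onSend(long seq, byte[] payload) {
        unacked.put(seq, payload);
    }

    // Called when an ACK arrives: drop every segment that starts before
    // ackNum. (Partial coverage of a segment is ignored for simplicity.)
    synchronized void onAck(long ackNum) {
        unacked.headMap(ackNum).clear();
    }

    // Called on a retransmission timeout or repeated duplicate ACKs:
    // returns the oldest unacked segment so the caller can resend it,
    // or null if everything has been acknowledged.
    synchronized byte[] oldestUnacked() {
        return unacked.isEmpty() ? null : unacked.get(unacked.firstKey());
    }
}
```

A real implementation would also need a retransmission timer and sequence-number wraparound handling, but the core idea is just "keep what you sent until it is acked".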

@mrsylerpowers (Author) commented Nov 25, 2019

This does seem really hard to do, though, without some sort of buffer of previous data, and figuring out what that buffer needs to be seems like an interesting problem. @pimterry, do you have any input on this?

@pimterry

I haven't dug into it, but I can completely imagine these kinds of issues exist. I don't think it's unfixable (we do some buffering of outgoing raw data in places already), but it's not easy.

One problem is that to make this manageable we probably want some kind of backpressure too. If the VPN is having connection problems talking to upstream servers then we need to somehow make the downstream application wait too, or it will keep sending us data that we can't send onwards. I don't think that's implemented; at the moment I think we just ack everything.
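The backpressure idea could work by shrinking the receive window we advertise back to the app as the upstream send buffer fills, instead of acking everything with a fixed large window. A rough sketch (assumed names and sizes, not this repo's API):

```java
// Sketch of window-based backpressure: when we ACK data received from the
// downstream app, advertise a window proportional to the free space in our
// upstream send buffer. The app's own TCP stack then slows down naturally
// when we can't forward data fast enough.
class BackpressureWindow {
    // Largest window expressible in a TCP header without window scaling.
    private static final int MAX_WINDOW = 65535;
    private final int bufferCapacity;

    BackpressureWindow(int bufferCapacity) {
        this.bufferCapacity = bufferCapacity;
    }

    // bytesQueued = data accepted from the app but not yet sent upstream.
    // Returns the window size to put in the next ACK we send to the app.
    int advertisedWindow(int bytesQueued) {
        int free = bufferCapacity - bytesQueued;
        return Math.max(0, Math.min(MAX_WINDOW, free));
    }
}
```

Advertising a zero window stalls the app entirely until we ACK again with space available, which is exactly the "make the downstream application wait" behaviour described above.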

I don't have time to look into this in depth myself right now, but I probably will in a few weeks or so.

In the meantime @mrsylerpowers, if you do have a little time to look into this, I think the most important first step would be to set up an easy way to reliably reproduce this. A small script for a local server you can run that always has this issue in a consistent way would make testing and exploring the problem much easier.
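For the reproduction, something as simple as a local server that streams a large response on every connection might be enough to trigger the problem consistently (a sketch; the port, response size, and class name are arbitrary choices, not from this repo):

```java
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Arrays;

// Tiny local HTTP server that streams a 100 MB response to every client,
// mimicking the "billion digits of pi" scenario: enough data to overwhelm
// an unbuffered path through the VPN. Run it and point the app at
// http://<your-local-ip>:8088/
public class BigResponseServer {
    private static final int BODY_SIZE = 104857600; // 100 MB

    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(8088)) {
            while (true) {
                try (Socket client = server.accept()) {
                    OutputStream out = client.getOutputStream();
                    out.write(("HTTP/1.1 200 OK\r\n" +
                               "Content-Length: " + BODY_SIZE + "\r\n" +
                               "Content-Type: text/plain\r\n\r\n").getBytes());
                    byte[] chunk = new byte[65536];
                    Arrays.fill(chunk, (byte) '3');
                    for (int sent = 0; sent < BODY_SIZE; sent += chunk.length) {
                        out.write(chunk);
                    }
                } catch (Exception e) {
                    // Client went away mid-transfer; keep serving.
                }
            }
        }
    }
}
```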
