The reason for doing this is partly to protect clients from keeping too many requests open, and partly to prevent SPs from going into "ghost mode", where no retrieval results are reported for them regardless of whether their deals are actually retrievable. An SP can achieve this by keeping retrieval connections open for longer than the round, making a byte of progress every 60 seconds. This works because Spark checkers currently only enforce a "progress timeout", not a "max request duration" timeout.
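To illustrate the gap, here is a minimal sketch of combining the two timeouts when draining a response stream. All names (`readAll`, `withTimeout`) and the timeout values are hypothetical, not Spark's actual implementation: the point is that a per-chunk progress timeout alone is defeated by an SP trickling one byte per interval, while an absolute deadline is not.

```javascript
// Reject `promise` if it does not settle within `ms` milliseconds.
function withTimeout(promise, ms, message) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error(message)), ms);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Drain a ReadableStream-style reader, enforcing BOTH:
//  - a progress timeout: each read() must yield a chunk within progressTimeoutMs
//  - a max request duration: the whole transfer must finish by maxDurationMs
async function readAll(reader, { progressTimeoutMs, maxDurationMs }) {
  const deadline = Date.now() + maxDurationMs;
  let bytes = 0;
  while (true) {
    const remaining = deadline - Date.now();
    if (remaining <= 0) throw new Error('MAX_DURATION');
    // Each read must make progress AND stay within the overall deadline.
    const { done, value } = await withTimeout(
      reader.read(),
      Math.min(progressTimeoutMs, remaining),
      remaining < progressTimeoutMs ? 'MAX_DURATION' : 'NO_PROGRESS'
    );
    if (done) return bytes;
    bytes += value.length;
  }
}
```

With only the progress timeout, the slow-drip reader below would stay connected indefinitely; the deadline cuts it off.

```javascript
// A "ghost mode" SP: one byte every 20 ms, never finishing.
const drip = {
  read: () => new Promise((resolve) =>
    setTimeout(() => resolve({ done: false, value: new Uint8Array(1) }), 20))
};
readAll(drip, { progressTimeoutMs: 50, maxDurationMs: 120 })
  .catch((err) => console.log(err.message)); // rejects with MAX_DURATION
```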
This came out of the timeout discussion in https://www.notion.so/protocollabs/Spark-Request-Based-Non-Committee-Global-Retrieval-Success-Rate-4c5e8c47c45f467f80392d00cac2aae4#cb14139c56b2457c9bdd750503f41b51