fix: retry Filecoin.StateMinerInfo requests #96

Conversation
Signed-off-by: Miroslav Bajtoš <[email protected]>
nice!
A test was failing, so I had to tweak this change. PTAL again.
```js
    return res.PeerId
  } catch (err) {
    if (err.name === 'RetryError' && err.cause) {
      // eslint-disable-next-line no-ex-assign
      err = err.cause
```
Why not keep `err` with `err.cause`?
I feel the caller of this function does not care that we are retrying the requests; they are interested in the details about why the request failed.
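For illustration, here is a hypothetical caller-side sketch (the `getMinerPeerId` name and the miner ID are assumptions, not necessarily the module's actual API) showing what the unwrapping changes:

```js
// Hypothetical caller, not the actual Spark code.
try {
  const peerId = await getMinerPeerId('f01234')
  console.log('peer ID:', peerId)
} catch (err) {
  // without unwrapping: err.name === 'RetryError' and the real cause is buried in err.cause
  // with unwrapping:    err is the underlying error, e.g. the failed HTTP request
  console.error('cannot fetch miner info for f01234:', err)
}
```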
Good point!
I noticed that sometimes my Station Desktop will go offline. The module logs contain the following messages:
In other words, when the RPC API call fails, Spark waits more than a minute before it starts another check. I think the next check will pick the next task from the list, meaning that Spark effectively skips a task whenever this error happens.
In this pull request, I am wrapping the RPC request in retry logic, using `retry` from Deno stdlib.
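For reference, a minimal sketch of this approach, assuming a JSON-RPC helper called `rpc`, the Glif API endpoint, and the retry options shown below (the actual module code may differ). `retry` from Deno's standard library re-runs the callback with exponential back-off and throws a `RetryError` whose `cause` is the last underlying error, which is why the `catch` block unwraps it:

```js
// Sketch only: names, endpoint, and options are assumptions, not the exact module code.
import { retry } from 'https://deno.land/std@0.203.0/async/retry.ts'

// Hypothetical JSON-RPC helper
async function rpc (method, ...params) {
  const res = await fetch('https://api.node.glif.io/rpc/v1', {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({ jsonrpc: '2.0', id: 1, method, params })
  })
  if (!res.ok) throw new Error(`RPC ${method} failed with HTTP ${res.status}`)
  const body = await res.json()
  if (body.error) throw new Error(`RPC ${method} failed: ${body.error.message}`)
  return body.result
}

export async function getMinerPeerId (minerId, { maxAttempts = 5 } = {}) {
  try {
    // Re-run the request on transient failures instead of giving up immediately
    const res = await retry(
      () => rpc('Filecoin.StateMinerInfo', minerId, null),
      { maxAttempts }
    )
    return res.PeerId
  } catch (err) {
    // Callers care about why the request failed, not that we retried it
    if (err.name === 'RetryError' && err.cause) {
      // eslint-disable-next-line no-ex-assign
      err = err.cause
    }
    throw err
  }
}
```

Re-throwing the unwrapped `err.cause` rather than the `RetryError` itself is the design choice discussed in the review comments above.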