Invalid open/rejection messages with Corestore replication on socket that has many Protomux channels. #79
Comments
Interesting! Do you have any test case we can run?
I'll work on one, ASAP.
My apologies; the problem isn't particularly Corestore-related. The error occurs for me whenever there is any latency between the connection and the channel open. Here is what causes the error for me:
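As a rough illustration of that scenario (not the exact application code; this is a minimal sketch assuming a plain Hyperswarm connection and Corestore replication, with an artificial `setTimeout` delay and illustrative names):

```js
const Hyperswarm = require('hyperswarm')
const Corestore = require('corestore')

const store = new Corestore('./storage')
const swarm = new Hyperswarm()

swarm.on('connection', (socket) => {
  // Simulate latency between the connection being established
  // and the replication channel being opened.
  setTimeout(() => {
    store.replicate(socket)
  }, 1000)
})

const core = store.get({ name: 'example' })
core.ready().then(() => {
  swarm.join(core.discoveryKey)
})
```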
Thanks for your attention. I'm sure I can work around this.
I wanted to follow up here. I've been battling this problem for going on three days; I thought I could work around it, but I can't. I finally produced tests that nail down the issue, and it's simple. The problem occurs with the Hyperswarm DHT relay, so I'm going to post an issue on the Hyperswarm DHT relay repository about it.
Possibly related: #45 (comment)
I am using Protomux extensively with numerous channels on a Hyperswarm socket, where individual Hypercores replicate fine to all the other computers.
I decided to just replicate batches of cores using namespaced Corestores.
I am getting invalid open messages (on the client) and rejection messages (on the server) that close the channels' socket. If I comment out the Corestore replication, everything works just fine. The individual cores that I replicate in this manner replicate just fine as well. This occurs on every microservice (Node.js and browser) where Corestore replication is used.
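A rough sketch of that setup, assuming current Hyperswarm, Protomux, and Corestore APIs (the channel name, encoding, and structure are illustrative, not the actual application code):

```js
const Hyperswarm = require('hyperswarm')
const Corestore = require('corestore')
const Protomux = require('protomux')
const c = require('compact-encoding')

const store = new Corestore('./storage')
const swarm = new Hyperswarm()

swarm.on('connection', (socket) => {
  // Reuse (or create) a single mux for this socket.
  const mux = Protomux.from(socket)

  // One of many application channels sharing the socket.
  const channel = mux.createChannel({
    protocol: 'example/app-protocol',
    messages: [{ encoding: c.string, onmessage: (msg) => console.log(msg) }]
  })
  channel.open()

  // Corestore replication attached to the same mux / socket.
  store.replicate(mux)
})
```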
I have tried these variations of replication pseudo-code:
```js
corestore.replicate(mux)                      // the shared Protomux instance
corestore.replicate(noiseSocket)              // the raw Hyperswarm noise socket
corestore.replicate(isInitiator, mux)
corestore.replicate(isInitiator, noiseSocket)
```