Is there an existing issue?
Motivation
Currently, there is no way for a node to migrate its mempool to a new node as part of a database migration. This is an issue for #393, since the warp update sender has no way to communicate the transactions in its mempool to the warp update receiver, so any transactions there will inevitably be dropped once the migration has completed.
Request
Add a way for a node to migrate its mempool to a new node, and update warp update to madara and other changes #393 to take advantage of it.
Solution
A good place for this would be a new admin RPC endpoint. The question of transaction ownership is tricky, though: we would have to shut down the RPC on node A, enable it on node B, start synchronizing all transactions still in node A's mempool at that point, execute them while node B continues to receive new transactions, and then execute those new transactions once all the transactions from node A have been processed.
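A minimal sketch of that ordering on the receiver side, assuming a hypothetical `MempoolSync` helper and `Tx` type rather than Madara's actual mempool API: node B keeps node A's streamed transactions and its own newly received ones in separate queues, and only drains the latter once node A signals that its mempool is empty.

```rust
// Illustrative sketch only: `Tx` and `MempoolSync` are placeholders, not Madara types.
use std::collections::VecDeque;

#[derive(Debug, Clone)]
struct Tx {
    hash: [u8; 32],
}

/// State held by the warp update receiver (node B) during the migration.
struct MempoolSync {
    /// Transactions streamed from node A's mempool, in their original order.
    from_node_a: VecDeque<Tx>,
    /// Transactions submitted directly to node B while the migration is ongoing.
    buffered_new: VecDeque<Tx>,
    /// Set once node A signals that its mempool has been fully streamed.
    node_a_drained: bool,
}

impl MempoolSync {
    fn new() -> Self {
        Self {
            from_node_a: VecDeque::new(),
            buffered_new: VecDeque::new(),
            node_a_drained: false,
        }
    }

    /// Called by the admin endpoint for each transaction pushed by node A.
    fn receive_from_node_a(&mut self, tx: Tx) {
        self.from_node_a.push_back(tx);
    }

    /// Called by node B's user-facing RPC for transactions received mid-migration.
    fn receive_new(&mut self, tx: Tx) {
        self.buffered_new.push_back(tx);
    }

    /// Node A calls this once its mempool is empty and its RPC has shut down.
    fn mark_node_a_drained(&mut self) {
        self.node_a_drained = true;
    }

    /// Next transaction to execute: node A's backlog first, and nothing from the
    /// new-transaction buffer until node A has been fully drained.
    fn next_to_execute(&mut self) -> Option<Tx> {
        if let Some(tx) = self.from_node_a.pop_front() {
            return Some(tx);
        }
        if self.node_a_drained {
            return self.buffered_new.pop_front();
        }
        None
    }
}

fn main() {
    let mut sync = MempoolSync::new();
    sync.receive_from_node_a(Tx { hash: [1; 32] });
    sync.receive_new(Tx { hash: [2; 32] });

    // Node A's backlog executes first...
    println!("{:?}", sync.next_to_execute()); // Some(Tx { hash: [1, ...] })
    // ...while node B's new transactions wait until node A is fully drained.
    assert!(sync.next_to_execute().is_none());
    sync.mark_node_a_drained();
    println!("{:?}", sync.next_to_execute()); // Some(Tx { hash: [2, ...] })
}
```

Keeping the two queues separate preserves the relative ordering of node A's backlog while still letting node B accept traffic during the handoff.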
Are you willing to help with this request?
Yes!