feat(services): reworked Madara services for better cancellation control #405
"replace this argument with your --l1-endpoint parameter"
Will add more info about this. The goal for this demonstration, though, is to stop on sync, else the sender node would keep synchronizing and the update would take forever.
Well, you can just resync a new node; I think it makes more sense to talk about sequencers here.
Yea, fair, though this is still a bit faster than that due to being on a local network. It would also be quite easy to make this a lot faster "in the future ™️" by computing the state root once the migration has completed, instead of at each block, which is what we do on a normal sync. Something to keep in mind.
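A minimal Rust sketch of that optimization, purely for illustration (the `Block`, `Db`, `apply_block` and `compute_state_root` names below are placeholders, not Madara's actual API):

```rust
// Hypothetical sketch: during a warp update, apply blocks without
// recomputing the state root each time, and compute it once after the
// migration has completed. On a normal sync the root would be recomputed
// per block instead.
struct Block;
struct Db;

impl Db {
    fn apply_block(&mut self, _block: Block) {
        // Write the block's state diffs only; no trie hashing here.
    }

    fn compute_state_root(&mut self) -> [u8; 32] {
        // Recompute the global trie root from the accumulated diffs.
        [0u8; 32]
    }
}

fn warp_import(db: &mut Db, blocks: Vec<Block>) -> [u8; 32] {
    for block in blocks {
        // A normal sync would call `compute_state_root` here, once per block.
        db.apply_block(block);
    }
    // Migration path: pay the trie-hashing cost only once, at the end.
    db.compute_state_root()
}
```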
This is Kubernetes' concern; that's not microservices here.
Rolling updates in Kubernetes don't work like this: the nodes don't expose their ports to the same one in k8s.
In proper k8s apps you would do their rolling update, or even a canary, where you slowly switch your reverse proxy from one version of the app to another.
That's not the problem we're trying to solve here, right?
Also, the way you're doing it isn't technically "no downtime", I believe.
I don't understand why we would want no-downtime updates for full nodes; this is definitely not the right layer to do it.
I thought """no downtime""" upgrades would be something for sequencers, so that they can keep producing blocks after a migration; that's not a problem you can easily solve using existing stuff, right?
Zero downtime in this sense refers to the fact that the receiver seamlessly takes the place of the sender, hence no downtime related to having to restart the node. I'm not very familiar with v8.
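A rough sketch of that handover, under the assumption that services are stopped via something like tokio_util's `CancellationToken` (the function names below are made up for illustration and are not Madara's actual service code):

```rust
// Illustrative only: the sender keeps serving until the receiver has caught
// up, then the sender's services are cancelled and the receiver takes over,
// so clients never see a restarted node.
use tokio_util::sync::CancellationToken;

async fn run_sender_services(cancel: CancellationToken) {
    // Serve blocks to the receiver until told to shut down.
    cancel.cancelled().await;
}

async fn warp_update(sender_cancel: CancellationToken) {
    // 1. The receiver syncs from the sender over the local network (elided).
    // 2. Once caught up, the sender's services are asked to stop...
    sender_cancel.cancel();
    // 3. ...and the receiver starts its user-facing services in their place (elided).
}

#[tokio::main]
async fn main() {
    let sender_cancel = CancellationToken::new();
    let sender = tokio::spawn(run_sender_services(sender_cancel.clone()));
    warp_update(sender_cancel).await;
    sender.await.unwrap();
}
```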
Sequencer migrations are supported as well now (the mempool is still not transferred, though).
But what about when you have to close the node to remove that `--warp-update-receiver` argument?
I'm confused. The node does not have to be closed if it is called with `--warp-update-receiver`: this is a CLI flag which does not set any other arguments, so once the warp update has completed your node will run normally as per the other arguments that were passed to it. There is no need to close the node to remove the `--warp-update-receiver` argument.
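For illustration, a standalone flag with those semantics could be declared roughly like this with clap; this is a hypothetical sketch, not Madara's actual CLI definition:

```rust
// Illustrative only: `--warp-update-receiver` modelled as a plain boolean
// flag that implies nothing about the rest of the configuration. The
// `--l1-endpoint` parameter mentioned earlier in the review is included as
// an example of an ordinary argument that keeps its meaning either way.
use clap::Parser;

#[derive(Parser)]
struct Cli {
    /// Receive state from a warp update sender, then keep running normally.
    #[arg(long)]
    warp_update_receiver: bool,

    /// L1 endpoint the node syncs against.
    #[arg(long)]
    l1_endpoint: Option<String>,
}

fn main() {
    let cli = Cli::parse();
    if cli.warp_update_receiver {
        // Wait for the sender's state, then continue as per the other arguments.
    }
}
```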
state commitments