
Goal: graceful TSN upgrade mechanism #213

Open
zolotokrylin opened this issue May 11, 2024 · 4 comments

Comments

@zolotokrylin
Contributor

Right now, to deploy new servers for substantial changes, we push all the data again, so the new state ends up the same as before redeployment. However, if this testnet were a little more public and other data providers started pushing data and testing their own contracts, a reset could potentially purge their work.
With third-party data providers involved, it would be important to establish that this happens on a strict upgrade schedule, so they know when a reset is coming and can avoid losing experiments.
It would also be feasible to set up an upgrade pipeline where no data is lost (publish the docker image for tsn, back up the docker volumes for postgres + tsn, upgrade, and roll back if necessary), but that is more work, and I don't know yet whether it is a priority.

Originally posted by @outerlook in #145 (comment)
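For illustration, the no-data-loss pipeline described in the quote (publish image, back up volumes, upgrade, roll back if needed) might look roughly like the script below. The volume names, the compose setup, and all paths are assumptions for the sketch, not the actual TSN deployment:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical volume names; the real deployment may differ.
VOLUMES=("tsn-data" "postgres-data")

backup_name() {
  # Timestamped archive name for a volume, e.g. tsn-data-20240511T000000Z.tar.gz
  echo "$1-$(date -u +%Y%m%dT%H%M%SZ).tar.gz"
}

backup_volume() {
  # Archive a named docker volume into ./backups via a throwaway container.
  mkdir -p backups
  docker run --rm -v "$1":/data -v "$PWD/backups":/backup alpine \
    tar czf "/backup/$(backup_name "$1")" -C /data .
}

upgrade() {
  for v in "${VOLUMES[@]}"; do backup_volume "$v"; done
  docker compose pull   # fetch the newly published image
  docker compose up -d  # restart on the new version
}

rollback() {
  # Restore the most recent archive for each volume, then restart.
  for v in "${VOLUMES[@]}"; do
    latest=$(ls -1t "backups/$v-"*.tar.gz | head -n 1)
    docker run --rm -v "$v":/data -v "$PWD/backups":/backup alpine \
      sh -c "rm -rf /data/* && tar xzf /backup/$(basename "$latest") -C /data"
  done
  docker compose up -d
}

case "${1:-}" in
  upgrade)  upgrade ;;
  rollback) rollback ;;
esac
```

Run as `./upgrade.sh upgrade` before a redeployment, and `./upgrade.sh rollback` if the new version misbehaves.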

@zolotokrylin zolotokrylin changed the title Goal: graceful TSN upgrade Goal: graceful TSN upgrade mechanism May 11, 2024
@brennanjl
Collaborator

We're looking really hard at ways we can support this as an official feature/tool. We're still designing the solution, but it will likely work similarly to the Cosmos SDK's Cosmovisor.
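For context, Cosmovisor-style supervision boils down to a loop that runs the current binary and, when the node halts at an agreed-upon upgrade height, swaps in a pre-staged binary and restarts. A minimal sketch, assuming a hypothetical `tsnd` binary and a directory layout modeled on Cosmovisor's (none of these paths are real TSN artifacts):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical layout, modeled on Cosmovisor's convention:
#   $TSN_HOME/cosmovisor/genesis/bin/tsnd
#   $TSN_HOME/cosmovisor/upgrades/<name>/bin/tsnd
HOME_DIR="${TSN_HOME:-$HOME/.tsn}"
DAEMON="${TSN_BINARY:-tsnd}"

bin_path() {
  # Resolve the binary for "genesis" or a named upgrade.
  if [ "$1" = "genesis" ]; then
    echo "$HOME_DIR/cosmovisor/genesis/bin/$DAEMON"
  else
    echo "$HOME_DIR/cosmovisor/upgrades/$1/bin/$DAEMON"
  fi
}

supervise() {
  current="$(bin_path genesis)"
  while true; do
    "$current" start || true
    # By convention the node writes upgrade-info.json when it halts at the
    # scheduled upgrade height; any other exit means a real crash.
    info="$HOME_DIR/data/upgrade-info.json"
    [ -f "$info" ] || break
    name="$(sed -n 's/.*"name" *: *"\([^"]*\)".*/\1/p' "$info")"
    rm -f "$info"
    current="$(bin_path "$name")"   # switch to the pre-staged new binary
  done
}

case "${1:-}" in supervise) supervise ;; esac
```

The key property is that operators stage the new binary ahead of time, and the switch happens automatically at the upgrade height, with no state reset.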

@rsoury

rsoury commented May 29, 2024

Keep in mind, the typical testnet behaviour of app-chains in the Cosmos ecosystem is:

  1. Complete resets/purges starting from a particular block.
  2. That block is not always the genesis block, but the reset allows software updates to be applied.
  3. Communities are usually informed well in advance of any testnet restarts/upgrades.

The issue here is that Kwil does not persist history by default.
Pruning historic blocks is (rightly so) the default.
Therefore, we may need a dedicated solution from Kwil for this.

@outerlook
Contributor

outerlook commented Aug 7, 2024

@zolotokrylin is it time for this goal before

?

Pros:

  • if we need to change something in our architecture, our nodes won't need to be redeployed; we probably just update their docker image

Cons:

  • Making changes to our contracts/logic becomes more challenging, because we need to coordinate contract drops, etc.

Alternative:
We ask other node operators to also reset their instance on our redeployments

@zolotokrylin
Contributor Author

@outerlook, these goals are more important and urgent:
