
add encointer runtime #80

Merged

Conversation

@brenzi (Contributor) commented Oct 27, 2023

In fulfillment of RFC22, this PR adds the Encointer runtime.

Replaces #17.

Notes

  • This runtime shall enable async backing at the conservative 12 s block time ("Phase 1" in Santi's tutorial); in a subsequent step we shall set a target block time of 6 s. (See the sketch after this list.)
  • We used the same upstream dependency versions as the other system chains on master.
  • The PR includes the runtime's commit history for reference.
  • Strictly uses published crates.io dependencies.
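
As a rough illustration of what "Phase 1" entails, here is a hedged sketch following the generic Cumulus async-backing pattern; the item names exist upstream, but the concrete values are assumptions, not code taken from this PR:

// Hedged sketch of a Phase-1 async-backing setup in a Cumulus runtime.
// Values are assumptions, not taken from this PR.

/// Relay chain slot duration, in milliseconds.
const RELAY_CHAIN_SLOT_DURATION_MILLIS: u32 = 6000;
/// At most one parachain block per relay chain parent.
const BLOCK_PROCESSING_VELOCITY: u32 = 1;
/// How many parachain blocks may await inclusion on the relay chain;
/// kept at 1 while the conservative 12 s block time is retained.
const UNINCLUDED_SEGMENT_CAPACITY: u32 = 1;

/// The consensus hook enforcing the constants above.
type ConsensusHook = cumulus_pallet_aura_ext::FixedVelocityConsensusHook<
    Runtime,
    RELAY_CHAIN_SLOT_DURATION_MILLIS,
    BLOCK_PROCESSING_VELOCITY,
    UNINCLUDED_SEGMENT_CAPACITY,
>;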

bkchr and others added 30 commits January 26, 2021 03:32
* Switch to custom `BlockAnnounceData`

Instead of sending a `SignedFullStatement`, this switches to a new struct
`BlockAnnounceData` that is sent alongside the block announcement.
The signed full statement contains the candidate commitments, meaning we
could end up sending a full runtime upgrade alongside a block
announcement... To prevent this, we now only send the candidate receipt
and the compact statement.

* Update to latest polkadot
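
A minimal sketch of the `BlockAnnounceData` shape implied above (type paths are assumptions; the exact upstream definition may differ):

use polkadot_primitives::{CandidateReceipt, CompactStatement, UncheckedSigned};

// Hedged sketch: only the candidate receipt and the compact statement
// travel with the block announcement, so the (potentially huge)
// candidate commitments are never sent along.
struct BlockAnnounceData {
    receipt: CandidateReceipt,
    statement: UncheckedSigned<CompactStatement>,
}
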
…ws#313)

This changes the collator to print an error if the block that we cannot
find is the genesis block, instead of only logging this as a `debug`
message. This should help people who have registered the wrong genesis
state on the relay chain.
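
Sketched, the behaviour amounts to the following (a hedged illustration, not the actual collator code):

use sp_core::H256;

// Hedged sketch: a missing *genesis* block almost always means the wrong
// genesis state was registered on the relay chain, so log it loudly.
fn report_missing_block(missing: H256, genesis: H256) {
    if missing == genesis {
        log::error!("Could not find the genesis block; was the correct genesis state registered on the relay chain?");
    } else {
        log::debug!("Could not find block {:?}", missing);
    }
}
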
* Complete telemetry for parachain & relaychain

* Update Substrate & Polkadot
…olkadot-fellows#315)

* Block announce validation should use the correct `Validation` result

The error variant is just for internal errors; we always need to return
`Failure` when the other node sends us an invalid statement.

* Update network/src/lib.rs

Co-authored-by: Sergei Shulepov <[email protected]>

Co-authored-by: Sergei Shulepov <[email protected]>
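
Paraphrased as a hedged sketch (the enum shape mirrors sc_network's `Validation` result; the surrounding function is an illustration only):

// Local stand-ins mirroring the shape of sc_network's block announce
// validation result.
enum Validation {
    Success { is_new_best: bool },
    Failure { disconnect: bool },
}

// Hedged sketch: peer misbehaviour maps to `Failure`; `Err` is reserved
// for internal errors on our side.
fn validate_announcement(
    statement_is_valid: bool,
) -> Result<Validation, Box<dyn std::error::Error>> {
    if statement_is_valid {
        Ok(Validation::Success { is_new_best: false })
    } else {
        // Not `Err(..)`: an invalid statement is the peer's fault.
        Ok(Validation::Failure { disconnect: true })
    }
}
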
* MQC auth

Update polkadot

WIP

* Update polkadot

* Silly syntax errors

* Fix typo

* Leave some comments and docs

* Apply suggestions from code review

Co-authored-by: Bastian Köcher <[email protected]>

* Introduce the MessageQueueChain structure

* Move the HRMP channel relevance check below

* Fix the `receive_hrmp_after_pause` test

* ValidationData is passed by reference

* Replace "to cumulus" with "to the collator"

* Update the test so that they are same as in polkadot

Co-authored-by: Bastian Köcher <[email protected]>
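
For context on the `MessageQueueChain` (MQC) introduced above, a hedged sketch of the idea (simplified from the upstream code; the exact hashing details are assumptions):

use sp_core::H256;
use sp_runtime::traits::{BlakeTwo256, Hash};

// A running hash chain over all messages of a queue: a single MQC head
// commits to the entire ordered message history.
struct MessageQueueChain(H256);

impl MessageQueueChain {
    fn extend(&mut self, sent_at: u32, message: &[u8]) -> &mut Self {
        let message_hash = BlakeTwo256::hash(message);
        // New head = hash(previous head, block number the message was
        // sent at, hash of the message itself).
        self.0 = BlakeTwo256::hash_of(&(self.0, sent_at, message_hash));
        self
    }
}
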
* Fix HRMP Location

* Add call to xcm handler
…adot-fellows#317)

* Update polkadot

* Migrate all uses of MQC heads to merkle proofs

* Mass rename `relay_parent_storage_root`

* Restore parachain-system tests

* Update polkadot and libp2p swarm for testing

* Collapse match into an if let

Co-authored-by: Bastian Köcher <[email protected]>

* Fix compilation error in test-service

Co-authored-by: Bastian Köcher <[email protected]>
* Move pallets to pallets folder and rename them

* Move genesis file to service

* Rename primitives to primitives-core

* Delete cumulus-runtime

* Move stuff to client folder and rename
This renames `SystemInherentData` to `ParachainInherentData` and moves it
into its own crate. Besides moving the struct, this also moves the code
for creating it into the same crate.
* Update Substrate & Polkadot

* Update chainspecs

* Update again to fix test
* Move consensus to consensus-common crate

* Move the parachain consensus out of the collator

* Add first relay chain consensus stuff

* Remove some warnings

* Fix more stuff

* Fix collator test

* Change `ParachainConsensus` to take a mutable self

* Make everything compile

* Feedback
…kadot-fellows#342)

Currently, validation data is shared via a well-known storage key between
the parachain system pallet and the validate block implementation. This
PR changes this by passing the parachain system pallet directly to the
validate block implementation. Besides that, we also store the
validation params in a thread-local variable to make them inspectable
by the parachain system. This moves the validation of validation data
and validation params directly into the parachain system pallet,
instead of having it hidden inside the validate block implementation.

Fixes: paritytech/cumulus#217
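
A hedged sketch of the thread-local part (all names here are placeholders for illustration, not the actual cumulus items):

use std::cell::RefCell;

// Placeholder for the real validation params handed to validate_block.
struct ValidationParams { /* relay parent storage root, PoV data, ... */ }

thread_local! {
    // validate_block stashes the params here so that the parachain
    // system pallet can inspect and validate them itself later.
    static VALIDATION_PARAMS: RefCell<Option<ValidationParams>> =
        RefCell::new(None);
}

fn set_validation_params(params: ValidationParams) {
    VALIDATION_PARAMS.with(|p| *p.borrow_mut() = Some(params));
}
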
* Update to latest Substrate and Polkadot

* log::debug!

* Add log dependencies to runtime

* Comma

* Fix tests
* Add a command to purge the relay chain only

* WIP

* Update rococo-parachains/src/cli.rs

Co-authored-by: Bastian Köcher <[email protected]>

* Move cli stuff to its own crate

* Copyright dates

* Test not working for some reason...

* WIP

* Revert "WIP"

This reverts commit f97cd63742c7df822e4a6e52a29db5e0f56b7bfa.

* Fix test to use provided relay chain

* Apply suggestions from code review

Co-authored-by: Bastian Köcher <[email protected]>

* Add hint about which database could not be purged

Co-authored-by: Bastian Köcher <[email protected]>
* Update Substrate & Polkadot

* Remove unused code
* Support xcm local execution in xcm handler.

* Add docs.
* Create builder for test nodes

* Fix syncing issue

* Adds missing file
@pifragile mentioned this pull request Jan 9, 2024
@ggwpez (Member) commented Jan 9, 2024

Some of the CI migration errors imply that there are pallet migrations missing. Especially for the scheduler 🙈

Balances: On chain storage version StorageVersion(0) doesn't match current storage version StorageVersion(1).
PolkadotXcm: On chain storage version StorageVersion(0) doesn't match current storage version StorageVersion(1).
Scheduler: On chain storage version StorageVersion(0) doesn't match current storage version StorageVersion(4).

I think we should not ignore this.

Comment on lines +696 to +713
/// Migrations to apply on runtime upgrade.
pub type Migrations = (
    // Fixing the scheduler with a local migration is necessary because we missed intermediate
    // migrations. The safest migration is therefore to clear all storage and bump the
    // StorageVersion.
    migrations_fix::scheduler::v4::MigrateToV4<Runtime>,
    // Here, too, we are actually too late with applying the migration; however, the migration
    // does work as-is.
    pallet_xcm::migration::v1::VersionUncheckedMigrateToV1<Runtime>,
    // Balances are more tricky: we missed the migration to V1 and now have inconsistent state
    // which can't be decoded to V0, yet the StorageVersion is V0.
    // The strategy is to first pretend we're on V1,
    migrations_fix::balances::v1::BruteForceToV1<Runtime>,
    // then reset to V0,
    pallet_balances::migration::ResetInactive<Runtime>,
    // and then apply the proper migration as we should have done earlier.
    pallet_balances::migration::MigrateToTrackInactive<Runtime, xcm_config::CheckingAccount>,
);
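
For reference, such a `Migrations` tuple is normally passed to the runtime's `Executive`, which executes it once during `on_runtime_upgrade` (the usual FRAME pattern, sketched here; not a quote from this PR):

pub type Executive = frame_executive::Executive<
    Runtime,
    Block,
    frame_system::ChainContext<Runtime>,
    Runtime,
    AllPalletsWithSystem,
    Migrations, // custom migrations executed on runtime upgrade
>;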

@brenzi (Contributor, Author) commented Jan 9, 2024

The CI in this repo has brought to light that we failed to perform storage migrations in the past. My guess is that this previously went unnoticed because of bogus feature propagation for try-runtime, which has now been fixed with zepter (we did use try-runtime before and had migrations in our own pallets, but these really did go unnoticed!).

With these local extra migrations, we should be able to recover without damage.

Member

Okay, thanks. It is rather simple to run the try-runtime check locally, in case you need to debug it (GitHub is a bit slow currently).

Example here: https://github.com/paritytech/try-runtime-cli

--runtime ./target/production/wbuild/$PACKAGE_NAME/$RUNTIME_BLOB_NAME \
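
A hedged sketch of a full local invocation (the subcommand layout follows the try-runtime-cli README; the RPC endpoint is a placeholder):

try-runtime \
    --runtime ./target/production/wbuild/$PACKAGE_NAME/$RUNTIME_BLOB_NAME \
    on-runtime-upgrade live --uri wss://<encointer-rpc-endpoint>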

Member

Looks good now.

Contributor Author

Yes and no. It runs through and succeeds, but it complains about corrupt state for one key:

error: [2024-01-09T14:08:47Z ERROR frame_support::storage] old key failed to decode at [227, 143, 24, 82, 7, 73, 138, 187, 92, 33, 61, 15, 176, 89, 179, 216, 216, 55, 28, 116, 117, 23, 21, 29, 134, 63, 146, 111, 129, 151, 24, 223, 191, 178, 127, 30, 174, 240, 107, 185, 3, 0, 0, 0, 96, 15, 180, 143, 173, 7, 149, 151, 21, 112, 71, 148, 112, 212, 184, 115, 3, 1, 0]

Converting the key to hex: 0xE38F185207498ABB5C213D0FB059B3D8D8371C747517151D863F926F819718DFBFB27F1EAEF06BB903000000600FB48FAD0795971570479470D4B873030100

Looking up the on-chain raw storage gives
value hex: 0x600a000000000000000003000000

It's hard to guess what key and what value that is, but I'll investigate.

Member

Looks like something in PolkadotXcm::VersionNotifyTargets.
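
That attribution can be checked mechanically, since FRAME storage keys start with twox_128(pallet prefix) ++ twox_128(storage item name). A hedged sketch (using sp-core's hashing helpers and the hex crate):

use sp_core::hashing::twox_128;

fn main() {
    // FRAME raw keys begin with twox_128(pallet) ++ twox_128(item).
    let mut prefix = twox_128(b"PolkadotXcm").to_vec();
    prefix.extend_from_slice(&twox_128(b"VersionNotifyTargets"));
    // If the attribution is right, this prints the first 32 bytes of the
    // failing key quoted above.
    println!("0x{}", hex::encode(prefix));
}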

@brenzi (Contributor, Author) commented Jan 10, 2024

Good spot. I can confirm that!

[screenshot confirming the PolkadotXcm::VersionNotifyTargets entry]

From the looks of it, nothing too bad can happen here. Can we accept that risk? I would. AFAIU, that storage gets written anyway when we send an XCM to Encointer, right?

@ggwpez (Member) commented Jan 10, 2024

I will ask in the XCM chat what this means since I have no clue.

Contributor Author

Thank you. If we need to take action and modify the migrations, I suggest doing that in a separate PR.

Contributor

The cleanest way is to remove this key altogether; however, this would mean that the relay chain would not be notified of any XCM version changes on Encointer. To remedy this, the relay chain would have to resend a SubscribeVersion instruction to Encointer.
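
If that route were taken, a one-off migration could kill the raw key directly; a hedged sketch (not code from this PR; weight accounting omitted):

use frame_support::{storage::unhashed, traits::OnRuntimeUpgrade, weights::Weight};

pub struct RemoveStaleVersionNotifyTarget;

impl OnRuntimeUpgrade for RemoveStaleVersionNotifyTarget {
    fn on_runtime_upgrade() -> Weight {
        // Raw key of the undecodable entry, taken from the log above.
        let key = hex_literal::hex!("e38f185207498abb5c213d0fb059b3d8d8371c747517151d863f926f819718dfbfb27f1eaef06bb903000000600fb48fad0795971570479470d4b873030100");
        unhashed::kill(&key);
        // A real migration would account for one storage write here.
        Weight::zero()
    }
}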

@brenzi (Contributor, Author) commented Jan 10, 2024

@joepetrowski your remarks have been addressed. May I ask you to check and consider approving?
CI passed, so I guess we can merge if you agree.

@joepetrowski (Contributor)

This should have a CHANGELOG entry too (and, as it is probably the first post-release commit, under a new [Unreleased] section).

@brenzi (Contributor, Author) commented Jan 10, 2024

/merge

@fellowship-merge-bot (bot) merged commit 6334588 into polkadot-fellows:main Jan 10, 2024
14 checks passed
@fellowship-merge-bot (Contributor)

Enabled auto-merge in Pull Request

Available commands
  • /merge: Enables auto-merge for Pull Request
  • /merge cancel: Cancels auto-merge for Pull Request
  • /merge help: Shows this menu

For more information see the documentation

@ggwpez mentioned this pull request Feb 28, 2024