From fea48cb94b4227262dce8a711864f80ed8f9ee46 Mon Sep 17 00:00:00 2001 From: bengtlofgren Date: Mon, 31 Jul 2023 12:23:00 +0100 Subject: [PATCH 01/13] base-ledger changes --- packages/specs/pages/base-ledger.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/packages/specs/pages/base-ledger.mdx b/packages/specs/pages/base-ledger.mdx index 2f7dd00e..a86eda6e 100644 --- a/packages/specs/pages/base-ledger.mdx +++ b/packages/specs/pages/base-ledger.mdx @@ -1,3 +1,3 @@ ## Base ledger -The base ledger of Namada includes a [consensus system](./base-ledger/consensus.md), validity predicate-based [execution system](./base-ledger/execution.md), and signalling-based [governance mechanism](./base-ledger/governance.md). Namada's ledger also includes proof-of-stake, slashing, fees, and inflation funding for staking rewards, shielded pool incentives, and public goods — these are specified in the [economics section](./economics.md). This section also documents Namada's [multisignature VP](./base-ledger/multisignature.md), [fungible token VP](./base-ledger/fungible-token.md), [replay protection system](./base-ledger/replay-protection.md), and [block space allocator](./base-ledger/block-space-allocator.md). +The base ledger of Namada includes a [consensus mechanism](./base-ledger/consensus.md) and a validity predicate-based [execution system](./base-ledger/execution.md). In addition to describing the execution model and consensus, this section also documents Namada's [replay protection system](./base-ledger/replay-protection.md), and [block space allocator](./base-ledger/block-space-allocator.md). From 0bc9cc0224a3b20dc944db7abf41837cd25cd669 Mon Sep 17 00:00:00 2001 From: bengtlofgren Date: Thu, 3 Aug 2023 23:00:30 +0100 Subject: [PATCH 02/13] new specs --- packages/specs/pages/base-ledger.mdx | 13 +++- .../specs/pages/base-ledger/consensus.mdx | 16 +++- .../specs/pages/base-ledger/execution.mdx | 22 ++++-- .../pages/base-ledger/fungible-token.mdx | 2 + .../pages/base-ledger/multisignature.mdx | 77 ++++++++++--------- .../pages/base-ledger/replay-protection.mdx | 2 +- 6 files changed, 85 insertions(+), 47 deletions(-) diff --git a/packages/specs/pages/base-ledger.mdx b/packages/specs/pages/base-ledger.mdx index a86eda6e..042e5ed0 100644 --- a/packages/specs/pages/base-ledger.mdx +++ b/packages/specs/pages/base-ledger.mdx @@ -1,3 +1,12 @@ -## Base ledger +# Base ledger -The base ledger of Namada includes a [consensus mechanism](./base-ledger/consensus.md) and a validity predicate-based [execution system](./base-ledger/execution.md). In addition to describing the execution model and consensus, this section also documents Namada's [replay protection system](./base-ledger/replay-protection.md), and [block space allocator](./base-ledger/block-space-allocator.md). +The base ledger of Namada includes a [consensus mechanism](./base-ledger/consensus.md) and a validity predicate-based [execution system](./base-ledger/execution.md). + +## Consensus +The consensus mechanism on Namada provides an algorithmic way for validators to communicate votes and collectively agree on a consistent state. The algorithm, coupled with a cryptoeconomic assurance called "proof of stake", ensures that non-colluding validators acting in their (economic) self-interest will follow the consensus algorithm in a predictable manner. + +## Validity predicates +The validity predicate-based execution mechanism is inherited from the architectural design philosophy of Anoma.
The fundamental idea is that a "valid state" is defined as that which satisfies a set of boolean conditions. These boolean conditions are encoded by functional "validity predicates", which are invoked whenever a state is being proposed. If all validity predicates in the system return the boolean `true`, this defines a valid state which validators can vote on. The validity predicate-based mechanism differs from the traditional "smart-contract" based execution model, where a valid state is instead defined as that which results from a series of pre-defined valid execution steps. These execution steps are defined within the smart contract, and verifying the validity of the new state requires *each* validator to run the series of execution steps. + +## Replay protection & block space allocator +In addition to describing the execution model and consensus, this section also documents Namada's [replay protection system](./base-ledger/replay-protection.md), and [block space allocator](./base-ledger/block-space-allocator.md). diff --git a/packages/specs/pages/base-ledger/consensus.mdx b/packages/specs/pages/base-ledger/consensus.mdx index de45d842..3c2c7437 100644 --- a/packages/specs/pages/base-ledger/consensus.mdx +++ b/packages/specs/pages/base-ledger/consensus.mdx @@ -1,3 +1,17 @@ # Consensus -Namada uses [CometBFT](https://github.com/cometbft/cometbft/) (nee Tendermint Go) through the [cometbft-rs](https://github.com/heliaxdev/tendermint-rs) (nee tendermint-rs) bindings in order to provide peer-to-peer transaction gossip, BFT consensus, and state machine replication for Namada's custom state machine. CometBFT implements the Tendermint consensus algorithm, which you can read more about [here](https://arxiv.org/abs/1807.04938). \ No newline at end of file +Namada uses [CometBFT](https://github.com/cometbft/cometbft/) (formerly Tendermint Go) through the [cometbft-rs](https://github.com/heliaxdev/tendermint-rs) (formerly tendermint-rs) bindings in order to provide peer-to-peer transaction gossip, Byzantine fault-tolerant (BFT) consensus, and state machine replication for Namada's custom state machine. CometBFT implements the Tendermint consensus algorithm, which you can read more about [here](https://arxiv.org/abs/1807.04938). + +{/* Maybe we want to leave out the below section. I just felt we should give a brief summary of CometBFT */} +## The benefits of using CometBFT + +Using the CometBFT consensus algorithm comes with a number of benefits including but not limited to: + +- Fast finality + - Simplifying cross-blockchain communication +- Inter-blockchain communication system (IBC) + - Composability with all other Tendermint-based blockchains, such as Cosmos-ecosystem blockchains +- Battle tested + {/* - TODO: enter number of blockchains that have been using Tendermint */} +- Customisable + - Allows the setting of various parameters, including the ability to implement a custom proof-of-stake algorithm \ No newline at end of file diff --git a/packages/specs/pages/base-ledger/execution.mdx b/packages/specs/pages/base-ledger/execution.mdx index 213d1560..d6cb05ac 100644 --- a/packages/specs/pages/base-ledger/execution.mdx +++ b/packages/specs/pages/base-ledger/execution.mdx @@ -1,18 +1,29 @@ # Execution -The Namada ledger execution system is based on an initial version of the Anoma execution model.
The system implements a generic computational substrate with WASM-based transactions and validity predicate verification architecture, on top of which specific features of Namada such as IBC, proof-of-stake, and the MASP are built. +The Namada ledger execution system is based on an initial version of the Anoma execution model. The system implements a generic computational substrate with WASM-based transactions and validity predicate verification. It is on top of this system which specific features of Namada such as IBC, proof-of-stake, and the MASP are built. ## Validity predicates -Conceptually, a validity predicate (VP) is a function from the transaction's data and the storage state prior and posterior to a transaction execution returning a boolean value. A transaction may modify any data in the accounts' dynamic storage sub-space. Upon transaction execution, the VPs associated with the accounts whose storage has been modified are invoked to verify the transaction. If any of them reject the transaction, all of its storage modifications are discarded; if all accept, the storage modifications are written. +Conceptually, a validity predicate (VP) is a boolean function which takes three inputs: +1. The transaction's data {/* TODO: I am actually a bit unclear about what the point in this is. Is this just the diff between prior and posterior storage state? Is it the signatures? */} +2. The storage state prior to a transaction execution +3. The storage state after the transaction execution + +A transaction may modify any data in the accounts' dynamic storage sub-space. Upon transaction execution, the VPs associated with the accounts whose storage has been modified are invoked to verify the transaction. If any of them reject the transaction, all of its storage modifications are discarded; if all accept, the storage modifications are written. ## Namada ledger -The Namada ledger is built on top of [Tendermint](https://docs.tendermint.com/master/spec/)'s [ABCI](https://docs.tendermint.com/master/spec/abci/) interface with a slight deviation from the ABCI convention: in Namada, the transactions are currently *not* being executed in ABCI's [`DeliverTx` method](https://docs.tendermint.com/master/spec/abci/abci.html), but rather in the [`EndBlock` method](https://docs.tendermint.com/master/spec/abci/abci.html). The reason for this is to prepare for future DKG and threshold decryption integration. +The Namada ledger is built on top of {/* TODO: Fix links below to point to cometbft */} +[CometBFT](https://docs.cometbft.com/master/spec/)'s [ABCI](https://docs.tendermint.com/master/spec/abci/) interface with a slight deviation from the ABCI convention: in Namada, the transactions are currently *not* being executed in ABCI's [`DeliverTx` method](https://docs.tendermint.com/master/spec/abci/abci.html), but rather in the [`EndBlock` method](https://docs.tendermint.com/master/spec/abci/abci.html). {/* TODO: I don't know what we want to say about the above. Maybe delete the below sentence entirely? */} +The reason for this is to prepare for future DKG and threshold decryption integration. + +The ledger features an account-based system (in which UTXO-based systems such as the MASP can be internally implemented as specific accounts), where each account has a unique address and a dynamic key-value storage sub-space. Every account in Namada is associated with exactly one validity predicate. {/* TODO: is the below still true after removing the token vp? 
I'm assuming yes */} +Fungible tokens, for example, are accounts, whose rules are governed by their validity predicates. Many of the base ledger subsystems specified here are themselves just special Namada accounts too (e.g. PoS, IBC and MASP). This model is broadly similar to that of {/* TODO: Ethereum link */} +[Ethereum](https://ethereum.org), where each account is associated with contract code, but differs in the execution model. -The ledger features an account-based system (in which UTXO-based systems such as the MASP can be internally implemented as specific accounts), where each account has a unique address and a dynamic key-value storage sub-space. Every account in Namada is associated with exactly one validity predicate. Fungible tokens, for example, are accounts, whose rules are governed by their validity predicates. Many of the base ledger subsystems specified here are themselves just special Namada accounts too (e.g. PoS, IBC and MASP). This model is broadly similar to that of Ethereum, where each account is associated with contract code, but differs in the execution model. +Interactions with the Namada ledger are made possible via transactions. In Namada, transactions are allowed to perform arbitrary modifications to the storage of any account, but the transaction will be accepted and state changes applied only if all the validity predicates that were triggered by the transaction accept it. That is, the accounts whose storage sub-spaces were touched by the transaction will all have their validity predicates verifying the transaction. A transaction may also explicitly elect an account as the verifier of that transaction, which will result in that validity predicate being invoked as well. A transaction can add any number of additional verifiers, but cannot remove the ones determined by the protocol. For example, a transparent fungible token transfer would typically trigger 3 validity predicates - those of the token, source and target addresses. -Interactions with the Namada ledger are made possible via transactions. In Namada, transactions are allowed to perform arbitrary modifications to the storage of any account, but the transaction will be accepted and state changes applied only if all the validity predicates that were triggered by the transaction accept it. That is, the accounts whose storage sub-spaces were touched by the transaction and/or an account that was explicitly elected by the transaction as the verifier will all have their validity predicates verifying the transaction. A transaction can add any number of additional verifiers, but cannot remove the ones determined by the protocol. For example, a transparent fungible token transfer would typically trigger 3 validity predicates - those of the token, source and target addresses. +{/* TODO: Explain how the ledger knows which validity predicates to invoke based on a transaction. This is not clear to me. */} ## Supported validity predicates @@ -20,6 +31,7 @@ While the execution model is fully programmable, for Namada only a selected subs There are some native VPs for internal transparent addresses that are built into the ledger. All the other VPs are implemented as WASM programs. One can build a custom VP using the [VP template](https://github.com/anoma/namada/tree/master/wasm/vp_template) or use one of the pre-defined VPs. 
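For intuition, a custom VP is conceptually just a boolean function over the pre/post state of the storage keys a transaction touched. The sketch below is illustrative only: the names (`VpInput`, `validate_tx`) are hypothetical and do not reflect the actual VP template or the ledger's WASM interface, and a real VP would also receive the raw transaction data.

```rust
use std::collections::{BTreeMap, BTreeSet};

/// Hypothetical inputs to a validity predicate: the address whose storage
/// sub-space was touched, the set of storage keys the transaction modified,
/// and snapshots of those keys before and after execution.
struct VpInput {
    owner: String,
    keys_changed: BTreeSet<String>,
    pre: BTreeMap<String, u64>,
    post: BTreeMap<String, u64>,
}

/// A VP is conceptually a pure boolean function over these inputs.
/// Example rule: every touched key must live under the owner's sub-space and
/// its value must not decrease across the transaction.
fn validate_tx(input: &VpInput) -> bool {
    input.keys_changed.iter().all(|key| {
        let in_owner_space = key.starts_with(format!("/{}/", input.owner).as_str());
        let pre = input.pre.get(key).copied().unwrap_or(0);
        let post = input.post.get(key).copied().unwrap_or(0);
        in_owner_space && post >= pre
    })
}

fn main() {
    let mut keys_changed = BTreeSet::new();
    keys_changed.insert("/alice/counter".to_string());
    let pre = BTreeMap::from([("/alice/counter".to_string(), 1u64)]);
    let post = BTreeMap::from([("/alice/counter".to_string(), 2u64)]);
    let input = VpInput { owner: "alice".to_string(), keys_changed, pre, post };
    // The value under Alice's sub-space only increased, so this sketch accepts.
    assert!(validate_tx(&input));
}
```

A real VP would be compiled to WASM and invoked by the ledger with the touched keys and storage snapshots of the account it guards, as the list of supported predicates below indicates.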
+{/* TODO: Make sure this is up to date with the ledger */} Supported validity predicates for Namada: - Native - Proof-of-stake (see [spec](../economics/proof-of-stake.md)) diff --git a/packages/specs/pages/base-ledger/fungible-token.mdx b/packages/specs/pages/base-ledger/fungible-token.mdx index 4ae110ca..4f4c9c01 100644 --- a/packages/specs/pages/base-ledger/fungible-token.mdx +++ b/packages/specs/pages/base-ledger/fungible-token.mdx @@ -1,5 +1,7 @@ # Fungible token +{/* TODO: Unfamiliar with this VP, should be looked at */} + The fungible token validity predicate authorises token balance changes on the basis of conservation-of-supply and approval-by-sender. Namada implements a "multitoken" validity predicate, in that all tokens have the same logic and can share one VP (with appropriate storage distinctions). A token balance is stored with a storage key. The token balance key should be `#Multitoken/{token_addr}/balance/{owner_addr}`. These keys can be made with [token functions](https://github.com/anoma/namada/blob/5da82f093f10c0381865accba99f60c557360c51/core/src/types/token.rs). diff --git a/packages/specs/pages/base-ledger/multisignature.mdx b/packages/specs/pages/base-ledger/multisignature.mdx index 6319def2..f2204439 100644 --- a/packages/specs/pages/base-ledger/multisignature.mdx +++ b/packages/specs/pages/base-ledger/multisignature.mdx @@ -1,10 +1,18 @@ +import { Callout } from 'nextra-theme-docs' + # k-of-n multisignature -The k-of-n multisignature validity predicate authorizes transactions on the basis of k out of n parties approving them. This document targets the encrypted (inner) WASM transactions. Namada does not support multiple signers on wrapper or protocol transactions. +The k-of-n multisignature validity predicate authorizes transactions on the basis of k out of n parties approving them. This document targets the encrypted (inner) WASM transactions. Namada does not support multiple signers on wrapper or protocol transactions. The signer of the wrapper transaction is the fee-payer of that transaction. ## Protocol -Namada transactions get signed before being delivered to the network. This signature is then checked by the VPs to determine the validity of the transaction. To support multisignature we need to modify the current `SignedTxData` struct to the following: +Namada transactions are signed before being delivered to the network. This signature is checked by the invoked validity predicates to determine the validity of the transaction. To support multisignature, Namada's signed transaction data includes the plaintext data of what is being signed, as well as all valid signatures over that data. + +Inherently, this implies that all user accounts are 1-of-1 multisignature accounts. + +### Rust implementation + +A rust implementation of a `SignedTxData` struct is described below: ```rust pub struct SignedTxData { @@ -16,65 +24,51 @@ pub struct SignedTxData { } ``` -The `sig` field now holds a vector of tuples where the first element is an 8-bit integer and the second one is a signature. The integer serves as an index to match a specific signature to one of the public keys in the list of accepted ones. This way, we can improve the verification algorithm and check each signature only against the public key at the provided index ($\mathcal{O}(n)$), without the need to cycle on all of them which would be $\mathcal{O}(n^2)$. - -This means that non-multisig addresses will now be implemented as 1-of-1 multisig accounts (but this difference is transparent to the user). 
+The `sig` field holds a vector of tuples where the first element is an 8-bit integer and the second one is a signature. The integer serves as an index to match a specific signature to one of the public keys in the list of accepted ones. This improves the verification algorithm by reducing its execution complexity to $\mathcal{O}(n)$: each signature is checked only against the public key at the provided index, without the need to iterate over the whole list, which would be $\mathcal{O}(n^2)$. ## VPs -Since all the addresses will be multisig ones, we will keep using the already available `vp_user` as the default validity predicate. The only modification required is the signature check which must happen on a set of signatures instead of a single one. +All user-owned accounts use the default `vp_user` validity predicate for verifying state changes. +The `vp_user` validity predicate asserts that at least `k` out of `n` valid signatures are present in the signed transaction data for every transaction originating from the user. -To perform the validity checks, the VP will need to access two types of information: +To perform the validity checks, the VP requires two pieces of information: -1. The multisig threshold +1. The multisignature threshold 2. A list of valid signers' public keys -This data defines the requirements of a valid transaction operating on the multisignature address and it will be written in storage when the account is created: +This data defines the requirements of a valid transaction operating on the multisignature address and is written in storage when the account is created. -``` +The corresponding storage layout is shown below: +```rust /$Address/threshold/: u8 /$Address/pubkeys/: LazyVec ``` -The `LazyVec` struct will split all of its elements on different subkeys in storage so that we won't need to load the entire vector of public keys in memory for validation but just the ones pointed by the indexes in the `SignedTxData` struct. - -To verify the correctness of the signatures, this VP will proceed with a two-step verification process: +To verify the correctness of the signatures, the VP checks the following conditions: -1. Check to have enough **unique** signatures for the given threshold -2. Check to have enough **valid** signatures for the given threshold +1. Enough **unique** signatures for the given threshold are provided +2. Enough **valid** signatures for the given threshold are provided -Step 1 allows us to short-circuit the validation process and avoid unnecessary processing and storage access. Each signature will be validated **only** against the public key found in the list at the specified index. Step 2 will halt as soon as it retrieves enough valid signatures to match the threshold, meaning that the remaining signatures will not be verified. +If implemented in this fashion, a couple of efficiency gains are made: +Step 1 allows for short-circuiting the validation process and avoids unnecessary processing and storage access. Each signature is validated **only** against the public key found in the list at the specified index. Step 2 halts as soon as it retrieves enough valid signatures to match the threshold, meaning that the remaining signatures are not unnecessarily verified. ## Addresses -The vp introduced in the previous section is available for `established` addresses. To generate a multisig account we need to modify the `InitAccount` struct to support multiple public keys and a threshold, as follows: +The VP introduced in the previous section is available for all `established` addresses on Namada.
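Before turning to how such accounts are initialised, the two-step check described in the previous section can be sketched roughly as follows. This is an illustration only, not the actual `vp_user` code: the helper names are hypothetical and real signature verification is abstracted behind a callback.

```rust
use std::collections::BTreeSet;

/// Hypothetical shape of an attached signature: (index into the account's
/// public-key list, signature bytes).
type IndexedSig = (u8, Vec<u8>);

/// Sketch of the k-of-n check; `verify` stands in for real verification of
/// the signed data against a public key.
fn check_multisig(
    threshold: usize,
    pubkeys: &[Vec<u8>],
    sigs: &[IndexedSig],
    verify: impl Fn(&[u8], &[u8]) -> bool,
) -> bool {
    // Step 1: short-circuit if there are not even enough *unique* indices.
    let unique: BTreeSet<u8> = sigs.iter().map(|(idx, _)| *idx).collect();
    if unique.len() < threshold {
        return false;
    }

    // Step 2: count *valid* signatures, each checked only against the public
    // key at its declared index; stop as soon as the threshold is reached.
    let mut counted = BTreeSet::new();
    let mut valid = 0;
    for (idx, sig) in sigs {
        if counted.contains(idx) {
            continue; // each key may contribute at most one valid signature
        }
        if let Some(pk) = pubkeys.get(*idx as usize) {
            if verify(pk, sig) {
                counted.insert(*idx);
                valid += 1;
                if valid >= threshold {
                    return true;
                }
            }
        }
    }
    false
}
```

Under this sketch, an ordinary account is simply the degenerate 1-of-1 case noted earlier.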
Whenever a user wishes to initialise an `established` address as a multisignature address, they must provide the multisignature threshold and the public keys of the valid signers. By default, the threshold is set to 1 if not provided. At least one key must be provided as a valid signing key for the VP to check against. -```rust -pub struct InitAccount { - /// The VP code - pub vp_code: Vec, - /// Multisig threshold for k-of-n - pub threshold: u8, - /// Multisig signers' pubkeys to be written into the account's storage. This can be used - /// for signature verification of transactions for the newly created - /// account. - pub pubkeys: Vec -} -``` - -Finally, the tx performs the following writes to storage: - -- The multisig vp -- The threshold -- The list of public keys of the signers +Once this transaction is executed, the following storage writes are made under the storage subspace of that user: +- The `vp_user` address +- The threshold of the multisignature account +- The list of public keys for the signers of the multisignature account Multisignature accounts can also be initialised at genesis time - in this case, the requisite parameters are kept in the genesis file and written to storage during initialisation. ## Multisig account init validation {/* TODO: Solidify the below into what actually is implemented */} Since the VP of an established account does not get triggered at account creation, no checks will be run on the multisignature parameters, meaning that the creator could provide incorrect data. To perform validation at the time of account creation, Namada could: 1. Write in storage the addresses together with the public keys to trigger their VPs 2. Manually trigger the multisig VP even at creation time @@ -88,10 +82,17 @@ Solution 2 would perform just a partial check since the logic of the VP will rev Finally, solution 3 would require an internal VP dedicated to the management of multisig addresses' parameters both at creation and modification time. This could implement a logic based on the threshold or a logic requiring a signature by all the members to initialize/modify a multisig account's parameters. The former effectively collapses to the VP of the account itself (making the internal VP redundant), while the latter has the same problem as solution 1. {/* TODO: Is this the final state? */} In the end, we don't implement any of these checks and will leave the responsibility to the signer of the transaction creating the address: in case of an error they can simply submit a new transaction to generate the correct account. On the other hand, the participants of a multisig account can refuse to sign transactions if they don't agree on the parameters defining the account itself. ## Transaction construction -To craft a multisigned transaction, the involved parties will need to coordinate. More specifically, the transaction will be constructed by one entity which will then distribute it to the signers and collect their signatures: note that the constructing party doesn't necessarily need to be one of the signers. Finally, these signatures will be inserted in the `SignedTxData` struct so that it can be encrypted, wrapped and submitted to the network. +In order to craft a multisignature transaction, the involved parties must coordinate.
More precisely, the transaction itself is constructed by one entity which will distribute the constructed transaction to the other signers and collect their signatures. + + +Note that the constructing party doesn't necessarily need to be one of the signers. + + +Finally, these signatures are inserted in the `SignedTxData` struct to be encrypted, wrapped and submitted to the network. Namada does not provide a layer to support this process, so the involved parties will need to rely on an external communication mechanism. diff --git a/packages/specs/pages/base-ledger/replay-protection.mdx b/packages/specs/pages/base-ledger/replay-protection.mdx index 1c97aafd..f6ff66bb 100644 --- a/packages/specs/pages/base-ledger/replay-protection.mdx +++ b/packages/specs/pages/base-ledger/replay-protection.mdx @@ -7,7 +7,7 @@ malicious user resubmitting an already executed transaction (often shortened to A replay attack causes the state of the machine to deviate from the intended one (from the perspective of the parties involved in the original transaction) and causes economic damage to the fee payer of the original transaction, who finds -himself paying more than once. Further economic damage is caused if the +themselves paying more than once. Further economic damage is caused if the transaction involved the moving of value in some form (e.g. a transfer of tokens) with the sender being deprived of more value than intended. From 8df1f36971988d7bc5ccee66fef94552e3091314 Mon Sep 17 00:00:00 2001 From: bengtlofgren Date: Fri, 4 Aug 2023 00:35:51 +0200 Subject: [PATCH 03/13] fixed some links and gian answered my qs --- packages/specs/pages/base-ledger/consensus.mdx | 4 ++-- packages/specs/pages/base-ledger/execution.mdx | 17 ++++++++--------- 2 files changed, 10 insertions(+), 11 deletions(-) diff --git a/packages/specs/pages/base-ledger/consensus.mdx b/packages/specs/pages/base-ledger/consensus.mdx index 3c2c7437..d33d49a9 100644 --- a/packages/specs/pages/base-ledger/consensus.mdx +++ b/packages/specs/pages/base-ledger/consensus.mdx @@ -8,10 +8,10 @@ Using the CometBFT consensus algorithm comes with a number of benefits including but not limited to: - Fast finality - - Simplifying cross-blockchain communication + - Tendermint achieves fast and deterministic finality, meaning that once a block is committed to the blockchain, it is irreversible. This is crucial for applications that rely on settled transactions, which cannot be rolled back. - Inter-blockchain communication system (IBC) - Composability with all other Tendermint-based blockchains, such as Cosmos-ecosystem blockchains - Battle tested - {/* - TODO: enter number of blockchains that have been using Tendermint */} + - The entire Cosmos ecosystem has been using Tendermint - Customisable - Allows the setting of various parameters, including the ability to implement a custom proof-of-stake algorithm \ No newline at end of file diff --git a/packages/specs/pages/base-ledger/execution.mdx b/packages/specs/pages/base-ledger/execution.mdx index d6cb05ac..047c9be1 100644 --- a/packages/specs/pages/base-ledger/execution.mdx +++ b/packages/specs/pages/base-ledger/execution.mdx @@ -4,26 +4,25 @@ The Namada ledger execution system is based on an initial version of the Anoma e ## Validity predicates -Conceptually, a validity predicate (VP) is a boolean function which takes three inputs: -1. The transaction's data {/* TODO: I am actually a bit unclear about what the point in this is.
Is this just the diff between prior and posterior storage state? Is it the signatures? */} -2. The storage state prior to a transaction execution -3. The storage state after the transaction execution +Conceptually, a validity predicate (VP) is a boolean function which takes four inputs: +1. The transaction itself (This may be because certain parts of the transaction needs to be extracted in the VP logic) +2. The addresses that are involved with that specific VP +3. The storage state prior to a transaction execution +4. The storage state after the transaction execution A transaction may modify any data in the accounts' dynamic storage sub-space. Upon transaction execution, the VPs associated with the accounts whose storage has been modified are invoked to verify the transaction. If any of them reject the transaction, all of its storage modifications are discarded; if all accept, the storage modifications are written. ## Namada ledger -The Namada ledger is built on top of {/* TODO: Fix links below to point to cometbft */} -[CometBFT](https://docs.cometbft.com/master/spec/)'s [ABCI](https://docs.tendermint.com/master/spec/abci/) interface with a slight deviation from the ABCI convention: in Namada, the transactions are currently *not* being executed in ABCI's [`DeliverTx` method](https://docs.tendermint.com/master/spec/abci/abci.html), but rather in the [`EndBlock` method](https://docs.tendermint.com/master/spec/abci/abci.html). {/* TODO: I don't know what we want to say about the above. Maybe delete the below sentence entirely? */} +The Namada ledger is built on top of [CometBFT](https://docs.cometbft.com/v0.37/spec/)'s [ABCI](https://docs.cometbft.com/v0.37/spec/abci/) interface with a slight deviation from the ABCI convention: in Namada, the transactions are currently *not* being executed in ABCI's [`DeliverTx` method](https://docs.cometbft.com/v0.37/spec/abci/abci++_methods#delivertx), but rather in the [`EndBlock` method](https://docs.cometbft.com/v0.37/spec/abci/abci++_methods#endblock). {/* TODO: I don't know what we want to say about the above. Maybe delete the below sentence entirely? */} The reason for this is to prepare for future DKG and threshold decryption integration. -The ledger features an account-based system (in which UTXO-based systems such as the MASP can be internally implemented as specific accounts), where each account has a unique address and a dynamic key-value storage sub-space. Every account in Namada is associated with exactly one validity predicate. {/* TODO: is the below still true after removing the token vp? I'm assuming yes */} -Fungible tokens, for example, are accounts, whose rules are governed by their validity predicates. Many of the base ledger subsystems specified here are themselves just special Namada accounts too (e.g. PoS, IBC and MASP). This model is broadly similar to that of {/* TODO: Ethereum link */} -[Ethereum](https://ethereum.org), where each account is associated with contract code, but differs in the execution model. +The ledger features an account-based system (in which UTXO-based systems such as the MASP can be internally implemented as specific accounts), where each account has a unique address and a dynamic key-value storage sub-space. Every account in Namada is associated with exactly one validity predicate. Fungible tokens, for example, are accounts, whose rules are governed by their validity predicates. Many of the base ledger subsystems specified here are themselves just special Namada accounts too (e.g. PoS, IBC and MASP). 
This model is broadly similar to that of [Ethereum](https://ethereum.org/en), where each account is associated with contract code, but differs in the execution model. Interactions with the Namada ledger are made possible via transactions. In Namada, transactions are allowed to perform arbitrary modifications to the storage of any account, but the transaction will be accepted and state changes applied only if all the validity predicates that were triggered by the transaction accept it. That is, the accounts whose storage sub-spaces were touched by the transaction will all have their validity predicates verifying the transaction. A transaction may also explicitly elect an account as the verifier of that transaction, which will result in that validity predicate being invoked as well. A transaction can add any number of additional verifiers, but cannot remove the ones determined by the protocol. For example, a transparent fungible token transfer would typically trigger 3 validity predicates - those of the token, source and target addresses. {/* TODO: Explain how the ledger knows which validity predicates to invoke based on a transaction. This is not clear to me. */} +The ledger knows what addresses are involved in a wasm transaction because of how the storage is constructed. Each variable in storage is inherently tied to a substorage owned by some account, and thus that VP is invoked. ## Supported validity predicates From 11701f33977d6347560345cc0645eb617140bc94 Mon Sep 17 00:00:00 2001 From: bengtlofgren Date: Mon, 7 Aug 2023 09:01:15 +0100 Subject: [PATCH 04/13] complete TODOs + block-space-allocator & replay-protection --- packages/docs/pages/ledger/env-vars.mdx | 2 +- .../base-ledger/block-space-allocator.mdx | 104 +-- .../specs/pages/base-ledger/execution.mdx | 1 - .../pages/base-ledger/replay-protection.mdx | 666 +----------------- .../base-ledger/replay-protection/_meta.json | 3 + .../replay-protection/optimizations.mdx | 646 +++++++++++++++++ 6 files changed, 715 insertions(+), 707 deletions(-) create mode 100644 packages/specs/pages/base-ledger/replay-protection/_meta.json create mode 100644 packages/specs/pages/base-ledger/replay-protection/optimizations.mdx diff --git a/packages/docs/pages/ledger/env-vars.mdx b/packages/docs/pages/ledger/env-vars.mdx index 28627f1d..d7648a81 100644 --- a/packages/docs/pages/ledger/env-vars.mdx +++ b/packages/docs/pages/ledger/env-vars.mdx @@ -1,5 +1,5 @@ import { Callout } from 'nextra-theme-docs' -import Expandable from "../../components/Expandable"; // TODO: Use this component when properly css'd +import Expandable from "../../components/Expandable"; # Environment variables diff --git a/packages/specs/pages/base-ledger/block-space-allocator.mdx b/packages/specs/pages/base-ledger/block-space-allocator.mdx index c28a3f77..3410cabe 100644 --- a/packages/specs/pages/base-ledger/block-space-allocator.mdx +++ b/packages/specs/pages/base-ledger/block-space-allocator.mdx @@ -1,42 +1,45 @@ +import { Callout } from 'nextra-theme-docs' import blockSpaceEx from './images/block-space-allocator-example.svg' import blockSpaceBins from './images/block-space-allocator-bins.svg' # Block space allocator -Block space in Tendermint is a resource whose management is relinquished to the + +Note that the DKG scheme for front-running prevention is not a feature included in the first release of Namada. +The block-space-allocator infrastructure still exists, but the decryption is hardcoded and does not require validator coordination. 
+ + +Block space in CometBFT is a resource whose management is relinquished to the running application. This section covers the design of an abstraction that facilitates the process of transparently allocating space for transactions in a block at some height $H$, whilst upholding the safety and liveness properties of Namada. -## On block sizes in Tendermint and Namada +## On block sizes in CometBFT and Namada -[Block sizes in Tendermint] -(configured through the $MaxBytes$ consensus +[Block sizes in CometBFT](https://github.com/cometbft/cometbft/blob/main/spec/abci/abci%2B%2B_app_requirements.md#blockparamsmaxbytes) (configured through the `MaxBytes` consensus parameter) have a minimum value of $1\ \text{byte}$, and a hard cap of $100\ -MiB$, reflecting the header, evidence of misbehavior (used to slash +\text{MiB}$, reflecting the header, evidence of misbehavior (used to slash Byzantine validators) and transaction data, as well as any potential protobuf -serialization overhead. Some of these data are dynamic in nature (e.g. +serialization overhead. Some of this data is dynamic in nature (e.g. evidence of misbehavior), so the total size reserved to transactions in a block at some height $H_0$ might not be the same as another block's, say, at some -height $H_1 : H_1 \ne H_0$. During Tendermint's `PrepareProposal` ABCI phase, +height $H_1 : H_1 \ne H_0$. During CometBFT's `PrepareProposal` ABCI phase, applications receive a $MaxTxBytes$ parameter whose value already accounts for the total space available for transactions at some height $H$. Namada does not -rely on the $MaxTxBytes$ parameter of `RequestPrepareProposal`; instead, -app-side validators configure a $MaxProposalSize$ parameter at genesis (or -through governance) and set Tendermint blocks' $MaxBytes$ parameter to its +rely on the `MaxTxBytes` parameter of `RequestPrepareProposal`; instead, +app-side validators configure a `MaxProposalSize` parameter at genesis (or +through governance) and set CometBFT blocks' `MaxBytes` parameter to its upper bound. -[Block sizes in Tendermint](https://github.com/tendermint/tendermint/blob/v0.34.x/spec/abci/apps.md#blockparamsmaxbytes) - ## Transaction batch construction -During Tendermint's `PrepareProposal` ABCI phase, Namada (the ABCI server) is -fed a set of transactions $M = \{\ tx\ |\ tx\text{ in Tendermint's mempool}\ +During CometBFT's `PrepareProposal` ABCI phase, Namada (the ABCI server) is +fed a set of transactions $M := \{\ tx\ |\ tx\text{ in CometBFT's mempool}\ \}$, whose total combined size (i.e. the sum of the bytes occupied by each $tx -: tx \in M$) may be greater than $MaxProposalBytes$. Therefore, consensus round +: tx \in M$) may be greater than `MaxProposalBytes`. Therefore, consensus round leaders are responsible for selecting a batch of transactions $P$ whose total -combined bytes $P_{Len} \le MaxProposalBytes$. +combined bytes $P_{Len} \le $ `MaxProposalBytes`. To stay within these bounds, block space is **allotted** to different kinds of transactions: decrypted, protocol and encrypted transactions. Each kind of @@ -77,12 +80,14 @@ stops as soon as the respective `TxBin` runs out of space for some $tx_{Protocol} : tx_{Protocol} \in M$. The `TxBin` for protocol transactions is allotted half of the remaining block space, after decrypted transactions have been **allocated**. +{/* TODO: The ordering of these steps is a bit unintuitive. Feels like we should begin with `BuildingEncryptedTx` */} +{/* TODO: The WithoutEncryptedTxs is not clear. 
Are these special types of protocol txs? */} 3. `BuildingEncryptedTxBatch` - This state behaves a lot like the previous state, with one addition: it takes a parameter that guards the encrypted transactions `TxBin`, which in effect splits the state into two sub-states. -When `WithEncryptedTxs` is active, we fill block space with encrypted -transactions (as the name implies); orthogonal to this mode of operation, there -is `WithoutEncryptedTxs`, which, as the name implies, does not allow encrypted +When `WithEncryptedTxs` is active, block space is filled with encrypted +transactions (as the name implies). Orthogonal to this mode of operation, there +exists `WithoutEncryptedTxs`, which, as the name implies, does not allow encrypted transactions to be included in a block. The `TxBin` for encrypted transactions is allotted $\min(R,\frac{1}{3} MaxProposalBytes)$ bytes, where $R$ is the block space remaining after allocating space for decrypted and protocol @@ -90,9 +95,8 @@ transactions. 4. `FillingRemainingSpace` - The final state of the `BlockSpaceAllocator`. Due to the short-circuit behavior of a `TxBin`, on allocation errors, some space may be left unutilized at the end of the third state. At this state, the only -kinds of -transactions that are left to fill the available block space are -of type encrypted and protocol, but encrypted transactions are forbidden +transaction types that are left to fill the available block space are +encrypted and protocol transactions, but encrypted transactions are forbidden to be included, to avoid breaking their invariant regarding allotted block space (i.e. encrypted transactions can only occupy up to $\frac{1}{3}$ of the total block space for a given height $H$). As such, @@ -123,16 +127,16 @@ Consider the following diagram: We denote `D`, `P` and `E` as decrypted, protocol and encrypted transactions, respectively. -* At height $H$, block space is evenly divided in three parts, one for each +* At height $H$, block space is evenly divided into three parts, one for each kind of transaction type. -* At height $H+1$, we do not include encrypted transactions in the proposal, +* At height $H+1$, no encrypted transactions are included in the proposal, therefore protocol transactions are allowed to take up to $\frac{2}{3}$ of the available block space. * At height $H+2$, no encrypted transactions are included either. Notice that no decrypted transactions were included in the proposal, since at height $H+1$ -we did not decide on any encrypted transactions. In sum, only protocol +no encrypted transactions were committed. In sum, only protocol transactions are included in the proposal for the block with height $H+2$. -* At height $H+3$, we propose encrypted transactions once more. Just like in +* At height $H+3$, encrypted transactions are proposed once more. Just like in the previous scenario, no decrypted transactions are available. Encrypted transactions are capped at $\frac{1}{3}$ of the available block space, so the remaining $\frac{1}{2} - \frac{1}{3} = \frac{1}{6}$ of the available block @@ -146,31 +150,33 @@ Batches of transactions proposed during ABCI's `PrepareProposal` phase are validated at the `ProcessProposal` phase. The validation conditions are relaxed, compared to the rigid block structure imposed on blocks during `PrepareProposal` (i.e. with decrypted, protocol and encrypted transactions -appearing in this order, as [examplified above](#example)). 
Let us fix $H$ as -the height of the block $B$ currently being decided through Tendermint's -consensus mechanism, $P$ as the batch of transactions proposed at $H$ as $B$'s -payload and $V$ as the current set of active validators. To vote on $P$, each -validator $v \in V$ checks: - -* If the length of $P$ in bytes, defined as $P_{Len} := \sum_{tx \in -P} \text{size\_of}(tx)$, is not greater than $MaxProposalBytes$. -* If $P$ does not contain more than $\frac{1}{3} MaxProposalBytes$ worth of +appearing in this order, as [examplified above](#example)). + +Define $H$ as the height of block $B$ currently being decided through Tendermint's +consensus mechanism. Define $P$ as the batch of transactions proposed at $H$ as $B$'s +payload and define $V$ as the current set of active validators. To vote on $P$, each +validator $v \in V$ checks that: + +* The length of $P$ in bytes, defined as $P_{Len} := \sum_{tx \in +P} \text{size\_of}(tx)$, is not greater than `MaxProposalBytes`. +* $P$ does not contain more than $\frac{1}{3}$ `MaxProposalBytes` worth of encrypted transactions. - - While not directly checked, our batch construction invariants guarantee -that we will constrain decrypted transactions to occupy up to $\frac{1}{3} -MaxProposalBytes$ bytes of the available block space at $H$ (or any block + - While not directly checked, the batch construction invariants guarantee +that decrypted transactions are constrained to occupy up to $\frac{1}{3} $ +`MaxProposalBytes` bytes of the available block space at $H$ (or any block height, in fact). -* If all decrypted transactions from $H-1$ have been included in the proposal +* All decrypted transactions from $H-1$ have been included in the proposal $P$, for height $H$. -* That no encrypted transactions were included in the proposal $P$, if no +* No encrypted transactions were included in the proposal $P$, if no encrypted transactions should be included at $H$. - - N.b. the conditions to reject encrypted transactions are still not clearly - specced out, therefore they will be left out of this section, for the - time being. + + +N.b. the conditions to reject encrypted transactions are not specced out, and would be necessary should Namada incorporate the DKG scheme + Should any of these conditions not be met at some arbitrary round $R$ of $H$, all honest validators $V_h : V_h \subseteq V$ will reject the proposal $P$. -Byzantine validators are permitted to re-order the layout of $P$ typically +Byzantine validators would be permitted to re-order the layout of $P$ typically derived from the [`BlockSpaceAllocator`](#transaction-batch-construction) $A$, under normal operation, however this should not be a compromising factor of the safety and liveness properties of Namada. The rigid layout of $B$ is simply a @@ -180,9 +186,7 @@ consequence of $A$ allocating in different phases. Validator set updates, one type of protocol transactions decided through BFT consensus in Namada, are fundamental to the liveness properties of the Ethereum -bridge, thus, ideally we would also check if these would be included once per -epoch at the `ProcessProposal` stage. Unfortunately, achieving a quorum of -signatures for a validator set update between two adjacent block heights +bridge. Unfortunately, achieving a quorum of signatures for a validator set update between two adjacent block heights through ABCI alone is not feasible. 
Hence, the Ethereum bridge is not a live distributed system, since there is the possibility to cross an epoch boundary without constructing a valid proof for some validator set update. In practice, @@ -190,15 +194,13 @@ however, it is nearly impossible for the bridge to get "stuck", as validator set updates are eagerly issued at the start of an epoch, whose length should be long enough for consensus(*) to be reached on a single validator set update. -(*) Note that we loosely used consensus here to refer to the process of +(*) Note that consensus was used loosely here to refer to the process of acquiring a quorum (e.g. more than $\frac{2}{3}$ of voting power, by stake) of signatures on a single validator set update. "Chunks" of a proof (i.e. individual votes) are decided and batched together, until a complete proof is constructed. -We cover validator set updates in detail in [the Ethereum bridge section]. - -[the Ethereum bridge section](../interoperability/ethereum-bridge.md) +Validator set updates are covered in more detail in [the Ethereum bridge section](../interoperability/ethereum-bridge.md). ## Governance diff --git a/packages/specs/pages/base-ledger/execution.mdx b/packages/specs/pages/base-ledger/execution.mdx index 047c9be1..6c1ee539 100644 --- a/packages/specs/pages/base-ledger/execution.mdx +++ b/packages/specs/pages/base-ledger/execution.mdx @@ -21,7 +21,6 @@ The ledger features an account-based system (in which UTXO-based systems such as Interactions with the Namada ledger are made possible via transactions. In Namada, transactions are allowed to perform arbitrary modifications to the storage of any account, but the transaction will be accepted and state changes applied only if all the validity predicates that were triggered by the transaction accept it. That is, the accounts whose storage sub-spaces were touched by the transaction will all have their validity predicates verifying the transaction. A transaction may also explicitly elect an account as the verifier of that transaction, which will result in that validity predicate being invoked as well. A transaction can add any number of additional verifiers, but cannot remove the ones determined by the protocol. For example, a transparent fungible token transfer would typically trigger 3 validity predicates - those of the token, source and target addresses. -{/* TODO: Explain how the ledger knows which validity predicates to invoke based on a transaction. This is not clear to me. */} The ledger knows what addresses are involved in a wasm transaction because of how the storage is constructed. Each variable in storage is inherently tied to a substorage owned by some account, and thus that VP is invoked. ## Supported validity predicates diff --git a/packages/specs/pages/base-ledger/replay-protection.mdx b/packages/specs/pages/base-ledger/replay-protection.mdx index f6ff66bb..1a6bd01f 100644 --- a/packages/specs/pages/base-ledger/replay-protection.mdx +++ b/packages/specs/pages/base-ledger/replay-protection.mdx @@ -25,20 +25,20 @@ prevent the execution of already processed transactions. ## Context -This section will illustrate the pre-existing context in which we are going to -implement the replay protection mechanism. +This section illustrates the pre-existing context in the replay protection mechanims is implemented. 
### Encryption-Authentication -The current implementation of Namada is built on top of Tendermint which +The current implementation of Namada is built on top of CometBFT which provides an encrypted and authenticated communication channel between every two -nodes to prevent a _man-in-the-middle_ attack (see the detailed -[spec](https://github.com/tendermint/tendermint/blob/29e5fbcc648510e4763bd0af0b461aed92c21f30/spec/p2p/peer.md)). +nodes to prevent a _man-in-the-middle_ attack (see the detailed {/* TODO: Fix link to be cometbft is possible*/} +[spec](https://github.com/cometbft/cometbft/blob/main/spec/p2p/legacy-docs/peer.md)). The Namada protocol relies on this substrate to exchange transactions (messages) -that will define the state transition of the ledger. More specifically, a +that defines the state transition of the ledger. More specifically, a transaction is composed of two parts: a `WrapperTx` and an inner `Tx` +{/* TODO: Check that this is up to date. I believe it is not */} ```rust pub struct WrapperTx { /// The fee to be payed for including the tx @@ -68,8 +68,8 @@ pub struct Tx { The wrapper transaction is composed of some metadata, an optional unshielding tx for fee payment (see [fee specs](../economics/fee-system.md)), the encrypted -inner transaction itself and the hash of this. The inner `Tx` transaction -carries the Wasm code to be executed and the associated data. +inner transaction itself and the hash of the concatenation of these values. {/* TODO: Ensure that the hash of concatentation statement is as accurate as possible. It could be values, it could be the object (which consists of keys + values)*/} +The inner `Tx` transaction carries the Wasm code to be executed and the associated data. A transaction is constructed as follows: @@ -78,9 +78,9 @@ A transaction is constructed as follows: `Tx` where the data field holds the concatenation of the original data and the signature (`SignedTxData`) 3. The produced transaction is encrypted and embedded in a `WrapperTx`. The - encryption step is there for a future implementation of threshold decryption + encryption step exists for a future implementation of threshold decryption scheme (see [Ferveo](https://github.com/anoma/ferveo)) -4. Finally, the `WrapperTx` gets converted to a `Tx` struct, signed over its +4. Finally, the `WrapperTx` is converted to a `Tx` struct, signed over its hash (same as step 2, relying on `SignedTxData`), and submitted to the network @@ -106,7 +106,7 @@ For a more in-depth view, please refer to the ### Tendermint replay protection The underlying consensus engine, -[Tendermint](https://github.com/tendermint/tendermint/blob/29e5fbcc648510e4763bd0af0b461aed92c21f30/spec/abci/apps.md), +[CometBFT](https://github.com/cometbft/cometbft/blob/main/spec/abci/abci%2B%2B_app_requirements.md#connection-state), provides a first layer of protection in its mempool which is based on a cache of previously seen transactions. This mechanism is actually aimed at preventing a block proposer from including an already processed transaction in the next @@ -183,7 +183,7 @@ committed to storage in `finalize_block` and the transaction is executed. In the next block we deserialize the inner transaction, check the validity of the decrypted txs and their correct order: if the order is off a new round of -tendermint will start. If instead an error is found in any single decrypted tx, +CometBFT will start. 
If instead an error is found in any single decrypted tx, we remove from storage the previously inserted hash of the inner tx to allow it to be rewrapped, and discard the tx itself. Finally, in `finalize_block` we execute the tx: if it runs out of gas then we'll remove its hash from storage, @@ -394,645 +394,3 @@ met: [relative section](#wrapper-checks) - The order/number of decrypted txs differs from the order/number committed in the previous block - -## Possible optimizations - -In this section we describe two alternative solutions that come with some -optimizations. - -### Transaction counter - -Instead of relying on a hash (32 bytes) we could use a 64 bits (8 bytes) -transaction counter as nonce for the wrapper and inner transactions. The -advantage is that the space required would be much less since we only need two 8 -bytes values in storage for every address which is signing transactions. On the -other hand, the handling of the counter for the inner transaction will be -performed entirely in wasm (transactions and VPs) making it a bit less -efficient. This solution also imposes a strict ordering on the transactions -issued by a same address. - -**NOTE**: this solution requires the ability to -[yield](https://github.com/wasmerio/wasmer/issues/1127) execution from Wasmer -which is not implemented yet. - -#### InnerTx - -We will implement the protection entirely in Wasm: the check of the counter will -be carried out by the validity predicates while the actual writing of the -counter in storage will be done by the transactions themselves. - -To do so, the `SignedTxData` attached to the transaction will hold the current -value of the counter in storage: - -```rust -pub struct SignedTxData { - /// The original tx data bytes, if any - pub data: Option>, - /// The optional transaction counter for replay protection - pub tx_counter: Option, - /// The signature is produced on the tx data concatenated with the tx code - /// and the timestamp. - pub sig: common::Signature, -} -``` - -The counter must reside in `SignedTxData` and not in the data itself because -this must be checked by the validity predicate which is not aware of the -specific transaction that took place but only of the changes in the storage; -therefore, the VP is not able to correctly deserialize the data of the -transactions since it doesn't know what type of data the bytes represent. - -The counter will be signed as well to protect it from tampering and grant it the -same guarantees explained at the [beginning](#encryption-authentication) of this -document. - -The wasm transaction will simply read the value from storage and increase its -value by one. The target key in storage will be the following: - -``` -/$Address/inner_tx_counter: u64 -``` - -The VP of the _source_ address will then check the validity of the signature -and, if it's deemed valid, will proceed to check if the pre-value of the counter -in storage was equal to the one contained in the `SignedTxData` struct and if -the post-value of the key in storage has been incremented by one: if any of -these conditions doesn't hold the VP will discard the transactions and prevent -the changes from being applied to the storage. - -In the specific case of a shielded transfer, since MASP already comes with -replay protection as part of the Zcash design (see the [MASP specs](../masp.md) -and [Zcash protocol specs](https://zips.z.cash/protocol/protocol.pdf)), the -counter in `SignedTxData` is not required and therefore should be optional. 
- -To implement replay protection for the inner transaction we will need to update -all the VPs checking the transaction's signature to include the check on the -transaction counter: at the moment the `vp_user` validity predicate is the only -one to update. In addition, all the transactions involving `SignedTxData` should -increment the counter. - -#### WrapperTx - -To protect this transaction we can implement an in-protocol mechanism. Since the -wrapper transaction gets signed before being submitted to the network, we can -leverage the `tx_counter` field of the `SignedTxData` already introduced for the -inner tx. - -In addition, we need another counter in the storage subspace of every address: - -``` -/$Address/wrapper_tx_counter: u64 -``` - -where `$Address` is the one signing the transaction (the same implied by the -`pk` field of the `WrapperTx` struct). - -The check will consist of a signature check first followed by a check on the -counter that will make sure that the counter attached to the transaction matches -the one in storage for the signing address. This will be done in the -`process_proposal` function so that validators can decide whether the -transaction is valid or not; if it's not, then they will discard the transaction -and skip to the following one. - -At last, in `finalize_block`, the ledger will update the counter key in storage, -increasing its value by one. This will happen when the following conditions are -met: - -- `process_proposal` has accepted the tx by validating its signature and - transaction counter -- The tx was correctly applied in `finalize_block` (for `WrapperTx` this simply - means inclusion in the block and gas accounting) - -Now, if a malicious user tried to replay this transaction, the `tx_counter` in -the struct would no longer be equal to the one in storage and the transaction -would be deemed invalid. - -#### Implementation details - -In this section we'll talk about some details of the replay protection mechanism -that derive from the solution proposed in this section. - -##### Storage counters - -Replay protection will require interaction with the storage from both the -protocol and Wasm. To do so we can take advantage of the `StorageRead` and -`StorageWrite` traits to work with a single interface. - -This implementation requires two transaction counters in storage for every -address, so that the storage subspace of a given address looks like the -following: - -``` -/$Address/wrapper_tx_counter: u64 -/$Address/inner_tx_counter: u64 -``` - -An implementation requiring a single counter in storage has been taken into -consideration and discarded because that would not support batching; see the -[relative section](#single-counter-in-storage) for a more in-depth explanation. - -For both the wrapper and inner transaction, the increase of the counter in -storage is an important step that must be correctly executed. First, the -implementation will return an error in case of a counter overflow to prevent -wrapping, since this would allow for the replay of previous transactions. Also, -we want to increase the counter as soon as we verify that the signature, the -chain id and the passed-in transaction counter are valid. 
The increase should -happen immediately after the checks because of two reasons: - -- Prevent replay attack of a transaction in the same block -- Update the transaction counter even in case the transaction fails, to prevent - a possible replay attack in the future (since a transaction invalid at state - Sx could become valid at state Sn where `n > x`) - -For `WrapperTx`, the counter increase and fee accounting will per performed in -`finalize_block` (as stated in the [relative](#wrappertx) section). - -For `InnerTx`, instead, the logic is not straightforward. The transaction code -will be executed in a Wasm environment ([Wasmer](https://wasmer.io)) till it -eventually completes or raises an exception. In case of success, the counter in -storage will be updated correctly but, in case of failure, the protocol will -discard all of the changes brought by the transactions to the write-ahead-log, -including the updated transaction counter. This is a problem because the -transaction could be successfully replayed in the future if it will become -valid. - -The ideal solution would be to interrupt the execution of the Wasm code after -the transaction counter (if any) has been increased. This would allow performing -a first run of the involved VPs and, if all of them accept the changes, let the -protocol commit these changes before any possible failure. After that, the -protocol would resume the execution of the transaction from the previous -interrupt point until completion or failure, after which a second pass of the -VPs is initiated to validate the remaining state modifications. In case of a VP -rejection after the counter increase there would be no need to resume execution -and the transaction could be immediately deemed invalid so that the protocol -could skip to the next tx to be executed. With this solution, the counter update -would be committed to storage regardless of a failure of the transaction itself. - -Unfortunately, at the moment, Wasmer doesn't allow -[yielding](https://github.com/wasmerio/wasmer/issues/1127) from the execution. - -In case the transaction went out of gas (given the `gas_limit` field of the -wrapper), all the changes applied will be discarded from the WAL and will not -affect the state of the storage. The inner transaction could then be rewrapped -with a correct gas limit and replayed until the `expiration` time has been -reached. - -##### Batching and transaction ordering - -This replay protection technique supports the execution of multiple transactions -with the same address as _source_ in a single block. Actually, the presence of -the transaction counters and the checks performed on them now impose a strict -ordering on the execution sequence (which can be an added value for some use -cases). The correct execution of more than one transaction per source address in -the same block is preserved as long as: - -1. The wrapper transactions are inserted in the block with the correct ascending - order -2. No hole is present in the counters' sequence -3. The counter of the first transaction included in the block matches the - expected one in storage - -The conditions are enforced by the block proposer who has an interest in -maximizing the amount of fees extracted by the proposed block. To support this -incentive, validators will reject the block proposed if any of the included -wrapper transactions are invalid, effectively incentivizing the block proposer -to include only valid transactions and correctly reorder them to gain the fees. 
- -In case of a missing transaction causes a hole in the sequence of transaction -counters, the block proposer will include in the block all the transactions up -to the missing one and discard all the ones following that one, effectively -preserving the correct ordering. - -Correctly ordering the transactions is not enough to guarantee the correct -execution. As already mentioned in the [WrapperTx](#wrappertx) section, the -block proposer and the validators also need to access the storage to check that -the first transaction counter of a sequence is actually the expected one. - -The entire counter ordering is only done on the `WrapperTx`: if the inner -counter is wrong then the inner transaction will fail and the signer of the -corresponding wrapper will be charged with fees. This incentivizes submitters to -produce valid transactions and discourages malicious user from rewrapping and -resubmitting old transactions. - -##### Mempool checks - -As a form of optimization to prevent mempool spamming, some of the checks that -have been introduced in this document will also be brought to the -`mempool_validate` function. Of course, we always refer to checks on the -`WrapperTx` only. More specifically: - -- Check the `ChainId` field -- Check the signature of the transaction against the `pk` field of the - `WrapperTx` -- Perform a limited check on the transaction counter - -Regarding the last point, `mempool_validate` will check if the counter in the -transaction is `>=` than the one in storage for the address signing the -`WrapperTx`. A complete check (checking for strict equality) is not feasible, as -described in the [relative](#mempool-counter-validation) section. - -#### Alternatives considered - -In this section we list some possible solutions that were taken into -consideration during the writing of this solution but were eventually discarded. - -##### Mempool counter validation - -The idea of performing a complete validation of the transaction counters in the -`mempool_validate` function was discarded because of a possible flaw. - -Suppose a client sends five transactions (counters from 1 to 5). The mempool of -the next block proposer is not guaranteed to receive them in order: something on -the network could shuffle the transactions up so that they arrive in the -following order: 2-3-4-5-1. Now, since we validate every single transaction to -be included in the mempool in the exact order in which we receive them, we would -discard the first four transactions and only accept the last one, that with -counter 1. Now the next block proposer might have the four discarded -transactions in its mempool (since those were not added to the previous block -and therefore not evicted from the other mempools, at least they shouldn't, see -[block rejection](#block-rejection)) and could therefore include them in the -following block. But still, a process that could have ended in a single block -actually took two blocks. Moreover, there are two more issues: - -- The next block proposer might have the remaining transactions out of order in - his mempool as well, effectively propagating the same issue down to the next - block proposer -- The next block proposer might not have these transactions in his mempool at - all - -Finally, transactions that are not allowed into the mempool don't get propagated -to the other peers, making their inclusion in a block even harder. 
It is instead -better to avoid a complete filter on the transactions based on their order in -the mempool: instead we are going to perform a simpler check and then let the -block proposer rearrange them correctly when proposing the block. - -##### In-protocol protection for InnerTx - -An alternative implementation could place the protection for the inner tx in -protocol, just like the wrapper one, based on the transaction counter inside -`SignedTxData`. The check would run in `process_proposal` and the update in -`finalize_block`, just like for the wrapper transaction. This implementation, -though, shows two drawbacks: - -- it implies the need for an hard fork in case of a modification of the replay - protection mechanism -- it's not clear who's the source of the inner transaction from the outside, as - that depends on the specific code of the transaction itself. We could use - specific whitelisted txs set to define when it requires a counter (would not - work for future programmable transactions), but still, we have no way to - define which address should be targeted for replay protection (**blocking - issue**) - -##### In-protocol counter increase for InnerTx - -In the [storage counter](#storage-counters) section we mentioned the issue of -increasing the transaction counter for an inner tx even in case of failure. A -possible solution that we took in consideration and discarded was to increase -the counter from protocol in case of a failure. - -This is technically feasible since the protocol is aware of the keys modified by -the transaction and also of the results of the validity predicates (useful in -case the transaction updated more than one counter in storage). It is then -possible to recover the value and reapply the change directly from protocol. -This logic though, is quite dispersive, since it effectively splits the -management of the counter for the `InnerTx` among Wasm and protocol, while our -initial intent was to keep it completely in Wasm. - -##### Single counter in storage - -We can't use a single transaction counter in storage because this would prevent -batching. - -As an example, if a client (with a current counter in storage holding value 5) -generates two transactions to be included in the same block, signing both the -outer and the inner (default behavior of the client), it would need to generate -the following transaction counters: - -``` -[ - T1: (WrapperCtr: 5, InnerCtr: 6), - T2: (WrapperCtr: 7, InnerCtr: 8) -] -``` - -Now, the current execution model of Namada includes the `WrapperTx` in a block -first to then decrypt and execute the inner tx in the following block -(respecting the committed order of the transactions). That would mean that the -outer tx of `T1` would pass validation and immediately increase the counter to 6 -to prevent a replay attack in the same block. Now, the outer tx of `T2` will be -processed but it won't pass validation because it carries a counter with value 7 -while the ledger expects 6. - -To fix this, one could think to set the counters as follows: - -``` -[ - T1: (WrapperCtr: 5, InnerCtr: 7), - T2: (WrapperCtr: 6, InnerCtr: 8) -] -``` - -This way both the transactions will be considered valid and executed. The issue -is that, if the second transaction is not included in the block (for any -reason), than the first transaction (the only one remaining at this point) will -fail. In fact, after the outer tx has correctly increased the counter in storage -to value 6 the block will be accepted. 
In the next block the inner transaction -will be decrypted and executed but this last step will fail since the counter in -`SignedTxData` carries a value of 7 and the counter in storage has a value of 6. - -To cope with this there are two possible ways. The first one is that, instead of -checking the exact value of the counter in storage and increasing its value by -one, we could check that the transaction carries a counter `>=` than the one in -storage and write this one (not increase) to storage. The problem with this is -that it the lack of support for strict ordering of execution. - -The second option is to keep the usual increase strategy of the counter -(increase by one and check for strict equality) and simply use two different -counters in storage for each address. The transaction will then look like this: - -``` -[ - T1: (WrapperCtr: 5, InnerCtr: 5), - T2: (WrapperCtr: 6, InnerCtr: 6) -] -``` - -Since the order of inclusion of the `WrapperTxs` forces the same order of the -execution for the inner ones, both transactions can be correctly executed and -the correctness will be maintained even in case `T2` didn't make it to the block -(note that the counter for an inner tx and the corresponding wrapper one don't -need to coincide). - -### Wrapper-bound InnerTx - -The solution is to tie an `InnerTx` to the corresponding `WrapperTx`. By doing -so, it becomes impossible to rewrap an inner transaction and, therefore, all the -attacks related to this practice would be unfeasible. This mechanism requires -even less space in storage (only a 64 bit counter for every address signing -wrapper transactions) and only one check on the wrapper counter in protocol. As -a con, it requires communication between the signer of the inner transaction and -that of the wrapper during the transaction construction. This solution also -imposes a strict ordering on the wrapper transactions issued by a same address. - -To do so we will have to change the current definition of the two tx structs to -the following: - -```rust -pub struct WrapperTx { - /// The fee to be payed for including the tx - pub fee: Fee, - /// Used to determine an implicit account of the fee payer - pub pk: common::PublicKey, - /// Max amount of gas that can be used when executing the inner tx - pub gas_limit: GasLimit, - /// Lifetime of the transaction, also determines which decryption key will be used - pub expiration: DateTimeUtc, - /// Chain identifier for replay protection - pub chain_id: ChainId, - /// Transaction counter for replay protection - pub tx_counter: u64, - /// the encrypted payload - pub inner_tx: EncryptedTx, -} - -pub struct Tx { - pub code: Vec, - pub data: Option>, - pub timestamp: DateTimeUtc, - pub wrapper_commit: Option, -} -``` - -The Wrapper transaction no longer holds the inner transaction hash while the -inner one now holds a commit to the corresponding wrapper tx in the form of the -hash of a `WrapperCommit` struct, defined as: - -```rust -pub struct WrapperCommit { - pub pk: common::PublicKey, - pub tx_counter: u64, - pub expiration: DateTimeUtc, - pub chain_id: ChainId, -} -``` - -The `pk-tx_counter` couple contained in this struct, uniquely identifies a -single `WrapperTx` (since a valid tx_counter is unique given the address) so -that the inner one is now bound to this specific wrapper. The remaining fields, -`expiration` and `chain_id`, will tie these two values given their importance in -terms of safety (see the [relative](#wrappertx-checks) section). 
Note that the -`wrapper_commit` field must be optional because the `WrapperTx` struct itself -gets converted to a `Tx` struct before submission but it doesn't need any -commitment. - -Both the inner and wrapper tx get signed on their hash, as usual, to prevent -tampering with data. When a wrapper gets processed by the ledger, we first check -the validity of the signature, checking that none of the fields were modified: -this means that the inner tx embedded within the wrapper is, in fact, the -intended one. This last statement means that no external attacker has tampered -data, but the tampering could still have been performed by the signer of the -wrapper before signing the wrapper transaction. - -If this check (and others, explained later in the [checks](#wrappertx-checks) -section) passes, then the inner tx gets decrypted in the following block -proposal process. At this time we check that the order in which the inner txs -are inserted in the block matches that of the corresponding wrapper txs in the -previous block. To do so, we rely on an in-storage queue holding the hash of the -`WrapperCommit` struct computed from the wrapper tx. From the inner tx we -extract the `WrapperCommit` hash and check that it matches that in the queue: if -they don't it means that the inner tx has been reordered and we reject the -block. - -If this check passes then we can send the inner transaction to the wasm -environment for execution: if the transaction is signed, then at least one VP -will check its signature to spot possible tampering of the data (especially by -the wrapper signer, since this specific case cannot be checked before this step) -and, if this is the case, will reject this transaction and no storage -modifications will be applied. - -In summary: - -- The `InnerTx` carries a unique identifier of the `WrapperTx` embedding it -- Both the inner and wrapper txs are signed on all of their data -- The signature check on the wrapper tx ensures that the inner transaction is - the intended one and that this wrapper has not been used to wrap a different - inner tx. It also verifies that no tampering happened with the inner - transaction by a third party. Finally, it ensures that the public key is the - one of the signer -- The check on the `WrapperCommit` ensures that the inner tx has not been - reordered nor rewrapped (this last one is a non-exhaustive check, inner tx - data could have been tampered with by the wrapper signer) -- The signature check of the inner tx performed in Vp grants that no data of the - inner tx has been tampered with, effectively verifying the correctness of the - previous check (`WrapperCommit`) - -This sequence of controls makes it no longer possible to rewrap an `InnerTx` -which is now bound to its wrapper. This implies that replay protection is only -needed on the `WrapperTx` since there's no way to extract the inner one, rewrap -it and replay it. - -#### WrapperTx checks - -In `mempool_validation` we will perform some checks on the wrapper tx to -validate it. These will involve: - -- Valid signature -- `GasLimit` is below the block gas limit (see the - [fee specs](../economics/fee-system.md) for more details) -- `Fees` are paid with an accepted token and match the minimum amount required - (see the [fee specs](../economics/fee-system.md) for more details) -- Valid chainId -- Valid transaction counter -- Valid expiration - -These checks can all be done before executing the transactions themselves. 
If -any of these fails, the transaction should be considered invalid and the action -to take will be one of the followings: - -1. If the checks fail on the signature, chainId, expiration or transaction - counter, then this transaction will be forever invalid, regardless of the - possible evolution of the ledger's state. There's no need to include the - transaction in the block nor to increase the transaction counter. Moreover, - we **cannot** include this transaction in the block to charge a fee (as a - sort of punishment) because these errors may not depend on the signer of the - tx (could be due to malicious users or simply a delay in the tx inclusion in - the block) -2. If the checks fail on `Fee` or `GasLimit` the transaction should be - discarded. In theory the gas limit of a block is a Namada parameter - controlled by governance, so there's a chance that the transaction could - become valid in the future should this limit be raised. The same applies to - the token whitelist and the minimum fee required. However we can expect a - slow rate of change of these parameters so we can reject the tx (the - submitter can always resubmit it at a future time) -3. If all the checks pass validation we will include the transaction in the - block to increase the counter and charge the fee - -Note that, regarding point one, there's a distinction to be made about an -invalid `tx_counter` which could be invalid because of being old or being in -advance. To solve this last issue (counter greater than the expected one), we -have to introduce the concept of a lifetime (or timeout) for the transactions: -basically, the `WrapperTx` will hold an extra field called `expiration` stating -the maximum time up until which the submitter is willing to see the transaction -executed. After the specified time the transaction will be considered invalid -and discarded regardless of all the other checks. This way, in case of a -transaction with a counter greater than expected, it is sufficient to wait till -after the expiration to submit more transactions, so that the counter in storage -is not modified (kept invalid for the transaction under observation) and -replaying that tx would result in a rejection. - -This actually generalizes to a more broad concept. In general, a transaction is -valid at the moment of submission, but after that, a series of external factors -(ledger state, etc.) might change the mind of the submitter who's now not -interested in the execution of the transaction anymore. By introducing this new -field we are introducing a new constraint in the transaction's contract, where -the ledger will make sure to prevent the execution of the transaction after the -deadline and, on the other side, the submitter commits himself to the result of -the execution at least until its expiration. If the expiration is reached and -the transaction has not been executed the submitter can decide to submit a new, -identical transaction if he's still interested in the changes carried by it. - -In our design, the `expiration` will hold until the transaction is executed, -once it's executed, either in case of success or failure, the `tx_counter` will -be increased and the transaction will not be replayable. 
In essence, the -transaction submitter commits himself to one of these three conditions: - -- Transaction is invalid regardless of the specific state -- Transaction is executed (either with success or not) and the transaction - counter is increased -- Expiration time has passed - -The first condition satisfied will invalidate further executions of the same tx. - -Since the signer of the wrapper may be different from the one of the inner we -also need to include this `expiration` field in the `WrapperCommit` struct, to -prevent the signer of the wrapper from setting a lifetime which is in conflict -with the interests of the inner signer. Note that adding a separate lifetime for -the wrapper alone (which would require two separate checks) doesn't carry any -benefit: a wrapper with a lifetime greater than the inner would have no sense -since the inner would fail. Restricting the lifetime would work but it also -means that the wrapper could prevent a valid inner transaction from being -executed. We will then keep a single `expiration` field specifying the wrapper -tx max time (the inner one will actually be executed one block later because of -the execution mechanism of Namada). - -To prevent the signer of the wrapper from submitting the transaction to a -different chain, the `ChainId` field should also be included in the commit. - -Finally, in case the transaction run out of gas (based on the provided -`GasLimit` field of the wrapper) we don't need to take any action: by this time -the transaction counter will have already been incremented and the tx is not -replayable anymore. In theory, we don't even need to increment the counter since -the only way this transaction could become valid is a change in the way gas is -accounted, which might require a fork anyway, and consequently a change in the -required `ChainId`. However, since we can't tell the gas consumption before the -inner tx has been executed, we cannot anticipate this check. - -All these checks are also run in `process_proposal` with an addition: validators -also check that the wrapper signer has enough funds to pay the fee. This check -should not be done in mempool because the funds available for a certain address -are variable in time and should only be checked at block inclusion time. If any -of the checks fail here, the entire block is rejected forcing a new Tendermint -round to begin (see a better explanation of this choice in the -[relative](#block-rejection) section). - -The `expiration` parameter also justifies that the check on funds is only done -in `process_proposal` and not in mempool. Without it, the transaction could be -potentially executed at any future moment, possibly going against the mutated -interests of the submitter. With the expiration parameter, now, the submitter -commits himself to accept the execution of the transaction up to the specified -time: it's going to be his responsibility to provide a sensible value for this -parameter. Given this constraint the transaction will be kept in mempool up -until the expiration (since it would become invalid after that in any case), to -prevent the mempool from increasing too much in size. - -This mechanism can also be applied to another scenario. Suppose a transaction -was not propagated to the network by a node (or a group of colluding nodes). -Now, this tx might be valid, but it doesn't get inserted into a block. 
Without -an expiration, if the submitter doesn't submit any other transaction (which gets -included in a block to increase the transaction counter), this tx can be -replayed (better, applied, since it was never executed in the first place) at a -future moment in time when the submitter might not be willing to execute it any -more. - -#### WrapperCommit - -The fields of `WrapperTx` not included in `WrapperCommit` are at the discretion -of the `WrapperTx` producer. These fields are not included in the commit because -of one of these two reasons: - -- They depend on the specific state of the wrapper signer and cannot be forced - (like `fee`, since the wrapper signer must have enough funds to pay for those) -- They are not a threat (in terms of replay attacks) to the signer of the inner - transaction in case of failure of the transaction - -In a certain way, the `WrapperCommit` not only binds an `InnerTx` no a wrapper, -but effectively allows the inner to control the wrapper by requesting some -specific parameters for its creation and bind these parameters among the two -transactions: this allows us to apply the same constraints to both txs while -performing the checks on the wrapper only. - -#### Transaction creation process - -To craft a transaction, the process will now be the following (optional steps -are only required if the signer of the inner differs from that of the wrapper): - -- (**Optional**) the `InnerTx` constructor request, to the wrapper signer, his - public key and the `tx_counter` to be used -- The `InnerTx` is constructed in its entirety with also the `wrapper_commit` - field to define the constraints of the future wrapper -- The produced `Tx` struct get signed over all of its data (with `SignedTxData`) - producing a new struct `Tx` -- (**Optional**) The inner tx produced is sent to the `WrapperTx` producer - together with the `WrapperCommit` struct (required since the inner tx only - holds the hash of it) -- The signer of the wrapper constructs a `WrapperTx` compliant with the - `WrapperCommit` fields -- The produced `WrapperTx` gets signed over all of its fields - -Compared to a solution not binding the inner tx to the wrapper one, this -solution requires the exchange of 3 messages (request `tx_counter`, receive -`tx_counter`, send `InnerTx`) between the two signers (in case they differ), -instead of one. However, it allows the signer of the inner to send the `InnerTx` -to the wrapper signer already encrypted, guaranteeing a higher level of safety: -only the `WrapperCommit` struct should be sent clear, but this doesn't reveal -any sensitive information about the inner transaction itself. diff --git a/packages/specs/pages/base-ledger/replay-protection/_meta.json b/packages/specs/pages/base-ledger/replay-protection/_meta.json new file mode 100644 index 00000000..fbfdfa33 --- /dev/null +++ b/packages/specs/pages/base-ledger/replay-protection/_meta.json @@ -0,0 +1,3 @@ +{ + "optimizations": "Possible optimizations" +} \ No newline at end of file diff --git a/packages/specs/pages/base-ledger/replay-protection/optimizations.mdx b/packages/specs/pages/base-ledger/replay-protection/optimizations.mdx new file mode 100644 index 00000000..69721852 --- /dev/null +++ b/packages/specs/pages/base-ledger/replay-protection/optimizations.mdx @@ -0,0 +1,646 @@ +import { Callout } from 'nextra-theme-docs' + +{/* TODO: Maket this section more clear. 
At the moment I'm not sure what optmisation is what in the "possible optimisations" */} +# Possible optimizations + +In this section we describe two alternative solutions that come with some optimizations. + +## Transaction counter + +Instead of relying on a hash (32 bytes) Namada could use a 64 bits (8 bytes) +transaction counter as nonce for the wrapper and inner transactions. The +advantage is that the space required would be much less since only two 8 +bytes values in storage are needed for every address which is signing transactions. On the +other hand, the handling of the counter for the inner transaction is +performed entirely in wasm (transactions and VPs) making it slightly less +efficient. This solution also imposes a strict ordering on the transactions +issued by a same address. + + +**NOTE**: this solution requires the ability to +[yield](https://github.com/wasmerio/wasmer/issues/1127) execution from Wasmer +which is not implemented. + + + +### InnerTx + +We will implement the protection entirely in Wasm: the check of the counter will +be carried out by the validity predicates while the actual writing of the +counter in storage will be done by the transactions themselves. + +To do so, the `SignedTxData` attached to the transaction will hold the current +value of the counter in storage: + +```rust +pub struct SignedTxData { + /// The original tx data bytes, if any + pub data: Option>, + /// The optional transaction counter for replay protection + pub tx_counter: Option, + /// The signature is produced on the tx data concatenated with the tx code + /// and the timestamp. + pub sig: common::Signature, +} +``` + +The counter must reside in `SignedTxData` and not in the data itself because +this must be checked by the validity predicate which is not aware of the +specific transaction that took place but only of the changes in the storage; +therefore, the VP is not able to correctly deserialize the data of the +transactions since it doesn't know what type of data the bytes represent. + +The counter will be signed as well to protect it from tampering and grant it the +same guarantees explained at the [beginning](#encryption-authentication) of this +document. + +The wasm transaction will simply read the value from storage and increase its +value by one. The target key in storage will be the following: + +``` +/$Address/inner_tx_counter: u64 +``` + +The VP of the _source_ address will then check the validity of the signature +and, if it's deemed valid, will proceed to check if the pre-value of the counter +in storage was equal to the one contained in the `SignedTxData` struct and if +the post-value of the key in storage has been incremented by one: if any of +these conditions doesn't hold the VP will discard the transactions and prevent +the changes from being applied to the storage. + +In the specific case of a shielded transfer, since MASP already comes with +replay protection as part of the Zcash design (see the [MASP specs](../masp.md) +and [Zcash protocol specs](https://zips.z.cash/protocol/protocol.pdf)), the +counter in `SignedTxData` is not required and therefore should be optional. + +To implement replay protection for the inner transaction we will need to update +all the VPs checking the transaction's signature to include the check on the +transaction counter: at the moment the `vp_user` validity predicate is the only +one to update. In addition, all the transactions involving `SignedTxData` should +increment the counter. 
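To make the VP-side condition concrete, the following is a minimal sketch of the check described above, assuming a simplified `StateReader` interface for pre/post storage reads; the trait, the key construction and the function name are illustrative only and are not the exact host API exposed to validity predicates.

```rust
/// Minimal storage interface assumed for illustration; the real VP context
/// exposes similar pre/post reads but the exact API may differ.
trait StateReader {
    fn read_pre_u64(&self, key: &str) -> Option<u64>;
    fn read_post_u64(&self, key: &str) -> Option<u64>;
}

/// Sketch of the replay-protection check for an inner tx: the counter signed
/// in `SignedTxData` must equal the pre-state value and the transaction must
/// have incremented the stored counter by exactly one.
fn check_inner_tx_counter<S: StateReader>(
    state: &S,
    owner_address: &str,
    signed_tx_counter: Option<u64>,
) -> bool {
    // Target key described above: /$Address/inner_tx_counter: u64
    let key = format!("/{}/inner_tx_counter", owner_address);
    let pre = state.read_pre_u64(&key).unwrap_or(0);
    let post = state.read_post_u64(&key).unwrap_or(0);

    // Reject on overflow instead of wrapping, since wrapping would reopen
    // the door to replaying old transactions.
    let incremented = pre.checked_add(1).map_or(false, |next| next == post);

    match signed_tx_counter {
        Some(counter) => counter == pre && incremented,
        // A missing counter is only acceptable for shielded (MASP) transfers,
        // which carry their own replay protection; that case is left out of
        // this sketch.
        None => false,
    }
}
```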
+ +### WrapperTx + +To protect this transaction we can implement an in-protocol mechanism. Since the +wrapper transaction gets signed before being submitted to the network, we can +leverage the `tx_counter` field of the `SignedTxData` already introduced for the +inner tx. + +In addition, we need another counter in the storage subspace of every address: + +``` +/$Address/wrapper_tx_counter: u64 +``` + +where `$Address` is the one signing the transaction (the same implied by the +`pk` field of the `WrapperTx` struct). + +The check will consist of a signature check first followed by a check on the +counter that will make sure that the counter attached to the transaction matches +the one in storage for the signing address. This will be done in the +`process_proposal` function so that validators can decide whether the +transaction is valid or not; if it's not, then they will discard the transaction +and skip to the following one. + +At last, in `finalize_block`, the ledger will update the counter key in storage, +increasing its value by one. This will happen when the following conditions are +met: + +- `process_proposal` has accepted the tx by validating its signature and + transaction counter +- The tx was correctly applied in `finalize_block` (for `WrapperTx` this simply + means inclusion in the block and gas accounting) + +Now, if a malicious user tried to replay this transaction, the `tx_counter` in +the struct would no longer be equal to the one in storage and the transaction +would be deemed invalid. + +### Implementation details + +In this section we'll talk about some details of the replay protection mechanism +that derive from the solution proposed in this section. + +#### Storage counters + +Replay protection will require interaction with the storage from both the +protocol and Wasm. To do so we can take advantage of the `StorageRead` and +`StorageWrite` traits to work with a single interface. + +This implementation requires two transaction counters in storage for every +address, so that the storage subspace of a given address looks like the +following: + +``` +/$Address/wrapper_tx_counter: u64 +/$Address/inner_tx_counter: u64 +``` + +An implementation requiring a single counter in storage has been taken into +consideration and discarded because that would not support batching; see the +[relative section](#single-counter-in-storage) for a more in-depth explanation. + +For both the wrapper and inner transaction, the increase of the counter in +storage is an important step that must be correctly executed. First, the +implementation will return an error in case of a counter overflow to prevent +wrapping, since this would allow for the replay of previous transactions. Also, +we want to increase the counter as soon as we verify that the signature, the +chain id and the passed-in transaction counter are valid. The increase should +happen immediately after the checks because of two reasons: + +- Prevent replay attack of a transaction in the same block +- Update the transaction counter even in case the transaction fails, to prevent + a possible replay attack in the future (since a transaction invalid at state + Sx could become valid at state Sn where `n > x`) + +For `WrapperTx`, the counter increase and fee accounting will per performed in +`finalize_block` (as stated in the [relative](#wrappertx) section). + +For `InnerTx`, instead, the logic is not straightforward. 
The transaction code +will be executed in a Wasm environment ([Wasmer](https://wasmer.io)) till it +eventually completes or raises an exception. In case of success, the counter in +storage will be updated correctly but, in case of failure, the protocol will +discard all of the changes brought by the transactions to the write-ahead-log, +including the updated transaction counter. This is a problem because the +transaction could be successfully replayed in the future if it will become +valid. + +The ideal solution would be to interrupt the execution of the Wasm code after +the transaction counter (if any) has been increased. This would allow performing +a first run of the involved VPs and, if all of them accept the changes, let the +protocol commit these changes before any possible failure. After that, the +protocol would resume the execution of the transaction from the previous +interrupt point until completion or failure, after which a second pass of the +VPs is initiated to validate the remaining state modifications. In case of a VP +rejection after the counter increase there would be no need to resume execution +and the transaction could be immediately deemed invalid so that the protocol +could skip to the next tx to be executed. With this solution, the counter update +would be committed to storage regardless of a failure of the transaction itself. + +Unfortunately, at the moment, Wasmer doesn't allow +[yielding](https://github.com/wasmerio/wasmer/issues/1127) from the execution. + +In case the transaction went out of gas (given the `gas_limit` field of the +wrapper), all the changes applied will be discarded from the WAL and will not +affect the state of the storage. The inner transaction could then be rewrapped +with a correct gas limit and replayed until the `expiration` time has been +reached. + +#### Batching and transaction ordering + +This replay protection technique supports the execution of multiple transactions +with the same address as _source_ in a single block. Actually, the presence of +the transaction counters and the checks performed on them now impose a strict +ordering on the execution sequence (which can be an added value for some use +cases). The correct execution of more than one transaction per source address in +the same block is preserved as long as: + +1. The wrapper transactions are inserted in the block with the correct ascending + order +2. No hole is present in the counters' sequence +3. The counter of the first transaction included in the block matches the + expected one in storage + +The conditions are enforced by the block proposer who has an interest in +maximizing the amount of fees extracted by the proposed block. To support this +incentive, validators will reject the block proposed if any of the included +wrapper transactions are invalid, effectively incentivizing the block proposer +to include only valid transactions and correctly reorder them to gain the fees. + +In case of a missing transaction causes a hole in the sequence of transaction +counters, the block proposer will include in the block all the transactions up +to the missing one and discard all the ones following that one, effectively +preserving the correct ordering. + +Correctly ordering the transactions is not enough to guarantee the correct +execution. As already mentioned in the [WrapperTx](#wrappertx) section, the +block proposer and the validators also need to access the storage to check that +the first transaction counter of a sequence is actually the expected one. 
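To make the three conditions above explicit, a block proposer (or a validator re-checking a proposed block) could validate the wrapper counters of a given source address with a check along the following lines; the helper and its inputs are hypothetical and only illustrate the ordering rule, not the actual proposer code.

```rust
/// `expected_first` is the wrapper counter currently in storage for the
/// address, `wrapper_counters` the counters of its wrapper txs in block
/// order. The sequence is valid only if it starts at the expected value and
/// increases by exactly one each time (no holes, no reordering).
fn valid_wrapper_sequence(expected_first: u64, wrapper_counters: &[u64]) -> bool {
    wrapper_counters
        .iter()
        .enumerate()
        .all(|(i, &counter)| counter == expected_first + i as u64)
}
```

For example, with an expected counter of 5 in storage, the sequence `[5, 6, 7]` is accepted, while `[5, 7]` (hole in the sequence) or `[6, 5, 7]` (reordering) is rejected.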
+ +The entire counter ordering is only done on the `WrapperTx`: if the inner +counter is wrong then the inner transaction will fail and the signer of the +corresponding wrapper will be charged with fees. This incentivizes submitters to +produce valid transactions and discourages malicious user from rewrapping and +resubmitting old transactions. + +#### Mempool checks + +As a form of optimization to prevent mempool spamming, some of the checks that +have been introduced in this document will also be brought to the +`mempool_validate` function. Of course, we always refer to checks on the +`WrapperTx` only. More specifically: + +- Check the `ChainId` field +- Check the signature of the transaction against the `pk` field of the + `WrapperTx` +- Perform a limited check on the transaction counter + +Regarding the last point, `mempool_validate` will check if the counter in the +transaction is `>=` than the one in storage for the address signing the +`WrapperTx`. A complete check (checking for strict equality) is not feasible, as +described in the [relative](#mempool-counter-validation) section. + +### Alternatives considered + +In this section we list some possible solutions that were taken into +consideration during the writing of this solution but were eventually discarded. + +#### Mempool counter validation + +The idea of performing a complete validation of the transaction counters in the +`mempool_validate` function was discarded because of a possible flaw. + +Suppose a client sends five transactions (counters from 1 to 5). The mempool of +the next block proposer is not guaranteed to receive them in order: something on +the network could shuffle the transactions up so that they arrive in the +following order: 2-3-4-5-1. Now, since we validate every single transaction to +be included in the mempool in the exact order in which we receive them, we would +discard the first four transactions and only accept the last one, that with +counter 1. Now the next block proposer might have the four discarded +transactions in its mempool (since those were not added to the previous block +and therefore not evicted from the other mempools, at least they shouldn't, see +[block rejection](#block-rejection)) and could therefore include them in the +following block. But still, a process that could have ended in a single block +actually took two blocks. Moreover, there are two more issues: + +- The next block proposer might have the remaining transactions out of order in + his mempool as well, effectively propagating the same issue down to the next + block proposer +- The next block proposer might not have these transactions in his mempool at + all + +Finally, transactions that are not allowed into the mempool don't get propagated +to the other peers, making their inclusion in a block even harder. It is instead +better to avoid a complete filter on the transactions based on their order in +the mempool: instead we are going to perform a simpler check and then let the +block proposer rearrange them correctly when proposing the block. + +#### In-protocol protection for InnerTx + +An alternative implementation could place the protection for the inner tx in +protocol, just like the wrapper one, based on the transaction counter inside +`SignedTxData`. The check would run in `process_proposal` and the update in +`finalize_block`, just like for the wrapper transaction. 
This implementation, +though, shows two drawbacks: + +- it implies the need for an hard fork in case of a modification of the replay + protection mechanism +- it's not clear who's the source of the inner transaction from the outside, as + that depends on the specific code of the transaction itself. We could use + specific whitelisted txs set to define when it requires a counter (would not + work for future programmable transactions), but still, we have no way to + define which address should be targeted for replay protection (**blocking + issue**) + +#### In-protocol counter increase for InnerTx + +In the [storage counter](#storage-counters) section we mentioned the issue of +increasing the transaction counter for an inner tx even in case of failure. A +possible solution that we took in consideration and discarded was to increase +the counter from protocol in case of a failure. + +This is technically feasible since the protocol is aware of the keys modified by +the transaction and also of the results of the validity predicates (useful in +case the transaction updated more than one counter in storage). It is then +possible to recover the value and reapply the change directly from protocol. +This logic though, is quite dispersive, since it effectively splits the +management of the counter for the `InnerTx` among Wasm and protocol, while our +initial intent was to keep it completely in Wasm. + +#### Single counter in storage + +We can't use a single transaction counter in storage because this would prevent +batching. + +As an example, if a client (with a current counter in storage holding value 5) +generates two transactions to be included in the same block, signing both the +outer and the inner (default behavior of the client), it would need to generate +the following transaction counters: + +``` +[ + T1: (WrapperCtr: 5, InnerCtr: 6), + T2: (WrapperCtr: 7, InnerCtr: 8) +] +``` + +Now, the current execution model of Namada includes the `WrapperTx` in a block +first to then decrypt and execute the inner tx in the following block +(respecting the committed order of the transactions). That would mean that the +outer tx of `T1` would pass validation and immediately increase the counter to 6 +to prevent a replay attack in the same block. Now, the outer tx of `T2` will be +processed but it won't pass validation because it carries a counter with value 7 +while the ledger expects 6. + +To fix this, one could think to set the counters as follows: + +``` +[ + T1: (WrapperCtr: 5, InnerCtr: 7), + T2: (WrapperCtr: 6, InnerCtr: 8) +] +``` + +This way both the transactions will be considered valid and executed. The issue +is that, if the second transaction is not included in the block (for any +reason), than the first transaction (the only one remaining at this point) will +fail. In fact, after the outer tx has correctly increased the counter in storage +to value 6 the block will be accepted. In the next block the inner transaction +will be decrypted and executed but this last step will fail since the counter in +`SignedTxData` carries a value of 7 and the counter in storage has a value of 6. + +To cope with this there are two possible ways. The first one is that, instead of +checking the exact value of the counter in storage and increasing its value by +one, we could check that the transaction carries a counter `>=` than the one in +storage and write this one (not increase) to storage. The problem with this is +that it the lack of support for strict ordering of execution. 
+ +The second option is to keep the usual increase strategy of the counter +(increase by one and check for strict equality) and simply use two different +counters in storage for each address. The transaction will then look like this: + +``` +[ + T1: (WrapperCtr: 5, InnerCtr: 5), + T2: (WrapperCtr: 6, InnerCtr: 6) +] +``` + +Since the order of inclusion of the `WrapperTxs` forces the same order of the +execution for the inner ones, both transactions can be correctly executed and +the correctness will be maintained even in case `T2` didn't make it to the block +(note that the counter for an inner tx and the corresponding wrapper one don't +need to coincide). + +## Wrapper-bound InnerTx + +The solution is to tie an `InnerTx` to the corresponding `WrapperTx`. By doing +so, it becomes impossible to rewrap an inner transaction and, therefore, all the +attacks related to this practice would be unfeasible. This mechanism requires +even less space in storage (only a 64 bit counter for every address signing +wrapper transactions) and only one check on the wrapper counter in protocol. As +a con, it requires communication between the signer of the inner transaction and +that of the wrapper during the transaction construction. This solution also +imposes a strict ordering on the wrapper transactions issued by a same address. + +To do so we will have to change the current definition of the two tx structs to +the following: + +```rust +pub struct WrapperTx { + /// The fee to be payed for including the tx + pub fee: Fee, + /// Used to determine an implicit account of the fee payer + pub pk: common::PublicKey, + /// Max amount of gas that can be used when executing the inner tx + pub gas_limit: GasLimit, + /// Lifetime of the transaction, also determines which decryption key will be used + pub expiration: DateTimeUtc, + /// Chain identifier for replay protection + pub chain_id: ChainId, + /// Transaction counter for replay protection + pub tx_counter: u64, + /// the encrypted payload + pub inner_tx: EncryptedTx, +} + +pub struct Tx { + pub code: Vec, + pub data: Option>, + pub timestamp: DateTimeUtc, + pub wrapper_commit: Option, +} +``` + +The Wrapper transaction no longer holds the inner transaction hash while the +inner one now holds a commit to the corresponding wrapper tx in the form of the +hash of a `WrapperCommit` struct, defined as: + +```rust +pub struct WrapperCommit { + pub pk: common::PublicKey, + pub tx_counter: u64, + pub expiration: DateTimeUtc, + pub chain_id: ChainId, +} +``` + +The `pk-tx_counter` couple contained in this struct, uniquely identifies a +single `WrapperTx` (since a valid tx_counter is unique given the address) so +that the inner one is now bound to this specific wrapper. The remaining fields, +`expiration` and `chain_id`, will tie these two values given their importance in +terms of safety (see the [relative](#wrappertx-checks) section). Note that the +`wrapper_commit` field must be optional because the `WrapperTx` struct itself +gets converted to a `Tx` struct before submission but it doesn't need any +commitment. + +Both the inner and wrapper tx get signed on their hash, as usual, to prevent +tampering with data. When a wrapper gets processed by the ledger, we first check +the validity of the signature, checking that none of the fields were modified: +this means that the inner tx embedded within the wrapper is, in fact, the +intended one. 
This last statement means that no external attacker has tampered +data, but the tampering could still have been performed by the signer of the +wrapper before signing the wrapper transaction. + +If this check (and others, explained later in the [checks](#wrappertx-checks) +section) passes, then the inner tx gets decrypted in the following block +proposal process. At this time we check that the order in which the inner txs +are inserted in the block matches that of the corresponding wrapper txs in the +previous block. To do so, we rely on an in-storage queue holding the hash of the +`WrapperCommit` struct computed from the wrapper tx. From the inner tx we +extract the `WrapperCommit` hash and check that it matches that in the queue: if +they don't it means that the inner tx has been reordered and we reject the +block. + +If this check passes then we can send the inner transaction to the wasm +environment for execution: if the transaction is signed, then at least one VP +will check its signature to spot possible tampering of the data (especially by +the wrapper signer, since this specific case cannot be checked before this step) +and, if this is the case, will reject this transaction and no storage +modifications will be applied. + +In summary: + +- The `InnerTx` carries a unique identifier of the `WrapperTx` embedding it +- Both the inner and wrapper txs are signed on all of their data +- The signature check on the wrapper tx ensures that the inner transaction is + the intended one and that this wrapper has not been used to wrap a different + inner tx. It also verifies that no tampering happened with the inner + transaction by a third party. Finally, it ensures that the public key is the + one of the signer +- The check on the `WrapperCommit` ensures that the inner tx has not been + reordered nor rewrapped (this last one is a non-exhaustive check, inner tx + data could have been tampered with by the wrapper signer) +- The signature check of the inner tx performed in Vp grants that no data of the + inner tx has been tampered with, effectively verifying the correctness of the + previous check (`WrapperCommit`) + +This sequence of controls makes it no longer possible to rewrap an `InnerTx` +which is now bound to its wrapper. This implies that replay protection is only +needed on the `WrapperTx` since there's no way to extract the inner one, rewrap +it and replay it. + +### WrapperTx checks + +In `mempool_validation` we will perform some checks on the wrapper tx to +validate it. These will involve: + +- Valid signature +- `GasLimit` is below the block gas limit (see the + [fee specs](../economics/fee-system.md) for more details) +- `Fees` are paid with an accepted token and match the minimum amount required + (see the [fee specs](../economics/fee-system.md) for more details) +- Valid chainId +- Valid transaction counter +- Valid expiration + +These checks can all be done before executing the transactions themselves. If +any of these fails, the transaction should be considered invalid and the action +to take will be one of the followings: + +1. If the checks fail on the signature, chainId, expiration or transaction + counter, then this transaction will be forever invalid, regardless of the + possible evolution of the ledger's state. There's no need to include the + transaction in the block nor to increase the transaction counter. 
Moreover, + we **cannot** include this transaction in the block to charge a fee (as a + sort of punishment) because these errors may not depend on the signer of the + tx (could be due to malicious users or simply a delay in the tx inclusion in + the block) +2. If the checks fail on `Fee` or `GasLimit` the transaction should be + discarded. In theory the gas limit of a block is a Namada parameter + controlled by governance, so there's a chance that the transaction could + become valid in the future should this limit be raised. The same applies to + the token whitelist and the minimum fee required. However we can expect a + slow rate of change of these parameters so we can reject the tx (the + submitter can always resubmit it at a future time) +3. If all the checks pass validation we will include the transaction in the + block to increase the counter and charge the fee + +Note that, regarding point one, there's a distinction to be made about an +invalid `tx_counter` which could be invalid because of being old or being in +advance. To solve this last issue (counter greater than the expected one), we +have to introduce the concept of a lifetime (or timeout) for the transactions: +basically, the `WrapperTx` will hold an extra field called `expiration` stating +the maximum time up until which the submitter is willing to see the transaction +executed. After the specified time the transaction will be considered invalid +and discarded regardless of all the other checks. This way, in case of a +transaction with a counter greater than expected, it is sufficient to wait till +after the expiration to submit more transactions, so that the counter in storage +is not modified (kept invalid for the transaction under observation) and +replaying that tx would result in a rejection. + +This actually generalizes to a more broad concept. In general, a transaction is +valid at the moment of submission, but after that, a series of external factors +(ledger state, etc.) might change the mind of the submitter who's now not +interested in the execution of the transaction anymore. By introducing this new +field we are introducing a new constraint in the transaction's contract, where +the ledger will make sure to prevent the execution of the transaction after the +deadline and, on the other side, the submitter commits himself to the result of +the execution at least until its expiration. If the expiration is reached and +the transaction has not been executed the submitter can decide to submit a new, +identical transaction if he's still interested in the changes carried by it. + +In our design, the `expiration` will hold until the transaction is executed, +once it's executed, either in case of success or failure, the `tx_counter` will +be increased and the transaction will not be replayable. In essence, the +transaction submitter commits himself to one of these three conditions: + +- Transaction is invalid regardless of the specific state +- Transaction is executed (either with success or not) and the transaction + counter is increased +- Expiration time has passed + +The first condition satisfied will invalidate further executions of the same tx. + +Since the signer of the wrapper may be different from the one of the inner we +also need to include this `expiration` field in the `WrapperCommit` struct, to +prevent the signer of the wrapper from setting a lifetime which is in conflict +with the interests of the inner signer. 
Note that adding a separate lifetime for +the wrapper alone (which would require two separate checks) doesn't carry any +benefit: a wrapper with a lifetime greater than the inner would have no sense +since the inner would fail. Restricting the lifetime would work but it also +means that the wrapper could prevent a valid inner transaction from being +executed. We will then keep a single `expiration` field specifying the wrapper +tx max time (the inner one will actually be executed one block later because of +the execution mechanism of Namada). + +To prevent the signer of the wrapper from submitting the transaction to a +different chain, the `ChainId` field should also be included in the commit. + +Finally, in case the transaction run out of gas (based on the provided +`GasLimit` field of the wrapper) we don't need to take any action: by this time +the transaction counter will have already been incremented and the tx is not +replayable anymore. In theory, we don't even need to increment the counter since +the only way this transaction could become valid is a change in the way gas is +accounted, which might require a fork anyway, and consequently a change in the +required `ChainId`. However, since we can't tell the gas consumption before the +inner tx has been executed, we cannot anticipate this check. + +All these checks are also run in `process_proposal` with an addition: validators +also check that the wrapper signer has enough funds to pay the fee. This check +should not be done in mempool because the funds available for a certain address +are variable in time and should only be checked at block inclusion time. If any +of the checks fail here, the entire block is rejected forcing a new Tendermint +round to begin (see a better explanation of this choice in the +[relative](#block-rejection) section). + +The `expiration` parameter also justifies that the check on funds is only done +in `process_proposal` and not in mempool. Without it, the transaction could be +potentially executed at any future moment, possibly going against the mutated +interests of the submitter. With the expiration parameter, now, the submitter +commits himself to accept the execution of the transaction up to the specified +time: it's going to be his responsibility to provide a sensible value for this +parameter. Given this constraint the transaction will be kept in mempool up +until the expiration (since it would become invalid after that in any case), to +prevent the mempool from increasing too much in size. + +This mechanism can also be applied to another scenario. Suppose a transaction +was not propagated to the network by a node (or a group of colluding nodes). +Now, this tx might be valid, but it doesn't get inserted into a block. Without +an expiration, if the submitter doesn't submit any other transaction (which gets +included in a block to increase the transaction counter), this tx can be +replayed (better, applied, since it was never executed in the first place) at a +future moment in time when the submitter might not be willing to execute it any +more. + +### WrapperCommit + +The fields of `WrapperTx` not included in `WrapperCommit` are at the discretion +of the `WrapperTx` producer. 
These fields are not included in the commit because +of one of these two reasons: + +- They depend on the specific state of the wrapper signer and cannot be forced + (like `fee`, since the wrapper signer must have enough funds to pay for those) +- They are not a threat (in terms of replay attacks) to the signer of the inner + transaction in case of failure of the transaction + +In a certain way, the `WrapperCommit` not only binds an `InnerTx` no a wrapper, +but effectively allows the inner to control the wrapper by requesting some +specific parameters for its creation and bind these parameters among the two +transactions: this allows us to apply the same constraints to both txs while +performing the checks on the wrapper only. + +### Transaction creation process + +To craft a transaction, the process will now be the following (optional steps +are only required if the signer of the inner differs from that of the wrapper): + +- (**Optional**) the `InnerTx` constructor request, to the wrapper signer, his + public key and the `tx_counter` to be used +- The `InnerTx` is constructed in its entirety with also the `wrapper_commit` + field to define the constraints of the future wrapper +- The produced `Tx` struct get signed over all of its data (with `SignedTxData`) + producing a new struct `Tx` +- (**Optional**) The inner tx produced is sent to the `WrapperTx` producer + together with the `WrapperCommit` struct (required since the inner tx only + holds the hash of it) +- The signer of the wrapper constructs a `WrapperTx` compliant with the + `WrapperCommit` fields +- The produced `WrapperTx` gets signed over all of its fields + +Compared to a solution not binding the inner tx to the wrapper one, this +solution requires the exchange of 3 messages (request `tx_counter`, receive +`tx_counter`, send `InnerTx`) between the two signers (in case they differ), +instead of one. However, it allows the signer of the inner to send the `InnerTx` +to the wrapper signer already encrypted, guaranteeing a higher level of safety: +only the `WrapperCommit` struct should be sent clear, but this doesn't reveal +any sensitive information about the inner transaction itself. \ No newline at end of file From b6299e29a7ce944b1ec218f558eea43d364ef001 Mon Sep 17 00:00:00 2001 From: bengtlofgren Date: Mon, 7 Aug 2023 09:44:40 +0100 Subject: [PATCH 05/13] replay protection passive voice --- .../pages/base-ledger/replay-protection.mdx | 232 +++++------------- 1 file changed, 63 insertions(+), 169 deletions(-) diff --git a/packages/specs/pages/base-ledger/replay-protection.mdx b/packages/specs/pages/base-ledger/replay-protection.mdx index 1a6bd01f..4033dd85 100644 --- a/packages/specs/pages/base-ledger/replay-protection.mdx +++ b/packages/specs/pages/base-ledger/replay-protection.mdx @@ -85,7 +85,7 @@ A transaction is constructed as follows: network Note that the signer of the `WrapperTx` and that of the inner one don't need to -coincide, but the signer of the wrapper will be charged with gas and fees. In +coincide, but the signer of the wrapper is charged with gas and fees. In the execution steps: 1. The `WrapperTx` signature is verified and, only if valid, the tx is processed @@ -98,31 +98,19 @@ the execution steps: signature of the transaction: if the signature is not valid, the VP will deem the transaction invalid and the changes won't be applied to the storage -The signature checks effectively prevent any tampering with the transaction data -because that would cause the checks to fail and the transaction to be rejected. 
+The transaction data is effectively prevented from being tampered with by the signature checks, since any such tampering would cause the checks to fail and the transaction to be rejected. For a more in-depth view, please refer to the [Namada execution spec](./execution.md). -### Tendermint replay protection +### CometBFT replay protection -The underlying consensus engine, +A first layer of protection is provided in the mempool of the underlying consensus engine, [CometBFT](https://github.com/cometbft/cometbft/blob/main/spec/abci/abci%2B%2B_app_requirements.md#connection-state), -provides a first layer of protection in its mempool which is based on a cache of -previously seen transactions. This mechanism is actually aimed at preventing a -block proposer from including an already processed transaction in the next -block, which can happen when the transaction has been received late. Of course, -this also acts as a countermeasure against intentional replay attacks. This -check though, like all the checks performed in `CheckTx`, is weak, since a -malicious validator could always propose a block containing invalid -transactions. There's therefore the need for a more robust replay protection -mechanism implemented directly in the application. +which is based on a cache of transactions previously seen. This mechanism is aimed at preventing an already processed transaction from being included in the next block by a block proposer, which can occur when the transaction is received late. Of course, this also acts as a countermeasure against intentional replay attacks. However, this check, like all the checks performed in CheckTx, is weak, as a block containing invalid transactions could still be proposed by a malicious validator. Therefore, there is a need for a more robust replay protection mechanism implemented directly in the application. ## Implementation -Namada replay protection consists of three parts: the hash-based solution for -both `EncryptedTx` (also called the `InnerTx`) and `WrapperTx`, a way to -mitigate replay attacks in case of a fork and a concept of a lifetime for the -transactions. +Namada replay protection consists of three parts: the hash-based solution for both `EncryptedTx` (also referred to as the `InnerTx`) and `WrapperTx`, a way to mitigate replay attacks in case of a fork, and a concept of a lifetime for transactions. ### Hash register @@ -130,65 +118,32 @@ The actual Wasm code and data for the transaction are encapsulated inside a struct `Tx`, which gets encrypted as an `EncryptedTx` and wrapped inside a `WrapperTx` (see the [relative](#encryption-authentication) section). This inner transaction must be protected from replay attacks because it carries the actual -semantics of the state transition. Moreover, even if the wrapper transaction was -protected from replay attacks, an attacker could extract the inner transaction, -rewrap it, and replay it. Note that for this attack to work, the attacker will -need to sign the outer transaction himself and pay gas and fees for that, but -this could still cause much greater damage to the parties involved in the inner -transaction. - -`WrapperTx` is the only type of transaction currently accepted by the ledger. It -must be protected from replay attacks because, if it wasn't, a malicious user -could replay the transaction as is. Even if the inner transaction implemented -replay protection or, for any reason, wasn't accepted, the signer of the wrapper -would still pay for gas and fees, effectively suffering economic damage. 
- -To prevent the replay of both these transactions we will rely on a set of -already processed transactions' digests that will be kept in storage. These -digests will be computed on the **unsigned** transactions, to support replay -protection even for [multisigned](multisignature.md) transactions: in this case, -if hashes were taken from the signed transactions, a different set of signatures -on the same tx would produce a different hash, effectively allowing for a -replay. To support this, we'll first need to update the `WrapperTx` hash field -to contain the hash of the unsigned inner tx, instead of the signed one: this -doesn't affect the overall safety of Namada (since the wrapper is still signed -over all of its bytes, including the inner signature) and allows for early -replay attack checks in mempool and at wrapper block-inclusion time. -Additionally, we need a subspace in storage headed by a `ReplayProtection` -internal address: +semantics of the state transition. This inner transaction must be protected from replay attacks, as it carries the actual semantics of the state transition. Furthermore, even if the wrapper transaction were protected from replay attacks, the inner transaction could still be extracted by an attacker, rewrapped, and replayed. It should be noted that for this attack to succeed, the attacker would need to sign the outer transaction themselves and pay gas and fees for it. However, this could still cause significant damage to the parties involved in the inner transaction. -``` +`WrapperTx` is the only type of transaction currently accepted by the ledger. It must be protected from replay attacks; otherwise, a malicious user could replay the transaction as is. Even if the inner transaction implemented replay protection or was not accepted for any reason, the signer of the wrapper would still incur gas and fees, resulting in economic damage. + +To prevent the replay of both these transactions, reliance is placed on a set of digests from already processed transactions that are kept in storage. These digests are computed on the **unsigned** transactions, to support replay +protection even for [multisigned](multisignature.md) transactions. In this case, using hashes from the signed transactions would result in a different hash for a different set of signatures on the same transaction, allowing for a replay. To achieve this, the `WrapperTx` hash field contins the hash of the unsigned inner transaction, instead of the signed one. This change does not impact the overall safety of Namada, as the wrapper is still signed over all its bytes, including the inner signature. The modification also allows for early replay attack checks in the mempool and during wrapper block-inclusion. + +In addition, a subspace in storage is required, headed by a `ReplayProtection` internal address: +``` bash /\$ReplayProtectionAddress/\$tx0_hash: None /\$ReplayProtectionAddress/\$tx1_hash: None /\$ReplayProtectionAddress/\$tx2_hash: None ... ``` -The hashes will form the last part of the path to allow for a fast storage -lookup. - -The consistency of the storage subspace is of critical importance for the -correct working of the replay protection mechanism. To protect it, a validity -predicate will check that no changes to this subspace are applied by any wasm -transaction, as those should only be available from protocol. 
- -Both in `mempool_validation` and `process_proposal` we will perform a check -(together with others, see the [relative](#wrapper-checks) section) on both the -digests against the storage to check that neither of the transactions has -already been executed: if this doesn't hold, the `WrapperTx` will not be -included into the mempool/block respectively. In `process_proposal` we'll use a -temporary cache to prevent a replay of a transaction in the same block. If both -checks pass then the transaction is included in the block. The hashes are +The hashes are positioned at the end of the path to enable rapid storage lookups. + +The consistency of the storage subspace is critically important for the correct functioning of the replay protection mechanism. To safeguard it, a validity predicate will verify that no changes to this subspace are applied by any wasm transaction, as these changes should only originate from the protocol. + +In both `mempool_validation` and `process_proposal` a check is performed +(in conjuction with other checks, as described in the [relative section](#wrapper-checks)) on the digests against the storage to ensure that neither of the transactions has already been executed. If this condition is not met, the `WrapperTx` is not included in the mempool or block, respectively. In `process_proposal` a +temporary cache is used to prevent the replay of a transaction in the same block. If both +checks pass, the transaction is included in the block. The hashes are committed to storage in `finalize_block` and the transaction is executed. -In the next block we deserialize the inner transaction, check the validity of -the decrypted txs and their correct order: if the order is off a new round of -CometBFT will start. If instead an error is found in any single decrypted tx, -we remove from storage the previously inserted hash of the inner tx to allow it -to be rewrapped, and discard the tx itself. Finally, in `finalize_block` we -execute the tx: if it runs out of gas then we'll remove its hash from storage, -again to allow rewrapping and executing the transaction, otherwise we'll keep -the hash in storage (both in case of success or failure of the tx). +In the subsequent block, the inner transaction is deserialized, and the validity of the decrypted transactions and their correct order are verified. If the order is incorrect, a new round of CometBFT commences. If an error is found in any individual decrypted transaction, the previously inserted hash of the inner transaction is removed from storage to allow for rewrapping, and the transaction itself is discarded. Finally, in `finalize_block` the transaction is executed. If the transaction runs out of gas, its hash will be removed from storage to enable rewrapping and execution of the transaction. Otherwise, the hash will remain in storage, regardless of the success or failure of the transaction. #### Optional unshielding @@ -246,12 +201,10 @@ case, a review of the replay protection mechanism might be required. ### Forks -In the case of a fork, the transaction hash is not enough to prevent replay -attacks. Transactions, in fact, could still be replayed on the other branch as -long as their format is kept unchanged and the counters in storage match. +In the case of a fork, replay attacks are not prevented by the transaction hash alone. Transactions could still be replayed on the other branch as long as their format remains unchanged and the counters in storage match. 
-To mitigate this problem, transactions will need to carry a `ChainId` identifier -to tie them to a specific fork. This field needs to be added to the `Tx` struct +To mitigate this problem, transactions need to carry a `ChainId` identifier +to tie them to a specific fork. This field must be added to the `Tx` struct so that it applies to both `WrapperTx` and `EncryptedTx`: ```rust @@ -263,45 +216,25 @@ pub struct Tx { } ``` -This new field will be signed just like the other ones and is therefore subject -to the same guarantees explained in the [initial](#encryption-authentication) -section. The validity of this identifier will be checked in `process_proposal` -for both the outer and inner tx: if a transaction carries an unexpected chain -id, it won't be applied, meaning that no modifications will be applied to -storage. +This new field is signed just like the other ones and is therefore subject to the same guarantees explained in the [initial](#encryption-authentication) section. The validity of this identifier is checked in `process_proposal` +for both the outer and inner tx: for both the outer and inner transactions. If a transaction carries an unexpected chain id, it won't be applied, meaning that no modifications will be applied to storage. ### Transaction lifetime -In general, a transaction is valid at the moment of submission, but after that, -a series of external factors (ledger state, etc.) might change the mind of the -submitter who's now not interested in the execution of the transaction anymore. - -We have to introduce the concept of a lifetime (or timeout) for the -transactions: basically, the `Tx` struct will hold an optional extra field -called `expiration` stating the maximum `DateTimeUtc` up until which the -submitter is willing to see the transaction executed. After the specified time, -the transaction will be considered invalid and discarded regardless of all the -other checks. - -By introducing this new field we are setting a new constraint in the -transaction's contract, where the ledger will make sure to prevent the execution -of the transaction after the deadline and, on the other side, the submitter -commits himself to the result of the execution at least until its expiration. If -the expiration is reached and the transaction has not been executed the -submitter can decide to submit a new transaction if he's still interested in the -changes carried by it. - -In our design, the `expiration` will hold until the transaction is executed: -once it's executed, either in case of success or failure, the tx hash will be -written to storage and the transaction will not be replayable. In essence, the -transaction submitter commits himself to one of these three conditions: +In general, a transaction is valid at the moment of submission, but after that, various external factors (ledger state, etc.) might change the submitter's intent. The submitter may no longer be interested in the execution of the transaction. + +The concept of a lifetime (or timeout) for transactions needs to be introduced: the `Tx` struct will include an optional additional field called `expiration`, indicating the maximum `DateTimeUtc` by which the submitter is willing to see the transaction executed. After the specified time, the transaction is considered invalid and discarded, irrespective of other checks. + +By introducing this new field, a new constraint is added to the transaction's contract. The ledger ensures that the transaction is prevented from execution after the deadline. 
On the other hand, the submitter commits to the execution outcome until its expiration. If the expiration is reached and the transaction has not been executed, the submitter can decide to submit a new transaction if still interested in the changes carried by it. + +In the current design, the `expiration` holds until the transaction is executed. Once executed, the transaction hash is committed to storage, preventing further replays (regardless of whether the tx was successful or not). Essentially, the transaction submitter commits to one of these three conditions: - Transaction is invalid regardless of the specific state - Transaction is executed (either with success or not) and the transaction hash is saved in the storage - Expiration time has passed -The first condition satisfied will invalidate further executions of the same tx. +Any satisfied condition invalidates further executions of the same transaction. ```rust pub struct Tx { @@ -314,83 +247,44 @@ pub struct Tx { } ``` -The wrapper transaction will match the `expiration` of the inner (if any) for a -correct execution. Note that we need this field also for the wrapper to -anticipate the check at mempool/proposal evaluation time, but also to prevent -someone from inserting a wrapper transaction after the corresponding inner has -expired forcing the wrapper signer to pay for the fees. +The wrapper transaction must match the `expiration` of the inner transaction (if any). This field is also needed for the wrapper to anticipate the check at mempool/proposal evaluation time. Additionally, it prevents someone from inserting a wrapper transaction after the corresponding inner transaction has expired, compelling the wrapper signer to pay the fees. ### Wrapper checks -In `mempool_validation` we will perform some checks on the wrapper tx to -validate it. These will involve: +In mempool_validation, several checks are performed on the wrapper transaction to validate it. These checks include: - Signature -- `GasLimit` is below the block gas limit -- `Fees` are paid with an accepted token and match the minimum amount required -- `ChainId` +- GasLimit is below the block gas limit +- Fees are paid with an accepted token and match the minimum required amount +- ChainId - Transaction hash - Expiration -- Wrapper signer has enough funds to pay the fee -- Unshielding tx (if present), is indeed a masp unshielding transfer -- The unshielding tx (if present) releases the minimum amount of tokens required - to pay fees -- The unshielding tx (if present) runs succesfully - -For gas, fee and the unshielding tx more details can be found in the -[fee specs](../economics/fee-system.md). - -These checks can all be done before executing the transactions themselves. If -any of these fails, the transaction should be considered invalid and the action -to take will be one of the followings: - -1. If the checks fail on the signature, chainId, expiration, transaction hash, - balance or the unshielding tx, then this transaction will be forever invalid, - regardless of the possible evolution of the ledger's state. There's no need - to include the transaction in the block. Moreover, we **cannot** include this - transaction in the block to charge a fee (as a sort of punishment) because - these errors may not depend on the signer of the tx (could be due to - malicious users or simply a delay in the tx inclusion in the block) -2. If the checks fail on `Fee` or `GasLimit` the transaction should be - discarded. 
In theory the gas limit of a block is a Namada parameter - controlled by governance, so there's a chance that the transaction could - become valid in the future should this limit be raised. The same applies to - the token whitelist and the minimum fee required. However we can expect a - slow rate of change of these parameters so we can reject the tx (the - submitter can always resubmit it at a future time) - -If instead all the checks pass validation we will include the transaction in the -block to store the hash and charge the fee. - -All these checks are also run in `process_proposal`. - -This mechanism can also be applied to another scenario. Suppose a transaction -was not propagated to the network by a node (or a group of colluding nodes). -Now, this tx might be valid, but it doesn't get inserted into a block. Without -an expiration, this tx can be replayed (better, applied, since it was never -executed in the first place) at a future moment in time when the submitter might -not be willing to execute it any more. +- Sufficient funds for wrapper signer to pay the fee +- Unshielding transaction (if present) is a valid masp unshielding transfer +- Unshielding transaction (if present) releases the minimum required tokens for fee payment +- Successful execution of the unshielding transaction (if present) + +More details about gas, fees, and the unshielding transaction can be found in the [fee specs](../economics/fee-system.md). + +These checks can all be conducted before executing the transactions themselves. If any of these checks fail, the transaction is considered invalid, and the appropriate action is taken: + +1. If the checks fail for signature, chainId, expiration, transaction hash, balance, or the unshielding transaction, the transaction is permanently invalid. It does not need to be included in the block. Furthermore, including the transaction in the block to impose a fee (as a form of punishment) is not possible, as these errors may not be due to the signer of the transaction (could be caused by malicious users or simply delayed transaction inclusion in the block). +2. If the checks fail for `Fee` or `GasLimit`, the transaction is discarded. In theory, the gas limit of a block is a Namada parameter governed by governance. Thus, the transaction could become valid in the future if this limit is increased. The same principle applies to the token whitelist and the minimum required fee. However, these parameters are expected to change slowly, so rejecting the transaction is reasonable (the submitter can always resubmit it in the future). + +If all checks pass validation, the transaction is included in the block to store the hash and apply the fee. + +All of these checks are also run in `process_proposal`. + +This mechanism can also be applied to another scenario. Suppose a transaction was not propagated to the network by a node (or a group of colluding nodes). This transaction might be valid, but it is not inserted into a block. Without an expiration, this transaction could be replayed (more accurately, applied, as it was never executed in the first place) at a later time when the submitter may no longer wish to execute it. ### Block rejection -To prevent a block proposer from including invalid transactions in a block, the -validators will reject the entire block in case they find a single invalid -wrapper transaction. 
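Expressed as a sketch (the function and type names below are invented for illustration and are not the actual `process_proposal` code), the all-or-nothing rule is simply:

```rust
// Hypothetical names: WrapperTx stands for the wrapper transactions of the
// proposed block, passes_wrapper_checks for the checks listed above.
enum BlockDecision {
    Accept,
    Reject,
}

fn validate_proposed_block(wrappers: &[WrapperTx]) -> BlockDecision {
    if wrappers.iter().all(|w| passes_wrapper_checks(w)) {
        BlockDecision::Accept
    } else {
        // A single invalid wrapper invalidates the entire proposal, forcing a
        // new consensus round with a different block.
        BlockDecision::Reject
    }
}
```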
+To prevent the inclusion of invalid transactions in a block by a block proposer, the validators reject the entire block if they encounter a single invalid wrapper transaction. -Rejecting the single invalid transaction while still accepting the block is not -a valid solution. In this case, in fact, the block proposer has no incentive to -include invalid transactions in the block because these would gain him no fees -but, at the same time, he doesn't really have a disincentive to not include -them, since in this case the validators will simply discard the invalid tx but -accept the rest of the block granting the proposer his fees on all the other -transactions. This, of course, applies in case the proposer has no other valid -tx to include. A malicious proposer could act like this to spam the block -without suffering any penalty. +Rejecting only the single invalid transaction while still allowing the acceptance of the entire block constitutes an inadequate solution. In this scenario, the block proposer lacks the motivation to incorporate invalid transactions into the block, as these do not yield any fees. However, simultaneously, there exists no real disincentive for the proposer to exclude such transactions. In such cases, validators simply discard the invalid transaction while approving the remainder of the block, thereby enabling the proposer to collect fees from all other transactions. This circumstance applies, of course, when the proposer lacks other valid transactions to include. A malicious proposer could employ this strategy to inundate the block with spam without facing any penalties. -To recap, a block is rejected when at least one of the following conditions is -met: +In summary, a block faces rejection if at least one of the following conditions is satisfied: -- At least one `WrapperTx` is invalid with respect to the checks listed in the +- One or more `WrapperTx` transactions are invalid in accordance with the checks delineated in the [relative section](#wrapper-checks) -- The order/number of decrypted txs differs from the order/number committed in - the previous block +- The order or number of decrypted transactions differs from the order or number committed in the previous block From 181a25eafce11ad6d41f08c9ed5e04625d3a6396 Mon Sep 17 00:00:00 2001 From: Bengt Lofgren <51077282+bengtlofgren@users.noreply.github.com> Date: Mon, 7 Aug 2023 14:39:40 +0100 Subject: [PATCH 06/13] Update packages/specs/pages/base-ledger.mdx Co-authored-by: Christopher Goes --- packages/specs/pages/base-ledger.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/packages/specs/pages/base-ledger.mdx b/packages/specs/pages/base-ledger.mdx index 042e5ed0..f58a4288 100644 --- a/packages/specs/pages/base-ledger.mdx +++ b/packages/specs/pages/base-ledger.mdx @@ -3,7 +3,7 @@ The base ledger of Namada includes a [consensus mechanism](./base-ledger/consensus.md) and a validity-predicate based [execution system](./base-ledger/execution.md). ## Consensus -The consensus mechanism on Namada provides an algorithmic way for validators to communicate votes and collectively agree on a consistent state. The algorithim, coupled with a cryptoeconomic assurance called "proof of stake", ensures that non-colluding validators acting in their (economic) self interest will follow the consensus algorithm in a predictable manner. +The consensus mechanism on Namada provides an algorithmic way for validators to communicate votes and collectively agree on a consistent state. 
The algorithm, coupled with a cryptoeconomic voting power allocation mechanism called "proof of stake", is designed so that that non-colluding validators acting in their (economic) self interest will follow the consensus algorithm in a predictable manner. ## Validity predicates The validity-predicate based execution mechanism is inherited from the architectural design philosophy of Anoma. The fundamental idea is that a "valid state" is defined as that which satisfies a set of boolean conditions. These boolean conditions are encoded by functional "validity predicates", which are invoked whenever a state is being proposed. If all validity predicates in the system return the boolean `true`, this defines a valid state which validators can vote on. The validity predicate based mechanism differs from the traditional "smart-contract" based execution model, where a valid state is instead defined as that which results from a series of pre-defined valid execution steps. These execution steps are defined within the smart contract, and verifying the validity of the new state requires *each* validator to run the series of execution steps. From 7099146b7c32600ca04b24a17fae6549a485ae5a Mon Sep 17 00:00:00 2001 From: Bengt Lofgren <51077282+bengtlofgren@users.noreply.github.com> Date: Mon, 7 Aug 2023 14:39:56 +0100 Subject: [PATCH 07/13] Update packages/specs/pages/base-ledger/consensus.mdx Co-authored-by: Christopher Goes --- packages/specs/pages/base-ledger/consensus.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/packages/specs/pages/base-ledger/consensus.mdx b/packages/specs/pages/base-ledger/consensus.mdx index d33d49a9..072e40d7 100644 --- a/packages/specs/pages/base-ledger/consensus.mdx +++ b/packages/specs/pages/base-ledger/consensus.mdx @@ -8,7 +8,7 @@ Namada uses [CometBFT](https://github.com/cometbft/cometbft/) (nee Tendermint Go Using the CometBFT consensus algorithm comes with a number of benefits including but not limited to: - Fast finality - - Tendermint achieves fast and deterministic finality, meaning that once a block is committed to the blockchain, it is irreversible. This is crucial for applications rely on settled transactions that cannot be rolled back. + - CometBFT achieves fast and deterministic finality, meaning that once a block is committed to the blockchain, it is irreversible. This is crucial for applications rely on settled transactions that cannot be rolled back. - Inter-blockchain communication system (IBC) - Composability with all other Tendermint based blockchains, such as Cosmos-ecosystem blockchains - Battle tested From 0557ab16b828ab960edaf4d9b270ee8ea67b80bd Mon Sep 17 00:00:00 2001 From: Bengt Lofgren <51077282+bengtlofgren@users.noreply.github.com> Date: Mon, 7 Aug 2023 14:40:05 +0100 Subject: [PATCH 08/13] Update packages/specs/pages/base-ledger/consensus.mdx Co-authored-by: Christopher Goes --- packages/specs/pages/base-ledger/consensus.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/packages/specs/pages/base-ledger/consensus.mdx b/packages/specs/pages/base-ledger/consensus.mdx index 072e40d7..a328618c 100644 --- a/packages/specs/pages/base-ledger/consensus.mdx +++ b/packages/specs/pages/base-ledger/consensus.mdx @@ -10,7 +10,7 @@ Using the CometBFT consensus algorithm comes with a number of benefits including - Fast finality - CometBFT achieves fast and deterministic finality, meaning that once a block is committed to the blockchain, it is irreversible. 
This is crucial for applications rely on settled transactions that cannot be rolled back. - Inter-blockchain communication system (IBC) - - Composability with all other Tendermint based blockchains, such as Cosmos-ecosystem blockchains + - Composability with all other CometBFT-based blockchains, such as Cosmos-ecosystem blockchains - Battle tested - The entire cosmos-ecosystem have been using the Tendermint - Customisable From 3f2601936a04256c7d12e6db193fa44e2932e7f3 Mon Sep 17 00:00:00 2001 From: Bengt Lofgren <51077282+bengtlofgren@users.noreply.github.com> Date: Mon, 7 Aug 2023 14:40:16 +0100 Subject: [PATCH 09/13] Update packages/specs/pages/base-ledger/consensus.mdx Co-authored-by: Christopher Goes --- packages/specs/pages/base-ledger/consensus.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/packages/specs/pages/base-ledger/consensus.mdx b/packages/specs/pages/base-ledger/consensus.mdx index a328618c..cb2ae99b 100644 --- a/packages/specs/pages/base-ledger/consensus.mdx +++ b/packages/specs/pages/base-ledger/consensus.mdx @@ -12,6 +12,6 @@ Using the CometBFT consensus algorithm comes with a number of benefits including - Inter-blockchain communication system (IBC) - Composability with all other CometBFT-based blockchains, such as Cosmos-ecosystem blockchains - Battle tested - - The entire cosmos-ecosystem have been using the Tendermint + - The entire Cosmos ecosystem has been using CometBFT (nee Tendermint) for years - Customisable - Allows the setting of various of parameters, including the ability to implement a custom proof of stake algorithm \ No newline at end of file From 56ed3a1645c9ed6089070a0d34d241ffe2cff3cf Mon Sep 17 00:00:00 2001 From: Bengt Lofgren <51077282+bengtlofgren@users.noreply.github.com> Date: Mon, 7 Aug 2023 14:40:23 +0100 Subject: [PATCH 10/13] Update packages/specs/pages/base-ledger/consensus.mdx Co-authored-by: Christopher Goes --- packages/specs/pages/base-ledger/consensus.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/packages/specs/pages/base-ledger/consensus.mdx b/packages/specs/pages/base-ledger/consensus.mdx index cb2ae99b..e459e5ea 100644 --- a/packages/specs/pages/base-ledger/consensus.mdx +++ b/packages/specs/pages/base-ledger/consensus.mdx @@ -14,4 +14,4 @@ Using the CometBFT consensus algorithm comes with a number of benefits including - Battle tested - The entire Cosmos ecosystem has been using CometBFT (nee Tendermint) for years - Customisable - - Allows the setting of various of parameters, including the ability to implement a custom proof of stake algorithm \ No newline at end of file + - Allows the setting of various parameters, including the ability to implement a custom proof of stake algorithm \ No newline at end of file From 8023f4a6f2c932c4130159e1b5226040caed064f Mon Sep 17 00:00:00 2001 From: Bengt Lofgren <51077282+bengtlofgren@users.noreply.github.com> Date: Mon, 7 Aug 2023 08:51:44 -0600 Subject: [PATCH 11/13] Update packages/specs/pages/base-ledger/execution.mdx Co-authored-by: Christopher Goes --- packages/specs/pages/base-ledger/execution.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/packages/specs/pages/base-ledger/execution.mdx b/packages/specs/pages/base-ledger/execution.mdx index 6c1ee539..25fc5c54 100644 --- a/packages/specs/pages/base-ledger/execution.mdx +++ b/packages/specs/pages/base-ledger/execution.mdx @@ -5,7 +5,7 @@ The Namada ledger execution system is based on an initial version of the Anoma e ## Validity predicates Conceptually, a validity 
predicate (VP) is a boolean function which takes four inputs: -1. The transaction itself (This may be because certain parts of the transaction needs to be extracted in the VP logic) +1. The transaction itself (certain parts of the transaction are sometimes analyzed in the VP logic) 2. The addresses that are involved with that specific VP 3. The storage state prior to a transaction execution 4. The storage state after the transaction execution From 66f14c4778a10bfd84057121b85ce263fc3c3d4e Mon Sep 17 00:00:00 2001 From: Bengt Lofgren <51077282+bengtlofgren@users.noreply.github.com> Date: Mon, 7 Aug 2023 08:51:58 -0600 Subject: [PATCH 12/13] Update packages/specs/pages/base-ledger/execution.mdx Co-authored-by: Christopher Goes --- packages/specs/pages/base-ledger/execution.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/packages/specs/pages/base-ledger/execution.mdx b/packages/specs/pages/base-ledger/execution.mdx index 25fc5c54..4cdc39a5 100644 --- a/packages/specs/pages/base-ledger/execution.mdx +++ b/packages/specs/pages/base-ledger/execution.mdx @@ -21,7 +21,7 @@ The ledger features an account-based system (in which UTXO-based systems such as Interactions with the Namada ledger are made possible via transactions. In Namada, transactions are allowed to perform arbitrary modifications to the storage of any account, but the transaction will be accepted and state changes applied only if all the validity predicates that were triggered by the transaction accept it. That is, the accounts whose storage sub-spaces were touched by the transaction will all have their validity predicates verifying the transaction. A transaction may also explicitly elect an account as the verifier of that transaction, which will result in that validity predicate being invoked as well. A transaction can add any number of additional verifiers, but cannot remove the ones determined by the protocol. For example, a transparent fungible token transfer would typically trigger 3 validity predicates - those of the token, source and target addresses. -The ledger knows what addresses are involved in a wasm transaction because of how the storage is constructed. Each variable in storage is inherently tied to a substorage owned by some account, and thus that VP is invoked. +The ledger knows what addresses are involved in a WASM transaction because of how the storage is constructed. Each variable in storage is inherently tied to a substorage owned by some account, and thus that VP is invoked. ## Supported validity predicates From 0a4ef6e56abd0b31ec74f3459f312dd7aa0cf657 Mon Sep 17 00:00:00 2001 From: Bengt Lofgren <51077282+bengtlofgren@users.noreply.github.com> Date: Mon, 7 Aug 2023 08:52:14 -0600 Subject: [PATCH 13/13] Update packages/specs/pages/base-ledger/multisignature.mdx Co-authored-by: Christopher Goes --- packages/specs/pages/base-ledger/multisignature.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/packages/specs/pages/base-ledger/multisignature.mdx b/packages/specs/pages/base-ledger/multisignature.mdx index f2204439..4f4d906e 100644 --- a/packages/specs/pages/base-ledger/multisignature.mdx +++ b/packages/specs/pages/base-ledger/multisignature.mdx @@ -8,7 +8,7 @@ The k-of-n multisignature validity predicate authorizes transactions on the basi Namada transactions are signed before being delivered to the network. This signature is checked by the invoked validity predicates to determine the validity of the transaction. 
To support multisignature, Namada's signed transaction data includes the plaintext data of what is being signed, as well as all valid signatures over that data.
 
-Inherently, this implies that all user accounts are 1-of-1 multisignature accounts.
+There are no special non-multisignature established accounts: all user accounts are just 1-of-1 multisignature accounts.
 
 ### Rust implementation
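As an illustration of the k-of-n rule described above, a threshold check can be sketched as follows; every name in the sketch (`check_threshold`, `verify_signature`, `PublicKey`, `Signature`) is an assumption made for the example rather than the actual Namada types:

```rust
// Illustrative sketch of a k-of-n threshold check; names and types are
// assumptions, not the actual Namada multisignature VP implementation.
use std::collections::HashSet;

/// Returns true when at least `threshold` distinct authorised keys have
/// produced a valid signature over `signed_data`.
fn check_threshold(
    signed_data: &[u8],
    signatures: &[(PublicKey, Signature)],
    authorised_keys: &HashSet<PublicKey>,
    threshold: usize,
) -> bool {
    let mut valid_signers = HashSet::new();
    for (pk, sig) in signatures {
        // Only keys registered for the account count towards the threshold,
        // and each signer is counted at most once.
        if authorised_keys.contains(pk) && verify_signature(pk, signed_data, sig) {
            valid_signers.insert(pk);
        }
    }
    valid_signers.len() >= threshold
}
```

With a single authorised key and `threshold = 1` this degenerates to the ordinary single-signer check, which is why every user account can be treated as a 1-of-1 multisignature account.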