diff --git a/CHANGELOG.md b/CHANGELOG.md index 4a7cb193ce..4b22829430 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -130,7 +130,7 @@ Namada 0.27.0 is a minor release that incorporates the remaining essential proof can execute transactions that manipulate its own validator data ([\#2169](https://github.com/anoma/namada/pull/2169)) - Various improvements to the PoS code, including adding a panic on a slashing - failure, some more checked arithmetics, aesthetic code cleanup, and fixing a + failure, some more checked arithmetic, aesthetic code cleanup, and fixing a bug in is_delegator. ([\#2178](https://github.com/anoma/namada/pull/2178)) - Added type tags to transactions to enable hardware wallets to fully decode transactions even after minor Namada updates. @@ -279,7 +279,7 @@ Namada 0.24.0 is a minor release that introduces an SDK crate, PoS redelegation, data in storage. ([\#1944](https://github.com/anoma/namada/pull/1944)) - Query also IBC token balances ([\#1946](https://github.com/anoma/namada/issues/1946)) -- Increased resoultion of gas accounting for signature verification. +- Increased resolution of gas accounting for signature verification. ([\#1954](https://github.com/anoma/namada/pull/1954)) - Refactor benchmarks to avoid enabling `"testing`" and `"dev"`` features by default in the workspace. @@ -582,7 +582,7 @@ stability. show more info. ([\#1656](https://github.com/anoma/namada/pull/1656)) - Removed associated type on `masp::ShieldedUtils`. This type was an attempt to reduce the number of generic parameters needed when interacting - with MASP but resulted in making code re-use extremely difficult. + with MASP but resulted in making code reuse extremely difficult. ([\#1670](https://github.com/anoma/namada/pull/1670)) - Removed `impl From for EthBridgeVotingPower` and replaced it with a `TryFrom`. ([\#1692](https://github.com/anoma/namada/pull/1692)) @@ -597,7 +597,7 @@ stability. ETH bridge. 
([\#1693](https://github.com/anoma/namada/pull/1693)) - PoS: Keep the data for last two epochs by default. ([\#1733](https://github.com/anoma/namada/pull/1733)) -- Refactored CLI into libraries for future re-use in integration tests and +- Refactored CLI into libraries for future reuse in integration tests and to enable generic IO. ([\#1738](https://github.com/anoma/namada/pull/1738)) ### TESTING @@ -757,7 +757,7 @@ Namada 0.17.2 is a minor release featuring improvements to the client stability. ([\#1512](https://github.com/anoma/namada/pull/1512)) - Improve help message for address add command ([\#1514](https://github.com/anoma/namada/issues/1514)) -- PoS: make a re-usable bonds and unbonds details query. +- PoS: make a reusable bonds and unbonds details query. ([\#1518](https://github.com/anoma/namada/pull/1518)) ## v0.17.1 @@ -809,7 +809,7 @@ wallet address derivation, transaction structure and the ledger stability. ([\#1425](https://github.com/anoma/namada/issues/1425)) - Added some missing cli option for cli wallet ([#1432](https://github.com/anoma/namada/pull/1432)) -- Improve logging error when submiting an invalid validator commission change tx +- Improve logging error when submitting an invalid validator commission change tx ([#1434](https://github.com/anoma/namada/pull/1434)) - Correct a typo in the error change commission error handling ([#1435](https://github.com/anoma/namada/pull/1435)) @@ -854,7 +854,7 @@ proposal. ### IMPROVEMENTS -- Make Tendermint consensus paramenters configurable via Namada configuration. +- Make Tendermint consensus parameters configurable via Namada configuration. ([#1399](https://github.com/anoma/namada/pull/1399)) - Improved error logs in `process_proposal` and added more info to `InternalStats` ([#1407](https://github.com/anoma/namada/pull/1407)) @@ -1379,7 +1379,7 @@ integrations. 
### BUG FIXES -- Fix compatiblity of IBC Acknowledgement message and FungibleTokenData with +- Fix compatibility of IBC Acknowledgement message and FungibleTokenData with ibc-go ([#261](https://github.com/anoma/namada/pull/261)) - Fix the block header merkle root hash for response to finalizing block. ([#298](https://github.com/anoma/namada/pull/298)) @@ -1478,7 +1478,7 @@ Namada 0.8.0 is a regular minor release. ([#324](https://github.com/anoma/namada/pull/324)) - Added a StorageWrite trait for a common interface for transactions and direct storage access for protocol ([#331](https://github.com/anoma/namada/pull/331)) -- Re-use encoding/decoding storage write/read and handle any errors +- Reuse encoding/decoding storage write/read and handle any errors ([#334](https://github.com/anoma/namada/pull/334)) - Added a simpler prefix iterator API that returns `std::iter::Iterator` with the storage keys parsed and a variant that also decodes stored values with @@ -1486,12 +1486,12 @@ Namada 0.8.0 is a regular minor release. - Handles the case where a custom `$CARGO_TARGET_DIR` is set during WASM build ([#337](https://github.com/anoma/anoma/pull/337)) - Added `pre/post` methods into `trait VpEnv` that return objects implementing - `trait StorageRead` for re-use of library code written on top of `StorageRead` + `trait StorageRead` for reuse of library code written on top of `StorageRead` inside validity predicates. ([#380](https://github.com/anoma/namada/pull/380)) - Fix order of prefix iterator to be sorted by storage keys and add support for a reverse order prefix iterator. ([#409](https://github.com/anoma/namada/issues/409)) -- Re-use `storage_api::Error` type that supports wrapping custom error in `VpEnv` and `TxEnv` traits. +- Reuse `storage_api::Error` type that supports wrapping custom error in `VpEnv` and `TxEnv` traits. 
([#465](https://github.com/anoma/namada/pull/465)) - Fixed governance parameters, tally, tx whitelist and renamed treasury ([#467](https://github.com/anoma/namada/issues/467)) @@ -1503,7 +1503,7 @@ Namada 0.8.0 is a regular minor release. - Added WASM transaction and validity predicate `Ctx` with methods for host environment functions to unify the interface of native VPs and WASM VPs under `trait VpEnv` ([#1093](https://github.com/anoma/anoma/pull/1093)) -- Allows simple retrival of aliases from addresses in the wallet without +- Allows simple retrieval of aliases from addresses in the wallet without the need for multiple hashmaps. This is the first step to improving the UI if one wants to show aliases when fetching addresses from anoma wallet ([#1138](https://github.com/anoma/anoma/pull/1138)) @@ -1760,7 +1760,7 @@ Anoma 0.5.0 is a scheduled minor release. - Dependency: Backport libp2p-noise patch that fixes a compilation issue from ([#908](https://github.com/anoma/anoma/issues/908)) -- Wasm: Re-add accidentaly removed `tx_ibc` WASM and `vm_env::ibc` module +- Wasm: Re-add accidentally removed `tx_ibc` WASM and `vm_env::ibc` module ([#916](https://github.com/anoma/anoma/pull/916)) - Ledger & Matchmaker: In "dev" chain with "dev" build, load WASM directly from the root `wasm` directory. ([#933](https://github.com/anoma/anoma/issues/933)) @@ -1900,7 +1900,7 @@ Anoma 0.4.0 is a scheduled minor release, released 31 January 2022. command. The command now doesn't unpack the network config archive into its default directories, if any of them are specified with non-default values. ([#813](https://github.com/anoma/anoma/issues/813)) -- Install the default token exchange matchmaker implemenetation into +- Install the default token exchange matchmaker implementation into `~/.cargo/lib` directory when building from source. 
When not absolute, the matchmaker will attempt to load the matchmaker from the same path as where the binary is being ran from, from `~/.cargo/lib` or the current working diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 029ae946c6..841e42fbe2 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -52,7 +52,7 @@ for i in $(ls -d .changelog/*/*/); do basename "$i"; done | sort | uniq The Namada SDK is exposed to any developer building upon Namada. Thus, any change made to a public facing function is a breaking change, and therefore should be documented in the Changelog under the `SDK` section. -The message should outline the exact API change, along with a small section describing *how* and *why* the componenet was change. This should give motivation and context to any developer building upon Namada on how they can update their code to the next version. +The message should outline the exact API change, along with a small section describing *how* and *why* the component was changed. This should give motivation and context to any developer building upon Namada on how they can update their code to the next version. 
## Development priorities diff --git a/Makefile b/Makefile index 59f105b1c7..61377f20a7 100644 --- a/Makefile +++ b/Makefile @@ -170,7 +170,7 @@ test-integration-save-proofs: TEST_FILTER=masp \ make test-integration-slow -# Run integration tests without specifiying any pre-built MASP proofs option +# Run integration tests without specifying any pre-built MASP proofs option test-integration-slow: RUST_BACKTRACE=$(RUST_BACKTRACE) \ $(cargo) +$(nightly) test integration::$(TEST_FILTER) --features integration \ @@ -230,7 +230,7 @@ test-benches: $(cargo) +$(nightly) test --package namada_benchmarks --benches # Run PoS state machine tests with shrinking disabled by default (can be -# overriden with `PROPTEST_MAX_SHRINK_ITERS`) +# overridden with `PROPTEST_MAX_SHRINK_ITERS`) test-pos-sm: cd proof_of_stake && \ RUST_BACKTRACE=1 \ diff --git a/apps/Cargo.toml b/apps/Cargo.toml index b0d098665f..e01b4bac86 100644 --- a/apps/Cargo.toml +++ b/apps/Cargo.toml @@ -55,7 +55,7 @@ mainnet = [ "namada/mainnet", ] std = ["ed25519-consensus/std", "rand/std", "rand_core/std", "namada/std", "namada_sdk/std"] -# for integration tests and test utilies +# for integration tests and test utilities testing = ["namada_test_utils"] benches = ["testing", "namada_test_utils"] integration = [] diff --git a/apps/src/lib/cli.rs b/apps/src/lib/cli.rs index c04f5a531b..384331a8be 100644 --- a/apps/src/lib/cli.rs +++ b/apps/src/lib/cli.rs @@ -1293,7 +1293,7 @@ pub mod cmds { fn def() -> App { App::new(Self::CMD) - .about("Query pgf stewards and continous funding.") + .about("Query pgf stewards and continuous funding.") .add_args::>() } } @@ -4784,7 +4784,7 @@ pub mod args { .def() .help( "Flag if the proposal is of type pgf-funding. 
\ - Used to control continous/retro pgf fundings.", + Used to control continuous/retro pgf fundings.", ) .conflicts_with_all([ PROPOSAL_ETH.name, @@ -6003,7 +6003,7 @@ pub mod args { .arg(FEE_TOKEN.def().help("The token for paying the gas")) .arg(FEE_UNSHIELD_SPENDING_KEY.def().help( "The spending key to be used for fee unshielding. If none is \ - provided, fee will be payed from the unshielded balance only.", + provided, fee will be paid from the unshielded balance only.", )) .arg(GAS_LIMIT.def().help( "The multiplier of the gas limit resolution defining the \ diff --git a/apps/src/lib/client/rpc.rs b/apps/src/lib/client/rpc.rs index 43acdc9302..b4a5ab20c6 100644 --- a/apps/src/lib/client/rpc.rs +++ b/apps/src/lib/client/rpc.rs @@ -1268,7 +1268,7 @@ pub async fn query_pgf(context: &impl Namada, _args: args::QueryPgf) { true => { display_line!( context.io(), - "Pgf stewards: no stewards are currectly set." + "Pgf stewards: no stewards are currently set." ) } false => { @@ -1375,7 +1375,7 @@ pub async fn query_protocol_parameters( let epoch_duration: EpochDuration = query_storage_value(context.client(), &key) .await - .expect("Parameter should be definied."); + .expect("Parameter should be defined."); display_line!( context.io(), "{:4}Min. epoch duration: {}", @@ -2487,7 +2487,7 @@ pub async fn query_has_storage_key< } /// Call the corresponding `tx_event_query` RPC method, to fetch -/// the current status of a transation. +/// the current status of a transaction. pub async fn query_tx_events( client: &C, tx_event_query: namada_sdk::rpc::TxEventQuery<'_>, diff --git a/apps/src/lib/client/tx.rs b/apps/src/lib/client/tx.rs index 74ef2bab4c..e1a0d5511c 100644 --- a/apps/src/lib/client/tx.rs +++ b/apps/src/lib/client/tx.rs @@ -105,7 +105,7 @@ pub async fn with_hardware_wallet<'a, U: WalletIo + Clone>( })?; if response_pubkey != pubkey { return Err(error::Error::Other(format!( - "Unrecognized public key fetched fom Ledger: {}. 
Expected {}.", + "Unrecognized public key fetched from Ledger: {}. Expected {}.", response_pubkey, pubkey, ))); } diff --git a/apps/src/lib/client/utils.rs b/apps/src/lib/client/utils.rs index 01e764c0a2..5007df9f21 100644 --- a/apps/src/lib/client/utils.rs +++ b/apps/src/lib/client/utils.rs @@ -1166,5 +1166,5 @@ fn safe_exit(code: i32) -> ! { #[cfg(test)] fn safe_exit(code: i32) -> ! { - panic!("Process exited unsuccesfully with error code: {}", code); + panic!("Process exited unsuccessfully with error code: {}", code); } diff --git a/apps/src/lib/config/genesis/transactions.rs b/apps/src/lib/config/genesis/transactions.rs index 191cac088c..0fcf3ff7cc 100644 --- a/apps/src/lib/config/genesis/transactions.rs +++ b/apps/src/lib/config/genesis/transactions.rs @@ -1230,7 +1230,7 @@ pub fn validate_established_account( let established_address = tx.derive_address(); if tx.threshold == 0 { - eprintln!("An established account may not have zero thresold"); + eprintln!("An established account may not have zero threshold"); is_valid = false; } if tx.threshold as usize > tx.public_keys.len() { diff --git a/apps/src/lib/logging.rs b/apps/src/lib/logging.rs index b0188aa6c4..213564ed7c 100644 --- a/apps/src/lib/logging.rs +++ b/apps/src/lib/logging.rs @@ -140,7 +140,7 @@ fn rolling_freq() -> RollingFreq { "daily" => RollingFreq::Daily, _ => { panic!( - "Unrecognized option set for {ROLLING_ENV_KEY}. Expecing \ + "Unrecognized option set for {ROLLING_ENV_KEY}. Expecting \ one of: never, minutely, hourly, daily. Default is never." ); } diff --git a/apps/src/lib/node/ledger/ethereum_oracle/mod.rs b/apps/src/lib/node/ledger/ethereum_oracle/mod.rs index 796563a27d..93722e7708 100644 --- a/apps/src/lib/node/ledger/ethereum_oracle/mod.rs +++ b/apps/src/lib/node/ledger/ethereum_oracle/mod.rs @@ -242,7 +242,7 @@ impl Oracle { true } - /// Check if a new config has been sent from teh Shell. + /// Check if a new config has been sent from the Shell. 
fn update_config(&mut self) -> Option { match self.control.try_recv() { Ok(Command::UpdateConfig(config)) => Some(config), @@ -1169,7 +1169,7 @@ mod test_oracle { controller .apply_cmd(TestCmd::NewHeight(Uint256::from(synced_block_height))); - // check that the oracle still checks the blocks inbetween + // check that the oracle still checks the blocks in between for height in (confirmed_block_height + 1) ..(confirmed_block_height + difference + 1) { diff --git a/apps/src/lib/node/ledger/shell/block_alloc.rs b/apps/src/lib/node/ledger/shell/block_alloc.rs index feb17045b5..254db45b9a 100644 --- a/apps/src/lib/node/ledger/shell/block_alloc.rs +++ b/apps/src/lib/node/ledger/shell/block_alloc.rs @@ -416,7 +416,7 @@ mod tests { proptest_reject_tx_on_bin_cap_reached(max) } - /// Check if the initial bin capcity of the [`BlockAllocator`] + /// Check if the initial bin capacity of the [`BlockAllocator`] /// is correct. #[test] fn test_initial_bin_capacity(max in prop::num::u64::ANY) { diff --git a/apps/src/lib/node/ledger/shell/mod.rs b/apps/src/lib/node/ledger/shell/mod.rs index b927f6c442..88797d8486 100644 --- a/apps/src/lib/node/ledger/shell/mod.rs +++ b/apps/src/lib/node/ledger/shell/mod.rs @@ -1229,7 +1229,7 @@ where { response.code = ErrorCodes::InvalidVoteExtension.into(); response.log = format!( - "{INVALID_MSG}: Invalid Brige pool roots vote \ + "{INVALID_MSG}: Invalid Bridge pool roots vote \ extension: {err}", ); } else { @@ -1564,7 +1564,7 @@ where ValidatorSetUpdate::Deactivated(consensus_key) => { // Any validators that have been dropped from the // consensus set must have voting power set to 0 to - // remove them from the conensus set + // remove them from the consensus set let power = 0_i64; (consensus_key, power) } @@ -1926,7 +1926,7 @@ mod test_utils { /// Config parameters to set up a test shell. pub struct SetupCfg { - /// The last comitted block height. + /// The last committed block height. 
pub last_height: H, /// The number of validators to configure // in `InitChain`. diff --git a/apps/src/lib/node/ledger/shell/process_proposal.rs b/apps/src/lib/node/ledger/shell/process_proposal.rs index ff46eb8e0f..465dd57fbc 100644 --- a/apps/src/lib/node/ledger/shell/process_proposal.rs +++ b/apps/src/lib/node/ledger/shell/process_proposal.rs @@ -549,7 +549,7 @@ where TxResult { code: ErrorCodes::Ok.into(), info: "Process Proposal accepted this \ - tranasction" + transaction" .into(), } } diff --git a/apps/src/lib/node/ledger/shell/vote_extensions/bridge_pool_vext.rs b/apps/src/lib/node/ledger/shell/vote_extensions/bridge_pool_vext.rs index 5e4f34ac2c..3aded3035a 100644 --- a/apps/src/lib/node/ledger/shell/vote_extensions/bridge_pool_vext.rs +++ b/apps/src/lib/node/ledger/shell/vote_extensions/bridge_pool_vext.rs @@ -159,7 +159,7 @@ where /// Takes an iterator over Bridge pool root vote extension instances, /// and returns another iterator. The latter yields - /// valid Brige pool root vote extensions, or the reason why these + /// valid Bridge pool root vote extensions, or the reason why these /// are invalid, in the form of a `VoteExtensionError`. #[inline] pub fn validate_bp_roots_vext_list<'iter>( diff --git a/apps/src/lib/wallet/defaults.rs b/apps/src/lib/wallet/defaults.rs index e632fe6269..6b53ee5545 100644 --- a/apps/src/lib/wallet/defaults.rs +++ b/apps/src/lib/wallet/defaults.rs @@ -134,7 +134,7 @@ mod dev { .into_owned() } - /// Get an unecrypted keypair from the pre-genesis wallet. + /// Get an unencrypted keypair from the pre-genesis wallet. 
pub fn get_unencrypted_keypair(name: &str) -> common::SecretKey { let sk = match PREGENESIS_WALLET.get_secret_keys().get(name).unwrap().0 { diff --git a/apps/src/lib/wallet/mod.rs b/apps/src/lib/wallet/mod.rs index 6c1034c994..6696f7ee3a 100644 --- a/apps/src/lib/wallet/mod.rs +++ b/apps/src/lib/wallet/mod.rs @@ -107,7 +107,7 @@ impl WalletIo for CliWalletUtils { // The given alias has been selected but conflicts with another alias in // the store. Offer the user to either replace existing mapping, alter the - // chosen alias to a name of their chosing, or cancel the aliasing. + // chosen alias to a name of their choosing, or cancel the aliasing. fn show_overwrite_confirmation( alias: &Alias, alias_for: &str, @@ -187,7 +187,7 @@ pub fn read_and_confirm_passphrase_tty( /// Generate keypair /// for signing protocol txs and for the DKG (which will also be stored) /// A protocol keypair may be optionally provided, indicating that -/// we should re-use a keypair already in the wallet +/// we should reuse a keypair already in the wallet pub fn gen_validator_keys( wallet: &mut Wallet, eth_bridge_pk: Option, diff --git a/benches/README.md b/benches/README.md index 86978eb6f7..dab052df5f 100644 --- a/benches/README.md +++ b/benches/README.md @@ -4,7 +4,7 @@ The benchmarks are built with [criterion.rs](https://bheisler.github.io/criterio Measurements are taken on the elapsed wall-time. -The benchmarks only focus on sucessfull transactions and vps: in case of failure, the bench function shall panic to avoid timing incomplete execution paths. +The benchmarks only focus on successful transactions and vps: in case of failure, the bench function shall panic to avoid timing incomplete execution paths. In addition, this crate also contains benchmarks for `WrapperTx` (`namada::core::types::transaction::wrapper::WrapperTx`) validation and `host_env` (`namada::vm::host_env`) exposed functions that define the gas constants of `gas` (`namada::core::ledger::gas`). 
diff --git a/benches/host_env.rs index 289b0eb564..1ca5a7b680 100644 --- a/benches/host_env.rs +++ b/benches/host_env.rs @@ -188,8 +188,8 @@ fn write_log_read(c: &mut Criterion) { // than invert it to calculate the desired metric (time/byte) // NOTE: criterion states that the throughput is measured on the // processed bytes but in this case we are interested in the input + - // output bytes, i.e. the combined legth of the key and value red, so we - // set this as the throughput parameter + // output bytes, i.e. the combined length of the key and value read, so + // we set this as the throughput parameter let throughput_len = value_len + key.len() as u64; group.throughput(criterion::Throughput::Bytes(throughput_len)); // Generate random bytes for the value and write it to storage @@ -219,8 +219,8 @@ fn storage_read(c: &mut Criterion) { // than invert it to calculate the desired metric (time/byte) // NOTE: criterion states that the throughput is measured on the // processed bytes but in this case we are interested in the input + - // output bytes, i.e. the combined legth of the key and value red, so we - // set this as the throughput parameter + // output bytes, i.e. the combined length of the key and value read, so + // we set this as the throughput parameter let throughput_len = value_len + key.len() as u64; group.throughput(criterion::Throughput::Bytes(throughput_len)); // Generate random bytes for the value and write it to storage @@ -259,7 +259,7 @@ fn write_log_write(c: &mut Criterion) { // than invert it to calculate the desired metric (time/byte) // NOTE: criterion states that the throughput is measured on the // processed bytes but in this case we are interested in the input + - // output bytes, i.e. the combined legth of the key and value written, + // output bytes, i.e. 
the combined length of the key and value written, // so we set this as the throughput parameter let throughput_len = value_len + key.len() as u64; group.throughput(criterion::Throughput::Bytes(throughput_len)); @@ -294,7 +294,7 @@ fn storage_write(c: &mut Criterion) { // than invert it to calculate the desired metric (time/byte) // NOTE: criterion states that the throughput is measured on the // processed bytes but in this case we are interested in the input + - // output bytes, i.e. the combined legth of the key and value written, + // output bytes, i.e. the combined length of the key and value written, // so we set this as the throughput parameter let throughput_len = value_len + key.len() as u64; group.throughput(criterion::Throughput::Bytes(throughput_len)); diff --git a/benches/vps.rs b/benches/vps.rs index f5a035e542..472ffb3faa 100644 --- a/benches/vps.rs +++ b/benches/vps.rs @@ -273,7 +273,7 @@ fn vp_implicit(c: &mut Criterion) { .unwrap(); if bench_name != "reveal_pk" { - // Reveal publick key + // Reveal public key shell.execute_tx(&reveal_pk); shell.wl_storage.commit_tx(); shell.commit(); diff --git a/core/Cargo.toml b/core/Cargo.toml index df25f86120..7f5d6b7c61 100644 --- a/core/Cargo.toml +++ b/core/Cargo.toml @@ -20,7 +20,7 @@ rand = ["dep:rand", "rand_core"] ethers-derive = [ "ethbridge-structs/ethers-derive" ] -# for integration tests and test utilies +# for integration tests and test utilities testing = [ "ibc-testkit", "rand", diff --git a/core/src/ledger/gas.rs b/core/src/ledger/gas.rs index fc09636766..7d88c3404b 100644 --- a/core/src/ledger/gas.rs +++ b/core/src/ledger/gas.rs @@ -35,7 +35,7 @@ const STORAGE_OCCUPATION_GAS_PER_BYTE: u64 = // codebase. 
For these two reasons we just set an arbitrary value (based on // actual SSDs latency) per byte here const PHYSICAL_STORAGE_LATENCY_PER_BYTE: u64 = 75; -// This is based on the global avarage bandwidth +// This is based on the global average bandwidth const NETWORK_TRANSMISSION_GAS_PER_BYTE: u64 = 13; /// The cost of accessing data from memory (both read and write mode), per byte @@ -46,7 +46,7 @@ pub const STORAGE_ACCESS_GAS_PER_BYTE: u64 = /// The cost of writing data to storage, per byte pub const STORAGE_WRITE_GAS_PER_BYTE: u64 = MEMORY_ACCESS_GAS_PER_BYTE + 848 + STORAGE_OCCUPATION_GAS_PER_BYTE; -/// The cost of verifying a signle signature of a transaction +/// The cost of verifying a single signature of a transaction pub const VERIFY_TX_SIG_GAS: u64 = 9_793; /// The cost for requesting one more page in wasm (64KiB) pub const WASM_MEMORY_PAGE_GAS: u32 = @@ -152,7 +152,7 @@ impl Display for Gas { } impl From for Gas { - // Derive a Gas instance with a sub amount which is exaclty a whole amount + // Derive a Gas instance with a sub amount which is exactly a whole amount // since the limit represents gas in whole units fn from(value: GasLimit) -> Self { Self { diff --git a/core/src/ledger/governance/cli/onchain.rs b/core/src/ledger/governance/cli/onchain.rs index b47985af3f..60e70ddf7d 100644 --- a/core/src/ledger/governance/cli/onchain.rs +++ b/core/src/ledger/governance/cli/onchain.rs @@ -276,8 +276,8 @@ impl PgfAction { Debug, Clone, BorshSerialize, BorshDeserialize, Serialize, Deserialize, )] pub struct PgfFunding { - /// Pgf continous funding - pub continous: Vec, + /// Pgf continuous funding + pub continuous: Vec, /// pgf retro fundings pub retro: Vec, } diff --git a/core/src/ledger/governance/cli/validation.rs b/core/src/ledger/governance/cli/validation.rs index d1f2e4f54c..cb70d146ee 100644 --- a/core/src/ledger/governance/cli/validation.rs +++ b/core/src/ledger/governance/cli/validation.rs @@ -38,7 +38,7 @@ pub enum ProposalValidation { epoch must be at 
most {1}, but found {0}" )] InvalidProposalPeriod(u64, u64), - /// The proposal author does not have enought balance to pay for proposal + /// The proposal author does not have enough balance to pay for proposal /// fees #[error( "Invalid proposal minimum funds: the author address has {0} but \ @@ -244,7 +244,7 @@ pub fn is_valid_pgf_stewards_data( pub fn is_valid_pgf_funding_data( data: &PgfFunding, ) -> Result<(), ProposalValidation> { - if !data.continous.is_empty() || !data.retro.is_empty() { + if !data.continuous.is_empty() || !data.retro.is_empty() { Ok(()) } else { Err(ProposalValidation::InvalidPgfFundingExtraData) diff --git a/core/src/ledger/governance/storage/proposal.rs b/core/src/ledger/governance/storage/proposal.rs index c4a59389ef..014b24a16c 100644 --- a/core/src/ledger/governance/storage/proposal.rs +++ b/core/src/ledger/governance/storage/proposal.rs @@ -21,7 +21,7 @@ pub enum ProposalTypeError { InvalidProposalType, } -/// Storage struture for pgf fundings +/// Storage structure for pgf fundings #[derive( Debug, Clone, diff --git a/core/src/ledger/masp_conversions.rs b/core/src/ledger/masp_conversions.rs index ab915c22e2..1b9facc614 100644 --- a/core/src/ledger/masp_conversions.rs +++ b/core/src/ledger/masp_conversions.rs @@ -464,7 +464,7 @@ where // overwritten before the creation of the next commitment tree for addr in masp_reward_keys { for denom in token::MaspDenom::iter() { - // Add the decoding entry for the new asset type. An uncommited + // Add the decoding entry for the new asset type. An uncommitted // node position is used since this is not a conversion. 
let new_asset = encode_asset_type( addr.clone(), diff --git a/core/src/ledger/parameters/storage.rs b/core/src/ledger/parameters/storage.rs index fd7f4fabad..19bd7784ac 100644 --- a/core/src/ledger/parameters/storage.rs +++ b/core/src/ledger/parameters/storage.rs @@ -14,7 +14,7 @@ struct Keys { /// Sub-key for storing the initial Ethereum block height when /// events will first be extracted from. eth_start_height: &'static str, - /// Sub-key for storing the acitve / inactive status of the Ethereum + /// Sub-key for storing the active / inactive status of the Ethereum /// bridge. active_status: &'static str, /// Sub-key for storing the minimum confirmations parameter diff --git a/core/src/ledger/pgf/storage/mod.rs b/core/src/ledger/pgf/storage/mod.rs index 9204afeba3..59dc8fd622 100644 --- a/core/src/ledger/pgf/storage/mod.rs +++ b/core/src/ledger/pgf/storage/mod.rs @@ -1,4 +1,4 @@ /// Pgf storage keys pub mod keys; -/// Pgf steward strutures +/// Pgf steward structures pub mod steward; diff --git a/core/src/ledger/storage/mockdb.rs b/core/src/ledger/storage/mockdb.rs index 5d45f2832c..caf3ee88ac 100644 --- a/core/src/ledger/storage/mockdb.rs +++ b/core/src/ledger/storage/mockdb.rs @@ -484,7 +484,7 @@ impl DB for MockDB { key: &Key, value: impl AsRef<[u8]>, ) -> Result { - // batch_write are directry committed + // batch_write are directly committed self.batch_write_subspace_val(&mut MockDBWriteBatch, height, key, value) } @@ -493,7 +493,7 @@ impl DB for MockDB { height: BlockHeight, key: &Key, ) -> Result { - // batch_delete are directry committed + // batch_delete are directly committed self.batch_delete_subspace_val(&mut MockDBWriteBatch, height, key) } diff --git a/core/src/ledger/storage/write_log.rs b/core/src/ledger/storage/write_log.rs index d32284f50f..428ae86e3d 100644 --- a/core/src/ledger/storage/write_log.rs +++ b/core/src/ledger/storage/write_log.rs @@ -92,7 +92,7 @@ pub struct WriteLog { tx_write_log: HashMap, /// A precommit bucket for the 
`tx_write_log`. This is useful for /// validation when a clean `tx_write_log` is needed without committing any - /// modification already in there. These modifications can be temporarely + /// modification already in there. These modifications can be temporarily /// stored here and then discarded or committed to the `block_write_log`, /// together with th content of `tx_write_log`. No direct key /// write/update/delete should ever happen on this field, this log should diff --git a/core/src/ledger/storage_api/governance.rs b/core/src/ledger/storage_api/governance.rs index ab4ad27b0b..3f3dfb2af6 100644 --- a/core/src/ledger/storage_api/governance.rs +++ b/core/src/ledger/storage_api/governance.rs @@ -227,23 +227,23 @@ where { let key = governance_keys::get_max_proposal_code_size_key(); let max_proposal_code_size: u64 = - storage.read(&key)?.expect("Parameter should be definied."); + storage.read(&key)?.expect("Parameter should be defined."); let key = governance_keys::get_max_proposal_content_key(); let max_proposal_content_size: u64 = - storage.read(&key)?.expect("Parameter should be definied."); + storage.read(&key)?.expect("Parameter should be defined."); let key = governance_keys::get_min_proposal_fund_key(); let min_proposal_fund: token::Amount = - storage.read(&key)?.expect("Parameter should be definied."); + storage.read(&key)?.expect("Parameter should be defined."); let key = governance_keys::get_min_proposal_grace_epoch_key(); let min_proposal_grace_epochs: u64 = - storage.read(&key)?.expect("Parameter should be definied."); + storage.read(&key)?.expect("Parameter should be defined."); let key = governance_keys::get_min_proposal_voting_period_key(); let min_proposal_voting_period: u64 = - storage.read(&key)?.expect("Parameter should be definied."); + storage.read(&key)?.expect("Parameter should be defined."); let max_proposal_period: u64 = get_max_proposal_period(storage)?; diff --git a/core/src/ledger/storage_api/pgf.rs b/core/src/ledger/storage_api/pgf.rs 
index 91cc17b71e..29d7ac2486 100644 --- a/core/src/ledger/storage_api/pgf.rs +++ b/core/src/ledger/storage_api/pgf.rs @@ -61,7 +61,7 @@ where Ok(()) } -/// Query the current pgf continous payments +/// Query the current pgf continuous payments pub fn get_payments( storage: &S, ) -> storage_api::Result> diff --git a/core/src/ledger/storage_api/token.rs b/core/src/ledger/storage_api/token.rs index c372c0bfc1..5d3e244a80 100644 --- a/core/src/ledger/storage_api/token.rs +++ b/core/src/ledger/storage_api/token.rs @@ -90,7 +90,7 @@ where /// Transfer `token` from `src` to `dest`. Returns an `Err` if `src` has /// insufficient balance or if the transfer the `dest` would overflow (This can -/// only happen if the total supply does't fit in `token::Amount`). +/// only happen if the total supply doesn't fit in `token::Amount`). pub fn transfer( storage: &mut S, token: &Address, diff --git a/core/src/proto/types.rs b/core/src/proto/types.rs index 7784b35b70..a4e7b6df85 100644 --- a/core/src/proto/types.rs +++ b/core/src/proto/types.rs @@ -1430,7 +1430,7 @@ impl Tx { let mut filtered = Vec::new(); for i in (0..self.sections.len()).rev() { if let Section::MaspBuilder(_) = self.sections[i] { - // MASP Builders containin extended full viewing keys amongst + // MASP Builders containing extended full viewing keys amongst // other private information and must be removed prior to // submission to protocol filtered.push(self.sections.remove(i)); diff --git a/core/src/types/account.rs b/core/src/types/account.rs index b38c8b6792..eb360ecb1c 100644 --- a/core/src/types/account.rs +++ b/core/src/types/account.rs @@ -23,7 +23,7 @@ pub struct Account { } impl Account { - /// Retrive a public key from the index + /// Retrieve a public key from the index pub fn get_public_key_from_index( &self, index: u8, @@ -31,7 +31,7 @@ impl Account { self.public_keys_map.get_public_key_from_index(index) } - /// Retrive the index of a public key + /// Retrieve the index of a public key pub fn 
get_index_from_public_key( &self, public_key: &common::PublicKey, @@ -49,7 +49,7 @@ impl Account { Deserialize, Default, )] -/// Holds the public key map data as a bimap for efficient quering +/// Holds the public key map data as a bimap for efficient querying pub struct AccountPublicKeysMap { /// Hashmap from public key to index pub pk_to_idx: HashMap, @@ -80,7 +80,7 @@ impl FromIterator for AccountPublicKeysMap { } impl AccountPublicKeysMap { - /// Retrive a public key from the index + /// Retrieve a public key from the index pub fn get_public_key_from_index( &self, index: u8, @@ -88,7 +88,7 @@ impl AccountPublicKeysMap { self.idx_to_pk.get(&index).cloned() } - /// Retrive the index of a public key + /// Retrieve the index of a public key pub fn get_index_from_public_key( &self, public_key: &common::PublicKey, diff --git a/core/src/types/dec.rs b/core/src/types/dec.rs index 21d3158115..4552acc84d 100644 --- a/core/src/types/dec.rs +++ b/core/src/types/dec.rs @@ -30,7 +30,7 @@ pub type Result = std::result::Result; /// A 256 bit number with [`POS_DECIMAL_PRECISION`] number of Dec places. /// -/// To be precise, an instance X of this type should be interpeted as the Dec +/// To be precise, an instance X of this type should be interpreted as the Dec /// X * 10 ^ (-[`POS_DECIMAL_PRECISION`]) #[derive( Clone, @@ -188,8 +188,8 @@ impl Dec { } /// Do multiply two [`Dec`]s. Return `None` if overflow. - /// This methods will overflow incorretly if both arguments are greater than - /// 128bit. + /// This method will overflow incorrectly if both arguments are greater + /// than 128 bits.
pub fn checked_mul(&self, other: &Self) -> Option { let result = self.0.checked_mul(&other.0)?; Some(Dec(result / Uint::exp10(POS_DECIMAL_PRECISION as usize))) diff --git a/core/src/types/eth_bridge_pool.rs b/core/src/types/eth_bridge_pool.rs index 8e533ea262..fcd89f76a9 100644 --- a/core/src/types/eth_bridge_pool.rs +++ b/core/src/types/eth_bridge_pool.rs @@ -266,7 +266,7 @@ impl From<&PendingTransfer> for Key { } } -/// The amount of fees to be payed, in Namada, to the relayer +/// The amount of fees to be paid, in Namada, to the relayer /// of a transfer across the Ethereum Bridge, compensating /// for Ethereum gas costs. #[derive( diff --git a/core/src/types/internal.rs b/core/src/types/internal.rs index f2f69c1482..ef9da587ba 100644 --- a/core/src/types/internal.rs +++ b/core/src/types/internal.rs @@ -91,7 +91,7 @@ mod tx_queue { } /// Get reference to the element at the given index. - /// Returns [`None`] if index exceeds the queue lenght. + /// Returns [`None`] if index exceeds the queue length. pub fn get(&self, index: usize) -> Option<&TxInQueue> { self.0.get(index) } diff --git a/core/src/types/key/mod.rs b/core/src/types/key/mod.rs index 9d22ee3af1..e4a5e04672 100644 --- a/core/src/types/key/mod.rs +++ b/core/src/types/key/mod.rs @@ -66,7 +66,7 @@ pub fn is_pks_key(key: &Key) -> Option<&Address> { } } -/// Check if the given storage key is a threshol key. +/// Check if the given storage key is a threshold key. pub fn is_threshold_key(key: &Key) -> Option<&Address> { match &key.segments[..] 
{ [DbKeySeg::AddressSeg(owner), DbKeySeg::StringSeg(prefix)] diff --git a/core/src/types/storage.rs b/core/src/types/storage.rs index e16318568e..16d931ba40 100644 --- a/core/src/types/storage.rs +++ b/core/src/types/storage.rs @@ -1287,7 +1287,7 @@ pub struct EthEventsQueue { /// __INVARIANT:__ At any given moment, the queue holds the nonce `N` /// of the next confirmed event to be processed by the ledger, and any /// number of events that have been confirmed with a nonce greater than -/// or equal to `N`. Events in the queue must be returned in asceding +/// or equal to `N`. Events in the queue must be returned in ascending /// order of their nonce. #[derive(Debug, BorshSerialize, BorshDeserialize)] pub struct InnerEthEventsQueue { @@ -1446,7 +1446,7 @@ mod tests { /// because they are reserved for `Address` or a validity predicate. #[test] fn test_key_parse(s in "[^#?/][^/]*/[^#?/][^/]*/[^#?/][^/]*") { - let key = Key::parse(s.clone()).expect("cannnot parse the string"); + let key = Key::parse(s.clone()).expect("cannot parse the string"); assert_eq!(key.to_string(), s); } @@ -1457,7 +1457,7 @@ mod tests { #[test] fn test_key_push(s in "[^#?/][^/]*") { let addr = address::testing::established_address_1(); - let key = Key::from(addr.to_db_key()).push(&s).expect("cannnot push the segment"); + let key = Key::from(addr.to_db_key()).push(&s).expect("cannot push the segment"); assert_eq!(key.segments[1].raw(), s); } @@ -1618,19 +1618,19 @@ mod tests { let target = KeySeg::raw(&other); let key = Key::from(addr.to_db_key()) .push(&target) - .expect("cannnot push the segment"); + .expect("cannot push the segment"); assert_eq!(key.segments[1].raw(), target); let target = "?test".to_owned(); let key = Key::from(addr.to_db_key()) .push(&target) - .expect("cannnot push the segment"); + .expect("cannot push the segment"); assert_eq!(key.segments[1].raw(), target); let target = "?".to_owned(); let key = Key::from(addr.to_db_key()) .push(&target) - .expect("cannnot push the 
segment"); + .expect("cannot push the segment"); assert_eq!(key.segments[1].raw(), target); } diff --git a/core/src/types/token.rs b/core/src/types/token.rs index 6a730e9889..5410b7c6a4 100644 --- a/core/src/types/token.rs +++ b/core/src/types/token.rs @@ -275,7 +275,7 @@ impl Amount { /// Given a number represented as `M*B^D`, then /// `M` is the matissa, `B` is the base and `D` -/// is the denomination, represented by this stuct. +/// is the denomination, represented by this struct. #[derive( Debug, Copy, diff --git a/core/src/types/transaction/governance.rs b/core/src/types/transaction/governance.rs index 4d32801d52..dbc0b4ae5e 100644 --- a/core/src/types/transaction/governance.rs +++ b/core/src/types/transaction/governance.rs @@ -76,7 +76,7 @@ pub struct VoteProposalData { pub vote: StorageProposalVote, /// The proposal author address pub voter: Address, - /// Delegator addreses + /// Delegator addresses pub delegations: Vec
, } @@ -119,9 +119,9 @@ impl TryFrom for InitProposalData { type Error = ProposalError; fn try_from(value: PgfFundingProposal) -> Result { - let continous_fundings = value + let continuous_fundings = value .data - .continous + .continuous .iter() .cloned() .map(|funding| { @@ -151,7 +151,7 @@ impl TryFrom for InitProposalData { }) .collect::>(); - let extra_data = [continous_fundings, retro_fundings].concat(); + let extra_data = [continuous_fundings, retro_fundings].concat(); Ok(InitProposalData { id: value.proposal.id, diff --git a/core/src/types/transaction/wrapper.rs b/core/src/types/transaction/wrapper.rs index d66e61dcb3..92b3059a3f 100644 --- a/core/src/types/transaction/wrapper.rs +++ b/core/src/types/transaction/wrapper.rs @@ -176,7 +176,7 @@ pub mod wrapper_tx { Deserialize, )] pub struct WrapperTx { - /// The fee to be payed for including the tx + /// The fee to be paid for including the tx pub fee: Fee, /// Used for signature verification and to determine an implicit /// account of the fee payer diff --git a/documentation/dev/src/archive/README.md b/documentation/dev/src/archive/README.md index 415bb0a8b9..e873a28650 100644 --- a/documentation/dev/src/archive/README.md +++ b/documentation/dev/src/archive/README.md @@ -1,3 +1,3 @@ # Archive -Deprecated pages archived for possible later re-use. +Deprecated pages archived for possible later reuse. diff --git a/documentation/dev/src/archive/domain-name-addresses.md b/documentation/dev/src/archive/domain-name-addresses.md index c0740b6e66..9a2247def6 100644 --- a/documentation/dev/src/archive/domain-name-addresses.md +++ b/documentation/dev/src/archive/domain-name-addresses.md @@ -2,7 +2,7 @@ The transparent addresses are similar to domain names and the ones used in e.g. [ENS as specified in EIP-137](https://eips.ethereum.org/EIPS/eip-137) and [account IDs in Near protocol](https://nomicon.io/DataStructures/Account.html). 
These are the addresses of accounts associated with dynamic storage sub-spaces, where the address of the account is the prefix key segment of its sub-space. -A transparent address is a human-readable string very similar to a domain name, containing only alpha-numeric ASCII characters, hyphen (`-`) and full stop (`.`) as a separator between the "labels" of the address. The letter case is not significant and any upper case letters are converted to lower case. The last label of an address is said to be the top-level name and each predecessor segment is the sub-name of its successor. +A transparent address is a human-readable string very similar to a domain name, containing only alphanumeric ASCII characters, hyphen (`-`) and full stop (`.`) as a separator between the "labels" of the address. The letter case is not significant and any upper case letters are converted to lower case. The last label of an address is said to be the top-level name and each predecessor segment is the sub-name of its successor. The length of an address must be at least 3 characters. For compatibility with a legacy DNS TXT record, we'll use syntax as defined in [RFC-1034 - section 3.5 DNS preferred name syntax](https://www.ietf.org/rfc/rfc1034.txt). That is, the upper limit is 255 characters and 63 for each label in an address (which should be sufficient anyway); and the label must not begin or end with hyphen (`-`) and must not begin with a digit. diff --git a/documentation/dev/src/explore/design/ledger/ibc.md b/documentation/dev/src/explore/design/ledger/ibc.md index f83a244bc0..4f2705e663 100644 --- a/documentation/dev/src/explore/design/ledger/ibc.md +++ b/documentation/dev/src/explore/design/ledger/ibc.md @@ -2,7 +2,7 @@ [IBC](https://arxiv.org/pdf/2006.15918.pdf) allows a ledger to track another ledger's consensus state using a light client. IBC is a protocol to agree the consensus state and to send/receive packets between ledgers. 
-We have mainly two components for IBC integration, IBC hander and IBC validity predicate. IBC hander is a set of functions to handle IBC-related data. A transaction calls these functions for IBC operations. IBC validity predicate is a native validity predicate to validate the transaction which mutates IBC-related data. +We have two main components for IBC integration: the IBC handler and the IBC validity predicate. The IBC handler is a set of functions to handle IBC-related data; a transaction calls these functions for IBC operations. The IBC validity predicate is a native validity predicate that validates transactions which mutate IBC-related data. ## Storage key of IBC-related data Its storage key should be prefixed with [`InternalAddress::Ibc`](https://github.com/anoma/namada/blob/e3c2bd0b463b35d66fcc6d2643fd0e6509e03d99/core/src/types/address.rs#L446) to differ them from other storage operations. A path after the prefix specifies an IBC-related data. The paths are defined by [ICS 24](https://github.com/cosmos/ibc/blob/master/spec/core/ics-024-host-requirements/README.md#path-space). The utility functions for the keys are defined [here](https://github.com/anoma/namada/blob/e3c2bd0b463b35d66fcc6d2643fd0e6509e03d99/core/src/ledger/ibc/storage.rs). For example, a client state of a counterparty ledger will be stored with a storage key `#IBC_encoded_addr/clients/{client_id}/clientState`. The IBC transaction and IBC validity predicate can use the storage keys to read/write IBC-related data according to IBC protocol. @@ -514,7 +514,7 @@ A query for a proven IBC-related data returns the value and the proof.
The proof The query response has the proof as [`tendermint::merkle::proof::Proof`](https://github.com/informalsystems/tendermint-rs/blob/dd371372da58921efe1b48a4dd24a2597225df11/tendermint/src/merkle/proof.rs#L15), which consists of a vector of [`tendermint::merkle::proof::ProofOp`](https://github.com/informalsystems/tendermint-rs/blob/dd371372da58921efe1b48a4dd24a2597225df11/tendermint/src/merkle/proof.rs#L25). `ProofOp` should have `data`, which is encoded to `Vec` from [`ibc_proto::ics23::CommitmentProof`](https://github.com/informalsystems/ibc-rs/blob/66049e29a3f5a0c9258d228b9a6c21704e7e2fa4/proto/src/prost/ics23.rs#L49). The relayer getting the proof converts the proof from `tendermint::merkle::proof::Proof` to `ibc::ics23_commitment::commitment::CommitmentProofBytes` by [`convert_tm_to_ics_merkle_proof()`](https://github.com/informalsystems/ibc-rs/blob/66049e29a3f5a0c9258d228b9a6c21704e7e2fa4/modules/src/ics23_commitment/merkle.rs#L84) and sets it to the request data of the following IBC operation. ## Relayer (ICS 18) -IBC relayer monitors the ledger, gets the status, state and proofs on the ledger, and requests transactions to the ledger via Tendermint RPC according to IBC protocol. For relayers, the ledger has to make a packet, emits an IBC event and stores proofs if needed. And, a relayer has to support Namada ledger to query and validate the ledger state. It means that `ChainEndpoint` in IBC Relayer of [ibc-rs](https://github.com/informalsystems/ibc-rs) should be implemented for Anoma like [that of CosmosSDK](https://github.com/informalsystems/ibc-rs/blob/66049e29a3f5a0c9258d228b9a6c21704e7e2fa4/relayer/src/chain/cosmos.rs). As those of Cosmos, these querys can request ABCI queries to Namada. +IBC relayer monitors the ledger, gets the status, state and proofs on the ledger, and requests transactions to the ledger via Tendermint RPC according to IBC protocol. For relayers, the ledger has to make a packet, emit an IBC event and store proofs if needed.
Also, a relayer has to support the Namada ledger to query and validate the ledger state. This means that `ChainEndpoint` in the IBC Relayer of [ibc-rs](https://github.com/informalsystems/ibc-rs) should be implemented for Anoma like [that of CosmosSDK](https://github.com/informalsystems/ibc-rs/blob/66049e29a3f5a0c9258d228b9a6c21704e7e2fa4/relayer/src/chain/cosmos.rs). As with Cosmos, these queries can request ABCI queries to Namada. ```rust impl ChainEndpoint for Namada { @@ -534,7 +534,7 @@ In a transaction with `MsgTransfer` (defined in ibc-rs) including `FungibleToken The transaction updates the sender's balance by escrowing or burning the amount of the token. The account, the sent token(denomination), and the amount are specified by `MsgTransfer`. [The denomination field would indicate that this chain is the source zone or the sink zone](https://github.com/cosmos/ibc/blob/master/spec/app/ics-020-fungible-token-transfer/README.md#technical-specification). #### Sender -Basically, the sender key is `{token_addr}/balance/{sender_addr}`. `{token_addr}` and `{sender_addr}` is specified by `FungibleTokenPacketData`. When the denomination `{denom}` in `FungibleTokenPacketData` specifies the source chain, the transfer operation is executed from the origin-specific account `{token_addr}/ibc/{ibc_token_hash}/balance/{sender_addr}` (Ref. [Receiver](#Receiver)). We can set `{token_addr}`, `{port_id}/{channel_id}/../{token_addr}`, or `ibc/{ibc_token_hash}/{token_addr}` to the denomination. When `ibc/{ibc_token_hash}/` is prefixed, the transfer looks up the prefixed denomination `{port_id}/{channel_id}/{denom}` by the `{ibc_token_hash}`. `{denom}` might have more prefixes to specify the source chains, e.g. `{port_id_b}/{channel_id_b}/{port_id_a}/{channel_id_a}/{token_addr}`. Accoding to the prefixed port ID and channel ID, the transfer operation escrows or burns the amount of the token (ICS20). +Basically, the sender key is `{token_addr}/balance/{sender_addr}`.
`{token_addr}` and `{sender_addr}` are specified by `FungibleTokenPacketData`. When the denomination `{denom}` in `FungibleTokenPacketData` specifies the source chain, the transfer operation is executed from the origin-specific account `{token_addr}/ibc/{ibc_token_hash}/balance/{sender_addr}` (Ref. [Receiver](#Receiver)). We can set `{token_addr}`, `{port_id}/{channel_id}/../{token_addr}`, or `ibc/{ibc_token_hash}/{token_addr}` to the denomination. When `ibc/{ibc_token_hash}/` is prefixed, the transfer looks up the prefixed denomination `{port_id}/{channel_id}/{denom}` by the `{ibc_token_hash}`. `{denom}` might have more prefixes to specify the source chains, e.g. `{port_id_b}/{channel_id_b}/{port_id_a}/{channel_id_a}/{token_addr}`. According to the prefixed port ID and channel ID, the transfer operation escrows or burns the amount of the token (ICS20). #### Escrow When this chain is the source zone, i.e. the denomination does NOT start with the port ID and the channel ID of this chain, the amount of the specified token is sent from the sender's account key to the escrow key `{token_addr}/ibc/{port_id}/{channel_id}/balance/IbcEscrow`. The escrow address should be associated with IBC port ID and channel ID to unescrow it later. The escrow address is one of internal addresses, `InternalAddress::IbcEscrow`. It is not allowed to transfer from the escrow account without IBC token transfer operation. IBC token VP should check the transfer from the escrow accounts. diff --git a/documentation/dev/src/explore/design/ledger/storage/data-schema.md b/documentation/dev/src/explore/design/ledger/storage/data-schema.md index 2535aeb8e5..aa98631b64 100644 --- a/documentation/dev/src/explore/design/ledger/storage/data-schema.md +++ b/documentation/dev/src/explore/design/ledger/storage/data-schema.md @@ -27,7 +27,7 @@ schema may be almost free. A single address in the ledger is define with all schema. A specific schema can be looked up with a key in its subspace.
The schema variable is not yet -implemented and the definition might change to something more appropiate. +implemented and the definition might change to something more appropriate. ## Schema derived library code diff --git a/documentation/dev/src/explore/design/ledger/vp.md b/documentation/dev/src/explore/design/ledger/vp.md index 8e9b3a680b..c89774b611 100644 --- a/documentation/dev/src/explore/design/ledger/vp.md +++ b/documentation/dev/src/explore/design/ledger/vp.md @@ -18,7 +18,7 @@ fn validate_tx( tx_data: Vec, // Address of this VP addr: Address, - // Storage keys that have been modified by the transation, relevant to this VP + // Storage keys that have been modified by the transaction, relevant to this VP keys_changed: BTreeSet, // Set of all the addresses whose VP was triggered by the transaction verifiers: BTreeSet
, diff --git a/documentation/dev/src/explore/dev/development-considerations.md b/documentation/dev/src/explore/dev/development-considerations.md index 42457d03b1..138c84aac0 100644 --- a/documentation/dev/src/explore/dev/development-considerations.md +++ b/documentation/dev/src/explore/dev/development-considerations.md @@ -8,7 +8,7 @@ For safety critical parts it is good to add redundancy in safety checks, especia A very related concern to correctness is error handling. Whenever possible, it is best to rule out errors using the type system, i.e. make invalid states impossible to represent using the type system. However, there are many places where that is not practical or possible (for example, when we consume some values from Tendermint, in complex logic or in IO operations like reading and writing from/to storage). How errors should be handled depends on the context. -When you're not sure which context some piece of code falls into or if you want to make it re-usable in different settings, the default should be "defensive coding" approach, with any possible issues captured in `Result`'s errors and propagated up to the caller. The caller can then decide how to handle errors. +When you're not sure which context some piece of code falls into or if you want to make it reusable in different settings, the default should be a "defensive coding" approach, with any possible issues captured in `Result`'s errors and propagated up to the caller. The caller can then decide how to handle errors.
### Native code that doesn't depend on interactions diff --git a/documentation/dev/src/explore/dev/storage_api.md b/documentation/dev/src/explore/dev/storage_api.md index ba0e34e022..cc3fe35d6e 100644 --- a/documentation/dev/src/explore/dev/storage_api.md +++ b/documentation/dev/src/explore/dev/storage_api.md @@ -39,7 +39,7 @@ All the methods in the `StorageRead` and `StorageWrite` return `storage_api::Res A custom `storage_api::Error` can be constructed from a static str with `new_const`, or from another Error type with `new`. Furthermore, you can wrap your custom `Result` with `into_storage_result` using the `trait ResultExt`. ```admonish warning -In library code written over `storage_api`, it is critical to propagate errors correctly (no `unwrap/expect`) to be able to re-use these in native environment. +In library code written over `storage_api`, it is critical to propagate errors correctly (no `unwrap/expect`) to be able to reuse these in native environment. ``` In native VPs the `storage_api` methods may return an error when we run out of gas in the current execution and a panic would crash the node. This is a good motivation to document error conditions of your functions. Furthermore, adding new error conditions to existing functions should be considered a breaking change and reviewed carefully! diff --git a/documentation/dev/src/explore/libraries/async.md b/documentation/dev/src/explore/libraries/async.md index ae93c67673..831f3adf7d 100644 --- a/documentation/dev/src/explore/libraries/async.md +++ b/documentation/dev/src/explore/libraries/async.md @@ -3,7 +3,7 @@ [Rust book on asynchronous programming](https://rust-lang.github.io/async-book/01_getting_started/01_chapter.html) Rust does not incorporate a default runtime, and implementations are not -compatible with eachother. +compatible with each other. c.f. The three main one are async-std, futures and tokio. 
diff --git a/ethereum_bridge/src/protocol/transactions/utils.rs b/ethereum_bridge/src/protocol/transactions/utils.rs index d2f44c995e..beee69a0de 100644 --- a/ethereum_bridge/src/protocol/transactions/utils.rs +++ b/ethereum_bridge/src/protocol/transactions/utils.rs @@ -13,7 +13,7 @@ use namada_proof_of_stake::types::WeightedValidator; pub(super) trait GetVoters { /// Extract all the voters and the block heights at which they voted from /// the given proof. - // TODO(feature = "abcipp"): we do not neet to return block heights + // TODO(feature = "abcipp"): we do not need to return block heights // anymore. votes will always be from `storage.last_height`. fn get_voters(self) -> HashSet<(Address, BlockHeight)>; } @@ -190,7 +190,7 @@ mod tests { #[test] /// Assert we error if we are passed an `(Address, BlockHeight)` but are not - /// given a corrseponding set of validators for the block height + /// given a corresponding set of validators for the block height fn test_get_voting_powers_for_selected_no_consensus_validators_for_height() { let all_consensus = BTreeMap::default(); diff --git a/ethereum_bridge/src/test_utils.rs b/ethereum_bridge/src/test_utils.rs index cdc33398ac..b74eb14bc3 100644 --- a/ethereum_bridge/src/test_utils.rs +++ b/ethereum_bridge/src/test_utils.rs @@ -1,4 +1,4 @@ -//! Test utilies for the Ethereum bridge crate. +//! Test utilities for the Ethereum bridge crate. use std::collections::HashMap; use std::num::NonZeroU64; diff --git a/proof_of_stake/src/epoched.rs b/proof_of_stake/src/epoched.rs index 06fdf148dc..fd1c248c74 100644 --- a/proof_of_stake/src/epoched.rs +++ b/proof_of_stake/src/epoched.rs @@ -1059,7 +1059,7 @@ pub enum DynEpochOffset { /// Offset at slash processing delay (unbonding + /// cubic_slashing_window + 1). 
SlashProcessingLen, - /// Offset at slash processing delay plus the defaul num past epochs + /// Offset at slash processing delay plus the default num past epochs SlashProcessingLenPlus, /// Offset at the max proposal period MaxProposalPeriod, diff --git a/proof_of_stake/src/lib.rs b/proof_of_stake/src/lib.rs index 3d79a345e9..531ea5e31a 100644 --- a/proof_of_stake/src/lib.rs +++ b/proof_of_stake/src/lib.rs @@ -2305,7 +2305,7 @@ where remaining = token::Amount::zero(); // NOTE: When there are multiple `src_validators` from which we're - // unbonding, `validator_to_modify` cannot get overriden, because + // unbonding, `validator_to_modify` cannot get overridden, because // only one of them can be a partial unbond (`new_entry` // is partial unbond) if let Some((bond_epoch, new_bond_amount)) = @@ -2542,8 +2542,8 @@ where } /// Compute a token amount after slashing, given the initial amount and a set of -/// slashes. It is assumed that the input `slashes` are those commited while the -/// `amount` was contributing to voting power. +/// slashes. It is assumed that the input `slashes` are those committed while +/// the `amount` was contributing to voting power. fn get_slashed_amount( params: &PosParams, amount: token::Amount, @@ -4651,7 +4651,7 @@ where /// Process a slash by (i) slashing the misbehaving validator; and (ii) any /// validator to which it has redelegated some tokens and the slash misbehaving -/// epoch is wihtin the redelegation slashing window. +/// epoch is within the redelegation slashing window. /// /// `validator` - the misbehaving validator. /// `slash_rate` - the slash rate. @@ -4740,7 +4740,7 @@ where /// In the context of a redelegation, the function computes how much a validator /// (the destination validator of the redelegation) should be slashed due to the /// misbehaving of a second validator (the source validator of the -/// redelegation). The function computes how much the validator whould be +/// redelegation). 
The function computes how much the validator would be /// slashed at all epochs between the current epoch (curEpoch) + 1 and the /// current epoch + 1 + PIPELINE_OFFSET, accounting for any tokens of the /// redelegation already unbonded. @@ -5783,8 +5783,8 @@ where Some(missed_votes) => missed_votes + 1, None => { // Missing liveness data for the validator (newly - // added to the conensus - // set), intialize it + // added to the consensus + // set), initialize it 1 } } @@ -5869,7 +5869,7 @@ pub mod test_utils { use crate::parameters::PosParams; use crate::types::GenesisValidator; - /// Helper function to intialize storage with PoS data + /// Helper function to initialize storage with PoS data /// about validators for tests. pub fn init_genesis_helper( storage: &mut S, diff --git a/proof_of_stake/src/tests.rs b/proof_of_stake/src/tests.rs index 0398a7b3d7..bc6c71c841 100644 --- a/proof_of_stake/src/tests.rs +++ b/proof_of_stake/src/tests.rs @@ -5685,7 +5685,7 @@ fn test_from_sm_case_1() { - unbond_amount, new_bond_amount ); - // The current bond should be sum of redelegations fom the modified epoch + // The current bond should be sum of redelegations from the modified epoch let cur_bond_amount = bonds_handle .get_delta_val(&storage, new_entry_epoch) .unwrap() diff --git a/proof_of_stake/src/types.rs b/proof_of_stake/src/types.rs index 20c9b2a0ca..0149f2d365 100644 --- a/proof_of_stake/src/types.rs +++ b/proof_of_stake/src/types.rs @@ -259,7 +259,7 @@ pub type LivenessMissedVotes = NestedMap>; /// The sum of missed votes within some interval for each of the consensus /// validators. The value in this map should in principle be the number of -/// elements in the correspoding inner LazySet of [`LivenessMissedVotes`]. +/// elements in the corresponding inner LazySet of [`LivenessMissedVotes`]. 
pub type LivenessSumMissedVotes = LazyMap; #[derive( @@ -673,7 +673,7 @@ impl Display for SlashType { pub fn into_tm_voting_power(votes_per_token: Dec, tokens: Amount) -> i64 { let pow = votes_per_token * u128::try_from(tokens).expect("Voting power out of bounds"); - i64::try_from(pow.to_uint().expect("Cant fail")) + i64::try_from(pow.to_uint().expect("Can't fail")) .expect("Invalid voting power") } diff --git a/scripts/get_cometbft.sh b/scripts/get_cometbft.sh index 59e585ab78..5695f28875 100755 --- a/scripts/get_cometbft.sh +++ b/scripts/get_cometbft.sh @@ -2,7 +2,7 @@ set -Eo pipefail -# an examplary download-url +# an example download-url # https://github.com/tendermint/tendermint/releases/download/v0.34.13/tendermint_0.34.13_linux_amd64.tar.gz # https://github.com/heliaxdev/tendermint/releases/download/v0.1.1-abcipp/tendermint_0.1.0-abcipp_darwin_amd64.tar.gz CMT_MAJORMINOR="0.37" diff --git a/scripts/get_tendermint.sh b/scripts/get_tendermint.sh index 024299ec93..65e19c77e8 100755 --- a/scripts/get_tendermint.sh +++ b/scripts/get_tendermint.sh @@ -2,7 +2,7 @@ set -Eo pipefail -# an examplary download-url +# an example download-url # https://github.com/tendermint/tendermint/releases/download/v0.34.13/tendermint_0.34.13_linux_amd64.tar.gz # https://github.com/heliaxdev/tendermint/releases/download/v0.1.1-abcipp/tendermint_0.1.0-abcipp_darwin_amd64.tar.gz TM_MAJORMINOR="0.1" diff --git a/sdk/Cargo.toml b/sdk/Cargo.toml index 3fd7219a9f..3093de003a 100644 --- a/sdk/Cargo.toml +++ b/sdk/Cargo.toml @@ -43,7 +43,7 @@ async-client = [ async-send = [] -# for integration tests and test utilies +# for integration tests and test utilities testing = [ "namada_core/testing", "namada_ethereum_bridge/testing", diff --git a/sdk/src/args.rs b/sdk/src/args.rs index 6599b0e9fb..86a489cf7f 100644 --- a/sdk/src/args.rs +++ b/sdk/src/args.rs @@ -1862,7 +1862,7 @@ pub struct Tx { /// Whether to force overwrite the above alias, if it is provided, in the /// wallet. 
pub wallet_alias_force: bool, - /// The amount being payed (for gas unit) to include the transaction + /// The amount being paid (for gas unit) to include the transaction pub fee_amount: Option, /// The fee payer signing key pub wrapper_fee_payer: Option, @@ -1956,7 +1956,7 @@ pub trait TxBuilder: Sized { ..x }) } - /// The amount being payed (for gas unit) to include the transaction + /// The amount being paid (for gas unit) to include the transaction fn fee_amount(self, fee_amount: InputAmount) -> Self { self.tx(|x| Tx { fee_amount: Some(fee_amount), diff --git a/sdk/src/error.rs b/sdk/src/error.rs index a741227cba..18be902d10 100644 --- a/sdk/src/error.rs +++ b/sdk/src/error.rs @@ -24,7 +24,7 @@ pub enum Error { /// Errors that are caused by trying to retrieve a pinned transaction #[error("Error in retrieving pinned balance: {0}")] Pinned(#[from] PinnedBalanceError), - /// Key Retrival Errors + /// Key Retrieval Errors #[error("Key Error: {0}")] KeyRetrival(#[from] storage::Error), /// Transaction Errors @@ -139,7 +139,7 @@ pub enum TxError { /// Error during broadcasting a transaction #[error("Encountered error while broadcasting transaction: {0}")] TxBroadcast(RpcError), - /// Invalid comission rate set + /// Invalid commission rate set #[error("Invalid new commission rate, received {0}")] InvalidCommissionRate(Dec), /// Invalid validator address diff --git a/sdk/src/eth_bridge/bridge_pool.rs b/sdk/src/eth_bridge/bridge_pool.rs index 639257fc70..e9784b81bb 100644 --- a/sdk/src/eth_bridge/bridge_pool.rs +++ b/sdk/src/eth_bridge/bridge_pool.rs @@ -416,7 +416,7 @@ pub async fn query_relay_progress( Ok(()) } -/// Internal methdod to construct a proof that a set of transfers are in the +/// Internal method to construct a proof that a set of transfers are in the /// bridge pool. 
async fn construct_bridge_pool_proof( client: &(impl Client + Sync), diff --git a/sdk/src/masp.rs b/sdk/src/masp.rs index 68467e2673..812db3171c 100644 --- a/sdk/src/masp.rs +++ b/sdk/src/masp.rs @@ -1567,7 +1567,7 @@ impl ShieldedContext { // Determine epoch in which to submit potential shielded transaction let epoch = rpc::query_epoch(context.client()).await?; // Context required for storing which notes are in the source's - // possesion + // possession let memo = MemoBytes::empty(); // Try to get a seed from env var, if any. diff --git a/sdk/src/queries/vp/pgf.rs b/sdk/src/queries/vp/pgf.rs index 9e8ea2f5cc..a7400f2732 100644 --- a/sdk/src/queries/vp/pgf.rs +++ b/sdk/src/queries/vp/pgf.rs @@ -15,7 +15,7 @@ router! {PGF, ( "parameters" ) -> PgfParameters = parameters, } -/// Query the currect pgf steward set +/// Query the current pgf steward set fn stewards( ctx: RequestCtx<'_, D, H, V, T>, ) -> storage_api::Result> @@ -38,7 +38,7 @@ where storage_api::pgf::is_steward(ctx.wl_storage, &address) } -/// Query the continous pgf fundings +/// Query the continuous pgf fundings fn funding( ctx: RequestCtx<'_, D, H, V, T>, ) -> storage_api::Result> diff --git a/sdk/src/queries/vp/pos.rs b/sdk/src/queries/vp/pos.rs index f5ceb06e11..1011964588 100644 --- a/sdk/src/queries/vp/pos.rs +++ b/sdk/src/queries/vp/pos.rs @@ -138,7 +138,7 @@ pub struct Enriched { pub unbonds_total: token::Amount, /// Sum of the unbond slashed amounts pub unbonds_total_slashed: token::Amount, - /// Sum ofthe withdrawable amounts + /// Sum of the withdrawable amounts pub total_withdrawable: token::Amount, } diff --git a/sdk/src/rpc.rs b/sdk/src/rpc.rs index 6dc6850193..38e4ee5b37 100644 --- a/sdk/src/rpc.rs +++ b/sdk/src/rpc.rs @@ -452,7 +452,7 @@ impl<'a> From> for Query { } /// Call the corresponding `tx_event_query` RPC method, to fetch -/// the current status of a transation. +/// the current status of a transaction. 
pub async fn query_tx_events( client: &C, tx_event_query: TxEventQuery<'_>, @@ -1115,7 +1115,7 @@ pub async fn wait_until_node_is_synched( .map_err(|_| { edisplay_line!( io, - "Node is still catching up, wait for it to finish synching." + "Node is still catching up, wait for it to finish syncing." ); Error::Query(QueryError::CatchingUp) })? diff --git a/sdk/src/signing.rs b/sdk/src/signing.rs index 7f854b59b0..3a3528f2f3 100644 --- a/sdk/src/signing.rs +++ b/sdk/src/signing.rs @@ -67,7 +67,7 @@ const ENV_VAR_LEDGER_LOG_PATH: &str = "NAMADA_LEDGER_LOG_PATH"; /// Env. var specifying where to store transaction debug outputs const ENV_VAR_TX_LOG_PATH: &str = "NAMADA_TX_LOG_PATH"; -/// A struture holding the signing data to craft a transaction +/// A structure holding the signing data to craft a transaction #[derive(Clone)] pub struct SigningTxData { /// The address owning the transaction @@ -132,7 +132,7 @@ pub fn find_key_by_pk( public_key: &common::PublicKey, ) -> Result { if *public_key == masp_tx_key().ref_to() { - // We already know the secret key corresponding to the MASP sentinal key + // We already know the secret key corresponding to the MASP sentinel key Ok(masp_tx_key()) } else { // Otherwise we need to search the wallet for the secret key @@ -425,7 +425,7 @@ pub async fn init_validator_signing_data( }) } -/// Informations about the post-tx balance of the tx's source. Used to correctly +/// Information about the post-tx balance of the tx's source. Used to correctly /// handle fee validation in the wrapper tx pub struct TxSourcePostBalance { /// The balance of the tx source after the tx has been applied @@ -944,7 +944,7 @@ pub async fn generate_test_vector( } /// Convert decimal numbers into the format used by Ledger. Specifically remove -/// all insignificant zeros occuring after decimal point. +/// all insignificant zeros occurring after the decimal point. 
fn to_ledger_decimal(amount: &str) -> String { if amount.contains('.') { let mut amount = amount.trim_end_matches('0').to_string(); diff --git a/sdk/src/tx.rs b/sdk/src/tx.rs index d99a9c6f44..447beef3e1 100644 --- a/sdk/src/tx.rs +++ b/sdk/src/tx.rs @@ -502,7 +502,7 @@ pub async fn save_initialized_accounts( } } -/// Submit validator comission rate change +/// Submit validator commission rate change pub async fn build_validator_commission_change( context: &impl Namada, args::CommissionRateChange { @@ -2812,7 +2812,7 @@ async fn check_balance_too_low_err( } } // We're either facing a no response or a conversion error - // either way propigate it up + // either way propagate it up Err(err) => Err(err), } } diff --git a/shared/Cargo.toml b/shared/Cargo.toml index 1141dbc8d6..48572b2516 100644 --- a/shared/Cargo.toml +++ b/shared/Cargo.toml @@ -53,7 +53,7 @@ http-client = [ "tendermint-rpc/http-client" ] -# for integration tests and test utilies +# for integration tests and test utilities testing = [ "namada_core/testing", "namada_ethereum_bridge/testing", diff --git a/shared/src/ledger/governance/mod.rs b/shared/src/ledger/governance/mod.rs index fe02257a50..610aaa831f 100644 --- a/shared/src/ledger/governance/mod.rs +++ b/shared/src/ledger/governance/mod.rs @@ -155,7 +155,7 @@ where for counter in pre_counter..post_counter { // Construct the set of expected keys - // NOTE: we don't check the existance of committing_epoch because + // NOTE: we don't check the existence of committing_epoch because // it's going to be checked later into the VP let mandatory_keys = BTreeSet::from([ counter_key.clone(), diff --git a/shared/src/ledger/native_vp/ethereum_bridge/bridge_pool_vp.rs b/shared/src/ledger/native_vp/ethereum_bridge/bridge_pool_vp.rs index 52aa738fd4..99dfe14b9e 100644 --- a/shared/src/ledger/native_vp/ethereum_bridge/bridge_pool_vp.rs +++ b/shared/src/ledger/native_vp/ethereum_bridge/bridge_pool_vp.rs @@ -311,7 +311,7 @@ where Ok(true) } - /// Deteremine the debit 
and credit amounts that should be checked. + /// Determine the debit and credit amounts that should be checked. fn determine_escrow_checks<'trans, 'this: 'trans>( &'this self, wnam_address: &EthAddress, @@ -513,7 +513,7 @@ fn sum_gas_and_token_amounts( .checked_add(transfer.transfer.amount) .ok_or_else(|| { Error(eyre!( - "Addition oveflowed adding gas fee + transfer amount." + "Addition overflowed adding gas fee + transfer amount." )) }) } diff --git a/shared/src/ledger/native_vp/ethereum_bridge/vp.rs b/shared/src/ledger/native_vp/ethereum_bridge/vp.rs index 7b6fd767b2..9e98f18dbb 100644 --- a/shared/src/ledger/native_vp/ethereum_bridge/vp.rs +++ b/shared/src/ledger/native_vp/ethereum_bridge/vp.rs @@ -69,7 +69,7 @@ where // The amount escrowed should increase. if escrow_pre < escrow_post { // NB: normally, we only escrow NAM under the Ethereum bridge - // addresss in the context of a Bridge pool transfer + // address in the context of a Bridge pool transfer Ok(verifiers.contains(&storage::bridge_pool::BRIDGE_POOL_ADDRESS)) } else { tracing::info!( @@ -122,7 +122,7 @@ where /// Checks if `keys_changed` represents a valid set of changed keys. /// -/// This implies cheking if two distinct keys were changed: +/// This implies checking if two distinct keys were changed: /// /// 1. The Ethereum bridge escrow account's NAM balance key. /// 2. Another account's NAM balance key. 
diff --git a/shared/src/ledger/native_vp/masp.rs b/shared/src/ledger/native_vp/masp.rs index 54048f6759..50362bae29 100644 --- a/shared/src/ledger/native_vp/masp.rs +++ b/shared/src/ledger/native_vp/masp.rs @@ -182,7 +182,7 @@ where if !(1..=4).contains(&out_length) { tracing::debug!( "Transparent output to a transaction to the masp must be \ - beteween 1 and 4 but is {}", + between 1 and 4 but is {}", transp_bundle.vout.len() ); diff --git a/shared/src/ledger/pgf/utils.rs b/shared/src/ledger/pgf/utils.rs index e1bec701ba..3d320de3e4 100644 --- a/shared/src/ledger/pgf/utils.rs +++ b/shared/src/ledger/pgf/utils.rs @@ -34,7 +34,7 @@ impl ProposalEvent { } } - /// Create a new proposal event for pgf continous funding + /// Create a new proposal event for pgf continuous funding pub fn pgf_funding_payment( target: Address, amount: token::Amount, diff --git a/shared/src/ledger/protocol/mod.rs b/shared/src/ledger/protocol/mod.rs index 3dd3a8710d..c1f165a8bf 100644 --- a/shared/src/ledger/protocol/mod.rs +++ b/shared/src/ledger/protocol/mod.rs @@ -461,7 +461,7 @@ where /// Transfer `token` from `src` to `dest`. Returns an `Err` if `src` has /// insufficient balance or if the transfer the `dest` would overflow (This can -/// only happen if the total supply does't fit in `token::Amount`). Contrary to +/// only happen if the total supply doesn't fit in `token::Amount`). Contrary to /// `storage_api::token::transfer` this function updates the tx write log and /// not the block write log. fn token_transfer( @@ -1034,7 +1034,7 @@ where // transaction from consuming resources that have not // been acquired in the corresponding wrapper tx. For // all the other errors we keep evaluating the vps. 
This - // allows to display a consistent VpsResult accross all + // allows us to display a consistent VpsResult across all // nodes and find any invalid signatures Error::GasError(_) => { return Err(err); } diff --git a/shared/src/ledger/vp_host_fns.rs b/shared/src/ledger/vp_host_fns.rs index 5bf9d487e4..5b4bc4f671 100644 --- a/shared/src/ledger/vp_host_fns.rs +++ b/shared/src/ledger/vp_host_fns.rs @@ -36,7 +36,7 @@ pub enum RuntimeError { MemoryError(Box), #[error("Trying to read a temporary value with read_post")] ReadTemporaryValueError, - #[error("Trying to read a permament value with read_temp")] + #[error("Trying to read a permanent value with read_temp")] ReadPermanentValueError, #[error("Invalid transaction code hash")] InvalidCodeHash, diff --git a/shared/src/vm/host_env.rs b/shared/src/vm/host_env.rs index 2a527d2ff5..d29bd4b257 100644 --- a/shared/src/vm/host_env.rs +++ b/shared/src/vm/host_env.rs @@ -1879,8 +1879,8 @@ where } /// Verify a transaction signature -/// TODO: this is just a warkaround to track gas for multiple singature -/// verifications. When the runtime gas meter is implemented, this funcion can +/// TODO: this is just a workaround to track gas for multiple signature +/// verifications. When the runtime gas meter is implemented, this function can /// be removed #[allow(clippy::too_many_arguments)] pub fn vp_verify_tx_section_signature( diff --git a/tests/src/e2e/eth_bridge_tests.rs b/tests/src/e2e/eth_bridge_tests.rs index 9f6c14fb1f..11166037eb 100644 --- a/tests/src/e2e/eth_bridge_tests.rs +++ b/tests/src/e2e/eth_bridge_tests.rs @@ -278,7 +278,7 @@ async fn test_roundtrip_eth_transfer() -> Result<()> { } /// In this test, we check the following: -/// 1. We can successfully add tranfers to the bridge pool. +/// 1. We can successfully add transfers to the bridge pool. /// 2. We can query the bridge pool and it is non-empty. /// 3. We request a proof of inclusion of the transfer into the /// bridge pool. 
diff --git a/tests/src/e2e/ibc_tests.rs b/tests/src/e2e/ibc_tests.rs index d69d3f1844..1ea7667f57 100644 --- a/tests/src/e2e/ibc_tests.rs +++ b/tests/src/e2e/ibc_tests.rs @@ -1473,7 +1473,7 @@ fn check_balances_after_non_ibc( client.exp_string(&expected)?; client.assert_success(); - // Check the traget + // Check the target let query_args = vec!["balance", "--owner", ALBERT, "--token", NAM, "--node", &rpc]; let expected = format!("{}/nam: 50000", trace_path); diff --git a/tests/src/e2e/ledger_tests.rs b/tests/src/e2e/ledger_tests.rs index e11317d4fe..5b3aa128ba 100644 --- a/tests/src/e2e/ledger_tests.rs +++ b/tests/src/e2e/ledger_tests.rs @@ -685,7 +685,7 @@ fn ledger_txs_and_queries() -> Result<()> { /// Test the optional disposable keypair for wrapper signing /// /// 1. Test that a tx requesting a disposable signer with a correct unshielding -/// operation is succesful +/// operation is successful /// 2. Test that a tx requesting a disposable signer /// providing an insufficient unshielding fails #[test] @@ -2083,7 +2083,7 @@ fn proposal_submission() -> Result<()> { /// Test submission and vote of a PGF proposal /// -/// 1 - Sumbit two proposals +/// 1 - Submit two proposals /// 2 - Check balance /// 3 - Vote for the accepted proposals /// 4 - Check one proposal passed and the other one didn't @@ -2337,7 +2337,7 @@ fn pgf_governance_proposal() -> Result<()> { let christel = find_address(&test, CHRISTEL)?; let pgf_funding = PgfFunding { - continous: vec![PgfFundingTarget { + continuous: vec![PgfFundingTarget { amount: token::Amount::from_u64(10), address: bertha.clone(), }], diff --git a/tests/src/e2e/setup.rs b/tests/src/e2e/setup.rs index 6c041b67a0..c0c2bd59fb 100644 --- a/tests/src/e2e/setup.rs +++ b/tests/src/e2e/setup.rs @@ -100,7 +100,7 @@ pub fn update_actor_config( .unwrap(); } -/// Configure validator p2p settings to allow duplicat ips +/// Configure validator p2p settings to allow duplicate ips pub fn allow_duplicate_ips(test: &Test, chain_id: &ChainId, 
who: &Who) { update_actor_config(test, chain_id, who, |config| { config.ledger.cometbft.p2p.allow_duplicate_ip = true; diff --git a/tests/src/vm_host_env/mod.rs b/tests/src/vm_host_env/mod.rs index eed12a6112..f4b204bd66 100644 --- a/tests/src/vm_host_env/mod.rs +++ b/tests/src/vm_host_env/mod.rs @@ -895,7 +895,7 @@ mod tests { .add_serialized_data(tx_data.clone()) .sign_raw(keypairs, pks_map, None) .sign_wrapper(keypair); - // open the channle with the message + // open the channel with the message tx_host_env::ibc::ibc_actions(tx::ctx()) .execute(&tx_data) .expect("opening the channel failed"); diff --git a/wasm_for_tests/wasm_source/src/lib.rs b/wasm_for_tests/wasm_source/src/lib.rs index d1c1b6696c..25f4d669ab 100644 --- a/wasm_for_tests/wasm_source/src/lib.rs +++ b/wasm_for_tests/wasm_source/src/lib.rs @@ -9,7 +9,7 @@ pub mod main { } } -/// A tx that fails everytime. +/// A tx that fails every time. #[cfg(feature = "tx_fail")] pub mod main { use namada_tx_prelude::*;