Pallet Verifier: Testing Guide

Setup

Follow the installation instructions outlined in the project README.md.

Running UI tests

Run the UI tests with the following command from the project root:

cargo test
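To run only a subset of the UI tests, note that the ui_test framework used as the test runner (see Test Structure below) treats extra command-line arguments as file-path filters. So an invocation along the following lines should work; the filter argument here is an assumption based on ui_test's default argument handling, not a documented pallet-verifier option:

cargo test -- tests/ui/driver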

Test Structure

Like the Rust compiler itself, pallet-verifier is mainly tested using UI tests (i.e. "a collection of general-purpose tests which primarily focus on validating the console output of the compiler"). Similar to other Rust compiler plugins (like clippy and miri), pallet-verifier also leverages the ui_test framework as its UI test runner.

pallet-verifier's UI tests are defined in the tests/ui directory. Test cases are divided into three main test suites, each in its own subdirectory:

  • tests/ui/driver
  • tests/ui/cargo
  • tests/ui/sdk

At a higher level, test cases in the tests/ui/driver and tests/ui/cargo test suites are essentially minimal sanity checks with descriptive names based on the specific feature/behaviour they validate (e.g. integer cast overflow detection), while test cases in the tests/ui/sdk test suite are production FRAME pallet tests, including FRAME pallets copied directly from the Polkadot SDK.
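For illustration, a driver test case for integer cast overflow detection might look like the minimal sketch below; the file contents are hypothetical, not an actual test case from the repository:

```rust
// Hypothetical tests/ui/driver test case: a lossy integer cast that
// the integer cast overflow check should flag.

pub fn narrow(val: u16) -> u8 {
    // `val` can exceed `u8::MAX`, so this cast can silently truncate.
    val as u8
}

fn main() {
    let _ = narrow(300);
}
```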

The expected stdout and stderr output for each test case is defined in *.stdout and *.stderr files, with the absence of a *.stderr file implying that the test case has no expected diagnostics.
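For instance, a *.stderr fixture contains rustc-style diagnostics; a hypothetical fixture for the cast sketch above might look like the following (the message wording and spans are assumptions, not pallet-verifier's actual output):

```
warning: attempt to cast with overflow
 --> narrow.rs:6:5
  |
6 |     val as u8
  |     ^^^^^^^^^
```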

The custom benchmark

pallet-verifier includes a simple custom benchmark used to test its accuracy and speed on a few production pallets from the Polkadot SDK.

Run the benchmark with the following command from the project root:

cargo bench

The benchmark works by invoking pallet-verifier on two versions/variants of each production FRAME pallet in the benchmark suite:

  • an unmodified "sdk" version (copied directly from the Polkadot SDK)
  • an "edited" variant (a modified copy of the same pallet)

It then compares the returned diagnostics to fixtures that describe the expected results (a minimal sketch of this comparison follows the list below), and reports metrics at different levels of granularity, i.e. for:

  • each dispatchable or public associated function
  • each pallet version/variant (both "sdk" and "edited")
  • the entire benchmark suite
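For a concrete picture of the comparison step, here is a minimal sketch of how reported diagnostics could be matched against fixtures to produce per-function accuracy counts. The types, field names, and metric choices are assumptions for illustration, not pallet-verifier's actual benchmark types; the real implementation lives in the benchmark runner linked below:

```rust
use std::collections::HashSet;

// Illustrative stand-in for a diagnostic, reduced to the fields
// needed for comparison.
#[derive(Clone, PartialEq, Eq, Hash)]
struct Diagnostic {
    function: String, // dispatchable or public associated function
    message: String,  // diagnostic text
}

// Accuracy counts for one function, one pallet variant, or the whole suite.
struct Metrics {
    true_positives: usize,  // reported and expected
    false_positives: usize, // reported but not expected
    false_negatives: usize, // expected but not reported
}

fn compare(reported: &[Diagnostic], expected: &[Diagnostic]) -> Metrics {
    let reported: HashSet<&Diagnostic> = reported.iter().collect();
    let expected: HashSet<&Diagnostic> = expected.iter().collect();
    Metrics {
        true_positives: reported.intersection(&expected).count(),
        false_positives: reported.difference(&expected).count(),
        false_negatives: expected.difference(&reported).count(),
    }
}

fn main() {
    let reported = vec![Diagnostic {
        function: "transfer".into(),
        message: "possible integer cast overflow".into(),
    }];
    // Comparing a set of diagnostics against itself yields only true positives.
    let metrics = compare(&reported, &reported);
    assert_eq!(metrics.true_positives, 1);
}
```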

Check out the inline comments in the benchmark runner for more details.