Add SPEEDTEST #10920

Merged 3 commits on Jul 2, 2023
book/src/development/speedtest.md (24 additions, 0 deletions)
@@ -0,0 +1,24 @@
# Speedtest
`SPEEDTEST` is the tool we use to measure a lint's performance; it works by executing the same test several times.

It's useful for measuring how changes to existing lints affect performance and for deciding whether a change regresses
performance too much. `SPEEDTEST` is controlled through the `SPEEDTEST` (and `SPEEDTEST_*`) environment variables.

## Checking Speedtest

To do a simple speed test of a lint (e.g. `allow_attributes`), use this command.

```sh
$ SPEEDTEST=ui TESTNAME="allow_attributes" cargo uitest -- --nocapture
```

This will run all `ui` tests (`SPEEDTEST=ui`) whose names start with `allow_attributes`. By default, `SPEEDTEST` will
iterate your test 1000 times, but you can change this with `SPEEDTEST_ITERATIONS`.

```sh
$ SPEEDTEST=toml SPEEDTEST_ITERATIONS=100 TESTNAME="semicolon_block" cargo uitest -- --nocapture
```

> **WARNING**: Be sure to use `-- --nocapture` at the end of the command to see the average test time. If you don't
> use `-- --nocapture` (e.g. `SPEEDTEST=ui TESTNAME="let_underscore_untyped" cargo uitest`), the timing will not be
> shown.
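
With `-- --nocapture`, the average is printed at the end of the run. The output has roughly this shape (the timing
figure below is illustrative, not a real measurement):

```text
----------- STARTING SPEEDTEST -----------
...
average UI time: 1432 millis.
```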
tests/compile-test.rs (39 additions, 6 deletions)
@@ -213,12 +213,45 @@ fn run_ui_cargo() {

fn main() {
    set_var("CLIPPY_DISABLE_DOCS_LINKS", "true");
    // The SPEEDTEST_* env variables can be used to check Clippy's performance on your PR. It runs the
    // affected tests 1000 times (by default) and reports the average.
    if let Ok(speedtest) = std::env::var("SPEEDTEST") {
        println!("----------- STARTING SPEEDTEST -----------");
        let f = match speedtest.as_str() {
            "ui" => run_ui as fn(),
            "cargo" => run_ui_cargo as fn(),
            "toml" => run_ui_toml as fn(),
            "internal" => run_internal_tests as fn(),
            "rustfix-coverage-known-exceptions-accuracy" => rustfix_coverage_known_exceptions_accuracy as fn(),
            "ui-cargo-toml-metadata" => ui_cargo_toml_metadata as fn(),
            _ => panic!("unknown speedtest: {speedtest} || accepted speedtests are: [ui, cargo, toml, internal, rustfix-coverage-known-exceptions-accuracy, ui-cargo-toml-metadata]"),
        };

        let iterations = if let Ok(iterations_str) = std::env::var("SPEEDTEST_ITERATIONS") {
            iterations_str
                .parse::<u64>()
                .unwrap_or_else(|_| panic!("Couldn't parse `{iterations_str}`, please use a valid u64"))
        } else {
            1000
        };

        // Total elapsed time across all iterations, in milliseconds.
        let mut sum = 0;
        for _ in 0..iterations {
            let start = std::time::Instant::now();
            f();
            sum += start.elapsed().as_millis();
        }
Comment on lines +240 to +245 (Member):
since the timing itself also takes some time to run, would it make sense to move the timing stuff out of the loop and divide the time taken by the number of iterations after the loop? the Instant::now().elapsed() overhead might also be negligible compared to how long a lint pass takes, so maybe this doesn't matter that much (from what I remember, it was around 40ns on the playground...)
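
For illustration, a minimal sketch of the variant being suggested here (not code from the PR; it reuses the `f` and `iterations` bindings defined above):

```rust
// Sketch only: start the clock once, run every iteration, then divide,
// so the per-iteration Instant::now() overhead disappears entirely.
let start = std::time::Instant::now();
for _ in 0..iterations {
    f();
}
let average = start.elapsed().as_millis() / u128::from(iterations);
println!("average time: {average} millis.");
```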

Reply (Contributor):

I think the usual way to micro benchmark is to time variable numbers of iterations, then do a linear regression, but this is clearly a macro benchmark, so don't worry about either loop or time measurement overhead.

println!("average {} time: {} millis.", speedtest.to_uppercase(), sum / 1000);
} else {
run_ui();
run_ui_toml();
run_ui_cargo();
run_internal_tests();
rustfix_coverage_known_exceptions_accuracy();
ui_cargo_toml_metadata();
}
}

const RUSTFIX_COVERAGE_KNOWN_EXCEPTIONS: &[&str] = &[