FlakeGuard is a free, open-source tool that helps developers automatically detect, report, and track flaky Jest tests.
A flaky test is one that sometimes passes and sometimes fails for the same code, often due to nondeterministic factors such as timing issues, network variability, or reliance on external systems.
By identifying flaky tests, FlakeGuard helps users improve confidence in their test suites.
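For instance, a hypothetical test like the one below is flaky because its outcome depends on a timing race rather than on the code under test:

```ts
import { test, expect } from '@jest/globals';

// Hypothetical flaky test: the assertion races against an asynchronous write,
// so the exact same code can pass on one run and fail on the next.
test('reads the value after the async write', async () => {
  let value: string | null = null;
  setTimeout(() => { value = 'ready'; }, 50);              // write lands around 50 ms
  await new Promise((resolve) => setTimeout(resolve, 45)); // wait is sometimes too short
  expect(value).toBe('ready');                             // outcome depends on scheduling
});
```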
Install the FlakeGuard npm package as a dev dependency by running:

```
npm i flake-guard --save-dev
```
To run FlakeGuard in your project, execute:

```
npx flake-guard <filename>
```

Replace `<filename>` with the name of the test file that you want to examine.
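For example, to analyze a hypothetical test file named `sum.test.js`, you would run:

```
npx flake-guard sum.test.js
```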
FlakeGuard analyzes your tests for flakiness by executing multiple test runs and comparing the results. The default number of test runs is 10, but this can be adjusted as described below.
In general, there is a time-versus-accuracy tradeoff: more test runs increase accuracy but take longer to complete.
To adjust FlakeGuard's configuration, create a file called `fg.config.json` in the root directory of your project. Below are the defaults, which can be overridden in your local `fg.config.json` file.
```json
{
  "runs": 10
}
```
For example, if you want to increase accuracy, you can increase the number of runs:
```json
{
  "runs": 100
}
```
Under the hood, the flake-guard npm package automates repeated runs of your test file. It parses the results locally and logs an object in your terminal that lists each of your test assertions with its pass/fail metrics. It also sends the raw Jest results objects to the FlakeGuard server for further analysis, which you can view at flakeguard.com.
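As a minimal sketch only, and not FlakeGuard's actual implementation, the core idea of repeated runs with local aggregation might look like the following, using Jest's programmatic `runCLI` API. The `measureFlakiness` function name and the overall structure are assumptions made for illustration:

```ts
// A rough sketch of the idea: run the same test file several times with
// Jest's programmatic API and count how often each assertion passes or fails.
import { runCLI } from 'jest';

async function measureFlakiness(testFile: string, runs: number) {
  const counts: Record<string, { passed: number; failed: number }> = {};

  for (let i = 0; i < runs; i++) {
    // Run Jest against the single test file; argv mirrors the CLI flags.
    const { results } = await runCLI(
      { _: [testFile], $0: 'flake-guard-sketch', silent: true } as any,
      [process.cwd()],
    );

    // Each file result contains the individual assertion results.
    for (const fileResult of results.testResults) {
      for (const assertion of fileResult.testResults) {
        const entry = (counts[assertion.fullName] ??= { passed: 0, failed: 0 });
        if (assertion.status === 'passed') entry.passed += 1;
        if (assertion.status === 'failed') entry.failed += 1;
      }
    }
  }

  // An assertion that both passed and failed across identical runs is flaky.
  return Object.entries(counts)
    .filter(([, c]) => c.passed > 0 && c.failed > 0)
    .map(([name, c]) => ({ name, ...c }));
}
```

Counting per-assertion outcomes across identical runs is what lets a tool distinguish a consistently failing test from a flaky one.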
The flake-guard npm package pairs with the FlakeGuard web application. After the package runs in the terminal, the user has the option to press Enter to send the results to the FlakeGuard server and open the FlakeGuard app in the browser. The user is then directed to a page where they can either view a one-time, simplified version of the user dashboard, or log in via GitHub to view advanced metrics and save their data, allowing them to follow the evolution of their test suite over time.
We welcome feedback, new ideas, and contributors!
Some features next in line for development include:
- Allowing users to organize their stored results by filename
- Incorporating Jest's code coverage metrics to visualize test suite coverage and track changes over time
- A history page where users can review previous results individually
- Further tools to help users mitigate test flake, such as pinpointing test failure points and generating potential solutions
| Name | Connect with Us |
| --- | --- |
| Ashley Hannigan | LinkedIn - GitHub |
| Brendan Xiong | LinkedIn - GitHub |
| Tommy Martinez | LinkedIn - GitHub |
| Paloma Reynolds | LinkedIn - GitHub |
| Will Suto | LinkedIn - GitHub |