## Releasing new versions

Go to Actions, select the 'Release a new version' workflow, click 'Run workflow', enter the release version and run it.
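For orientation, the release workflow is triggered manually via `workflow_dispatch`. A minimal sketch of what the trigger in `release.yml` might look like is shown below; the input name `version` is an assumption, the real workflow may name it differently.

```yaml
# Hypothetical sketch of the manual trigger in release.yml; the actual
# input name and surrounding structure may differ.
on:
  workflow_dispatch:
    inputs:
      version:
        description: 'Release version'
        required: true
```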

## Running tests

Short tests run automatically on every push. To run the long tests on GitHub, go to Actions, select the 'Long running automated tests' workflow and run it on the desired branch. When tests run, temporary folders are used instead of the standard ZebraZoom data folders, so tests won't interfere with existing local data and won't leave any files behind.
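As a hedged sketch only (the real `long_test.yml` contains more setup and differs in details), the long-test workflow boils down to a manually triggered job that runs pytest with the `--long` flag:

```yaml
# Rough sketch of long_test.yml; the actual workflow differs in details
# (e.g. additional setup steps and the Python version matrix).
name: Long running automated tests
on:
  workflow_dispatch:
jobs:
  run_tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.13'
      - run: pip install pytest pytest-qt pytest-cov
      - run: python -m pytest --long -s test/
```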

To run tests locally, install the dependencies with `pip install pytest pytest-qt pytest-cov` and then run tests using `python -m pytest --long -s test/`. This command will run all tests; to skip long tests, simply omit the `--long` argument.

## Generating coverage report

A coverage report can be generated by specifying additional arguments when running the tests, for example `python -m pytest --cov-report html --cov=zebrazoom --long -s test/`. To view the generated report, open `index.html` in the `htmlcov` folder in your browser.

## Updating tests

When adding a new parameter, simply run `python -m pytest --store-results -s test/` locally. This will create the folder `ZebraZoom/test/ui/dataAnalysis/generated_results`, mirroring the structure of the `expected_results` folder, and automatically check whether all existing parameters are unchanged.

If the tests pass without errors, the existing parameters were not changed, and the only thing left to do is to manually check the values of the new parameter (in most cases, checking `test_kinematic_parameters/allBoutsMixed` should suffice). If that also looks good, all contents of the `generated_results` folder can be copied into the `expected_results` folder and committed. If any of the tests fail with an error, one of the existing parameters was also affected by the changes, and the generated file has to be manually inspected and compared to the corresponding file in the `expected_results` folder.

When modifying the calculation of existing parameters, the same procedure should be followed, but the tests will always fail in this case, so the generated files have to be checked manually.

## Investigating test failures

If a test fails in the `_checkResults` method, on the line `assert f1.read() == f2.read()`, it means the parameters were not calculated correctly. Errors elsewhere are most likely related to the GitHub runner, but the only way to be sure is to run the tests locally and confirm they pass; if an error elsewhere persists, it could be caused by changes in an external library. If multiple tests fail, it's usually best to check the first failing test, since the others could have been caused by that failure.

## Updating Python versions in GitHub Actions

To update the Python versions on which automated tests run, simply modify the `python_version` variable under `jobs.run_tests.strategy.matrix` in `test.yml` and `long_test.yml`; it contains the list of Python versions (major.minor) on which tests will run (e.g. `[3.9, 3.13]`; see the sketch below). To change the Python version used for releases, change the following variable in `release.yml`, under `env` (near the top of the file):

- `FULL_PYTHON_VERSION`: the full Python version to use for the release (e.g. `3.13.0`)
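For reference, the relevant pieces of the workflow files look roughly like this (a sketch; the exact surrounding keys in the real files may differ):

```yaml
# test.yml / long_test.yml (sketch): the list of Python versions to test on
jobs:
  run_tests:
    strategy:
      matrix:
        python_version: [3.9, 3.13]
```

```yaml
# release.yml (sketch): the Python version used to build the release
env:
  FULL_PYTHON_VERSION: '3.13.0'
```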

## Updating macOS runner versions for the release action

The release action uses the oldest supported macOS version to ensure maximum compatibility. Once the currently used version reaches its end of life, the action should be updated to use the new oldest supported version. To do that, simply search for 'deploy_mac' and 'deploy_mac_legacy' in `release.yml` and update the `runs-on` value for both jobs.
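For example, if the oldest supported runner image changed, the update would touch roughly these lines (the macOS version shown here is purely illustrative):

```yaml
# release.yml (sketch): only the runs-on keys are relevant here; the real
# jobs contain many more keys and may use different runner labels.
jobs:
  deploy_mac:
    runs-on: macos-14
    # ...
  deploy_mac_legacy:
    runs-on: macos-14
    # ...
```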