Merge branch '45-plan-de-validation' into 'release'
Resolve "Plan de validation : étape 1 affinage"

See merge request 3d/PandoraBox/pandora2d!47
lecontm committed Feb 27, 2024
2 parents 3084519 + 200324a commit 82cfdeb
Showing 36 changed files with 896 additions and 217 deletions.
4 changes: 3 additions & 1 deletion .github/workflows/pandora2d_ci.yml
@@ -23,6 +23,7 @@ jobs:
pip install pytest
pip install pytest-cov
pip install pytest-mock
pip install pytest-html
pip install codecov
pip install build
- name: Install Pandora2d
@@ -31,7 +32,8 @@
- name: Test with pytest
run: |
export NUMBA_DISABLE_JIT="1"
pytest --junitxml=pytest-report.xml --cov-config=.coveragerc --cov-report xml --cov
pytest -m "unit_tests" --html=unit-test-report.html --cov-config=.coveragerc --cov-report xml --cov
pytest -m "functional_tests" --html=functional-test-report.html
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v1
- name: Create source distribution
3 changes: 3 additions & 0 deletions .gitignore
@@ -51,6 +51,9 @@ coverage.xml
.pytest_cache/
cover/

# Monitoring test
.pymon

# Translations
*.mo
*.pot
34 changes: 30 additions & 4 deletions Makefile
@@ -61,8 +61,32 @@ install: venv ## install pandora2D (pip editable mode) without plugins
## Test section

.PHONY: test
test: install ## run all tests (except notebooks) + coverage (source venv before)
@${PANDORA2D_VENV}/bin/pytest --junitxml=pytest-report.xml --cov-config=.coveragerc --cov-report xml --cov
test: install test-unit test-functional ## run unit tests and functional tests

.PHONY: test-all
test-all: install test-unit test-functional test-resource test-performance ## run all tests

.PHONY: test-unit
test-unit: install ## run unit tests only (for dev) + coverage (source venv before)
@echo "Run unit tests"
@${PANDORA2D_VENV}/bin/pytest -m "unit_tests" --html=unit-test-report.html --cov-config=.coveragerc --cov-report xml --cov

.PHONY: test-functional
test-functional: install ## run functional tests only (for dev and validation plan)
@echo "Run functional tests"
@${PANDORA2D_VENV}/bin/pytest -m "functional_tests" --html=functional-test-report.html

.PHONY: test-resource
test-resource: install ## run resource tests only (for validation plan)
@echo "Run resource tests"
@rm -f tests/resource_tests/.pymon
@${PANDORA2D_VENV}/bin/pytest -m "resource_tests and not metrics" --db tests/resource_tests/.pymon
@${PANDORA2D_VENV}/bin/pytest tests/resource_tests/test_metrics.py --database tests/resource_tests/.pymon --html=resource-test-report.html

.PHONY: test-performance
test-performance: install ## run performance tests only (for validation plan)
@echo "Run performance tests"
@${PANDORA2D_VENV}/bin/pytest -m "performance_tests" --html=performance-test-report.html

## Code quality, linting section

@@ -94,7 +118,7 @@ lint/mypy: ## check linting with mypy
.PHONY: lint/pylint
lint/pylint: ## check linting with pylint
@echo "+ $@"
@set -o pipefail; ${PANDORA2D_VENV}/bin/pylint pandora2d "tests/*" --rcfile=.pylintrc --output-format=parseable --msg-template="{path}:{line}: [{msg_id}({symbol}), {obj}] {msg}" # | tee pylint-report.txt # pipefail to propagate pylint exit code in bash
@set -o pipefail; ${PANDORA2D_VENV}/bin/pylint pandora2d tests ./*.py --rcfile=.pylintrc --output-format=parseable --msg-template="{path}:{line}: [{msg_id}({symbol}), {obj}] {msg}" # | tee pylint-report.txt # pipefail to propagate pylint exit code in bash

## Documentation section

@@ -158,9 +182,11 @@ clean-test:
@rm -rf coverage.xml
@rm -fr htmlcov/
@rm -fr .pytest_cache
@rm -f pytest-report.xml
@rm -f pylint-report.txt
@rm -f debug.log
@rm -f .pymon
@rm -f tests/resource_tests/.pymon
@rm -f *-test-report.html

.PHONY: clean-doc
clean-doc:
2 changes: 1 addition & 1 deletion pandora2d/img_tools.py
@@ -30,9 +30,9 @@

import xarray as xr
import numpy as np
from scipy.ndimage import shift

import pandora.img_tools as pandora_img_tools
from scipy.ndimage import shift


class Datasets(NamedTuple):
10 changes: 8 additions & 2 deletions pytest.ini
@@ -1,4 +1,10 @@
[pytest]
addopts = -ra
addopts = -ra --parametrization-explicit
testpaths = tests
norecursedirs = .git doc conf .gitlab
markers =
unit_tests: unit tests
functional_tests: functional tests
resource_tests: resource tests
performance_tests: accuracy tests
norecursedirs = .git doc conf .gitlab
generate_report_on_test = True
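The markers declared in this `pytest.ini` can then be applied to test functions so that the per-category `pytest -m …` invocations select them; a minimal sketch, with a hypothetical test name:

```python
import pytest


@pytest.mark.unit_tests
def test_example_addition():
    """Hypothetical test carrying the `unit_tests` marker."""
    assert 1 + 1 == 2
```

Running `pytest -m "unit_tests"` then collects only the tests carrying that marker.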
2 changes: 2 additions & 0 deletions setup.cfg
@@ -77,6 +77,8 @@ dev =
pytest
pytest-cov
pytest-mock
pytest-monitor
pytest-html
pre-commit
isort>=5.8.0 # Check imports
black>=21.5b0 # PEP8 format code
79 changes: 79 additions & 0 deletions tests/README.md
@@ -0,0 +1,79 @@
# Description
This directory contains the different tests to validate Pandora2D.

# Tests

## Directory
The Pandora2D test cases correspond to the following tree structure:

- unit_tests : TU
- A TU validates a function as a unit.
- A TU does not necessarily meet a customer requirement because a function can meet a technical need (as opposed to a user need).
- functional_tests : TF
- A TF validates the end-to-end operation of Pandora2D and therefore, potentially, the combination of several parameters.
- A TF meets one or more customer requirements (user needs).
- A matrix representing the scenarios and operating requirements to be met is presented in each sub-directory.
- performance_tests : TP
- A TP validates the accuracy that Pandora2D can achieve in the field (essentially, the accuracy of alignment).
- A TP can meet a customer requirement or be provided for information purposes.
- resource_tests : TR
- A TR validates the machine resources (time/occupancy and memory) required by Pandora2D for end-to-end operation.
- A TR may meet a customer requirement or be provided for information purposes.

## Functionality
A functionality is the primary function validated by a test case. The list below shows the different functionalities tested:

- target_grid : the user can use a ROI (region of interest) or a step
- mode : the type of search
- criteria : invalidity indicators raised depending on the computation at the pixel in question (use of masks, disparity range too large, etc.)
- matching_cost : the stage where a similarity score is computed between the two images
- disparity : selection of the best similarity score; for the moment only the WTA (Winner Takes All) method is available
- refinement : refines the disparity to smooth outliers
- validation : a criterion that gives the user additional confidence in the result

These different functionalities are then divided into sub-functionalities which will be described in the xx file.
A folder is created for each functionality/sub-functionality.

## Docstring template for test
For each test, a full description with name, id and data is included in the function's docstring. Below is the template:

```python
"""
<brief description of the test>
ID : <test number>
name : <category><function tested><ID>
input : <data name>
"""
```

where `category` can take one of the following values:
- TU
- TF
- TP
- TR
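For instance, a unit test following this template could look like the sketch below (the test number, name and input data are hypothetical):

```python
def test_refinement_interpolation():
    """
    Check that refinement smooths outlier disparities.
    ID : 01
    name : TUrefinement01
    input : cones
    """
    # Hypothetical body: the real test would run the refinement step
    # on the named input data and assert on the refined disparities.
```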

## Test execution
There are several options for launching the various tests:

1. Using the targets defined in the `Makefile`:
- `make test` : run unit tests and functional tests
- `make test-all` : run all tests in this directory and sub-directory
- `make test-unit` : run unit tests only
- `make test-functional` : run functional tests only
- `make test-resource` : run resource tests only
- `make test-performance` : run performance tests only

2. Using pytest from the command line with the `venv` virtual environment:
```shell
source venv/bin/activate ## activate the venv
pytest -m "<target_1> or <target_2>" --parametrization-explicit -vv ## Using a target defined in pytest.ini
```

## Monitoring test
The aim is to check the execution time of certain tests as well as their CPU and memory load. pytest-monitor is used for this; see its [PyPI page](https://pypi.org/project/pytest-monitor/) for more information.

:exclamation: Only the tests in the resource_tests directory use monitoring.
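pytest-monitor stores its measurements in a small SQLite database (the `.pymon` file produced by `make test-resource`), so it can be inspected with the Python standard library. A minimal sketch, assuming the database path used in the Makefile; the actual table and column names are defined by pytest-monitor's own schema:

```python
import os
import sqlite3


def list_pymon_tables(db_path):
    """Return the table names found in a pytest-monitor SQLite database."""
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table'"
        ).fetchall()
    return [name for (name,) in rows]


if __name__ == "__main__":
    # Path produced by `make test-resource`.
    db_path = "tests/resource_tests/.pymon"
    if os.path.exists(db_path):
        print(list_pymon_tables(db_path))
```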

# Data (Coming soon)
At present, only 'cone' images are used for unit tests.
112 changes: 0 additions & 112 deletions tests/common.py

This file was deleted.

