Merge pull request #35 from Open-EO/issue30-central-run-options
Centralize run options (and their docs) better
soxofaan authored Jan 23, 2024
2 parents e44c22e + 2bd1d1f commit f14d2b6
Showing 6 changed files with 161 additions and 111 deletions.
87 changes: 87 additions & 0 deletions README.md
@@ -73,6 +73,93 @@ or through an environment variable `OPENEO_BACKEND_URL`

If no explicit protocol is specified, `https://` will be assumed automatically.
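For illustration, both ways of pointing the test suite at a back-end look roughly as follows
(`openeo.cloud` is just the example back-end used elsewhere in this README):

```bash
# Pass the back-end on the command line
# (no protocol given, so https:// is assumed; the short option -U is equivalent):
pytest --openeo-backend-url=openeo.cloud

# Or set it once through the environment variable:
export OPENEO_BACKEND_URL=openeo.cloud
pytest
```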


## Additional run options

In addition to the `--openeo-backend-url` option, there are several other options
to control some aspects of the test suite run.

### Process selection

Various tests depend on the availability of certain openEO processes.
It is possible to only run tests for a subset of processes with the following
process selection options (a short usage example follows the list):

- `--processes` to define a comma-separated list of processes to test against.
  - Example: `--processes=min,load_stac,apply,reduce_dimension`.
- `--process-levels` to select whole groups of processes based on
  predefined [process profiles](https://openeo.org/documentation/1.0/developers/profiles/processes.html),
  specified as a comma-separated list of levels.
  - Example: `--process-levels=L1,L2`.
  - A level does not imply other levels, so each desired level must be specified explicitly.
    For example, L2 does **not** include L1 automatically.
- `--experimental`: Enables tests for experimental processes.
  By default, experimental processes will be skipped.
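
For example (an illustrative sketch; note that the `--processes` and `--process-levels` options
are documented as mutually exclusive in their help texts, so they are used separately here):

```bash
# Only run tests that involve the listed processes,
# including experimental ones (which are skipped by default):
pytest --openeo-backend-url=openeo.cloud --experimental --processes=min,max,apply

# Or select whole groups of processes through profile levels
# (levels are not cumulative, so list each desired level explicitly):
pytest --openeo-backend-url=openeo.cloud --process-levels=L1,L2
```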


### Runner for individual process testing

One module of the test suite is dedicated to **individual process testing**,
where each process is tested individually with a given set of inputs and expected outputs.
Because there are a lot of these tests (on the order of thousands),
it is very time-consuming to run them all through the standard, HTTP-based openEO REST API.
As a countermeasure, the test suite ships with several experimental **runners**
that aim to execute the tests directly against a (locally running) back-end implementation
to eliminate HTTP-related and other overhead.
Note that this typically requires additional dependencies to be installed in your test environment.
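
For example, for the `dask` and `vito` runners described below, installing those extra dependencies
might look roughly like this (the distribution names and install sources are assumptions here;
check the respective projects for authoritative installation instructions):

```bash
# Assumed install commands, for illustration only:
# dependencies for the Dask runner
pip install openeo-processes-dask
# dependencies for the VITO runner, installed straight from its GitHub repository
pip install git+https://github.com/Open-EO/openeo-python-driver.git
```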

The desired runner can be specified through the `--runner` option,
which currently accepts one of the following values:

- `skip` (**default**): Skip all individual process tests.
  - This is the default to avoid accidentally running a very heavy/costly test suite.
- `http`: Run the individual process tests through the standard openEO REST API.
  - Requires `--openeo-backend-url` to be set as described above.
  - As noted above, this will very likely result in a very heavy test suite run by default.
    It is therefore recommended to limit the test suite scope in some way:
    e.g. through a limited process selection with `--processes`,
    or by running against a dedicated/localhost deployment.
  - Another limitation of this runner is that not all process tests
    can be executed, as some input-output pairs are not JSON-encodable.
    These tests will be marked as skipped.
- `dask`: Executes the tests directly via the [openEO Dask implementation](https://github.com/Open-EO/openeo-processes-dask) (as used by EODC, EURAC, and others).
  - Requires the [openeo_processes_dask](https://github.com/Open-EO/openeo-processes-dask) package to be installed in the test environment.
  - Covers all implemented processes.
- `vito`: Executes the tests directly via the
  [openEO Python Driver implementation](https://github.com/Open-EO/openeo-python-driver) (as used by CDSE, VITO/Terrascope, and others).
  - Requires the [openeo_driver](https://github.com/Open-EO/openeo-python-driver) package to be installed in the test environment.
  - Only covers a subset of processes due to the underlying architecture of the back-end implementation.
    In particular, it only covers the pure Python code paths, but not the PySpark-related aspects.

See [openeo_test_suite/lib/process_runner](https://github.com/Open-EO/openeo-test-suite/tree/main/src/openeo_test_suite/lib/process_runner)
for more details about these runners and inspiration to implement your own runner.


#### Usage examples of individual process testing with runner option

The individual process tests can be run by specifying `src/openeo_test_suite/tests/processes/processing` as the test path.
Some usage examples with the different options discussed above:

```bash
# Basic default behavior: run all individual process tests,
# but with the default runner (skip), so no tests will actually be executed.
pytest src/openeo_test_suite/tests/processes

# Run tests for a subset of processes with the HTTP runner
# against the openEO Platform backend at openeo.cloud
pytest --runner=http --openeo-backend-url=openeo.cloud --processes=min,max src/openeo_test_suite/tests/processes/processing

# Run tests for a subset of processes with the VITO runner
pytest --runner=vito --process-levels=L1,L2,L2A src/openeo_test_suite/tests/processes/processing

# Run all individual process tests with the Dask runner
pytest --runner=dask src/openeo_test_suite/tests/processes
```





## Authentication of the basic `connection` fixture

The test suite provides a basic `connection` fixture
2 changes: 1 addition & 1 deletion conftest.py
```diff
@@ -1,3 +1,3 @@
 pytest_plugins = [
-    "openeo_test_suite.lib.backend_under_test",
+    "openeo_test_suite.lib.pytest_plugin",
 ]
```
28 changes: 6 additions & 22 deletions src/openeo_test_suite/lib/backend_under_test.py
```diff
@@ -85,40 +85,24 @@ def get_backend_url(config: pytest.Config, required: bool = False) -> Union[str,
 _backend_under_test: Union[None, _BackendUnderTest] = None
 
 
-def pytest_addoption(parser):
-    """Implementation of `pytest_addoption` hook."""
-    parser.addoption(
-        "-U",
-        "--openeo-backend-url",
-        action="store",
-        default=None,
-        help="The openEO backend URL to connect to.",
-    )
-
-
-def pytest_configure(config):
-    """Implementation of `pytest_configure` hook."""
+def set_backend_under_test(backend: _BackendUnderTest):
     global _backend_under_test
     assert _backend_under_test is None
-    backend_url = get_backend_url(config)
-    if backend_url is None:
-        _backend_under_test = NoBackend()
-    else:
-        connection = openeo.connect(url=backend_url, auto_validate=False)
-        _backend_under_test = HttpBackend(connection=connection)
+    assert isinstance(backend, _BackendUnderTest)
+    _backend_under_test = backend
 
 
-def _get_backend_under_test() -> _BackendUnderTest:
+def get_backend_under_test() -> _BackendUnderTest:
     global _backend_under_test
     assert isinstance(_backend_under_test, _BackendUnderTest)
     return _backend_under_test
 
 
 @functools.lru_cache
 def get_collection_ids() -> List[str]:
-    return _get_backend_under_test().list_collection_ids()
+    return get_backend_under_test().list_collection_ids()
 
 
 @functools.lru_cache
 def get_process_ids() -> List[str]:
-    return _get_backend_under_test().list_process_ids()
+    return get_backend_under_test().list_process_ids()
```
67 changes: 67 additions & 0 deletions src/openeo_test_suite/lib/pytest_plugin.py
@@ -0,0 +1,67 @@
```python
import argparse

import openeo

from openeo_test_suite.lib.backend_under_test import (
    HttpBackend,
    NoBackend,
    get_backend_url,
    set_backend_under_test,
)


def pytest_addoption(parser):
    """Implementation of `pytest_addoption` hook."""
    parser.addoption(
        "-U",
        "--openeo-backend-url",
        action="store",
        default=None,
        help="The openEO backend URL to connect to.",
    )

    parser.addoption(
        "--process-levels",
        action="store",
        default="",
        help="The openEO process profiles you want to test against, e.g. 'L1,L2,L2A'. Mutually exclusive with --processes.",
    )
    parser.addoption(
        "--processes",
        action="store",
        default="",
        help="The openEO processes you want to test against, e.g. 'apply,reduce_dimension'. Mutually exclusive with --process-levels.",
    )

    parser.addoption(
        "--experimental",
        type=bool,
        action=argparse.BooleanOptionalAction,
        default=False,
        help="Run tests for experimental functionality or not. By default the tests will be skipped.",
    )

    parser.addoption(
        "--runner",
        action="store",
        default="skip",
        help="A specific test runner to use for individual process tests. If not provided, uses a default HTTP API runner.",
    )

    parser.addoption(
        "--s2-collection",
        action="store",
        default=None,
        help="The data collection to test against. It can be either a Sentinel-2 STAC Collection or the name of an openEO Sentinel-2 Collection provided by the back-end.",
    )


def pytest_configure(config):
    """Implementation of `pytest_configure` hook."""
    backend_url = get_backend_url(config)
    if backend_url is None:
        backend = NoBackend()
    else:
        connection = openeo.connect(url=backend_url, auto_validate=False)
        backend = HttpBackend(connection=connection)
    set_backend_under_test(backend)
```
34 changes: 0 additions & 34 deletions src/openeo_test_suite/tests/conftest.py
```diff
@@ -13,40 +13,6 @@
 _log = logging.getLogger(__name__)
 
 
-def pytest_addoption(parser):
-    parser.addoption(
-        "--experimental",
-        type=bool,
-        action=argparse.BooleanOptionalAction,
-        default=False,
-        help="Run tests for experimental functionality or not. By default the tests will be skipped.",
-    )
-    parser.addoption(
-        "--process-levels",
-        action="store",
-        default="",
-        help="The openEO process profiles you want to test against, e.g. 'L1,L2,L2A'. Mutually exclusive with --processes.",
-    )
-    parser.addoption(
-        "--processes",
-        action="store",
-        default="",
-        help="The openEO processes you want to test against, e.g. 'apply,reduce_dimension'. Mutually exclusive with --process-levels.",
-    )
-    parser.addoption(
-        "--runner",
-        action="store",
-        default="skip",
-        help="A specific test runner to use for individual process tests. If not provided, uses a default HTTP API runner.",
-    )
-    parser.addoption(
-        "--s2-collection",
-        action="store",
-        default=None,
-        help="The data collection to test against. It can be either a Sentinel-2 STAC Collection or the name of an openEO Sentinel-2 Collection provided by the back-end.",
-    )
-
-
 @pytest.fixture(scope="session")
 def skip_experimental(request) -> bool:
     """
```
54 changes: 0 additions & 54 deletions src/openeo_test_suite/tests/processes/README.md
@@ -2,57 +2,3 @@

- tests for validation of process metadata against the [openEO API](https://openeo.org/).
- individual process testing, e.g. based on input-output examples.

## Process Metadata tests

...

## Individual Process Testing

### Usage examples

Specify test path `src/openeo_test_suite/tests/processes/processing` to only run individual process tests.


```bash
# Basic default behavior: run all individual process tests,
# but with the default runner (skip), so no tests will actually be executed.
pytest src/openeo_test_suite/tests/processes

# Run tests for a subset of processes with the HTTP runner
# against the openEO Platform backend at openeo.cloud
pytest --runner=http --openeo-backend-url=openeo.cloud --processes=min,max src/openeo_test_suite/tests/processes/processing

# Run tests for a subset of processes with the VITO runner
pytest --runner=vito --process-levels=L1,L2,L2A src/openeo_test_suite/tests/processes/processing

# Run all individual process tests with the Dask runner
pytest --runner=dask src/openeo_test_suite/tests/processes
```

### Parameters

- `--runner`: The execution engine. One of:
  - `skip` (**default**) skip all individual process tests
  - `vito` (requires [openeo_driver](https://github.com/Open-EO/openeo-python-driver) package being installed in test environment)
  - `dask` (requires [openeo_processes_dask](https://github.com/Open-EO/openeo-processes-dask) package being installed in test environment)
  - `http` (requires `--openeo-backend-url` to be set)
- `--process-levels`: All [process profiles](https://openeo.org/documentation/1.0/developers/profiles/processes.html) to test against, separated by comma. You need to list all levels explicitly, e.g., L2 does **not** include L1 automatically. Example: `L1,L2,L2A`. By default tests against all processes.
- `--processes`: A list of processes to test against, separated by comma. Example: `apply,reduce_dimension`. By default tests against all processes.
- `--experimental`: Enables tests for experimental processes. By default experimental processes will be skipped.

### Runners

The individual process testing ships 3 experimental runners:

- [HTTP](../../lib/process_runner/http.py) (subset of processes due to JSON limitations)
  Executes all 1000 tests via the openEO API synchronously, expect 1000+ requests to hit your back-end in a short amount of time.
- [Dask](../../lib/process_runner/dask.py) (all implemented processes)
  Executes the tests directly via the [openEO Dask implementation](https://github.com/Open-EO/openeo-processes-dask) (used by EODC, EURAC, and others)
- [VITO](../../lib/process_runner/vito.py) (subset of processes due to the underlying architecture of the back-end implementation)
  Executes the tests directly via the [openEO Python Driver implementation](https://github.com/Open-EO/openeo-python-driver) (used by CDSE, VITO, and others)

You can implement your own runner by implementing the process runner base class:
<https://github.com/Open-EO/openeo-test-suite/blob/main/src/openeo_test_suite/lib/process_runner/base.py>

See the runners above for examples.
