Scenario Failure: forest_fire_mapping #249

@github-actions

Description

Benchmark scenario ID: forest_fire_mapping
Benchmark scenario definition: https://github.com/ESA-APEx/apex_algorithms/blob/bde2fcb99e62f4394a5830a88d5ea772cc440a3f/algorithm_catalog/vito/random_forest_firemapping/benchmark_scenarios/random_forest_firemapping.json
openEO backend: openeo.vito.be

GitHub Actions workflow run: https://github.com/ESA-APEx/apex_algorithms/actions/runs/18038589257
Workflow artifacts: https://github.com/ESA-APEx/apex_algorithms/actions/runs/18038589257#artifacts

Test start: 2025-09-26 13:05:29.679468+00:00
Test duration: 0:26:59.690890
Test outcome: ❌ failed

Last successful test phase: download-reference
Failure in test phase: compare

Contact Information

Name: Pratichhya Sharma
Organization: VITO
Contact: via VITO (VITO Website, GitHub)

Process Graph

{
  "randomforestfiremapping1": {
    "arguments": {
      "padding_window_size": 33,
      "spatial_extent": {
        "coordinates": [
          [
            [
              -17.996638457335074,
              28.771993378019005
            ],
            [
              -17.960989271845406,
              28.822652746872745
            ],
            [
              -17.913144312372435,
              28.85454938652139
            ],
            [
              -17.842315009623224,
              28.83015783855478
            ],
            [
              -17.781805207936817,
              28.842353612538087
            ],
            [
              -17.728331429702315,
              28.74103487483061
            ],
            [
              -17.766795024572748,
              28.681932277834584
            ],
            [
              -17.75131577297855,
              28.624236885528937
            ],
            [
              -17.756944591740076,
              28.579206335436727
            ],
            [
              -17.838093395552082,
              28.451150708612
            ],
            [
              -17.871397239891113,
              28.480702007110015
            ],
            [
              -17.88969090086607,
              28.57404658490533
            ],
            [
              -17.957705794234517,
              28.658947934558352
            ],
            [
              -18.003674480786984,
              28.76167387695621
            ],
            [
              -18.003674480786984,
              28.76167387695621
            ],
            [
              -17.996638457335074,
              28.771993378019005
            ]
          ]
        ],
        "type": "Polygon"
      },
      "temporal_extent": [
        "2023-07-15",
        "2023-09-15"
      ]
    },
    "namespace": "https://raw.githubusercontent.com/ESA-APEx/apex_algorithms/0962bf79f836859e701fa7437307240ef689ff2e/algorithm_catalog/vito/random_forest_firemapping/openeo_udp/random_forest_firemapping.json",
    "process_id": "random_forest_firemapping",
    "result": true
  }
}
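The process graph above consists of a single node that invokes the `random_forest_firemapping` user-defined process (UDP) by its namespace URL. As a minimal sketch (not the harness's actual code; the helper name and argument subset are illustrative), such a node can be assembled like this:

```python
def build_udp_node(process_id: str, namespace: str, arguments: dict) -> dict:
    """Build a single-node openEO process graph that invokes a UDP
    published at a namespace URL, marking it as the result node."""
    node_id = f"{process_id.replace('_', '')}1"
    return {
        node_id: {
            "process_id": process_id,
            "namespace": namespace,
            "arguments": arguments,
            "result": True,
        }
    }


# Rebuild the scenario's node (spatial_extent omitted for brevity).
graph = build_udp_node(
    process_id="random_forest_firemapping",
    namespace=(
        "https://raw.githubusercontent.com/ESA-APEx/apex_algorithms/"
        "0962bf79f836859e701fa7437307240ef689ff2e/algorithm_catalog/vito/"
        "random_forest_firemapping/openeo_udp/random_forest_firemapping.json"
    ),
    arguments={"temporal_extent": ["2023-07-15", "2023-09-15"]},
)
```

Such a graph dict is what the benchmark passes to `connection.create_job(process_graph=...)` in the `create-job` phase of the traceback below.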

Error Logs

scenario = BenchmarkScenario(id='forest_fire_mapping', description='Forest Fire Mapping using Random Forest based on Sentinel-2 a.../apex_algorithms/algorithm_catalog/vito/random_forest_firemapping/benchmark_scenarios/random_forest_firemapping.json'))
connection_factory = <function connection_factory.<locals>.get_connection at 0x7fe0fbdbf6a0>
tmp_path = PosixPath('/home/runner/work/apex_algorithms/apex_algorithms/qa/benchmarks/tmp_path_root/test_run_benchmark_forest_fire0')
track_metric = <function track_metric.<locals>.track at 0x7fe0fbdbf9c0>
track_phase = <function track_phase.<locals>.track at 0x7fe0fbdbf7e0>
upload_assets_on_fail = <function upload_assets_on_fail.<locals>.collect at 0x7fe0fbdbfa60>
request = <FixtureRequest for <Function test_run_benchmark[forest_fire_mapping]>>

    @pytest.mark.parametrize(
        "scenario",
        [
            # Use scenario id as parameterization id to give nicer test names.
            pytest.param(uc, id=uc.id)
            for uc in get_benchmark_scenarios()
        ],
    )
    def test_run_benchmark(
        scenario: BenchmarkScenario,
        connection_factory,
        tmp_path: Path,
        track_metric,
        track_phase,
        upload_assets_on_fail,
        request,
    ):
        track_metric("scenario_id", scenario.id)

        with track_phase(phase="connect"):
            # Check if a backend override has been provided via cli options.
            override_backend = request.config.getoption("--override-backend")
            backend_filter = request.config.getoption("--backend-filter")
            if backend_filter and not re.match(backend_filter, scenario.backend):
                # TODO apply filter during scenario retrieval, but seems to be hard to retrieve cli param
                pytest.skip(
                    f"skipping scenario {scenario.id} because backend {scenario.backend} does not match filter {backend_filter!r}"
                )
            backend = scenario.backend
            if override_backend:
                _log.info(f"Overriding backend URL with {override_backend!r}")
                backend = override_backend

            connection: openeo.Connection = connection_factory(url=backend)

        with track_phase(phase="create-job"):
            # TODO #14 scenario option to use synchronous instead of batch job mode?
            job = connection.create_job(
                process_graph=scenario.process_graph,
                title=f"APEx benchmark {scenario.id}",
                additional=scenario.job_options,
            )
            track_metric("job_id", job.job_id)

        with track_phase(phase="run-job"):
            # TODO: monitor timing and progress
            # TODO: abort excessively long batch jobs? https://github.com/Open-EO/openeo-python-client/issues/589
            job.start_and_wait()
            # TODO: separate "job started" and run phases?

        with track_phase(phase="collect-metadata"):
            collect_metrics_from_job_metadata(job, track_metric=track_metric)

            results = job.get_results()
            collect_metrics_from_results_metadata(results, track_metric=track_metric)

        with track_phase(phase="download-actual"):
            # Download actual results
            actual_dir = tmp_path / "actual"
            paths = results.download_files(target=actual_dir, include_stac_metadata=True)

            # Upload assets on failure
            upload_assets_on_fail(*paths)

        with track_phase(phase="download-reference"):
            reference_dir = download_reference_data(
                scenario=scenario, reference_dir=tmp_path / "reference"
            )

        with track_phase(
            phase="compare", describe_exception=analyse_results_comparison_exception
        ):
            # Compare actual results with reference data
>           assert_job_results_allclose(
                actual=actual_dir,
                expected=reference_dir,
                tmp_path=tmp_path,
                rtol=scenario.reference_options.get("rtol", 1e-6),
                atol=scenario.reference_options.get("atol", 1e-6),
                pixel_tolerance=scenario.reference_options.get("pixel_tolerance", 0.0),
            )

tests/test_benchmarks.py:95:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

actual = PosixPath('/home/runner/work/apex_algorithms/apex_algorithms/qa/benchmarks/tmp_path_root/test_run_benchmark_forest_fire0/actual')
expected = PosixPath('/home/runner/work/apex_algorithms/apex_algorithms/qa/benchmarks/tmp_path_root/test_run_benchmark_forest_fire0/reference')

    def assert_job_results_allclose(
        actual: Union[BatchJob, JobResults, str, Path],
        expected: Union[BatchJob, JobResults, str, Path],
        *,
        rtol: float = _DEFAULT_RTOL,
        atol: float = _DEFAULT_ATOL,
        pixel_tolerance: float = _DEFAULT_PIXELTOL,
        tmp_path: Optional[Path] = None,
    ):
        """
        Assert that two job results sets are equal (with tolerance).

        :param actual: actual job results, provided as :py:class:`~openeo.rest.job.BatchJob` object,
            :py:meth:`~openeo.rest.job.JobResults` object or path to directory with downloaded assets.
        :param expected: expected job results, provided as :py:class:`~openeo.rest.job.BatchJob` object,
            :py:meth:`~openeo.rest.job.JobResults` object or path to directory with downloaded assets.
        :param rtol: relative tolerance
        :param atol: absolute tolerance
        :param pixel_tolerance: maximum fraction of pixels (in percent)
            that is allowed to be significantly different (considering ``atol`` and ``rtol``)
        :param tmp_path: root temp path to download results if needed.
            It's recommended to pass pytest's `tmp_path` fixture here
        :raises AssertionError: if not equal within the given tolerance

        .. versionadded:: 0.31.0

        .. warning::
            This function is experimental and subject to change.
        """
        issues = _compare_job_results(
            actual, expected, rtol=rtol, atol=atol, pixel_tolerance=pixel_tolerance, tmp_path=tmp_path
        )
        if issues:
>           raise AssertionError("\n".join(issues))
E           AssertionError: File set mismatch: {'job-results.json', 'openEO.tif'} != {'output.tif'}

/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/site-packages/openeo/testing/results.py:515: AssertionError
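The comparison fails before any pixel values are checked: the downloaded actual results contain `job-results.json` and `openEO.tif`, while the reference directory holds a single file named `output.tif`, so the file *sets* differ. A minimal sketch of this kind of name-set check (an assumption for illustration, not openeo's actual `_compare_job_results` implementation):

```python
from pathlib import Path
from typing import Optional


def file_set_mismatch(actual_dir, expected_dir) -> Optional[str]:
    """Return a mismatch message if the two directories contain
    different file names, or None if the name sets are equal."""
    actual = {p.name for p in Path(actual_dir).iterdir() if p.is_file()}
    expected = {p.name for p in Path(expected_dir).iterdir() if p.is_file()}
    if actual != expected:
        return f"File set mismatch: {actual} != {expected}"
    return None
```

This suggests the reference asset (stored as `RF_ForestFire.tif`, downloaded as `output.tif`) no longer matches the backend's output naming (`openEO.tif`); aligning the reference file name with the job output, or mapping names before comparison, would likely resolve this phase of the failure.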
----------------------------- Captured stdout call -----------------------------
0:00:00 Job 'j-250926130531497caaa23c64e66cdc8c': send 'start'
0:00:14 Job 'j-250926130531497caaa23c64e66cdc8c': queued (progress 0%)
0:00:19 Job 'j-250926130531497caaa23c64e66cdc8c': queued (progress 0%)
0:00:26 Job 'j-250926130531497caaa23c64e66cdc8c': queued (progress 0%)
0:00:34 Job 'j-250926130531497caaa23c64e66cdc8c': queued (progress 0%)
0:00:44 Job 'j-250926130531497caaa23c64e66cdc8c': queued (progress 0%)
0:00:56 Job 'j-250926130531497caaa23c64e66cdc8c': queued (progress 0%)
0:01:11 Job 'j-250926130531497caaa23c64e66cdc8c': running (progress 8.9%)
0:01:31 Job 'j-250926130531497caaa23c64e66cdc8c': running (progress 11.5%)
0:01:55 Job 'j-250926130531497caaa23c64e66cdc8c': running (progress 14.6%)
0:02:25 Job 'j-250926130531497caaa23c64e66cdc8c': running (progress 18.1%)
0:03:02 Job 'j-250926130531497caaa23c64e66cdc8c': running (progress 22.0%)
0:03:49 Job 'j-250926130531497caaa23c64e66cdc8c': running (progress 26.5%)
0:04:47 Job 'j-250926130531497caaa23c64e66cdc8c': running (progress 31.4%)
0:05:47 Job 'j-250926130531497caaa23c64e66cdc8c': running (progress 35.8%)
0:06:48 Job 'j-250926130531497caaa23c64e66cdc8c': running (progress 39.7%)
0:07:48 Job 'j-250926130531497caaa23c64e66cdc8c': running (progress 43.1%)
0:08:48 Job 'j-250926130531497caaa23c64e66cdc8c': running (progress 46.2%)
0:09:48 Job 'j-250926130531497caaa23c64e66cdc8c': running (progress 49.0%)
0:10:48 Job 'j-250926130531497caaa23c64e66cdc8c': running (progress 51.4%)
0:11:48 Job 'j-250926130531497caaa23c64e66cdc8c': running (progress 53.7%)
0:12:49 Job 'j-250926130531497caaa23c64e66cdc8c': running (progress 55.8%)
0:13:49 Job 'j-250926130531497caaa23c64e66cdc8c': running (progress 57.6%)
0:14:49 Job 'j-250926130531497caaa23c64e66cdc8c': running (progress 59.4%)
0:15:50 Job 'j-250926130531497caaa23c64e66cdc8c': running (progress 61.0%)
0:16:51 Job 'j-250926130531497caaa23c64e66cdc8c': running (progress 62.5%)
0:17:51 Job 'j-250926130531497caaa23c64e66cdc8c': running (progress 63.8%)
0:18:51 Job 'j-250926130531497caaa23c64e66cdc8c': running (progress 65.1%)
0:19:51 Job 'j-250926130531497caaa23c64e66cdc8c': running (progress 66.3%)
0:20:51 Job 'j-250926130531497caaa23c64e66cdc8c': running (progress 67.4%)
0:21:51 Job 'j-250926130531497caaa23c64e66cdc8c': running (progress 68.4%)
0:22:52 Job 'j-250926130531497caaa23c64e66cdc8c': running (progress 69.4%)
0:23:52 Job 'j-250926130531497caaa23c64e66cdc8c': running (progress 70.3%)
0:24:52 Job 'j-250926130531497caaa23c64e66cdc8c': running (progress 71.1%)
0:25:52 Job 'j-250926130531497caaa23c64e66cdc8c': running (progress 72.0%)
0:26:53 Job 'j-250926130531497caaa23c64e66cdc8c': finished (progress 100%)
------------------------------ Captured log call -------------------------------
INFO     conftest:conftest.py:131 Connecting to 'openeo.vito.be'
INFO     openeo.config:config.py:193 Loaded openEO client config from sources: []
INFO     conftest:conftest.py:144 Checking for auth_env_var='OPENEO_AUTH_CLIENT_CREDENTIALS_TERRASCOPE' to drive auth against url='openeo.vito.be'.
INFO     conftest:conftest.py:148 Extracted provider_id='terrascope' client_id='openeo-apex-service-account' from auth_env_var='OPENEO_AUTH_CLIENT_CREDENTIALS_TERRASCOPE'
INFO     openeo.rest.connection:connection.py:255 Found OIDC providers: ['egi', 'terrascope', 'CDSE']
INFO     openeo.rest.auth.oidc:oidc.py:404 Doing 'client_credentials' token request 'https://sso.terrascope.be/auth/realms/terrascope/protocol/openid-connect/token' with post data fields ['grant_type', 'client_id', 'client_secret', 'scope'] (client_id 'openeo-apex-service-account')
INFO     openeo.rest.connection:connection.py:352 Obtained tokens: ['access_token', 'id_token']
INFO     openeo.rest.auth.oidc:oidc.py:404 Doing 'client_credentials' token request 'https://sso.terrascope.be/auth/realms/terrascope/protocol/openid-connect/token' with post data fields ['grant_type', 'client_id', 'client_secret', 'scope'] (client_id 'openeo-apex-service-account')
INFO     openeo.rest.connection:connection.py:352 Obtained tokens: ['access_token', 'id_token']
INFO     openeo.rest.connection:connection.py:703 OIDC access token expired (403 TokenInvalid). Obtained new access token (grant 'client_credentials').
INFO     openeo.rest.job:job.py:436 Downloading Job result asset 'openEO.tif' from https://openeo.vito.be/openeo/1.2/jobs/j-250926130531497caaa23c64e66cdc8c/results/assets/YWIwNGFjMGMtMzI2OS00MWMyLWJkODgtZTIxNTQxNTc2NDQw/0a30a521f8ef7cad0b7af01777185b16/openEO.tif?expires=1759498345 to /home/runner/work/apex_algorithms/apex_algorithms/qa/benchmarks/tmp_path_root/test_run_benchmark_forest_fire0/actual/openEO.tif
INFO     apex_algorithm_qa_tools.scenarios:util.py:345 Downloading reference data for scenario.id='forest_fire_mapping' to reference_dir=PosixPath('/home/runner/work/apex_algorithms/apex_algorithms/qa/benchmarks/tmp_path_root/test_run_benchmark_forest_fire0/reference'): start 2025-09-26 13:32:27.995410
INFO     apex_algorithm_qa_tools.scenarios:util.py:345 Downloading source='https://s3.waw3-1.cloudferro.com/swift/v1/apex-examples/RF-ForestFire/RF_ForestFire.tif' to path=PosixPath('/home/runner/work/apex_algorithms/apex_algorithms/qa/benchmarks/tmp_path_root/test_run_benchmark_forest_fire0/reference/output.tif'): start 2025-09-26 13:32:27.995738
INFO     apex_algorithm_qa_tools.scenarios:util.py:351 Downloading source='https://s3.waw3-1.cloudferro.com/swift/v1/apex-examples/RF-ForestFire/RF_ForestFire.tif' to path=PosixPath('/home/runner/work/apex_algorithms/apex_algorithms/qa/benchmarks/tmp_path_root/test_run_benchmark_forest_fire0/reference/output.tif'): end 2025-09-26 13:32:29.368238, elapsed 0:00:01.372500
INFO     apex_algorithm_qa_tools.scenarios:util.py:351 Downloading reference data for scenario.id='forest_fire_mapping' to reference_dir=PosixPath('/home/runner/work/apex_algorithms/apex_algorithms/qa/benchmarks/tmp_path_root/test_run_benchmark_forest_fire0/reference'): end 2025-09-26 13:32:29.368423, elapsed 0:00:01.373013
INFO     openeo.testing.results:results.py:418 Comparing job results: PosixPath('/home/runner/work/apex_algorithms/apex_algorithms/qa/benchmarks/tmp_path_root/test_run_benchmark_forest_fire0/actual') vs PosixPath('/home/runner/work/apex_algorithms/apex_algorithms/qa/benchmarks/tmp_path_root/test_run_benchmark_forest_fire0/reference')
