Exclude zappend.levels from coverage (3)
forman committed Sep 17, 2024
1 parent e61af00 commit 259ac6d
Showing 3 changed files with 26 additions and 2 deletions.
16 changes: 16 additions & 0 deletions CHANGES.md
@@ -1,3 +1,19 @@
## Version 0.8.0 (in development)

* Added experimental function `zappend.levels.write_levels()` that generates
  datasets using the
  [multi-level dataset format](https://xcube.readthedocs.io/en/latest/mldatasets.html)
  as specified by
  [xcube](https://github.com/xcube-dev/xcube).
  It resembles the `store.write_data(cube, "<name>.levels", ...)` method
  provided by the xcube filesystem data stores ("file", "s3", "memory", etc.).
  The zappend version may be used for potentially very large datasets in terms
  of dimension sizes or for datasets with a very large number of chunks.
  It is considerably slower than the xcube version (which basically uses
  `xarray.Dataset.to_zarr()` for each resolution level), but should run
  robustly with stable memory consumption.
  The function requires the `xcube` package to be installed. (#19)

## Version 0.7.1 (from 2024-05-30)

* The function `zappend.api.zappend()` now returns the number of slices
2 changes: 1 addition & 1 deletion zappend/__init__.py
@@ -2,4 +2,4 @@
# Permissions are hereby granted under the terms of the MIT License:
# https://opensource.org/licenses/MIT.

__version__ = "0.7.1"
__version__ = "0.8.0.dev0"
10 changes: 9 additions & 1 deletion zappend/levels.py
@@ -32,7 +32,15 @@ def write_levels(
as specified by
[xcube](https://github.com/xcube-dev/xcube).
The source dataset is opened and subdivided into dataset slices
It resembles the `store.write_data(cube, "<name>.levels", ...)` method
provided by the xcube filesystem data stores ("file", "s3", "memory", etc.).
The zappend version may be used for potentially very large datasets in terms
of dimension sizes or for datasets with a very large number of chunks.
It is considerably slower than the xcube version (which basically uses
`xarray.Dataset.to_zarr()` for each resolution level), but should run
robustly with stable memory consumption.
The function opens the source dataset and subdivides it into dataset slices
along the append dimension given by `append_dim`, which defaults
to `"time"`. The slice size in the append dimension is one.
Each slice is downsampled to the number of levels and each slice level
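The docstring above describes the core strategy: subdivide the source dataset into size-1 slices along the append dimension, then downsample each slice to the configured number of levels. The following is a minimal, self-contained sketch of that idea in plain `xarray`/`numpy`, not zappend's actual implementation; `num_levels` and the 2x coarsening factor per level are illustrative assumptions.

```python
# Conceptual sketch (not zappend's actual code): slice a dataset along the
# append dimension with slice size 1, then downsample each slice per level.
import numpy as np
import xarray as xr

ds = xr.Dataset(
    {"sst": (("time", "y", "x"), np.random.rand(3, 8, 8))},
    coords={"time": np.arange(3)},
)

num_levels = 3  # assumed; level 0 is the full-resolution level
shapes = []
for i in range(ds.sizes["time"]):
    level_ds = ds.isel(time=slice(i, i + 1))  # slice size 1 in append dim
    for level in range(num_levels):
        if level > 0:
            # each higher level halves the spatial resolution
            level_ds = level_ds.coarsen(x=2, y=2, boundary="trim").mean()
        # here zappend would append level_ds to the Zarr dataset of this
        # level; processing one slice at a time keeps memory use stable
        shapes.append((level, level_ds.sizes["y"], level_ds.sizes["x"]))
```

Processing one time slice at a time is what gives the stable memory footprint mentioned in the changelog, at the cost of more (smaller) write operations than xcube's per-level `to_zarr()` approach.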
