From 02ed2c828fb134cfcfe8cc6cc4cbfb2e09a1477a Mon Sep 17 00:00:00 2001
From: Serge Smertin <259697+nfx@users.noreply.github.com>
Date: Wed, 18 Sep 2024 10:06:39 +0200
Subject: [PATCH] Release v0.11.0 (#291)

* Added filter spec implementation ([#276](https://github.com/databrickslabs/lsql/issues/276)). A new `FilterHandler` class handles filter files with the `.filter.json` suffix: it parses the filter specification in the file header and validates the filter columns and types. Three filter types are supported, `DATE_RANGE_PICKER`, `MULTI_SELECT`, and `DROPDOWN`, and each can be linked with multiple visualization widgets. A `FilterTile` class, a specialization of `Tile`, represents a filter tile in the dashboard and includes methods to validate the tile, create widgets, and generate filter encodings and queries. The `DashboardMetadata` class gains a new `get_datasets()` method to retrieve the datasets for the dashboard. Together these changes let dashboards filter data through several filter types linked across widgets, improving customization and interactivity.
* Bugfix: `MockBackend` wasn't mocking `savetable` properly when the mode is `append` ([#289](https://github.com/databrickslabs/lsql/issues/289)). The `MockBackend` component, which mocks the `SQLBackend`, did not behave as expected in `append` mode: `.save_table()` wrote all rows to the same table instead of accumulating them. Rows now accumulate correctly in `append` mode, and a new test, `test_mock_backend_save_table_overwrite()`, demonstrates that `overwrite` mode replaces only the existing rows for the given table while preserving other tables' contents. The type signature of `.save_table()` now restricts the `mode` parameter to the string literals `"append"` and `"overwrite"`, and rows are filtered to exclude `None`/`NULL` values prior to saving. These improvements make `MockBackend` more reliable as a testing backend (a usage sketch follows this list).
* Changed filter spec to use YML instead of JSON ([#290](https://github.com/databrickslabs/lsql/issues/290)). Filter specification files are now written in YAML instead of JSON, a more human-readable format. The schema covers the `column`, `columns`, `type`, `title`, `description`, `order`, and `id` fields, with `type` taking one of `DROPDOWN`, `MULTI_SELECT`, or `DATE_RANGE_PICKER`. The change affects the `FilterHandler`, the `is_filter` method, and the `_from_dashboard_folder` method, as well as the relevant documentation: parsing now uses `yaml.safe_load` instead of `json.loads`, and `is_filter` checks for the `.filter.yml` suffix. A sample date filter definition, `00_0_date.filter.yml`, was added to `tests/integration/dashboards/filter_spec_basic`, along with tests that reject invalid specifications, such as an unknown `type` or a spec that sets both `column` and `columns`. These updates make filter configuration easier to read and maintain.
* Increase testing of generic types storage ([#282](https://github.com/databrickslabs/lsql/issues/282)). The test suite now covers storing a list of structs. The `Foo` struct was renamed to `Nested` for clarity, and two new structs were added: `Nesting`, which contains a `Nested` object, and `NestedWithDict`, which holds a string and an optional dictionary of strings. A new test saves a table with two rows, each containing a `Nesting` struct, fetches the data back, and asserts that the expected number of rows is returned, confirming that the storage layer handles complex data types.
* Minor Changes to avoid redundancy in code and follow code patterns ([#279](https://github.com/databrickslabs/lsql/issues/279)). The `dashboards.py` module is now more concise, maintainable, and closer to the standard library's recommended usage. The `export_to_zipped_csv` method no longer imports `BytesIO` and instead uses `StringIO` for handling strings as files, writes into the provided `export_path` rather than creating a separate ZIP file for the CSV files, and skips tiles that don't contain queries. A new `dataclass_transform` method transforms a given dataclass into a new one with a custom metaclass and adds a `to_dict()` method that converts instances of the new dataclass to dictionaries. These changes promote code reuse and reduce redundancy in the codebase.
* New example with bar chart in dashboards-as-code ([#281](https://github.com/databrickslabs/lsql/issues/281)). A dashboard example featuring a bar chart has been added to the dashboards-as-code feature. It relies on the existing metadata overrides to support the new widget type without bloating the `TileMetadata` structure. An integration test demonstrates creating the bar chart (the resulting dashboard is shown in the attached screenshot), and a new SQL file backs the `Product Sales` dashboard, showcasing sales data for different product categories. The same approach could support other widget types such as Bar, Pivot, and Area; feedback on the proposal is welcome.
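As a quick illustration of the `MockBackend` fix, here is a minimal sketch of the corrected `append` behaviour. It assumes the current `MockBackend.save_table(full_name, rows, klass, mode)` and `MockBackend.rows_written_for(full_name, mode)` signatures and that the latter returns the dataclass rows that were saved; the `Foo` row type is made up for the example, so treat this as illustrative rather than as the project's own test code.

```python
from dataclasses import dataclass

from databricks.labs.lsql.backends import MockBackend


@dataclass
class Foo:  # illustrative row type, not part of lsql
    first: str
    second: bool


backend = MockBackend()
backend.save_table("a.b.c", [Foo("aaa", True)], Foo, mode="append")
backend.save_table("a.b.c", [Foo("bbb", False)], Foo, mode="append")

# After the fix, consecutive appends accumulate rows for the same table.
assert backend.rows_written_for("a.b.c", "append") == [
    Foo("aaa", True),
    Foo("bbb", False),
]
```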
---
 CHANGELOG.md                          | 10 ++++++++++
 src/databricks/labs/lsql/__about__.py |  2 +-
 2 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 8614c52b..308388f7 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,5 +1,15 @@
 # Version changelog
 
+## 0.11.0
+
+* Added filter spec implementation ([#276](https://github.com/databrickslabs/lsql/issues/276)). A new `FilterHandler` class handles filter files with the `.filter.json` suffix: it parses the filter specification in the file header and validates the filter columns and types. Three filter types are supported, `DATE_RANGE_PICKER`, `MULTI_SELECT`, and `DROPDOWN`, and each can be linked with multiple visualization widgets. A `FilterTile` class, a specialization of `Tile`, represents a filter tile in the dashboard and includes methods to validate the tile, create widgets, and generate filter encodings and queries. The `DashboardMetadata` class gains a new `get_datasets()` method to retrieve the datasets for the dashboard. Together these changes let dashboards filter data through several filter types linked across widgets, improving customization and interactivity.
+* Bugfix: `MockBackend` wasn't mocking `savetable` properly when the mode is `append` ([#289](https://github.com/databrickslabs/lsql/issues/289)). The `MockBackend` component, which mocks the `SQLBackend`, did not behave as expected in `append` mode: `.save_table()` wrote all rows to the same table instead of accumulating them. Rows now accumulate correctly in `append` mode, and a new test, `test_mock_backend_save_table_overwrite()`, demonstrates that `overwrite` mode replaces only the existing rows for the given table while preserving other tables' contents. The type signature of `.save_table()` now restricts the `mode` parameter to the string literals `"append"` and `"overwrite"`, and rows are filtered to exclude `None`/`NULL` values prior to saving. These improvements make `MockBackend` more reliable as a testing backend.
+* Changed filter spec to use YML instead of JSON ([#290](https://github.com/databrickslabs/lsql/issues/290)). Filter specification files are now written in YAML instead of JSON, a more human-readable format. The schema covers the `column`, `columns`, `type`, `title`, `description`, `order`, and `id` fields, with `type` taking one of `DROPDOWN`, `MULTI_SELECT`, or `DATE_RANGE_PICKER`. The change affects the `FilterHandler`, the `is_filter` method, and the `_from_dashboard_folder` method, as well as the relevant documentation: parsing now uses `yaml.safe_load` instead of `json.loads`, and `is_filter` checks for the `.filter.yml` suffix. A sample date filter definition, `00_0_date.filter.yml`, was added to `tests/integration/dashboards/filter_spec_basic`, along with tests that reject invalid specifications, such as an unknown `type` or a spec that sets both `column` and `columns`. These updates make filter configuration easier to read and maintain.
+* Increase testing of generic types storage ([#282](https://github.com/databrickslabs/lsql/issues/282)). The test suite now covers storing a list of structs. The `Foo` struct was renamed to `Nested` for clarity, and two new structs were added: `Nesting`, which contains a `Nested` object, and `NestedWithDict`, which holds a string and an optional dictionary of strings. A new test saves a table with two rows, each containing a `Nesting` struct, fetches the data back, and asserts that the expected number of rows is returned, confirming that the storage layer handles complex data types.
+* Minor Changes to avoid redundancy in code and follow code patterns ([#279](https://github.com/databrickslabs/lsql/issues/279)). The `dashboards.py` module is now more concise, maintainable, and closer to the standard library's recommended usage. The `export_to_zipped_csv` method no longer imports `BytesIO` and instead uses `StringIO` for handling strings as files, writes into the provided `export_path` rather than creating a separate ZIP file for the CSV files, and skips tiles that don't contain queries. A new `dataclass_transform` method transforms a given dataclass into a new one with a custom metaclass and adds a `to_dict()` method that converts instances of the new dataclass to dictionaries. These changes promote code reuse and reduce redundancy in the codebase.
+* New example with bar chart in dashboards-as-code ([#281](https://github.com/databrickslabs/lsql/issues/281)). A dashboard example featuring a bar chart has been added to the dashboards-as-code feature. It relies on the existing metadata overrides to support the new widget type without bloating the `TileMetadata` structure. An integration test demonstrates creating the bar chart (the resulting dashboard is shown in the attached screenshot), and a new SQL file backs the `Product Sales` dashboard, showcasing sales data for different product categories. The same approach could support other widget types such as Bar, Pivot, and Area; feedback on the proposal is welcome.
+
+
 ## 0.10.0
 
 * Added Functionality to export any dashboards-as-code into CSV ([#269](https://github.com/databrickslabs/lsql/issues/269)). The `DashboardMetadata` class now includes a new method, `export_to_zipped_csv`, which enables exporting any dashboard as CSV files in a ZIP archive. This method accepts `sql_backend` and `export_path` as parameters and exports dashboard queries to CSV files in the specified ZIP archive by iterating through tiles and fetching dashboard queries if the tile is a query. To ensure the proper functioning of this feature, unit tests and manual testing have been conducted. A new test, `test_dashboards_export_to_zipped_csv`, has been added to verify the correct export of dashboard data to a CSV file.
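To make the new `.filter.yml` format more concrete, the sketch below shows what a filter file such as `00_0_date.filter.yml` might contain and how it can be parsed with `yaml.safe_load`, as the notes above describe. The field values and the validation checks are illustrative assumptions based on the documented schema (`column`/`columns`, `type`, `title`, `description`, `order`, `id`); this is not the library's own parsing code.

```python
import yaml

# Hypothetical contents of a `.filter.yml` file; the values are invented for this example.
FILTER_SPEC = """\
column: created_at
type: DATE_RANGE_PICKER
title: Creation date
description: Restrict all linked widgets to a creation-date window
order: 0
id: date
"""

spec = yaml.safe_load(FILTER_SPEC)
assert spec["type"] in {"DROPDOWN", "MULTI_SELECT", "DATE_RANGE_PICKER"}
# A spec may use either `column` or `columns`, but not both.
assert not ("column" in spec and "columns" in spec)
```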
diff --git a/src/databricks/labs/lsql/__about__.py b/src/databricks/labs/lsql/__about__.py
index 61fb31ca..ae6db5f1 100644
--- a/src/databricks/labs/lsql/__about__.py
+++ b/src/databricks/labs/lsql/__about__.py
@@ -1 +1 @@
-__version__ = "0.10.0"
+__version__ = "0.11.0"
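For the generic-types storage tests described in [#282](https://github.com/databrickslabs/lsql/issues/282), the sketch below mirrors the shape of the new structs with made-up field names (the release notes only state that `Nesting` wraps a `Nested` value and that `NestedWithDict` carries a string plus an optional dictionary of strings). It uses `MockBackend` so it runs without a workspace; the real test exercises an actual SQL backend.

```python
from dataclasses import dataclass

from databricks.labs.lsql.backends import MockBackend


@dataclass
class Nested:
    key: str  # field names are illustrative, not taken from the test suite
    value: int


@dataclass
class NestedWithDict:
    name: str
    tags: dict[str, str] | None = None


@dataclass
class Nesting:
    nested: Nested


backend = MockBackend()
rows = [Nesting(Nested("x", 1)), Nesting(Nested("y", 2))]
backend.save_table("a.b.complex", rows, Nesting, mode="append")

# Two rows carrying nested struct values were recorded for the table.
assert len(backend.rows_written_for("a.b.complex", "append")) == 2
```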