diff --git a/CHANGELOG.md b/CHANGELOG.md
index f73d4fa7..cfb5689f 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,5 +1,10 @@
 # Version changelog
 
+## 0.4.1
+
+* Fixing overwrite integration tests ([#92](https://github.com/databrickslabs/lsql/issues/92)). The integration tests for the `overwrite` feature have been reworked to verify write operations end to end. Two new variables, `catalog` and `schema`, are resolved via the `env_or_skip` function and used in the `save_table` method, which is now invoked twice against the same table: once with the `append` and once with the `overwrite` option. After each call, the data in the table is retrieved and checked for accuracy, using the updated `Row` class with revised field names `first` and `second` (formerly `name` and `id`). This ensures the `overwrite` feature operates correctly during integration tests.
+
+
 ## 0.4.0
 
 * Added catalog and schema parameters to execute and fetch ([#90](https://github.com/databrickslabs/lsql/issues/90)). In this release, we have added optional `catalog` and `schema` parameters to the `execute` and `fetch` methods in the `SqlBackend` abstract base class, allowing for more flexibility when executing SQL statements in specific catalogs and schemas. These updates include new method signatures and their respective implementations in the `SparkSqlBackend` and `DatabricksSqlBackend` classes. The new parameters control the catalog and schema used by the `SparkSession` instance in the `SparkSqlBackend` class and the `SqlClient` instance in the `DatabricksSqlBackend` class. This enhancement enables better functionality in multi-catalog and multi-schema environments, and it ships with unit and integration tests. For example, with a `SparkSqlBackend` instance `spark_backend`, you can execute a SQL statement in a specific catalog and schema as follows: `spark_backend.execute("SELECT * FROM my_table", catalog="my_catalog", schema="my_schema")`. The `fetch` method accepts the same parameters.
diff --git a/src/databricks/labs/lsql/__about__.py b/src/databricks/labs/lsql/__about__.py
index 6a9beea8..3d26edf7 100644
--- a/src/databricks/labs/lsql/__about__.py
+++ b/src/databricks/labs/lsql/__about__.py
@@ -1 +1 @@
-__version__ = "0.4.0"
+__version__ = "0.4.1"
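For context, the append-then-overwrite test pattern described in the 0.4.1 entry could look roughly like the sketch below. This is a minimal illustration, not the actual test code: the `Foo` dataclass, the environment variable names, and the `env_or_skip` fixture wiring are assumptions.

```python
from dataclasses import dataclass

from databricks.labs.lsql.backends import SqlBackend


@dataclass
class Foo:
    # Illustrative dataclass using the renamed fields `first` and `second`.
    first: str
    second: bool


def test_overwrite(sql_backend: SqlBackend, env_or_skip) -> None:
    # Resolve the target catalog and schema from the environment, skipping
    # the test when they are not configured (env var names are assumed).
    catalog = env_or_skip("TEST_CATALOG")
    schema = env_or_skip("TEST_SCHEMA")
    full_name = f"{catalog}.{schema}.foo"

    # The first write appends a row...
    sql_backend.save_table(full_name, [Foo("aaa", True)], Foo, mode="append")
    # ...and the second write with mode="overwrite" must replace it.
    sql_backend.save_table(full_name, [Foo("bbb", False)], Foo, mode="overwrite")

    # Only the overwritten row should remain.
    rows = list(sql_backend.fetch(f"SELECT * FROM {full_name}"))
    assert len(rows) == 1
    assert rows[0].first == "bbb"
    assert rows[0].second is False
```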
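Similarly, the `catalog` and `schema` parameters from the 0.4.0 entry can be exercised as sketched below. The `SparkSqlBackend` name follows the entry above, but its construction and the session setup are assumptions for illustration only.

```python
from pyspark.sql import SparkSession

from databricks.labs.lsql.backends import SparkSqlBackend

# Wrap an active SparkSession in the backend (construction details assumed).
spark_backend = SparkSqlBackend(SparkSession.builder.getOrCreate())

# Execute a statement against an explicit catalog and schema.
spark_backend.execute(
    "SELECT * FROM my_table",
    catalog="my_catalog",
    schema="my_schema",
)

# fetch() accepts the same optional parameters.
for row in spark_backend.fetch(
    "SELECT * FROM my_table",
    catalog="my_catalog",
    schema="my_schema",
):
    print(row)
```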