
Release v0.1.11 #168

Merged
merged 1 commit into main on Feb 12, 2025
Conversation

mwojtyczka
Contributor


* Provided option to customize reporting column names ([#127](#127)). In this release, the DQEngine library has been enhanced to allow customizable reporting column names. A new constructor has been added to DQEngine that accepts an optional ExtraParams object for extra configuration, and a new Enum class, DefaultColumnNames, represents the columns used for error and warning reporting. New tests verify that checks are applied with custom column naming. These changes give users more control over the reporting columns, improving the flexibility and customizability of DQEngine and resolving issue [#46](#46).
* Fixed parsing error when loading checks from a file ([#165](#165)). This release fixes a SQL expression parsing error that occurred when loading checks (data quality rules) from a file, resolving issue [#162](#162). The changes include refactoring tests to eliminate code duplication and improve maintainability, and renaming methods and variables to use `filepath` instead of `path`. New unit and integration tests have been added and manually verified to confirm the corrected behavior.
* Removed usage of the try_cast Spark function from the checks so that DQX can run on more runtimes ([#163](#163)). The code has been refactored to replace the `try_cast` Spark function with `cast` and `isNull` checks, improving compatibility with runtimes where `try_cast` is not available. The affected functionality includes null and empty column checks, checking whether a column value is in a list, and checking whether a column value is a valid date or timestamp. Unit and integration tests have been added to ensure the functionality works as intended.
* Added filter to rules to enable conditional checks ([#141](#141)). A filter is a condition that a row must meet to be evaluated by the check function; checks are restricted to only the rows that match. This feature enhances the flexibility and customizability of data quality checks in DQEngine.
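The filter feature in the last bullet restricts a check to rows matching a condition. A hypothetical checks-file entry illustrating the idea (the key names, check function, and column names here are assumptions for illustration, not verified against the DQX checks schema):

```yaml
# Apply the not-null check only to rows where country = 'US';
# rows that do not match the filter are not evaluated by this rule.
- criticality: error
  check:
    function: is_not_null
    arguments:
      col_name: zip_code
  filter: country = 'US'
```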
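The customizable reporting columns described in the first bullet follow a common pattern: an optional extra-parameters object with enum-backed defaults. A minimal, self-contained sketch of that pattern (the class names follow the release notes, but the field names, keys, and default column names here are assumptions, not the library's actual API):

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional


class DefaultColumnNames(Enum):
    # Default reporting columns; the actual names used by DQX may differ.
    ERRORS = "_errors"
    WARNINGS = "_warnings"


@dataclass
class ExtraParams:
    # Optional overrides for the reporting column names (assumed shape).
    column_names: dict = field(default_factory=dict)


class DQEngine:
    def __init__(self, extra_params: Optional[ExtraParams] = None):
        extra = extra_params or ExtraParams()
        # Fall back to the enum defaults when no override is given.
        self.errors_column = extra.column_names.get(
            "errors", DefaultColumnNames.ERRORS.value)
        self.warnings_column = extra.column_names.get(
            "warnings", DefaultColumnNames.WARNINGS.value)


engine = DQEngine(ExtraParams(column_names={"errors": "dq_errors"}))
print(engine.errors_column)    # dq_errors (overridden)
print(engine.warnings_column)  # _warnings (default)
```

The optional-parameter-object design keeps the constructor backward compatible: existing callers that pass nothing get the previous default column names.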
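The `try_cast` replacement in the third bullet relies on the fact that, outside ANSI mode, Spark's plain `cast` returns null on invalid input rather than raising an error. A small pure-Python sketch of the same cast-then-isNull logic (this only emulates the semantics to illustrate the idea; it is not the actual Spark or DQX code):

```python
from datetime import datetime
from typing import Optional


def cast_to_timestamp(value: Optional[str]) -> Optional[datetime]:
    """Emulate Spark's non-ANSI CAST(value AS TIMESTAMP): return None on
    failure instead of raising, which is what makes cast + isNull a
    portable stand-in for try_cast."""
    if value is None:
        return None
    try:
        return datetime.fromisoformat(value)
    except ValueError:
        return None


def is_valid_timestamp(value: Optional[str]) -> bool:
    # The check described above: cast the value, then test the result for null.
    return cast_to_timestamp(value) is not None


print(is_valid_timestamp("2025-02-12T11:53:00"))  # True
print(is_valid_timestamp("not a timestamp"))      # False
```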
@mwojtyczka mwojtyczka requested a review from a team as a code owner February 12, 2025 11:53
@mwojtyczka mwojtyczka requested review from gergo-databricks and removed request for a team February 12, 2025 11:53
@mwojtyczka mwojtyczka requested a review from alexott February 12, 2025 11:53

github-actions bot commented Feb 12, 2025

✅ 126/126 passed, 1 skipped, 31m27s total

Running from acceptance #471

@mwojtyczka mwojtyczka merged commit 8710329 into main Feb 12, 2025
9 checks passed
@mwojtyczka mwojtyczka deleted the prepare/0.1.11 branch February 12, 2025 12:08