Allow specification of Snowflake schema and database #63
Overview
This pull request allows users to specify the schema and database for the Snowflake connector. It also increases the number of records per batch to 1000, and limits the initial collection from `QUERY_HISTORY` to the last hour only. The volume of data in this table is usually very high, so auditing will effectively start from the time that Grove is first deployed.
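Purely as an illustration (the connector code is not shown in this description, and the identifiers below are assumptions rather than Grove's actual implementation), the initial one-hour window and 1000-record batch size could look roughly like this:

```python
from datetime import datetime, timedelta, timezone

# Illustrative sketch only: the real connector logic may differ. The idea is
# that the first run collects QUERY_HISTORY records from the last hour only,
# and results are then processed in batches of 1000 records.
BATCH_SIZE = 1000  # records per batch

# On the first run there is no saved pointer, so start one hour in the past.
initial_pointer = (datetime.now(timezone.utc) - timedelta(hours=1)).isoformat()

# A hypothetical query against Snowflake's QUERY_HISTORY view, filtered by
# the pointer so that only recent history is returned on the first run.
query = (
    "SELECT * FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY "
    "WHERE START_TIME >= %s ORDER BY START_TIME ASC"
)
```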
Finally, this pull request allows the use of internal Pydantic field names in connector configuration documents. This is required due to the new `schema` field, whose name clashes with a method internal to Pydantic. This is a hack which can be removed once Grove is updated to Pydantic >= 2.
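For reference, a minimal sketch of the kind of aliasing this clash usually forces under Pydantic v1; the model, field names, and defaults below are illustrative assumptions, not the exact code in this change:

```python
from pydantic import BaseModel, Field


class Configuration(BaseModel):
    """Illustrative connector configuration model (Pydantic v1 style)."""

    database: str

    # Pydantic v1 rejects a field literally named `schema` because it shadows
    # BaseModel.schema(), so the value is stored under an internal name and
    # exposed to configuration documents via an alias. Pydantic >= 2 removes
    # the need for this workaround.
    table_schema: str = Field(alias="schema")

    class Config:
        # Allow population by either the alias ("schema") or the field name.
        allow_population_by_field_name = True


# A connector configuration document can then use the "schema" key directly.
config = Configuration.parse_obj({"database": "SNOWFLAKE", "schema": "ACCOUNT_USAGE"})
print(config.table_schema)  # -> "ACCOUNT_USAGE"
```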