resolves #1265
docs dbt-labs/docs.getdbt.com/#NA
Problem
Before this PR, we could not detect whether the target table has a different description or default_value_expression than the model transformation. If the model uses the BigQuery copy_table API and description and default_value_expression are not defined in schema.yml, that metadata is deleted.
By extending the function _get_dbt_columns_from_bq_table to extract and store the description and default_value_expression from BigQuery's SchemaField objects, we can compare the model transformation's columns with the destination table and decide whether to replace them with the information in the project's YAML file, as sketched below.
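A minimal sketch of the idea, assuming the extracted metadata is held in a small dataclass. `ColumnMetadata` and `columns_with_metadata` are hypothetical names used here for illustration, not part of dbt-bigquery; only the SchemaField attributes (`name`, `field_type`, `description`, `fields`, and, on recent google-cloud-bigquery releases, `default_value_expression`) come from the library:

```python
# Illustrative sketch, not the actual dbt-bigquery implementation.
from dataclasses import dataclass, field
from typing import List, Optional

from google.cloud import bigquery


@dataclass
class ColumnMetadata:
    name: str
    dtype: str
    description: Optional[str] = None
    default_value_expression: Optional[str] = None
    fields: List["ColumnMetadata"] = field(default_factory=list)


def columns_with_metadata(table: bigquery.Table) -> List[ColumnMetadata]:
    """Walk the table schema and keep description/default_value_expression."""

    def convert(sf: bigquery.SchemaField) -> ColumnMetadata:
        return ColumnMetadata(
            name=sf.name,
            dtype=sf.field_type,
            description=sf.description,
            # default_value_expression exists on recent google-cloud-bigquery
            # releases; fall back to None on older versions.
            default_value_expression=getattr(sf, "default_value_expression", None),
            fields=[convert(child) for child in sf.fields],  # nested RECORD columns
        )

    return [convert(sf) for sf in table.schema]
```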
Solution
One way to avoid deleting the metadata in the target table is to enable persist_docs = true, but that adds an extra step to the process: every column description has to be filled in the YAML files, and a table.update permission is required.
With this PR, we avoid that extra step: the metadata is updated by the transformation itself, via the copy_table API or a MERGE statement, which is faster and cheaper. A sketch of the comparison that drives this decision follows.
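For illustration only, a hedged sketch of how the destination table's column metadata could be compared with the model's columns to decide whether a refresh is needed; `needs_metadata_update` is a hypothetical helper, not the actual dbt-bigquery code:

```python
# Hypothetical comparison: refresh metadata only when description or
# default_value_expression differs between the model and the target table.
from typing import Dict, Optional

ColumnMeta = Dict[str, Optional[str]]


def needs_metadata_update(
    target_meta: Dict[str, ColumnMeta],
    model_meta: Dict[str, ColumnMeta],
) -> bool:
    """Return True when any column's metadata differs or is missing."""
    for name, model_col in model_meta.items():
        target_col = target_meta.get(name, {})
        for key in ("description", "default_value_expression"):
            if (model_col.get(key) or "") != (target_col.get(key) or ""):
                return True
    return False


# Example: the description differs, so the metadata should be refreshed.
target = {"id": {"description": "", "default_value_expression": None}}
model = {"id": {"description": "Primary key", "default_value_expression": None}}
assert needs_metadata_update(target, model)
```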