From 6806f5a9901c7c6bd8dae1c4c970f5e208e0c6ea Mon Sep 17 00:00:00 2001 From: Anton Burnashev Date: Sun, 17 Sep 2023 13:32:29 +0300 Subject: [PATCH] Rework 'Understanding the tables' (#629) * Rework 'Understanding the tables' * Relink destinations section * Add staging and versioned datasets * Rename the page and update the links * Put destination tables after pipeline * Update page content so it fits the new page name --- .../website/docs/build-a-pipeline-tutorial.md | 2 +- .../dlt-ecosystem/destinations/filesystem.md | 2 +- .../understanding-the-tables.md | 84 ----- .../docs/general-usage/destination-tables.md | 320 ++++++++++++++++++ .../docs/general-usage/full-loading.md | 10 +- docs/website/docs/getting-started.mdx | 2 +- .../website/docs/user-guides/data-beginner.md | 2 +- .../docs/user-guides/data-scientist.md | 4 +- .../docs/user-guides/engineering-manager.md | 4 +- docs/website/sidebars.js | 3 +- 10 files changed, 334 insertions(+), 99 deletions(-) delete mode 100644 docs/website/docs/dlt-ecosystem/visualizations/understanding-the-tables.md create mode 100644 docs/website/docs/general-usage/destination-tables.md diff --git a/docs/website/docs/build-a-pipeline-tutorial.md b/docs/website/docs/build-a-pipeline-tutorial.md index 14c3a78411..2462a9a32a 100644 --- a/docs/website/docs/build-a-pipeline-tutorial.md +++ b/docs/website/docs/build-a-pipeline-tutorial.md @@ -391,7 +391,7 @@ utilization, schema enforcement and curation, and schema change alerts. which consist of a timestamp and pipeline name. Load IDs enable incremental transformations and data vaulting by tracking data loads and facilitating data lineage and traceability. -Read more about [lineage.](dlt-ecosystem/visualizations/understanding-the-tables.md#load-ids) +Read more about [lineage](general-usage/destination-tables.md#data-lineage). ### Schema Enforcement and Curation diff --git a/docs/website/docs/dlt-ecosystem/destinations/filesystem.md b/docs/website/docs/dlt-ecosystem/destinations/filesystem.md index 8db9a35514..32bf561a82 100644 --- a/docs/website/docs/dlt-ecosystem/destinations/filesystem.md +++ b/docs/website/docs/dlt-ecosystem/destinations/filesystem.md @@ -155,7 +155,7 @@ All the files are stored in a single folder with the name of the dataset that yo The name of each file contains essential metadata on the content: - **schema_name** and **table_name** identify the [schema](../../general-usage/schema.md) and table that define the file structure (column names, data types etc.) -- **load_id** is the [id of the load package](https://dlthub.com/docs/dlt-ecosystem/visualizations/understanding-the-tables#load-ids) form which the file comes from. +- **load_id** is the [id of the load package](../../general-usage/destination-tables.md#load-packages-and-load-ids) form which the file comes from. - **file_id** is there are many files with data for a single table, they are copied with different file id. - **ext** a format of the file ie. 
`jsonl` or `parquet` diff --git a/docs/website/docs/dlt-ecosystem/visualizations/understanding-the-tables.md b/docs/website/docs/dlt-ecosystem/visualizations/understanding-the-tables.md deleted file mode 100644 index e14ef554f5..0000000000 --- a/docs/website/docs/dlt-ecosystem/visualizations/understanding-the-tables.md +++ /dev/null @@ -1,84 +0,0 @@ ---- -title: Understanding the tables -description: Understanding the tables that have been loaded -keywords: [understanding tables, loaded data, data structure] ---- - -# Understanding the tables - -## Show tables and data in the destination - -``` -dlt pipeline show -``` - -[This command](../../reference/command-line-interface.md#show-tables-and-data-in-the-destination) -generates and launches a simple Streamlit app that you can use to inspect the schemas -and data in the destination as well as your pipeline state and loading status / stats. It should be -executed from the same folder where you ran the pipeline script to access destination credentials. -It requires `streamlit` and `pandas` to be installed. - -## Table and column names - -We [normalize table and column names,](../../general-usage/schema.md#naming-convention) so they fit -what the destination database allows. We convert all the names in your source data into -`snake_case`, alphanumeric identifiers. Please note that in many cases the names you had in your -input document will be (slightly) different from identifiers you see in the database. - -## Child and parent tables - -When creating a schema during normalization, `dlt` recursively unpacks this nested structure into -relational tables, creating and linking children and parent tables. - -This is how table linking works: - -1. Each row in all (top level and child) data tables created by `dlt` contains UNIQUE column named - `_dlt_id`. -1. Each child table contains FOREIGN KEY column `_dlt_parent_id` linking to a particular row - (`_dlt_id`) of a parent table. -1. Rows in child tables come from the lists: `dlt` stores the position of each item in the list in - `_dlt_list_idx`. -1. For tables that are loaded with the `merge` write disposition, we add a ROOT KEY column - `_dlt_root_id`, which links child table to a row in top level table. - -> 💡 Note: If you define your own primary key in a child table, it will be used to link to parent table -and the `_dlt_parent_id` and `_dlt_list_idx` will not be added. `_dlt_id` is always added even in -case the primary key or other unique columns are defined. - -## Load IDs - -Each pipeline run creates one or more load packages, which can be identified by their `load_id`. A load -package typically contains data from all [resources](../../general-usage/glossary.md#resource) of a -particular [source](../../general-usage/glossary.md#source). The `load_id` of a particular package -is added to the top data tables (`_dlt_load_id` column) and to the `_dlt_loads` table with a status 0 (when the load process -is fully completed). - -The `_dlt_loads` table tracks complete loads and allows chaining transformations on top of them. -Many destinations do not support distributed and long-running transactions (e.g. Amazon Redshift). -In that case, the user may see the partially loaded data. It is possible to filter such data out—any -row with a `load_id` that does not exist in `_dlt_loads` is not yet completed. The same procedure may be used to delete and identify -and delete data for packages that never got completed. 
- -For each load, you can test and [alert](../../running-in-production/alerting.md) on anomalies (e.g. -no data, too much loaded to a table). There are also some useful load stats in the `Load info` tab -of the [Streamlit app](understanding-the-tables.md#show-tables-and-data-in-the-destination) -mentioned above. - -You can add [transformations](../transformations) and chain them together -using the `status` column. You start the transformation for all the data with a particular -`load_id` with a status of 0 and then update it to 1. The next transformation starts with the status -of 1 and is then updated to 2. This can be repeated for every additional transformation. - -### Data lineage - -Data lineage can be super relevant for architectures like the -[data vault architecture](https://www.data-vault.co.uk/what-is-data-vault/) or when troubleshooting. -The data vault architecture is a data warehouse that large organizations use when representing the -same process across multiple systems, which adds data lineage requirements. Using the pipeline name -and `load_id` provided out of the box by `dlt`, you are able to identify the source and time of -data. - -You can [save](../../running-in-production/running.md#inspect-and-save-the-load-info-and-trace) -complete lineage info for a particular `load_id` including a list of loaded files, error messages -(if any), elapsed times, schema changes. This can be helpful, for example, when troubleshooting -problems. diff --git a/docs/website/docs/general-usage/destination-tables.md b/docs/website/docs/general-usage/destination-tables.md new file mode 100644 index 0000000000..8f95639f87 --- /dev/null +++ b/docs/website/docs/general-usage/destination-tables.md @@ -0,0 +1,320 @@ +--- +title: Destination tables +description: Understanding the tables created in the destination database +keywords: [destination tables, loaded data, data structure, schema, table, child table, load package, load id, lineage, staging dataset, versioned dataset] +--- + +# Destination tables + +When you run a [pipeline](pipeline.md), dlt creates tables in the destination database and loads the data +from your [source](source.md) into these tables. In this section, we will take a closer look at what +destination tables look like and how they are organized. + +We start with a simple dlt pipeline: + +```py +import dlt + +data = [ + {'id': 1, 'name': 'Alice'}, + {'id': 2, 'name': 'Bob'} +] + +pipeline = dlt.pipeline( + pipeline_name='quick_start', + destination='duckdb', + dataset_name='mydata' +) +load_info = pipeline.run(data, table_name="users") +``` + +:::note + +Here we are using the [DuckDb destination](../dlt-ecosystem/destinations/duckdb.md), which is an in-memory database. Other database destinations +will behave similarly and have similar concepts. + +::: + +Running this pipeline will create a database schema in the destination database (DuckDB) along with a table named `users`. Quick tip: you can use the `show` command of the `dlt pipeline` CLI [to see the tables](../dlt-ecosystem/visualizations/exploring-the-data.md#exploring-the-data) in the destination database. + +## Database schema + +The database schema is a collection of tables that represent the data you loaded into the database. +The schema name is the same as the `dataset_name` you provided in the pipeline definition. +In the example above, we explicitly set the `dataset_name` to `mydata`. If you don't set it, +it will be set to the pipeline name with a suffix `_dataset`. 
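For example, here is a minimal sketch of the same pipeline without an explicit `dataset_name`. It assumes that the `pipeline.dataset_name` attribute reflects the resolved default; double-check the exact value in your environment:

```py
import dlt

data = [
    {'id': 1, 'name': 'Alice'},
    {'id': 2, 'name': 'Bob'}
]

# no dataset_name passed: dlt derives it from the pipeline name
pipeline = dlt.pipeline(
    pipeline_name='quick_start',
    destination='duckdb'
)
load_info = pipeline.run(data, table_name="users")

# prints the derived schema name, e.g. 'quick_start_dataset'
print(pipeline.dataset_name)
```

Relying on the default is handy for quick experiments; for shared destinations, setting `dataset_name` explicitly (as in the example above) keeps schema names predictable.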
Be aware that the schema referred to in this section is distinct from the [dlt Schema](schema.md).
The database schema pertains to the structure and organization of data within the database, including table
definitions and relationships. On the other hand, the "dlt Schema" specifically refers to the format
and structure of normalized data within the dlt pipeline.

## Tables

Each [resource](resource.md) in your pipeline definition will be represented by a table in
the destination. In the example above, we have one resource, `users`, so we will have one table, `mydata.users`,
in the destination, where `mydata` is the schema name and `users` is the table name. Here, too, we explicitly set
the `table_name` to `users`. When `table_name` is not set, the table name defaults to the resource name.

For example, we can rewrite the pipeline above as:

```py
@dlt.resource
def users():
    yield [
        {'id': 1, 'name': 'Alice'},
        {'id': 2, 'name': 'Bob'}
    ]

pipeline = dlt.pipeline(
    pipeline_name='quick_start',
    destination='duckdb',
    dataset_name='mydata'
)
load_info = pipeline.run(users)
```

The result will be the same, but the table is implicitly named `users` based on the resource name.

:::note

Special tables are created to track the pipeline state. These tables are prefixed with `_dlt_`
and are not shown by the `show` command of the `dlt pipeline` CLI. However, you can see them when
connecting to the database directly.

:::

## Child and parent tables

Now let's look at a more complex example:

```py
import dlt

data = [
    {
        'id': 1,
        'name': 'Alice',
        'pets': [
            {'id': 1, 'name': 'Fluffy', 'type': 'cat'},
            {'id': 2, 'name': 'Spot', 'type': 'dog'}
        ]
    },
    {
        'id': 2,
        'name': 'Bob',
        'pets': [
            {'id': 3, 'name': 'Fido', 'type': 'dog'}
        ]
    }
]

pipeline = dlt.pipeline(
    pipeline_name='quick_start',
    destination='duckdb',
    dataset_name='mydata'
)
load_info = pipeline.run(data, table_name="users")
```

Running this pipeline will create two tables in the destination: `users` and `users__pets`. The
`users` table will contain the top level data, and the `users__pets` table will contain the child
data. Here is what the tables may look like:

**mydata.users**

| id | name | _dlt_id | _dlt_load_id |
| --- | --- | --- | --- |
| 1 | Alice | wX3f5vn801W16A | 1234562350.98417 |
| 2 | Bob | rX8ybgTeEmAmmA | 1234562350.98417 |

**mydata.users__pets**

| id | name | type | _dlt_id | _dlt_parent_id | _dlt_list_idx |
| --- | --- | --- | --- | --- | --- |
| 1 | Fluffy | cat | w1n0PEDzuP3grw | wX3f5vn801W16A | 0 |
| 2 | Spot | dog | 9uxh36VU9lqKpw | wX3f5vn801W16A | 1 |
| 3 | Fido | dog | pe3FVtCWz8VuNA | rX8ybgTeEmAmmA | 0 |

When creating a database schema, dlt recursively unpacks nested structures into relational tables,
creating and linking children and parent tables.

This is how the linking works (the query sketch after this list shows it in practice):

1. Each row in all (top level and child) data tables created by `dlt` contains a UNIQUE column named
   `_dlt_id`.
1. Each child table contains a FOREIGN KEY column `_dlt_parent_id` linking to a particular row
   (`_dlt_id`) of the parent table.
1. Rows in child tables come from lists: `dlt` stores the position of each item in the list in
   `_dlt_list_idx`.
1. For tables that are loaded with the `merge` write disposition, we add a ROOT KEY column
   `_dlt_root_id`, which links the child table to a row in the top level table.
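To see the linking in practice, here is a minimal query sketch that reattaches pets to their owners via `_dlt_parent_id`. It assumes the `quick_start` pipeline defined above and that your dlt version exposes `pipeline.sql_client()` and `execute_sql()`:

```py
import dlt

pipeline = dlt.pipeline(
    pipeline_name='quick_start',
    destination='duckdb',
    dataset_name='mydata'
)

# join the child table back to its parent using the dlt-generated linking columns
with pipeline.sql_client() as client:
    rows = client.execute_sql(
        """
        SELECT u.name AS owner, p.name AS pet, p._dlt_list_idx AS position
        FROM users AS u
        JOIN users__pets AS p ON p._dlt_parent_id = u._dlt_id
        ORDER BY owner, position
        """
    )
    print(rows)
```

The same join pattern extends to deeper nesting levels, since every child table carries its own `_dlt_parent_id` and `_dlt_list_idx` pair.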
+ + +:::note + +If you define your own primary key in a child table, it will be used to link to parent table +and the `_dlt_parent_id` and `_dlt_list_idx` will not be added. `_dlt_id` is always added even in +case the primary key or other unique columns are defined. + +::: + +## Naming convention: tables and columns + +During a pipeline run, dlt [normalizes both table and column names](schema.md#naming-convention) to ensure compatibility with the destination database's accepted format. All names from your source data will be transformed into snake_case and will only include alphanumeric characters. Please be aware that the names in the destination database may differ somewhat from those in your original input. + +## Load Packages and Load IDs + +Each execution of the pipeline generates one or more load packages. A load package typically contains data retrieved from +all the [resources](glossary.md#resource) of a particular [source](glossary.md#source). +These packages are uniquely identified by a `load_id`. The `load_id` of a particular package is added to the top data tables +(referenced as `_dlt_load_id` column in the example above) and to the special `_dlt_loads` table with a status 0 +(when the load process is fully completed). + +To illustrate this, let's load more data into the same destination: + +```py +data = [ + { + 'id': 3, + 'name': 'Charlie', + 'pets': [] + }, +] +``` + +The rest of the pipeline definition remains the same. Running this pipeline will create a new load +package with a new `load_id` and add the data to the existing tables. The `users` table will now +look like this: + +**mydata.users** + +| id | name | _dlt_id | _dlt_load_id | +| --- | --- | --- | --- | +| 1 | Alice | wX3f5vn801W16A | 1234562350.98417 | +| 2 | Bob | rX8ybgTeEmAmmA | 1234562350.98417 | +| 3 | Charlie | h8lehZEvT3fASQ | **1234563456.12345** | + +The `_dlt_loads` table will look like this: + +**mydata._dlt_loads** + +| load_id | schema_name | status | inserted_at | schema_version_hash | +| --- | --- | --- | --- | --- | +| 1234562350.98417 | quick_start | 0 | 2023-09-12 16:45:51.17865+00 | aOEb...Qekd/58= | +| **1234563456.12345** | quick_start | 0 | 2023-09-12 16:46:03.10662+00 | aOEb...Qekd/58= | + +The `_dlt_loads` table tracks complete loads and allows chaining transformations on top of them. +Many destinations do not support distributed and long-running transactions (e.g. Amazon Redshift). +In that case, the user may see the partially loaded data. It is possible to filter such data out: any +row with a `load_id` that does not exist in `_dlt_loads` is not yet completed. The same procedure may be used to identify +and delete data for packages that never got completed. + +For each load, you can test and [alert](../running-in-production/alerting.md) on anomalies (e.g. +no data, too much loaded to a table). There are also some useful load stats in the `Load info` tab +of the [Streamlit app](../dlt-ecosystem/visualizations/exploring-the-data.md#exploring-the-data) +mentioned above. + +You can add [transformations](../dlt-ecosystem/transformations/) and chain them together +using the `status` column. You start the transformation for all the data with a particular +`load_id` with a status of 0 and then update it to 1. The next transformation starts with the status +of 1 and is then updated to 2. This can be repeated for every additional transformation. 
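To make this concrete, here is a minimal sketch that keeps only rows from completed load packages. It assumes the `quick_start` pipeline from the examples above and that `pipeline.sql_client()` / `execute_sql()` are available in your dlt version:

```py
import dlt

pipeline = dlt.pipeline(
    pipeline_name='quick_start',
    destination='duckdb',
    dataset_name='mydata'
)

# keep only rows whose load package is recorded as completed (status 0) in _dlt_loads
with pipeline.sql_client() as client:
    completed_users = client.execute_sql(
        """
        SELECT u.id, u.name, u._dlt_load_id
        FROM users AS u
        JOIN _dlt_loads AS l ON l.load_id = u._dlt_load_id
        WHERE l.status = 0
        """
    )
    print(completed_users)
```

An anti-join on the same columns yields the rows from packages that never completed, which is the set you may want to exclude or delete.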
### Data lineage

Data lineage can be super relevant for architectures like the
[data vault architecture](https://www.data-vault.co.uk/what-is-data-vault/) or when troubleshooting.
The data vault architecture is a data warehouse modeling approach that large organizations use when representing the
same process across multiple systems, which adds data lineage requirements. Using the pipeline name
and `load_id` provided out of the box by `dlt`, you are able to identify the source and time of
data.

You can [save](../running-in-production/running.md#inspect-and-save-the-load-info-and-trace)
complete lineage info for a particular `load_id`, including a list of loaded files, error messages
(if any), elapsed times, and schema changes. This can be helpful, for example, when troubleshooting
problems.

## Staging dataset

So far we've been using the `append` write disposition in our example pipeline. This means that
each time we run the pipeline, the data is appended to the existing tables. When you use [the
merge write disposition](incremental-loading.md), dlt creates a staging database schema for
staging data. This schema is named `<dataset_name>_staging` and contains the same tables as the
destination schema. When you run the pipeline, the data from the staging tables is loaded into the
destination tables in a single atomic transaction.

Let's illustrate this with an example. We change our pipeline to use the `merge` write disposition:

```py
import dlt

@dlt.resource(primary_key="id", write_disposition="merge")
def users():
    yield [
        {'id': 1, 'name': 'Alice 2'},
        {'id': 2, 'name': 'Bob 2'}
    ]

pipeline = dlt.pipeline(
    pipeline_name='quick_start',
    destination='duckdb',
    dataset_name='mydata'
)

load_info = pipeline.run(users)
```

Running this pipeline will create a schema in the destination database with the name `mydata_staging`.
If you inspect the tables in this schema, you will find the `mydata_staging.users` table, identical in
structure to the `mydata.users` table from the previous example.

Here is what the tables may look like after running the pipeline:

**mydata_staging.users**

| id | name | _dlt_id | _dlt_load_id |
| --- | --- | --- | --- |
| 1 | Alice 2 | wX3f5vn801W16A | 2345672350.98417 |
| 2 | Bob 2 | rX8ybgTeEmAmmA | 2345672350.98417 |

**mydata.users**

| id | name | _dlt_id | _dlt_load_id |
| --- | --- | --- | --- |
| 1 | Alice 2 | wX3f5vn801W16A | 2345672350.98417 |
| 2 | Bob 2 | rX8ybgTeEmAmmA | 2345672350.98417 |
| 3 | Charlie | h8lehZEvT3fASQ | 1234563456.12345 |

Notice that the `mydata.users` table now contains the data from both the previous pipeline run and
the current one.

## Versioned datasets

When you set the `full_refresh` argument to `True` in the `dlt.pipeline` call, dlt creates a versioned dataset.
This means that each time you run the pipeline, the data is loaded into a new dataset (a new database schema).
The dataset name is the same as the `dataset_name` you provided in the pipeline definition, with a
datetime-based suffix.

We modify our pipeline to use the `full_refresh` option to see how this works:

```py
import dlt

data = [
    {'id': 1, 'name': 'Alice'},
    {'id': 2, 'name': 'Bob'}
]

pipeline = dlt.pipeline(
    pipeline_name='quick_start',
    destination='duckdb',
    dataset_name='mydata',
    full_refresh=True # <-- add this line
)
load_info = pipeline.run(data, table_name="users")
```

Every time you run this pipeline, a new schema will be created in the destination database with a
The data will be loaded into tables in this schema. +For example, the first time you run the pipeline, the schema will be named +`mydata_20230912064403`, the second time it will be named `mydata_20230912064407`, and so on. diff --git a/docs/website/docs/general-usage/full-loading.md b/docs/website/docs/general-usage/full-loading.md index f6f359914d..92fdf064fd 100644 --- a/docs/website/docs/general-usage/full-loading.md +++ b/docs/website/docs/general-usage/full-loading.md @@ -40,15 +40,15 @@ replace_strategy = "staging-optimized" ### The `truncate-and-insert` strategy The `truncate-and-insert` replace strategy is the default and the fastest of all three strategies. If you load data with this setting, then the -destination tables will be truncated at the beginning of the load and the new data will be inserted consecutively but not within the same transaction. +destination tables will be truncated at the beginning of the load and the new data will be inserted consecutively but not within the same transaction. The downside of this strategy is, that your tables will have no data for a while until the load is completed. You -may end up with new data in some tables and no data in other tables if the load fails during the run. Such incomplete load may be however detected by checking if the -[_dlt_loads table contains load id](../dlt-ecosystem/visualizations/understanding-the-tables.md#load-ids) from _dlt_load_id of the replaced tables. If you prefer to have no data downtime, please use one of the other strategies. +may end up with new data in some tables and no data in other tables if the load fails during the run. Such incomplete load may be however detected by checking if the +[_dlt_loads table contains load id](destination-tables.md#load-packages-and-load-ids) from _dlt_load_id of the replaced tables. If you prefer to have no data downtime, please use one of the other strategies. ### The `insert-from-staging` strategy -The `insert-from-staging` is the slowest of all three strategies. It will load all new data into staging tables away from your final destination tables and will then truncate and insert the new data in one transaction. -It also maintains a consistent state between child and parent tables at all times. Use this strategy if you have the requirement for consistent destination datasets with zero downtime and the `optimized` strategy does not work for you. +The `insert-from-staging` is the slowest of all three strategies. It will load all new data into staging tables away from your final destination tables and will then truncate and insert the new data in one transaction. +It also maintains a consistent state between child and parent tables at all times. Use this strategy if you have the requirement for consistent destination datasets with zero downtime and the `optimized` strategy does not work for you. This strategy behaves the same way across all destinations. ### The `staging-optimized` strategy diff --git a/docs/website/docs/getting-started.mdx b/docs/website/docs/getting-started.mdx index 5afa0fb0da..cdbeac8eab 100644 --- a/docs/website/docs/getting-started.mdx +++ b/docs/website/docs/getting-started.mdx @@ -109,7 +109,7 @@ Learn more: - [The full list of available destinations.](dlt-ecosystem/destinations/) - [Exploring the data](dlt-ecosystem/visualizations/exploring-the-data). - What happens after loading? - [Understanding the tables](dlt-ecosystem/visualizations/understanding-the-tables). + [Destination tables](general-usage/destination-tables). 
## Load your data diff --git a/docs/website/docs/user-guides/data-beginner.md b/docs/website/docs/user-guides/data-beginner.md index 69b8b8bdee..e6dd8b8d22 100644 --- a/docs/website/docs/user-guides/data-beginner.md +++ b/docs/website/docs/user-guides/data-beginner.md @@ -116,7 +116,7 @@ Good docs pages to check out: - [Create a pipeline.](../walkthroughs/create-a-pipeline) - [Run a pipeline.](../walkthroughs/run-a-pipeline) - [Deploy a pipeline with GitHub Actions.](../walkthroughs/deploy-a-pipeline/deploy-with-github-actions) -- [Understand the loaded data.](../dlt-ecosystem/visualizations/understanding-the-tables) +- [Understand the loaded data.](../general-usage/destination-tables.md) - [Explore the loaded data in Streamlit.](../dlt-ecosystem/visualizations/exploring-the-data.md) - [Transform the data with SQL or python.](../dlt-ecosystem/transformations) - [Contribute a pipeline.](https://github.com/dlt-hub/verified-sources/blob/master/CONTRIBUTING.md) diff --git a/docs/website/docs/user-guides/data-scientist.md b/docs/website/docs/user-guides/data-scientist.md index c0bcf289be..b8415937e4 100644 --- a/docs/website/docs/user-guides/data-scientist.md +++ b/docs/website/docs/user-guides/data-scientist.md @@ -53,7 +53,7 @@ with the production environment, leading to smoother integration and deployment ### `dlt` is optimized for local use on laptops - It offers a seamless - [integration with Streamlit](../dlt-ecosystem/visualizations/understanding-the-tables#show-tables-and-data-in-the-destination). + [integration with Streamlit](../dlt-ecosystem/visualizations/exploring-the-data.md). This integration enables a smooth and interactive data analysis experience, where Data Scientists can leverage the power of `dlt` alongside Streamlit's intuitive interface and visualization capabilities. @@ -107,7 +107,7 @@ analysis process. Besides, having a schema imposed on the data acts as a technical description of the data, accelerating the discovery process. -See [Understanding the tables](../dlt-ecosystem/visualizations/understanding-the-tables), +See [Destination tables](../general-usage/destination-tables.md) and [Exploring the data](../dlt-ecosystem/visualizations/exploring-the-data) in our documentation. ## Use case #3: Data Preprocessing and Transformation diff --git a/docs/website/docs/user-guides/engineering-manager.md b/docs/website/docs/user-guides/engineering-manager.md index cdd8ad4172..70e23eb2c1 100644 --- a/docs/website/docs/user-guides/engineering-manager.md +++ b/docs/website/docs/user-guides/engineering-manager.md @@ -102,7 +102,7 @@ open source communities can. involved in curation. This makes both the engineer and the others happy. - Better governance with end to end pipelining via dbt: [run dbt packages on the fly](../dlt-ecosystem/transformations/dbt.md), - [lineage out of the box](../dlt-ecosystem/visualizations/understanding-the-tables). + [lineage out of the box](../general-usage/destination-tables.md#data-lineage). - Zero learning curve: Declarative loading, simple functional programming. By using `dlt`'s declarative, standard approach to loading data, there is no complicated code to maintain, and the analysts can thus maintain the code. @@ -144,7 +144,7 @@ The implications: - Rapid Data Exploration and Prototyping: By running in Colab with DuckDB, you can explore semi-structured data much faster by structuring it with `dlt` and analysing it in SQL. 
[Schema inference](../general-usage/schema#data-normalizer), - [exploring the loaded data](../dlt-ecosystem/visualizations/understanding-the-tables#show-tables-and-data-in-the-destination). + [exploring the loaded data](../dlt-ecosystem/visualizations/exploring-the-data.md). - No vendor limits: `dlt` is forever free, with no vendor strings. We do not create value by creating a pain for you and solving it. We create value by supporting you beyond. - `dlt` removes complexity: You can use `dlt` in your existing stack, no overheads, no race conditions, diff --git a/docs/website/sidebars.js b/docs/website/sidebars.js index 4b7aa65c26..59a422a98a 100644 --- a/docs/website/sidebars.js +++ b/docs/website/sidebars.js @@ -133,7 +133,6 @@ const sidebars = { 'dlt-ecosystem/transformations/dbt', 'dlt-ecosystem/transformations/sql', 'dlt-ecosystem/transformations/pandas', - , ] }, { @@ -141,7 +140,6 @@ const sidebars = { label: 'Visualizations', items: [ 'dlt-ecosystem/visualizations/exploring-the-data', - 'dlt-ecosystem/visualizations/understanding-the-tables' ] }, ], @@ -213,6 +211,7 @@ const sidebars = { 'general-usage/resource', 'general-usage/source', 'general-usage/pipeline', + 'general-usage/destination-tables', 'general-usage/state', 'general-usage/incremental-loading', 'general-usage/full-loading',