
Commit d127119

fix links

sh-rp committed Nov 18, 2024
1 parent d1d5001 commit d127119

Showing 2 changed files with 3 additions and 3 deletions.

docs/website/docs/general-usage/destination-tables.md (4 changes: 2 additions & 2 deletions)
@@ -35,7 +35,7 @@ will behave similarly and have similar concepts.

:::

- Running this pipeline will create a database schema in the destination database (DuckDB) along with a table named `users`. Quick tip: you can use the `show` command of the `dlt pipeline` CLI [to see the tables](../dlt-ecosystem/dataset-access/streamlit) in the destination database.
+ Running this pipeline will create a database schema in the destination database (DuckDB) along with a table named `users`. Quick tip: you can use the `show` command of the `dlt pipeline` CLI [to see the tables](../general-usage/dataset-access/streamlit) in the destination database.

## Database schema

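For context on the passage being relinked: a minimal sketch of the kind of pipeline that docs page describes, loading a `users` table into DuckDB. The pipeline, dataset, and data here are illustrative assumptions, not from this commit:

```python
import dlt

# Example rows for the `users` table discussed on the docs page.
users = [
    {"id": 1, "name": "Alice"},
    {"id": 2, "name": "Bob"},
]

# Pipeline, destination, and dataset names are assumptions for illustration.
pipeline = dlt.pipeline(
    pipeline_name="quick_start",
    destination="duckdb",
    dataset_name="mydata",
)
load_info = pipeline.run(users, table_name="users")
print(load_info)
```

After a run, `dlt pipeline quick_start show` launches the Streamlit app that the corrected links point to.
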
@@ -190,7 +190,7 @@ The `_dlt_loads` table will look like this:

The `_dlt_loads` table tracks complete loads and allows chaining transformations on top of them. Many destinations do not support distributed and long-running transactions (e.g., Amazon Redshift). In that case, the user may see the partially loaded data. It is possible to filter such data out: any row with a `load_id` that does not exist in `_dlt_loads` is not yet completed. The same procedure may be used to identify and delete data for packages that never got completed.

- For each load, you can test and [alert](../running-in-production/alerting.md) on anomalies (e.g., no data, too much loaded to a table). There are also some useful load stats in the `Load info` tab of the [Streamlit app](../dlt-ecosystem/dataset-access/streamlit) mentioned above.
+ For each load, you can test and [alert](../running-in-production/alerting.md) on anomalies (e.g., no data, too much loaded to a table). There are also some useful load stats in the `Load info` tab of the [Streamlit app](../general-usage/dataset-access/streamlit) mentioned above.

You can add [transformations](../dlt-ecosystem/transformations/) and chain them together using the `status` column. You start the transformation for all the data with a particular `load_id` with a status of 0 and then update it to 1. The next transformation starts with the status of 1 and is then updated to 2. This can be repeated for every additional transformation.

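The `_dlt_loads` bookkeeping described in this hunk can be queried directly; a sketch assuming the DuckDB file, dataset, and table names from the earlier example (all hypothetical):

```python
import duckdb

# dlt names the DuckDB file after the pipeline by default (assumed here).
con = duckdb.connect("quick_start.duckdb")

# Keep only rows whose load package is recorded in `_dlt_loads`;
# rows whose `_dlt_load_id` is missing from that table never completed.
rows = con.execute(
    """
    SELECT u.*
    FROM mydata.users AS u
    WHERE u._dlt_load_id IN (SELECT load_id FROM mydata._dlt_loads)
    """
).fetchall()
```
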
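And the status-based chaining from the transformations paragraph, sketched under the same assumptions: each transformation claims loads at one status value and bumps it for the next step in the chain:

```python
import duckdb

con = duckdb.connect("quick_start.duckdb")

# Select loads no transformation has processed yet (status 0),
# transform them, then set status to 1 so the next transformation
# (which looks for status 1) can pick them up.
pending = con.execute(
    "SELECT load_id FROM mydata._dlt_loads WHERE status = 0"
).fetchall()
for (load_id,) in pending:
    # ... run the first transformation for this load_id here ...
    con.execute(
        "UPDATE mydata._dlt_loads SET status = 1 WHERE load_id = ?",
        [load_id],
    )
```
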

docs/website/docs/walkthroughs/adjust-a-schema.md (2 changes: 1 addition & 1 deletion)
@@ -113,7 +113,7 @@ players_games:
```
Run the pipeline script again and make sure that the change is visible in the export schema. Then,
- [launch the Streamlit app](../dlt-ecosystem/dataset-access/streamlit) to see the changed data.
+ [launch the Streamlit app](../general-usage/dataset-access/streamlit) to see the changed data.
:::note
Do not rename the tables or columns in the YAML file. `dlt` infers those from the data, so the schema will be recreated.
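The walkthrough touched by this second file revolves around export/import schema folders; a minimal sketch of that setup (pipeline name, dataset, and paths are assumptions):

```python
import dlt

# Export the inferred schema after each run and pick up manual YAML edits
# (like the `players_games` change above) from the import folder.
pipeline = dlt.pipeline(
    pipeline_name="chess_pipeline",
    destination="duckdb",
    dataset_name="chess_data",
    export_schema_path="schemas/export",
    import_schema_path="schemas/import",
)
```
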
