removes non-docs content from docs (#639)
* removes content that is not a tutorial, how-to, reference, or dlt-related knowledge from docs

* post merge fixes
rudolfix authored Sep 17, 2023
1 parent 6806f5a commit 49cce03
Showing 26 changed files with 116 additions and 1,017 deletions.
@@ -108,5 +108,5 @@ To try out schema evolution with `dlt`, check out our [colab demo.](https://cola
 ### Want more?

 - Join our [Slack](https://join.slack.com/t/dlthub-community/shared_invite/zt-1slox199h-HAE7EQoXmstkP_bTqal65g)
-- Read our [docs on implementing schema evolution](https://dlthub.com/docs/reference/explainers/schema-evolution)
+- Read our [schema evolution blog post](https://dlthub.com/docs/blog/schema-evolution)
 - Stay tuned for the next article in the series: *How to do schema evolution with* `dlt` *in the most effective way*
@@ -1,7 +1,12 @@
 ---
-title: Schema evolution
-description: Schema evolution with dlt
-keywords: [schema evolution, schema versioning, data contracts]
+slug: schema-evolution
+title: "Schema Evolution"
+authors:
+  name: Adrian Brudaru
+  title: Schema Evolution
+  url: https://github.com/adrianbr
+  image_url: https://avatars.githubusercontent.com/u/5762770?v=4
+tags: [data engineer shortage, structured data, schema evolution]
 ---

 # Schema evolution
@@ -131,10 +136,10 @@ business-logic tests, you would still need to implement them in a custom way.
 ## The implementation recipe

 1. Use `dlt`. It will automatically infer and version schemas, so you can simply check if there are
-   changes. You can just use the [normaliser + loader](../../general-usage/pipeline.md) or
-   [build extraction with dlt](../../general-usage/resource.md). If you want to define additional
-   constraints, you can do so in the [schema](../../general-usage/schema.md).
-1. [Define your slack hook](../../running-in-production/running.md#using-slack-to-send-messages) or
+   changes. You can just use the [normaliser + loader](https://dlthub.com/docs/general-usage/pipeline.md) or
+   [build extraction with dlt](https://dlthub.com/docs/general-usage/resource.md). If you want to define additional
+   constraints, you can do so in the [schema](https://dlthub.com/docs/general-usage/schema.md).
+1. [Define your slack hook](https://dlthub.com/docs/running-in-production/running.md#using-slack-to-send-messages) or
    create your own notification function. Make sure the slack channel contains the data producer and
    any stakeholders.
-1. [Capture the load job info and send it to the hook](../../running-in-production/running#inspect-save-and-alert-on-schema-changes).
+1. [Capture the load job info and send it to the hook](https://dlthub.com/docs/running-in-production/running#inspect-save-and-alert-on-schema-changes).
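The last two recipe steps in this hunk — a notification function plus a hook that forwards schema updates to Slack — can be sketched in plain Python. This is a minimal illustration, not dlt's actual API: the shape of the schema-update dict and both helper names are assumptions for this sketch (only the Slack incoming-webhook payload format, `{"text": ...}`, is the real one).

```python
import json
from urllib import request


def format_schema_update(schema_update: dict) -> str:
    """Render a table -> columns schema-update dict as a Slack-friendly message.

    The input shape (table name mapped to a dict with a "columns" key) is an
    assumption for illustration.
    """
    lines = []
    for table_name, table in schema_update.items():
        cols = table.get("columns", {})
        col_desc = ", ".join(
            f"{name} ({col.get('data_type', '?')})" for name, col in cols.items()
        )
        lines.append(f"Table `{table_name}` changed: {col_desc}")
    return "\n".join(lines)


def notify_slack(webhook_url: str, message: str) -> None:
    """Post the message to a Slack incoming webhook (step 2 of the recipe)."""
    payload = json.dumps({"text": message}).encode("utf-8")
    req = request.Request(
        webhook_url, data=payload, headers={"Content-Type": "application/json"}
    )
    request.urlopen(req)  # fire-and-forget; add error handling in production


# Hypothetical schema update a load might report:
update = {"users": {"columns": {"last_login": {"data_type": "timestamp"}}}}
print(format_schema_update(update))  # → Table `users` changed: last_login (timestamp)
```

In a real pipeline you would feed whatever schema-change information your load produces into `format_schema_update` and call `notify_slack` only when the message is non-empty, so the channel only hears about actual changes.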
2 changes: 1 addition & 1 deletion docs/website/blog/2023-06-15-automating-data-engineers.md
@@ -122,5 +122,5 @@ Not only that, but doing things this way lets your team focus on what they do be
 2. Notify stakeholders and producers of data changes, so they can curate it.
 3. Don’t explore json with data engineers - let analyst explore structured data.

-Ready to stop the pain? Read [this explainer on how to do schema evolution with dlt](/docs/reference/explainers/schema-evolution).
+Ready to stop the pain? Read [this explainer on how to do schema evolution with dlt](https://dlthub.com/docs/blog/schema-evolution).
 Want to discuss? Join our [slack](https://join.slack.com/t/dlthub-community/shared_invite/zt-1n5193dbq-rCBmJ6p~ckpSFK4hCF2dYA).
2 changes: 0 additions & 2 deletions docs/website/docs/build-a-pipeline-tutorial.md
@@ -413,8 +413,6 @@ These governance features in `dlt` pipelines contribute to better data managemen
 compliance adherence, and overall data governance, promoting data consistency, traceability, and
 control throughout the data processing lifecycle.

-Read more about [schema evolution.](reference/explainers/schema-evolution.md)
-
 ### Scaling and finetuning

 `dlt` offers several mechanism and configuration options to scale up and finetune pipelines:
18 changes: 18 additions & 0 deletions docs/website/docs/dlt-ecosystem/deployments/index.md
@@ -0,0 +1,18 @@
+---
+title: Deployments
+description: dlt can run on almost any python environment and hardware
+keywords: [dlt, running, environment]
+---
+import DocCardList from '@theme/DocCardList';
+
+# Deployments
+Where Python runs, `dlt` will run too. Besides deployments you see below, our users report that they ran us successfully on:
+* notebook environments including Colab
+* orchestrators including Kestra, Prefect or Dagster
+* AWS Lambda and other serverless
+* local laptops with `duckdb` or `weaviate` as destinations
+* Github codespaces and other devcontainers
+* regular VMs from all major providers: AWS, GCP or Azure
+* containers via Docker, docker-compose and Kubernetes
+
+<DocCardList />

This file was deleted.

This file was deleted.

This file was deleted.

This file was deleted.

56 changes: 0 additions & 56 deletions docs/website/docs/dlt-ecosystem/deployments/where-can-dlt-run.md

This file was deleted.


0 comments on commit 49cce03
