Commit

fixed code strings
AstrakhantsevaAA committed Sep 19, 2023
1 parent 4d5d415 commit 835fc81
Showing 7 changed files with 84 additions and 82 deletions.
4 changes: 3 additions & 1 deletion docs/pydoc-markdown.yml
@@ -12,4 +12,6 @@ renderer:
relative_output_path: api_reference
relative_sidebar_path: sidebar.json
sidebar_top_level_label: dlt
-sidebar_top_level_module_label: null
+sidebar_top_level_module_label: pipeline
+markdown:
+  escape_html_in_docstring: false
70 changes: 35 additions & 35 deletions docs/website/docs/api_reference/__init__/__init__.md
@@ -36,33 +36,33 @@ Please refer to the following doc pages:
- `pipeline_name` _str, optional_ - A name of the pipeline that will be used to identify it in monitoring events and to restore its state and data schemas on subsequent runs.
Defaults to the file name of a pipeline script with `dlt_` prefix added.

-- `pipelines_dir` _str, optional_ - A working directory in which pipeline state and temporary files will be stored. Defaults to user home directory: `pipeline`0.
+- `pipelines_dir` _str, optional_ - A working directory in which pipeline state and temporary files will be stored. Defaults to user home directory: `~/dlt/pipelines/`.

-- `pipeline`1 _TSecretValue, optional_ - A random value used for deterministic hashing during data anonymization. Defaults to a value derived from the pipeline name.
+- `pipeline_salt` _TSecretValue, optional_ - A random value used for deterministic hashing during data anonymization. Defaults to a value derived from the pipeline name.
Default value should not be used for any cryptographic purposes.

-- `pipeline`2 _str | DestinationReference, optional_ - A name of the destination to which dlt will load the data, or a destination module imported from `pipeline`3.
+- `destination` _str | DestinationReference, optional_ - A name of the destination to which dlt will load the data, or a destination module imported from `dlt.destination`.
May also be provided to `run` method of the `pipeline`.

-- `pipeline`6 _str | DestinationReference, optional_ - A name of the destination where dlt will stage the data before final loading, or a destination module imported from `pipeline`3.
+- `staging` _str | DestinationReference, optional_ - A name of the destination where dlt will stage the data before final loading, or a destination module imported from `dlt.destination`.
May also be provided to `run` method of the `pipeline`.

-- `Pipeline`0 _str, optional_ - A name of the dataset to which the data will be loaded. A dataset is a logical group of tables i.e. `Pipeline`1 in relational databases or folder grouping many files.
+- `dataset_name` _str, optional_ - A name of the dataset to which the data will be loaded. A dataset is a logical group of tables i.e. `schema` in relational databases or folder grouping many files.
May also be provided later to the `run` or `load` methods of the `Pipeline`. If not provided at all, then default to the `pipeline_name`

-- `Pipeline`6 _str, optional_ - A path from which the schema `Pipeline`7 file will be imported on each pipeline run. Defaults to None which disables importing.
+- `import_schema_path` _str, optional_ - A path from which the schema `yaml` file will be imported on each pipeline run. Defaults to None which disables importing.

-- `Pipeline`8 _str, optional_ - A path where the schema `Pipeline`7 file will be exported after every schema change. Defaults to None which disables exporting.
+- `export_schema_path` _str, optional_ - A path where the schema `yaml` file will be exported after every schema change. Defaults to None which disables exporting.

-- `run`0 _bool, optional_ - When set to True, each instance of the pipeline with the `pipeline_name` starts from scratch when run and loads the data to a separate dataset.
-The datasets are identified by `run`2 + datetime suffix. Use this setting whenever you experiment with your data to be sure you start fresh on each run. Defaults to False.
+- `full_refresh` _bool, optional_ - When set to True, each instance of the pipeline with the `pipeline_name` starts from scratch when run and loads the data to a separate dataset.
+The datasets are identified by `dataset_name_` + datetime suffix. Use this setting whenever you experiment with your data to be sure you start fresh on each run. Defaults to False.

-- `run`3 _Any, optional_ - Credentials for the `pipeline`2 i.e. database connection string or a dictionary with Google cloud credentials.
-In most cases should be set to None, which lets `dlt` to use `run`6 or environment variables to infer the right credentials values.
+- `credentials` _Any, optional_ - Credentials for the `destination` i.e. database connection string or a dictionary with Google cloud credentials.
+In most cases should be set to None, which lets `dlt` to use `secrets.toml` or environment variables to infer the right credentials values.

-- `run`7 _str, Collector_ - A progress monitor that shows progress bars, console or log messages with current information on sources, resources, data items etc. processed in
-`extract`, `normalize` and `load` stage. Pass a string with a collector name or configure your own by choosing from `extract`1 module.
-We support most of the progress libraries: try passing `extract`2, `extract`3 or `extract`4 or `extract`5 to write to console/log.
+- `progress` _str, Collector_ - A progress monitor that shows progress bars, console or log messages with current information on sources, resources, data items etc. processed in
+`extract`, `normalize` and `load` stage. Pass a string with a collector name or configure your own by choosing from `dlt.progress` module.
+We support most of the progress libraries: try passing `tqdm`, `enlighten` or `alive_progress` or `log` to write to console/log.


**Returns**:
@@ -115,43 +115,43 @@ Summary:
This method will `extract` the data from the `data` argument, infer the schema, `normalize` the data into a load package (i.e. jsonl or PARQUET files representing tables) and then `load` such packages into the `destination`.

The data may be supplied in several forms:
-- a `list` or `Iterable` of any JSON-serializable objects i.e. `destination`0
-- any `destination`1 or a function that yield (`destination`2) i.e. `destination`3
-- a function or a list of functions decorated with @dlt.resource i.e. `destination`4
+- a `list` or `Iterable` of any JSON-serializable objects i.e. `dlt.run([1, 2, 3], table_name="numbers")`
+- any `Iterator` or a function that yield (`Generator`) i.e. `dlt.run(range(1, 10), table_name="range")`
+- a function or a list of functions decorated with @dlt.resource i.e. `dlt.run([chess_players(title="GM"), chess_games()])`
- a function or a list of functions decorated with @dlt.source.

-Please note that `destination`5 deals with `destination`6, `destination`7, `destination`8 and `destination`9 objects, so you are free to load binary data or documents containing dates.
+Please note that `dlt` deals with `bytes`, `datetime`, `decimal` and `uuid` objects, so you are free to load binary data or documents containing dates.
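For orientation, a minimal sketch of two of these input forms; the sample values and the `duckdb` destination are illustrative assumptions:

```py
import dlt
from datetime import datetime, timezone
from decimal import Decimal

# A plain generator function: by default its name ("events") becomes the table name.
def events():
    # bytes, datetime and Decimal values are handled natively, as noted above.
    yield {"id": 1, "created_at": datetime.now(timezone.utc), "amount": Decimal("9.99"), "blob": b"\x00\x01"}

# A decorated resource: it carries its own table schema, including the table name.
@dlt.resource(name="games", write_disposition="append")
def games():
    yield {"game_id": 7, "result": "1-0"}

# Either form can be handed to `dlt.run`; `duckdb` is an assumed destination here.
dlt.run(events(), destination="duckdb", dataset_name="demo")  # loads into table "events"
dlt.run(games(), destination="duckdb", dataset_name="demo")   # loads into table "games"
```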

Execution:
-The `dataset_name`0 method will first use `dataset_name`1 method to synchronize pipeline state and schemas with the destination. You can disable this behavior with `dataset_name`2 configuration option.
-Next, it will make sure that data from the previous is fully processed. If not, `dataset_name`0 method normalizes and loads pending data items.
+The `run` method will first use `sync_destination` method to synchronize pipeline state and schemas with the destination. You can disable this behavior with `restore_from_destination` configuration option.
+Next, it will make sure that data from the previous is fully processed. If not, `run` method normalizes and loads pending data items.
Only then the new data from `data` argument is extracted, normalized and loaded.

**Arguments**:

- `data` _Any_ - Data to be loaded to destination.

-- `destination` _str | DestinationReference, optional_ - A name of the destination to which dlt will load the data, or a destination module imported from `dataset_name`7.
-If not provided, the value passed to `dataset_name`8 will be used.
+- `destination` _str | DestinationReference, optional_ - A name of the destination to which dlt will load the data, or a destination module imported from `dlt.destination`.
+If not provided, the value passed to `dlt.pipeline` will be used.

-- `dataset_name` _str, optional_ - A name of the dataset to which the data will be loaded. A dataset is a logical group of tables i.e. `extract`0 in relational databases or folder grouping many files.
-If not provided, the value passed to `dataset_name`8 will be used. If not provided at all, then default to the `extract`2
+- `dataset_name` _str, optional_ - A name of the dataset to which the data will be loaded. A dataset is a logical group of tables i.e. `schema` in relational databases or folder grouping many files.
+If not provided, the value passed to `dlt.pipeline` will be used. If not provided at all, then default to the `pipeline_name`

-- `extract`3 _Any, optional_ - Credentials for the `destination` i.e. database connection string or a dictionary with Google cloud credentials.
-In most cases should be set to None, which lets `destination`5 to use `extract`6 or environment variables to infer the right credentials values.
+- `credentials` _Any, optional_ - Credentials for the `destination` i.e. database connection string or a dictionary with Google cloud credentials.
+In most cases should be set to None, which lets `dlt` to use `secrets.toml` or environment variables to infer the right credentials values.

-- `extract`7 _str, optional_ - The name of the table to which the data should be loaded within the `extract`8. This argument is required for a `data` that is a list/Iterable or Iterator without `data`0 attribute.
+- `table_name` _str, optional_ - The name of the table to which the data should be loaded within the `dataset`. This argument is required for a `data` that is a list/Iterable or Iterator without `__name__` attribute.
The behavior of this argument depends on the type of the `data`:
-* generator functions: the function name is used as table name, `extract`7 overrides this default
-* `data`3: resource contains the full table schema, and that includes the table name. `extract`7 will override this property. Use with care!
-* `data`5: source contains several resources each with a table schema. `extract`7 will override all table names within the source and load the data into a single table.
+* generator functions: the function name is used as table name, `table_name` overrides this default
+* `@dlt.resource`: resource contains the full table schema, and that includes the table name. `table_name` will override this property. Use with care!
+* `@dlt.source`: source contains several resources each with a table schema. `table_name` will override all table names within the source and load the data into a single table.

-- `data`7 _Literal["skip", "append", "replace", "merge"], optional_ - Controls how to write data to a table. `data`8 will always add new data at the end of the table. `data`9 will replace existing data with new data. `normalize`0 will prevent data from loading. "merge" will deduplicate and merge data based on "primary_key" and "merge_key" hints. Defaults to "append".
-Please note that in case of `normalize`1 the table schema value will be overwritten and in case of `normalize`2, the values in all resources will be overwritten.
+- `write_disposition` _Literal["skip", "append", "replace", "merge"], optional_ - Controls how to write data to a table. `append` will always add new data at the end of the table. `replace` will replace existing data with new data. `skip` will prevent data from loading. "merge" will deduplicate and merge data based on "primary_key" and "merge_key" hints. Defaults to "append".
+Please note that in case of `dlt.resource` the table schema value will be overwritten and in case of `dlt.source`, the values in all resources will be overwritten.

-- `normalize`3 _Sequence[TColumnSchema], optional_ - A list of column schemas. Typed dictionary describing column names, data types, write disposition and performance hints that gives you full control over the created table schema.
+- `columns` _Sequence[TColumnSchema], optional_ - A list of column schemas. Typed dictionary describing column names, data types, write disposition and performance hints that gives you full control over the created table schema.

-- `extract`0 _Schema, optional_ - An explicit `normalize`5 object in which all table schemas will be grouped. By default, `destination`5 takes the schema from the source (if passed in `data` argument) or creates a default one itself.
+- `schema` _Schema, optional_ - An explicit `Schema` object in which all table schemas will be grouped. By default, `dlt` takes the schema from the source (if passed in `data` argument) or creates a default one itself.


**Raises**:
@@ -160,5 +160,5 @@

**Returns**:

-- `load`1 - Information on loaded data including the list of package ids and failed job statuses. Please note that `destination`5 will not raise if a single job terminally fails. Such information is provided via LoadInfo.
+- `LoadInfo` - Information on loaded data including the list of package ids and failed job statuses. Please note that `dlt` will not raise if a single job terminally fails. Such information is provided via LoadInfo.
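Read together with the `dlt.pipeline` arguments earlier in this file, the corrected names map onto a call roughly like the following sketch; the pipeline name, `duckdb` destination and sample rows are assumptions made for illustration:

```py
import dlt

pipeline = dlt.pipeline(
    pipeline_name="players_pipeline",
    destination="duckdb",        # may also be passed later to `run`
    dataset_name="chess_data",   # defaults to the pipeline name if omitted
    progress="log",              # or "tqdm", "enlighten", "alive_progress"
)

# `table_name` is required here because a plain list has no `__name__` attribute.
load_info = pipeline.run(
    [{"id": 1, "title": "GM"}, {"id": 2, "title": "IM"}],
    table_name="players",
    write_disposition="append",
)

# LoadInfo lists package ids and job statuses; a single terminally failed job does not raise.
print(load_info)
```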

14 changes: 7 additions & 7 deletions docs/website/docs/api_reference/dbt.md
@@ -22,8 +22,8 @@ as required by `pipeline`.

- `pipeline` _Pipeline_ - A pipeline for which the required dbt dependencies are inferred
- `venv_path` _str, optional_ - A path where virtual environment is created or restored from.
-If relative path is provided, the environment will be created within pipeline's working directory. Defaults to "dbt".
-- `dbt_version` _str, optional_ - Version of dbt to be used. Exact version (ie. "1.2.4") or pip requirements string (ie. ">=1.1<1.5" may be provided).
+If relative path is provided, the environment will be created within pipeline's working directory. Defaults to "dbt".
+- `dbt_version` _str, optional_ - Version of dbt to be used. Exact version (ie. "1.2.4") or pip requirements string (ie. ">=1.1<1.5" may be provided).


**Returns**:
@@ -54,14 +54,14 @@ The wrapper creates a `dbt` profile from `dlt` pipeline configuration. Specifica

- `pipeline` _Pipeline_ - A pipeline containing destination, credentials and dataset_name used to configure the dbt package.
- `package_location` _str_ - A git repository url to be cloned or a local path where dbt package is present
-- `dbt`0 _str, optional_ - A branch name, tag name or commit-id to check out. Defaults to None.
-- `dbt`1 _TSecretValue, optional_ - SSH key to be used to clone private repositories. Defaults to TSecretValue(&quot;&quot;).
-- `dbt`2 _bool, optional_ - If set to True (default), the wrapper will automatically fall back to full-refresh mode when schema is out of sync
-- `dbt`3 - https://docs.getdbt.com/docs/build/incremental-models#what-if-the-columns-of-my-incremental-model-change
+- `package_repository_branch` _str, optional_ - A branch name, tag name or commit-id to check out. Defaults to None.
+- `package_repository_ssh_key` _TSecretValue, optional_ - SSH key to be used to clone private repositories. Defaults to TSecretValue("").
+- `auto_full_refresh_when_out_of_sync` _bool, optional_ - If set to True (default), the wrapper will automatically fall back to full-refresh mode when schema is out of sync
+- `See` - https://docs.getdbt.com/docs/build/incremental-models#what-if-the-columns-of-my-incremental-model-change
- `venv` _Venv, optional_ - A virtual environment with required dbt dependencies. Defaults to None which will execute dbt package in current environment.


**Returns**:

-- `dbt`5 - A configured and authenticated Python `dbt` wrapper
+- `DBTPackageRunner` - A configured and authenticated Python `dbt` wrapper
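As a rough sketch of how the two helpers documented in this file are typically combined, assuming the `dlt.dbt.get_venv` / `dlt.dbt.package` entry points and the example repository URL (none of which are confirmed by this diff):

```py
import dlt

pipeline = dlt.pipeline(pipeline_name="jaffle", destination="duckdb", dataset_name="jaffle_dbt")

# Create (or restore) a virtual environment with the dbt dependencies inferred from the pipeline.
venv = dlt.dbt.get_venv(pipeline, venv_path="dbt", dbt_version=">=1.1,<1.5")

# Wrap a dbt package; the profile and credentials are generated from the pipeline configuration.
dbt = dlt.dbt.package(
    pipeline,
    "https://github.com/dbt-labs/jaffle_shop.git",  # assumed example package location
    auto_full_refresh_when_out_of_sync=True,
    venv=venv,
)

# Run all models in the package against the pipeline's dataset.
models = dbt.run_all()
for m in models:
    print(m.model_name, m.status)
```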

10 changes: 5 additions & 5 deletions docs/website/docs/api_reference/helpers.md
@@ -15,12 +15,12 @@ A retry strategy for Tenacity that, with default setting, will repeat `load` ste
Use this condition with tenacity `retry_if_exception`. Terminal exceptions are exceptions that will not go away when operations is repeated.
Examples: missing configuration values, Authentication Errors, terminally failed jobs exceptions etc.

-&gt;&gt;&gt; data = source(...)
-&gt;&gt;&gt; for attempt in Retrying(stop=stop_after_attempt(3), retry=retry_if_exception(retry_load(())), reraise=True):
-&gt;&gt;&gt; with attempt:
-&gt;&gt;&gt; p.run(data)
+>>> data = source(...)
+>>> for attempt in Retrying(stop=stop_after_attempt(3), retry=retry_if_exception(retry_load(())), reraise=True):
+>>> with attempt:
+>>> p.run(data)

**Arguments**:

-- `retry_on_pipeline_steps` _Tuple[TPipelineStep, ...], optional_ - which pipeline steps are allowed to be repeated. Default: &quot;load&quot;
+- `retry_on_pipeline_steps` _Tuple[TPipelineStep, ...], optional_ - which pipeline steps are allowed to be repeated. Default: "load"
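Putting the docstring example above into a fuller sketch (the `dlt.pipeline.helpers` import path for `retry_load` and the `duckdb` destination are assumptions; the tenacity usage follows the snippet shown):

```py
import dlt
from tenacity import Retrying, retry_if_exception, stop_after_attempt

# Assumed import path for the helper documented above.
from dlt.pipeline.helpers import retry_load

def incidents():
    yield from [{"id": 1, "severity": "high"}, {"id": 2, "severity": "low"}]

pipeline = dlt.pipeline(pipeline_name="retry_demo", destination="duckdb", dataset_name="ops")

# Retry only the `load` step (the default), up to 3 attempts, re-raising terminal errors.
for attempt in Retrying(stop=stop_after_attempt(3), retry=retry_if_exception(retry_load(("load",))), reraise=True):
    with attempt:
        pipeline.run(incidents())
```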
