
Commit 4f4ccd7

fix snippets
sh-rp committed Nov 25, 2024
1 parent 3d952fe commit 4f4ccd7
Showing 2 changed files with 2 additions and 2 deletions.
docs/website/docs/general-usage/dataset-access/dataset.md (2 changes: 1 addition & 1 deletion)
@@ -220,7 +220,7 @@ Since the `iter_arrow` and `iter_df` methods are generators that iterate over the
limited_items_relation = dataset.items.limit(1_000_000)

# Create a new pipeline
- other_pipeline = ...
+ other_pipeline = dlt.pipeline(pipeline_name="other_pipeline", destination="duckdb")

# We can now load these 1m rows into this pipeline in 10k chunks
other_pipeline.run(limited_items_relation.iter_arrow(chunk_size=10_000), table_name="limited_items")
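For context, this hunk documents streaming a capped relation from one pipeline's dataset into another pipeline in arrow batches. A minimal end-to-end sketch of that flow, assuming the `dataset` object is obtained via `pipeline.dataset()` as the surrounding docs page describes, and using hypothetical pipeline, dataset, and table contents, could look like this:

```py
import dlt

# source pipeline with some sample rows in an "items" table (hypothetical data)
source_pipeline = dlt.pipeline(
    pipeline_name="source_pipeline", destination="duckdb", dataset_name="source_data"
)
source_pipeline.run([{"id": i} for i in range(100)], table_name="items")

# readable view over the loaded data (assumed pipeline.dataset() accessor)
dataset = source_pipeline.dataset()

# cap the relation at 1m rows (only 100 exist in this toy example)
limited_items_relation = dataset.items.limit(1_000_000)

# second pipeline that receives the rows
other_pipeline = dlt.pipeline(
    pipeline_name="other_pipeline", destination="duckdb", dataset_name="copied_data"
)

# stream the relation across in arrow batches of up to 10k rows
other_pipeline.run(limited_items_relation.iter_arrow(chunk_size=10_000), table_name="limited_items")
```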
@@ -36,7 +36,7 @@ The cursor returned by `execute_query` has several methods for retrieving the da
The code below shows how to retrieve the data as a Pandas DataFrame and then manipulate it in memory:

```py
- pipeline = dlt.pipeline(...)
+ pipeline = dlt.pipeline(pipeline_name="my_pipeline", destination="duckdb")
with pipeline.sql_client() as client:
with client.execute_query(
'SELECT "reactions__+1", "reactions__-1", reactions__laugh, reactions__hooray, reactions__rocket FROM issues'
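This second hunk is cut off by the diff view before the query result is consumed. As a rough sketch only, assuming the cursor returned by `execute_query` exposes a `df()` helper for retrieving a Pandas DataFrame (as the surrounding docs describe) and that an `issues` table with these reaction columns was loaded earlier, the snippet could continue along these lines:

```py
import dlt

pipeline = dlt.pipeline(pipeline_name="my_pipeline", destination="duckdb")

with pipeline.sql_client() as client:
    with client.execute_query(
        'SELECT "reactions__+1", "reactions__-1", reactions__laugh, reactions__hooray, reactions__rocket FROM issues'
    ) as cursor:
        # assumed df() helper: pull the full result set into a Pandas DataFrame
        reactions = cursor.df()

# from here on it is plain in-memory Pandas, e.g. totals per reaction column
print(reactions.sum().sort_values(ascending=False))
```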
