diff --git a/docs/website/docs/dlt-ecosystem/verified-sources/arrow-pandas.md b/docs/website/docs/dlt-ecosystem/verified-sources/arrow-pandas.md
index 2d839d2d3b..f3ac6f83d6 100644
--- a/docs/website/docs/dlt-ecosystem/verified-sources/arrow-pandas.md
+++ b/docs/website/docs/dlt-ecosystem/verified-sources/arrow-pandas.md
@@ -13,7 +13,7 @@ or [book a call](https://calendar.app.google/kiLhuMsWKpZUpfho6) with our support
 :::
 
 You can load data directly from an Arrow table or Pandas dataframe.
-This is supported by all destinations, but recommended especially when using destinations that support the `parquet` foramt natively (e.g. [Snowflake](../destinations/snowflake.md) and [Filesystem](../destinations/filesystem.md)).
+This is supported by all destinations, but it is especially recommended for destinations that natively support the `parquet` file format (e.g. [Snowflake](../destinations/snowflake.md) and [Filesystem](../destinations/filesystem.md)).
 See the [destination support](#destination-support-and-fallback) section for more information.
 
 When used with a destination that supports `parquet`, this is a more performant way to load structured data, since `dlt` bypasses many of the processing steps normally involved in passing JSON objects through the pipeline.
@@ -151,4 +151,4 @@ pipeline.run(df.to_dict(orient='records'), table_name="orders")
 # yield arrow table
 pipeline.run(table.to_pylist(), table_name="orders")
 ```
-Both Pandas and Arrow allow to stream records in batches.
\ No newline at end of file
+Both Pandas and Arrow allow streaming records in batches.