Commit 5825e88: update statements
1 parent 606d6f1

File tree: 4 files changed, +5 −5 lines

- docs/docs/spark-getting-started.md
- docs/docs/spark-procedures.md
- docs/docs/spark-structured-streaming.md
- site/docs/spark-quickstart.md

docs/docs/spark-getting-started.md (1 addition, 1 deletion)

@@ -36,7 +36,7 @@ spark-shell --packages org.apache.iceberg:iceberg-spark-runtime-{{ sparkVersionM
 !!! info
     <!-- markdown-link-check-disable-next-line -->
-    If you want to include Iceberg in your Spark installation, add the [`iceberg-spark-runtime-{{ sparkVersionMajor }}` Jar](https://search.maven.org/remotecontent?filepath=org/apache/iceberg/iceberg-spark-runtime-{{ sparkVersion }}_{{ scalaVersion }}/{{ icebergVersion }}/iceberg-spark-runtime-{{ sparkVersion }}_{{ scalaVersion }}-{{ icebergVersion }}.jar) to Spark's `jars` folder.
+    If you want to include Iceberg in your Spark installation, add the [`iceberg-spark-runtime-{{ sparkVersionMajor }}` Jar](https://search.maven.org/remotecontent?filepath=org/apache/iceberg/iceberg-spark-runtime-{{ sparkVersionMajor }}/{{ icebergVersion }}/iceberg-spark-runtime-{{ sparkVersionMajor }}-{{ icebergVersion }}.jar) to Spark's `jars` folder.

 ### Adding catalogs

docs/docs/spark-procedures.md (1 addition, 1 deletion)

@@ -22,7 +22,7 @@ title: "Procedures"
 To use Iceberg in Spark, first configure [Spark catalogs](spark-configuration.md).
 For Spark 3.x, stored procedures are only available when using [Iceberg SQL extensions](spark-configuration.md#sql-extensions) in Spark.
-For Spark 4.0, stored procedures are supported natively without requiring the Iceberg SQL extensions. However, note that they are _q_case-sensitive__ in Spark 4.0.
+For Spark 4.0, stored procedures are supported natively without requiring the Iceberg SQL extensions. However, note that they are __case-sensitive__ in Spark 4.0.

 ## Usage
2828

docs/docs/spark-structured-streaming.md (2 additions, 2 deletions)

@@ -73,7 +73,7 @@ data.writeStream
     .outputMode("append")
     .trigger(Trigger.ProcessingTime(1, TimeUnit.MINUTES))
     .option("checkpointLocation", checkpointPath)
-    .to_table("database.table_name")
+    .toTable("database.table_name")
 ```

 In the case of the directory-based Hadoop catalog:

@@ -112,7 +112,7 @@ data.writeStream
     .trigger(Trigger.ProcessingTime(1, TimeUnit.MINUTES))
     .option("fanout-enabled", "true")
     .option("checkpointLocation", checkpointPath)
-    .to_table("database.table_name")
+    .toTable("database.table_name")
 ```

 Fanout writer opens the files per partition value and doesn't close these files till the write task finishes. Avoid using the fanout writer for batch writing, as explicit sort against output rows is cheap for batch workloads.

site/docs/spark-quickstart.md (1 addition, 1 deletion)

@@ -332,7 +332,7 @@ If you already have a Spark environment, you can add Iceberg, using the `--packa
 You can download the runtime by visiting to the [Releases](releases.md) page.

 <!-- markdown-link-check-disable-next-line -->
-[spark-runtime-jar]: https://search.maven.org/remotecontent?filepath=org/apache/iceberg/iceberg-spark-runtime-{{ sparkVersionMajor }}/{{ icebergVersion }}/iceberg-spark-runtime-{{ sparkVersion }}_{{ scalaVersion }}-{{ icebergVersion }}.jar
+[spark-runtime-jar]: https://search.maven.org/remotecontent?filepath=org/apache/iceberg/iceberg-spark-runtime-{{ sparkVersionMajor }}/{{ icebergVersion }}/iceberg-spark-runtime-{{ sparkVersionMajor }}-{{ icebergVersion }}.jar

 #### Learn More
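The URL fixes in this commit all follow the same pattern: after the change, both the Maven filepath directory and the jar file name use the `iceberg-spark-runtime-{{ sparkVersionMajor }}` artifact id. A minimal shell sketch of that pattern, with hypothetical stand-in values for the template variables (not pinned by this commit):

```shell
# Hypothetical stand-ins for the {{ sparkVersionMajor }} and
# {{ icebergVersion }} template variables; substitute the versions you need.
SPARK_VERSION_MAJOR="3.5_2.12"
ICEBERG_VERSION="1.6.1"

# Artifact id and download URL, following the corrected link pattern:
# directory and file name both derive from the same runtime artifact id.
RUNTIME="iceberg-spark-runtime-${SPARK_VERSION_MAJOR}"
JAR_URL="https://search.maven.org/remotecontent?filepath=org/apache/iceberg/${RUNTIME}/${ICEBERG_VERSION}/${RUNTIME}-${ICEBERG_VERSION}.jar"

echo "$JAR_URL"
# To place the jar in Spark's jars folder (network access required), one could run:
# curl -L -o "$SPARK_HOME/jars/${RUNTIME}-${ICEBERG_VERSION}.jar" "$JAR_URL"
```

The `curl` line is left commented out since it depends on network access and a local `$SPARK_HOME`; the URL construction alone is enough to check the pattern.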
