Add descriptions (#463)
* Add descriptions

* Fix spelling

* fix whitespace

* fix whitespace
fhennig authored Sep 17, 2024
1 parent 9cd61dd commit 38d4a25
Showing 9 changed files with 66 additions and 45 deletions.
40 changes: 23 additions & 17 deletions docs/modules/spark-k8s/pages/getting_started/first_steps.adoc
@@ -1,14 +1,16 @@
= First steps
:description: Create and run your first Spark job with the Stackable Operator. Includes steps for job setup, verification, and inspecting driver logs.

Once you have followed the steps in the xref:getting_started/installation.adoc[] section to install the operator and its dependencies, you will now create a Spark job. Afterwards you can <<_verify_that_it_works, verify that it works>> by looking at the logs from the driver pod.
Once you have followed the steps in the xref:getting_started/installation.adoc[] section to install the operator and its dependencies, you will now create a Spark job.
Afterwards you can <<_verify_that_it_works, verify that it works>> by looking at the logs from the driver pod.

== Starting a Spark job

A Spark application is made up of three components:

- Job: this will build a `spark-submit` command from the resource, passing this to internal spark code together with templates for building the driver and executor pods
- Driver: the driver starts the designated number of executors and removes them when the job is completed.
- Executor(s): responsible for executing the job itself
* Job: builds a `spark-submit` command from the resource and passes it to the internal Spark code, together with templates for building the driver and executor pods
* Driver: the driver starts the designated number of executors and removes them when the job is completed.
* Executor(s): responsible for executing the job itself

Create a `SparkApplication`:

@@ -19,34 +21,38 @@
include::example$getting_started/getting_started.sh[tag=install-sparkapp]

Where:

- `metadata.name` contains the name of the SparkApplication
- `spec.version`: SparkApplication version (1.0). This can be freely set by the users and is added by the operator as label to all workload resources created by the application.
- `spec.sparkImage`: the image used by the job, driver and executor pods. This can be a custom image built by the user or an official Stackable image. Available official images are listed in the Stackable https://repo.stackable.tech/#browse/browse:docker:v2%2Fstackable%spark-k8s%2Ftags[image registry].
- `spec.mode`: only `cluster` is currently supported
- `spec.mainApplicationFile`: the artifact (Java, Scala or Python) that forms the basis of the Spark job. This path is relative to the image, so in this case we are running an example python script (that calculates the value of pi): it is bundled with the Spark code and therefore already present in the job image
- `spec.driver`: driver-specific settings.
- `spec.executor`: executor-specific settings.
* `metadata.name` contains the name of the SparkApplication
* `spec.version`: SparkApplication version (1.0). This can be freely set by the user and is added by the operator as a label to all workload resources created by the application.
* `spec.sparkImage`: the image used by the job, driver and executor pods. This can be a custom image built by the user or an official Stackable image. Available official images are listed in the Stackable https://repo.stackable.tech/#browse/browse:docker:v2%2Fstackable%spark-k8s%2Ftags[image registry].
* `spec.mode`: only `cluster` is currently supported
* `spec.mainApplicationFile`: the artifact (Java, Scala or Python) that forms the basis of the Spark job. This path is relative to the image, so in this case we are running an example Python script (that calculates the value of pi): it is bundled with the Spark code and therefore already present in the job image
* `spec.driver`: driver-specific settings.
* `spec.executor`: executor-specific settings (see the sketch below for how these fields fit together).
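
Putting these fields together, a minimal manifest might look roughly like this sketch (the `sparkImage` shape, the product version and the path to the bundled pi example are illustrative assumptions; the resource created by the command above is authoritative):

[source,yaml]
----
apiVersion: spark.stackable.tech/v1alpha1
kind: SparkApplication
metadata:
  name: pyspark-pi                # metadata.name
spec:
  version: "1.0"                  # freely chosen, added as a label to all workload resources
  sparkImage:
    productVersion: 3.5.1         # assumed version; any official Stackable Spark image works
  mode: cluster                   # only cluster mode is currently supported
  mainApplicationFile: local:///stackable/spark/examples/src/main/python/pi.py  # assumed path to the bundled example
  driver:
    config: {}                    # driver-specific settings go here
  executor:
    instances: 3                  # executor-specific settings; 3 executors as in the example below
----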

== Verify that it works

As mentioned above, the `SparkApplication` that has just been created will build a `spark-submit` command and pass it to the driver pod, which in turn will create executor pods that run for the duration of the job before being clean up. A running process will look like this:
As mentioned above, the SparkApplication that has just been created will build a `spark-submit` command and pass it to the driver Pod, which in turn will create executor Pods that run for the duration of the job before being cleaned up.
A running process will look like this:

image::getting_started/spark_running.png[Spark job]

- `pyspark-pi-xxxx`: this is the initialising job that creates the spark-submit command (named as `metadata.name` with a unique suffix)
- `pyspark-pi-xxxxxxx-driver`: the driver pod that drives the execution
- `pythonpi-xxxxxxxxx-exec-x`: the set of executors started by the driver (in our example `spec.executor.instances` was set to 3 which is why we have 3 executors)
* `pyspark-pi-xxxx`: this is the initializing job that creates the `spark-submit` command (named as `metadata.name` with a unique suffix)
* `pyspark-pi-xxxxxxx-driver`: the driver pod that drives the execution
* `pythonpi-xxxxxxxxx-exec-x`: the set of executors started by the driver (in our example `spec.executor.instances` was set to 3 which is why we have 3 executors)

Job progress can be followed by issuing this command:

----
include::example$getting_started/getting_started.sh[tag=wait-for-job]
----
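
If you are not using the helper script, plain `kubectl` achieves much the same; a sketch (the `spark-role=driver` label is applied by Spark on Kubernetes itself, the timeout is an arbitrary assumption):

[source,bash]
----
# Watch the job, driver and executor Pods appear and terminate
kubectl get pods --watch

# Or block until the driver Pod reports success
kubectl wait pod --selector spark-role=driver \
  --for jsonpath='{.status.phase}'=Succeeded --timeout=600s
----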

When the job completes the driver cleans up the executor. The initial job is persisted for several minutes before being removed. The completed state will look like this:
When the job completes, the driver cleans up the executors.
The initial job is persisted for several minutes before being removed.
The completed state will look like this:

image::getting_started/spark_complete.png[Completed job]

The driver logs can be inspected for more information about the results of the job. In this case we expect to find the results of our (approximate!) pi calculation:
The driver logs can be inspected for more information about the results of the job.
In this case we expect to find the results of our (approximate!) pi calculation:

image::getting_started/spark_log.png[Driver log]
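
One way to pull the relevant line straight out of the driver log (the Pod name suffix is generated, so substitute your own):

[source,bash]
----
# The bundled pi example prints a line starting with "Pi is roughly"
kubectl logs pyspark-pi-xxxxxxx-driver | grep "Pi is roughly"
----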
3 changes: 2 additions & 1 deletion docs/modules/spark-k8s/pages/getting_started/index.adoc
@@ -1,6 +1,7 @@
= Getting started

This guide will get you started with Spark using the Stackable Operator for Apache Spark. It will guide you through the installation of the Operator and its dependencies, executing your first Spark job and reviewing its result.
This guide will get you started with Spark using the Stackable Operator for Apache Spark.
It will guide you through the installation of the Operator and its dependencies, executing your first Spark job and reviewing its result.

== Prerequisites

29 changes: 15 additions & 14 deletions docs/modules/spark-k8s/pages/getting_started/installation.adoc
@@ -1,20 +1,20 @@
= Installation
:description: Learn how to set up Spark with the Stackable Operator, from installation to running your first job, including prerequisites and resource recommendations.

On this page you will install the Stackable Spark-on-Kubernetes operator as well as the commons, secret and listener operators
which are required by all Stackable operators.

== Dependencies

Spark applications almost always require dependencies like database drivers, REST api clients and many others. These
dependencies must be available on the `classpath` of each executor (and in some cases of the driver, too). There are
multiple ways to provision Spark jobs with such dependencies: some are built into Spark itself while others are
implemented at the operator level. In this guide we are going to keep things simple and look at executing a Spark job
that has a minimum of dependencies.
Spark applications almost always require dependencies like database drivers, REST API clients and many others.
These dependencies must be available on the `classpath` of each executor (and in some cases of the driver, too).
There are multiple ways to provision Spark jobs with such dependencies: some are built into Spark itself while others are implemented at the operator level.
In this guide we are going to keep things simple and look at executing a Spark job that has a minimum of dependencies.

More information about the different ways to define Spark jobs and their dependencies is given on the following pages:

- xref:usage-guide/index.adoc[]
- xref:job_dependencies.adoc[]
* xref:usage-guide/index.adoc[]
* xref:job_dependencies.adoc[]

== Stackable Operators

@@ -25,8 +25,8 @@
There are 2 ways to install Stackable operators

=== stackablectl

`stackablectl` is the command line tool to interact with Stackable operators and our recommended way to install
Operators. Follow the xref:management:stackablectl:installation.adoc[installation steps] for your platform.
`stackablectl` is the command line tool to interact with Stackable operators and our recommended way to install Operators.
Follow the xref:management:stackablectl:installation.adoc[installation steps] for your platform.

After you have installed `stackablectl` run the following command to install the Spark-k8s operator:
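
The command itself sits in the collapsed part of the diff; it typically takes this form (the list of operators is an assumption based on the dependencies named above):

[source,bash]
----
# Install the Spark operator together with the operators it depends on
stackablectl operator install commons secret listener spark-k8s
----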

@@ -42,12 +42,13 @@
The tool will show
include::example$getting_started/install_output.txt[]
----

TIP: Consult the xref:management:stackablectl:quickstart.adoc[] to learn more about how to use stackablectl. For
example, you can use the `--cluster kind` flag to create a Kubernetes cluster with link:https://kind.sigs.k8s.io/[kind].
TIP: Consult the xref:management:stackablectl:quickstart.adoc[] to learn more about how to use stackablectl.
For example, you can use the `--cluster kind` flag to create a Kubernetes cluster with link:https://kind.sigs.k8s.io/[kind].

=== Helm

You can also use Helm to install the operator. Add the Stackable Helm repository:
You can also use Helm to install the operator.
Add the Stackable Helm repository:
[source,bash]
----
include::example$getting_started/getting_started.sh[tag=helm-add-repo]
@@ -59,8 +60,8 @@
Then install the Stackable Operators:
include::example$getting_started/getting_started.sh[tag=helm-install-operators]
----
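
The collapsed includes carry the canonical commands; schematically they look like this (chart versions are omitted here and would normally be pinned):

[source,bash]
----
# Add the Stackable Helm repository and install the four operators
helm repo add stackable-stable https://repo.stackable.tech/repository/helm-stable/
helm repo update
helm install commons-operator stackable-stable/commons-operator
helm install secret-operator stackable-stable/secret-operator
helm install listener-operator stackable-stable/listener-operator
helm install spark-k8s-operator stackable-stable/spark-k8s-operator
----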

Helm will deploy the operators in a Kubernetes Deployment and apply the CRDs for the `SparkApplication` (as well as the
CRDs for the required operators). You are now ready to create a Spark job.
Helm will deploy the operators in a Kubernetes Deployment and apply the CRDs for the SparkApplication (as well as the CRDs for the required operators).
You are now ready to create a Spark job.

== What's next

2 changes: 1 addition & 1 deletion docs/modules/spark-k8s/pages/index.adoc
@@ -1,5 +1,5 @@
= Stackable Operator for Apache Spark
:description: The Stackable operator for Apache Spark is a Kubernetes operator that can manage Apache Spark clusters. Learn about its features, resources, dependencies and demos, and see the list of supported Spark versions.
:description: Manage Apache Spark clusters on Kubernetes with Stackable Operator, featuring SparkApplication CRDs, history server, S3 integration, and demos for big data tasks.
:keywords: Stackable operator, Apache Spark, Kubernetes, operator, data science, engineer, big data, CRD, StatefulSet, ConfigMap, Service, S3, demo, version
:spark: https://spark.apache.org/
:github: https://github.com/stackabletech/spark-k8s-operator/
1 change: 1 addition & 0 deletions docs/modules/spark-k8s/pages/usage-guide/examples.adoc
@@ -1,4 +1,5 @@
= Examples
:description: Explore Spark job examples with various setups for PySpark and Scala, including external datasets, PVC mounts, and S3 access configurations.

The following examples have the following `spec` fields in common:

22 changes: 16 additions & 6 deletions docs/modules/spark-k8s/pages/usage-guide/history-server.adoc
@@ -1,13 +1,18 @@
= Spark History Server
:description: Set up Spark History Server on Kubernetes to access Spark logs via S3, with configuration for cleanups and web UI access details.
:page-aliases: history_server.adoc

== Overview

The Stackable Spark-on-Kubernetes operator runs Apache Spark workloads in a Kubernetes cluster, whereby driver- and executor-pods are created for the duration of the job and then terminated. One or more Spark History Server instances can be deployed independently of `SparkApplication` jobs and used as an end-point for spark logging, so that job information can be viewed once the job pods are no longer available.
The Stackable Spark-on-Kubernetes operator runs Apache Spark workloads in a Kubernetes cluster, whereby driver and executor Pods are created for the duration of the job and then terminated.
One or more Spark History Server instances can be deployed independently of SparkApplication jobs and used as an endpoint for Spark logging, so that job information can be viewed once the job Pods are no longer available.

== Deployment

The example below demonstrates how to set up the history server running in one Pod with scheduled cleanups of the event logs. The event logs are loaded from an S3 bucket named `spark-logs` and the folder `eventlogs/`. The credentials for this bucket are provided by the secret class `s3-credentials-class`. For more details on how the Stackable Data Platform manages S3 resources see the xref:concepts:s3.adoc[S3 resources] page.
The example below demonstrates how to set up the history server running in one Pod with scheduled cleanups of the event logs.
The event logs are loaded from an S3 bucket named `spark-logs` and the folder `eventlogs/`.
The credentials for this bucket are provided by the secret class `s3-credentials-class`.
For more details on how the Stackable Data Platform manages S3 resources see the xref:concepts:s3.adoc[S3 resources] page.
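
The scheduled cleanup itself boils down to standard Spark history-server properties passed on via `sparkConf`; a fragment along these lines (the property names are upstream Spark settings, the values and the placement under `spec.sparkConf` are assumptions, the included example file is authoritative):

[source,yaml]
----
spec:
  sparkConf:
    # clean up event logs older than a week, checking once a day (illustrative values)
    spark.history.fs.cleaner.enabled: "true"
    spark.history.fs.cleaner.interval: "1d"
    spark.history.fs.cleaner.maxAge: "7d"
----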


[source,yaml]
----
@@ -52,7 +57,8 @@
include::example$example-history-app.yaml[]

== History Web UI

To access the history server web UI, use one of the `NodePort` services created by the operator. For the example above, the operator created two services as shown:
To access the history server web UI, use one of the `NodePort` services created by the operator.
For the example above, the operator created two services as shown:

[source,bash]
----
@@ -70,13 +76,17 @@
image::history-server-ui.png[History Server Console]

For a role group of the Spark history server, you can specify `configOverrides` for the following files:

- `security.properties`
* `security.properties`

=== The security.properties file

The `security.properties` file is used to configure JVM security properties. It is very seldom that users need to tweak any of these, but there is one use-case that stands out, and that users need to be aware of: the JVM DNS cache.
The `security.properties` file is used to configure JVM security properties.
Users rarely need to tweak any of these, but there is one use case that stands out and that users need to be aware of: the JVM DNS cache.

The JVM manages its own cache of successfully resolved host names as well as a cache of host names that cannot be resolved. Some products of the Stackable platform are very sensible to the contents of these caches and their performance is heavily affected by them. As of version 3.4.0, Apache Spark may perform poorly if the positive cache is disabled. To cache resolved host names, and thus speeding up queries you can configure the TTL of entries in the positive cache like this:
The JVM manages its own cache of successfully resolved host names as well as a cache of host names that cannot be resolved.
Some products of the Stackable platform are very sensitive to the contents of these caches, and their performance is heavily affected by them.
As of version 3.4.0, Apache Spark may perform poorly if the positive cache is disabled.
To cache resolved host names, and thus speed up queries, you can configure the TTL of entries in the positive cache like this:
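
The exact snippet lives in the collapsed include below; it typically takes this shape (the role name and the TTL values are assumptions; `networkaddress.cache.ttl` and `networkaddress.cache.negative.ttl` are standard JVM security properties):

[source,yaml]
----
nodes:
  configOverrides:
    security.properties:
      # cache successful lookups for 30 seconds, do not cache failed lookups
      networkaddress.cache.ttl: "30"
      networkaddress.cache.negative.ttl: "0"
----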

[source,yaml]
----
@@ -1,4 +1,5 @@
= Job Dependencies
:description: Learn how to provision dependencies for Spark jobs using custom images, volumes, Maven packages, or Python packages, and their trade-offs.
:page-aliases: job_dependencies.adoc

== Overview
9 changes: 5 additions & 4 deletions docs/modules/spark-k8s/pages/usage-guide/s3.adoc
@@ -1,10 +1,11 @@
= S3 bucket specification
:description: Learn how to configure S3 access in SparkApplications using inline credentials or external resources, including TLS for secure connections.

You can specify S3 connection details directly inside the `SparkApplication` specification or by referring to an external `S3Bucket` custom resource.
You can specify S3 connection details directly inside the SparkApplication specification or by referring to an external S3Bucket custom resource.

== S3 access using credentials

To specify S3 connection details directly as part of the `SparkApplication` resource you add an inline connection configuration as shown below.
To specify S3 connection details directly as part of the SparkApplication resource you add an inline connection configuration as shown below.

[source,yaml]
----
@@ -21,7 +22,7 @@
s3connection: # <1>
<3> Optional connection port.
<4> Name of the `Secret` object expected to contain the following keys: `accessKey` and `secretKey`

It is also possible to configure the connection details as a separate Kubernetes resource and only refer to that object from the `SparkApplication` like this:
It is also possible to configure the connection details as a separate Kubernetes resource and only refer to that object from the SparkApplication like this:

[source,yaml]
----
@@ -47,7 +48,7 @@
spec:
secretClass: minio-credentials-class
----
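
Referencing such a shared resource from a SparkApplication then typically boils down to a short pointer (the resource name and the exact `reference` field shape are assumptions; the S3 resources concepts page is authoritative):

[source,yaml]
----
spec:
  s3connection:
    reference: minio-connection  # name of the shared S3Connection resource (assumed)
----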

This has the advantage that one connection configuration can be shared across `SparkApplications` and reduces the cost of updating these details.
This has the advantage that one connection configuration can be shared across SparkApplications and reduces the cost of updating these details.

== S3 access with TLS

Expand Down
4 changes: 2 additions & 2 deletions docs/modules/spark-k8s/pages/usage-guide/security.adoc
@@ -1,4 +1,5 @@
= Security
:description: Learn how to configure Apache Spark applications with Kerberos authentication using Stackable Secret Operator for secure data access in HDFS.

== Authentication

@@ -56,7 +57,7 @@
executor:
volumes:
- name: hdfs-config <4>
configMap:
name: hdfs
name: hdfs
- name: kerberos
ephemeral:
volumeClaimTemplate:
@@ -94,4 +95,3 @@
sparkConf:
----
<1> Location of the keytab file.
<2> Principal name. This needs to have the format `<SERVICE_NAME>.default.svc.cluster.local@<REALM>` where `SERVICE_NAME` matches the volume claim annotation `secrets.stackable.tech/kerberos.service.names` and `REALM` must be `CLUSTER.LOCAL` unless a different realm was used explicitly. In that case, the `KERBEROS_REALM` environment variable must also be set accordingly.
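
The collapsed `sparkConf` block that the callouts refer to boils down to the two standard Spark Kerberos settings; a sketch assuming the service name `spark`:

[source,yaml]
----
sparkConf:
  # location of the keytab file (mount path assumed)
  spark.kerberos.keytab: /stackable/kerberos/keytab
  # principal in the required format; SERVICE_NAME assumed to be "spark"
  spark.kerberos.principal: spark.default.svc.cluster.local@CLUSTER.LOCAL
----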
