fix(lakehouse): Adapt for release 25.7 #258

Merged (8 commits) on Jul 17, 2025
@@ -10,7 +10,14 @@ spec:
initContainers:
- name: wait-for-kafka
image: oci.stackable.tech/sdp/tools:1.0.0-stackable0.0.0-dev
command: ["bash", "-c", "echo 'Waiting for all kafka brokers to be ready' && kubectl wait --for=condition=ready --timeout=30m pod -l app.kubernetes.io/instance=kafka -l app.kubernetes.io/name=kafka"]
command:
- bash
- -euo
- pipefail
- -c
- |
echo 'Waiting for all kafka brokers to be ready'
kubectl wait --for=condition=ready --timeout=30m pod -l app.kubernetes.io/instance=kafka,app.kubernetes.io/name=kafka
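The switch from a single-string `bash -c` to an argv list with `-euo pipefail` and a block scalar changes failure behavior: any failing step now aborts the init container instead of being silently skipped. A minimal sketch of the strict-mode semantics (the `wait_for` function is a hypothetical stand-in for `kubectl wait`, not part of the demo):

```shell
#!/usr/bin/env bash
# -e: abort on the first failing command
# -u: treat unset variables as errors
# -o pipefail: a pipeline fails if any stage fails
set -euo pipefail

wait_for() {                    # hypothetical stand-in for `kubectl wait`
  echo "waiting for $1"
}

wait_for kafka
echo "all dependencies ready"   # only reached if every step above succeeded
```

Without `-e`, a failed `kubectl wait` would not stop the script, and later steps could run against an unready cluster.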
containers:
- name: create-nifi-ingestion-job
image: oci.stackable.tech/sdp/testing-tools:0.2.0-stackable0.0.0-dev
@@ -19,7 +26,8 @@ spec:
- -euo
- pipefail
- -c
- python -u /tmp/script/script.py
- |
python -u /tmp/script/script.py
volumeMounts:
- name: script
mountPath: /tmp/script
@@ -53,8 +61,8 @@ data:
import requests
import urllib3

# As of 2022-08-29 we can't use "https://nifi:8443" here because <h2>The request contained an invalid host header [<code>nifi:8443</code>] in the request [<code>/nifi-api</code>]. Check for request manipulation or third-party intercept.</h2>
ENDPOINT = f"https://nifi-node-default-0.nifi-node-default.{os.environ['NAMESPACE']}.svc.cluster.local:8443" # For local testing / developing replace it, afterwards change back to f"https://nifi-node-default-0.nifi-node-default.{os.environ['NAMESPACE']}.svc.cluster.local:8443"
# As of 2022-08-29 we can't use "https://nifi-node:8443" here because <h2>The request contained an invalid host header [<code>nifi:8443</code>] in the request [<code>/nifi-api</code>]. Check for request manipulation or third-party intercept.</h2>
ENDPOINT = f"https://nifi-node-default-0.nifi-node-default-headless.{os.environ['NAMESPACE']}.svc.cluster.local:8443" # For local testing / developing replace it, afterwards change back to f"https://nifi-node-default-0.nifi-node-default-headless.{os.environ['NAMESPACE']}.svc.cluster.local:8443"
USERNAME = "admin"
PASSWORD = open("/nifi-admin-credentials-secret/admin").read()

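The `ENDPOINT` change above follows the standard pod-DNS pattern behind a headless Service, `<pod>.<service>.<namespace>.svc.cluster.local`; in 25.7 the per-pod Service gains a `-headless` suffix. A sketch of how the new FQDN is assembled (the namespace value is an assumption for illustration — the real script reads `$NAMESPACE` from its environment):

```shell
NAMESPACE="default"                        # assumed; the script uses os.environ['NAMESPACE']
POD="nifi-node-default-0"
HEADLESS_SVC="nifi-node-default-headless"  # was nifi-node-default before 25.7
ENDPOINT="https://${POD}.${HEADLESS_SVC}.${NAMESPACE}.svc.cluster.local:8443"
echo "$ENDPOINT"
# → https://nifi-node-default-0.nifi-node-default-headless.default.svc.cluster.local:8443
```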
@@ -13,11 +13,26 @@ spec:
initContainers:
- name: wait-for-kafka
image: oci.stackable.tech/sdp/tools:1.0.0-stackable0.0.0-dev
command: ["bash", "-c", "echo 'Waiting for all kafka brokers to be ready' && kubectl wait --for=condition=ready --timeout=30m pod -l app.kubernetes.io/name=kafka -l app.kubernetes.io/instance=kafka"]
command:
- bash
- -euo
- pipefail
- -c
- |
echo 'Waiting for all minio instances to be ready'
kubectl wait --for=condition=ready --timeout=30m pod -l app=minio,release=minio,stackable.tech/vendor=Stackable
echo 'Waiting for all kafka brokers to be ready'
kubectl wait --for=condition=ready --timeout=30m pod -l app.kubernetes.io/name=kafka,app.kubernetes.io/instance=kafka
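Joining both labels into one comma-separated `-l` argument makes them a single selector whose requirements are ANDed; repeating the `-l` flag, as the old command did, does not merge the selectors (as far as we can tell, the last flag value simply replaces the earlier one). A toy matcher illustrating the AND semantics of a comma-joined selector (not kubectl itself):

```shell
#!/usr/bin/env bash
# matches "<pod labels>" "<selector>": succeeds only if every
# comma-separated requirement in the selector appears in the labels.
matches() {
  local labels=$1 selector=$2 req
  IFS=',' read -ra reqs <<<"$selector"
  for req in "${reqs[@]}"; do
    case ",$labels," in
      *",$req,"*) ;;        # requirement present, keep checking
      *) return 1 ;;        # any miss fails the whole selector
    esac
  done
}

matches "app=kafka,instance=kafka" "app=kafka,instance=kafka" && echo "match"
matches "app=kafka"                "app=kafka,instance=kafka" || echo "no match"
```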
containers:
- name: create-spark-ingestion-job
image: oci.stackable.tech/sdp/tools:1.0.0-stackable0.0.0-dev
command: ["bash", "-c", "echo 'Submitting Spark job' && kubectl apply -f /tmp/manifest/spark-ingestion-job.yaml"]
command:
- bash
- -euo
- pipefail
- -c
- |
echo 'Submitting Spark job' && kubectl apply -f /tmp/manifest/spark-ingestion-job.yaml
volumeMounts:
- name: manifest
mountPath: /tmp/manifest
@@ -56,7 +71,7 @@ data:
spark.sql.extensions: org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions
spark.sql.catalog.lakehouse: org.apache.iceberg.spark.SparkCatalog
spark.sql.catalog.lakehouse.type: hive
spark.sql.catalog.lakehouse.uri: thrift://hive-iceberg:9083
spark.sql.catalog.lakehouse.uri: thrift://hive-iceberg-metastore:9083
# Every MERGE INTO statement creates 8 files.
# This parallelism is enough for the demo; it might need to be increased (or omitted entirely) when merging larger data volumes
spark.sql.shuffle.partitions: "8"
@@ -10,7 +10,14 @@ spec:
initContainers:
- name: wait-for-testdata
image: oci.stackable.tech/sdp/tools:1.0.0-stackable0.0.0-dev
command: ["bash", "-c", "echo 'Waiting for job load-test-data to finish' && kubectl wait --for=condition=complete --timeout=30m job/load-test-data"]
command:
- bash
- -euo
- pipefail
- -c
- |
echo 'Waiting for job load-test-data to finish'
kubectl wait --for=condition=complete --timeout=30m job/load-test-data
containers:
- name: create-tables-in-trino
image: oci.stackable.tech/sdp/testing-tools:0.2.0-stackable0.0.0-dev
8 changes: 8 additions & 0 deletions demos/data-lakehouse-iceberg-trino-spark/serviceaccount.yaml
@@ -31,6 +31,14 @@ rules:
- get
- list
- watch
- apiGroups:
- apps
resources:
- statefulsets
verbs:
- get
- list
- watch
- apiGroups:
- batch
resources:
11 changes: 9 additions & 2 deletions demos/data-lakehouse-iceberg-trino-spark/setup-superset.yaml
@@ -9,7 +9,14 @@ spec:
containers:
- name: setup-superset
image: oci.stackable.tech/sdp/testing-tools:0.2.0-stackable0.0.0-dev
command: ["bash", "-c", "curl -o superset-assets.zip https://raw.githubusercontent.com/stackabletech/demos/main/demos/data-lakehouse-iceberg-trino-spark/superset-assets.zip && python -u /tmp/script/script.py"]
command:
- bash
- -euo
- pipefail
- -c
- |
curl -o superset-assets.zip https://raw.githubusercontent.com/stackabletech/demos/main/demos/data-lakehouse-iceberg-trino-spark/superset-assets.zip
python -u /tmp/script/script.py
volumeMounts:
- name: script
mountPath: /tmp/script
@@ -39,7 +46,7 @@ data:
import logging
import requests

base_url = "http://superset-node-default:8088" # For local testing / developing replace it, afterwards change back to http://superset-node-default:8088
base_url = "http://superset-node:8088" # For local testing / developing replace it, afterwards change back to http://superset-node:8088
superset_username = open("/superset-credentials/adminUser.username").read()
superset_password = open("/superset-credentials/adminUser.password").read()
trino_username = "admin"
Binary file modified demos/data-lakehouse-iceberg-trino-spark/superset-assets.zip
7 changes: 4 additions & 3 deletions stacks/data-lakehouse-iceberg-trino-spark/nifi.yaml
@@ -9,18 +9,19 @@ spec:
clusterConfig:
authentication:
- authenticationClass: nifi-admin-credentials
listenerClass: external-unstable
sensitiveProperties:
keySecret: nifi-sensitive-property-key
autoGenerate: true
nodes:
roleConfig:
listenerClass: external-unstable
config:
resources:
cpu:
min: "2"
max: "4"
memory:
limit: '6Gi'
limit: "6Gi"
storage:
contentRepo:
capacity: "10Gi"
@@ -51,4 +52,4 @@ kind: Secret
metadata:
name: nifi-admin-credentials-secret
stringData:
admin: {{ nifiAdminPassword }}
admin: "{{ nifiAdminPassword }}"
7 changes: 4 additions & 3 deletions stacks/data-lakehouse-iceberg-trino-spark/trino.yaml
@@ -7,7 +7,6 @@ spec:
image:
productVersion: "476"
clusterConfig:
listenerClass: external-unstable
catalogLabelSelector:
matchLabels:
trino: trino
@@ -18,14 +17,16 @@ spec:
configMapName: opa
package: trino
coordinators:
roleConfig:
listenerClass: external-unstable
config:
queryMaxMemory: 10TB
resources:
cpu:
min: "1"
max: "4"
memory:
limit: '6Gi'
limit: "6Gi"
roleGroups:
default:
replicas: 1
@@ -37,7 +38,7 @@ spec:
min: "2"
max: "6"
memory:
limit: '20Gi'
limit: "20Gi"
roleGroups:
default:
replicas: 4
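Both the NiFi and Trino changes in this PR apply the same 25.7 relocation: `listenerClass` moves from the cluster-wide `spec.clusterConfig` to the `roleConfig` of the role that actually exposes a listener. A minimal before/after sketch (the abbreviated spec is illustrative, not a complete manifest):

```yaml
# 25.3-style (removed in this PR)
spec:
  clusterConfig:
    listenerClass: external-unstable

# 25.7-style (added in this PR); for NiFi the role is `nodes`
spec:
  coordinators:
    roleConfig:
      listenerClass: external-unstable
```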