DO NOT MERGE: build: clickhouse integration testing #91

Closed · wants to merge 6 commits
65 changes: 65 additions & 0 deletions .github/workflows/integration-test.yml
@@ -0,0 +1,65 @@
name: Integration Test

on:
  pull_request:

jobs:

  k8s:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        cluster:
          - elasticsearch

    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Install Kind
        uses: helm/[email protected]
        with:
          install_only: true

      - name: Start Kind
        run: |
          kind create cluster --config integration-test/cluster.yaml
          kubectl config set-context --current --namespace=openedx-harmony
          echo "127.0.0.1 harmony.test" | sudo tee -a /etc/hosts

      - name: Helm dependency add
        run: |
          helm dependency list charts/harmony-chart 2> /dev/null | tail -n +2 | awk '{ print "helm" " repo add " $1 " " $3 }' | while read cmd; do $cmd || true; done
          kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.10.1/cert-manager.crds.yaml --namespace=harmony

      - name: Helm dependency build
        run: |
          helm dependency update charts/harmony-chart
          helm dependency build charts/harmony-chart

      - name: Helm install
        run: |
          helm install harmony --namespace openedx-harmony --create-namespace -f integration-test/${{matrix.cluster}}/values.yaml charts/harmony-chart
          kubectl rollout status deployment --timeout 300s

      - name: Healthcheck
        run: |
          curl http://harmony.test/cluster-echo-test

      - name: Bootstrap cluster
        run: |
          bash integration-test/${{matrix.cluster}}/cluster.sh

      - name: Setup Python
        uses: actions/setup-python@v5
        with:
          python-version: 3.12

      - name: Install openedx
        run: |
          export INSTALLATIONS=$(ls -d integration-test/${{matrix.cluster}}/*/)
          export CI_ROOT=$(pwd)
          for installation in $INSTALLATIONS
          do
            bash integration-test/environment.sh $installation
          done
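
The same flow can be approximated outside CI when debugging; a minimal sketch, assuming Kind, Helm, and kubectl are installed locally (paths, namespace, and hostname match the workflow above, using the `elasticsearch` matrix entry):

```bash
# Approximate the CI job locally against the elasticsearch test cluster
kind create cluster --config integration-test/cluster.yaml
kubectl config set-context --current --namespace=openedx-harmony
echo "127.0.0.1 harmony.test" | sudo tee -a /etc/hosts

helm dependency update charts/harmony-chart
helm install harmony --namespace openedx-harmony --create-namespace \
  -f integration-test/elasticsearch/values.yaml charts/harmony-chart
kubectl rollout status deployment --timeout 300s
curl http://harmony.test/cluster-echo-test
```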
2 changes: 2 additions & 0 deletions .gitignore
@@ -12,3 +12,5 @@ infra-*/secrets.auto.tfvars
.terraform
*secrets.auto.tfvars
my-notes
**/env/
**/venv/
38 changes: 38 additions & 0 deletions README.md
@@ -316,6 +316,44 @@ populated with random value to ensure uniqueness.

In order for SSL to work without warnings, the CA certificate needs to be mounted in the relevant pods. This is not yet implemented due to an [outstanding issue in tutor](https://github.com/overhangio/tutor/issues/791) that had not been resolved at the time of writing.

### ClickHouse Cluster

ClickHouse is needed to run Aspects; however, for medium and large instances a single ClickHouse node can become
a bottleneck for Aspects, and the default ClickHouse deployment in Aspects can take down other services running on the
same node as the ClickHouse pod. If you are interested in running a ClickHouse cluster, you can enable the
Altinity ClickHouse Operator and follow the templates available in `charts/examples/clickhouse` to set up a ClickHouseKeeper
quorum (needed for replication) and a ClickHouse cluster sized to your needs.
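
The operator dependency added in this PR is disabled by default; a minimal sketch of turning it on from your Harmony values file (the key names follow the `clickhouse-operator` block in `charts/harmony-chart/values.yaml`; the credentials are placeholders):

```yaml
clickhouse-operator:
  enabled: true
  secret:
    username: "change_me"  # operator credentials -- replace these
    password: "change_me"
```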

Once your cluster is created and running on Kubernetes, update your installation settings:

```yaml
# See the clickhouse-installation.yml template for more details
CLICKHOUSE_ADMIN_USER: default
CLICKHOUSE_ADMIN_PASSWORD: change_me
CLICKHOUSE_CLUSTER_NAME: openedx-demo
# Set the first ClickHouse node as the DDL node.
CLICKHOUSE_CLUSTER_DDL_NODE_HOST: chi-clickhouse-{{CLICKHOUSE_CLUSTER_NAME}}-0-0.{{namespace}}
CLICKHOUSE_HOST: clickhouse-clickhouse.{{namespace}}
CLICKHOUSE_SECURE_CONNECTION: false
RUN_CLICKHOUSE: false
```
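
Before pointing Aspects at the cluster, it is worth verifying that it is reachable; a sketch of one way to check (the pod name follows the operator's `chi-<installation>-<cluster>-<shard>-<replica>` convention and, like the namespace, is an assumption to adjust for your deployment):

```bash
# List the ClickHouseInstallation resources the operator manages
kubectl get clickhouseinstallations --namespace openedx-harmony

# Run a trivial query against the first replica using the admin credentials
kubectl exec --namespace openedx-harmony chi-clickhouse-openedx-demo-0-0-0 -- \
  clickhouse-client --user default --password change_me --query "SELECT 1"
```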

For multitenancy you have two options: run multiple ClickHouse clusters, or share a single cluster using different databases and users.

*Using different users and databases*: make sure to update the users and databases in your config:

```yaml
ASPECTS_CLICKHOUSE_CMS_USER: openedx_demo_ch_cms
ASPECTS_CLICKHOUSE_LRS_USER: openedx_demo_ch_lrs
ASPECTS_CLICKHOUSE_REPORT_USER: openedx_demo_ch_report
ASPECTS_CLICKHOUSE_VECTOR_USER: openedx_demo_ch_vector
ASPECTS_XAPI_DATABASE: openedx_demo_xapi
ASPECTS_EVENT_SINK_DATABASE: openedx_demo_event_sink
ASPECTS_VECTOR_DATABASE: openedx_demo_openedx
DBT_PROFILE_TARGET_DATABASE: openedx_demo_reporting
```
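
Aspects can normally provision these during init, but if you need to create a tenant's database and user by hand, the statements look roughly like this (a sketch using the LRS user and xAPI database from the values above; with the user replication enabled in the example installation, users created on one replica propagate through Keeper):

```bash
kubectl exec --namespace openedx-harmony chi-clickhouse-openedx-demo-0-0-0 -- \
  clickhouse-client --user default --password change_me --multiquery --query "
    CREATE DATABASE IF NOT EXISTS openedx_demo_xapi ON CLUSTER 'openedx-demo';
    CREATE USER IF NOT EXISTS openedx_demo_ch_lrs IDENTIFIED BY 'change_me';
    GRANT ALL ON openedx_demo_xapi.* TO openedx_demo_ch_lrs;
  "
```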


## Extended Documentation

### How to uninstall this chart
76 changes: 76 additions & 0 deletions charts/examples/clickhouse/clickhouse-installation.yml
@@ -0,0 +1,76 @@
---
apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallation"
metadata:
  name: "clickhouse"
spec:
  configuration:
    clusters:
      - name: "openedx-demo"
        layout:
          shardsCount: 1    # Sharding has not been tested with Aspects and we don't recommend it.
          replicasCount: 2  # Scale as you need/can
        templates:
          podTemplate: server
          volumeClaimTemplate: storage
    users:
      test/networks/ip:
        - "::/0"
      test/profile: default
      test/password: change_me
      test/quota: default
      # Default permissions needed for user creation
      test/access_management: 1
      test/named_collection_control: 1
      test/show_named_collections: 1
      test/show_named_collections_secrets: 1
    zookeeper:
      nodes:
        - host: clickhouse-keeper-0.clickhouse-keeper-headless
        - host: clickhouse-keeper-1.clickhouse-keeper-headless
        - host: clickhouse-keeper-2.clickhouse-keeper-headless
    files:
      # Enable user replication
      users-replication.xml: |
        <clickhouse>
          <user_directories replace="replace">
            <users_xml>
              <path>/etc/clickhouse-server/users.xml</path>
            </users_xml>
            <replicated>
              <zookeeper_path>/clickhouse/access/</zookeeper_path>
            </replicated>
          </user_directories>
        </clickhouse>
      # Enable function replication
      functions-replication.xml: |
        <clickhouse>
          <user_defined_zookeeper_path>/udf</user_defined_zookeeper_path>
        </clickhouse>
  templates:
    podTemplates:
      - name: server
        spec:
          containers:
            - name: clickhouse
              image: clickhouse/clickhouse-server:24.8
          # If you are running a dedicated node group for ClickHouse (and you
          # should), make sure to add the matching tolerations.
          tolerations:
            - key: "clickhouseInstance"
              operator: "Exists"
              effect: "NoSchedule"
          # Optional: set the node group name
          nodeSelector:
            eks.amazonaws.com/nodegroup: clickhouse_worker
    volumeClaimTemplates:
      - name: storage
        # Do not delete the PV if the installation is deleted. If a new
        # ClickHouseInstallation is created, the data will be reused,
        # allowing recovery.
        reclaimPolicy: Retain
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 50Gi
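
With the operator enabled, a sketch of applying these examples (apply the keeper manifest shown below first, since the ClickHouse nodes need the quorum for replication; the namespace is an assumption):

```bash
kubectl apply --namespace openedx-harmony -f charts/examples/clickhouse/clickhouse-keeper.yml
kubectl apply --namespace openedx-harmony -f charts/examples/clickhouse/clickhouse-installation.yml
```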
86 changes: 86 additions & 0 deletions charts/examples/clickhouse/clickhouse-keeper.yml
@@ -0,0 +1,86 @@
apiVersion: "clickhouse-keeper.altinity.com/v1"
kind: "ClickHouseKeeperInstallation"
metadata:
name: clickhouse-keeper
spec:
configuration:
clusters:
- name: "openedx-demo"
layout:
# ClickHouseKeeper needs at least tree pods to form a Quorum for high
# availability.
replicasCount: 3
settings:
logger/level: "trace"
logger/console: "true"
listen_host: "0.0.0.0"
keeper_server/storage_path: /var/lib/clickhouse-keeper
keeper_server/tcp_port: "2181"
keeper_server/four_letter_word_white_list: "*"
keeper_server/coordination_settings/raft_logs_level: "information"
keeper_server/raft_configuration/server/port: "9444"
prometheus/endpoint: "/metrics"
prometheus/port: "7000"
prometheus/metrics: "true"
prometheus/events: "true"
prometheus/asynchronous_metrics: "true"
prometheus/status_info: "false"
templates:
podTemplates:
- name: default
spec:
# affinity removed to allow use in single node test environment
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: "app"
operator: In
values:
- clickhouse-keeper
topologyKey: "kubernetes.io/hostname"
containers:
- name: clickhouse-keeper
imagePullPolicy: IfNotPresent
# Make sure to keep this up to date with the ClickHouse compatible version
image: "clickhouse/clickhouse-keeper:24.8-alpine"
resources:
requests:
memory: "256M"
cpu: "0.25"
limits:
memory: "1Gi"
cpu: "1"
priorityClassName: clickhouse-priority
volumeClaimTemplates:
- name: default
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
- name: snapshot-storage-path
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
- name: log-storage-path
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi

---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
name: clickhouse-priority
value: 1000000
globalDefault: false
description: "This priority class should be used for ClickHouse service pods only."
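
Since the manifest whitelists all four-letter-word commands on port 2181, quorum health can be probed from inside a keeper pod; a sketch assuming the pods follow the `clickhouse-keeper-N` naming used in the zookeeper node list above and that `nc` is available in the image:

```bash
# "ruok" should answer "imok" on a healthy node; "mntr" reports the
# leader/follower state of the quorum
kubectl exec --namespace openedx-harmony clickhouse-keeper-0 -- \
  sh -c 'echo ruok | nc 127.0.0.1 2181'
kubectl exec --namespace openedx-harmony clickhouse-keeper-0 -- \
  sh -c 'echo mntr | nc 127.0.0.1 2181'
```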
7 changes: 5 additions & 2 deletions charts/harmony-chart/Chart.lock
@@ -32,5 +32,8 @@ dependencies:
 - name: openfaas
   repository: https://openfaas.github.io/faas-netes
   version: 14.2.34
-digest: sha256:b636bd16d732d51544ca7223f460e22f45a7132e31e874a789c5fc0cac460a45
-generated: "2024-05-02T12:32:49.796635+04:00"
+- name: altinity-clickhouse-operator
+  repository: https://docs.altinity.com/clickhouse-operator/
+  version: 0.24.0
+digest: sha256:4089b809a8c4ccccfc6309f97cf6002cec66a93e64955e1ccecd0aaef52782ab
+generated: "2024-11-05T11:48:25.090620077-05:00"
5 changes: 5 additions & 0 deletions charts/harmony-chart/Chart.yaml
@@ -74,3 +74,8 @@ dependencies:
  version: "14.2.34"
  repository: https://openfaas.github.io/faas-netes
  condition: openfaas.enabled

- name: altinity-clickhouse-operator
  version: "0.24.0"
  repository: https://docs.altinity.com/clickhouse-operator/
  condition: clickhouse-operator.enabled
11 changes: 11 additions & 0 deletions charts/harmony-chart/values.yaml
@@ -364,3 +364,14 @@ velero:

openfaas:
  enabled: false

clickhouse-operator:
  enabled: false
  dashboards: # Change this if you have monitoring disabled
    enabled: true
  serviceMonitor: # Change this if you have monitoring disabled
    enabled: true
  secret:
    username: "change_me"
    password: "change_me"
  fullnameOverride: "clickhouse-operator"
20 changes: 20 additions & 0 deletions integration-test/cluster.yaml
@@ -0,0 +1,20 @@
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    kubeadmConfigPatches:
      - |
        kind: InitConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "ingress-ready=true"
    extraPortMappings:
      - containerPort: 80
        hostPort: 80
        listenAddress: 127.0.0.1 # omit for 0.0.0.0
        protocol: TCP
      - containerPort: 443
        hostPort: 443
        listenAddress: 127.0.0.1 # omit for 0.0.0.0
        protocol: TCP
