There are a few options for deploying the operator:
- Helm
- Kustomize
- Build from source
Make sure you have the following installed before going through the rest of the guide.
The examples in this guide use kind clusters. Install kind now if you have not already done so.
By default kind clusters run on the same Docker network which means we will have routable pod IPs across clusters.
Note: Issues creating multiple kind clusters have been observed on various versions of Docker Desktop for Mac. These issues seem to be resolved with the 4.5.0 release of Docker Desktop. Please be sure to upgrade Docker Desktop if you plan to deploy using kind.
kubectx is a really handy tool when you are dealing with multiple clusters. The examples will use it, so go ahead and install it now.
yq is a lightweight and portable command-line YAML processor.
setup-kind-multicluster.sh lives in the k8ssandra-operator repo. It is used extensively during development and testing. Not only does it configure and create kind clusters, it also generates kubeconfig files for each cluster.
create-clientconfig.sh lives in the k8ssandra-operator repo. It is used to configure access to remote clusters.
Note: kind generates a kubeconfig with the IP address of the API server set to localhost since the cluster is intended for local development. When creating a ClientConfig using create-clientconfig.sh, the script will automatically replace the API server address with the appropriate in-cluster IP.
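The address rewrite can be sketched with plain sed. This is an illustrative stand-in for what the script does, not its actual implementation; the IP, port, and file path below are placeholders.

```shell
# Illustrative sketch only: patch a kind kubeconfig so the API server points at
# an in-cluster address. The IP and port below are placeholders.
API_IP="10.96.0.1"   # in practice this would come from inspecting the control-plane node

# A minimal kubeconfig fragment of the form kind generates:
cat > /tmp/kind-kubeconfig-demo <<EOF
clusters:
- cluster:
    server: https://127.0.0.1:45678
  name: kind-k8ssandra-1
EOF

# Replace the localhost address with the in-cluster address:
sed -i "s|server: https://127.0.0.1:[0-9]*|server: https://${API_IP}:6443|" /tmp/kind-kubeconfig-demo
grep "server:" /tmp/kind-kubeconfig-demo
```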
If you're interested in getting up and running as quickly as possible, there are a number of helper scripts that greatly reduce the steps needed to deploy a local K8ssandra cluster via kind for testing purposes.
Two base make commands are provided that deploy basic kind-based Kubernetes clusters. These commands encapsulate the more detailed step-by-step installation instructions otherwise captured in this document.
Each of these commands will do the following:
- Create the kind-based cluster(s)

Across the cluster(s):
- Install cert-manager in its own namespace
- Install cass-operator in the k8ssandra-operator namespace
- Build the k8ssandra-operator from source, load the image into the kind nodes, and install it in the k8ssandra-operator namespace
- Install relevant CRDs

At completion, the cluster is ready to accept a K8ssandraCluster deployment.
Note: if a k8ssandra-0 and/or k8ssandra-1 kind cluster already exists, running make single-up or make multi-up will delete and recreate them.
Note: These steps will attempt to start a local Docker registry instance to be used by the kind cluster(s). If you are already running one locally, it will need to be stopped before following these procedures.
Deploy a single kind-based Kubernetes cluster:
make single-up
One cluster should be available:
kubectx
kind-k8ssandra-0
The cluster should consist of the following nodes:
NAME STATUS ROLES AGE VERSION
k8ssandra-0-control-plane Ready control-plane,master 3m24s v1.21.2
k8ssandra-0-worker Ready <none> 2m53s v1.21.2
k8ssandra-0-worker2 Ready <none> 3m5s v1.21.2
k8ssandra-0-worker3 Ready <none> 2m53s v1.21.2
k8ssandra-0-worker4 Ready <none> 2m53s v1.21.2
Once the Kubernetes cluster is ready, deploy a K8ssandraCluster like:
cat <<EOF | kubectl -n k8ssandra-operator apply -f -
apiVersion: k8ssandra.io/v1alpha1
kind: K8ssandraCluster
metadata:
  name: demo
spec:
  cassandra:
    serverVersion: "4.0.1"
    datacenters:
      - metadata:
          name: dc1
        size: 3
        storageConfig:
          cassandraDataVolumeClaimSpec:
            storageClassName: standard
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 5Gi
        config:
          jvmOptions:
            heapSize: 512M
        stargate:
          size: 1
          heapSize: 256M
EOF
Confirm that the resource has been created:
kubectl -n k8ssandra-operator get k8ssandraclusters
NAME AGE
demo 45s
kubectl -n k8ssandra-operator describe k8ssandracluster demo
Name:         demo
Namespace:    k8ssandra-operator
Labels:       <none>
Annotations:  <none>
API Version:  k8ssandra.io/v1alpha1
Kind:         K8ssandraCluster
...
Status:
  Datacenters:
    dc1:
      Cassandra:
        Cassandra Operator Progress:  Updating
      Node Statuses:
Events:  <none>
Monitor the status of the deployment, eventually resulting in all the resources being in the Ready state:
kubectl -n k8ssandra-operator describe K8ssandraCluster demo
Name:         demo
Namespace:    k8ssandra-operator
Labels:       <none>
Annotations:  <none>
API Version:  k8ssandra.io/v1alpha1
Kind:         K8ssandraCluster
...
Status:
  Datacenters:
    dc1:
      Cassandra:
        Cassandra Operator Progress:  Ready
        ...
      Stargate:
        Available Replicas:  1
        Conditions:
          Last Transition Time:  2021-09-28T03:32:07Z
          Status:                True
          Type:                  Ready
        Deployment Refs:
          demo-dc1-default-stargate-deployment
        Progress:              Running
        Ready Replicas:        1
        Ready Replicas Ratio:  1/1
        Replicas:              1
        Service Ref:           demo-dc1-stargate-service
        Updated Replicas:      1
Events:  <none>
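Rather than eyeballing the full describe output, the progress field can be pulled out with a little awk. The sketch below parses a captured sample; the equivalent live command is shown in the comment.

```shell
# Extract the "Cassandra Operator Progress" value from describe output.
# Here we parse a captured sample; on a live cluster you would pipe in:
#   kubectl -n k8ssandra-operator describe K8ssandraCluster demo
sample='Status:
  Datacenters:
    dc1:
      Cassandra:
        Cassandra Operator Progress: Ready'

echo "$sample" | awk -F': ' '/Cassandra Operator Progress/ {print $2}'
# → Ready
```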
Deploy two kind-based Kubernetes clusters with:
make multi-up
Two clusters should be available:
kubectx
kind-k8ssandra-0
kind-k8ssandra-1
Each cluster should consist of the following nodes:
kind-k8ssandra-0:
NAME STATUS ROLES AGE VERSION
k8ssandra-0-control-plane Ready control-plane,master 9m20s v1.21.2
k8ssandra-0-worker Ready <none> 8m49s v1.21.2
k8ssandra-0-worker2 Ready <none> 8m49s v1.21.2
k8ssandra-0-worker3 Ready <none> 8m48s v1.21.2
k8ssandra-0-worker4 Ready <none> 8m49s v1.21.2
kind-k8ssandra-1:
NAME STATUS ROLES AGE VERSION
k8ssandra-1-control-plane Ready control-plane,master 9m51s v1.21.2
k8ssandra-1-worker Ready <none> 9m32s v1.21.2
k8ssandra-1-worker2 Ready <none> 9m20s v1.21.2
k8ssandra-1-worker3 Ready <none> 9m32s v1.21.2
k8ssandra-1-worker4 Ready <none> 9m20s v1.21.2
You're now ready to deploy a K8ssandraCluster.
Set your context to the control-plane cluster (kind-k8ssandra-0):
kubectx kind-k8ssandra-0
Switched to context "kind-k8ssandra-0".
Deploy the K8ssandraCluster resource:
cat <<EOF | kubectl -n k8ssandra-operator apply -f -
apiVersion: k8ssandra.io/v1alpha1
kind: K8ssandraCluster
metadata:
  name: demo
spec:
  cassandra:
    serverVersion: "4.0.1"
    storageConfig:
      cassandraDataVolumeClaimSpec:
        storageClassName: standard
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 5Gi
    config:
      jvmOptions:
        heapSize: 512M
    networking:
      hostNetwork: true
    datacenters:
      - metadata:
          name: dc1
        size: 3
        stargate:
          size: 1
          heapSize: 256M
      - metadata:
          name: dc2
        k8sContext: kind-k8ssandra-1
        size: 3
        stargate:
          size: 1
          heapSize: 256M
EOF
Confirm that the resource has been created:
kubectl -n k8ssandra-operator get k8ssandraclusters
NAME AGE
demo 45s
kubectl describe -n k8ssandra-operator K8ssandraCluster demo
Name:         demo
Namespace:    k8ssandra-operator
Labels:       <none>
Annotations:  <none>
API Version:  k8ssandra.io/v1alpha1
Kind:         K8ssandraCluster
...
Status:
  Datacenters:
    dc1:
      Cassandra:
        Cassandra Operator Progress:  Updating
      Node Statuses:
Events:  <none>
Monitor the status of the deployment, eventually resulting in all the resources being in the Ready state:
kubectl -n k8ssandra-operator describe K8ssandraCluster demo
Name:         demo
Namespace:    k8ssandra-operator
Labels:       <none>
Annotations:  <none>
API Version:  k8ssandra.io/v1alpha1
Kind:         K8ssandraCluster
...
Status:
  Datacenters:
    dc1:
      Cassandra:
        Cassandra Operator Progress:  Ready
        ...
      Stargate:
        Available Replicas:  1
        Conditions:
          Last Transition Time:  2021-09-27T17:52:41Z
          Status:                True
          Type:                  Ready
        Deployment Refs:
          demo-dc1-default-stargate-deployment
        Progress:              Running
        Ready Replicas:        1
        Ready Replicas Ratio:  1/1
        Replicas:              1
        Service Ref:           demo-dc1-stargate-service
        Updated Replicas:      1
    dc2:
      Cassandra:
        Cassandra Operator Progress:  Ready
        ...
      Stargate:
        Available Replicas:  1
        Conditions:
          Last Transition Time:  2021-09-27T17:53:40Z
          Status:                True
          Type:                  Ready
        Deployment Refs:
          demo-dc2-default-stargate-deployment
        Progress:              Running
        Ready Replicas:        1
        Ready Replicas Ratio:  1/1
        Replicas:              1
        Service Ref:           demo-dc2-stargate-service
        Updated Replicas:      1
Events:  <none>
You need to have Helm v3+ installed.
Configure the K8ssandra Helm repository:
helm repo add k8ssandra https://helm.k8ssandra.io/stable
Update your Helm repository cache:
helm repo update
Verify that you see the k8ssandra-operator chart:
helm search repo k8ssandra-operator
NAME CHART VERSION APP VERSION DESCRIPTION
k8ssandra/k8ssandra-operator 0.32.0 1.0.2 Kubernetes operator which handles the provision...
We will first look at a single cluster install to demonstrate that while K8ssandra Operator is designed for multi-cluster use, it can be used in a single cluster without any extra configuration.
Run setup-kind-multicluster.sh as follows:
./setup-kind-multicluster.sh --kind-worker-nodes 4
Install the Helm chart:
helm install k8ssandra-operator k8ssandra/k8ssandra-operator -n k8ssandra-operator --create-namespace
This helm install command does the following:
- Create the k8ssandra-operator namespace if necessary
- Install Cass Operator in the k8ssandra-operator namespace
- Install K8ssandra Operator in the k8ssandra-operator namespace
This does not currently install Cert Manager. Cass Operator requires Cert Manager when its webhook is enabled; here it is installed with the webhook disabled.
Verify that the Helm release is installed:
helm ls -n k8ssandra-operator
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
k8ssandra-operator k8ssandra-operator 1 2021-09-30 16:28:08.722822 -0400 EDT deployed k8ssandra-operator-0.32.0 1.0.2
Verify that the following CRDs are installed:
cassandradatacenters.cassandra.datastax.com
clientconfigs.config.k8ssandra.io
k8ssandraclusters.k8ssandra.io
replicatedsecrets.replication.k8ssandra.io
stargates.stargate.k8ssandra.io
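The CRD check above can be scripted. This sketch compares the expected list against installed CRD names; the live kubectl command is left in a comment, with a placeholder standing in for its output so the snippet runs without a cluster.

```shell
# Verify that each expected CRD name is present. On a live cluster, capture the
# installed names with the commented-out kubectl command instead of the placeholder.
expected="cassandradatacenters.cassandra.datastax.com
clientconfigs.config.k8ssandra.io
k8ssandraclusters.k8ssandra.io
replicatedsecrets.replication.k8ssandra.io
stargates.stargate.k8ssandra.io"

# installed=$(kubectl get crds -o name | sed 's|^customresourcedefinition[^/]*/||')
installed="$expected"   # placeholder standing in for live cluster output

missing=0
for crd in $expected; do
  echo "$installed" | grep -qx "$crd" || { echo "missing: $crd"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all CRDs installed"
```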
Check that there are two Deployments. The output should look similar to this:
kubectl -n k8ssandra-operator get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
k8ssandra-operator-cass-operator 1/1 1 1 85s
k8ssandra-operator-k8ssandra-operator 1/1 1 1 85s
Now we will deploy a K8ssandraCluster that consists of a 3-node Cassandra cluster and a Stargate node.
cat <<EOF | kubectl -n k8ssandra-operator apply -f -
apiVersion: k8ssandra.io/v1alpha1
kind: K8ssandraCluster
metadata:
  name: demo
spec:
  cassandra:
    serverVersion: "4.0.1"
    datacenters:
      - metadata:
          name: dc1
        size: 3
        storageConfig:
          cassandraDataVolumeClaimSpec:
            storageClassName: standard
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 5Gi
        config:
          jvmOptions:
            heapSize: 512M
        stargate:
          size: 1
          heapSize: 256M
EOF
Confirm that the resource has been created:
kubectl -n k8ssandra-operator get k8ssandraclusters
NAME AGE
demo 45s
kubectl -n k8ssandra-operator describe k8ssandracluster demo
Name:         demo
Namespace:    k8ssandra-operator
Labels:       <none>
Annotations:  <none>
API Version:  k8ssandra.io/v1alpha1
Kind:         K8ssandraCluster
...
Status:
  Datacenters:
    dc1:
      Cassandra:
        Cassandra Operator Progress:  Updating
      Node Statuses:
Events:  <none>
Monitor the status of the deployment, eventually resulting in all the resources being in the Ready state:
kubectl -n k8ssandra-operator describe K8ssandraCluster demo
Name:         demo
Namespace:    k8ssandra-operator
Labels:       <none>
Annotations:  <none>
API Version:  k8ssandra.io/v1alpha1
Kind:         K8ssandraCluster
...
Status:
  Datacenters:
    dc1:
      Cassandra:
        Cassandra Operator Progress:  Ready
        ...
      Stargate:
        Available Replicas:  1
        Conditions:
          Last Transition Time:  2021-09-28T03:32:07Z
          Status:                True
          Type:                  Ready
        Deployment Refs:
          demo-dc1-default-stargate-deployment
        Progress:              Running
        Ready Replicas:        1
        Ready Replicas Ratio:  1/1
        Replicas:              1
        Service Ref:           demo-dc1-stargate-service
        Updated Replicas:      1
Events:  <none>
If you previously created a cluster with setup-kind-multicluster.sh, you need to delete it in order to create the multi-cluster setup. The script currently does not support adding clusters to an existing setup (see #128).
We will create two kind clusters with 4 worker nodes per cluster. Remember that K8ssandra Operator requires clusters to have routable pod IPs. By default, kind clusters run on the same Docker network, which means that they will have routable IPs.
Run setup-kind-multicluster.sh as follows:
./setup-kind-multicluster.sh --clusters 2 --kind-worker-nodes 4
When creating a cluster, kind generates a kubeconfig with the address of the API server set to localhost. We need a kubeconfig that has the API server address set to its internal IP address. setup-kind-multicluster.sh takes care of this for us. Generated files are written into a build directory.
Run kubectx without any arguments and verify that you see the following contexts listed in the output:
- kind-k8ssandra-0
- kind-k8ssandra-1
We will install the control plane in kind-k8ssandra-0. Make sure your active context is configured correctly:
kubectx kind-k8ssandra-0
Install the operator:
helm install k8ssandra-operator k8ssandra/k8ssandra-operator -n k8ssandra-operator --create-namespace
This helm install command does the following:
- Create the k8ssandra-operator namespace if necessary
- Install Cass Operator in the k8ssandra-operator namespace
- Install K8ssandra Operator in the k8ssandra-operator namespace
This does not currently install Cert Manager. Cass Operator requires Cert Manager when its webhook is enabled; here it is installed with Cass Operator's webhook disabled.
Verify that the Helm release is installed:
helm ls -n k8ssandra-operator
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
k8ssandra-operator k8ssandra-operator 1 2021-09-30 16:28:08.722822 -0400 EDT deployed k8ssandra-operator-0.32.0 1.0.2
Verify that the following CRDs are installed:
cassandradatacenters.cassandra.datastax.com
clientconfigs.k8ssandra.io
k8ssandraclusters.k8ssandra.io
replicatedsecrets.k8ssandra.io
stargates.k8ssandra.io
Check that there are two Deployments. The output should look similar to this:
kubectl -n k8ssandra-operator get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
k8ssandra-operator-cass-operator 1/1 1 1 85s
k8ssandra-operator-k8ssandra-operator 1/1 1 1 85s
The operator looks for an environment variable named K8SSANDRA_CONTROL_PLANE. When set to true, the control plane is enabled. It is enabled by default.
Verify that the K8SSANDRA_CONTROL_PLANE environment variable is set to true:
kubectl -n k8ssandra-operator get deployment k8ssandra-operator-k8ssandra-operator -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="K8SSANDRA_CONTROL_PLANE")].value}'
Now we will install the data plane in kind-k8ssandra-1. Switch the active context:
kubectx kind-k8ssandra-1
Install the operator:
helm install k8ssandra-operator k8ssandra/k8ssandra-operator -n k8ssandra-operator --create-namespace --set controlPlane=false
This helm install command does the following:
- Create the k8ssandra-operator namespace if necessary
- Install Cass Operator in the k8ssandra-operator namespace
- Install K8ssandra Operator in the k8ssandra-operator namespace
- Configure K8ssandra Operator to run in the data plane
This does not currently install Cert Manager. Cass Operator requires Cert Manager when its webhook is enabled; here it is installed with Cass Operator's webhook disabled.
Verify that the following CRDs are installed:
cassandradatacenters.cassandra.datastax.com
clientconfigs.k8ssandra.io
k8ssandraclusters.k8ssandra.io
replicatedsecrets.k8ssandra.io
stargates.k8ssandra.io
Check that there are two Deployments. The output should look similar to this:
kubectl -n k8ssandra-operator get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
k8ssandra-operator-cass-operator 1/1 1 1 85s
k8ssandra-operator-k8ssandra-operator 1/1 1 1 85s
Verify that the K8SSANDRA_CONTROL_PLANE environment variable is set to false:
kubectl -n k8ssandra-operator get deployment k8ssandra-operator-k8ssandra-operator -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="K8SSANDRA_CONTROL_PLANE")].value}'
Now we need to create a ClientConfig for the kind-k8ssandra-1 cluster. We will use the create-clientconfig.sh script, which can be found here.
Here is a summary of what the script does:
- Get the k8ssandra-operator service account from the data plane cluster
- Extract the service account token
- Extract the CA cert
- Create a kubeconfig using the token and cert
- Create a secret for the kubeconfig in the control plane cluster
- Create a ClientConfig in the control plane cluster that references the secret
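The kubeconfig-building steps above can be sketched as follows. This is a hypothetical illustration, not the script's actual code; every value below is a placeholder, and the real script extracts the token, CA cert, and API address from the data plane cluster.

```shell
# Hypothetical sketch of building a kubeconfig from a service account token and
# CA cert. All values below are placeholders, not real credentials.
SA_TOKEN="placeholder-token"          # extracted from the service account secret
CA_CERT_B64="cGxhY2Vob2xkZXI="        # base64-encoded CA cert from the same secret
API_SERVER="https://10.96.0.1:6443"   # in-cluster API server address

cat > /tmp/clientconfig-kubeconfig <<EOF
apiVersion: v1
kind: Config
clusters:
- name: kind-k8ssandra-1
  cluster:
    server: ${API_SERVER}
    certificate-authority-data: ${CA_CERT_B64}
users:
- name: k8ssandra-operator
  user:
    token: ${SA_TOKEN}
contexts:
- name: kind-k8ssandra-1
  context:
    cluster: kind-k8ssandra-1
    user: k8ssandra-operator
current-context: kind-k8ssandra-1
EOF

# The control plane secret would then be created with something like:
# kubectl -n k8ssandra-operator create secret generic kind-k8ssandra-1-config \
#   --from-file=kubeconfig=/tmp/clientconfig-kubeconfig
grep "current-context" /tmp/clientconfig-kubeconfig
```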
Create a ClientConfig in the kind-k8ssandra-0 cluster using the service account token and CA cert from kind-k8ssandra-1:
./create-clientconfig.sh \
--namespace k8ssandra-operator \
--src-kubeconfig ./build/kind-kubeconfig \
--dest-kubeconfig ./build/kind-kubeconfig \
--src-context kind-k8ssandra-1 \
--dest-context kind-k8ssandra-0
Here ./build/kind-kubeconfig refers to the kubeconfig file generated by setup-kind-multicluster.sh. It should contain configurations to access both the kind-k8ssandra-0 and kind-k8ssandra-1 contexts.
You can specify the namespace where the secret and ClientConfig are created with the --namespace option.
Make the active context kind-k8ssandra-0:
kubectx kind-k8ssandra-0
Restart the operator:
kubectl -n k8ssandra-operator rollout restart deployment k8ssandra-operator-k8ssandra-operator
Note: See k8ssandra#178 for details on why it is necessary to restart the control plane operator.
Now we will create a K8ssandraCluster that consists of a Cassandra cluster with 2 DCs and 3 nodes per DC, and a Stargate node per DC.
cat <<EOF | kubectl -n k8ssandra-operator apply -f -
apiVersion: k8ssandra.io/v1alpha1
kind: K8ssandraCluster
metadata:
  name: demo
spec:
  cassandra:
    serverVersion: "3.11.11"
    storageConfig:
      cassandraDataVolumeClaimSpec:
        storageClassName: standard
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 5Gi
    config:
      jvmOptions:
        heapSize: 512M
    networking:
      hostNetwork: true
    datacenters:
      - metadata:
          name: dc1
        size: 3
        stargate:
          size: 1
          heapSize: 256M
      - metadata:
          name: dc2
        k8sContext: kind-k8ssandra-1
        size: 3
        stargate:
          size: 1
          heapSize: 256M
EOF
K8ssandra Operator can be installed with Kustomize which takes a declarative approach to configuring and deploying resources whereas Helm takes more of an imperative approach.
The following examples use kubectl apply -k to deploy resources. The -k option essentially runs kustomize build over the specified directory followed by kubectl apply. See this doc for details on the integration of Kustomize into kubectl.
Note: If kubectl apply -k <dir> does not work for you, you can instead use kustomize build <dir> | kubectl apply -f -.
We will first look at a single cluster install to demonstrate that while K8ssandra Operator is designed for multi-cluster use, it can be used in a single cluster without any extra configuration.
Run setup-kind-multicluster.sh as follows:
./setup-kind-multicluster.sh --kind-worker-nodes 4
We first need to install Cert Manager, as it is a dependency of cass-operator:
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.5.3/cert-manager.yaml
The GitHub Actions for the project are configured to build and push a new operator image to Docker Hub whenever commits are pushed to main.
See here on Docker Hub for a list of available images.
Install with kubectl:
kustomize build github.com/k8ssandra/k8ssandra-operator/config/deployments/control-plane\?ref\=v1.0.2 | kubectl apply --server-side --force-conflicts -f -
This installs the operator in the k8ssandra-operator namespace.
Note: This will deploy the latest operator image, i.e., k8ssandra/k8ssandra-operator:latest. In general it is best to avoid using latest.
If you want to customize the installation, create a kustomization directory that builds from a release tag. In this example we also create and set a new namespace. Note the namespace property we added: it tells Kustomize to apply a transformation on all resources that specify a namespace.
K8SSANDRA_OPERATOR_HOME=$(mktemp -d)
cat <<EOF >$K8SSANDRA_OPERATOR_HOME/kustomization.yaml
namespace: k8ssandra-operator

resources:
- github.com/k8ssandra/k8ssandra-operator/config/deployments/default?ref=v1.0.2

images:
- name: k8ssandra/k8ssandra-operator
  newTag: v1.0.2
EOF
Now install the operator:
kustomize build $K8SSANDRA_OPERATOR_HOME | kubectl apply --server-side --force-conflicts -f -
This installs the operator in the k8ssandra-operator namespace.
If you just want to generate the manifests then run:
kustomize build $K8SSANDRA_OPERATOR_HOME
Verify that the following CRDs are installed:
cassandradatacenters.cassandra.datastax.com
certificaterequests.cert-manager.io
certificates.cert-manager.io
challenges.acme.cert-manager.io
clientconfigs.config.k8ssandra.io
clusterissuers.cert-manager.io
issuers.cert-manager.io
k8ssandraclusters.k8ssandra.io
orders.acme.cert-manager.io
replicatedsecrets.replication.k8ssandra.io
stargates.stargate.k8ssandra.io
Check that there are two Deployments. The output should look similar to this:
kubectl -n k8ssandra-operator get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
cass-operator 1/1 1 1 2m
k8ssandra-operator 1/1 1 1 2m
Verify that the K8SSANDRA_CONTROL_PLANE environment variable is set to true:
kubectl -n k8ssandra-operator get deployment k8ssandra-operator -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="K8SSANDRA_CONTROL_PLANE")].value}'
Now we will deploy a K8ssandraCluster that consists of a 3-node Cassandra cluster and a Stargate node.
cat <<EOF | kubectl -n k8ssandra-operator apply -f -
apiVersion: k8ssandra.io/v1alpha1
kind: K8ssandraCluster
metadata:
  name: demo
spec:
  cassandra:
    serverVersion: "4.0.1"
    datacenters:
      - metadata:
          name: dc1
        size: 3
        storageConfig:
          cassandraDataVolumeClaimSpec:
            storageClassName: standard
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 5Gi
        config:
          jvmOptions:
            heapSize: 512M
        stargate:
          size: 1
          heapSize: 256M
EOF
Confirm that the resource has been created:
kubectl -n k8ssandra-operator get k8ssandraclusters
NAME AGE
demo 45s
kubectl -n k8ssandra-operator describe k8ssandracluster demo
Name:         demo
Namespace:    k8ssandra-operator
Labels:       <none>
Annotations:  <none>
API Version:  k8ssandra.io/v1alpha1
Kind:         K8ssandraCluster
...
Status:
  Datacenters:
    dc1:
      Cassandra:
        Cassandra Operator Progress:  Updating
      Node Statuses:
Events:  <none>
Monitor the status of the deployment, eventually resulting in all the resources being in the Ready state:
kubectl -n k8ssandra-operator describe K8ssandraCluster demo
Name:         demo
Namespace:    k8ssandra-operator
Labels:       <none>
Annotations:  <none>
API Version:  k8ssandra.io/v1alpha1
Kind:         K8ssandraCluster
...
Status:
  Datacenters:
    dc1:
      Cassandra:
        Cassandra Operator Progress:  Ready
        ...
      Stargate:
        Available Replicas:  1
        Conditions:
          Last Transition Time:  2021-09-28T03:32:07Z
          Status:                True
          Type:                  Ready
        Deployment Refs:
          demo-dc1-default-stargate-deployment
        Progress:              Running
        Ready Replicas:        1
        Ready Replicas Ratio:  1/1
        Replicas:              1
        Service Ref:           demo-dc1-stargate-service
        Updated Replicas:      1
Events:  <none>
If you previously created a cluster with setup-kind-multicluster.sh, you need to delete it in order to create the multi-cluster setup. The script currently does not support adding clusters to an existing setup (see #128).
We will create two kind clusters with 4 worker nodes per cluster. Remember that K8ssandra Operator requires clusters to have routable pod IPs. By default, kind clusters run on the same Docker network, which means that they will have routable IPs.
Run setup-kind-multicluster.sh as follows:
./setup-kind-multicluster.sh --clusters 2 --kind-worker-nodes 4
When creating a cluster, kind generates a kubeconfig with the address of the API server set to localhost. We need a kubeconfig that has the API server address set to its internal IP address. setup-kind-multicluster.sh takes care of this for us. Generated files are written into a build directory.
Run kubectx without any arguments and verify that you see the following contexts listed in the output:
- kind-k8ssandra-0
- kind-k8ssandra-1
Set the active context to kind-k8ssandra-0:
kubectx kind-k8ssandra-0
Install Cert Manager:
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.5.3/cert-manager.yaml
Set the active context to kind-k8ssandra-1:
kubectx kind-k8ssandra-1
Install Cert Manager:
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.5.3/cert-manager.yaml
We will install the control plane in kind-k8ssandra-0. Make sure your active context is configured correctly:
kubectx kind-k8ssandra-0
Now install the operator:
kustomize build github.com/k8ssandra/k8ssandra-operator/config/deployments/control-plane\?ref\=v1.0.2 | kubectl apply --server-side --force-conflicts -f -
This installs the operator in the k8ssandra-operator namespace.
Verify that the following CRDs are installed:
cassandradatacenters.cassandra.datastax.com
certificaterequests.cert-manager.io
certificates.cert-manager.io
challenges.acme.cert-manager.io
clientconfigs.k8ssandra.io
clusterissuers.cert-manager.io
issuers.cert-manager.io
k8ssandraclusters.k8ssandra.io
orders.acme.cert-manager.io
replicatedsecrets.k8ssandra.io
stargates.k8ssandra.io
Check that there are two Deployments. The output should look similar to this:
kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
cass-operator 1/1 1 1 2m
k8ssandra-operator 1/1 1 1 2m
The operator looks for an environment variable named K8SSANDRA_CONTROL_PLANE. When set to true, the control plane is enabled. It is enabled by default.
Verify that the K8SSANDRA_CONTROL_PLANE environment variable is set to true:
kubectl -n k8ssandra-operator get deployment k8ssandra-operator -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="K8SSANDRA_CONTROL_PLANE")].value}'
Now we will install the data plane in kind-k8ssandra-1. Switch the active context:
kubectx kind-k8ssandra-1
Now install the operator:
kustomize build github.com/k8ssandra/k8ssandra-operator/config/deployments/data-plane\?ref\=v1.0.2 | kubectl apply --server-side --force-conflicts -f -
This installs the operator in the k8ssandra-operator namespace.
Verify that the following CRDs are installed:
cassandradatacenters.cassandra.datastax.com
certificaterequests.cert-manager.io
certificates.cert-manager.io
challenges.acme.cert-manager.io
clientconfigs.k8ssandra.io
clusterissuers.cert-manager.io
issuers.cert-manager.io
k8ssandraclusters.k8ssandra.io
orders.acme.cert-manager.io
replicatedsecrets.k8ssandra.io
stargates.k8ssandra.io
Check that there are two Deployments. The output should look similar to this:
kubectl -n k8ssandra-operator get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
cass-operator 1/1 1 1 2m
k8ssandra-operator 1/1 1 1 2m
Verify that the K8SSANDRA_CONTROL_PLANE environment variable is set to false:
kubectl -n k8ssandra-operator get deployment k8ssandra-operator -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="K8SSANDRA_CONTROL_PLANE")].value}'
Now we need to create a ClientConfig for the kind-k8ssandra-1 cluster. We will use the create-clientconfig.sh script, which can be found here.
Here is a summary of what the script does:
- Get the k8ssandra-operator service account from the data plane cluster
- Extract the service account token
- Extract the CA cert
- Create a kubeconfig using the token and cert
- Create a secret for the kubeconfig in the control plane cluster
- Create a ClientConfig in the control plane cluster that references the secret
Create a ClientConfig in the kind-k8ssandra-0 cluster using the service account token and CA cert from kind-k8ssandra-1:
./create-clientconfig.sh \
--namespace k8ssandra-operator \
--src-kubeconfig ./build/kind-kubeconfig \
--dest-kubeconfig ./build/kind-kubeconfig \
--src-context kind-k8ssandra-1 \
--dest-context kind-k8ssandra-0
Here ./build/kind-kubeconfig refers to the kubeconfig file generated by setup-kind-multicluster.sh. It should contain configurations to access both the kind-k8ssandra-0 and kind-k8ssandra-1 contexts.
Make the active context kind-k8ssandra-0:
kubectx kind-k8ssandra-0
Restart the operator:
kubectl -n k8ssandra-operator rollout restart deployment k8ssandra-operator
Note: See k8ssandra#178 for details on why it is necessary to restart the control plane operator.
Now we will create a K8ssandraCluster that consists of a Cassandra cluster with 2 DCs and 3 nodes per DC, and a Stargate node per DC.
cat <<EOF | kubectl -n k8ssandra-operator apply -f -
apiVersion: k8ssandra.io/v1alpha1
kind: K8ssandraCluster
metadata:
  name: demo
spec:
  cassandra:
    serverVersion: "4.0.1"
    storageConfig:
      cassandraDataVolumeClaimSpec:
        storageClassName: standard
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 5Gi
    config:
      jvmOptions:
        heapSize: 512M
    networking:
      hostNetwork: true
    datacenters:
      - metadata:
          name: dc1
        size: 3
        stargate:
          size: 1
          heapSize: 256M
      - metadata:
          name: dc2
        k8sContext: kind-k8ssandra-1
        size: 3
        stargate:
          size: 1
          heapSize: 256M
EOF