optimize the deployment for oceanbase used as openstack's metadb (#636)
chris-sun-star authored Nov 18, 2024
1 parent f9c1ec6 commit 120fcfe
Showing 4 changed files with 280 additions and 142 deletions.
60 changes: 3 additions & 57 deletions example/openstack/README.md
@@ -2,70 +2,16 @@

## Overview
This folder contains configuration files to deploy OceanBase and OpenStack on Kubernetes.
* OceanBase configuration files are located in the oceanbase directory.
* A script, oceanbase.sh, that sets up the OceanBase cluster, creates a tenant, deploys obproxy, and finally prints a connection command.
* OpenStack configuration files are located in the openstack directory.
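For orientation, the sequence the oceanbase.sh script walks through can be sketched as a plain list (the labels below summarize the script's phases; they are descriptive only, not commands):

```shell
# Phases of the oceanbase.sh deployment, in order (descriptive labels only)
steps=(
  "install cert-manager (if missing) and ob-operator"
  "create the namespace and password secrets"
  "deploy the OBCluster and wait until it is running"
  "deploy the OBTenant and wait until it is running"
  "deploy obproxy (ConfigMap, Service, Deployment)"
  "run a Job that sets tenant variables"
  "print the mysql connection command"
)
for s in "${steps[@]}"; do echo "step: ${s}"; done
```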

## Deploy steps

### Deploy OceanBase
The script oceanbase.sh performs all of the steps below in one run and leaves OceanBase ready for use as OpenStack's metadb. It requires that the storage class `general` already exists in the K8s cluster; with that in place, deploying OceanBase is as simple as running `bash oceanbase.sh`. To deploy manually instead, follow the steps:
1. Deploy cert-manager
Deploy cert-manager using the following command. Ensure all pods are running before proceeding to the next step:
```
kubectl apply -f https://raw.githubusercontent.com/oceanbase/ob-operator/2.3.0_release/deploy/cert-manager.yaml
```
2. Deploy ob-operator
Deploy ob-operator using the command below. Wait until all pods are running:
```
kubectl apply -f https://raw.githubusercontent.com/oceanbase/ob-operator/2.3.0_release/deploy/operator.yaml
```
3. Create secret
Create the root user's password secret (in the `openstack` namespace, which must already exist) using the following command:
```
kubectl create secret generic root-password --from-literal=password='password' -n openstack
```
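Note that Kubernetes stores Secret `data` values base64-encoded. To use a random password instead of the literal `password`, it can be generated with the same recipe the oceanbase.sh script uses; the variable names here are illustrative:

```shell
# Generate a random 16-character alphanumeric password (same recipe as oceanbase.sh)
RAW_PASSWORD=$(head /dev/urandom | tr -dc 'A-Za-z0-9' | head -c 16)

# Secret manifests carry values base64-encoded in their data fields
ENCODED=$(printf '%s' "${RAW_PASSWORD}" | base64 -w 0)

# Decoding recovers the plain-text password, e.g. for a mysql login
DECODED=$(printf '%s' "${ENCODED}" | base64 -d)

echo "password length: ${#RAW_PASSWORD}"
```

With `kubectl create secret generic --from-literal`, the encoding is done for you; the explicit round-trip matters only when writing a Secret manifest by hand.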

4. Deploy obcluster
Deploy the obcluster using the following command:
```
kubectl apply -f oceanbase/obcluster.yaml
```
Wait until the obcluster status changes to `running`. You can check the status using:
```
kubectl get obcluster openstack -n openstack -o wide
```

5. Deploy obtenant
Deploy the obtenant using the command below:
```
kubectl apply -f oceanbase/obtenant.yaml
```
Wait until the obtenant status changes to `running`. You can verify this using:
```
kubectl get obtenant openstack -n openstack -o wide
```

6. Deploy obproxy
A script is provided to set up obproxy. Download it with the following command:
```
wget https://raw.githubusercontent.com/oceanbase/ob-operator/master/scripts/setup-obproxy.sh
```
Run the script to set up obproxy:
```
bash setup-obproxy.sh -n openstack --proxy-version 4.2.3.0-3 --env ODP_MYSQL_VERSION=8.0.30 --env ODP_PROXY_TENANT_NAME=openstack -d openstack openstack
```

7. Configure Tenant Variables
Connect to the openstack tenant using the command below (replace ${ip} with the obproxy service IP; root@openstack is the root user of the openstack tenant, and the default password is `password`):
```
mysql -h${ip} -P2883 -uroot@openstack -p
```
Set the necessary variables (`version` controls the MySQL version string the tenant reports to clients):
```
set global version='8.0.30';
set global autocommit=0;
```


### Deploy OpenStack
Once OceanBase is set up, deploying OpenStack is straightforward. Override the necessary variables using the files under the openstack directory. The files are based on OpenStack version 2024.1.
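OpenStack services reach the metadb through a SQLAlchemy-style connection URL. Assuming the obproxy Service created by oceanbase.sh (`svc-obproxy-openstack` in the `openstack` namespace, SQL port 2883), such a URL can be assembled as shown below; the `mysql+pymysql` driver, the placeholder password, and the `keystone` database name are illustrative assumptions, not values from this repository:

```shell
# Connection coordinates from the oceanbase.sh deployment
DB_HOST="svc-obproxy-openstack.openstack.svc"  # obproxy ClusterIP service DNS name
DB_PORT=2883                                   # obproxy SQL port
DB_USER="root"                                 # routed to the openstack tenant via ODP_PROXY_TENANT_NAME

# Placeholders: substitute the decoded root-password Secret and the service's database
DB_PASSWORD="example-password"
DB_NAME="keystone"

DB_URL="mysql+pymysql://${DB_USER}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_NAME}"
echo "${DB_URL}"
```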
277 changes: 277 additions & 0 deletions example/openstack/oceanbase.sh
@@ -0,0 +1,277 @@
#!/bin/bash

set -xe

CLUSTER_NAME=openstack
TENANT_NAME=openstack
OCEANBASE_NAMESPACE=openstack
STORAGE_CLASS=general
OCEANBASE_CLUSTER_IMAGE=oceanbase/oceanbase-cloud-native:4.3.3.1-101000012024102216
OBPROXY_IMAGE=oceanbase/obproxy-ce:4.3.2.0-26
# random 16-character passwords, base64-encoded for direct use in Secret data
ROOT_PASSWORD=$(head /dev/urandom | tr -dc A-Za-z0-9 | head -c 16 | base64 -w 0)
PROXYRO_PASSWORD=$(head /dev/urandom | tr -dc A-Za-z0-9 | head -c 16 | base64 -w 0)

# check and install cert-manager
if kubectl get crds -o name | grep 'certificates.cert-manager.io'
then
echo "cert-manager is already installed"
else
echo "cert-manager is not installed, install it now"
kubectl apply -f https://raw.githubusercontent.com/oceanbase/ob-operator/refs/heads/master/deploy/cert-manager.yaml
fi

# install or update ob-operator (pinned to the 2.3.0 release)
echo "install or update ob-operator"
kubectl apply -f https://raw.githubusercontent.com/oceanbase/ob-operator/refs/heads/2.3.0_release/deploy/operator.yaml
kubectl wait --for=condition=Ready pod -l "control-plane=controller-manager" -n oceanbase-system --timeout=300s

# create the namespace for OceanBase and obproxy resources
tee /tmp/namespace.yaml <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: ${OCEANBASE_NAMESPACE}
EOF

kubectl apply -f /tmp/namespace.yaml

# user secrets referenced by the OBCluster and OBTenant
tee /tmp/secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: root-password
  namespace: ${OCEANBASE_NAMESPACE}
type: Opaque
data:
  password: ${ROOT_PASSWORD}
---
apiVersion: v1
kind: Secret
metadata:
  name: proxyro-password
  namespace: ${OCEANBASE_NAMESPACE}
type: Opaque
data:
  password: ${PROXYRO_PASSWORD}
EOF

kubectl apply -f /tmp/secret.yaml

# deploy a 3-zone OBCluster in service mode
tee /tmp/obcluster.yaml <<EOF
apiVersion: oceanbase.oceanbase.com/v1alpha1
kind: OBCluster
metadata:
  name: ${CLUSTER_NAME}
  namespace: ${OCEANBASE_NAMESPACE}
  annotations:
    "oceanbase.oceanbase.com/mode": "service"
spec:
  clusterName: ${CLUSTER_NAME}
  clusterId: 1
  serviceAccount: "default"
  userSecrets:
    root: root-password
    proxyro: proxyro-password
  topology:
    - zone: zone1
      replica: 1
    - zone: zone2
      replica: 1
    - zone: zone3
      replica: 1
  observer:
    image: ${OCEANBASE_CLUSTER_IMAGE}
    resource:
      cpu: 2
      memory: 10Gi
    storage:
      dataStorage:
        storageClass: ${STORAGE_CLASS}
        size: 50Gi
      redoLogStorage:
        storageClass: ${STORAGE_CLASS}
        size: 50Gi
      logStorage:
        storageClass: ${STORAGE_CLASS}
        size: 20Gi
  parameters:
    - name: system_memory
      value: 2G
    - name: __min_full_resource_pool_memory
      value: "2147483648"
EOF

kubectl apply -f /tmp/obcluster.yaml
kubectl wait --for=jsonpath='{.status.status}'=running obcluster/${CLUSTER_NAME} -n ${OCEANBASE_NAMESPACE} --timeout=900s

# create the openstack tenant with a Full replica in each zone
tee /tmp/obtenant.yaml <<EOF
apiVersion: oceanbase.oceanbase.com/v1alpha1
kind: OBTenant
metadata:
  name: ${TENANT_NAME}
  namespace: ${OCEANBASE_NAMESPACE}
spec:
  obcluster: ${CLUSTER_NAME}
  tenantName: ${TENANT_NAME}
  unitNum: 1
  charset: utf8mb4
  connectWhiteList: '%'
  forceDelete: true
  credentials:
    root: root-password
  pools:
    - zone: zone1
      type:
        name: Full
        replica: 1
        isActive: true
      resource:
        maxCPU: 2
        memorySize: 4Gi
    - zone: zone2
      type:
        name: Full
        replica: 1
        isActive: true
      resource:
        maxCPU: 2
        memorySize: 4Gi
    - zone: zone3
      type:
        name: Full
        replica: 1
        isActive: true
      priority: 3
      resource:
        maxCPU: 2
        memorySize: 4Gi
EOF

kubectl apply -f /tmp/obtenant.yaml
kubectl wait --for=jsonpath='{.status.status}'=running obtenant/${TENANT_NAME} -n ${OCEANBASE_NAMESPACE} --timeout=300s

# build the rootservice list "ip:2881;ip:2881;..." from the observer service IPs,
# then strip the trailing separator
RS_LIST=$(kubectl get observers -l ref-obcluster=${CLUSTER_NAME} -n ${OCEANBASE_NAMESPACE} -o jsonpath='{range .items[*]}{.status.serviceIp}{":2881;"}{end}' | sed 's/;$//')
echo "${RS_LIST}"

# deploy obproxy: ConfigMap, Service, and a 2-replica Deployment
tee /tmp/obproxy.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-obproxy-${CLUSTER_NAME}
  namespace: ${OCEANBASE_NAMESPACE}
data:
  ODP_MYSQL_VERSION: "8.0.30"
  ODP_PROXY_TENANT_NAME: ${TENANT_NAME}
---
apiVersion: v1
kind: Service
metadata:
  name: svc-obproxy-${CLUSTER_NAME}
  namespace: ${OCEANBASE_NAMESPACE}
spec:
  ports:
    - name: sql
      port: 2883
      protocol: TCP
      targetPort: 2883
    - name: prometheus
      port: 2884
      protocol: TCP
      targetPort: 2884
  selector:
    app: app-obproxy-${CLUSTER_NAME}
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: obproxy-${CLUSTER_NAME}
  namespace: ${OCEANBASE_NAMESPACE}
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app-obproxy-${CLUSTER_NAME}
  template:
    metadata:
      labels:
        app: app-obproxy-${CLUSTER_NAME}
    spec:
      containers:
        - env:
            - name: APP_NAME
              value: obproxy-${CLUSTER_NAME}
            - name: OB_CLUSTER
              value: ${CLUSTER_NAME}
            - name: RS_LIST
              value: ${RS_LIST}
            - name: PROXYRO_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: password
                  name: proxyro-password
          envFrom:
            - configMapRef:
                name: cm-obproxy-${CLUSTER_NAME}
          image: ${OBPROXY_IMAGE}
          imagePullPolicy: IfNotPresent
          name: obproxy
          ports:
            - containerPort: 2883
              name: sql
              protocol: TCP
            - containerPort: 2884
              name: prometheus
              protocol: TCP
          resources:
            limits:
              cpu: "1"
              memory: 2Gi
            requests:
              cpu: "1"
              memory: 2Gi
EOF

kubectl apply -f /tmp/obproxy.yaml
kubectl wait --for=condition=Ready pod -l app="app-obproxy-${CLUSTER_NAME}" -n ${OCEANBASE_NAMESPACE} --timeout=300s

# one-off Job that sets the tenant variables OpenStack expects
tee /tmp/tenant-job.yaml <<EOF
apiVersion: batch/v1
kind: Job
metadata:
  name: tenant-config-job
  namespace: ${OCEANBASE_NAMESPACE}
spec:
  backoffLimit: 3
  ttlSecondsAfterFinished: 300
  template:
    spec:
      containers:
        - name: mysql-client
          image: mysql:8.0
          command: ["/bin/bash"]
          args:
            - -c
            - |
              mysql -h\${HOST} -P\${PORT} -u\${USER} -p\${PASSWORD} -e "
              SET GLOBAL autocommit = 0;
              SET GLOBAL version = '8.0.30';
              "
          env:
            - name: HOST
              value: svc-obproxy-${CLUSTER_NAME}.${OCEANBASE_NAMESPACE}.svc
            - name: PORT
              value: "2883"
            - name: USER
              value: root
            - name: PASSWORD
              valueFrom:
                secretKeyRef:
                  name: root-password
                  key: password
      restartPolicy: Never
EOF
kubectl apply -f /tmp/tenant-job.yaml
kubectl wait --for=condition=complete job/tenant-config-job -n ${OCEANBASE_NAMESPACE} --timeout=300s

echo "OceanBase is ready, you can connect with the following command"
echo "mysql -hsvc-obproxy-${CLUSTER_NAME}.${OCEANBASE_NAMESPACE}.svc -P2883 -uroot -p$(echo ${ROOT_PASSWORD} | base64 -d)"
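The RS_LIST construction in the script can be checked offline: each observer contributes `ip:2881;`, and the trailing separator is removed to form the rootservice list. A self-contained sketch with made-up IPs:

```shell
# Simulated jsonpath output for three observers (IPs are made up)
JOINED="10.0.0.1:2881;10.0.0.2:2881;10.0.0.3:2881;"

# Strip the trailing separator to get a well-formed rootservice list
RS_LIST=$(printf '%s' "${JOINED}" | sed 's/;$//')
echo "${RS_LIST}"
```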
45 changes: 0 additions & 45 deletions example/openstack/oceanbase/obcluster.yaml

This file was deleted.
