description: >
relatedContent: >
---

The hosting architecture differs entirely between CHT-Core 3.x and CHT-Core 4.x. When both versions are running in Kubernetes, migrating data requires specific steps using the [couchdb-migration](https://github.com/medic/couchdb-migration) tool. This tool interfaces with CouchDB to update shard maps and database metadata.

{{% alert title="Note" %}}
If, after upgrading, you get the error `Cannot convert undefined or null to object`, see [issue #8040](https://github.com/medic/cht-core/issues/8040) for a workaround. This only affects CHT 4.0.0, 4.0.1, 4.1.0 and 4.1.1; it was fixed in CHT 4.2.0.
{{% /alert %}}

## Migration Steps

1. Initial Setup
```shell
# Set your namespace
NAMESPACE=<your-namespace>
```

2. Access the Source CouchDB (3.x)
```shell
# List all pods to find medic-os pod
kubectl get pods -n $NAMESPACE

# Exec into the medic-os pod
kubectl exec -it -n $NAMESPACE <medic-os-pod-name> -- bash
```

3. Set Up Migration Tool Inside 3.x Pod

Once inside the pod, install required dependencies:
```shell
# Get the migration tool (git and a supported Node.js are assumed to be available
# in the container, or installed beforehand)
git clone https://github.com/medic/couchdb-migration.git
cd couchdb-migration

# Install the tool's dependencies
npm ci --omit=dev
```

4. Run Pre-Migration Commands on 3.x

While still `exec`ed in the `medic-os` container, get the CouchDB admin credentials from 1Password, set them inside the pod, and run the migration tool's `get-env` command (a sketch of these commands follows the list below).

Save the output containing:
- CouchDB server UUID (used for replication checkpointing)
- CouchDB admin credentials
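
A minimal sketch of these commands, assuming the `get-env` helper shipped with the couchdb-migration tool; the variable names and exact invocation may differ, so check the tool's README:

```shell
# Set the CouchDB admin credentials inside the pod (values from 1Password)
export ADMIN_USER=<admin-username>
export ADMIN_PASSWORD=<admin-password>

# Point the migration tool at the local 3.x CouchDB
export COUCH_URL="http://${ADMIN_USER}:${ADMIN_PASSWORD}@localhost:5984"

# Print the server UUID, secret and admin credentials needed for the 4.x values.yaml
get-env
```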

You can now exit the medic-os container by running `exit`.

5. Clone the 3.x Data Volume

First, identify the EBS volume ID backing your 3.x data, create a snapshot of it, and create a new volume from that snapshot; then tag the new volume (a hedged sketch of these commands follows the verification command below). Confirm the tags on the new volume:
```shell
aws ec2 describe-volumes --region eu-west-2 --volume-id $NEW_VOLUME_ID | jq '.Volumes[0].Tags'
```
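
The commands leading up to that check are roughly as follows; a minimal sketch assuming an EBS-backed PersistentVolume, the `eu-west-2` region and placeholder availability zone and tag values, all of which should be adjusted to your environment:

```shell
# Find the EBS volume ID behind the 3.x data PersistentVolume
# (the field is .spec.csi.volumeHandle for CSI-provisioned volumes)
kubectl get pv -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.awsElasticBlockStore.volumeID}{"\n"}{end}'
OLD_VOLUME_ID=<vol-id-from-output>

# Snapshot the 3.x volume and wait for the snapshot to complete
SNAPSHOT_ID=$(aws ec2 create-snapshot --region eu-west-2 \
  --volume-id "$OLD_VOLUME_ID" \
  --description "CHT 3.x data for migration" \
  --query 'SnapshotId' --output text)
aws ec2 wait snapshot-completed --region eu-west-2 --snapshot-ids "$SNAPSHOT_ID"

# Create a new volume from the snapshot in the AZ used by the 4.x cluster
NEW_VOLUME_ID=$(aws ec2 create-volume --region eu-west-2 \
  --availability-zone eu-west-2a \
  --snapshot-id "$SNAPSHOT_ID" \
  --volume-type gp3 \
  --query 'VolumeId' --output text)

# Tag the new volume so it is easy to identify later
aws ec2 create-tags --region eu-west-2 \
  --resources "$NEW_VOLUME_ID" \
  --tags Key=Name,Value=<your-namespace>-cht4-couchdb
```

The resulting `$NEW_VOLUME_ID` is the value used for `preExistingEBSVolumeID-1` in the next step.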

6. Deploy CHT 4.x with Existing Data

Create a `values.yaml` file using the volume ID from the previous step:

For a single-node deployment:

```yaml
project_name: <your-namespace-defined-in-NAMESPACE>
namespace: <your-namespace-defined-in-NAMESPACE>
chtversion: 4.10.0 # Can be 4.10.0 or any later version

upstream_servers:
  docker_registry: "public.ecr.aws/medic"
  builds_url: "https://staging.dev.medicmobile.org/_couch/builds_4"

couchdb:
  password: <from_get-env_command> # Avoid using non-URL-safe characters
  secret: <from_get-env_command>
  user: <admin_user>
  uuid: <from_get-env_command>
  clusteredCouch_enabled: false
  couchdb_node_storage_size: 100Gi # Set equal to the size of the migrated disk

toleration: # For production, change key to "prod-couchdb-only"
  key: "dev-couchdb-only"
  operator: "Equal"
  value: "true"
  effect: "NoSchedule"

ingress:
  annotations: # For production, update groupname to "prod-cht-alb" and Environment to "prod"
    groupname: "dev-cht-alb"
    tags: "Environment=dev,Team=QA"
  certificate: "arn:aws:iam::<account-id>:server-certificate/2024-wildcard-dev-medicmobile-org-chain"
  host: <your-url> # e.g. yyyy.app.medicmobile.org
  hosted_zone_id: <your-hosted-zone-id>
  # For production, use the production load balancer address
  load_balancer: "dualstack.k8s-devchtalb-3eb0781cbb-694321496.eu-west-2.elb.amazonaws.com"

environment: "remote"
cluster_type: "eks"
cert_source: "eks-medic"

couchdb_data:
  preExistingDataAvailable: "true"
  dataPathOnDiskForCouchDB: "storage/medic-core/couchdb/data" # Use the subPath from the 3.x instance
  partition: "0"

ebs:
  preExistingEBSVolumeID-1: <NEW_VOLUME_ID-from-previous-step> # Volume containing the 3.x data
  preExistingEBSVolumeSize: "100Gi"
```

For a clustered deployment:

```yaml
project_name: <your-namespace-defined-in-NAMESPACE>
namespace: <your-namespace-defined-in-NAMESPACE>
chtversion: 4.10.0 # Can be 4.10.0 or any later version

upstream_servers:
  docker_registry: "public.ecr.aws/medic"
  builds_url: "https://staging.dev.medicmobile.org/_couch/builds_4"

couchdb:
  password: <from_get-env_command> # Avoid using non-URL-safe characters
  secret: <from_get-env_command>
  user: <admin_user>
  uuid: <from_get-env_command>
  clusteredCouch_enabled: true # Enables clustered CouchDB in 4.x
  couchdb_node_storage_size: 100Gi # Set equal to the size of the migrated disk

clusteredCouch: # Only relevant if clusteredCouch_enabled is true
  noOfCouchDBNodes: 3

toleration: # For production, change key to "prod-couchdb-only"
  key: "dev-couchdb-only"
  operator: "Equal"
  value: "true"
  effect: "NoSchedule"

ingress:
  annotations: # For production, update groupname to "prod-cht-alb" and Environment to "prod"
    groupname: "dev-cht-alb"
    tags: "Environment=dev,Team=QA"
  certificate: "arn:aws:iam::<account-id>:server-certificate/2024-wildcard-dev-medicmobile-org-chain"
  host: <your-url> # e.g. yyyy.app.medicmobile.org
  hosted_zone_id: <your-hosted-zone-id>
  # For production, use the production load balancer address
  load_balancer: "dualstack.k8s-devchtalb-3eb0781cbb-694321496.eu-west-2.elb.amazonaws.com"

environment: "remote"
cluster_type: "eks"
cert_source: "eks-medic"

# Only specify nodes if deploying on k3s and you want to pin CouchDB pods to specific nodes
#nodes:
#  node-1: <worker-node-1-name> # Only for k3s deployments
#  node-2: <worker-node-2-name> # Only for k3s deployments
#  node-3: <worker-node-3-name> # Only for k3s deployments

couchdb_data:
  preExistingDataAvailable: "true"
  dataPathOnDiskForCouchDB: "storage/medic-core/couchdb/data" # Use the subPath from the 3.x instance
  partition: "0"

ebs:
  preExistingEBSVolumeID-1: <NEW_VOLUME_ID-from-previous-step> # Volume containing the 3.x data
  preExistingEBSVolumeID-2: "" # Leave empty for secondary nodes
  preExistingEBSVolumeID-3: "" # Leave empty for tertiary nodes
  preExistingEBSVolumeSize: "100Gi" # Set equal to the size of the migrated disk
```
Deploy using the `cht-deploy` script from the `cht-core` repository:
```shell
cd cht-core/scripts/deploy
./cht-deploy -f PATH_TO/values.yaml
```

7. Verify Deployment and Run Migration Commands

First, verify that CouchDB is running properly:
```shell
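# A minimal verification sketch; the pod name placeholder and the use of curl
# inside the CouchDB pod are assumptions, adjust them to your deployment.

# Wait until all pods in the namespace are Running
kubectl get pods -n $NAMESPACE

# Confirm CouchDB responds from inside its pod
kubectl exec -it -n $NAMESPACE <couchdb-pod-name> -- curl -s http://localhost:5984/_up
```

Once CouchDB is confirmed up, run the remaining migration commands from the [couchdb-migration](https://github.com/medic/couchdb-migration) tool to update the shard maps and database metadata for the 4.x deployment, as described in the tool's README.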