The restore method described in this document is implemented based on CustomResourceDefinition (CRD).

## Limitations

- Snapshot restore is applicable to TiDB Operator v1.5.0 or later versions and TiDB v6.5.3 or later versions.
- You can use snapshot restore only to restore data to a cluster with the same number of TiKV nodes and the same volume configuration. That is, the number of TiKV nodes and the volume configuration of TiKV nodes are identical between the restore cluster and the backup cluster.
- Snapshot restore is currently not supported for TiFlash, TiCDC, DM, and TiDB Binlog nodes.
- Snapshot restore creates volumes with the default configuration (3000 IOPS/125 MB/s) of GP3. To perform restore using other configurations, you can specify the volume type or configuration, such as `--volume-type=gp3`, `--volume-iops=7000`, or `--volume-throughput=400`, as shown in the following example.
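
    The following snippet is a minimal sketch of how these options might be set in the `VolumeRestore` CR, assuming that extra BR flags are passed through the `template.br.options` field:

    ```yaml
    spec:
      template:
        br:
          # Assumption: extra BR flags are passed through this options list
          options:
          - --volume-type=gp3
          - --volume-iops=7000
          - --volume-throughput=400
    ```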

## Prerequisites

Before restoring a TiDB cluster across multiple Kubernetes from EBS volume snapshots, you need to complete the following preparations.

### Complete the volume backup

To restore a TiDB cluster across multiple Kubernetes from EBS snapshots, you should have a completed `VolumeBackup`. For the steps to perform a volume snapshot backup, refer to [Back Up a TiDB Cluster across Multiple Kubernetes Using EBS Volume Snapshots](backup-by-ebs-snapshot-across-multiple-kubernetes.md).
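
For example, assuming that the federation CRDs are installed in the control plane and the `VolumeBackup` CR was created in `${namespace}`, you can verify that the backup has completed with a command like the following:

```shell
# Assumption: the control plane exposes the federation resource name "volumebackup"
kubectl get volumebackup -n ${namespace} -o wide
```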

### Deploy a TiDB cluster across multiple Kubernetes

Deploy a TiDB cluster across multiple Kubernetes to which you want to restore data.

### Step 1. Set up the environment for EBS volume snapshot restore in every data plane

**You must execute the following steps in every data plane**.

1. Download the [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/master/manifests/backup/backup-rbac.yaml) file to the restore server.
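
    For example, assuming `wget` is available on the restore server, you can fetch the raw file directly:

    ```shell
    # Assumption: wget is installed; the raw URL mirrors the GitHub file linked above
    wget https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/backup/backup-rbac.yaml
    ```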

2. Assuming that you deploy the TiDB cluster in `${namespace}`, create the RBAC-related resources required for the restore in this namespace by running the following command:

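    The following is a minimal sketch of this step, assuming the downloaded manifest is saved as `backup-rbac.yaml` in the current directory:

    ```shell
    # Assumption: backup-rbac.yaml is in the current directory
    kubectl apply -f backup-rbac.yaml -n ${namespace}
    ```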

3. Grant permissions to access remote storage.

    To restore data from EBS snapshots, you need to grant permissions to access the remote storage. Refer to [AWS account authorization](grant-permissions-to-remote-storage.md#aws-account-permissions) for the three available methods.
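
    For example, if you choose the AK/SK method, the `VolumeRestore` examples below reference a Secret named `s3-secret` in each data plane. The following is a minimal sketch of creating it, assuming the `access_key` and `secret_key` literal names used by TiDB Operator:

    ```shell
    # Assumption: access_key and secret_key are the literal names expected by TiDB Operator
    kubectl create secret generic s3-secret \
      --from-literal=access_key=${access_key} \
      --from-literal=secret_key=${secret_key} \
      --namespace=${namespace}
    ```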


### Step 2. Restore backup data to the TiDB cluster

**You must execute the following steps in the control plane**.

Depending on the authorization method you chose in the previous step for granting remote storage access, you can restore data to TiDB using any of the following methods:

<SimpleTab>
<div label="AK/SK">

If you grant permissions by accessKey and secretKey, you can create the `VolumeRestore` CR as follows:

```shell
kubectl apply -f restore-fed.yaml
```

The `restore-fed.yaml` file has the following content:

```yaml
---
apiVersion: federation.pingcap.com/v1alpha1
kind: VolumeRestore
metadata:
  name: ${restore-name}
spec:
  clusters:
  - k8sClusterName: ${k8s-name1}
    tcName: ${tc-name1}
    tcNamespace: ${tc-namespace1}
    backup:
      s3:
        provider: aws
        secretName: s3-secret
        region: ${region-name}
        bucket: ${bucket-name}
        prefix: ${backup-path1}
  - k8sClusterName: ${k8s-name2}
    tcName: ${tc-name2}
    tcNamespace: ${tc-namespace2}
    backup:
      s3:
        provider: aws
        secretName: s3-secret
        region: ${region-name}
        bucket: ${bucket-name}
        prefix: ${backup-path2}
  - ... # other clusters
  template:
    br:
      sendCredToTikv: true
    toolImage: ${br-image}
```

</div>
<div label="IAM role with Pod">

If you grant permissions by associating Pod with IAM, you can create the `VolumeRestore` CR as follows:

```shell
kubectl apply -f restore-fed.yaml
```

The `restore-fed.yaml` file has the following content:

```yaml
---
apiVersion: federation.pingcap.com/v1alpha1
kind: VolumeRestore
metadata:
  name: ${restore-name}
  annotations:
    iam.amazonaws.com/role: arn:aws:iam::123456789012:role/role-name
spec:
  clusters:
  - k8sClusterName: ${k8s-name1}
    tcName: ${tc-name1}
    tcNamespace: ${tc-namespace1}
    backup:
      s3:
        provider: aws
        region: ${region-name}
        bucket: ${bucket-name}
        prefix: ${backup-path1}
  - k8sClusterName: ${k8s-name2}
    tcName: ${tc-name2}
    tcNamespace: ${tc-namespace2}
    backup:
      s3:
        provider: aws
        region: ${region-name}
        bucket: ${bucket-name}
        prefix: ${backup-path2}
  - ... # other clusters
  template:
    br:
      sendCredToTikv: false
    toolImage: ${br-image}
```

</div>
<div label="IAM role with ServiceAccount">

If you grant permissions by associating ServiceAccount with IAM, you can create the `VolumeRestore` CR as follows:

```shell
kubectl apply -f restore-fed.yaml
```


The `restore-fed.yaml` file has the following content:

```yaml
---
apiVersion: federation.pingcap.com/v1alpha1
kind: VolumeRestore
metadata:
  name: ${restore-name}
spec:
  clusters:
  - k8sClusterName: ${k8s-name1}
    tcName: ${tc-name1}
    tcNamespace: ${tc-namespace1}
    backup:
      s3:
        provider: aws
        region: ${region-name}
        bucket: ${bucket-name}
        prefix: ${backup-path1}
  - k8sClusterName: ${k8s-name2}
    tcName: ${tc-name2}
    tcNamespace: ${tc-namespace2}
    backup:
      s3:
        provider: aws
        region: ${region-name}
        bucket: ${bucket-name}
        prefix: ${backup-path2}
  - ... # other clusters
  template:
    br:
      sendCredToTikv: false
    toolImage: ${br-image}
    serviceAccount: tidb-backup-manager
```
</div>
</SimpleTab>

### Step 3. View the restore status

After you create the `VolumeRestore` CR, the restore process starts automatically.

To check the restore status, use the following command:

```shell
kubectl get vrt -n ${namespace} -o wide
```
