From e9db05c5ded7ea9622c7c32e708781f0abdef017 Mon Sep 17 00:00:00 2001
From: github-actions
Date: Wed, 23 Aug 2023 02:35:35 +0000
Subject: [PATCH] Preview PR https://github.com/pingcap/docs-tidb-operator/pull/2406, triggered from commit https://github.com/pingcap/docs-tidb-operator/pull/2406/commits/b28d73109ee425e4d9ce2ff1da5ad5af82ff89c8

---
 .../master/backup-restore-by-ebs-snapshot-faq.md |  6 +++---
 .../master/br-federation-architecture.md         | 10 +++++-----
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/markdown-pages/en/tidb-in-kubernetes/master/backup-restore-by-ebs-snapshot-faq.md b/markdown-pages/en/tidb-in-kubernetes/master/backup-restore-by-ebs-snapshot-faq.md
index 93f338fc0..4fe493b48 100644
--- a/markdown-pages/en/tidb-in-kubernetes/master/backup-restore-by-ebs-snapshot-faq.md
+++ b/markdown-pages/en/tidb-in-kubernetes/master/backup-restore-by-ebs-snapshot-faq.md
@@ -19,15 +19,15 @@ Solution: Probably you have forbidden the feature of "resolved ts" in TiKV or PD
 For the TiKV configuration, check whether you have set `resolved-ts.enable = false` or `raftstore.report-min-resolved-ts-interval = "0s"`. If so, remove these settings.
 
 For the PD configuration, check whether you have set `pd-server.min-resolved-ts-persistence-interval = "0s"`. If so, remove this setting.
 
-## Backup Failed due to executed twice
+## Backup failed because it was executed twice
 
-Issue: [#5143](https://github.com/pingcap/tidb-operator/issues/5143)
+**Issue:** [#5143](https://github.com/pingcap/tidb-operator/issues/5143)
 
 Symptom: You get an error that contains `backup meta file exists`, and you find that the backup pod was scheduled twice.
 
 Solution: The first backup pod was probably evicted by Kubernetes due to node resource pressure. You can configure `PriorityClass` and `ResourceRequirements` to reduce the possibility of eviction. For details, refer to [this issue comment](https://github.com/pingcap/tidb-operator/issues/5143#issuecomment-1654916830).
 
-## Save the time for backup by controlling snapshot size calculation level
+## Save backup time by controlling the snapshot size calculation level
 
 Symptom: A scheduled backup cannot finish within the expected window because of the cost of calculating the snapshot size.
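
For reference, the "resolved ts" settings named in the first FAQ hunk above are configured through the `TidbCluster` CR in TiDB Operator. The following is a minimal sketch, not a complete manifest; the cluster name `basic` is a placeholder, and the commented-out TOML lines are the ones to delete if they appear in your configuration:

```yaml
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: basic            # placeholder cluster name
spec:
  tikv:
    config: |
      # Delete these lines if present; they disable the "resolved ts" feature:
      # [resolved-ts]
      # enable = false
      # [raftstore]
      # report-min-resolved-ts-interval = "0s"
  pd:
    config: |
      # Delete these lines if present; they stop PD from persisting resolved ts:
      # [pd-server]
      # min-resolved-ts-persistence-interval = "0s"
```
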
diff --git a/markdown-pages/en/tidb-in-kubernetes/master/br-federation-architecture.md b/markdown-pages/en/tidb-in-kubernetes/master/br-federation-architecture.md
index 30b4e6903..accfbae06 100644
--- a/markdown-pages/en/tidb-in-kubernetes/master/br-federation-architecture.md
+++ b/markdown-pages/en/tidb-in-kubernetes/master/br-federation-architecture.md
@@ -19,9 +19,9 @@ BR Federation coordinates `Backup` and `Restore` Custom Resources (CRs) in the d
 
 ![BR Federation architecture](/media/br-federation-architecture.png)
 
-## Backup Process
+## Backup process
 
-### Backup Process in Data Plane
+### Backup process in the data plane
 
 The backup process in the data plane consists of three phases:
 
@@ -39,9 +39,9 @@ The orchestration process of `Backup` from the control plane to the data plane i
 
 ![backup orchestration process](/media/volume-backup-process-across-multiple-kubernetes-overall.png)
 
-## Restore Process
+## Restore process
 
-### Restore Process in Data Plane
+### Restore process in the data plane
 
 The restore process in the data plane consists of three phases:
 
@@ -53,7 +53,7 @@ The restore process in the data plane consists of three phases:
 
 ![restore process in data plane](/media/volume-restore-process-data-plane.png)
 
-### Restore Orchestration Process
+### Restore orchestration process
 
 The orchestration process of `Restore` from the control plane to the data plane is as follows:
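
As a companion to the architecture walkthrough above, the sketch below shows roughly what a control-plane `VolumeBackup` CR looks like; BR Federation stamps its `template` into a `Backup` CR in each data plane. The field names here, including `calcSizeLevel` (the snapshot size calculation knob mentioned in the FAQ), are assumptions based on the BR Federation documentation, so verify them against the CRD reference for your release:

```yaml
apiVersion: federation.pingcap.com/v1alpha1   # assumed API group/version
kind: VolumeBackup
metadata:
  name: volume-backup-demo
spec:
  # One entry per data-plane Kubernetes cluster that hosts TiKV volumes.
  clusters:
    - k8sClusterName: k8s-cluster-1   # placeholder data-plane cluster name
      tcName: tc-1                    # TidbCluster name in that cluster
      tcNamespace: tidb-cluster
    - k8sClusterName: k8s-cluster-2
      tcName: tc-2
      tcNamespace: tidb-cluster
  # Copied into the Backup CR created in each data plane.
  template:
    br:
      sendCredToTikv: false
    s3:
      provider: aws
      region: us-west-2               # placeholder region
      bucket: my-backup-bucket        # placeholder bucket
      prefix: volume-backup-demo
    toolImage: pingcap/br:v7.1.0
    calcSizeLevel: full               # assumed snapshot-size calculation level
```
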