---
title: Deploy and Maintain TiCDC
summary: Learn the hardware and software recommendations for deploying and running TiCDC, and how to deploy and maintain it.
---
This document describes how to deploy and maintain a TiCDC cluster, including the hardware and software recommendations. You can either deploy TiCDC along with a new TiDB cluster or add the TiCDC component to an existing TiDB cluster.
In production environments, the software and hardware recommendations for TiCDC are as follows:
| Linux OS | Version |
| --- | --- |
| Red Hat Enterprise Linux | 7.3 or later versions |
| CentOS | 7.3 or later versions |
| CPU | Memory | Disk | Network | Number of TiCDC cluster instances (minimum requirements for production environment) |
| --- | --- | --- | --- | --- |
| 16 core+ | 64 GB+ | 500 GB+ SSD | 10 Gigabit network card (2 preferred) | 2 |
For more information, see Software and Hardware Recommendations.
When you deploy a new TiDB cluster using TiUP, you can also deploy TiCDC at the same time. You only need to add the `cdc_servers` section to the configuration file that TiUP uses to start the TiDB cluster. The following is an example:

```yaml
cdc_servers:
  - host: 10.0.1.20
    gc-ttl: 86400
    data_dir: "/cdc-data"
  - host: 10.0.1.21
    gc-ttl: 86400
    data_dir: "/cdc-data"
```
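After the topology file is ready, you can deploy and start the cluster as usual. A minimal sketch, assuming the topology above is saved as `topology.yaml`, the cluster is named `tidb-test`, and you deploy v8.3.0 (all three values are illustrative placeholders, not from this document):

```shell
# Deploy the cluster; -p prompts for the root password of the target hosts
tiup cluster deploy tidb-test v8.3.0 topology.yaml --user root -p
# Start all components, including the TiCDC nodes defined in cdc_servers
tiup cluster start tidb-test
```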
More references:

- For detailed operations, see Edit the initialization configuration file.
- For detailed configurable fields, see Configure `cdc_servers` using TiUP.
- For detailed steps to deploy a TiDB cluster, see Deploy a TiDB Cluster Using TiUP.
Note:
Before installing TiCDC, ensure that you have manually configured the SSH mutual trust and sudo without password between the TiUP control machine and the TiCDC host.
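A quick way to verify this prerequisite is a non-interactive login from the TiUP control machine. A minimal check, assuming the deployment user is `tidb` and the TiCDC host is `10.0.1.20` (both are illustrative values):

```shell
# Should log in without a password prompt; `sudo -n` fails instead of
# prompting, so printing "sudo OK" confirms passwordless sudo on the host.
ssh tidb@10.0.1.20 "sudo -n true && echo sudo OK"
```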
The method of scaling out a TiCDC cluster is similar to that of deploying one. It is recommended to use TiUP to perform the scale-out.
1. Create a `scale-out.yml` file to add the TiCDC node information. The following is an example:

    ```yaml
    cdc_servers:
      - host: 10.1.1.1
        gc-ttl: 86400
        data_dir: /tidb-data/cdc-8300
      - host: 10.1.1.2
        gc-ttl: 86400
        data_dir: /tidb-data/cdc-8300
      - host: 10.0.1.4
        gc-ttl: 86400
        data_dir: /tidb-data/cdc-8300
    ```

2. Run the scale-out command on the TiUP control machine:

    ```shell
    tiup cluster scale-out <cluster-name> scale-out.yml
    ```
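Once the scale-out completes, you can confirm that the new TiCDC nodes are running, for example:

```shell
# Lists all nodes in the cluster along with their roles and status
tiup cluster display <cluster-name>
```

The new TiCDC nodes should appear in the output with the `Up` status.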
For more use cases, see Scale out a TiCDC cluster.
It is recommended that you use TiUP to scale in TiCDC nodes. The following is the scale-in command:
```shell
tiup cluster scale-in <cluster-name> --node 10.0.1.4:8300
```
For more use cases, see Scale in a TiCDC cluster.
You can upgrade TiDB clusters using TiUP, during which TiCDC is upgraded as well. After you execute the upgrade command, TiUP automatically upgrades the TiCDC component. The following is an example:
```shell
tiup update --self && \
tiup update --all && \
tiup cluster upgrade <cluster-name> <version> --transfer-timeout 600
```
Note:
In the preceding command, you need to replace `<cluster-name>` and `<version>` with the actual cluster name and cluster version. For example, the version can be v8.3.0.
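For example, a filled-in version of the upgrade command, assuming a cluster named `test-cluster` (an illustrative name) that you upgrade to v8.3.0:

```shell
tiup update --self && \
tiup update --all && \
tiup cluster upgrade test-cluster v8.3.0 --transfer-timeout 600
```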
When you upgrade a TiCDC cluster, you need to pay attention to the following:
- TiCDC v4.0.2 reconfigured `changefeed`. For details, see Configuration file compatibility notes.
- If you encounter any problem during the upgrade, you can refer to upgrade FAQs for solutions.
- Since v6.3.0, TiCDC supports rolling upgrade. During the upgrade, the replication latency is stable and does not fluctuate significantly. Rolling upgrade takes effect automatically if the following conditions are met (a quick command-line check is sketched after this list):
    - TiCDC is v6.3.0 or later.
    - TiUP is v1.11.3 or later.
    - At least two TiCDC instances are running in the cluster.
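To verify these conditions, you can, for example, check the TiUP version and list the cluster's TiCDC instances (output formats may vary across versions):

```shell
# TiUP itself should be v1.11.3 or later
tiup --version
# Shows component versions and the number of running TiCDC (cdc) instances
tiup cluster display <cluster-name>
```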
This section describes how to use the `tiup cluster edit-config` command to modify the configurations of TiCDC. In the following example, it is assumed that you need to change the default value of `gc-ttl` from `86400` to `172800` (48 hours).
1. Run the `tiup cluster edit-config` command. Replace `<cluster-name>` with the actual cluster name:

    ```shell
    tiup cluster edit-config <cluster-name>
    ```

2. In the vi editor, modify the `cdc` configurations under `server_configs`:

    ```yaml
    server_configs:
      tidb: {}
      tikv: {}
      pd: {}
      tiflash: {}
      tiflash-learner: {}
      cdc:
        gc-ttl: 172800
    ```

    In the preceding configuration, `gc-ttl` is set to 172800 seconds (48 hours).

3. Run the `tiup cluster reload <cluster-name> -R cdc` command to reload the configuration.
You can use TiUP to easily stop and start TiCDC nodes. The commands are as follows:
- Stop TiCDC:

    ```shell
    tiup cluster stop <cluster-name> -R cdc
    ```

- Start TiCDC:

    ```shell
    tiup cluster start <cluster-name> -R cdc
    ```

- Restart TiCDC:

    ```shell
    tiup cluster restart <cluster-name> -R cdc
    ```

In the preceding commands, replace `<cluster-name>` with the actual cluster name.
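If you only need to operate on a single TiCDC node rather than all of them, you can filter by node instead of by role. A sketch, assuming the node to stop is `10.0.1.4:8300`:

```shell
# -N selects a specific node by its <host>:<port> ID
tiup cluster stop <cluster-name> -N 10.0.1.4:8300
```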
To enable TLS for the TiCDC component, see Enable TLS Between TiDB Components.
Run the following command to view the TiCDC cluster status. Note that you need to replace `v<CLUSTER_VERSION>` with the TiCDC cluster version, such as `v8.3.0`:

```shell
tiup cdc:v<CLUSTER_VERSION> cli capture list --server=http://10.0.10.25:8300
```
```json
[
  {
    "id": "806e3a1b-0e31-477f-9dd6-f3f2c570abdd",
    "is-owner": true,
    "address": "127.0.0.1:8300",
    "cluster-id": "default"
  },
  {
    "id": "ea2a4203-56fe-43a6-b442-7b295f458ebc",
    "is-owner": false,
    "address": "127.0.0.1:8301",
    "cluster-id": "default"
  }
]
```
- `id`: Indicates the ID of the service process.
- `is-owner`: Indicates whether the service process is the owner node.
- `address`: Indicates the address through which the service process exposes its interface to the outside.
- `cluster-id`: Indicates the ID of the TiCDC cluster. The default value is `default`.
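Besides the `cli capture list` command, each TiCDC server also exposes an HTTP status endpoint that you can query directly. A minimal example; the `/status` path and its exact fields are assumed to be available on your TiCDC version:

```shell
# Returns basic information about the TiCDC server, such as its
# version and whether it is currently the owner
curl http://10.0.10.25:8300/status
```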