TiKV leader election instability causing TiCDC initialization failures #11528
Labels: affects-8.5, area/ticdc, may-affects-5.4, may-affects-6.1, may-affects-6.5, may-affects-7.1, may-affects-7.5, may-affects-8.1, severity/major, type/bug
Bug Report
Please answer these questions before submitting your issue. Thanks!
1. Minimal reproduce step (Required)
Set up a TiDB cluster on version 8.1.1 and integrate it with Kafka 3.7.1.
Use DM to replicate a database from MySQL into TiDB, and configure TiCDC for data replication from TiDB to Kafka (a minimal sketch of the changefeed creation follows these steps).
Monitor the TiKV logs during normal operation, specifically watching for leader election, region management, and TiCDC observer activity.
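A minimal sketch of the TiCDC side of this setup; the server address, Kafka broker, topic name, and changefeed ID are placeholders, and the exact sink-URI parameters depend on the deployment:

```shell
# Create a changefeed replicating from TiDB to Kafka
# (addresses, topic, and changefeed ID are placeholders).
cdc cli changefeed create \
  --server=http://127.0.0.1:8300 \
  --sink-uri="kafka://127.0.0.1:9092/ticdc-test?protocol=canal-json&kafka-version=3.7.1" \
  --changefeed-id="kafka-replication"

# Watch TiKV logs for leader elections and CDC observer stops
# (exact log patterns may vary by version).
grep -E "became leader|cdc stop observing" tikv.log
```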
2. What did you expect to see? (Required)
Stable operation of the TiKV service with successful leader elections and consistent region management.
TiCDC should initialize and operate without errors, ensuring uninterrupted data replication to Kafka.
3. What did you see instead? (Required)
TiCDC initialization fails with a not_leader error:

```
Request error message: peer is not leader for this region, leader may None not_leader
```

For example, the TiKV logs show the CDC observer stopping:

```
[INFO] [delegate.rs:1034] ["cdc stop observing"] [failed=true] [region]
```
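One way to confirm which peer PD believes is the leader when this happens is to query PD for the affected region; the PD address and region ID below are placeholders:

```shell
# Ask PD for the current state of a region, including its leader
# (the PD address and region ID are placeholders).
tiup ctl:v8.1.1 pd -u http://127.0.0.1:2379 region 12345
```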
4. What is your TiDB version? (Required)
```
Release Version: v8.1.1
Edition: Community
Git Commit Hash: a7df4f9845d5d6a590c5d45dad0dcc9f21aa8765
Git Branch: HEAD
UTC Build Time: 2024-08-22 05:49:03
GoVersion: go1.21.13
Race Enabled: false
Check Table Before Drop: false
Store: tikv
```