diff --git a/best-practices/massive-regions-best-practices.md b/best-practices/massive-regions-best-practices.md
index 6e041c02094c2..58f0949a8ea1f 100644
--- a/best-practices/massive-regions-best-practices.md
+++ b/best-practices/massive-regions-best-practices.md
@@ -95,8 +95,8 @@ Enable `Region Merge` by configuring the following parameters:
 {{< copyable "" >}}
 
 ```
-config set max-merge-region-size 20
-config set max-merge-region-keys 200000
+config set max-merge-region-size 54
+config set max-merge-region-keys 540000
 config set merge-schedule-limit 8
 ```
 
@@ -138,7 +138,11 @@ If Region followers have not received the heartbeat from the leader within the `
 
 ### Method 6: Adjust Region size
 
-The default size of a Region is 96 MiB, and you can reduce the number of Regions by setting Regions to a larger size. For more information, see [Tune Region Performance](/tune-region-performance.md).
+The default size of a Region is 256 MiB, and you can reduce the number of Regions by setting Regions to a larger size. For more information, see [Tune Region Performance](/tune-region-performance.md).
+
+> **Note:**
+>
+> Starting from v8.4.0, the default Region size has been increased to 256 MiB. When upgrading a cluster to v8.4.0 or later, if the Region size has not been changed manually, the TiKV cluster's default Region size will automatically update to 256 MiB.
 
 > **Warning:**
 >
diff --git a/best-practices/pd-scheduling-best-practices.md b/best-practices/pd-scheduling-best-practices.md
index a86e17f0974d8..9a92eb5a8b3a3 100644
--- a/best-practices/pd-scheduling-best-practices.md
+++ b/best-practices/pd-scheduling-best-practices.md
@@ -104,9 +104,9 @@ Region merge refers to the process of merging adjacent small regions. It serves
 
 Specifically, when a newly split Region exists for more than the value of [`split-merge-interval`](/pd-configuration-file.md#split-merge-interval) (`1h` by default), if the following conditions occur at the same time, this Region triggers the Region merge scheduling:
 
-- The size of this Region is smaller than the value of the [`max-merge-region-size`](/pd-configuration-file.md#max-merge-region-size) (20 MiB by default)
+- The size of this Region is smaller than the value of the [`max-merge-region-size`](/pd-configuration-file.md#max-merge-region-size). Starting from v8.4.0, the default value is changed from 20 MiB to 54 MiB. The new default value is automatically applied only for newly created clusters. Existing clusters are not affected.
 
-- The number of keys in this Region is smaller than the value of [`max-merge-region-keys`](/pd-configuration-file.md#max-merge-region-keys) (200,000 by default).
+- The number of keys in this Region is smaller than the value of [`max-merge-region-keys`](/pd-configuration-file.md#max-merge-region-keys). Starting from v8.4.0, the default value is changed from 200,000 to 540,000. The new default value is automatically applied only for newly created clusters. Existing clusters are not affected.
 
 ## Query scheduling status
 
diff --git a/glossary.md b/glossary.md
index 39b96d20f7288..5b50d37f33de4 100644
--- a/glossary.md
+++ b/glossary.md
@@ -137,7 +137,7 @@ Raft Engine is an embedded persistent storage engine with a log-structured desig
 
 ### Region/peer/Raft group
 
-Region is the minimal piece of data storage in TiKV, each representing a range of data (96 MiB by default). Each Region has three replicas by default. A replica of a Region is called a peer. Multiple peers of the same Region replicate data via the Raft consensus algorithm, so peers are also members of a Raft instance. TiKV uses Multi-Raft to manage data. That is, for each Region, there is a corresponding, isolated Raft group.
+Region is the minimal piece of data storage in TiKV, each representing a range of data (256 MiB by default). Each Region has three replicas by default. A replica of a Region is called a peer. Multiple peers of the same Region replicate data via the Raft consensus algorithm, so peers are also members of a Raft instance. TiKV uses Multi-Raft to manage data. That is, for each Region, there is a corresponding, isolated Raft group.
 
 ### Region split
 
diff --git a/information-schema/information-schema-cluster-config.md b/information-schema/information-schema-cluster-config.md
index c5e1bfa1db225..cb3cc065e3b2e 100644
--- a/information-schema/information-schema-cluster-config.md
+++ b/information-schema/information-schema-cluster-config.md
@@ -50,10 +50,10 @@ SELECT * FROM cluster_config WHERE type='tikv' AND `key` LIKE 'coprocessor%';
 | TYPE | INSTANCE        | KEY                               | VALUE   |
 +------+-----------------+-----------------------------------+---------+
 | tikv | 127.0.0.1:20165 | coprocessor.batch-split-limit     | 10      |
-| tikv | 127.0.0.1:20165 | coprocessor.region-max-keys       | 1440000 |
-| tikv | 127.0.0.1:20165 | coprocessor.region-max-size       | 144MiB  |
-| tikv | 127.0.0.1:20165 | coprocessor.region-split-keys     | 960000  |
-| tikv | 127.0.0.1:20165 | coprocessor.region-split-size     | 96MiB   |
+| tikv | 127.0.0.1:20165 | coprocessor.region-max-keys       | 3840000 |
+| tikv | 127.0.0.1:20165 | coprocessor.region-max-size       | 384MiB  |
+| tikv | 127.0.0.1:20165 | coprocessor.region-split-keys     | 2560000 |
+| tikv | 127.0.0.1:20165 | coprocessor.region-split-size     | 256MiB  |
 | tikv | 127.0.0.1:20165 | coprocessor.split-region-on-table | false   |
 +------+-----------------+-----------------------------------+---------+
 6 rows in set (0.00 sec)
diff --git a/pd-configuration-file.md b/pd-configuration-file.md
index f727ce33ba9b5..02c08549ec151 100644
--- a/pd-configuration-file.md
+++ b/pd-configuration-file.md
@@ -261,13 +261,13 @@ Configuration items related to scheduling
 ### `max-merge-region-size`
 
 + Controls the size limit of `Region Merge`. When the Region size is greater than the specified value, PD does not merge the Region with the adjacent Regions.
-+ Default value: `20`
++ Default value: `54`. Before v8.4.0, the default value is `20`.
 + Unit: MiB
 
 ### `max-merge-region-keys`
 
 + Specifies the upper limit of the `Region Merge` key. When the Region key is greater than the specified value, the PD does not merge the Region with its adjacent Regions.
-+ Default value: `200000`
++ Default value: `540000`. Before v8.4.0, the default value is `200000`.
 
 ### `patrol-region-interval`
 
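The PD hunks above only change documented defaults; a cluster created before v8.4.0 keeps the old merge limits until they are changed manually. Below is a minimal sketch of how to check and, if desired, adopt the new limits with pd-ctl. It assumes a `tiup` installation, the `v8.4.0` component tag, and a PD endpoint at `http://127.0.0.1:2379`, all of which are placeholders for your own deployment:

```shell
# Show the merge limits PD is actually using (part of the scheduling config JSON).
tiup ctl:v8.4.0 pd -u http://127.0.0.1:2379 config show | grep -E '"max-merge-region-(size|keys)"'

# Optionally raise an existing cluster to the new documented defaults.
tiup ctl:v8.4.0 pd -u http://127.0.0.1:2379 config set max-merge-region-size 54
tiup ctl:v8.4.0 pd -u http://127.0.0.1:2379 config set max-merge-region-keys 540000
```
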
diff --git a/pd-control.md b/pd-control.md
index daf4bcbd6e2d7..c9cafeac22108 100644
--- a/pd-control.md
+++ b/pd-control.md
@@ -146,8 +146,8 @@ Usage:
     "leader-schedule-limit": 4,
     "leader-schedule-policy": "count",
     "low-space-ratio": 0.8,
-    "max-merge-region-keys": 200000,
-    "max-merge-region-size": 20,
+    "max-merge-region-keys": 540000,
+    "max-merge-region-size": 54,
     "max-pending-peer-count": 64,
     "max-snapshot-count": 64,
     "max-store-down-time": "30m0s",
diff --git a/tidb-storage.md b/tidb-storage.md
index 8c801eb559988..7526ab2a341b8 100644
--- a/tidb-storage.md
+++ b/tidb-storage.md
@@ -49,7 +49,7 @@ To make it easy to understand, let's assume that all data only has one replica.
 * Hash: Create Hash by Key and select the corresponding storage node according to the Hash value.
 * Range: Divide ranges by Key, where a segment of serial Key is stored on a node.
 
-TiKV chooses the second solution that divides the whole Key-Value space into a series of consecutive Key segments. Each segment is called a Region. Each Region can be described by `[StartKey, EndKey)`, a left-closed and right-open interval. The default size limit for each Region is 96 MiB and the size can be configured.
+TiKV chooses the second solution that divides the whole Key-Value space into a series of consecutive Key segments. Each segment is called a Region. Each Region can be described by `[StartKey, EndKey)`, a left-closed and right-open interval. The default size limit for each Region is 256 MiB and the size can be configured.
 
 ![Region in TiDB](/media/tidb-storage-2.png)
 
diff --git a/tikv-configuration-file.md b/tikv-configuration-file.md
index 8c96e8528b37c..f798a57e759fb 100644
--- a/tikv-configuration-file.md
+++ b/tikv-configuration-file.md
@@ -1082,7 +1082,7 @@ Configuration items related to Coprocessor.
 ### `region-split-size`
 
 + The size of the newly split Region. This value is an estimate.
-+ Default value: `"96MiB"`
++ Default value: `"256MiB"`. Before v8.4.0, the default value is `"96MiB"`.
 + Unit: KiB|MiB|GiB
 
 ### `region-max-keys`
@@ -1093,7 +1093,7 @@ Configuration items related to Coprocessor.
 ### `region-split-keys`
 
 + The number of keys in the newly split Region. This value is an estimate.
-+ Default value: `960000`
++ Default value: `2560000`. Before v8.4.0, the default value is `960000`.
 
 ### `consistency-check-method`
 
@@ -2147,7 +2147,7 @@ Configuration items related to BR backup.
 
 + The threshold of the backup SST file size. If the size of a backup file in a TiKV Region exceeds this threshold, the file is backed up to several files with the TiKV Region split into multiple Region ranges. Each of the files in the split Regions is the same size as `sst-max-size` (or slightly larger).
 + For example, when the size of a backup file in the Region of `[a,e)` is larger than `sst-max-size`, the file is backed up to several files with regions `[a,b)`, `[b,c)`, `[c,d)` and `[d,e)`, and the size of `[a,b)`, `[b,c)`, `[c,d)` is the same as that of `sst-max-size` (or slightly larger).
-+ Default value: `"144MiB"`
++ Default value: `"384MiB"`. Before v8.4.0, the default value is `"144MiB"`.
 
 ### `enable-auto-tune` New in v5.4.0
 
diff --git a/tikv-overview.md b/tikv-overview.md
index 5934f8d8cd1fb..077ae60234b57 100644
--- a/tikv-overview.md
+++ b/tikv-overview.md
@@ -21,7 +21,7 @@ There is a RocksDB database within each Store and it stores data into the local
 
 Data consistency between replicas of a Region is guaranteed by the Raft Consensus Algorithm. Only the leader of the Region can provide the writing service, and only when the data is written to the majority of replicas of a Region, the write operation succeeds.
 
-TiKV tries to keep an appropriate size for each Region in the cluster. The Region size is currently 96 MiB by default. This mechanism helps the PD component to balance Regions among nodes in a TiKV cluster. When the size of a Region exceeds a threshold (144 MiB by default), TiKV splits it into two or more Regions. When the size of a Region is smaller than the threshold (20 MiB by default), TiKV merges the two smaller adjacent Regions into one Region.
+TiKV tries to keep an appropriate size for each Region in the cluster. The Region size is currently 256 MiB by default. This mechanism helps the PD component to balance Regions among nodes in a TiKV cluster. When the size of a Region exceeds a threshold (384 MiB by default), TiKV splits it into two or more Regions. When the size of a Region is smaller than the threshold (54 MiB by default), TiKV merges the two smaller adjacent Regions into one Region.
 
 When PD moves a replica from one TiKV node to another, it firstly adds a Learner replica on the target node, after the data in the Learner replica is nearly the same as that in the Leader replica, PD changes it to a Follower replica and removes the Follower replica on the source node.
 
diff --git a/tune-tikv-memory-performance.md b/tune-tikv-memory-performance.md
index 7c1c45fa60c81..f46369c00781d 100644
--- a/tune-tikv-memory-performance.md
+++ b/tune-tikv-memory-performance.md
@@ -111,8 +111,8 @@ region-split-check-diff = "32MB"
 [coprocessor]
 ## If the size of a Region with the range of [a,e) is larger than the value of `region_max_size`, TiKV tries to split the Region to several Regions, for example, the Regions with the ranges of [a,b), [b,c), [c,d), and [d,e).
 ## After the Region split, the size of the split Regions is equal to the value of `region_split_size` (or slightly larger than the value of `region_split_size`).
-# region-max-size = "144MB"
-# region-split-size = "96MB"
+# region-max-size = "384MB"
+# region-split-size = "256MB"
 
 [rocksdb]
 # The maximum number of threads of RocksDB background tasks. The background tasks include compaction and flush.
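
The TiKV-side values quoted in the `CLUSTER_CONFIG` sample output and in the coprocessor entries above can be cross-checked from any MySQL client connected to TiDB. The command below is only an illustrative sketch: the `127.0.0.1:4000` endpoint and `root` user are assumptions, and the values returned depend on when the cluster was created and on any manual overrides:

```shell
# List the Region size and key thresholds that each TiKV instance reports.
mysql -h 127.0.0.1 -P 4000 -u root -e \
  "SELECT * FROM information_schema.cluster_config WHERE type='tikv' AND \`key\` LIKE 'coprocessor.region%';"
```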