diff --git a/system-variables.md b/system-variables.md
index 6d1e2fd125559..5580f6368365d 100644
--- a/system-variables.md
+++ b/system-variables.md
@@ -1947,6 +1947,10 @@ mysql> SELECT job_info FROM mysql.analyze_jobs ORDER BY end_time DESC LIMIT 1;
 
 ### tidb_enable_auto_analyze_priority_queue New in v8.0.0
 
+> **Warning:**
+>
+> Starting from v9.0.0, this variable is deprecated. TiDB always enables the priority queue for automatically collecting statistics.
+
 - Scope: GLOBAL
 - Persists to cluster: Yes
 - Applies to hint [SET_VAR](/optimizer-hints.md#set_varvar_namevar_value): No
@@ -3817,7 +3821,7 @@ For a system upgraded to v5.0 from an earlier version, if you have not modified
 - This variable defines the maximum number of TiDB nodes that the Distributed eXecution Framework (DXF) tasks can use. The default value is `-1`, which indicates that automatic mode is enabled. In automatic mode, TiDB dynamically calculates the value as `min(3, tikv_nodes / 3)`, where `tikv_nodes` represents the number of TiKV nodes in the cluster.
 
 > **Note:**
->
+>
 > If you explicitly set the [`tidb_service_scope`](#tidb_service_scope-new-in-v740) system variable for some TiDB nodes, the Distributed eXecution Framework schedules tasks only to these nodes. In this case, even if you set `tidb_max_dist_task_nodes` to a larger value, the framework uses no more than the number of nodes explicitly configured with `tidb_service_scope`.
 >
 > For example, if the cluster has 10 TiDB nodes, and 4 of them are configured with `tidb_service_scope = group1`, then even if you set `tidb_max_dist_task_nodes = 5`, only 4 nodes participate in task execution.
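
For reference, a minimal SQL sketch of how the deprecated switch behaves around the v9.0.0 boundary. The statements below use standard `SHOW GLOBAL VARIABLES` and `SET GLOBAL` syntax and are illustrative only; they are not part of the patch.

```sql
-- Inspect the current value of the switch (GLOBAL scope).
SHOW GLOBAL VARIABLES LIKE 'tidb_enable_auto_analyze_priority_queue';

-- On versions earlier than v9.0.0, the priority queue can still be toggled.
-- Starting from v9.0.0, the priority queue is always enabled, so this
-- setting no longer changes the auto-analyze behavior.
SET GLOBAL tidb_enable_auto_analyze_priority_queue = ON;
```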
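
Similarly, a sketch of the 10-node example from the note on `tidb_max_dist_task_nodes`, assuming the 4 designated TiDB nodes are labeled by running `SET GLOBAL tidb_service_scope` on each of them; the `group1` name and the values come from the patch, and the statements are illustrative only.

```sql
-- Run on each of the 4 TiDB nodes that should execute DXF tasks.
SET GLOBAL tidb_service_scope = 'group1';

-- Cluster-wide upper bound on DXF task nodes. Even though this is 5, only
-- the 4 nodes labeled with tidb_service_scope participate, as the note says.
SET GLOBAL tidb_max_dist_task_nodes = 5;

-- The default -1 restores automatic mode, where the limit is computed as
-- min(3, tikv_nodes / 3); for example, 9 TiKV nodes give min(3, 3) = 3.
-- SET GLOBAL tidb_max_dist_task_nodes = -1;
```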