diff --git a/docs-2.0/2.quick-start/3.1add-storage-hosts.md b/docs-2.0/2.quick-start/3.1add-storage-hosts.md index 8199037ec6b..b3bd0e0b473 100644 --- a/docs-2.0/2.quick-start/3.1add-storage-hosts.md +++ b/docs-2.0/2.quick-start/3.1add-storage-hosts.md @@ -21,21 +21,30 @@ You have [connected to NebulaGraph](3.connect-to-nebula-graph.md). ADD HOSTS : [,: ...]; ``` - - Example: + {{ent.ent_begin}} + + If enabling the [Zone](../../4.deployment-and-installation/5.zone.md) feature, you still need to specify `INTO ZONE ` to add Storage hosts, otherwise the Storage hosts will fail to be added. ```ngql - nebula> ADD HOSTS 192.168.10.100:9779, 192.168.10.101:9779, 192.168.10.102:9779; + ADD HOSTS : [,: ...] INTO ZONE ; + ``` + + Example: + + ```ngql + nebula> ADD HOSTS 192.168.8.111:9779,192.168.8.112:9779 INTO ZONE az1; ``` + {{ent.ent_end}} !!! caution - Make sure that the IP you added is the same as the IP configured for `local_ip` in the `nebula-storaged.conf` file. Otherwise, the Storage service will fail to start. For information about configurations, see [Configurations](../5.configurations-and-logs/1.configurations/1.configurations.md). + Make sure that the IP you added is the same as the IP configured for `local_ip` in the `nebula-storaged.conf` file. Otherwise, the Storage service will fail to start. For information about configurations, see [Configurations](../5.configurations-and-logs/1.configurations/1.configurations.md). 2. Check the status of the hosts to make sure that they are all online. diff --git a/docs-2.0/2.quick-start/6.cheatsheet-for-ngql.md b/docs-2.0/2.quick-start/6.cheatsheet-for-ngql.md index b2a4df93a6f..ab3bef92392 100644 --- a/docs-2.0/2.quick-start/6.cheatsheet-for-ngql.md +++ b/docs-2.0/2.quick-start/6.cheatsheet-for-ngql.md @@ -435,10 +435,9 @@ |Syntax|Description| |-|-| |`BALANCE LEADER`| Starts a job to balance the distribution of all the storage leaders in graph spaces. It returns the job ID.| - + |`BALANCE DATA`| Starts a job to balance the distribution of all the storage partitions in graph spaces. It returns the job ID. **For enterprise edition only.**| + |`BALANCE DATA REMOVE [,: ...]`| Starts a job to migrate the specified storage partitions. The default port is `9779`. **For enterprise edition only.**| + |`BALANCE DATA IN ZONE [REMOVE : [,: ...]]`| Starts a job to balance the distribution of storage partitions in each zone in the current graph space. It returns the job ID. You can use the `REMOVE` option to specify the partitions of storage services that you want to migrate to other storage services. **For enterprise edition only.**| * [Job statements](../3.ngql-guide/4.job-statements.md) @@ -448,14 +447,11 @@ | `SUBMIT JOB COMPACT` | Triggers the long-term RocksDB `compact` operation. | | `SUBMIT JOB FLUSH` | Writes the RocksDB memfile in the memory to the hard disk. | | `SUBMIT JOB STATS` | Starts a job that makes the statistics of the current graph space. Once this job succeeds, you can use the `SHOW STATS` statement to list the statistics. | + | `SUBMIT JOB BALANCE DATA IN ZONE`| Starts a job to balance partition replicas within each Zone. **For enterprise edition only.**| | `SHOW JOB ` | Shows the information about a specific job and all its tasks in the current graph space. The Meta Service parses a `SUBMIT JOB` request into multiple tasks and assigns them to the nebula-storaged processes. | | `SHOW JOBS` | Lists all the unexpired jobs in the current graph space. | | `STOP JOB` | Stops jobs that are not finished in the current graph space. 
| | `RECOVER JOB` | Re-executes the failed jobs in the current graph space and returns the number of recovered jobs. | - * [Kill queries](../3.ngql-guide/17.query-tuning-statements/6.kill-query.md) diff --git a/docs-2.0/20.appendix/learning-path.md b/docs-2.0/20.appendix/learning-path.md index 869d6d9698b..6c4969601bb 100644 --- a/docs-2.0/20.appendix/learning-path.md +++ b/docs-2.0/20.appendix/learning-path.md @@ -137,13 +137,14 @@ After completing the NebulaGraph learning path, taking [NebulaGraph Certificatio | Document | | ------------------------------------------------------------ | |[Backup&Restore](../backup-and-restore/nebula-br/1.what-is-br.md)| - + |[Zone](../4.deployment-and-installation/5.zone.md)| + {{ent.ent_end}} - SSL encryption diff --git a/docs-2.0/3.ngql-guide/4.job-statements.md b/docs-2.0/3.ngql-guide/4.job-statements.md index e865e44366e..a006f771fdf 100644 --- a/docs-2.0/3.ngql-guide/4.job-statements.md +++ b/docs-2.0/3.ngql-guide/4.job-statements.md @@ -52,6 +52,62 @@ nebula> SUBMIT JOB BALANCE DATA REMOVE 192.168.8.100:9779; +------------+ ``` +## SUBMIT JOB BALANCE DATA IN ZONE + +!!! enterpriseonly + + Only available for the NebulaGraph Enterprise Edition. + +`SUBMIT JOB BALANCE DATA IN ZONE` statement starts a job to balance partition replicas within each Zone. It returns the job ID. + + + +For details on zones, see [Manage Zones](../4.deployment-and-installation/5.zone.md). + +For example: + +```ngql +# Balance partition replicas within each Zone in the current space. +nebula> SUBMIT JOB BALANCE DATA IN ZONE; ++------------+ +| New Job Id | ++------------+ +| 25 | ++------------+ +``` + + + +## SUBMIT JOB BALANCE DATA IN ZONE REMOVE + +!!! enterpriseonly + + Only available for the NebulaGraph Enterprise Edition. + +`SUBMIT JOB BALANCE DATA IN ZONE REMOVE` statement starts a job to clear the partitions on specified Storage nodes in Zones in the current graph space. It returns the job ID. Before clearing the Storage nodes, make sure that the remaining Storage nodes in Zones can meet the set number of replicas. For example, if the number of replicas is set to 3, make sure that the remaining Storage nodes are greater than or equal to 3 before executing this command. + +For details on Zones, see [Manage Zones](../4.deployment-and-installation/5.zone.md). + +For example: + +```ngql +# Clear the partitions on the specified Storage nodes. +nebula> SUBMIT JOB BALANCE DATA IN ZONE REMOVE 192.168.10.101:9779,192.168.10.102:9779; ++------------+ +| New Job Id | ++------------+ +| 26 | ++------------+ +``` + {{ ent.ent_end }} ## SUBMIT JOB BALANCE LEADER @@ -70,14 +126,14 @@ nebula> SUBMIT JOB BALANCE LEADER; ``` !!! caution @@ -58,6 +55,17 @@ CREATE SPACE [IF NOT EXISTS] ( `graph_space_name`, `partition_num`, `replica_factor`, `vid_type`, and `comment` cannot be modified once set. To modify them, drop the current working graph space with [`DROP SPACE`](./5.drop-space.md) and create a new one with `CREATE SPACE`. +{{ent.ent_begin}} + +When creating a graph space, the system will automatically recognize the value of `--zone_list` in the Meta configuration file, which determines whether the Zone feature is enabled: + + - If the value is empty, it means the Zone feature is not enabled. In this case, the graph space will be created without specifying Zones. 
+ - If the value is not empty, and the number of Zones in `--zone_list` is equal to the number of replicas specified by `replica_factor`, the replicas of each partition in the graph space will be evenly distributed across the Zones specified in `--zone_list`. If the specified number of replicas is not equal to the number of Zones, the creation of the graph space will fail. + +For more details on Zones, see [Manage Zones](../../4.deployment-and-installation/5.zone.md). + +{{ent.ent_end}} + ### Clone graph spaces ```ngql diff --git a/docs-2.0/3.ngql-guide/9.space-statements/4.describe-space.md b/docs-2.0/3.ngql-guide/9.space-statements/4.describe-space.md index b0ade55302d..63839b2ee63 100644 --- a/docs-2.0/3.ngql-guide/9.space-statements/4.describe-space.md +++ b/docs-2.0/3.ngql-guide/9.space-statements/4.describe-space.md @@ -23,14 +23,18 @@ nebula> DESCRIBE SPACE basketballplayer; +----+--------------------+------------------+----------------+---------+------------+--------------------+---------+ ``` - +{{ent.ent_end}} diff --git a/docs-2.0/4.deployment-and-installation/2.compile-and-install-nebula-graph/deploy-nebula-graph-cluster.md b/docs-2.0/4.deployment-and-installation/2.compile-and-install-nebula-graph/deploy-nebula-graph-cluster.md index 45109338d7b..4603060c98a 100644 --- a/docs-2.0/4.deployment-and-installation/2.compile-and-install-nebula-graph/deploy-nebula-graph-cluster.md +++ b/docs-2.0/4.deployment-and-installation/2.compile-and-install-nebula-graph/deploy-nebula-graph-cluster.md @@ -264,6 +264,18 @@ Users can refer to the content of the following configurations, which only show --port=9779 ``` +{{ent.ent_begin}} +### (Optional) Configure Zones + +!!! enterpriseonly + + This section is only applicable to NebulaGraph Enterprise. + +A Zone is a logical rack for Storage nodes. You can set up Zones and add specified Storage nodes into these Zones. By configuring the Graph service to directionally access a given Zone, resource isolation and directed data access can be achieved, thereby reducing traffic consumption and cutting costs. + +For details, see [Manage Zones](../../4.deployment-and-installation/5.zone.md). +{{ent.ent_end}} + ### Start the cluster Start the corresponding service on **each machine**. Descriptions are as follows. diff --git a/docs-2.0/4.deployment-and-installation/5.zone.md b/docs-2.0/4.deployment-and-installation/5.zone.md index 68afffeec8c..c4199eb09f1 100644 --- a/docs-2.0/4.deployment-and-installation/5.zone.md +++ b/docs-2.0/4.deployment-and-installation/5.zone.md @@ -1,67 +1,86 @@ -# Manage Zone +# Manage Zones + +!!! enterpriseonly + + This feature is only available in the NebulaGraph Enterprise Edition. The Zone is a logical rack of storage nodes in a NebulaGraph cluster. It divides multiple storage nodes into manageable logical areas to achieve resource isolation. At the same time, you can control the Graph service to access the replica data in the specified Zone to reduce traffic consumption and improve access efficiency. This article describes how to use the Zone feature. ## Principle -Storage nodes can be added to a Zone. When creating a graph space, you can specify a list of Zones. The graph space is created and stored on the storage nodes added in these Zones. Partition replicas are evenly stored in the Zones, as shown in the following figure. +Within NebulaGraph, you can set up multiple Zones, with each Zone containing one or more Storage nodes. 
When creating a graph space, the system automatically recognizes these set Zones and stores graph space data on the Storage nodes within these Zones. + +It's important to note that when creating a graph space, you need to specify the number of replicas for data partitioning. At this point, **the specified number of partition replicas and the number of set Zones must be equal, otherwise, the graph space cannot be created**. This is because NebulaGraph evenly distributes the replicas of every graph space partition across these set Zones. + +Since each Zone contains a complete set of graph space data partitions, at least one Storage node is required within each Zone to store these data partitions. + +The partition replicas in NebulaGraph achieve strong consistency through the [Raft](../1.introduction/3.nebula-graph-architecture/4.storage-service.md#raft_1) protocol. It's recommended to use an odd number of partition replicas, and therefore, it's also suggested to set an odd number of Zones. + +Taking the following picture as an example, when creating a graph space (S1), the data is partitioned into 3 partitions, with 3 replicas for each partition, and the number of Zones is also 3. Six machines hosting the Storage service are paired up and added to these 3 Zones. When creating the graph space, NebulaGraph stores the 3 replicas of each partition evenly across zone1, zone2, and zone3, and each Zone contains a complete set of graph space data partitions (Part1, Part2, and Part3). + +example_for_zones -![Zone](https://docs-cdn.nebula-graph.com.cn/figures/zone1.png) +To reduce cost of cross-Zone network traffic, and increase data transfer speed (Intra-zone network usually has a lower latency than inter-zone network), you can configure the Graph service to prioritize intra-zone data access. Each Graphd will then prioritize to access the partition replica in the same zone as specified by Graphd if there is any. As an example, suppose Graphd A and Graphd B are located in zone1, Graphd C and Graphd D are in zone2, and Graphd E is in zone3. You can configure Graphd A and Graphd B to prioritize accessing data in zone1, Graphd C and Graphd D to prioritize accessing data in zone2, and Graphd E to prioritize accessing data in zone3. This helps reduce the cost of cross-zone network traffic and improves data transfer speed. -In the above figure, the six machines with the running storage service are grouped in pairs and added to three Zones. When creating a graph space S1 with a partition replica number of 3 in these three Zones, these partition replicas are evenly stored in Zone1 ~ Zone3. +example_for_intra_zone + + ## Scenarios - Resource isolation. You can create a graph space on specified storage nodes to achieve resource isolation. - Rolling upgrade. You need to stop one or more servers to update them, and then put them into use again until all servers in the cluster are updated to the new version. -- Cost saving. Allocate different graph spaces to different Zones, and control the client to access the replica data in the specified Zone to reduce traffic consumption and improve access efficiency. - +- Cost saving. Allocate graph space data to different Zones, and control the client to access the replica data in the specified Zone to reduce traffic consumption and improve access efficiency. ## Notes -- Make sure that the cluster is empty before enabling the Zone feature. To enable the Zone feature, see **Enable Zone** below. 
-- A storage node can only belong to one Zone, but a Zone can contain multiple different storage nodes. -- Deleting a Zone is not supported. -- Modifying the name of a Zone is not supported. +- Before enabling the Zone feature, clear any existing data in the cluster. See **Enabling Zone** for details. +- Each Storage node must belong to one, and only one, Zone. However, a Zone can have multiple Storage nodes. Storage nodes should outnumber or equal Zones. +- The number of Zones must equal the number of partition replicas; otherwise, the graph space cannot be created. +- The number of Zones is recommended to be odd. +- Adjusting the number of Zones isn't allowed. +- Zone name modifications are unsupported. ## Enable Zone -1. In the configuration file `nebula-metad.conf` of the Meta service, operate as follows: - - 1. Modify the value of `--enable_zones` to `true` to enable the Zone feature. The default value is `false`. - 2. Manually add the `--zone_list` field and set its value to the name(s) of the Zone(s) to be added, such as `--assigned_zone=zone1, zone2, zone3`. +1. In the configuration file `nebula-metad.conf` of the Meta service, set `--zone_list` to Zone names to be added, such as `--zone_list=zone1, zone2, zone3`. !!! danger - Once the value of `--zone_list` is configured and the Meta service is started, it cannot be modified, otherwise, the Meta service will fail to restart. + Once the value of `--zone_list` is configured and the Meta service is started, it cannot be modified, otherwise, the Meta service will fail to restart. !!! note - If the name of a Zone contains special characters (excluding underscores), reserved keywords, or starts with a number, you need to enclose the Zone name in backticks (`) when specifying the Zone name in a query statement; the Zone name cannot contain English periods (.); multiple Zone names are separated by commas (,). + - The number of Zones specified in `--zone_list` is recommended to be odd and must be less than or equal to the number of Storage nodes. When `--zone_list` is empty, it indicates that the Zone feature is disabled. + - Consider the replica settings when setting the number of Zones, since the number of Zones should match the replica count. For example, with 3 replicas, you must have 3 Zones. + - If the name of a Zone contains special characters (excluding underscores), reserved keywords, or starts with a number, you need to enclose the Zone name in backticks (`) when specifying the Zone name in a query statement; the Zone name cannot contain English periods (.); multiple Zone names must be separated by commas (,). For more information about the Meta configuration file, see [Meta service configuration](../5.configurations-and-logs/1.configurations/2.meta-config.md). 2. Restart the Meta service. -## Specify the Zone to be accessed by the Graph service +## Specify intra Zone data access 1. Enable the Zone feature. For details, see **Enable Zone** above. 2. In the configuration file `nebula-graphd.conf` of the Graph service, add the following configuration: - 1. Add the `--assigned_zone` field and set its value to the name of the Zone to be accessed, such as `--assigned_zone=zone1`. + 1. Set the `--assigned_zone` to the name of the Zone where the Graphd is assigned, such as `--assigned_zone=zone1`. !!! note - - Different Graph services can set different values for `--assigned_zone`, but the value of `--assigned_zone` must be one of the values in `--zone_list`. 
- - The value of `--assigned_zone` is a string and does not support English commas (,). - - When the value of `--assigned_zone` is empty, it indicates that all Zones are accessed. + - Different Graph services can set different values for `--assigned_zone`, but the value of `--assigned_zone` must be one of the values in `--zone_list`. In production, it is recommended to use the actual zone that a Graphd locates to reduce management complexity. Of course, it must be within the `zone_list`. Otherwise, intra zone reading may not take effect. + - The value of `--assigned_zone` is a string and does not support English commas (,). + - When `--assigned_zone` is empty, it means reading from leader replicas. - 2. Set `--enable_intra_zone_routing=true` to enable the function of accessing the specified Zone. + 2. Set `--prioritize_intra_zone_reading` to `true` to prioritize intra zone data reading. When reading fails in the Zone specified by `--assigned_zone`, an error occurs depending on the value of `stick_to_intra_zone_on_failure`. !!! caution - It is recommended that the values of `--enable_intra_zone_routing` in different Graph services be consistent, otherwise, the load of Storage nodes will be unbalanced and unknown risks will occur. + It is recommended that the values of `--prioritize_intra_zone_reading` in different Graph services be consistent, otherwise, the load of Storage nodes will be unbalanced and unknown risks will occur. For details on the Graph configuration, see [Graph service configuration](../5.configurations-and-logs/1.configurations/3.graph-config.md). @@ -83,14 +102,12 @@ nebula> SHOW ZONES; +--------+-----------------+------+ | "az1" | "192.168.8.111" | 9779 | | "az1" | "192.168.8.112" | 9779 | -| "az2" | "" | 0 | -| "az3" | "" | 0 | +| "az2" | "192.168.8.113" | 9779 | +| "az3" | "192.168.8.114" | 9779 | +--------+-----------------+------+ ``` -!!! note - - Run `SHOW ZONES` in the current graph space to view all Zone information, instead of the Zone information of the current graph spaces. The Zone information includes the name of the Zone, the IP address and the port number of the storage node in the Zone. +Run `SHOW ZONES` in the current graph space to view all Zone information. The Zone information includes the name of the Zone, the IP address (or domain name) and the port number of the storage node in the Zone. ### View the specified Zone @@ -107,40 +124,19 @@ nebula> DESC ZONE az1 | Hosts | Port | +-----------------+------+ | "192.168.8.111" | 7779 | -| "192.168.8.111" | 9779 | +| "192.168.8.112" | 9779 | +-----------------+------+ ``` -### Create a space in the specified Zone +### Create a space in the specified Zones -```ngql -CREATE SPACE IF NOT EXISTS ( - [partition_num = ,] - [replica_factor = ,] - vid_type = {FIXED_STRING() | INT[64]} - ) - [COMMENT = ''] - [ON ]; -``` - -!!! note - - - The Zone specified when creating a graph space must be one or more of the values in the `--zone_list` of the Meta configuration file, otherwise the graph space cannot be created. - - The Zone specified when creating a graph space must contain at least one Storage node, otherwise the graph space cannot be created. - - The number of partition replicas specified when creating a graph space must be less than or equal to the number of Zones, otherwise, the graph space cannot be created. +The syntax for creating a graph space within a Zone is the same as in [Creating Graph Space](../3.ngql-guide/9.space-statements/1.create-space.md). 
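+
+For example, assuming the Meta configuration sets `--zone_list=zone1,zone2,zone3` (so `replica_factor` must be 3), a graph space is created without any Zone clause. This is an illustrative sketch; `my_space_1` is the space name used in the examples below:
+
+```ngql
+nebula> CREATE SPACE IF NOT EXISTS my_space_1 (replica_factor=3, vid_type=FIXED_STRING(30));
+```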
+However, during graph space creation, the system automatically recognizes the `--zone_list` value from the Meta configuration file. If this value is not empty and the number of Zones matches the partition replica count specified by `replica_factor`, the graph space's replicas will be evenly distributed across the Zones in `--zone_list`. If the specified replica count doesn't match the number of Zones, graph space creation will fail. -!!! caution +If the value of `--zone_list` is empty, the Zone feature is not enabled, and the graph space will be created without specifying Zones. - It is not recommended to create a graph space without specifying a Zone when the Zone feature is enabled and the Graph service is configured to access the specified Zone (set `--assigned_zone`). This will result in the inability to query the replica data distributed in other Zones because the Graph service will only access the replica data in the specified Zone, and creating a graph space without specifying a Zone will result in the distribution of replicas in other Zones. - -For example: - -```ngql -nebula> CREATE SPACE IF NOT EXISTS my_space_1 (vid_type=FIXED_STRING(30)) on az1 -``` - -### View the Zone to which the specified graph space belongs +### Check the Zones for the specified graph space ```ngql DESC SPACE ; @@ -160,19 +156,42 @@ nebula> DESC SPACE my_space_1 ### Add Storage nodes to the specified Zone ```ngql -ADD HOSTS : [,: ...] INTO ZONE ; +ADD HOSTS : [,: ...] INTO ZONE ; ``` +- After enabling the Zone feature, you must include the `INTO ZONE` clause when executing the `ADD HOSTS` command; otherwise, adding a Storage node will fail. +- A Storage node can belong to only one Zone, but a single Zone can encompass multiple different Storage nodes. + + For example: ```ngql nebula> ADD HOSTS 192.168.8.111:9779,192.168.8.112:9779 INTO ZONE az1; ``` -### Migrate data from the Storage nodes in the specified Zone to other Storage nodes +### Balance the Zone replicas ```ngql -BALANCE IN ZONE REMOVE : [,: ...] +BALANCE DATA IN ZONE; +``` + +!!! note + + Specify a space before executing this command. + +After enabling the Zone feature, run `BALANCE DATA IN ZONE` to balance the partition replicas within each Zone. + +For example: + +```ngql +nebula> USE my_space_1; +nebula> BALANCE DATA IN ZONE; +``` + +### Migrate partitions from the Storage nodes in the specified Zones to other Storage nodes + +```ngql +BALANCE DATA IN ZONE REMOVE : [,: ...] ``` !!! note @@ -185,7 +204,7 @@ For example: ```ngql nebula> USE my_space_1; -nebula> BALANCE IN ZONE REMOVE 192.168.8.111:9779; +nebula> BALANCE DATA IN ZONE REMOVE 192.168.8.111:9779; +------------+ | New Job Id | +------------+ @@ -209,7 +228,8 @@ DROP HOSTS : [,: ...]; !!! note - You cannot directly drop a Storage node that is in use. You need to first drop the associated graph space before dropping the Storage nodes. See [drop space](../3.ngql-guide/9.space-statements/5.drop-space.md) for details. + - You cannot directly drop a Storage node that is in use. You need to first drop the associated graph space before dropping the Storage nodes. See [drop space](../3.ngql-guide/9.space-statements/5.drop-space.md) for details. + - Make sure the number of remaining Storage nodes outnumbers or equals that of Zones after removing a node, otherwise, the graph space will be unavailable. For example: @@ -217,20 +237,5 @@ For example: nebula> DROP HOSTS 192.168.8.111:9779; ``` -### Balance the data in the specified Zone - -```ngql -BALANCE IN ZONE; -``` -!!! 
note - - Specify a space before executing this command. - -For example: - -```ngql -nebula> USE my_space_1; -nebula> BALANCE IN ZONE; -``` diff --git a/docs-2.0/4.deployment-and-installation/manage-storage-host.md b/docs-2.0/4.deployment-and-installation/manage-storage-host.md index 168ff0a8cec..d5cc2d633d0 100644 --- a/docs-2.0/4.deployment-and-installation/manage-storage-host.md +++ b/docs-2.0/4.deployment-and-installation/manage-storage-host.md @@ -29,6 +29,12 @@ nebula> ADD HOSTS "": [,"": ...]; - Ensure that the storage host to be added is not used by any other cluster, otherwise, the storage adding operation will fail. +{{ent.ent_begin}} + +When adding a Storage host to a cluster with the Zone feature enabled, you must specify the `INTO ZONE` option; otherwise, the addition of the Storage node will fail. For more details, see [Managing Zones](5.zone.md). + +{{ent.ent_end}} + ## Drop Storage hosts Delete the Storage hosts from cluster. diff --git a/docs-2.0/5.configurations-and-logs/1.configurations/2.meta-config.md b/docs-2.0/5.configurations-and-logs/1.configurations/2.meta-config.md index eb72ccaab3e..4cf4505bbfb 100644 --- a/docs-2.0/5.configurations-and-logs/1.configurations/2.meta-config.md +++ b/docs-2.0/5.configurations-and-logs/1.configurations/2.meta-config.md @@ -88,7 +88,7 @@ For all parameters and their current values, see [Configurations](1.configuratio | Name | Predefined Value | Description |Whether supports runtime dynamic modifications| | :------------------------- | :-------------------- | :---------------------------------------------------------------------------- |:----------------- | -|`default_parts_num` | `100` | Specifies the default partition number when creating a new graph space. | No| +|`default_parts_num` | `10` | Specifies the default partition number when creating a new graph space. | No| |`default_replica_factor` | `1` | Specifies the default replica number when creating a new graph space. | No| ## RocksDB options configurations @@ -111,4 +111,10 @@ For all parameters and their current values, see [Configurations](1.configuratio |`ng_black_box_dump_period_seconds` |`5` |The time interval for Nebula-BBox to collect metric data. Unit: Second.| No| |`ng_black_box_file_lifetime_seconds` |`1800` |Storage time for Nebula-BBox files generated after collecting metric data. Unit: Second.| Yes| +## Zone configurations + +| Name | Predefined Value | Description |Whether supports runtime dynamic modifications| +| :-------------------- | :----- | :------------------- | :--------------------- | +| `zone_list` | Empty | A list of Zone names. When the value is not empty, the Zone feature is enabled. For details, see [Manage Zones](../../4.deployment-and-installation/5.zone.md).| No | + {{ ent.ent_end }} diff --git a/docs-2.0/5.configurations-and-logs/1.configurations/3.graph-config.md b/docs-2.0/5.configurations-and-logs/1.configurations/3.graph-config.md index 46e9133f92b..d48d507df53 100644 --- a/docs-2.0/5.configurations-and-logs/1.configurations/3.graph-config.md +++ b/docs-2.0/5.configurations-and-logs/1.configurations/3.graph-config.md @@ -191,4 +191,13 @@ For more information about audit log, see [Audit log](../2.log-management/audit- |`enable_http2_routing` |`false` |Whether to enable HTTP2 for RPC communications. Enabling it will slightly affect performance.|Yes| |`stream_timeout_ms` |`30000` | The timeout for the HTTP stream. 
Unit: ms.|Yes|
+
+## Zone configurations
+
+| Name | Predefined Value | Description | Whether supports runtime dynamic modifications |
+| :-------------------------------- | :------------ | :---------------------------------------------------------------------------- | :------------------------------------- |
+| `assigned_zone` | Empty | When the Zone feature is enabled, specifies the Zone where the graphd is located. See [Managing Zones](../../4.deployment-and-installation/5.zone.md) for details. | No |
+| `prioritize_intra_zone_reading` | `false` | When set to `true`, queries are preferentially sent to the Storage services in the same Zone. If intra-zone reading fails, the value of `stick_to_intra_zone_on_failure` determines whether requests fall back to the leader partition replicas.<br>When set to `false`, data is read from the leader partition replicas. | No |
+| `stick_to_intra_zone_on_failure` | `false` | When set to `true`, requests stick to intra-zone routing even if the storaged hosting the requested partition replica cannot be found in the same Zone.<br>
When set to `false`, sending requests to leader partition replicas. | No | + {{ ent.ent_end }} diff --git a/docs-2.0/8.service-tuning/load-balance.md b/docs-2.0/8.service-tuning/load-balance.md index 74471ea2699..4ef2d358969 100644 --- a/docs-2.0/8.service-tuning/load-balance.md +++ b/docs-2.0/8.service-tuning/load-balance.md @@ -9,15 +9,18 @@ You can use the `SUBMIT JOB BALANCE` statement to balance the distribution of pa {{ ent.ent_begin }} ## Balance partition distribution +The `SUBMIT JOB BALANCE DATA` command starts a job to balance the distribution of storage partitions in the current graph space by creating and executing a set of subtasks. + !!! enterpriseonly Only available for the NebulaGraph Enterprise Edition. !!! note - If the current graph space already has a `SUBMIT JOB BALANCE DATA` job in the `FAILED` status, you can restore the `FAILED` job, but cannot start a new `SUBMIT JOB BALANCE DATA` job. If the job continues to fail, manually stop it, and then you can start a new one. + - If the current graph space already has a `SUBMIT JOB BALANCE DATA` job in the `FAILED` status, you can restore the `FAILED` job, but cannot start a new `SUBMIT JOB BALANCE DATA` job. If the job continues to fail, manually stop it, and then you can start a new one. + - The following example introduces the methods of balanced partition distribution for storage nodes with the Zone feature disabled. When the Zone feature is enabled, balanced partition distribution is performed across zones by specifying the `IN ZONE` clause. For details, see [Manage Zones](../4.deployment-and-installation/5.zone.md). + -The `SUBMIT JOB BALANCE DATA` commands starts a job to balance the distribution of storage partitions in the current graph space by creating and executing a set of subtasks. ### Examples @@ -103,6 +106,8 @@ To restore a balance job in the `FAILED` or `STOPPED` status, run `RECOVER JOB < To migrate specified partitions and scale in the cluster, you can run `SUBMIT JOB BALANCE DATA REMOVE [,: ...]`. +To migrate specified partitions for Zone-enabled clusters, you need to add the `IN ZONE` clause. For example, `SUBMIT JOB BALANCE DATA IN ZONE REMOVE [,: ...]`. For details, see [Manage Zones](../4.deployment-and-installation/5.zone.md). + For example, to migrate the partitions in server `192.168.8.100:9779`, the command as following: ```ngql @@ -118,134 +123,11 @@ nebula> SHOW HOSTS; !!! note - This command migrates partitions to other storage hosts but does not delete the current storage host from the cluster. To delete the Storage hosts from cluster, see [Manage Storage hosts](../4.deployment-and-installation/manage-storage-host.md). + This command migrates partitions to other storage hosts but does not delete the current storage host from the cluster. To delete the Storage hosts from a cluster, see [Manage Storage hosts](../4.deployment-and-installation/manage-storage-host.md). {{ ent.ent_end }} - ## Balance leader distribution To balance the raft leaders, run `SUBMIT JOB BALANCE LEADER`. It will start a job to balance the distribution of all the storage leaders in all graph spaces. 
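+
+For example (a minimal usage sketch; like the other `SUBMIT JOB` statements above, it returns a job ID):
+
+```ngql
+nebula> SUBMIT JOB BALANCE LEADER;
+```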
diff --git a/mkdocs.yml b/mkdocs.yml index a29d033db3c..15dff7b18d8 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -302,13 +302,13 @@ nav: - Manage licenses: 9.about-license/4.manage-license.md #ent - Quick start: - - Deploy NebulaGraph using Docker: 2.quick-start/1.quick-start-workflow.md - - Deploy NebulaGraph on-premise: - - Step 1 Install NebulaGraph: 2.quick-start/2.install-nebula-graph.md - - Step 2 Manage NebulaGraph Service: 2.quick-start/5.start-stop-service.md - - Step 3 Connect to NebulaGraph: 2.quick-start/3.connect-to-nebula-graph.md - - Step 4 Register the Storage Service: 2.quick-start/3.1add-storage-hosts.md - - Step 5 Use nGQL (CRUD): 2.quick-start/4.nebula-graph-crud.md +# - Deploy NebulaGraph using Docker: 2.quick-start/1.quick-start-workflow.md +# - Deploy NebulaGraph on-premise: + - Step 1 Install NebulaGraph: 2.quick-start/2.install-nebula-graph.md + - Step 2 Manage NebulaGraph Service: 2.quick-start/5.start-stop-service.md + - Step 3 Connect to NebulaGraph: 2.quick-start/3.connect-to-nebula-graph.md + - Step 4 Register the Storage Service: 2.quick-start/3.1add-storage-hosts.md + - Step 5 Use nGQL (CRUD): 2.quick-start/4.nebula-graph-crud.md - nGQL cheatsheet: 2.quick-start/6.cheatsheet-for-ngql.md - nGQL guide: @@ -475,10 +475,10 @@ nav: - Local multi-node installation: 4.deployment-and-installation/2.compile-and-install-nebula-graph/deploy-nebula-graph-cluster.md - Install using Docker Compose: 4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md - Install with ecosystem tools: 4.deployment-and-installation/2.compile-and-install-nebula-graph/6.deploy-nebula-graph-with-peripherals.md - - Manage Service: 4.deployment-and-installation/manage-service.md - - Connect to Service: 4.deployment-and-installation/connect-to-nebula-graph.md - - Manage Storage host: 4.deployment-and-installation/manage-storage-host.md -#ent - Manage zone: 4.deployment-and-installation/5.zone.md + - Manage services: 4.deployment-and-installation/manage-service.md + - Connect to services: 4.deployment-and-installation/connect-to-nebula-graph.md + - Manage Storage hosts: 4.deployment-and-installation/manage-storage-host.md + - Manage Zones: 4.deployment-and-installation/5.zone.md - Upgrade: - Upgrade NebulaGraph Community to the latest version: 4.deployment-and-installation/3.upgrade-nebula-graph/upgrade-nebula-graph-to-latest.md # - Upgrade NebulaGraph from v3.x to v3.4 (Community Edition): 4.deployment-and-installation/3.upgrade-nebula-graph/upgrade-nebula-from-300-to-latest.md
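Taken together, the Zone-related settings documented above amount to a minimal configuration sketch like the following (the Zone names and the assigned Zone are illustrative; `--zone_list` belongs in `nebula-metad.conf`, the other flags in `nebula-graphd.conf`):

```bash
# nebula-metad.conf: enable the Zone feature with three Zones.
--zone_list=zone1,zone2,zone3

# nebula-graphd.conf: assign this graphd to zone1 and prefer intra-zone reads.
--assigned_zone=zone1
--prioritize_intra_zone_reading=true
--stick_to_intra_zone_on_failure=false
```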