diff --git a/docs-2.0/nebula-operator/1.introduction-to-nebula-operator.md b/docs-2.0/nebula-operator/1.introduction-to-nebula-operator.md index 2523dd99d4d..3d6223193b1 100644 --- a/docs-2.0/nebula-operator/1.introduction-to-nebula-operator.md +++ b/docs-2.0/nebula-operator/1.introduction-to-nebula-operator.md @@ -19,6 +19,7 @@ The following features are already available in NebulaGraph Operator: - **Deploy and uninstall clusters**: NebulaGraph Operator simplifies the process of deploying and uninstalling clusters for users. NebulaGraph Operator allows you to quickly create, update, or delete a NebulaGraph cluster by simply providing the corresponding CR file. For more information, see [Deploy NebulaGraph Clusters with Kubectl](3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md) or [Deploy NebulaGraph Clusters with Helm](3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md). {{ent.ent_begin}} +- **Manage Zones**: Supports dividing multiple storage hosts into managed zones and creating graph spaces on specified storage hosts to achieve resource isolation. For more information, see [Create clusters using Zones with kubectl](3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md#create_clusters) or [Create clusters using Zones with Helm](3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md). - **Scale clusters**: NebulaGraph Operator calls NebulaGraph's native scaling interfaces in a control loop to implement the scaling logic. You can simply perform scaling operations with YAML configurations and ensure the stability of data. For more information, see [Scale clusters with Kubectl](3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md) or [Scale clusters with Helm](3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md). @@ -40,7 +41,7 @@ NebulaGraph Operator does not support the v1.x version of NebulaGraph. NebulaGra | NebulaGraph | NebulaGraph Operator | | ------------- | -------------------- | -| 3.5.x | 1.5.0 | +| 3.5.x | 1.5.0, 1.6.0 | | 3.0.0 ~ 3.4.1 | 1.3.0, 1.4.0 ~ 1.4.2 | | 3.0.0 ~ 3.3.x | 1.0.0, 1.1.0, 1.2.0 | | 2.5.x ~ 2.6.x | 0.9.0 | diff --git a/docs-2.0/nebula-operator/10.backup-restore-using-operator.md b/docs-2.0/nebula-operator/10.backup-restore-using-operator.md index 8376572392b..a00bad071b7 100644 --- a/docs-2.0/nebula-operator/10.backup-restore-using-operator.md +++ b/docs-2.0/nebula-operator/10.backup-restore-using-operator.md @@ -6,6 +6,10 @@ This article introduces how to back up and restore data of the NebulaGraph clust This feature is only for the enterprise edition NebulaGraph clusters on Kubernetes. +!!! compatibility + + Make sure that the [Zone](../4.deployment-and-installation/5.zone.md) feature is not enabled in the NebulaGraph cluster before using the backup and restore in Operator, otherwise the backup and restore will fail. For details on Zones, see [Cluster with Zones](3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md#create_clusters). + ## Overview [NebulaGraph BR (Enterprise Edition), short for BR-ent](../backup-and-restore/nebula-br-ent/1.br-ent-overview.md) is a command line tool for data backup and recovery of NebulaGraph enterprise edition. NebulaGraph Operator utilizes the BR-ent tool to achieve data backup and recovery for NebulaGraph clusters on Kubernetes. 
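Before triggering a backup or restore, it can help to confirm that the Zone feature is disabled, as required by the compatibility note above. The following is a minimal sketch, assuming the NebulaCluster resource is named `nebula` and lives in the `default` namespace; an empty result indicates that `zone_list` is not set and the cluster is not using Zones.

```bash
# A sketch: check whether the Zone feature is enabled on the cluster.
# Assumes the NebulaCluster CR is named "nebula" in the "default" namespace.
# Empty output means zone_list is not configured, so backup and restore can be used.
kubectl get nebulacluster nebula -n default -o jsonpath='{.spec.metad.config.zone_list}'
```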
@@ -14,7 +18,6 @@ When backing up data, NebulaGraph Operator creates a job to back up the data in When restoring data, NebulaGraph Operator checks the specified backup NebulaGraph cluster for existence, and whether the access to remote storage is normally based on the information defined in the NebulaRestore resource object. It then creates a new cluster and restores the backup data to the new NebulaGraph cluster. For more information, see [restore flowchart](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/doc/user/br_guide.md#restore-nebulagraph-cluster). - ## Prerequisites To backup and restore data using NebulaGraph Operator, the following conditions must be met: diff --git a/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md b/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md index cd42eec5fda..41e8199ef7b 100644 --- a/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md +++ b/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md @@ -51,60 +51,291 @@ The following example shows how to create a NebulaGraph cluster by creating a cl - For a NebulaGraph Community cluster - Create a file named `apps_v1alpha1_nebulacluster.yaml`. For the file content, see the [sample configuration](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/config/samples/apps_v1alpha1_nebulacluster.yaml). - - The parameters in the file are described as follows: - - | Parameter | Default value | Description | - | :---- | :--- | :--- | - | `metadata.name` | - | The name of the created NebulaGraph cluster. | - | `spec.graphd.replicas` | `1` | The numeric value of replicas of the Graphd service. | - | `spec.graphd.image` | `vesoft/nebula-graphd` | The container image of the Graphd service. | - | `spec.graphd.version` | `{{nebula.tag}}` | The version of the Graphd service. | - | `spec.graphd.service` | - | The Service configurations for the Graphd service. | - | `spec.graphd.logVolumeClaim.storageClassName` | - | The log disk storage configurations for the Graphd service. | - | `spec.metad.replicas` | `1` | The numeric value of replicas of the Metad service. | - | `spec.metad.image` | `vesoft/nebula-metad` | The container image of the Metad service. | - | `spec.metad.version` | `{{nebula.tag}}` | The version of the Metad service. | - | `spec.metad.dataVolumeClaim.storageClassName` | - | The data disk storage configurations for the Metad service. | - | `spec.metad.logVolumeClaim.storageClassName`|- | The log disk storage configurations for the Metad service.| - | `spec.storaged.replicas` | `3` | The numeric value of replicas of the Storaged service. | - | `spec.storaged.image` | `vesoft/nebula-storaged` | The container image of the Storaged service. | - | `spec.storaged.version` | `{{nebula.tag}}` | The version of the Storaged service. | - | `spec.storaged.dataVolumeClaims.resources.requests.storage` | - | Data disk storage size for the Storaged service. You can specify multiple data disks to store data. When multiple disks are specified, the storage path is `/usr/local/nebula/data1`, `/usr/local/nebula/data2`, etc.| - | `spec.storaged.dataVolumeClaims.resources.storageClassName` | - | The data disk storage configurations for Storaged. If not specified, the global storage parameter is applied. 
| - | `spec.storaged.logVolumeClaim.storageClassName`|- | The log disk storage configurations for the Storaged service.| - | `spec.storaged.enableAutoBalance` | `true` |Whether to balance data automatically. | - |`spec.agent`|`{}`| Configuration of the Agent service. This is used for backup and recovery as well as log cleanup functions. If you do not customize this configuration, the default configuration will be used.| - | `spec.reference.name` | - | The name of the dependent controller. | - | `spec.schedulerName` | - | The scheduler name. | - | `spec.imagePullPolicy` | The image policy to pull the NebulaGraph image. For details, see [Image pull policy](https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy). | The image pull policy in Kubernetes. | - |`spec.logRotate`| - |Log rotation configuration. For more information, see [Manage cluster logs](../8.custom-cluster-configurations/8.4.manage-running-logs.md).| - |`spec.enablePVReclaim`|`false`|Define whether to automatically delete PVCs and release data after deleting the cluster. For more information, see [Reclaim PVs](../8.custom-cluster-configurations/8.2.pv-reclaim.md).| + For the file content, see the [sample configuration](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/config/samples/apps_v1alpha1_nebulacluster.yaml). + + ??? Info "Expand to show sample parameter descriptions" + + | Parameter | Default value | Description | + | :---- | :--- | :--- | + | `metadata.name` | - | The name of the created NebulaGraph cluster. | + |`spec.console`|-| Configuration of the Console service. For details, see [nebula-console](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/doc/user/nebula_console.md#nebula-console).| + | `spec.graphd.replicas` | `1` | The numeric value of replicas of the Graphd service. | + | `spec.graphd.image` | `vesoft/nebula-graphd` | The container image of the Graphd service. | + | `spec.graphd.version` | `{{nebula.tag}}` | The version of the Graphd service. | + | `spec.graphd.service` | - | The Service configurations for the Graphd service. | + | `spec.graphd.logVolumeClaim.storageClassName` | - | The log disk storage configurations for the Graphd service. | + | `spec.metad.replicas` | `1` | The numeric value of replicas of the Metad service. | + | `spec.metad.image` | `vesoft/nebula-metad` | The container image of the Metad service. | + | `spec.metad.version` | `{{nebula.tag}}` | The version of the Metad service. | + | `spec.metad.dataVolumeClaim.storageClassName` | - | The data disk storage configurations for the Metad service. | + | `spec.metad.logVolumeClaim.storageClassName`|- | The log disk storage configurations for the Metad service.| + | `spec.storaged.replicas` | `3` | The numeric value of replicas of the Storaged service. | + | `spec.storaged.image` | `vesoft/nebula-storaged` | The container image of the Storaged service. | + | `spec.storaged.version` | `{{nebula.tag}}` | The version of the Storaged service. | + | `spec.storaged.dataVolumeClaims.resources.requests.storage` | - | Data disk storage size for the Storaged service. You can specify multiple data disks to store data. When multiple disks are specified, the storage path is `/usr/local/nebula/data1`, `/usr/local/nebula/data2`, etc.| + | `spec.storaged.dataVolumeClaims.resources.storageClassName` | - | The data disk storage configurations for Storaged. If not specified, the global storage parameter is applied. 
| + | `spec.storaged.logVolumeClaim.storageClassName`|- | The log disk storage configurations for the Storaged service.| + | `spec.storaged.enableAutoBalance` | `true` |Whether to balance data automatically. | + |`spec.agent`|`{}`| Configuration of the Agent service. This is used for backup and recovery as well as log cleanup functions. If you do not customize this configuration, the default configuration will be used.| + | `spec.reference.name` | - | The name of the dependent controller. | + | `spec.schedulerName` | - | The scheduler name. | + | `spec.imagePullPolicy` | The image policy to pull the NebulaGraph image. For details, see [Image pull policy](https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy). | The image pull policy in Kubernetes. | + |`spec.logRotate`| - |Log rotation configuration. For more information, see [Manage cluster logs](../8.custom-cluster-configurations/8.4.manage-running-logs.md).| + |`spec.enablePVReclaim`|`false`|Define whether to automatically delete PVCs and release data after deleting the cluster. For more information, see [Reclaim PVs](../8.custom-cluster-configurations/8.2.pv-reclaim.md).| {{ ent.ent_begin }} - For a NebulaGraph Enterprise cluster + Contact our sales team to get a complete NebulaGraph Enterprise Edition cluster YAML example. + !!! enterpriseonly Make sure that you have access to NebulaGraph Enterprise Edition images before pulling the image. - Create a file named `apps_v1alpha1_nebulacluster.yaml`. Contact our sales team to get a complete NebulaGraph Enterprise Edition cluster YAML example. You must customize and modify the following parameters, and other parameters can be changed as needed. + === "Cluster without Zones" + + You must set the following parameters in the configuration file for the enterprise edition. Other parameters can be changed as needed. For information on other parameters, see the [sample configuration](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/config/samples/apps_v1alpha1_nebulacluster.yaml). + - - `spec.metad.licenseManagerURL` - - `spec..image` - - `spec.imagePullSecrets` + | Parameter | Default value | Description | + | :---- | :--- | :--- | + | `spec.metad.licenseManagerURL` | - | Configure the URL that points to the LM, which consists of the access address and port number (default port `9119`) of the LM. For example, `192.168.8.100:9119`. **You must configure this parameter in order to obtain the license information; otherwise, the enterprise edition cluster cannot be used.** | + |`spec..image`|-|The container image of the Graph, Meta, or Storage service of the enterprise edition.| + |`spec.imagePullSecrets`| - |Specifies the Secret for pulling the NebulaGraph Enterprise service images from a private repository.| - The parameters only for NebulaGraph Enterprise Edition are described as follows: - | Parameter | Default value | Description | - | :---- | :--- | :--- | - | `spec.metad.licenseManagerURL` | - | Configure the URL that points to the LM, which consists of the access address and port number (default port `9119`) of the LM. For example, `192.168.8.100:9119`. **You must configure this parameter in order to obtain the license information; otherwise, the enterprise edition cluster cannot be used.** | - |`spec.storaged.enableAutoBalance`| `false`| Specifies whether to enable automatic data balancing. 
For more information, see [Balance storage data after scaling out](../8.custom-cluster-configurations/8.3.balance-data-when-scaling-storage.md).| - |`spec.enableBR`|`false`|Specifies whether to enable the BR tool. For more information, see [Backup and restore](../10.backup-restore-using-operator.md).| - |`spec.imagePullSecrets`| - |Specifies the Secret for pulling the NebulaGraph Enterprise service images from a private repository.| + === "Cluster with Zones" + NebulaGraph Operator supports creating a cluster with [Zones](../../4.deployment-and-installation/5.zone.md). + + You must set the following parameters for creating a cluster with Zones. Other parameters can be changed as needed. For more information on other parameters, see the [sample configuration](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/config/samples/apps_v1alpha1_nebulacluster.yaml). + + | Parameter | Default value | Description | + | :---- | :--- | :--- | + | `spec.metad.licenseManagerURL` | - | Configure the URL that points to the LM, which consists of the access address and port number (default port `9119`) of the LM. For example, `192.168.8.100:9119`. **You must configure this parameter in order to obtain the license information; otherwise, the enterprise edition cluster cannot be used.** | + |`spec..image`|-|The container image of the Graph, Meta, or Storage service of the enterprise edition.| + |`spec.imagePullSecrets`| - |Specifies the Secret for pulling the NebulaGraph Enterprise service images from a private repository.| + |`spec.alpineImage`|`reg.vesoft-inc.com/nebula-alpine:latest`|The Alpine Linux image, used to obtain the Zone information where nodes are located.| + |`spec.metad.config.zone_list`|-|A list of zone names, split by comma. For example: zone1,zone2,zone3.
**Zone names CANNOT be modified once set.**| + |`spec.graphd.config.prioritize_intra_zone_reading`|`false`|Specifies whether to prioritize sending queries to the storage nodes in the same zone.
When set to `true`, the query is sent to the storage nodes in the same zone. If reading fails in that Zone, it will decide based on `stick_to_intra_zone_on_failure` whether to read the leader partition replica data from other Zones. | + |`spec.graphd.config.stick_to_intra_zone_on_failure`|`false`|Specifies whether to stick to intra-zone routing if unable to find the requested partitions in the same zone. When set to `true`, if unable to find the partition replica in that Zone, it does not read data from other Zones.| + + ???+ note "Learn more about Zones in NebulaGraph Operator" + + **Understanding NebulaGraph's Zone Feature** + + NebulaGraph utilizes a feature called Zones to efficiently manage its distributed architecture. Each Zone represents a logical grouping of Storage pods and Graph pods, responsible for storing the complete graph space data. The data within NebulaGraph's spaces is partitioned, and replicas of these partitions are evenly distributed across all available Zones. The utilization of Zones can significantly reduce inter-Zone network traffic costs and boost data transfer speeds. Moreover, intra-zone-reading allows for increased availability, because replicas of a partition spread out among different zones. + + **Configuring NebulaGraph Zones** + + To make the most of the Zone feature, you first need to determine the actual Zone where your cluster nodes are located. Typically, nodes deployed on cloud platforms are labeled with their respective Zones. Once you have this information, you can configure it in your cluster's configuration file by setting the `spec.metad.config.zone_list` parameter. This parameter should be a list of Zone names, separated by commas, and should match the actual Zone names where your nodes are located. For example, if your nodes are in Zones `az1`, `az2`, and `az3`, your configuration would look like this: + + ```yaml + spec: + metad: + config: + zone_list: az1,az2,az3 + ``` + + **Operator's Use of Zone Information** + + NebulaGraph Operator leverages Kubernetes' [TopoloySpread](https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/) feature to manage the scheduling of Storage and Graph pods. Once the `zone_list` is configured, Storage services are automatically assigned to their respective Zones based on the `topology.kubernetes.io/zone` label. + + For intra-zone data access, the Graph service dynamically assigns itself to a Zone using the `--assigned_zone=$NODE_ZONE` parameter. It identifies the Zone name of the node where the Graph service resides by utilizing an init-container to fetch this information. The Alpine Linux image specified in `spec.alpineImage` (default: `reg.vesoft-inc.com/nebula-alpine:latest`) plays a role in obtaining Zone information. + + **Prioritizing Intra-Zone Data Access** + + By setting `spec.graphd.config.prioritize_intra_zone_reading` to `true` in the cluster configuration file, you enable the Graph service to prioritize sending queries to Storage services within the same Zone. In the event of a read failure within that Zone, the behavior depends on the value of `spec.graphd.config.stick_to_intra_zone_on_failure`. If set to `true`, the Graph service avoids reading data from other Zones and returns an error. Otherwise, it reads data from leader partition replicas in other Zones. 
+ + ```yaml + spec: + alpineImage: reg.vesoft-inc.com/cloud-dev/nebula-alpine:latest + graphd: + config: + prioritize_intra_zone_reading: "true" + stick_to_intra_zone_on_failure: "false" + ``` + + **Zone Mapping for Resilience** + + Once Storage and Graph services are assigned to Zones, the mapping between the pod and its corresponding Zone is stored in a configmap named `-graphd|storaged-zone`. This mapping facilitates pod scheduling during rolling updates and pod restarts, ensuring that services return to their original Zones as needed. + + !!! caution + + DO NOT manually modify the configmaps created by NebulaGraph Operator. Doing so may cause unexpected behavior. + + + Other optional parameters for the enterprise edition are as follows: + + | Parameter | Default value | Description | + | :---- | :--- | :--- | + |`spec.storaged.enableAutoBalance`| `false`| Specifies whether to enable automatic data balancing. For more information, see [Balance storage data after scaling out](../8.custom-cluster-configurations/8.3.balance-data-when-scaling-storage.md).| + |`spec.enableBR`|`false`|Specifies whether to enable the BR tool. For more information, see [Backup and restore](../10.backup-restore-using-operator.md).| + |`spec.graphd.enable_graph_ssl`|`false`| Specifies whether to enable SSL for the Graph service. For more details, see [Enable mTLS](../8.custom-cluster-configurations/8.5.enable-ssl.md). | + + + ??? info "Expand to view sample cluster configurations" + + ```yaml + apiVersion: apps.nebula-graph.io/v1alpha1 + kind: NebulaCluster + metadata: + name: nebula + namespace: default + spec: + alpineImage: "reg.vesoft-inc.com/cloud-dev/nebula-alpine:latest" + agent: + image: reg.vesoft-inc.com/cloud-dev/nebula-agent + version: v3.6.0-sc + exporter: + image: vesoft/nebula-stats-exporter + replicas: 1 + maxRequests: 20 + console: + version: "nightly" + graphd: + config: + accept_partial_success: "true" + ca_client_path: certs/root.crt + ca_path: certs/root.crt + cert_path: certs/server.crt + key_path: certs/server.key + enable_graph_ssl: "true" + prioritize_intra_zone_reading: "true" + stick_to_intra_zone_on_failure: "true" + logtostderr: "1" + redirect_stdout: "false" + stderrthreshold: "0" + initContainers: + - name: init-auth-sidecar + imagePullPolicy: IfNotPresent + image: 496756745489.dkr.ecr.us-east-1.amazonaws.com/auth-sidecar:v1.60.0 + env: + - name: AUTH_SIDECAR_CONFIG_FILENAME + value: sidecar-init + volumeMounts: + - name: credentials + mountPath: /credentials + - name: auth-sidecar-config + mountPath: /etc/config + sidecarContainers: + - name: auth-sidecar + image: 496756745489.dkr.ecr.us-east-1.amazonaws.com/auth-sidecar:v1.60.0 + imagePullPolicy: IfNotPresent + resources: + requests: + cpu: 100m + memory: 500Mi + env: + - name: LOCAL_POD_IP + valueFrom: + fieldRef: + fieldPath: status.podIP + - name: LOCAL_POD_NAME + valueFrom: + fieldRef: + fieldPath: metadata.name + - name: LOCAL_POD_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + readinessProbe: + httpGet: + path: /ready + port: 8086 + initialDelaySeconds: 5 + periodSeconds: 10 + successThreshold: 1 + failureThreshold: 3 + livenessProbe: + httpGet: + path: /live + port: 8086 + initialDelaySeconds: 5 + periodSeconds: 10 + successThreshold: 1 + failureThreshold: 3 + volumeMounts: + - name: credentials + mountPath: /credentials + - name: auth-sidecar-config + mountPath: /etc/config + volumes: + - name: credentials + emptyDir: + medium: Memory + volumeMounts: + - name: credentials + mountPath: 
/usr/local/nebula/certs + resources: + requests: + cpu: "2" + memory: "2Gi" + limits: + cpu: "2" + memory: "2Gi" + replicas: 1 + image: reg.vesoft-inc.com/rc/nebula-graphd-ent + version: v3.5.0-sc + metad: + config: + redirect_stdout: "false" + stderrthreshold: "0" + logtostder: "true" + # Zone names CANNOT be modified once set. + # It's suggested to set an odd number of Zones. + zone_list: az1,az2,az3 + licenseManagerURL: "192.168.8.xxx:9119" + resources: + requests: + cpu: "300m" + memory: "500Mi" + limits: + cpu: "1" + memory: "1Gi" + replicas: 3 + image: reg.vesoft-inc.com/rc/nebula-metad-ent + version: v3.5.0-sc + dataVolumeClaim: + resources: + requests: + storage: 2Gi + storageClassName: local-path + storaged: + config: + redirect_stdout: "false" + stderrthreshold: "0" + logtostder: "true" + resources: + requests: + cpu: "1" + memory: "1Gi" + limits: + cpu: "2" + memory: "2Gi" + replicas: 3 + image: reg.vesoft-inc.com/rc/nebula-storaged-ent + version: v3.5.0-sc + dataVolumeClaims: + - resources: + requests: + storage: 2Gi + storageClassName: local-path + enableAutoBalance: true + reference: + name: statefulsets.apps + version: v1 + schedulerName: nebula-scheduler + nodeSelector: + nebula: cloud + imagePullPolicy: Always + imagePullSecrets: + - name: nebula-image + topologySpreadConstraints: + - topologyKey: "topology.kubernetes.io/zone" + whenUnsatisfiable: "DoNotSchedule" + ``` {{ ent.ent_end }} @@ -212,9 +443,13 @@ The following shows how to scale out a NebulaGraph cluster by changing the numbe The principle of scaling in a cluster is the same as scaling out a cluster. You scale in a cluster if the numeric value of the `replicas` in `apps_v1alpha1_nebulacluster.yaml` is changed smaller than the current number. For more information, see the **Scale out clusters** section above. +In the process of downsizing the cluster, if the operation is not complete successfully and seems to be stuck, you may need to check the status of the job using the `nebula-console` client specified in the `spec.console` field. Analyzing the logs and manually intervening can help ensure that the Job runs successfully. For information on how to check jobs, see [Job statements](../../3.ngql-guide/4.job-statements.md). + !!! caution - NebulaGraph Operator currently only supports scaling Graph and Storage services and does not support scale Meta services. + - NebulaGraph Operator currently only supports scaling Graph and Storage services and does not support scale Meta services. + - If you scale in a cluster with Zones, make sure that the number of remaining storage pods is not less than the number of Zones specified in the `spec.metad.config.zone_list` field. Otherwise, the cluster will fail to start. + {{ ent.ent_end }} diff --git a/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md b/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md index 162dde38f9c..31273375d2a 100644 --- a/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md +++ b/docs-2.0/nebula-operator/3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md @@ -70,36 +70,67 @@ --set nebula.version=v{{nebula.release}} \ # Specify the version of the nebula-cluster chart. If not specified, the latest version of the chart is installed by default. # Run 'helm search repo nebula-operator/nebula-cluster' to view the available versions of the chart. 
- --version={{operator.release}} + --version={{operator.release}} \ --namespace="${NEBULA_CLUSTER_NAMESPACE}" \ ``` {{ent.ent_begin}} - !!! enterpriseonly - - For NebulaGraph Enterprise, run the following command to create a NebulaGraph cluster: - - ```bash - helm install "${NEBULA_CLUSTER_NAME}" nebula-operator/nebula-cluster \ - # Configure the access address and port (default port is '9119') that points to the LM. You must configure this parameter in order to obtain the license information. Only for NebulaGraph Enterprise Edition clusters. - --set nebula.metad.licenseManagerURL=`192.168.8.XXX:9119` \ - # Configure the image addresses for each service in the cluster. - --set nebula.graphd.image= \ - --set nebula.metad.image= \ - --set nebula.storaged.image= \ - # Configure the Secret for pulling images from a private repository. - --set nebula.imagePullSecrets= \ - --set nameOverride=${NEBULA_CLUSTER_NAME} \ - --set nebula.storageClassName="${STORAGE_CLASS_NAME}" \ - # Specify the version of the NebulaGraph cluster. - --set nebula.version=v{{nebula.release}} \ - # Specify the version of the nebula-cluster chart. If not specified, the latest version of the chart is installed by default. - # Run 'helm search repo nebula-operator/nebula-cluster' to view the available versions of the chart. - --version={{operator.release}} - --namespace="${NEBULA_CLUSTER_NAMESPACE}" \ - ``` - {{ent.ent_end}} + To create a NebulaGraph cluster for Enterprise Edition, run the following command: + + === "Cluster without Zones" + + ```bash + helm install "${NEBULA_CLUSTER_NAME}" nebula-operator/nebula-cluster \ + # Configure the access address and port (default port is '9119') that points to the LM. You must configure this parameter in order to obtain the license information. Only for NebulaGraph Enterprise Edition clusters. + --set nebula.metad.licenseManagerURL=`192.168.8.XXX:9119` \ + # Configure the image addresses for each service in the cluster. + --set nebula.graphd.image= \ + --set nebula.metad.image= \ + --set nebula.storaged.image= \ + # Configure the Secret for pulling images from a private repository. + --set nebula.imagePullSecrets= \ + --set nameOverride=${NEBULA_CLUSTER_NAME} \ + --set nebula.storageClassName="${STORAGE_CLASS_NAME}" \ + # Specify the version of the NebulaGraph cluster. + --set nebula.version=v{{nebula.release}} \ + # Specify the version of the nebula-cluster chart. If not specified, the latest version of the chart is installed by default. + # Run 'helm search repo nebula-operator/nebula-cluster' to view the available versions of the chart. + --version={{operator.release}} \ + --namespace="${NEBULA_CLUSTER_NAMESPACE}" \ + ``` + + === "Cluster with Zones" + + NebulaGraph Operator supports the [Zones](../../4.deployment-and-installation/5.zone.md) feature. For how to use Zones in NebulaGraph Operator, see [Learn more about Zones in NebulaGraph Operator](3.1create-cluster-with-kubectl.md) + + ```bash + helm install "${NEBULA_CLUSTER_NAME}" nebula-operator/nebula-cluster \ + # Configure the access address and port (default port is '9119') that points to the LM. You must configure this parameter in order to obtain the license information. Only for NebulaGraph Enterprise Edition clusters. + --set nebula.metad.licenseManagerURL=`192.168.8.XXX:9119` \ + # Configure the image addresses for each service in the cluster. + --set nebula.graphd.image= \ + --set nebula.metad.image= \ + --set nebula.storaged.image= \ + # Configure the Secret for pulling images from a private repository. 
+ --set nebula.imagePullSecrets= \ + --set nameOverride=${NEBULA_CLUSTER_NAME} \ + --set nebula.storageClassName="${STORAGE_CLASS_NAME}" \ + # Specify the version of the NebulaGraph cluster. + --set nebula.version=v{{nebula.release}} \ + # Specify the version of the nebula-cluster chart. If not specified, the latest version of the chart is installed by default. + # Run 'helm search repo nebula-operator/nebula-cluster' to view the available versions of the chart. + --version={{operator.release}} \ + # Configure Zones + # Once Zones are configured, the Zone information cannot be modified. + # It's suggested to configure an odd number of Zones. + --set nebula.metad.config.zone_list= \ + --set nebula.graphd.config.prioritize_intra_zone_reading=true \ + --set nebula.graphd.config.stick_to_intra_zone_on_failure=false \ + --namespace="${NEBULA_CLUSTER_NAMESPACE}" \ + ``` + + {{ent.ent_end}} To view all configuration parameters of the NebulaGraph cluster, run the `helm show values nebula-operator/nebula-cluster` command or click [nebula-cluster/values.yaml](https://github.com/vesoft-inc/nebula-operator/blob/{{operator.branch}}/charts/nebula-cluster/values.yaml). @@ -150,9 +181,12 @@ helm upgrade "${NEBULA_CLUSTER_NAME}" nebula-operator/nebula-cluster \ Similarly, you can scale in a NebulaGraph cluster by setting the value of the `replicas` corresponding to the different services in the cluster smaller than the original value. +In the process of downsizing the cluster, if the operation is not complete successfully and seems to be stuck, you may need to check the status of the job using the `nebula-console` client specified in the `spec.console` field. Analyzing the logs and manually intervening can help ensure that the Job runs successfully. For information on how to check jobs, see [Job statements](../../3.ngql-guide/4.job-statements.md). + !!! caution - NebulaGraph Operator currently only supports scaling Graph and Storage services and does not support scale Meta services. + - NebulaGraph Operator currently only supports scaling Graph and Storage services and does not support scale Meta services. + - If you scale in a cluster with Zones, make sure that the number of remaining storage pods is not less than the number of Zones specified in the `spec.metad.config.zone_list` field. Otherwise, the cluster will fail to start. You can click on [nebula-cluster/values.yaml](https://github.com/vesoft-inc/nebula-operator/blob/{{operator.tag}}/charts/nebula-cluster/values.yaml) to see more configurable parameters of the nebula-cluster chart. For more information about the descriptions of configurable parameters, see **Configuration parameters of the nebula-cluster Helm chart** below. diff --git a/docs-2.0/nebula-operator/4.connect-to-nebula-graph-service.md b/docs-2.0/nebula-operator/4.connect-to-nebula-graph-service.md index b3ce3015bf9..acc70c4f4c5 100644 --- a/docs-2.0/nebula-operator/4.connect-to-nebula-graph-service.md +++ b/docs-2.0/nebula-operator/4.connect-to-nebula-graph-service.md @@ -95,6 +95,19 @@ Steps: - `-u`: The username of your NebulaGraph account. Before enabling authentication, you can use any existing username. The default username is root. - `-p`: The password of your NebulaGraph account. Before enabling authentication, you can use any characters as the password. + !!! note + + If the `spec.console` field is set in the cluster configuration file, you can also connect to NebulaGraph databases with the following command: + + ```bash + # Enter the nebula-console Pod. 
+ kubectl exec -it nebula-console -- /bin/sh + + # Connect to NebulaGraph databases. + nebula-console -addr -port -u -p + ``` + + For information about the nebula-console container, see [nebula-console](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/doc/user/nebula_console.md#nebula-console). ## Connect to NebulaGraph databases from within a NebulaGraph cluster @@ -160,6 +173,7 @@ You can also create a `ClusterIP` type Service to provide an access point to the ```bash kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- nebula-console -addr 10.98.213.34 -port 9669 -u root -p vesoft + ``` - `--image`: The image for the tool NebulaGraph Console used to connect to NebulaGraph databases. - ``: The custom Pod name. @@ -176,13 +190,27 @@ You can also create a `ClusterIP` type Service to provide an access point to the (root@nebula) [(none)]> ``` -You can also connect to NebulaGraph databases with **Fully Qualified Domain Name (FQDN)**. The domain format is `-graphd..svc.`. The default value of `CLUSTER_DOMAIN` is `cluster.local`. + You can also connect to NebulaGraph databases with **Fully Qualified Domain Name (FQDN)**. The domain format is `-graphd..svc.`. The default value of `CLUSTER_DOMAIN` is `cluster.local`. -```bash -kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- -addr -graphd-svc.default.svc.cluster.local -port -u -p -``` + ```bash + kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- -addr -graphd-svc.default.svc.cluster.local -port -u -p + ``` -`service_port` is the port to connect to Graphd services, the default port of which is `9669`. + `service_port` is the port to connect to Graphd services, the default port of which is `9669`. + + !!! note + + If the `spec.console` field is set in the cluster configuration file, you can also connect to NebulaGraph databases with the following command: + + ```bash + # Enter the nebula-console Pod. + kubectl exec -it nebula-console -- /bin/sh + + # Connect to NebulaGraph databases. + nebula-console -addr nebula-graphd-svc.default.svc.cluster.local -port 9669 -u -p + ``` + + For information about the nebula-console container, see [nebula-console](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/doc/user/nebula_console.md#nebula-console). ## Connect to NebulaGraph databases from outside a NebulaGraph cluster via Ingress @@ -275,3 +303,17 @@ Steps are as follows. If you don't see a command prompt, try pressing enter. (root@nebula) [(none)]> ``` + + !!! note + + If the `spec.console` field is set in the cluster configuration file, you can also connect to NebulaGraph databases with the following command: + + ```bash + # Enter the nebula-console Pod. + kubectl exec -it nebula-console -- /bin/sh + + # Connect to NebulaGraph databases. + nebula-console -addr -port -u -p + ``` + + For information about the nebula-console container, see [nebula-console](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/doc/user/nebula_console.md#nebula-console). 
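If you only need temporary access for testing and prefer not to configure a `NodePort` Service or Ingress, `kubectl port-forward` offers another way to reach the Graph service from your workstation. The following is a minimal sketch, assuming the cluster is named `nebula`, runs in the `default` namespace, and exposes the default Graph port `9669`; adjust the Service name and credentials to match your deployment.

```bash
# A sketch: forward the Graph service port to the local machine.
# Assumes the Graph Service is named "nebula-graphd-svc" in the "default" namespace.
kubectl port-forward svc/nebula-graphd-svc 9669:9669 -n default

# In another terminal, connect through the forwarded port with NebulaGraph Console.
nebula-console -addr 127.0.0.1 -port 9669 -u root -p nebula
```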
diff --git a/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.2.pv-reclaim.md b/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.2.pv-reclaim.md index 177c9578243..55cfa9e88ca 100644 --- a/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.2.pv-reclaim.md +++ b/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.2.pv-reclaim.md @@ -4,6 +4,10 @@ NebulaGraph Operator uses PVs (Persistent Volumes) and PVCs (Persistent Volume C You can also define the automatic deletion of PVCs to release data by setting the parameter `spec.enablePVReclaim` to `true` in the configuration file of the cluster instance. As for whether PV will be deleted automatically after PVC is deleted, you need to customize the PV reclaim policy. See [reclaimPolicy in StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/#reclaim-policy) and [PV Reclaiming](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming) for details. +## Notes + +The NebulaGraph Operator currently does not support the resizing of Persistent Volume Claims (PVCs), but this feature is expected to be supported in version 1.6.1. Additionally, the Operator does not support dynamically adding or mounting storage volumes to a running storaged instance. + ## Prerequisites You have created a cluster. For how to create a cluster with Kubectl, see [Create a cluster with Kubectl](../3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md). diff --git a/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.5.enable-ssl.md b/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.5.enable-ssl.md index c4d6506abc3..b5f265c9845 100644 --- a/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.5.enable-ssl.md +++ b/docs-2.0/nebula-operator/8.custom-cluster-configurations/8.5.enable-ssl.md @@ -1,6 +1,6 @@ # Enable mTLS in NebulaGraph -Transport Layer Security (TLS) is an encryption protocol in wide use on the Internet. TLS, which was formerly called SSL, authenticates the server in a client-server connection and encrypts communications between client and server so that external parties cannot spy on the communications. Its working principle is mainly to protect data transmitted over the network by using encryption algorithms to prevent data interception or tampering during transmission. During the TLS connection establishment process, the server sends a digital certificate containing a public key and some identity information to the client. This certificate is issued by a trusted third-party Certification Authority (CA). The client verifies this digital certificate to confirm the identity of the server. +Transport Layer Security (TLS) is an encryption protocol in wide use on the Internet. TLS, which was formerly called SSL, authenticates the server in a client-server connection and encrypts communications between client and server so that external parties cannot spy on the communications. Its working principle is mainly to protect data transmitted over the network by using encryption algorithms to prevent data interception or tampering during transmission. During the TLS connection establishment process, the server sends a digital certificate containing a public key and some identity information to the client. This certificate is issued by a trusted third-party Certification Authority (CA). The client verifies this digital certificate to confirm the identity of the server. 
In the NebulaGraph environment running in Kubernetes, mutual TLS (mTLS) is used to encrypt the communication between the client and server by default, which means both the client and server need to verify their identities. This article explains how to enable mTLS encryption in NebulaGraph running in K8s. @@ -14,236 +14,598 @@ In the NebulaGraph environment running in Kubernetes, mutual TLS (mTLS) is used In the cluster created using Operator, the client and server use the same CA root certificate by default. -## Create a TLS-type Secret +## Encryption policies -In a K8s cluster, you can create Secrets to store sensitive information, such as passwords, OAuth tokens, and TLS certificates. In NebulaGraph, you can create a Secret to store TLS certificates and private keys. When creating a Secret, the type `tls` should be specified. A `tls` Secret is used to store TLS certificates. +NebulaGraph provides three encryption policies for mTLS: -For example, to create a Secret for storing server certificates and private keys: +- Encryption of data transmission between the client and the Graph service. + + This policy only involves encryption between the client and the Graph service and does not encrypt data transmission between other services in the cluster. -```bash -kubectl create secret tls --key= --cert= --namespace= +- Encrypt the data transmission between clients, the Graph service, the Meta service, and the Storage service. + + This policy encrypts data transmission between the client, Graph service, Meta service, and Storage service in the cluster. + +- Encryption of data transmission related to the Meta service within the cluster. + + This policy only involves encrypting data transmission related to the Meta service within the cluster and does not encrypt data transmission between other services or the client. + +For different encryption policies, you need to configure different fields in the cluster configuration file. For more information, see [Authentication policies](../../7.data-security/4.ssl.md#authentication_policies). + +## mTLS with certificate hot-reloading + +NebulaGraph Operator supports enabling mTLS with certificate hot-reloading. The following provides an example of the configuration file to enable mTLS between the client and the Graph service. + +### Sample configurations + +??? 
info "Expand to view the sample configurations of mTLS" + + ```yaml + apiVersion: apps.nebula-graph.io/v1alpha1 + kind: NebulaCluster + metadata: + name: nebula + spec: + exporter: + image: vesoft/nebula-stats-exporter + replicas: 1 + maxRequests: 20 + graphd: + config: + accept_partial_success: "true" + ca_client_path: certs/root.crt + ca_path: certs/root.crt + cert_path: certs/server.crt + enable_graph_ssl: "true" + enable_intra_zone_routing: "true" + key_path: certs/server.key + logtostderr: "1" + redirect_stdout: "false" + stderrthreshold: "0" + stick_to_intra_zone_on_failure: "true" + timestamp_in_logfile_name: "false" + initContainers: + - name: init-auth-sidecar + command: + - /bin/sh + - -c + args: + - cp /certs/* /credentials/ + imagePullPolicy: Always + image: reg.vesoft-inc.com/cloud-dev/nebula-certs:latest + volumeMounts: + - name: credentials + mountPath: /credentials + sidecarContainers: + - name: auth-sidecar + imagePullPolicy: Always + image: reg.vesoft-inc.com/cloud-dev/nebula-certs:latest + volumeMounts: + - name: credentials + mountPath: /credentials + volumes: + - name: credentials + emptyDir: + medium: Memory + volumeMounts: + - name: credentials + mountPath: /usr/local/nebula/certs + logVolumeClaim: + resources: + requests: + storage: 1Gi + storageClassName: local-path + resources: + requests: + cpu: "200m" + memory: "500Mi" + limits: + cpu: "1" + memory: "1Gi" + replicas: 1 + image: reg.vesoft-inc.com/rc/nebula-graphd-ent + version: v3.5.0-sc + metad: + licenseManagerURL: "192.168.8.53:9119" + resources: + requests: + cpu: "300m" + memory: "500Mi" + limits: + cpu: "1" + memory: "1Gi" + replicas: 1 + image: reg.vesoft-inc.com/rc/nebula-metad-ent + version: v3.5.0-sc + dataVolumeClaim: + resources: + requests: + storage: 2Gi + storageClassName: local-path + logVolumeClaim: + resources: + requests: + storage: 1Gi + storageClassName: local-path + storaged: + resources: + requests: + cpu: "300m" + memory: "500Mi" + limits: + cpu: "1" + memory: "1Gi" + replicas: 1 + image: reg.vesoft-inc.com/rc/nebula-storaged-ent + version: v3.5.0-sc + dataVolumeClaims: + - resources: + requests: + storage: 2Gi + storageClassName: local-path + logVolumeClaim: + resources: + requests: + storage: 1Gi + storageClassName: local-path + enableAutoBalance: true + reference: + name: statefulsets.apps + version: v1 + schedulerName: default-scheduler + imagePullPolicy: Always + imagePullSecrets: + - name: nebula-image + enablePVReclaim: true + topologySpreadConstraints: + - topologyKey: "kubernetes.io/hostname" + whenUnsatisfiable: "ScheduleAnyway" + ``` + +### Configure `spec..config` + +To enable mTLS between the client and the Graph service, configure the `spec.graphd.config` field in the cluster configuration file. The paths specified in fields with `*_path` correspond to file paths relative to `/user/local/nebula`. **It's important to avoid using absolute paths to prevent path recognition errors.** + +```yaml +spec: + graph: + config: + ca_client_path: certs/root.crt + ca_path: certs/root.crt + cert_path: certs/server.crt + enable_graph_ssl: "true" + key_path: certs/server.key ``` -- ``: The name of the Secret storing the server certificate and private key. -- ``: The path to the server private key file. -- ``: The path to the server certificate file. -- ``: The namespace where the Secret is located. If `--namespace` is not specified, it defaults to the `default` namespace. 
+For the configurations of the other two authentication policies: -You can follow the above steps to create Secrets for the client certificate and private key, and the CA certificate. +- To enable mTLS between the client, the Graph service, the Meta service, and the Storage service: + + Configure the `spec.metad.config`, `spec.graphd.config`, and `spec.storaged.config` fields in the cluster configuration file. + ```yaml + spec: + graph: + config: + ca_client_path: certs/root.crt + ca_path: certs/root.crt + cert_path: certs/server.crt + enable_ssl: "true" + key_path: certs/server.key + metad: + config: + ca_client_path: certs/root.crt + ca_path: certs/root.crt + cert_path: certs/server.crt + enable_ssl: "true" + key_path: certs/server.key + storaged: + config: + ca_client_path: certs/root.crt + ca_path: certs/root.crt + cert_path: certs/server.crt + enable_ssl: "true" + key_path: certs/server.key + ``` -To view the created Secrets: +- To enable mTLS related to the Meta service: + + Configure the `spec.metad.config`, `spec.graphd.config`, and `spec.storaged.config` fields in the cluster configuration file. -```bash -kubectl get secret --namespace= + ```yaml + spec: + graph: + config: + ca_client_path: certs/root.crt + ca_path: certs/root.crt + cert_path: certs/server.crt + enable_meta_ssl: "true" + key_path: certs/server.key + metad: + config: + ca_client_path: certs/root.crt + ca_path: certs/root.crt + cert_path: certs/server.crt + enable_meta_ssl: "true" + key_path: certs/server.key + storaged: + config: + ca_client_path: certs/root.crt + ca_path: certs/root.crt + cert_path: certs/server.crt + enable_meta_ssl: "true" + key_path: certs/server.key + ``` + +### Configure `initContainers`, `sidecarContainers`, `volumes`, and `volumeMounts` + +`initContainers`, `sidecarContainers`, `volumes`, and `volumeMounts` fields are essential for implementing mTLS certificate online hot-reloading. For the encryption scenario where only the Graph service needs to be encrypted, you need to configure `initContainers`, `sidecarContainers`, `volumes`, and `volumeMounts` under `spec.graph.config`. + +#### `initContainers` + +The `initContainers` field is utilized to configure an init-container responsible for generating certificate files. Note that the `volumeMounts` field specifies how the `credentials` volume, shared with the NebulaGraph container, is mounted, providing read and write access. + +In the following example, `init-auth-sidecar` performs the task of copying files from the `certs` directory within the image to `/credentials`. After this task is completed, the init-container exits. + +Example: + +```yaml +initContainers: +- name: init-auth-sidecar + command: + - /bin/sh + - -c + args: + - cp /certs/* /credentials/ + imagePullPolicy: Always + image: reg.vesoft-inc.com/cloud-dev/nebula-certs:latest + volumeMounts: + - name: credentials + mountPath: /credentials ``` -## Configure certifications +#### `sidecarContainers` + +The `sidecarContainers` field is dedicated to periodically monitoring the expiration time of certificates and, when they are near expiration, generating new certificates to replace the existing ones. This process ensures seamless online certificate hot-reloading without any service interruptions. The `volumeMounts` field specifies how the `credentials` volume is mounted, and this volume is shared with the NebulaGraph container. -Operator provides the `sslCerts` field to specify the encrypted certificates. 
The `sslCerts` field contains three subfields: `serverSecret`, `clientSecret`, and `caSecret`. These three fields are used to specify the Secret names of the NebulaGraph server certificate, client certificate, and CA certificate, respectively. When you specify these three fields, Operator reads the certificate content from the corresponding Secret and mounts it into the cluster's Pod. +In the example provided, the `auth-sidecar` container employs the `crond` process, which runs a crontab script every minute. This script checks the certificate's expiration status using the `openssl x509 -noout -enddate` command. + +Example: ```yaml -sslCerts: - serverSecret: "server-cert" # The name of the server certificate Secret. - serverCert: "" # The key name of the certificate in the server certificate Secret, default is tls.crt. - serverKey: "" # The key name of the private key in the server certificate Secret, default is tls.key. - clientSecret: "client-cert" # The name of the client certificate Secret. - clientCert: "" # The key name of the certificate in the client certificate Secret, default is tls.crt. - clientKey: "" # The key name of the private key in the client certificate Secret, default is tls.key. - caSecret: "ca-cert" # The name of the CA certificate Secret. - caCert: "" # The key name of the certificate in the CA certificate Secret, default is ca.crt. +sidecarContainers: +- name: auth-sidecar + imagePullPolicy: Always + image: reg.vesoft-inc.com/cloud-dev/nebula-certs:latest + volumeMounts: + - name: credentials + mountPath: /credentials ``` -The `serverCert` and `serverKey`, `clientCert` and `clientKey`, and `caCert` are used to specify the key names of the certificate and private key of the server Secret, the key names of the certificate and private key of the client Secret, and the key name of the CA Secret certificate. If you do not customize these field values, Operator defaults `serverCert` and `clientCert` to `tls.crt`, `serverKey` and `clientKey` to `tls.key`, and `caCert` to `ca.crt`. However, in the K8s cluster, the TLS type Secret uses `tls.crt` and `tls.key` as the default key names for the certificate and private key. Therefore, after creating the NebulaGraph cluster, you need to manually change the `caCert` field from `ca.crt` to `tls.crt` in the cluster configuration, so that the Operator can correctly read the content of the CA certificate. Before you customize these field values, you need to specify the key names of the certificate and private key in the Secret when creating it. For how to create a Secret with the key name specified, run the `kubectl create secret generic -h` command for help. +#### `volumes` + +The `volumes` field defines the storage volumes that need to be attached to the NebulaGraph pod. In the following example, the `credentials` volume is of type `emptyDir`, which is a temporary storage volume. Multiple containers can mount the `emptyDir` volume, and they all have read and write access to the same files within the volume. -You can use the `insecureSkipVerify` field to decide whether the client will verify the server's certificate chain and hostname. In production environments, it is recommended to set this to `false` to ensure the security of communication. If set to `true`, the client will not verify the server's certificate chain and hostname. +Example: ```yaml -sslCerts: - # Determines whether the client needs to verify the server's certificate chain and hostname when establishing an SSL connection. 
- insecureSkipVerify: false +volumes: +- name: credentials + emptyDir: + medium: Memory ``` -!!! caution +#### `volumeMounts` - Make sure that you have added the hostname or IP of the server to the server's certificate's `subjectAltName` field before the `insecureSkipVerify` is set to `false`. If the hostname or IP of the server is not added, an error will occur when the client verifies the server's certificate chain and hostname. For details, see [openssl](https://kubernetes.io/docs/tasks/administer-cluster/certificates/#openssl). +The `volumeMounts` field specifies the location within the container where the storage volume is mounted. In the example below, the `credentials` storage volume is mounted at the path `/usr/local/nebula/certs` within the NebulaGraph container. -When the certificates are approaching expiration, they can be automatically updated by installing [cert-manager](https://cert-manager.io/docs/installation/supported-releases/). NebulaGraph will monitor changes to the certificate directory files, and once a change is detected, it will load the new certificate content into memory. +Example: -## Encryption strategies +```yaml +volumeMounts: +- name: credentials + mountPath: /usr/local/nebula/certs +``` -NebulaGraph offers three encryption strategies that you can choose and configure according to your needs. +### Configure `sslCerts` -- Encryption of client-graph and all inter-service communications +The `spec.sslCerts` field specifies the encrypted certificates for NebulaGraph Operator and the [nebula-agent](https://github.com/vesoft-inc/nebula-agent) client (if you do not use the default nebula-agent image in Operator). - If you want to encrypt all data transmission between the client, Graph service, Meta service, and Storage service, you need to add the `enable_ssl = true` field to each service. +For the other two scenarios where the Graph service, Meta service, and Storage service need to be encrypted, and where only the Meta service needs to be encrypted, you not only need to configure `initContainers`, `sidecarContainers`, `volumes`, and `volumeMounts` under `spec.graph.config`, `spec.storage.config`, and `spec.meta.config`, but also configure `spec.sslCerts`. - Here is an example configuration: +```yaml +spec: + sslCerts: + clientSecret: "client-cert" + caSecret: "ca-cert" # The Secret name of the CA certificate. + caCert: "root.crt" +``` - ```yaml - apiVersion: apps.nebula-graph.io/v1alpha1 - kind: NebulaCluster - metadata: - name: nebula - namespace: default - spec: - sslCerts: - serverSecret: "server-cert" # The Secret name of the server certificate and private key. - clientSecret: "client-cert" # The Secret name of the client certificate and private key. - caSecret: "ca-cert" # The Secret name of the CA certificate. - graphd: - config: - enable_ssl: "true" - metad: - config: - enable_ssl: "true" - storaged: - config: - enable_ssl: "true" - ``` +The `insecureSkipVerify` field is used to determine whether the client will verify the server's certificate chain and hostname. In production environments, it is recommended to set this to `false` to ensure the security of communication. If set to `true`, the client will not verify the server's certificate chain and hostname. -- Encryption of only Graph service communication - - If the K8s cluster is deployed in the same data center and only the port of the Graph service is exposed externally, you can choose to encrypt only data transmission between the client and the Graph service. 
In this case, other services can communicate internally without encryption. Just add the `enable_graph_ssl = true` field to the Graph service. +```yaml +spec: + sslCerts: + insecureSkipVerify: false +``` - Here is an example configuration: +!!! caution - ```yaml - apiVersion: apps.nebula-graph.io/v1alpha1 - kind: NebulaCluster - metadata: - name: nebula - namespace: default - spec: - sslCerts: - serverSecret: "server-cert" - caSecret: "ca-cert" - graphd: - config: - enable_graph_ssl: "true" - ``` + Make sure that you have added the hostname or IP of the server to the server's certificate's `subjectAltName` field before the `insecureSkipVerify` is set to `false`. If the hostname or IP of the server is not added, an error will occur when the client verifies the server's certificate chain and hostname. For details, see [openssl](https://kubernetes.io/docs/tasks/administer-cluster/certificates/#openssl). - !!! note +### Connect to the Graph service - Because Operator doesn't need to call the Graph service through an interface, it's not necessary to set `clientSecret` in `sslCerts`. -- Encryption of only Meta service communication - - If you need to transmit confidential information to the Meta service, you can choose to encrypt data transmission related to the Meta service. In this case, you need to add the `enable_meta_ssl = true` configuration to each component. +After applying the cluster configuration file by running `kubectl apply -f`, you can use NebulaGraph Console to connect to the Graph service with the following command. - Here is an example configuration: +!!! note - ```yaml - apiVersion: apps.nebula-graph.io/v1alpha1 - kind: NebulaCluster - metadata: - name: nebula - namespace: default - spec: - sslCerts: - serverSecret: "server-cert" - clientSecret: "client-cert" - caSecret: "ca-cert" - graphd: - config: - enable_meta_ssl: "true" - metad: - config: - enable_meta_ssl: "true" - storaged: - config: - enable_meta_ssl: "true" - ``` + When mTLS is required for external clients to connect to the Graph service, you need to set the relevant SSL fields depending on different [clients](../../14.client/1.nebula-client.md). -After setting up the encryption policy, when an external [client](../../14.client/1.nebula-client.md) needs to connect to the Graph service with mutual TLS, you still need to set the relevant TLS parameters according to the different clients. See the Use NebulaGraph Console to connect to Graph service section below for examples. +You can configure `spec.console` to start a NebulaGraph Console container in the cluster. For details, see [nebula-console](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/doc/user/nebula_console.md#nebula-console). -## Steps +```yaml +spec: + console: + version: "nightly" +``` +Then enter the console container and run the following command to connect to the Graph service. -1. Use the pre-generated server and client certificates and private keys, and the CA certificate to create a Secret for each. +```bash +nebula-console -addr nebula-graphd-svc.default.svc.cluster.local -port 9669 -u root -p nebula -enable_ssl -ssl_root_ca_path /home/xxx/cert/root.crt -ssl_cert_path /home/xxx/cert/client.crt -ssl_private_key_path /home/xxx/cert/client.key +``` - ```yaml - kubectl create secret tls --key= --cert= - ``` +## mTLS without hot-reloading - - `tls`: Indicates that the type of secret being created is TLS, which is used to store TLS certificates. - - ``: Specifies the name of the new secret being created, which can be customized. 
- - `--key=`: Specifies the path to the private key file of the TLS certificate to be stored in the secret. - - `--cert=`: Specifies the path to the public key certificate file of the TLS certificate to be stored in the secret. - +??? info "If you don't need to perform TLS certificate hot-reloading and prefer to use TLS certificates stored in a Secret when deploying Kubernetes applications, expand to follow these steps" -2. Add server certificate, client certificate, CA certificate configuration, and encryption policy configuration in the corresponding cluster instance YAML file. For details, see [Encryption strategies](#encryption_strategies). - - For example, add encryption configuration for transmission data between client, Graph service, Meta service, and Storage service. + ### Create a TLS-type Secret - ```yaml - apiVersion: apps.nebula-graph.io/v1alpha1 - kind: NebulaCluster - metadata: - name: nebula - namespace: default - spec: - sslCerts: - serverSecret: "server-cert" // The name of the server Certificate Secret. - clientSecret: "client-cert" // The name of the client Certificate Secret. - caSecret: "ca-cert" // The name of the CA Certificate Secret. - graphd: - config: - enable_ssl: "true" - metad: - config: - enable_ssl: "true" - storaged: - config: - enable_ssl: "true" - ``` + In a K8s cluster, you can create Secrets to store sensitive information, such as passwords, OAuth tokens, and TLS certificates. In NebulaGraph, you can create a Secret to store TLS certificates and private keys. When creating a Secret, the type `tls` should be specified. A `tls` Secret is used to store TLS certificates. -3. Use `kubectl apply -f` to apply the file to the Kubernetes cluster. + For example, to create a Secret for storing server certificates and private keys: -4. Verify that the values of `serverCert`, `serverKey`, `clientCert`, `clientKey`, `caCert` under the `sslCerts` field in the cluster configuration match the key names of the certificates and private keys stored in the created Secret. - - ```bash - # Check the key names of the certificate and private key stored in the Secret. For example, check the key name of the CA certificate stored in the Secret. - kubectl get secret ca-cert -o yaml - ``` + ```bash + kubectl create secret tls --key= --cert= --namespace= + ``` - ```bash - # Check the cluster configuration file. - kubectl get nebulacluster nebula -o yaml - ``` + - ``: The name of the Secret storing the server certificate and private key. + - ``: The path to the server private key file. + - ``: The path to the server certificate file. + - ``: The namespace where the Secret is located. If `--namespace` is not specified, it defaults to the `default` namespace. - Example output: + You can follow the above steps to create Secrets for the client certificate and private key, and the CA certificate. - ``` - ... - spec: - sslCerts: - serverSecret: server-cert - serverCert: tls.crt - serverKey: tls.key - clientSecret: client-cert - clientCert: tls.crt - clientKey: tls.key - caSecret: ca-cert - caCert: ca.crt - ... - ``` - If the key names of the certificates and private keys stored in the Secret are different from the values of `serverCert`, `serverKey`, `clientCert`, `clientKey`, `caCert` under the `sslCerts` field in the cluster configuration, you need to execute `kubectl edit nebulacluster ` to manually modify the cluster configuration file. 
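For example, assuming the client and CA files are named `client.crt`, `client.key`, `ca.crt`, and `ca.key`, and that the Secrets are created in the `default` namespace (the Secret names match the `sslCerts` examples used later), the commands might look as follows. This is only an illustrative sketch; adjust the names and paths to your environment.

```bash
# Client certificate and private key (assumed file names).
kubectl create secret tls client-cert --key=client.key --cert=client.crt --namespace=default

# CA certificate and private key (assumed file names). Note that a TLS-type Secret
# stores the certificate under the key name tls.crt, which is why the caCert field
# needs to be adjusted later, as described below.
kubectl create secret tls ca-cert --key=ca.key --cert=ca.crt --namespace=default
```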
To view the created Secrets:

```bash
kubectl get secret --namespace=<namespace>
```

### Configure certificates

Operator provides the `sslCerts` field to specify the encrypted certificates. Its `serverSecret`, `clientSecret`, and `caSecret` subfields specify the Secret names of the NebulaGraph server certificate, client certificate, and CA certificate, respectively. When you specify these three fields, Operator reads the certificate content from the corresponding Secrets and mounts it into the cluster's Pods. `autoMountServerCerts` must be set to `true` if you want the server certificate and private key to be mounted into the Pod automatically. The default value is `false`.

```yaml
sslCerts:
  autoMountServerCerts: "true" # Automatically mount the server certificate and private key into the Pod.
  serverSecret: "server-cert"  # The name of the server certificate Secret.
  serverCert: ""               # The key name of the certificate in the server certificate Secret, default is tls.crt.
  serverKey: ""                # The key name of the private key in the server certificate Secret, default is tls.key.
  clientSecret: "client-cert"  # The name of the client certificate Secret.
  clientCert: ""               # The key name of the certificate in the client certificate Secret, default is tls.crt.
  clientKey: ""                # The key name of the private key in the client certificate Secret, default is tls.key.
  caSecret: "ca-cert"          # The name of the CA certificate Secret.
  caCert: ""                   # The key name of the certificate in the CA certificate Secret, default is ca.crt.
```

The `serverCert` and `serverKey`, `clientCert` and `clientKey`, and `caCert` fields specify the key names under which the certificates and private keys are stored in the server Secret, the client Secret, and the CA Secret. If you do not set these fields, Operator defaults `serverCert` and `clientCert` to `tls.crt`, `serverKey` and `clientKey` to `tls.key`, and `caCert` to `ca.crt`. However, a TLS-type Secret in Kubernetes always stores its certificate and private key under the key names `tls.crt` and `tls.key`. Therefore, after creating the NebulaGraph cluster, you need to manually change the `caCert` field from `ca.crt` to `tls.crt` in the cluster configuration so that Operator can correctly read the CA certificate content. If you want to customize the key names instead, specify them when creating the Secret; run `kubectl create secret generic -h` for help on creating a Secret with custom key names.

You can use the `insecureSkipVerify` field to decide whether the client verifies the server's certificate chain and hostname. In production environments, it is recommended to set this to `false` to ensure the security of communication.
If set to `true`, the client will not verify the server's certificate chain and hostname. + + ```yaml + sslCerts: + # Determines whether the client needs to verify the server's certificate chain and hostname when establishing an SSL connection. + insecureSkipVerify: false + ``` + + !!! caution + + Make sure that you have added the hostname or IP of the server to the server's certificate's `subjectAltName` field before the `insecureSkipVerify` is set to `false`. If the hostname or IP of the server is not added, an error will occur when the client verifies the server's certificate chain and hostname. For details, see [openssl](https://kubernetes.io/docs/tasks/administer-cluster/certificates/#openssl). + + When the certificates are approaching expiration, they can be automatically updated by installing [cert-manager](https://cert-manager.io/docs/installation/supported-releases/). NebulaGraph will monitor changes to the certificate directory files, and once a change is detected, it will load the new certificate content into memory. + + ### Encryption strategies + + NebulaGraph offers three encryption strategies that you can choose and configure according to your needs. + + - Encryption of client-graph and all inter-service communications + + If you want to encrypt all data transmission between the client, Graph service, Meta service, and Storage service, you need to add the `enable_ssl = true` field to each service. + + Here is an example configuration: + + ```yaml + apiVersion: apps.nebula-graph.io/v1alpha1 + kind: NebulaCluster + metadata: + name: nebula + namespace: default + spec: + sslCerts: + autoMountServerCerts: "true" # Automatically mount the server certificate and private key into the Pod. + serverSecret: "server-cert" # The Secret name of the server certificate and private key. + clientSecret: "client-cert" # The Secret name of the client certificate and private key. + caSecret: "ca-cert" # The Secret name of the CA certificate. + graphd: + config: + enable_ssl: "true" + metad: + config: + enable_ssl: "true" + storaged: + config: + enable_ssl: "true" + ``` + + + - Encryption of only Graph service communication + + If the K8s cluster is deployed in the same data center and only the port of the Graph service is exposed externally, you can choose to encrypt only data transmission between the client and the Graph service. In this case, other services can communicate internally without encryption. Just add the `enable_graph_ssl = true` field to the Graph service. + + Here is an example configuration: + + ```yaml + apiVersion: apps.nebula-graph.io/v1alpha1 + kind: NebulaCluster + metadata: + name: nebula + namespace: default + spec: + sslCerts: + autoMountServerCerts: "true" + serverSecret: "server-cert" + caSecret: "ca-cert" + graphd: + config: + enable_graph_ssl: "true" + ``` + + !!! note + + Because Operator doesn't need to call the Graph service through an interface, it's not necessary to set `clientSecret` in `sslCerts`. + + - Encryption of only Meta service communication + + If you need to transmit confidential information to the Meta service, you can choose to encrypt data transmission related to the Meta service. In this case, you need to add the `enable_meta_ssl = true` configuration to each component. 
    Here is an example configuration:

    ```yaml
    apiVersion: apps.nebula-graph.io/v1alpha1
    kind: NebulaCluster
    metadata:
      name: nebula
      namespace: default
    spec:
      sslCerts:
        autoMountServerCerts: "true"
        serverSecret: "server-cert"
        clientSecret: "client-cert"
        caSecret: "ca-cert"
      graphd:
        config:
          enable_meta_ssl: "true"
      metad:
        config:
          enable_meta_ssl: "true"
      storaged:
        config:
          enable_meta_ssl: "true"
    ```

After setting up the encryption policy, when an external [client](../../14.client/1.nebula-client.md) needs to connect to the Graph service with mutual TLS, you still need to set the relevant TLS fields according to the client you use. See the example of connecting with NebulaGraph Console in the steps below.

### Example of enabling mTLS without hot-reloading

1. Use the pre-generated server and client certificates and private keys, and the CA certificate to create a Secret for each.

   ```bash
   kubectl create secret tls <secret_name> --key=<key_file> --cert=<cert_file>
   ```

   - `tls`: Indicates that the type of Secret being created is TLS, which is used to store TLS certificates.
   - `<secret_name>`: Specifies the name of the new Secret being created, which can be customized.
   - `--key=<key_file>`: Specifies the path to the private key file of the TLS certificate to be stored in the Secret.
   - `--cert=<cert_file>`: Specifies the path to the public key certificate file of the TLS certificate to be stored in the Secret.

2. Add the server certificate, client certificate, CA certificate configuration, and encryption policy configuration in the corresponding cluster instance YAML file. For details, see [Encryption strategies](#encryption_strategies).

   For example, add encryption configuration for data transmitted between the client, Graph service, Meta service, and Storage service.

   ```yaml
   apiVersion: apps.nebula-graph.io/v1alpha1
   kind: NebulaCluster
   metadata:
     name: nebula
     namespace: default
   spec:
     sslCerts:
       autoMountServerCerts: "true"
       serverSecret: "server-cert"  # The name of the server certificate Secret.
       clientSecret: "client-cert"  # The name of the client certificate Secret.
       caSecret: "ca-cert"          # The name of the CA certificate Secret.
     graphd:
       config:
         enable_ssl: "true"
     metad:
       config:
         enable_ssl: "true"
     storaged:
       config:
         enable_ssl: "true"
   ```

3. Use `kubectl apply -f` to apply the file to the Kubernetes cluster.

4. Verify that the values of `serverCert`, `serverKey`, `clientCert`, `clientKey`, and `caCert` under the `sslCerts` field in the cluster configuration match the key names of the certificates and private keys stored in the created Secrets.

   ```bash
   # Check the key names of the certificate and private key stored in the Secret. For example, check the key name of the CA certificate stored in the Secret.
   kubectl get secret ca-cert -o yaml
   ```

   ```bash
   # Check the cluster configuration file.
   kubectl get nebulacluster nebula -o yaml
   ```

   Example output:

   ```
   ...
   spec:
     sslCerts:
       autoMountServerCerts: "true"
       serverSecret: server-cert
       serverCert: tls.crt
       serverKey: tls.key
       clientSecret: client-cert
       clientCert: tls.crt
       clientKey: tls.key
       caSecret: ca-cert
       caCert: ca.crt
       ...
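   # Annotation (not part of the command output): for a TLS-type Secret the certificate
   # is stored under the key name tls.crt, so the caCert value shown above (ca.crt)
   # needs to be updated as described in the following paragraph.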
+ ``` + + If the key names of the certificates and private keys stored in the Secret are different from the values of `serverCert`, `serverKey`, `clientCert`, `clientKey`, `caCert` under the `sslCerts` field in the cluster configuration, you need to execute `kubectl edit nebulacluster ` to manually modify the cluster configuration file. + + In the example output, the key name of the CA certificate in the TLS-type Secret is `tls.crt`, so you need to change the value of caCert from `ca.crt` to `tls.crt`. + + 5. Use NebulaGraph Console to connect to the Graph service and establish a secure TLS connection. + + Example: + + ``` + kubectl run -ti --image vesoft/nebula-console:v{{console.release}} --restart=Never -- nebula-console -addr 10.98.xxx.xx -port 9669 -u root -p nebula -enable_ssl -ssl_root_ca_path /path/to/cert/root.crt -ssl_cert_path /path/to/cert/client.crt -ssl_private_key_path /path/to/cert/client.key + ``` + + - `-enable_ssl`: Use mTLS when connecting to NebulaGraph. + - `-ssl_root_ca_path`: Specify the storage path of the CA root certificate. + - `-ssl_cert_path`: Specify the storage path of the TLS public key certificate. + - `-ssl_private_key_path`: Specify the storage path of the TLS private key. + - For details on using NebulaGraph Console to connect to the Graph service, see [Connect to NebulaGraph](../4.connect-to-nebula-graph-service.md). + + !!! note + + If you set `spec.console` to start a NebulaGraph Console container in the cluster, you can enter the console container and run the following command to connect to the Graph service. + + ```bash + nebula-console -addr 10.98.xxx.xx -port 9669 -u root -p nebula -enable_ssl -ssl_root_ca_path /path/to/cert/root.crt -ssl_cert_path /path/to/cert/client.crt -ssl_private_key_path /path/to/cert/client.key + ``` + + At this point, you can enable mTLS in NebulaGraph. - - `-enable_ssl`: Use mTLS when connecting to NebulaGraph. - - `-ssl_root_ca_path`: Specify the storage path of the CA root certificate. - - `-ssl_cert_path`: Specify the storage path of the TLS public key certificate. - - `-ssl_private_key_path`: Specify the storage path of the TLS private key. - - For details on using NebulaGraph Console to connect to the Graph service, see [Connect to NebulaGraph](../4.connect-to-nebula-graph-service.md). - -At this point, you can enable mTLS in NebulaGraph. 
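If you want to double-check the TLS handshake independently of NebulaGraph Console, you can run an `openssl s_client` probe against the Graph service from a pod that has the client certificate files. The service address and file paths below follow the examples above and are assumptions; adjust them to your deployment.

```bash
# Probe the Graph service and print the certificate chain it presents.
# When mTLS is configured correctly, the handshake completes without verification errors.
openssl s_client \
  -connect nebula-graphd-svc.default.svc.cluster.local:9669 \
  -CAfile /path/to/cert/root.crt \
  -cert /path/to/cert/client.crt \
  -key /path/to/cert/client.key </dev/null
```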
\ No newline at end of file diff --git a/mkdocs.yml b/mkdocs.yml index 15dff7b18d8..8cd66dcffd9 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -61,6 +61,7 @@ markdown_extensions: - pymdownx.superfences - pymdownx.tabbed: alternate_style: true + - pymdownx.details # Plugins plugins: @@ -104,6 +105,7 @@ plugins: - 20.appendix/release-notes/dashboard-ent-release-note.md - 20.appendix/release-notes/explorer-release-note.md - 4.deployment-and-installation/5.zone.md + - nebula-operator/8.custom-cluster-configurations/8.5.enable-ssl.md # ent.end # comm.begin @@ -250,9 +252,9 @@ extra: branch: release-1.2 tag: v1.2.0 operator: - release: 1.5.0 - tag: v1.5.0 - branch: release-1.5 + release: 1.6.0 + tag: v1.6.0 + branch: release-1.6 upgrade_from: 3.5.0 upgrade_to: 3.5.x exporter: @@ -733,9 +735,10 @@ nav: #ent - Balance storage data after scaling out: nebula-operator/8.custom-cluster-configurations/8.3.balance-data-when-scaling-storage.md - Manage cluster logs: nebula-operator/8.custom-cluster-configurations/8.4.manage-running-logs.md - - Enable mTLS: nebula-operator/8.custom-cluster-configurations/8.5.enable-ssl.md - - Specify a rolling update strategy: nebula-operator/11.rolling-update-strategy.md +#ent +#ent - Enable mTLS: nebula-operator/8.custom-cluster-configurations/8.5.enable-ssl.md - Upgrade NebulaGraph clusters: nebula-operator/9.upgrade-nebula-cluster.md + - Specify a rolling update strategy: nebula-operator/11.rolling-update-strategy.md #ent - Backup and restore: nebula-operator/10.backup-restore-using-operator.md - Self-healing: nebula-operator/5.operator-failover.md