[wip] [sc]updates for operator (#2255)
* [wip] [sc]add zones for operator

* add zone principle

* helm parameters

* update mtls in operator

* updates nebula-consle & mtls & fix comments

* add `autoMountServerCerts`

* add notes

* Update 1.introduction-to-nebula-operator.md

* Update 8.5.enable-ssl.md

* 3.6.0-ent-zone-for-core
abby-cyber authored Sep 19, 2023
1 parent 81c4758 commit e8f33ef
Showing 8 changed files with 943 additions and 259 deletions.
@@ -19,6 +19,7 @@ The following features are already available in NebulaGraph Operator:
- **Deploy and uninstall clusters**: NebulaGraph Operator simplifies deploying and uninstalling clusters. It allows you to quickly create, update, or delete a NebulaGraph cluster by simply providing the corresponding CR file. For more information, see [Deploy NebulaGraph Clusters with Kubectl](3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md) or [Deploy NebulaGraph Clusters with Helm](3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md).

{{ent.ent_begin}}
- **Manage Zones**: Supports dividing multiple storage hosts into managed zones and creating graph spaces on specified storage hosts to achieve resource isolation. For more information, see [Create clusters using Zones with kubectl](3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md#create_clusters) or [Create clusters using Zones with Helm](3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md).

- **Scale clusters**: NebulaGraph Operator calls NebulaGraph's native scaling interfaces in a control loop to implement the scaling logic. You can simply perform scaling operations with YAML configurations and ensure the stability of data. For more information, see [Scale clusters with Kubectl](3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md) or [Scale clusters with Helm](3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md).
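For readers following the kubectl path, here is a hedged sketch of how the Zone settings might appear in a NebulaCluster CR. The field paths mirror the `zone_list`, `prioritize_intra_zone_reading`, and `stick_to_intra_zone_on_failure` flags used in the Helm instructions in these docs; treat the exact schema as an assumption and confirm it in the linked kubectl guide:

```yaml
# Hypothetical excerpt of a NebulaCluster CR with Zones enabled.
# Field paths are assumed from the corresponding Helm flags
# (nebula.metad.config.zone_list, etc.); verify against the kubectl guide.
apiVersion: apps.nebula-graph.io/v1alpha1
kind: NebulaCluster
metadata:
  name: nebula
spec:
  metad:
    config:
      zone_list: "az1,az2,az3"              # cannot be changed once set
  graphd:
    config:
      prioritize_intra_zone_reading: "true"  # prefer reads within the same zone
      stick_to_intra_zone_on_failure: "false"
```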

@@ -40,7 +41,7 @@ NebulaGraph Operator does not support the v1.x version of NebulaGraph. NebulaGra

| NebulaGraph | NebulaGraph Operator |
| ------------- | -------------------- |
| 3.5.x | 1.5.0, 1.6.0 |
| 3.0.0 ~ 3.4.1 | 1.3.0, 1.4.0 ~ 1.4.2 |
| 3.0.0 ~ 3.3.x | 1.0.0, 1.1.0, 1.2.0 |
| 2.5.x ~ 2.6.x | 0.9.0 |
5 changes: 4 additions & 1 deletion docs-2.0/nebula-operator/10.backup-restore-using-operator.md
@@ -6,6 +6,10 @@ This article introduces how to back up and restore data of the NebulaGraph clust

This feature is available only for Enterprise Edition NebulaGraph clusters on Kubernetes.

!!! compatibility

    Make sure that the [Zone](../4.deployment-and-installation/5.zone.md) feature is not enabled in the NebulaGraph cluster before using backup and restore in Operator; otherwise, the backup and restore operations will fail. For details on Zones, see [Cluster with Zones](3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md#create_clusters).

## Overview

[NebulaGraph BR (Enterprise Edition), short for BR-ent](../backup-and-restore/nebula-br-ent/1.br-ent-overview.md) is a command-line tool for data backup and recovery of NebulaGraph Enterprise Edition. NebulaGraph Operator uses the BR-ent tool to back up and restore data for NebulaGraph clusters on Kubernetes.
@@ -14,7 +18,6 @@ When backing up data, NebulaGraph Operator creates a job to back up the data in

When restoring data, NebulaGraph Operator checks that the specified backup of the NebulaGraph cluster exists and that the remote storage is accessible, based on the information defined in the NebulaRestore resource object. It then creates a new cluster and restores the backup data to the new NebulaGraph cluster. For more information, see the [restore flowchart](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/doc/user/br_guide.md#restore-nebulagraph-cluster).
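As an illustration of the information the Operator checks, a NebulaRestore manifest might look roughly like the following. All field names below are hypothetical placeholders, not the authoritative schema; see the restore guide linked above for the actual CRD fields:

```yaml
# Hypothetical NebulaRestore manifest — field names are illustrative only.
apiVersion: apps.nebula-graph.io/v1alpha1
kind: NebulaRestore
metadata:
  name: restore-demo
spec:
  br:
    clusterName: nebula               # the backed-up cluster to restore from
    backupName: "BACKUP_EXAMPLE"      # name of the backup to restore
    s3:                               # remote storage the Operator verifies access to
      region: us-east-1
      bucket: nebula-backups
      endpoint: https://s3.us-east-1.amazonaws.com
      secretName: aws-s3-secret       # Secret holding the storage credentials
```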


## Prerequisites

To back up and restore data using NebulaGraph Operator, the following conditions must be met:


@@ -70,36 +70,67 @@
--set nebula.version=v{{nebula.release}} \
# Specify the version of the nebula-cluster chart. If not specified, the latest version of the chart is installed by default.
# Run 'helm search repo nebula-operator/nebula-cluster' to view the available versions of the chart.
--version={{operator.release}} \
--namespace="${NEBULA_CLUSTER_NAMESPACE}"
```

{{ent.ent_begin}}

To create a NebulaGraph cluster for Enterprise Edition, run the following command:

=== "Cluster without Zones"

```bash
helm install "${NEBULA_CLUSTER_NAME}" nebula-operator/nebula-cluster \
# Configure the access address and port (default port is '9119') that points to the LM. You must configure this parameter in order to obtain the license information. Only for NebulaGraph Enterprise Edition clusters.
    --set nebula.metad.licenseManagerURL="192.168.8.XXX:9119" \
# Configure the image addresses for each service in the cluster.
--set nebula.graphd.image=<reg.example-inc.com/test/graphd-ent> \
--set nebula.metad.image=<reg.example-inc.com/test/metad-ent> \
--set nebula.storaged.image=<reg.example-inc.com/test/storaged-ent> \
# Configure the Secret for pulling images from a private repository.
--set nebula.imagePullSecrets=<image-pull-secret> \
--set nameOverride=${NEBULA_CLUSTER_NAME} \
--set nebula.storageClassName="${STORAGE_CLASS_NAME}" \
# Specify the version of the NebulaGraph cluster.
--set nebula.version=v{{nebula.release}} \
# Specify the version of the nebula-cluster chart. If not specified, the latest version of the chart is installed by default.
# Run 'helm search repo nebula-operator/nebula-cluster' to view the available versions of the chart.
--version={{operator.release}} \
    --namespace="${NEBULA_CLUSTER_NAMESPACE}"
```

=== "Cluster with Zones"

    NebulaGraph Operator supports the [Zones](../../4.deployment-and-installation/5.zone.md) feature. For how to use Zones in NebulaGraph Operator, see [Learn more about Zones in NebulaGraph Operator](3.1create-cluster-with-kubectl.md).

```bash
helm install "${NEBULA_CLUSTER_NAME}" nebula-operator/nebula-cluster \
# Configure the access address and port (default port is '9119') that points to the LM. You must configure this parameter in order to obtain the license information. Only for NebulaGraph Enterprise Edition clusters.
    --set nebula.metad.licenseManagerURL="192.168.8.XXX:9119" \
# Configure the image addresses for each service in the cluster.
--set nebula.graphd.image=<reg.example-inc.com/test/graphd-ent> \
--set nebula.metad.image=<reg.example-inc.com/test/metad-ent> \
--set nebula.storaged.image=<reg.example-inc.com/test/storaged-ent> \
# Configure the Secret for pulling images from a private repository.
--set nebula.imagePullSecrets=<image-pull-secret> \
--set nameOverride=${NEBULA_CLUSTER_NAME} \
--set nebula.storageClassName="${STORAGE_CLASS_NAME}" \
# Specify the version of the NebulaGraph cluster.
--set nebula.version=v{{nebula.release}} \
# Specify the version of the nebula-cluster chart. If not specified, the latest version of the chart is installed by default.
# Run 'helm search repo nebula-operator/nebula-cluster' to view the available versions of the chart.
--version={{operator.release}} \
# Configure Zones
# Once Zones are configured, the Zone information cannot be modified.
# It's suggested to configure an odd number of Zones.
--set nebula.metad.config.zone_list=<zone1,zone2,zone3> \
--set nebula.graphd.config.prioritize_intra_zone_reading=true \
--set nebula.graphd.config.stick_to_intra_zone_on_failure=false \
    --namespace="${NEBULA_CLUSTER_NAMESPACE}"
```

{{ent.ent_end}}

To view all configuration parameters of the NebulaGraph cluster, run the `helm show values nebula-operator/nebula-cluster` command or click [nebula-cluster/values.yaml](https://github.com/vesoft-inc/nebula-operator/blob/{{operator.branch}}/charts/nebula-cluster/values.yaml).

@@ -150,9 +181,12 @@ helm upgrade "${NEBULA_CLUSTER_NAME}" nebula-operator/nebula-cluster \

Similarly, you can scale in a NebulaGraph cluster by setting the `replicas` value of the corresponding services to a number smaller than the original.
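One way to express the new replica counts is a small values override applied with `helm upgrade`. This is a sketch under the assumption that the chart exposes `nebula.<service>.replicas` keys, as the scale-out example above suggests; verify the exact keys with `helm show values nebula-operator/nebula-cluster`:

```yaml
# values-scale.yaml — hypothetical override for scaling in.
nebula:
  graphd:
    replicas: 2   # fewer Graph pods than before
  storaged:
    replicas: 3   # keep at least as many Storage pods as Zones in zone_list
```

You could then run `helm upgrade "${NEBULA_CLUSTER_NAME}" nebula-operator/nebula-cluster -f values-scale.yaml --namespace="${NEBULA_CLUSTER_NAMESPACE}"`, remembering to pass the original `--set` flags (or use `--reuse-values`) so the other settings are preserved.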

During cluster scale-in, if the operation does not complete successfully and appears to be stuck, you can check the status of the scaling job using the `nebula-console` client specified in the `spec.console` field. Analyzing the logs and intervening manually can help ensure that the job completes successfully. For information on how to check jobs, see [Job statements](../../3.ngql-guide/4.job-statements.md).

!!! caution

    - NebulaGraph Operator currently only supports scaling the Graph and Storage services; scaling the Meta service is not supported.
    - If you scale in a cluster with Zones, make sure that the number of remaining storage pods is not less than the number of Zones specified in the `spec.metad.config.zone_list` field. Otherwise, the cluster will fail to start.

You can click on [nebula-cluster/values.yaml](https://github.com/vesoft-inc/nebula-operator/blob/{{operator.tag}}/charts/nebula-cluster/values.yaml) to see more configurable parameters of the nebula-cluster chart. For more information about the descriptions of configurable parameters, see **Configuration parameters of the nebula-cluster Helm chart** below.

52 changes: 47 additions & 5 deletions docs-2.0/nebula-operator/4.connect-to-nebula-graph-service.md
@@ -95,6 +95,19 @@ Steps:
- `-u`: The username of your NebulaGraph account. Before enabling authentication, you can use any existing username. The default username is root.
- `-p`: The password of your NebulaGraph account. Before enabling authentication, you can use any characters as the password.

!!! note

If the `spec.console` field is set in the cluster configuration file, you can also connect to NebulaGraph databases with the following command:

```bash
# Enter the nebula-console Pod.
kubectl exec -it nebula-console -- /bin/sh
# Connect to NebulaGraph databases.
nebula-console -addr <node_ip> -port <node_port> -u <username> -p <password>
```

For information about the nebula-console container, see [nebula-console](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/doc/user/nebula_console.md#nebula-console).

## Connect to NebulaGraph databases from within a NebulaGraph cluster

@@ -160,6 +173,7 @@ You can also create a `ClusterIP` type Service to provide an access point to the

```bash
kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- nebula-console -addr 10.98.213.34 -port 9669 -u root -p vesoft
```

- `--image`: The image of NebulaGraph Console, the tool used to connect to NebulaGraph databases.
- `<nebula-console>`: The custom Pod name.
@@ -176,13 +190,27 @@ You can also create a `ClusterIP` type Service to provide an access point to the
(root@nebula) [(none)]>
```

You can also connect to NebulaGraph databases with **Fully Qualified Domain Name (FQDN)**. The domain format is `<cluster-name>-graphd.<cluster-namespace>.svc.<CLUSTER_DOMAIN>`. The default value of `CLUSTER_DOMAIN` is `cluster.local`.

```bash
kubectl run -ti --image vesoft/nebula-console:{{console.tag}} --restart=Never -- <nebula_console_name> -addr <cluster_name>-graphd-svc.default.svc.cluster.local -port <service_port> -u <username> -p <password>
```

`service_port` is the port used to connect to the Graph service. The default port is `9669`.

!!! note

If the `spec.console` field is set in the cluster configuration file, you can also connect to NebulaGraph databases with the following command:

```bash
# Enter the nebula-console Pod.
kubectl exec -it nebula-console -- /bin/sh
# Connect to NebulaGraph databases.
nebula-console -addr nebula-graphd-svc.default.svc.cluster.local -port 9669 -u <username> -p <password>
```

For information about the nebula-console container, see [nebula-console](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/doc/user/nebula_console.md#nebula-console).

## Connect to NebulaGraph databases from outside a NebulaGraph cluster via Ingress

@@ -275,3 +303,17 @@ Steps are as follows.
If you don't see a command prompt, try pressing enter.
(root@nebula) [(none)]>
```

!!! note

If the `spec.console` field is set in the cluster configuration file, you can also connect to NebulaGraph databases with the following command:

```bash
# Enter the nebula-console Pod.
kubectl exec -it nebula-console -- /bin/sh
# Connect to NebulaGraph databases.
nebula-console -addr <host_ip> -port <external_port> -u <username> -p <password>
```

For information about the nebula-console container, see [nebula-console](https://github.com/vesoft-inc/nebula-operator/blob/v{{operator.release}}/doc/user/nebula_console.md#nebula-console).
@@ -4,6 +4,10 @@ NebulaGraph Operator uses PVs (Persistent Volumes) and PVCs (Persistent Volume C

You can also have PVCs deleted automatically to release data by setting `spec.enablePVReclaim` to `true` in the configuration file of the cluster instance. Whether the PV is then deleted automatically after its PVC is deleted depends on the PV reclaim policy you configure. See [reclaimPolicy in StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/#reclaim-policy) and [PV Reclaiming](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming) for details.
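For instance, a minimal sketch of the two settings involved. The `spec.enablePVReclaim` path is taken from this page; the StorageClass below is a generic Kubernetes example with an assumed name and provisioner:

```yaml
# Excerpt of a NebulaCluster CR: delete PVCs automatically when released.
spec:
  enablePVReclaim: true
---
# Whether the underlying PV is then deleted is governed by the StorageClass
# reclaim policy (standard Kubernetes behavior, not Operator-specific).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-disks                 # hypothetical name
provisioner: kubernetes.io/no-provisioner  # example provisioner
reclaimPolicy: Delete              # PV is removed when its PVC is deleted
```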

## Notes

NebulaGraph Operator currently does not support resizing Persistent Volume Claims (PVCs); this feature is expected to be supported in version 1.6.1. Additionally, the Operator does not support dynamically adding or mounting storage volumes to a running storaged instance.

## Prerequisites

You have created a cluster. For how to create a cluster with Kubectl, see [Create a cluster with Kubectl](../3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md).