From c264c8c63e88b5c9a9c2e4a71f1254a2a9e3c6cb Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Philippe=20No=C3=ABl?= <21990816+philippemnoel@users.noreply.github.com> Date: Thu, 7 Nov 2024 16:06:05 -0500 Subject: [PATCH] docs: Update docs following BYOC (#52) --- .prettierignore | 2 +- README.md | 63 +++++++++------------ charts/paradedb/README.md | 97 ++++++++++++++------------------ charts/paradedb/README.md.gotmpl | 95 +++++++++++-------------------- 4 files changed, 102 insertions(+), 155 deletions(-) diff --git a/.prettierignore b/.prettierignore index 070ddcc82..23f1068ec 100644 --- a/.prettierignore +++ b/.prettierignore @@ -1,5 +1,5 @@ /.github/actions/ /.github/workflows/lint.yml /.github/workflows/tests-*.yml -.github/workflows/tests-*.yaml +/.github/workflows/tests-*.yaml /charts/ diff --git a/README.md b/README.md index ea6587b07..a85e919cb 100644 --- a/README.md +++ b/README.md @@ -26,44 +26,45 @@ # ParadeDB Helm Chart -The [ParadeDB](https://github.com/paradedb/paradedb) Helm Chart is based on the official [CloudNativePG Helm Chart](https://cloudnative-pg.io/). CloudNativePG is a Kubernetes operator that manages the full lifecycle of a highly available PostgreSQL database cluster with a primary/standby architecture using Postgres streaming replication. +The [ParadeDB](https://github.com/paradedb/paradedb) Helm Chart is based on the official [CloudNativePG Helm Chart](https://cloudnative-pg.io/). CloudNativePG is a Kubernetes operator that manages the full lifecycle of a highly available PostgreSQL database cluster with a primary/standby architecture using Postgres streaming (physical) replication. Kubernetes, and specifically the CloudNativePG operator, is the recommended approach for deploying ParadeDB in production, with high availability. ParadeDB also provides a [Docker image](https://hub.docker.com/r/paradedb/paradedb) and [prebuilt binaries](https://github.com/paradedb/paradedb/releases) for Debian, Ubuntu and Red Hat Enterprise Linux. +The ParadeDB Helm Chart supports Postgres 13+ and ships with Postgres 16 by default. + The chart is also available on [Artifact Hub](https://artifacthub.io/packages/helm/paradedb/paradedb). -## Getting Started +## Usage -First, install [Helm](https://helm.sh/docs/intro/install/). The following steps assume you have a Kubernetes cluster running v1.25+. If you are testing locally, we recommend using [Minikube](https://minikube.sigs.k8s.io/docs/start/). +### ParadeDB Bring-Your-Own-Cloud (BYOC) -### Installing the Prometheus Stack +The most reliable way to run ParadeDB in production is with ParadeDB BYOC, an end-to-end managed solution that runs in the customer’s cloud account. It deploys on managed Kubernetes services and uses the ParadeDB Helm Chart. -The ParadeDB Helm chart supports monitoring via Prometheus and Grafana. To enable this, you need to have the Prometheus CRDs installed before installing the CloudNativePG operator. If you do not yet have the Prometheus CRDs installed on your Kubernetes cluster, you can install it with: +ParadeDB BYOC includes built-in integration with managed PostgreSQL services, such as AWS RDS, via logical replication. It also provides monitoring, logging and alerting through Prometheus and Grafana. The ParadeDB team manages the underlying infrastructure and lifecycle of the cluster. 
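+
+BYOC manages this replication pipeline for you. For readers curious about the mechanism, it is built on standard Postgres logical replication primitives; the commands below are only an illustrative sketch with placeholder connection details and names, not the BYOC tooling itself:
+
+```bash
+# Illustrative only: on the managed Postgres source (e.g. RDS), publish the tables to replicate
+psql "host=<source-endpoint> user=<user> dbname=<source-db>" -c "CREATE PUBLICATION paradedb_pub FOR ALL TABLES;"
+# On the ParadeDB side, subscribe to that publication
+psql -d paradedb -c "CREATE SUBSCRIPTION paradedb_sub CONNECTION 'host=<source-endpoint> user=<user> password=<password> dbname=<source-db>' PUBLICATION paradedb_pub;"
+```
+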
-```bash
-helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
-helm upgrade --atomic --install prometheus-community \
---create-namespace \
---namespace prometheus-community \
---values https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/main/docs/src/samples/monitoring/kube-stack-config.yaml \
-prometheus-community/kube-prometheus-stack
-```
+You can read more about the optimal architecture for running ParadeDB in production [here](https://docs.paradedb.com/deploy/overview), and you can contact sales [here](mailto:sales@paradedb.com).
+
+### Self-Hosted
+
+First, install [Helm](https://helm.sh/docs/intro/install/). The following steps assume you have a Kubernetes cluster running v1.25+. If you are testing locally, we recommend using [Minikube](https://minikube.sigs.k8s.io/docs/start/).

-### Installing the CloudNativePG Operator
+#### Monitoring

-Skip this step if the CloudNativePG operator is already installed in your cluster. If you do not wish to monitor your cluster, omit the `--set` commands.
+The ParadeDB Helm chart supports monitoring via Prometheus and Grafana. To enable monitoring, you need to have the Prometheus CRDs installed before installing the CloudNativePG operator. The Prometheus CRDs can be found [here](https://prometheus-community.github.io/helm-charts).
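+
+For example, the Prometheus CRDs ship with the community `kube-prometheus-stack` chart. One way to install it (this is the command the previous version of these docs used, together with the CloudNativePG sample monitoring values) is:
+
+```bash
+helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
+helm upgrade --atomic --install prometheus-community \
+--create-namespace \
+--namespace prometheus-community \
+--values https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/main/docs/src/samples/monitoring/kube-stack-config.yaml \
+prometheus-community/kube-prometheus-stack
+```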
+
+#### Installing the CloudNativePG Operator
+
+Skip this step if the CloudNativePG operator is already installed in your cluster. For advanced CloudNativePG configuration and monitoring, please refer to the [CloudNativePG Operator Chart documentation](https://github.com/cloudnative-pg/charts/blob/main/charts/cloudnative-pg/README.md#values).

```bash
helm repo add cnpg https://cloudnative-pg.github.io/charts
helm upgrade --atomic --install cnpg \
--create-namespace \
--namespace cnpg-system \
---set monitoring.podMonitorEnabled=true \
---set monitoring.grafanaDashboard.create=true \
cnpg/cloudnative-pg
```

-### Setting up a ParadeDB CNPG Cluster
+#### Setting up a ParadeDB CNPG Cluster

Create a `values.yaml` and configure it to your requirements. Here is a basic example:
@@ -77,7 +78,7 @@ cluster:
size: 256Mi
```

-Then, launch the ParadeDB cluster. If you do not wish to monitor your cluster, omit the `--set` command.
+Then, launch the ParadeDB cluster.

```bash
helm repo add paradedb https://paradedb.github.io/charts
@@ -85,40 +86,28 @@ helm upgrade --atomic --install paradedb \
--namespace paradedb \
--create-namespace \
--values values.yaml \
---set cluster.monitoring.enabled=true \
paradedb/paradedb
```

-If `--values values.yaml` is omitted, the default values will be used. For additional configuration options for the `values.yaml` file, including configuring backups and PgBouncer, please refer to the [ParadeDB Helm Chart documentation](https://artifacthub.io/packages/helm/paradedb/paradedb#values). For advanced cluster configuration options, please refer to the [CloudNativePG Cluster Chart documentation](charts/paradedb/README.md).
+If `--values values.yaml` is omitted, the default values will be used. For advanced ParadeDB configuration and monitoring, please refer to the [ParadeDB Chart documentation](https://github.com/paradedb/charts/tree/dev/charts/paradedb#values).

-### Connecting to a ParadeDB CNPG Cluster
+#### Connecting to a ParadeDB CNPG Cluster

-The command to connect to the primary instance of the cluster will be printed in your terminal. If you do not modify any settings, it will be:
+You can launch a Bash shell inside a specific pod via:

```bash
-kubectl --namespace paradedb exec --stdin --tty services/paradedb-rw -- bash
+kubectl exec --stdin --tty -n paradedb <pod-name> -- bash
```

-This will launch a Bash shell inside the instance. You can connect to the ParadeDB database via `psql` with:
+The primary is called `paradedb-1`. The replicas are called `paradedb-2` onwards, depending on the number of replicas you configured. You can connect to the ParadeDB database with `psql` via:

```bash
psql -d paradedb
```
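+
+To check which pod is currently the primary, you can list the pods and inspect the CloudNativePG `Cluster` resource. The resource and namespace names below assume the default cluster name `paradedb`; adjust them if you changed it:
+
+```bash
+# List the instance pods (paradedb-1, paradedb-2, ...)
+kubectl --namespace paradedb get pods
+# The Cluster resource created by the operator reports the current primary in its status
+kubectl --namespace paradedb get cluster paradedb
+```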

-### Connecting to the Grafana Dashboard
-
-To connect to the Grafana dashboard for your cluster, we suggested port forwarding the Kubernetes service running Grafana to localhost:
-
-```bash
-kubectl --namespace prometheus-community port-forward svc/prometheus-community-grafana 3000:80
-```
-
-You can then access the Grafana dasbhoard at [http://localhost:3000/](http://localhost:3000/) using the credentials `admin` as username and `prom-operator` as password. These default credentials are
-defined in the [`kube-stack-config.yaml`](https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/main/docs/src/samples/monitoring/kube-stack-config.yaml) file used as the `values.yaml` file in [Installing the Prometheus CRDs](#installing-the-prometheus-stack) and can be modified by providing your own `values.yaml` file.

## Development

-To test changes to the Chart on a local Minikube cluster, follow the instructions from [Getting Started](#getting-started), replacing the `helm upgrade` step by the path to the directory of the modified `Chart.yaml`.
+To test changes to the Chart on a local Minikube cluster, follow the instructions from [Self-Hosted](#self-hosted), replacing the `helm upgrade` step by the path to the directory of the modified `Chart.yaml`.

```bash
helm upgrade --atomic --install paradedb --namespace paradedb --create-namespace ./charts/paradedb
diff --git a/charts/paradedb/README.md b/charts/paradedb/README.md
index b6bca59c9..01a71a23c 100644
--- a/charts/paradedb/README.md
+++ b/charts/paradedb/README.md
@@ -1,43 +1,44 @@
-# ParadeDB CloudNativePG Cluster
+# ParadeDB Helm Chart

-The [ParadeDB](https://github.com/paradedb/paradedb) Helm Chart is based on the official [CloudNativePG Helm Chart](https://cloudnative-pg.io/). CloudNativePG is a Kubernetes operator that manages the full lifecycle of a highly available PostgreSQL database cluster with a primary/standby architecture using Postgres streaming replication.
+The [ParadeDB](https://github.com/paradedb/paradedb) Helm Chart is based on the official [CloudNativePG Helm Chart](https://cloudnative-pg.io/). CloudNativePG is a Kubernetes operator that manages the full lifecycle of a highly available PostgreSQL database cluster with a primary/standby architecture using Postgres streaming (physical) replication.

Kubernetes, and specifically the CloudNativePG operator, is the recommended approach for deploying ParadeDB in production, with high availability. ParadeDB also provides a [Docker image](https://hub.docker.com/r/paradedb/paradedb) and [prebuilt binaries](https://github.com/paradedb/paradedb/releases) for Debian, Ubuntu and Red Hat Enterprise Linux.

+The ParadeDB Helm Chart supports Postgres 13+ and ships with Postgres 16 by default.
+
The chart is also available on [Artifact Hub](https://artifacthub.io/packages/helm/paradedb/paradedb).

-## Getting Started
+## Usage

-First, install [Helm](https://helm.sh/docs/intro/install/). The following steps assume you have a Kubernetes cluster running v1.25+. If you are testing locally, we recommend using [Minikube](https://minikube.sigs.k8s.io/docs/start/).
+### ParadeDB Bring-Your-Own-Cloud (BYOC)

-### Installing the Prometheus Stack
+The most reliable way to run ParadeDB in production is with ParadeDB BYOC, an end-to-end managed solution that runs in the customer’s cloud account. It deploys on managed Kubernetes services and uses the ParadeDB Helm Chart.

-The ParadeDB Helm chart supports monitoring via Prometheus and Grafana. To enable this, you need to have the Prometheus CRDs installed before installing the CloudNativePG operator. If you do not yet have the Prometheus CRDs installed on your Kubernetes cluster, you can install it with:
+ParadeDB BYOC includes built-in integration with managed PostgreSQL services, such as AWS RDS, via logical replication. It also provides monitoring, logging and alerting through Prometheus and Grafana. The ParadeDB team manages the underlying infrastructure and lifecycle of the cluster.

-```bash
-helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
-helm upgrade --atomic --install prometheus-community \
---create-namespace \
---namespace prometheus-community \
---values https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/main/docs/src/samples/monitoring/kube-stack-config.yaml \
-prometheus-community/kube-prometheus-stack
-```
+You can read more about the optimal architecture for running ParadeDB in production [here](https://docs.paradedb.com/deploy/overview), and you can contact sales [here](mailto:sales@paradedb.com).
+
+### Self-Hosted
+
+First, install [Helm](https://helm.sh/docs/intro/install/). The following steps assume you have a Kubernetes cluster running v1.25+. If you are testing locally, we recommend using [Minikube](https://minikube.sigs.k8s.io/docs/start/).

-### Installing the CloudNativePG Operator
+#### Monitoring

-Skip this step if the CloudNativePG operator is already installed in your cluster. If you do not wish to monitor your cluster, omit the `--set` commands.
+The ParadeDB Helm chart supports monitoring via Prometheus and Grafana. To enable monitoring, you need to have the Prometheus CRDs installed before installing the CloudNativePG operator. The Prometheus CRDs can be found [here](https://prometheus-community.github.io/helm-charts).
+
+#### Installing the CloudNativePG Operator
+
+Skip this step if the CloudNativePG operator is already installed in your cluster. For advanced CloudNativePG configuration and monitoring, please refer to the [CloudNativePG Operator Chart documentation](https://github.com/cloudnative-pg/charts/blob/main/charts/cloudnative-pg/README.md#values).

```bash
helm repo add cnpg https://cloudnative-pg.github.io/charts
helm upgrade --atomic --install cnpg \
--create-namespace \
--namespace cnpg-system \
---set monitoring.podMonitorEnabled=true \
---set monitoring.grafanaDashboard.create=true \
cnpg/cloudnative-pg
```

-### Setting up a ParadeDB CNPG Cluster
+#### Setting up a ParadeDB CNPG Cluster

Create a `values.yaml` and configure it to your requirements. Here is a basic example:
@@ -51,7 +52,7 @@ cluster:
size: 256Mi
```
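+
+If you installed the Prometheus CRDs and want the cluster to expose metrics, you can also turn on monitoring in the same `values.yaml`. This is a minimal sketch; `cluster.monitoring.enabled` is the value that the `--set` flag in earlier versions of these docs toggled:
+
+```yaml
+cluster:
+  monitoring:
+    enabled: true
+```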

-Then, launch the ParadeDB cluster. If you do not wish to monitor your cluster, omit the `--set` command.
+Then, launch the ParadeDB cluster.

```bash
helm repo add paradedb https://paradedb.github.io/charts
@@ -59,52 +60,40 @@ helm upgrade --atomic --install paradedb \
--namespace paradedb \
--create-namespace \
--values values.yaml \
---set cluster.monitoring.enabled=true \
paradedb/paradedb
```

-If `--values values.yaml` is omitted, the default values will be used. For additional configuration options for the `values.yaml` file, including configuring backups and PgBouncer, please refer to the [ParadeDB Helm Chart documentation](https://artifacthub.io/packages/helm/paradedb/paradedb#values). For advanced cluster configuration options, please refer to the [CloudNativePG Cluster Chart documentation](charts/paradedb/README.md).
+If `--values values.yaml` is omitted, the default values will be used. For advanced ParadeDB configuration and monitoring, please refer to the [ParadeDB Chart documentation](https://github.com/paradedb/charts/tree/dev/charts/paradedb#values).

-### Connecting to a ParadeDB CNPG Cluster
+#### Connecting to a ParadeDB CNPG Cluster

-The command to connect to the primary instance of the cluster will be printed in your terminal. If you do not modify any settings, it will be:
+You can launch a Bash shell inside a specific pod via:

```bash
-kubectl --namespace paradedb exec --stdin --tty services/paradedb-rw -- bash
+kubectl exec --stdin --tty -n paradedb <pod-name> -- bash
```

-This will launch a Bash shell inside the instance. You can connect to the ParadeDB database via `psql` with:
+The primary is called `paradedb-1`. The replicas are called `paradedb-2` onwards, depending on the number of replicas you configured. You can connect to the ParadeDB database with `psql` via:

```bash
psql -d paradedb
```

-### Connecting to the Grafana Dashboard
-
-To connect to the Grafana dashboard for your cluster, we suggested port forwarding the Kubernetes service running Grafana to localhost:
-
-```bash
-kubectl --namespace prometheus-community port-forward svc/prometheus-community-grafana 3000:80
-```
-
-You can then access the Grafana dasbhoard at [http://localhost:3000/](http://localhost:3000/) using the credentials `admin` as username and `prom-operator` as password. These default credentials are
-defined in the [`kube-stack-config.yaml`](https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/main/docs/src/samples/monitoring/kube-stack-config.yaml) file used as the `values.yaml` file in [Installing the Prometheus CRDs](#installing-the-prometheus-stack) and can be modified by providing your own `values.yaml` file.

## Development

-To test changes to the Chart on a local Minikube cluster, follow the instructions from [Getting Started](#getting-started), replacing the `helm upgrade` step by the path to the directory of the modified `Chart.yaml`.
+To test changes to the Chart on a local Minikube cluster, follow the instructions from [Self-Hosted](#self-hosted), replacing the `helm upgrade` step by the path to the directory of the modified `Chart.yaml`.

```bash
helm upgrade --atomic --install paradedb --namespace paradedb --create-namespace ./charts/paradedb
```

-## Cluster Configuration
+## Advanced Cluster Configuration

-### Database types
+### Database Types

To use the ParadeDB Helm Chart, specify `paradedb` via the `type` parameter.

-### Modes of operation
+### Modes of Operation

The chart has three modes of operation. These are configured via the `mode` parameter:
@@ -112,7 +101,7 @@ The chart has three modes of operation.
These are configured via the `mode` para * `replica` - Creates a replica cluster from an existing CNPG cluster. **_Note_ that this mode is not yet supported.** * `recovery` - Recovers a CNPG cluster from a backup, object store or via pg_basebackup. -### Backup configuration +### Backup Configuration CNPG implements disaster recovery via [Barman](https://pgbarman.org/). The following section configures the barman object store where backups will be stored. Barman performs backups of the cluster filesystem base backup and WALs. Both are @@ -302,18 +291,18 @@ refer to the [CloudNativePG Documentation](https://cloudnative-pg.io/documentat | type | string | `"paradedb"` | Type of the CNPG database. Available types: * `paradedb` | | version.paradedb | string | `"0.12.0"` | We default to v0.12.0 for testing and local development | | version.postgresql | string | `"16"` | PostgreSQL major version to use | -| poolers[].name | string | `` | Name of the pooler resource | -| poolers[].instances | number | `1` | The number of replicas we want | -| poolers[].type | [PoolerType][PoolerType] | `rw` | Type of service to forward traffic to. Default: `rw`. | -| poolers[].poolMode | [PgBouncerPoolMode][PgBouncerPoolMode] | `session` | The pool mode. Default: `session`. | -| poolers[].authQuerySecret | [LocalObjectReference][LocalObjectReference] | `{}` | The credentials of the user that need to be used for the authentication query. | -| poolers[].authQuery | string | `{}` | The credentials of the user that need to be used for the authentication query. | -| poolers[].parameters | map[string]string | `{}` | Additional parameters to be passed to PgBouncer - please check the CNPG documentation for a list of options you can configure | -| poolers[].template | [PodTemplateSpec][PodTemplateSpec] | `{}` | The template of the Pod to be created | -| poolers[].template | [ServiceTemplateSpec][ServiceTemplateSpec] | `{}` | Template for the Service to be created | -| poolers[].pg_hba | []string | `{}` | PostgreSQL Host Based Authentication rules (lines to be appended to the pg_hba.conf file) | -| poolers[].monitoring.enabled | bool | `false` | Whether to enable monitoring for the Pooler. | -| poolers[].monitoring.podMonitor.enabled | bool | `true` | Create a podMonitor for the Pooler. | +| poolers[].name | string | `` | Name of the pooler resource | +| poolers[].instances | number | `1` | The number of replicas we want | +| poolers[].type | [PoolerType][PoolerType] | `rw` | Type of service to forward traffic to. Default: `rw`. | +| poolers[].poolMode | [PgBouncerPoolMode][PgBouncerPoolMode] | `session` | The pool mode. Default: `session`. | +| poolers[].authQuerySecret | [LocalObjectReference][LocalObjectReference] | `{}` | The credentials of the user that need to be used for the authentication query. | +| poolers[].authQuery | string | `{}` | The credentials of the user that need to be used for the authentication query. 
| +| poolers[].parameters | map[string]string | `{}` | Additional parameters to be passed to PgBouncer - please check the CNPG documentation for a list of options you can configure | +| poolers[].template | [PodTemplateSpec][PodTemplateSpec] | `{}` | The template of the Pod to be created | +| poolers[].template | [ServiceTemplateSpec][ServiceTemplateSpec] | `{}` | Template for the Service to be created | +| poolers[].pg_hba | []string | `{}` | PostgreSQL Host Based Authentication rules (lines to be appended to the pg_hba.conf file) | +| poolers[].monitoring.enabled | bool | `false` | Whether to enable monitoring for the Pooler. | +| poolers[].monitoring.podMonitor.enabled | bool | `true` | Create a podMonitor for the Pooler. | ## Maintainers diff --git a/charts/paradedb/README.md.gotmpl b/charts/paradedb/README.md.gotmpl index c40de8dc9..5b2b50ef7 100644 --- a/charts/paradedb/README.md.gotmpl +++ b/charts/paradedb/README.md.gotmpl @@ -1,47 +1,44 @@ -# ParadeDB CloudNativePG Cluster +# ParadeDB Helm Chart +The [ParadeDB](https://github.com/paradedb/paradedb) Helm Chart is based on the official [CloudNativePG Helm Chart](https://cloudnative-pg.io/). CloudNativePG is a Kubernetes operator that manages the full lifecycle of a highly available PostgreSQL database cluster with a primary/standby architecture using Postgres streaming (physical) replication. -{{ template "chart.deprecationWarning" . }} +Kubernetes, and specifically the CloudNativePG operator, is the recommended approach for deploying ParadeDB in production, with high availability. ParadeDB also provides a [Docker image](https://hub.docker.com/r/paradedb/paradedb) and [prebuilt binaries](https://github.com/paradedb/paradedb/releases) for Debian, Ubuntu and Red Hat Enterprise Linux. +The ParadeDB Helm Chart supports Postgres 13+ and ships with Postgres 16 by default. -The [ParadeDB](https://github.com/paradedb/paradedb) Helm Chart is based on the official [CloudNativePG Helm Chart](https://cloudnative-pg.io/). CloudNativePG is a Kubernetes operator that manages the full lifecycle of a highly available PostgreSQL database cluster with a primary/standby architecture using Postgres streaming replication. +The chart is also available on [Artifact Hub](https://artifacthub.io/packages/helm/paradedb/paradedb). -Kubernetes, and specifically the CloudNativePG operator, is the recommended approach for deploying ParadeDB in production, with high availability. ParadeDB also provides a [Docker image](https://hub.docker.com/r/paradedb/paradedb) and [prebuilt binaries](https://github.com/paradedb/paradedb/releases) for Debian, Ubuntu and Red Hat Enterprise Linux. +## Usage -The chart is also available on [Artifact Hub](https://artifacthub.io/packages/helm/paradedb/paradedb). +### ParadeDB Bring-Your-Own-Cloud (BYOC) -## Getting Started +The most reliable way to run ParadeDB in production is with ParadeDB BYOC, an end-to-end managed solution that runs in the customer’s cloud account. It deploys on managed Kubernetes services and uses the ParadeDB Helm Chart. -First, install [Helm](https://helm.sh/docs/intro/install/). The following steps assume you have a Kubernetes cluster running v1.25+. If you are testing locally, we recommend using [Minikube](https://minikube.sigs.k8s.io/docs/start/). +ParadeDB BYOC includes built-in integration with managed PostgreSQL services, such as AWS RDS, via logical replication. It also provides monitoring, logging and alerting through Prometheus and Grafana. 
The ParadeDB team manages the underlying infrastructure and lifecycle of the cluster.

-### Installing the Prometheus Stack
+You can read more about the optimal architecture for running ParadeDB in production [here](https://docs.paradedb.com/deploy/overview), and you can contact sales [here](mailto:sales@paradedb.com).

-The ParadeDB Helm chart supports monitoring via Prometheus and Grafana. To enable this, you need to have the Prometheus CRDs installed before installing the CloudNativePG operator. If you do not yet have the Prometheus CRDs installed on your Kubernetes cluster, you can install it with:
+### Self-Hosted

-```bash
-helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
-helm upgrade --atomic --install prometheus-community \
---create-namespace \
---namespace prometheus-community \
---values https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/main/docs/src/samples/monitoring/kube-stack-config.yaml \
-prometheus-community/kube-prometheus-stack
-```
+First, install [Helm](https://helm.sh/docs/intro/install/). The following steps assume you have a Kubernetes cluster running v1.25+. If you are testing locally, we recommend using [Minikube](https://minikube.sigs.k8s.io/docs/start/).
+
+#### Monitoring
+
+The ParadeDB Helm chart supports monitoring via Prometheus and Grafana. To enable monitoring, you need to have the Prometheus CRDs installed before installing the CloudNativePG operator. The Prometheus CRDs can be found [here](https://prometheus-community.github.io/helm-charts).

-### Installing the CloudNativePG Operator
+#### Installing the CloudNativePG Operator

-Skip this step if the CloudNativePG operator is already installed in your cluster. If you do not wish to monitor your cluster, omit the `--set` commands.
+Skip this step if the CloudNativePG operator is already installed in your cluster. For advanced CloudNativePG configuration and monitoring, please refer to the [CloudNativePG Operator Chart documentation](https://github.com/cloudnative-pg/charts/blob/main/charts/cloudnative-pg/README.md#values).

```bash
helm repo add cnpg https://cloudnative-pg.github.io/charts
helm upgrade --atomic --install cnpg \
--create-namespace \
--namespace cnpg-system \
---set monitoring.podMonitorEnabled=true \
---set monitoring.grafanaDashboard.create=true \
cnpg/cloudnative-pg
```

-### Setting up a ParadeDB CNPG Cluster
+#### Setting up a ParadeDB CNPG Cluster

Create a `values.yaml` and configure it to your requirements. Here is a basic example:
@@ -55,7 +52,7 @@ cluster:
size: 256Mi
```

-Then, launch the ParadeDB cluster. If you do not wish to monitor your cluster, omit the `--set` command.
+Then, launch the ParadeDB cluster.

```bash
helm repo add paradedb https://paradedb.github.io/charts
@@ -63,52 +60,40 @@ helm upgrade --atomic --install paradedb \
--namespace paradedb \
--create-namespace \
--values values.yaml \
---set cluster.monitoring.enabled=true \
paradedb/paradedb
```

-If `--values values.yaml` is omitted, the default values will be used. For additional configuration options for the `values.yaml` file, including configuring backups and PgBouncer, please refer to the [ParadeDB Helm Chart documentation](https://artifacthub.io/packages/helm/paradedb/paradedb#values). For advanced cluster configuration options, please refer to the [CloudNativePG Cluster Chart documentation](charts/paradedb/README.md).
+If `--values values.yaml` is omitted, the default values will be used. For advanced ParadeDB configuration and monitoring, please refer to the [ParadeDB Chart documentation](https://github.com/paradedb/charts/tree/dev/charts/paradedb#values).

-### Connecting to a ParadeDB CNPG Cluster
+#### Connecting to a ParadeDB CNPG Cluster

-The command to connect to the primary instance of the cluster will be printed in your terminal. If you do not modify any settings, it will be:
+You can launch a Bash shell inside a specific pod via:

```bash
-kubectl --namespace paradedb exec --stdin --tty services/paradedb-rw -- bash
+kubectl exec --stdin --tty -n paradedb <pod-name> -- bash
```

-This will launch a Bash shell inside the instance. You can connect to the ParadeDB database via `psql` with:
+The primary is called `paradedb-1`. The replicas are called `paradedb-2` onwards, depending on the number of replicas you configured. You can connect to the ParadeDB database with `psql` via:

```bash
psql -d paradedb
```
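+
+If you prefer to connect from outside the Kubernetes cluster, one option is to port-forward the read-write service and point `psql` at it. This is a sketch: the service name below assumes the default cluster name `paradedb`, and the user and password depend on how the chart was configured (CloudNativePG stores the generated credentials in Kubernetes secrets):
+
+```bash
+# Forward the read-write service to localhost (run in a separate terminal)
+kubectl --namespace paradedb port-forward svc/paradedb-rw 5432:5432
+# Connect with psql; replace <user> with the configured database user
+psql -h localhost -p 5432 -U <user> -d paradedb
+```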

-### Connecting to the Grafana Dashboard
-
-To connect to the Grafana dashboard for your cluster, we suggested port forwarding the Kubernetes service running Grafana to localhost:
-
-```bash
-kubectl --namespace prometheus-community port-forward svc/prometheus-community-grafana 3000:80
-```
-
-You can then access the Grafana dasbhoard at [http://localhost:3000/](http://localhost:3000/) using the credentials `admin` as username and `prom-operator` as password. These default credentials are
-defined in the [`kube-stack-config.yaml`](https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/main/docs/src/samples/monitoring/kube-stack-config.yaml) file used as the `values.yaml` file in [Installing the Prometheus CRDs](#installing-the-prometheus-stack) and can be modified by providing your own `values.yaml` file.
-
## Development

-To test changes to the Chart on a local Minikube cluster, follow the instructions from [Getting Started](#getting-started), replacing the `helm upgrade` step by the path to the directory of the modified `Chart.yaml`.
+To test changes to the Chart on a local Minikube cluster, follow the instructions from [Self-Hosted](#self-hosted), replacing the `helm upgrade` step by the path to the directory of the modified `Chart.yaml`.

```bash
helm upgrade --atomic --install paradedb --namespace paradedb --create-namespace ./charts/paradedb
```

-## Cluster Configuration
+## Advanced Cluster Configuration

-### Database types
+### Database Types

To use the ParadeDB Helm Chart, specify `paradedb` via the `type` parameter.

-### Modes of operation
+### Modes of Operation

The chart has three modes of operation. These are configured via the `mode` parameter:
@@ -116,7 +101,7 @@ The chart has three modes of operation. These are configured via the `mode` para
* `replica` - Creates a replica cluster from an existing CNPG cluster. **_Note_ that this mode is not yet supported.**
* `recovery` - Recovers a CNPG cluster from a backup, object store or via pg_basebackup.

-### Backup configuration
+### Backup Configuration

CNPG implements disaster recovery via [Barman](https://pgbarman.org/). The following section configures the barman
object store where backups will be stored. Barman performs backups of the cluster filesystem base backup and WALs. Both are
@@ -153,28 +138,12 @@ There is a separate document outlining the recovery procedure here: **[Recovery]
There are several configuration examples in the [examples](examples) directory.
Refer to them for a basic setup and refer to the [CloudNativePG Documentation](https://cloudnative-pg.io/documentation/current/) for more advanced configurations. - -{{ template "chart.requirementsSection" . }} - +## Values {{ template "chart.valuesSection" . }} -| poolers[].name | string | `` | Name of the pooler resource | -| poolers[].instances | number | `1` | The number of replicas we want | -| poolers[].type | [PoolerType][PoolerType] | `rw` | Type of service to forward traffic to. Default: `rw`. | -| poolers[].poolMode | [PgBouncerPoolMode][PgBouncerPoolMode] | `session` | The pool mode. Default: `session`. | -| poolers[].authQuerySecret | [LocalObjectReference][LocalObjectReference] | `{}` | The credentials of the user that need to be used for the authentication query. | -| poolers[].authQuery | string | `{}` | The credentials of the user that need to be used for the authentication query. | -| poolers[].parameters | map[string]string | `{}` | Additional parameters to be passed to PgBouncer - please check the CNPG documentation for a list of options you can configure | -| poolers[].template | [PodTemplateSpec][PodTemplateSpec] | `{}` | The template of the Pod to be created | -| poolers[].template | [ServiceTemplateSpec][ServiceTemplateSpec] | `{}` | Template for the Service to be created | -| poolers[].pg_hba | []string | `{}` | PostgreSQL Host Based Authentication rules (lines to be appended to the pg_hba.conf file) | -| poolers[].monitoring.enabled | bool | `false` | Whether to enable monitoring for the Pooler. | -| poolers[].monitoring.podMonitor.enabled | bool | `true` | Create a podMonitor for the Pooler. | -{{ template "chart.maintainersSection" . }} - -{{ template "helm-docs.versionFooter" . }} +{{ template "chart.maintainersSection" . }} ## License