forked from cloudnative-pg/charts

Commit d764615 (1 parent: adfd4fa): docs: Update docs following BYOC (#52)
Showing 4 changed files with 102 additions and 155 deletions.
```diff
@@ -1,5 +1,5 @@
 /.github/actions/
 /.github/workflows/lint.yml
 /.github/workflows/tests-*.yml
-.github/workflows/tests-*.yaml
+/.github/workflows/tests-*.yaml
 /charts/
```
# ParadeDB Helm Chart

The [ParadeDB](https://github.com/paradedb/paradedb) Helm Chart is based on the official [CloudNativePG Helm Chart](https://cloudnative-pg.io/). CloudNativePG is a Kubernetes operator that manages the full lifecycle of a highly available PostgreSQL database cluster with a primary/standby architecture using Postgres streaming (physical) replication.

Kubernetes, and specifically the CloudNativePG operator, is the recommended approach for deploying ParadeDB in production with high availability. ParadeDB also provides a [Docker image](https://hub.docker.com/r/paradedb/paradedb) and [prebuilt binaries](https://github.com/paradedb/paradedb/releases) for Debian, Ubuntu, and Red Hat Enterprise Linux.

The ParadeDB Helm Chart supports Postgres 13+ and ships with Postgres 16 by default.

The chart is also available on [Artifact Hub](https://artifacthub.io/packages/helm/paradedb/paradedb).

## Usage

### ParadeDB Bring-Your-Own-Cloud (BYOC)

The most reliable way to run ParadeDB in production is with ParadeDB BYOC, an end-to-end managed solution that runs in the customer's cloud account. It deploys on managed Kubernetes services and uses the ParadeDB Helm Chart.

ParadeDB BYOC includes built-in integration with managed PostgreSQL services, such as AWS RDS, via logical replication. It also provides monitoring, logging, and alerting through Prometheus and Grafana. The ParadeDB team manages the underlying infrastructure and lifecycle of the cluster.

You can read more about the optimal architecture for running ParadeDB in production [here](https://docs.paradedb.com/deploy/overview), and you can contact sales [here](mailto:[email protected]).
### Self-Hosted

First, install [Helm](https://helm.sh/docs/intro/install/). The following steps assume you have a Kubernetes cluster running v1.25+. If you are testing locally, we recommend using [Minikube](https://minikube.sigs.k8s.io/docs/start/).

#### Monitoring

The ParadeDB Helm chart supports monitoring via Prometheus and Grafana. To enable monitoring, you need to have the Prometheus CRDs installed before installing the CloudNativePG operator. The Prometheus CRDs can be found [here](https://prometheus-community.github.io/helm-charts).
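As a concrete starting point, the commands below install the Prometheus CRDs via the community `kube-prometheus-stack` chart. These commands appeared in an earlier revision of this README; adjust the namespace and values file to your environment.

```shell
# Add the Prometheus community chart repository.
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

# Install kube-prometheus-stack, which ships the Prometheus CRDs,
# using the sample monitoring values published by CloudNativePG.
helm upgrade --atomic --install prometheus-community \
  --create-namespace \
  --namespace prometheus-community \
  --values https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/main/docs/src/samples/monitoring/kube-stack-config.yaml \
  prometheus-community/kube-prometheus-stack
```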
#### Installing the CloudNativePG Operator

Skip this step if the CloudNativePG operator is already installed in your cluster. For advanced CloudNativePG configuration and monitoring, please refer to the [CloudNativePG Cluster Chart documentation](https://github.com/cloudnative-pg/charts/blob/main/charts/cloudnative-pg/README.md#values).

```bash
helm repo add cnpg https://cloudnative-pg.github.io/charts
helm upgrade --atomic --install cnpg \
  --create-namespace \
  --namespace cnpg-system \
  --set monitoring.podMonitorEnabled=true \
  --set monitoring.grafanaDashboard.create=true \
  cnpg/cloudnative-pg
```
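After installation, a quick sanity check that the operator came up is useful. This is a sketch, not part of the chart docs: the namespace matches the install command above, and waiting on all deployments avoids assuming the exact deployment name the chart creates.

```shell
# The CloudNativePG controller pod should reach the Running state.
kubectl get pods --namespace cnpg-system

# Block until every deployment in the namespace reports Available
# (or fail after the timeout).
kubectl wait --for=condition=Available deployment --all \
  --namespace cnpg-system --timeout=120s
```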
#### Setting up a ParadeDB CNPG Cluster

Create a `values.yaml` and configure it to your requirements. Here is a basic example:

```yaml
cluster:
  # ...additional cluster settings, e.g. instance count...
  storage:
    size: 256Mi
```
Then, launch the ParadeDB cluster.

```bash
helm repo add paradedb https://paradedb.github.io/charts
helm upgrade --atomic --install paradedb \
  --namespace paradedb \
  --create-namespace \
  --values values.yaml \
  --set cluster.monitoring.enabled=true \
  paradedb/paradedb
```
If `--values values.yaml` is omitted, the default values will be used. For advanced ParadeDB configuration and monitoring, please refer to the [ParadeDB Chart documentation](https://github.com/paradedb/charts/tree/dev/charts/paradedb#values).
#### Connecting to a ParadeDB CNPG Cluster

You can launch a Bash shell inside a specific pod via:

```bash
kubectl exec --stdin --tty <pod-name> -n paradedb -- bash
```

The primary is called `paradedb-1`. The replicas are called `paradedb-2` onwards, depending on the number of replicas you configured. You can connect to the ParadeDB database with `psql` via:

```bash
psql -d paradedb
```
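Alternatively, you can connect from your local machine rather than from inside a pod by port-forwarding the cluster's read-write service. This is a sketch: CloudNativePG exposes a `<cluster-name>-rw` service (here `paradedb-rw`), and the username is a placeholder — the operator stores the application credentials in a generated secret.

```shell
# In one terminal, forward the read-write service to localhost.
kubectl --namespace paradedb port-forward services/paradedb-rw 5432:5432

# In another terminal, connect with psql. <username> is a placeholder:
# retrieve the real credentials from the secret the operator creates.
psql -h localhost -p 5432 -d paradedb -U <username>
```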
## Development

To test changes to the Chart on a local Minikube cluster, follow the instructions from [Self-Hosted](#self-hosted), replacing the `helm upgrade` step with the path to the directory containing the modified `Chart.yaml`.
```bash
helm upgrade --atomic --install paradedb --namespace paradedb --create-namespace ./charts/paradedb
```