diff --git a/docs/deployment/private/data-plane-collaboration.mdx b/docs/deployment/private/data-plane-collaboration.mdx
index f63976382..f2bdba5da 100644
--- a/docs/deployment/private/data-plane-collaboration.mdx
+++ b/docs/deployment/private/data-plane-collaboration.mdx
@@ -1,5 +1,5 @@
---
-sidebar_position: 4
+sidebar_position: 3
sidebar_label: Data Plane Collaboration
description: "Learn how to use the data plane collaboration feature for Private DDN."
keywords:
diff --git a/docs/deployment/private/index.mdx b/docs/deployment/private/index.mdx
index 68d126cf1..defd7a354 100644
--- a/docs/deployment/private/index.mdx
+++ b/docs/deployment/private/index.mdx
@@ -19,6 +19,6 @@ Private Hasura DDN offers enhanced security and isolation by enabling private co
other connectors. Hasura communicates with your sources over a dedicated private network, bypassing the public internet.
- [Hasura-Hosted (VPC)](/deployment/private/hasura-hosted.mdx) private deployments
-- [Self-Hosted (BYOC)](/deployment/private/self-hosted.mdx) private deployments
- [Data Plane Collaboration](/deployment/private/data-plane-collaboration.mdx)
-- [Self-Hosted (Customer Managed) Data Plane Installation](/deployment/private/self-hosted-deployment.mdx)
+- [Self-Hosted (BYOC)](/deployment/private/self-hosted) private deployments
+ - [Data Plane Installation](/deployment/private/self-hosted/self-hosted-deployment.mdx) for a self-hosted, customer-managed data plane
\ No newline at end of file
diff --git a/docs/deployment/private/self-hosted/_category_.json b/docs/deployment/private/self-hosted/_category_.json
new file mode 100644
index 000000000..56205d14a
--- /dev/null
+++ b/docs/deployment/private/self-hosted/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Self-Hosted (BYOC)",
+ "position": 7
+}
diff --git a/docs/deployment/private/self-hosted.mdx b/docs/deployment/private/self-hosted/index.mdx
similarity index 95%
rename from docs/deployment/private/self-hosted.mdx
rename to docs/deployment/private/self-hosted/index.mdx
index af5ef35ae..3292bafd9 100644
--- a/docs/deployment/private/self-hosted.mdx
+++ b/docs/deployment/private/self-hosted/index.mdx
@@ -1,74 +1,74 @@
----
-sidebar_position: 3
-sidebar_label: Self-Hosted (BYOC)
-description: "Learn how to self-host your own instance of Hasura DDN."
-keywords:
- - hasura ddn
- - enterprise ddn
- - private ddn
----
-
-import Thumbnail from "@site/src/components/Thumbnail";
-
-# Self-Hosted (BYOC)
-
-## Introduction
-
-Hasura DDN is also available to self-host if you need to have complete control over the infrastructure.
-
-There are two models of deployment with Self-Hosted DDN.
-
-1. Hasura manages the data plane on the customer's cloud account.
-
- The [data plane](/deployment/architecture.mdx#data-plane) will run on a Kubernetes cluster of your choice and Hasura
- manages the uptime, upgrades, etc. The Hasura Control Plane will need to be given sufficient privileges to install,
- configure and manage the workloads and associated infrastructure components. The experience is exactly like using
- Hasura Hosted DDN, but running on your infrastructure. All API traffic stays within your network and does not leave
- the boundaries of your system.
-
-2. Customer manages the data plane on their cloud account themselves.
-
- The [data plane](/deployment/architecture.mdx#data-plane) can be installed on a Kubernetes cluster of your choice and
- you control the uptime, upgrades, etc. All API traffic stays within your network and does not leave the boundaries of
- your system.
-
- Installation docs related to deployment of a self hosted, customer managed data plane are located [here](/deployment/private/self-hosted-deployment.mdx)
-
-In both cases, the [control plane](/deployment/architecture.mdx#control-plane) is always hosted by Hasura.
-
-
-
-## Data Flow and security
-
-All critical data operations occur exclusively within the customer infrastructure. When an API user sends a GraphQL
-query, it's received by Hasura in the Data Plane. Hasura then directly accesses the necessary data sources within the
-customer infrastructure to fulfill the request. This direct access ensures that sensitive data never leaves the
-customer's controlled environment, maintaining data residency and security.
-
-While the Control Plane, situated in the Hasura infrastructure, does communicate with the Data Plane, this communication
-is strictly limited to configuration and management purposes. Importantly, this communication does not involve customer
-data or API responses, further enhancing data security.
-
-The distinction between the Control and Planes creates well-defined security boundaries. By keeping the Data Plane and
-all data sources within the customer's security perimeter, the architecture ensures that sensitive information remains
-under the customer's control at all times.
-
-## Interactions with the control plane
-
-In Self-Hosted Hasura DDN, the Data Plane running on your infrastructure communicates with the control plane only in
-very specific scenarios:
-
-1. The Data Plane accesses a metadata storage bucket to retrieve build artifacts; these artifacts are required to
- process API requests.
-2. The data plane accesses the control plane APIs to retrieve information about (applied) builds for the purposes of
- routing.
-3. (Optional) The data plane sends observability data to the Control Plane so you can visualize it on the console; it
- does not contain any API response data or variables.
-4. The Control Plane interacts with the Kubernetes cluster to manage Data Plane workloads (only in a Hasura-Managed Data
- Plane).
-
-
-
-## Get started
-
-To get started with Hasura DDN in your own cloud, [contact sales](https://hasura.io/contact-us).
+---
+sidebar_position: 1
+sidebar_label: Overview
+description: "Learn how to self-host your own instance of Hasura DDN."
+keywords:
+ - hasura ddn
+ - enterprise ddn
+ - private ddn
+---
+
+import Thumbnail from "@site/src/components/Thumbnail";
+
+# Self-Hosted (BYOC)
+
+## Introduction
+
+Hasura DDN can also be self-hosted if you need complete control over the infrastructure.
+
+There are two models of deployment with Self-Hosted DDN.
+
+1. Hasura manages the data plane on the customer's cloud account.
+
+ The [data plane](/deployment/architecture.mdx#data-plane) will run on a Kubernetes cluster of your choice and Hasura
+ manages the uptime, upgrades, etc. The Hasura Control Plane will need to be given sufficient privileges to install,
+ configure and manage the workloads and associated infrastructure components. The experience is exactly like using
+ Hasura Hosted DDN, but running on your infrastructure. All API traffic stays within your network and does not leave
+ the boundaries of your system.
+
+2. The customer manages the data plane on their own cloud account.
+
+ The [data plane](/deployment/architecture.mdx#data-plane) can be installed on a Kubernetes cluster of your choice and
+ you control the uptime, upgrades, etc. All API traffic stays within your network and does not leave the boundaries of
+ your system.
+
+ Installation docs for deploying a self-hosted, customer-managed data plane are located [here](/deployment/private/self-hosted/self-hosted-deployment.mdx).
+
+In both cases, the [control plane](/deployment/architecture.mdx#control-plane) is always hosted by Hasura.
+
+
+
+## Data flow and security
+
+All critical data operations occur exclusively within the customer infrastructure. When an API user sends a GraphQL
+query, it's received by Hasura in the Data Plane. Hasura then directly accesses the necessary data sources within the
+customer infrastructure to fulfill the request. This direct access ensures that sensitive data never leaves the
+customer's controlled environment, maintaining data residency and security.
+
+While the Control Plane, situated in the Hasura infrastructure, does communicate with the Data Plane, this communication
+is strictly limited to configuration and management purposes. Importantly, this communication does not involve customer
+data or API responses, further enhancing data security.
+
+The distinction between the Control and Data Planes creates well-defined security boundaries. By keeping the Data Plane and
+all data sources within the customer's security perimeter, the architecture ensures that sensitive information remains
+under the customer's control at all times.
+
+## Interactions with the control plane
+
+In Self-Hosted Hasura DDN, the Data Plane running on your infrastructure communicates with the control plane only in
+very specific scenarios:
+
+1. The Data Plane accesses a metadata storage bucket to retrieve build artifacts; these artifacts are required to
+ process API requests.
+2. The Data Plane accesses the Control Plane APIs to retrieve information about (applied) builds for the purposes of
+ routing.
+3. (Optional) The Data Plane sends observability data to the Control Plane so you can visualize it on the console; it
+ does not contain any API response data or variables.
+4. The Control Plane interacts with the Kubernetes cluster to manage Data Plane workloads (only in a Hasura-Managed Data
+ Plane).
+
+
+
+## Get started
+
+To get started with Hasura DDN in your own cloud, [contact sales](https://hasura.io/contact-us).
diff --git a/docs/deployment/private/self-hosted-deployment.mdx b/docs/deployment/private/self-hosted/self-hosted-deployment.mdx
similarity index 56%
rename from docs/deployment/private/self-hosted-deployment.mdx
rename to docs/deployment/private/self-hosted/self-hosted-deployment.mdx
index 962335c14..b3de13b6d 100644
--- a/docs/deployment/private/self-hosted-deployment.mdx
+++ b/docs/deployment/private/self-hosted/self-hosted-deployment.mdx
@@ -1,6 +1,6 @@
---
-sidebar_position: 5
-sidebar_label: Self-Hosted (Customer Managed) Data Plane Installation Guide
+sidebar_position: 1
+sidebar_label: Data Plane Installation
description: "Learn how to install a Self-Hosted (Customer Managed) Data Plane."
keywords:
- hasura ddn
@@ -10,28 +10,39 @@ keywords:
import Thumbnail from "@site/src/components/Thumbnail";
-# Self hosted (Customer Managed) Data Plane Installation Guide
+# Self-Hosted (Customer Managed) Data Plane Installation Guide
:::info
Documentation here targets customers who want to self host and self manage their clusters as well as their workloads. Here, you will find a full set of instructions, which takes you from local development all the way to having your workloads running under your Kubernetes hosted data plane.
:::
-## Prerequisites
+## Prerequisites {#prerequisites}
Before continuing, ensure you go through the following checklist and confirm that you meet all the requirements
- [DDN CLI](/getting-started/quickstart.mdx) (Latest)
- [Docker v2.27.1](/getting-started/quickstart.mdx) (Or greater)
- - You can also run `ddn doctor` to confirm that you meet the minimum requirements.
+ - You can also run `ddn doctor` to confirm that you meet the minimum requirements
- [Helm3](https://helm.sh/docs/intro/install/) (Prefer latest)
- [Hasura VS Code Extension](/getting-started/quickstart.mdx) (Recommended, but not required)
- Access to a Kubernetes cluster
+ - See Kubernetes version requirement [below](#kubernetes-version)
- Ability to build and push images that can then be pulled down from the Kubernetes cluster
- A user account on the Hasura DDN Control Plane
-- A Data Plane id & key created by the Hasura team. These will be referenced as `` and ``
+- A Data Plane id, key & customer identifier created by the Hasura team. These will be referenced as ``, `` & ``
-## Step 1. Local development
+### Kubernetes version requirement {#kubernetes-version}
+
+These instructions were tested on the following platforms:
+- Amazon Elastic Kubernetes Service (EKS)
+- Azure Kubernetes Service (AKS)
+- Google Kubernetes Engine (GKE)
+- Non-Cloud Kubernetes
+
+Version requirement: `1.28+`
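+
+As a quick sanity check (assuming `kubectl` is already configured against the target cluster), you can confirm the server and node versions before proceeding:
+
+```bash title="Check the Kubernetes version."
+kubectl version
+kubectl get nodes
+```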
+
+## Step 1. Local development {#local-development}
:::note
@@ -80,11 +91,11 @@ ddn console --local
At this point, you have connected and introspected a data source and built your supergraph locally. Verify that everything is working as expected before moving on to the next section.
-## Step 2. Build connector(s)
+## Step 2. Build connector(s) {#build-connectors}
-:::warning Building images with proper OS/Arch
+:::warning Building images with proper target platform
-Ensure that you are building the image with the correct OS/arch which would enable the image to run properly under your Kubernetes cluster
+Ensure that you build the image for the target platform of your Kubernetes cluster. If that platform differs from your build machine's default, specify it via `export DOCKER_DEFAULT_PLATFORM=<platform>` prior to running the commands below. A common `<platform>` to use is `linux/amd64`.
:::
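+
+For example, if you are building on an arm64 machine but your cluster nodes run amd64 (this scenario is an assumption; adjust to your environment), you could set:
+
+```bash title="Set the target build platform."
+export DOCKER_DEFAULT_PLATFORM=linux/amd64
+```
+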
:::note
@@ -96,7 +107,7 @@ Repeat the steps below for each connector within your supergraph.
docker compose build
```
-```bash title="Re-tag the image"
+```bash title="Re-tag the image."
docker tag -app_ /:
```
@@ -104,7 +115,7 @@ docker tag -app_ /:/:
```
-## Step 3. Deploy connector(s) with Helm
+## Step 3. Deploy connector(s) {#deploy-connectors}
:::info Hasura DDN Helm Repo
@@ -113,7 +124,7 @@ Our DDN Helm Repo can be found [here](https://github.com/hasura/ddn-helm-charts/
Contact the Hasura team if you don't see a Helm chart available for your connector.
:::
-Execute `helm search repo hasura-ddn` in order to find the appropriate Helm chart for your connector.
+Execute `helm search repo hasura-ddn` to find the appropriate Helm chart for your connector. The connector chart name will be referenced as `` within this step.
A typical connector Helm install command would look like this:
@@ -121,26 +132,38 @@ A typical connector Helm install command would look like this:
- ``: Helm release name for your connector.
+- ``: Namespace to deploy the connector to.
+
+- ``: Container repository path (includes the image name) which you chose in [Step #2](#build-connectors).
+
+- ``: Image tag which you chose in [Step #2](#build-connectors).
+
- ``: The connector type. Select the appropriate value from [here](https://raw.githubusercontent.com/hasura/ddn-helm-charts/main/CONNECTORS).
- ``: Data Plane id, provided by the Hasura team.
- ``: Data Plane key, provided by the Hasura team.
-- ``: Connector Env variable name and corresponding ``. Repeat setting this for all connector environment variables which you need to pass along.
+- ``: Connector environment variable name and corresponding ``. Run `helm show readme ` to view the connector's README, which lists the environment variables you need to set in the `helm upgrade` command.
:::
-```bash title="Connector Helm install"
+:::tip
+
+One connector environment variable that always needs to be passed through is `connectorEnvVars.HASURA_SERVICE_TOKEN_SECRET`; its value comes from the supergraph's `.env` file.
+:::
+
+```bash title="Connector Helm install."
helm upgrade --install \
- --set image.repository="my_repo/ndc-mongodb" \
- --set image.tag="my_custom_image_tag" \
+ --set namespace="" \
+ --set image.repository="" \
+ --set image.tag="" \
--set dataPlane.id="" \
--set dataPlane.key="" \
--set connectorEnvVars.="" \
hasura-ddn/
```
-## Step 4. Create a cloud project
+## Step 4. Create a cloud project {#create-cloud-project}
Ensure that you are running the commands here from the root of your supergraph.
@@ -150,8 +173,7 @@ ddn project init --data-plane-id
This command will create a cloud project and will report back as to what the name of your cloud project is. We will reference this name going forward as ``.
-
-## Step 5. Update .env.cloud with connector URLs
+## Step 5. Update .env.cloud with connector URLs {#update-env-cloud}
Next, you will need to access the `.env.cloud` file which was generated at the root of your supergraph.
@@ -171,21 +193,21 @@ URL: **http://``-``.``:8
- ``: Namespace where your connector was deployed to.
:::
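+
+As a purely illustrative sketch (the variable name, connector release name, namespace, and port below are all hypothetical; use the URL format described above with your own values), an updated entry might look like this:
+
+```bash title="Example .env.cloud entry (hypothetical values)."
+# Point the connector URL at its in-cluster Kubernetes service, following the URL format above
+APP_MY_CONNECTOR_READ_URL=http://my-connector-release.my-namespace:8080
+```
+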
-## Step 6. Create a cloud build
+## Step 6. Create a cloud build {#create-cloud-build}
After making the necessary changes to your `.env.cloud` file, run the below command. This will create a cloud build and will also generate the necessary artifacts which will later on be consumed by your v3-engine.
```bash title="Create a build for your cloud project."
-ddn supergraph build create --self-hosted-data-plane --output-dir build-data --project --out json
+ddn supergraph build create --self-hosted-data-plane --output-dir build-cloud --project --out json
```
At this point, take note of the `` and `` which will be outputted here. You will need these later.
-## Step 7. Build v3-engine
+## Step 7. Build v3-engine {#build-engine}
-:::warning Building images with proper OS/Arch
+:::warning Building images with proper target platform
-Ensure that you are building the image with the correct OS/arch which would enable the image to run properly under your Kubernetes cluster
+Ensure that you build the image for the target platform of your Kubernetes cluster. If that platform differs from your build machine's default, specify it via `export DOCKER_DEFAULT_PLATFORM=<platform>` prior to running the commands below. A common `<platform>` to use is `linux/amd64`.
:::
Ensure that you are running the commands from the root of your supergraph.
@@ -193,7 +215,7 @@ Ensure that you are running the commands from the root of your supergraph.
```bash title="Create a Dockerfile for v3-engine."
cat <> Dockerfile
FROM ghcr.io/hasura/v3-engine
-COPY ./build-data /md/
+COPY ./build-cloud /md/
EOF
```
@@ -205,13 +227,24 @@ docker build -t /v3-engine: .
docker push /v3-engine:
```
-## Step 8. Deploy v3-engine with Helm
+## Step 8. Deploy v3-engine {#deploy-engine}
+
+:::note
+
+Every time you create a new cloud build, execute the steps below.
+:::
See the DDN Helm Repo [v3-engine](https://github.com/hasura/ddn-helm-charts/tree/main/charts/v3-engine) section for full documentation. A typical v3-engine Helm installation would look like this:
:::info
-- ``: Helm release name for v3-engine.
+- ``: Helm release name for v3-engine. A suggested name is `v3-engine-`.
+
+- ``: Namespace to deploy v3-engine to.
+
+- ``: Container repository path (includes the image name) which you chose in [Step #7](#build-engine).
+
+- ``: Image tag which you chose in [Step #7](#build-engine).
- ``: Observability hostname. This was returned in output when `ddn supergraph build create` was executed.
@@ -220,31 +253,32 @@ See the DDN Helm Repo [v3-engine](https://github.com/hasura/ddn-helm-charts/tree
- ``: Data Plane key, provided by the Hasura team.
:::
-```bash title="v3-engine helm install"
+```bash title="v3-engine helm install."
helm upgrade --install \
- --set image.repository="my_repo/v3-engine" \
- --set image.tag="my_custom_image_tag" \
+ --set namespace="" \
+ --set image.repository="" \
+ --set image.tag="" \
--set observability.hostName="" \
--set dataPlane.id="" \
--set dataPlane.key="" \
hasura-ddn/v3-engine
```
-## Step 9. Create build specific ingress
+## Step 9. Create ingress {#create-build-ingress}
:::note
-Everytime you create a new build via `ddn supergraph build create` command, execute the steps below.
+Every time you create a new cloud build, execute the steps below.
:::
-If you're using nginx-ingress and cert-manager, you can deploy using the below manifest. Ensure that you modify this accordingly
+If you're using nginx-ingress and cert-manager, you can deploy using the manifest below (i.e., save it to a file and run `kubectl apply -f `). Ensure that you modify it accordingly.
:::info
- ``: This was part of the output when you ran `ddn supergraph build create` command.
-- ``: Domain which will be used for accessing this ingress. This could be constructed in the following format: **https://``.``**.
+- ``: Domain which will be used for accessing this ingress. This could be constructed in the following format: **``.``**, where `` is a hostname of your own choosing which will host your build-specific APIs.
- ``: Namespace where your v3-engine was deployed to.
@@ -292,23 +326,60 @@ Next, you will be running the command below in order to record the ingress URL w
ddn supergraph build set-self-hosted-engine-url --build-version --project
```
-## Step 10. Apply and promote a build to project's API
+## Step 10. Deploy v3-engine (Project API specific) {#deploy-engine-project-api}
-:::note
+:::note Why am I deploying v3-engine again via Helm?
+
+In this step, you will be deploying v3-engine using a unique Helm release name. You will re-use this release name whenever you need to apply a specific build to the Project API. This is done to maintain DDN's separation between immutable builds and the Project API.
+:::
+
+See the DDN Helm Repo [v3-engine](https://github.com/hasura/ddn-helm-charts/tree/main/charts/v3-engine) section for full documentation.
+
+:::info
+
+- ``: Helm release name for the v3-engine Project API. This should be a unique release name which is specific to your Project API deployment. **You will use this same name every time you need to apply a build to the Project API**.
+
+- ``: Namespace to deploy v3-engine to.
+
+- ``: Container repository path which is tied to the specific build (includes the image name).
+
+- ``: Image tag which is tied to the specific build.
+
+- ``: Observability hostname for Project API. This value is constructed as follows: `..observability`
-Everytime you want to apply and promote a build to the project's API, execute the steps below.
+- ``: Data Plane id, provided by the Hasura team.
+
+- ``: Data Plane key, provided by the Hasura team.
:::
-**NOTE:** We are once again using an example of an ingress object which will work provided you have nginx and cert-manager installed on your cluster.
+```bash title="v3-engine helm install."
+helm upgrade --install \
+ --set namespace="" \
+ --set image.repository="" \
+ --set image.tag="" \
+ --set observability.hostName="" \
+ --set dataPlane.id="" \
+ --set dataPlane.key="" \
+ hasura-ddn/v3-engine
+```
+
+## Step 11. Create ingress (Project API specific) {#create-project-ingress}
+
+:::tip _This step needs to be executed just one time_
+
+Below you will be creating an ingress for the Project API.
+:::
+
+**NOTE:** We are once again using an example of an ingress object which will work provided you have nginx and cert-manager installed on your cluster. Save the contents into a file and run `kubectl apply -f `.
:::info
- ``: Namespace where your v3-engine was deployed to.
-- ``: Domain which will be used for accessing this ingress. This could be constructed in the following format: **https://``.``**.
+- ``: Domain which will be used for accessing this ingress. This could be constructed in the following format: **``.``** or **``.``** (if you want to name it after the project). Note that `` is a hostname of your own choosing which will host your Project API.
-- ``: Helm release name for your v3-engine. Choose an appropriate v3-engine Helm release name that you want to be applied/promoted to the project's API.
+- ``: Helm release name for your v3-engine. This matches the `` that was specified in [Step 10](#deploy-engine-project-api).
:::
```bash
@@ -330,7 +401,7 @@ spec:
paths:
- backend:
service:
- name: -v3-engine
+ name: -v3-engine
port:
number: 3000
path: /
@@ -341,11 +412,9 @@ spec:
secretName: -tls-certs
```
-Next, you will be running the command below in order to record the project's API URL within Hasura's Control Plane.
-
-:::tip
+After you create the ingress above, run the command below to record the project's API URL within Hasura's Control Plane.
-**You will only need to do this once**.
+:::info
- ``: Domain name, chosen above and prepended with protocol.
:::
@@ -354,6 +423,26 @@ Next, you will be running the command below in order to record the project's API
ddn project set-self-hosted-engine-url
```
-## Step 11. View API via console
+## Step 12. Apply a build to Project API {#apply-build}
+
+:::tip
+
+Every time you need to apply a specific build to your Project API, execute this step.
+:::
+
+Repeat the `helm upgrade` instructions in [Step 10](#deploy-engine-project-api), using the unique Helm release name which you chose for your Project API. **Ensure that you are passing along the appropriate image tag for your v3-engine (i.e., the image tag associated with the specific build that you want to apply).**
+
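+A minimal sketch of what this re-apply could look like, assuming a hypothetical Project API release named `v3-engine-project-api` and re-using the values already set in [Step 10](#deploy-engine-project-api):
+
+```bash title="Re-apply a build to the Project API (hypothetical names)."
+# Re-use the release (and its previously set values) from Step 10, changing only the image tag
+helm upgrade --install v3-engine-project-api \
+  --reuse-values \
+  --set image.tag="<build_image_tag>" \
+  hasura-ddn/v3-engine
+```
+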
+After the Helm upgrade completes, mark the build as applied by proceeding with the final step below.
+
+:::info
+
+- ``: This is the build version which you just ran the `helm upgrade` for.
+:::
+
+```bash title="Mark build as applied."
+ddn supergraph build apply
+```
+
+## Step 13. View Project API via console {#view-api}
Access [Hasura console](https://console.hasura.io) and locate your cloud project. Access your project and verify your deployment.