Update k8s target docs with new copies, and redirect old locations
thekatstevens committed Jul 29, 2024
1 parent bd6d312 commit 8ec4706
Showing 18 changed files with 604 additions and 1,373 deletions.
@@ -1,28 +1,9 @@
---
layout: src/layouts/Default.astro
pubDate: 2023-01-01
modDate: 2024-04-24
title: Kubernetes
navTitle: Overview
navSection: Kubernetes
description: Kubernetes deployment targets
navOrder: 50
layout: src/layouts/Redirect.astro
title: Redirect
redirect: docs/kubernetes/targets
pubDate: 2024-07-29
navSearch: false
navSitemap: false
navMenu: false
---

There are two different deployment targets for deploying to Kubernetes with Octopus Deploy: the [Kubernetes Agent](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent) and the [Kubernetes API](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-api) targets.

The following table summarizes the key differences between the two targets.

| | [Kubernetes Agent](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent) | [Kubernetes API](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-api) |
| :--------------------------------------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-------------------------------------------------------------------------------------------------- |
| Connection method | [Polling agent](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication#polling-tentacles) in cluster | Direct API communication |
| Setup complexity | Generally simpler | Requires more setup |
| Security | No need to configure firewall<br />No need to provide external access to cluster | Depends on the cluster configuration |
| Requires workers | No | Yes |
| Requires public IP | No | Yes |
| Requires service account in Octopus | No | Yes |
| Limit deployments to a namespace | Yes | No |
| Planned support for upcoming observability features | Yes | No |
| Recommended usage scenario | <ul><li>For deployments and maintenance tasks (runbooks) on Kubernetes</li><li>If you want to run a worker on Kubernetes (to deploy to other targets)</li></ul> | If you cannot install an agent on a cluster |
| Step configuration | Simple (you only need to specify a target tag) | More complex (requires target tags, workers, and execution container images) |
| Maintenance | <ul><li>Upgradeable via Octopus Server</li><li>No need to add and manage credentials</li></ul> | <ul><li>You need to update/rotate credentials</li><li>Requires worker maintenance updates</li></ul> |
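
As a rough illustration of the setup difference: the agent target is created by running a single Helm command inside the cluster, while an API target needs cluster credentials (for example, a Kubernetes service account token) that Octopus uses from a worker. The commands below are a generic sketch; the `octopus-deployer` name is a placeholder, not an Octopus requirement.

```bash
# Kubernetes Agent: one Helm install inside the cluster. The agent then polls
# Octopus Server, so no inbound access to the cluster is required.
helm install octopus-agent \
  oci://registry-1.docker.io/octopusdeploy/kubernetes-agent # plus --set values from the setup wizard

# Kubernetes API: the cluster must be reachable from a worker, and Octopus
# authenticates with credentials such as a service account token.
kubectl create serviceaccount octopus-deployer
kubectl create token octopus-deployer --duration=24h # subject to cluster token limits
```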
@@ -1,213 +1,9 @@
---
layout: src/layouts/Default.astro
pubDate: 2024-05-14
modDate: 2024-05-14
title: Automated Installation
description: How to automate the installation and management of the Kubernetes agent
navOrder: 50
layout: src/layouts/Redirect.astro
title: Redirect
redirect: docs/kubernetes/targets/kubernetes-agent/automated-installation
pubDate: 2024-07-29
navSearch: false
navSitemap: false
navMenu: false
---

## Automated installation via Terraform
The Kubernetes agent can be installed and managed using a combination of the Kubernetes agent [Helm chart (v1.1.0 and later)](https://hub.docker.com/r/octopusdeploy/kubernetes-agent), the [Octopus Deploy Terraform provider (v0.20.0 and later)](https://registry.terraform.io/providers/OctopusDeployLabs/octopusdeploy/latest), and/or the [Helm Terraform provider](https://registry.terraform.io/providers/hashicorp/helm).

### Octopus Deploy & Helm
Using a combination of the Octopus Deploy and Helm providers you can completely manage the Kubernetes agent via Terraform.

:::div{.warning}
To ensure that the Kubernetes agent and the deployment target within Octopus associate with each other correctly, some of the Helm chart values and deployment target properties must meet the following criteria:

- `octopusdeploy_kubernetes_agent_deployment_target.name` and `agent.targetName` must have the same value.
- `octopusdeploy_kubernetes_agent_deployment_target.uri` and `agent.serverSubscriptionId` must have the same value.
- `octopusdeploy_kubernetes_agent_deployment_target.thumbprint` must be the thumbprint calculated from the certificate used in `agent.certificate`.
:::

```hcl
terraform {
  required_providers {
    octopusdeploy = {
      source  = "OctopusDeployLabs/octopusdeploy"
      version = "0.20.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "2.13.2"
    }
  }
}

locals {
  octopus_api_key         = "API-XXXXXXXXXXXXXXXX"
  octopus_address         = "https://myinstance.octopus.app"
  octopus_polling_address = "https://polling.myinstance.octopus.app"
}

provider "helm" {
  kubernetes {
    # Configure authentication for your cluster here
  }
}

provider "octopusdeploy" {
  address = local.octopus_address
  api_key = local.octopus_api_key
}

resource "octopusdeploy_space" "agent_space" {
  name                 = "agent space"
  space_managers_teams = ["teams-everyone"]
}

resource "octopusdeploy_environment" "dev_env" {
  name     = "Development"
  space_id = octopusdeploy_space.agent_space.id
}

# Pre-generate the polling subscription ID and certificate so the same values
# can be shared between the deployment target and the Helm release
resource "octopusdeploy_polling_subscription_id" "agent_subscription_id" {}

resource "octopusdeploy_tentacle_certificate" "agent_cert" {}

resource "octopusdeploy_kubernetes_agent_deployment_target" "agent" {
  name         = "agent-one"
  space_id     = octopusdeploy_space.agent_space.id
  environments = [octopusdeploy_environment.dev_env.id]
  roles        = ["role-1", "role-2", "role-3"]
  thumbprint   = octopusdeploy_tentacle_certificate.agent_cert.thumbprint
  uri          = octopusdeploy_polling_subscription_id.agent_subscription_id.polling_uri
}

resource "helm_release" "octopus_agent" {
  name             = "octopus-agent-release"
  repository       = "oci://registry-1.docker.io"
  chart            = "octopusdeploy/kubernetes-agent"
  version          = "1.*.*"
  atomic           = true
  create_namespace = true
  namespace        = "octopus-agent-target"

  set {
    name  = "agent.acceptEula"
    value = "Y"
  }
  set {
    name  = "agent.targetName"
    value = octopusdeploy_kubernetes_agent_deployment_target.agent.name
  }
  set_sensitive {
    name  = "agent.serverApiKey"
    value = local.octopus_api_key
  }
  set {
    name  = "agent.serverUrl"
    value = local.octopus_address
  }
  set {
    name  = "agent.serverCommsAddress"
    value = local.octopus_polling_address
  }
  set {
    name  = "agent.serverSubscriptionId"
    value = octopusdeploy_polling_subscription_id.agent_subscription_id.polling_uri
  }
  set_sensitive {
    name  = "agent.certificate"
    value = octopusdeploy_tentacle_certificate.agent_cert.base64
  }
  set {
    name  = "agent.space"
    value = octopusdeploy_space.agent_space.name
  }
  set_list {
    name  = "agent.targetEnvironments"
    value = octopusdeploy_kubernetes_agent_deployment_target.agent.environments
  }
  set_list {
    name  = "agent.targetRoles"
    value = octopusdeploy_kubernetes_agent_deployment_target.agent.roles
  }
}
```
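
Once this configuration is saved (for example, as `main.tf`), the standard Terraform workflow applies it. A minimal sketch, assuming credentials for both providers are in place:

```bash
# Download the providers declared in required_providers
terraform init

# Preview the space, environment, deployment target, and Helm release
terraform plan

# Create the Octopus resources and install the agent chart into the cluster
terraform apply
```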

### Helm
The Kubernetes agent can be installed using just the Helm provider, but the associated deployment target that is created in Octopus when the agent registers itself cannot be managed solely through the Helm provider. The Helm chart values relating to the deployment target are only used on initial installation, and modifying them will not update the deployment target unless you completely reinstall the agent. This option is useful if you plan to manage the deployment target's configuration through the Portal or the API.

```hcl
terraform {
  required_providers {
    helm = {
      source  = "hashicorp/helm"
      version = "2.13.2"
    }
  }
}

provider "helm" {
  kubernetes {
    # Configure authentication for your cluster here
  }
}

locals {
  octopus_api_key         = "API-XXXXXXXXXXXXXXXX"
  octopus_address         = "https://myinstance.octopus.app"
  octopus_polling_address = "https://polling.myinstance.octopus.app"
}

resource "helm_release" "octopus_agent" {
  name             = "octopus-agent-release"
  repository       = "oci://registry-1.docker.io"
  chart            = "octopusdeploy/kubernetes-agent"
  version          = "1.*.*"
  atomic           = true
  create_namespace = true
  namespace        = "octopus-agent-target"

  set {
    name  = "agent.acceptEula"
    value = "Y"
  }
  set {
    name  = "agent.targetName"
    value = "octopus-agent"
  }
  set_sensitive {
    name  = "agent.serverApiKey"
    value = local.octopus_api_key
  }
  set {
    name  = "agent.serverUrl"
    value = local.octopus_address
  }
  set {
    name  = "agent.serverCommsAddress"
    value = local.octopus_polling_address
  }
  set {
    name  = "agent.space"
    value = "Default"
  }
  set_list {
    name  = "agent.targetEnvironments"
    value = ["Development"]
  }
  set_list {
    name  = "agent.targetRoles"
    value = ["Role-1"]
  }
}
```
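
For comparison, the Helm release above corresponds roughly to the following plain Helm CLI invocation. This is a hypothetical sketch using the same placeholder values as the Terraform example, not actual wizard output:

```bash
helm upgrade --install --atomic \
  --create-namespace --namespace octopus-agent-target \
  --set agent.acceptEula="Y" \
  --set agent.targetName="octopus-agent" \
  --set agent.serverApiKey="API-XXXXXXXXXXXXXXXX" \
  --set agent.serverUrl="https://myinstance.octopus.app" \
  --set agent.serverCommsAddress="https://polling.myinstance.octopus.app" \
  --set agent.space="Default" \
  --set agent.targetEnvironments="{Development}" \
  --set agent.targetRoles="{Role-1}" \
  octopus-agent-release \
  oci://registry-1.docker.io/octopusdeploy/kubernetes-agent
```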
@@ -1,57 +1,9 @@
---
layout: src/layouts/Default.astro
pubDate: 2024-05-14
modDate: 2024-05-14
title: HA Cluster Support
description: How to install/update the agent when running Octopus in an HA Cluster
navOrder: 60
layout: src/layouts/Redirect.astro
title: Redirect
redirect: docs/kubernetes/targets/kubernetes-agent/ha-cluster-support
pubDate: 2024-07-29
navSearch: false
navSitemap: false
navMenu: false
---

## Octopus Deploy HA Cluster

Similar to Polling Tentacles, the Kubernetes agent must be given a URL for each individual node in the HA Cluster so that it can receive commands from every node. These URLs must be provided when registering the agent, or some deployments may fail depending on which node the task executes on.

To read more about selecting the right URL for your nodes, see [Polling Tentacles and Kubernetes agents with HA](/docs/administration/high-availability/maintain/polling-tentacles-with-ha).

## Agent Installation on an HA Cluster

### Octopus Deploy 2024.3+

To make things easier, Octopus detects when it's running as an HA cluster and shows an extra configuration page in the Kubernetes agent creation wizard, which asks you to provide a unique URL for each cluster node.

:::figure
![Kubernetes Agent HA Cluster Configuration Page](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-ha-cluster-configuration-page.png)
:::

Once these values are provided, the generated `helm upgrade` command will configure your new agent to receive commands from all nodes.

### Octopus Deploy 2024.2

To install the agent with Octopus Deploy 2024.2, you need to adjust the Helm command produced by the wizard before running it.

1. Use the wizard to produce the Helm command to install the agent. You may need to provide a ServerCommsAddress; any valid URL will let you progress through the wizard.
2. Replace the `--set agent.serverCommsAddress="..."` property with
```
--set agent.serverCommsAddresses="{https://<url1>:<port1>/,https://<url2>:<port2>/,https://<url3>:<port3>/}"
```
where each `<url>:<port>` is a unique address for an individual node.

3. Execute the Helm command in a terminal connected to the target cluster. A combined example is sketched below.
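
Put together, the adjusted command might look like the following sketch. All values here are placeholders; use the command your wizard generated, changing only the `serverCommsAddresses` property:

```bash
helm upgrade --install --atomic \
  --create-namespace --namespace octopus-agent \
  --set agent.acceptEula="Y" \
  --set agent.targetName="octopus-agent" \
  --set agent.serverUrl="https://myinstance.octopus.app" \
  --set agent.serverCommsAddresses="{https://<url1>:<port1>/,https://<url2>:<port2>/,https://<url3>:<port3>/}" \
  octopus-agent oci://registry-1.docker.io/octopusdeploy/kubernetes-agent
```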

:::div{.warning}
The new property name is `agent.serverCommsAddresses`. Note that "Addresses" is plural.
:::

## Upgrading the Agent after Adding/Removing Cluster Nodes

If you add or remove cluster nodes, you need to update your agent's configuration so that it continues to connect to all nodes in the cluster. To do this, run a `helm upgrade` command with the URLs of all current cluster nodes. The agent will remove any old URLs and replace them with the provided ones.

```bash
helm upgrade --atomic \
  --reuse-values \
  --set agent.serverCommsAddresses="{https://<node-one-url>:<node-one-port>/,https://<node-two-url>:<node-two-port>/,https://<node-three-url>:<node-three-port>/}" \
  --namespace <agent-namespace> \
  <agent-release-name> \
  oci://registry-1.docker.io/octopusdeploy/kubernetes-agent
```
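
To confirm the upgrade took effect, you can inspect the values stored against the release; `helm get values` is a generic Helm command, not specific to the agent chart:

```bash
# Shows the user-supplied values for the release, including
# agent.serverCommsAddresses
helm get values <agent-release-name> --namespace <agent-namespace>
```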