Add troubleshooting of node maintenance mode
Signed-off-by: Jian Wang <[email protected]>
w13915984028 committed Aug 6, 2024
1 parent f01d373 commit 3c2d88d
Showing 8 changed files with 294 additions and 2 deletions.
14 changes: 13 additions & 1 deletion docs/host/host.md
@@ -25,6 +25,16 @@ For admin users, you can click **Enable Maintenance Mode** to evict all VMs from

![node-maintenance.png](/img/v1.2/host/node-maintenance.png)

After a while, the target node enters maintenance mode successfully.

![node-enter-maintenance-mode.png](/img/v1.3/troubleshooting/node-enter-maintenance-mode.png)

:::note important

Review the [known limitations and workarounds](../troubleshooting/host.md#a-node-enabling-maintenance-mode-is-stuck-in-the-cordoned-state) before you click this menu, or if you run into issues afterwards.

:::

## Cordoning a Node

Cordoning a node marks it as unschedulable. This feature is useful for performing short tasks on the node during small maintenance windows, like reboots, upgrades, or decommissions. When you’re done, power back on and make the node schedulable again by uncordoning it.
@@ -42,6 +52,8 @@ Before removing a node from a Harvester cluster, determine if the remaining node

If the remaining nodes do not have enough resources, VMs might fail to migrate and volumes might degrade when you remove a node.

If you have volumes created from a customized `StorageClass` with the [Number of Replicas](../advanced/storageclass.md#number-of-replicas) set to **1**, it is recommended to back up those single-replica volumes in advance. They cannot be rebuilt or restored from other nodes after this node is removed.

:::

### 1. Check if the node can be removed from the cluster.
@@ -522,4 +534,4 @@ status:
```

The `harvester-node-manager` pod(s) in the `harvester-system` namespace may also contain some hints as to why it is not rendering a file to a node.
This pod is part of a daemonset, so it may be worth checking the pod that is running on the node of interest.
134 changes: 134 additions & 0 deletions docs/troubleshooting/host.md
@@ -0,0 +1,134 @@
---
sidebar_position: 6
sidebar_label: Host
title: "Host"
---

<head>
<link rel="canonical" href="https://docs.harvesterhci.io/v1.3/troubleshooting/host"/>
</head>

## A Node Enabling Maintenance Mode Is Stuck in the Cordoned State

After you click **Enable Maintenance Mode** on a Harvester host, the target host may get stuck in the `Cordoned` state: the **Enable Maintenance Mode** menu becomes available again, while the expected **Disable Maintenance Mode** menu does not appear.

![node-stuck-cordoned.png](/img/v1.3/troubleshooting/node-stuck-cordoned.png)

When you check the Harvester pod log, you see repeated messages like the following:

```
time="2024-08-05T19:03:02Z" level=info msg="evicting pod longhorn-system/instance-manager-68cd2514dd3f6d59b95cbd865d5b08f7"
time="2024-08-05T19:03:02Z" level=info msg="error when evicting pods/\"instance-manager-68cd2514dd3f6d59b95cbd865d5b08f7\" -n \"longhorn-system\" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget."
time="2024-08-05T19:03:07Z" level=info msg="evicting pod longhorn-system/instance-manager-68cd2514dd3f6d59b95cbd865d5b08f7"
time="2024-08-05T19:03:07Z" level=info msg="error when evicting pods/\"instance-manager-68cd2514dd3f6d59b95cbd865d5b08f7\" -n \"longhorn-system\" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget."
time="2024-08-05T19:03:12Z" level=info msg="evicting pod longhorn-system/instance-manager-68cd2514dd3f6d59b95cbd865d5b08f7"
time="2024-08-05T19:03:12Z" level=info msg="error when evicting pods/\"instance-manager-68cd2514dd3f6d59b95cbd865d5b08f7\" -n \"longhorn-system\" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget."
```

The Longhorn `instance-manager` uses a PodDisruptionBudget (PDB) to protect itself from accidental eviction and to prevent volume data loss. When this error occurs, the `instance-manager` pod is still serving some volumes or replicas.
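
You can inspect those PodDisruptionBudgets directly. The following is a minimal sketch, assuming `kubectl` access to the cluster; the PDB name in the second command is a placeholder:

```
# List the PodDisruptionBudgets protecting Longhorn instance-manager pods
kubectl get pdb -n longhorn-system

# Inspect a specific PDB (replace the name with one returned above)
kubectl describe pdb <instance-manager-pdb-name> -n longhorn-system
```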

There are several known causes, each with a workaround.

### The Manually Attached Volume

When a Longhorn volume is attached to a host from the [Embedded Longhorn UI](./harvester.md#access-embedded-rancher-and-longhorn-dashboards), that volume causes the error above.

You can check this in the [Embedded Longhorn UI](./harvester.md#access-embedded-rancher-and-longhorn-dashboards).

![attached-volume.png](/img/v1.3/troubleshooting/attached-volume.png)

A manually attached volume is attached to a node name instead of a pod name.

You can also check the `VolumeAttachment` CRD objects from the CLI.
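
For example, a minimal sketch, assuming `kubectl` access to the cluster (the Longhorn CRD group is spelled out to avoid clashing with the Kubernetes `storage.k8s.io` `VolumeAttachment`):

```
# List Longhorn VolumeAttachment objects
kubectl get volumeattachments.longhorn.io -n longhorn-system

# Dump one object to see whether its ticket comes from longhorn-ui or the CSI driver
kubectl get volumeattachments.longhorn.io <volume-attachment-name> -n longhorn-system -o yaml
```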

A volume attached from the Longhorn UI:

```
- apiVersion: longhorn.io/v1beta2
  kind: VolumeAttachment
  ...
  spec:
    attachmentTickets:
      longhorn-ui:
        id: longhorn-ui
        nodeID: node-name
        ...
    volume: pvc-9b35136c-f59e-414b-aa55-b84b9b21ff89
```

A volume attached by the CSI driver:

```
- apiVersion: longhorn.io/v1beta2
  kind: VolumeAttachment
  spec:
    attachmentTickets:
      csi-b5097155cddde50b4683b0e659923e379cbfc3873b5b2ee776deb3874102e9bf:
        id: csi-b5097155cddde50b4683b0e659923e379cbfc3873b5b2ee776deb3874102e9bf
        nodeID: node-name
        ...
    volume: pvc-3c6403cd-f1cd-4b84-9b46-162f746b9667
```

:::note

It is not recommended to attach a volume to the host manually.

:::

#### Workaround 1: Set the Longhorn Option `Detach Manually Attached Volumes When Cordoned` to `true`

The Longhorn option [Detach Manually Attached Volumes When Cordoned](https://longhorn.io/docs/1.6.0/references/settings/#detach-manually-attached-volumes-when-cordoned) defaults to `false`, which blocks the node drain when a manually attached volume exists. Setting it to `true` lets Longhorn detach such volumes automatically when the node is cordoned.

This option is available starting with Harvester v1.3.1, which embeds Longhorn v1.6.0.

Starting with Harvester v1.4.0, this option is set to `true` by default.
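
If you prefer the CLI, the option can also be read and changed through the Longhorn `Setting` CRD. The setting name used below, `detach-manually-attached-volumes-when-cordoned`, is an assumption derived from the UI label; confirm it with the first command before patching:

```
# Confirm the exact setting name (assumed below) and its current value
kubectl get settings.longhorn.io -n longhorn-system | grep -i detach

# Set the assumed setting to true so manually attached volumes are detached on cordon
kubectl patch settings.longhorn.io detach-manually-attached-volumes-when-cordoned \
  -n longhorn-system --type merge -p '{"value":"true"}'
```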

#### Workaround 2: Manually Detach the Volume

Detach the volume from the [Embedded Longhorn UI](./harvester.md#access-embedded-rancher-and-longhorn-dashboards).

![detached-volume.png](/img/v1.3/troubleshooting/detached-volume.png)

After that, the node will enter maintenance mode successfully.

![node-enter-maintenance-mode.png](/img/v1.3/troubleshooting/node-enter-maintenance-mode.png)

### The Single-replica Volume

Harvester supports defining a customized `StorageClass`, and the [Number of Replicas](../advanced/storageclass.md#number-of-replicas) can even be set to **1** in some scenarios.

When such a volume has been attached to a host, whether by the CSI driver or by other means, the last and only replica stays on that node even after the volume is detached.

You can check this on the `Volume` CRD object:

```
- apiVersion: longhorn.io/v1beta2
  kind: Volume
  ...
  spec:
    ...
    numberOfReplicas: 1 # the replica number
    ...
  status:
    ...
    ownerID: nodeName
    ...
    state: attached
```
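
To spot single-replica volumes from the CLI, here is a minimal sketch, assuming `kubectl` access; the field paths mirror the example above:

```
# List Longhorn volumes with their replica count, owning node, and state
kubectl get volumes.longhorn.io -n longhorn-system \
  -o custom-columns=NAME:.metadata.name,REPLICAS:.spec.numberOfReplicas,OWNER:.status.ownerID,STATE:.status.state
```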

#### Workaround 1: Set Longhorn option `Node Drain Policy`

The Longhorn [Node Drain Policy](https://longhorn.io/docs/1.6.0/references/settings/#node-drain-policy) defaults to `block-if-contains-last-replica`, which blocks the drain when the node contains the last healthy replica of a volume.

Setting this option to `allow-if-replica-is-stopped` from the [Embedded Longhorn UI](./harvester.md#access-embedded-rancher-and-longhorn-dashboards) resolves this issue.
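
Alternatively, you can patch the corresponding Longhorn `node-drain-policy` setting from the CLI; a hedged sketch, assuming `kubectl` access:

```
# Check the current node drain policy
kubectl get settings.longhorn.io node-drain-policy -n longhorn-system -o jsonpath='{.value}'

# Allow draining a node whose last replica is stopped
kubectl patch settings.longhorn.io node-drain-policy \
  -n longhorn-system --type merge -p '{"value":"allow-if-replica-is-stopped"}'
```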

:::note important

If you plan to remove this node after it enters maintenance mode, it is recommended that you back up those single-replica volumes, or redeploy the related workloads to other nodes in advance so that the volumes are scheduled elsewhere. Those volumes cannot be rebuilt or restored from other nodes after this node is removed.

:::

Starting with Harvester v1.4.0, this option is set to `allow-if-replica-is-stopped` by default.
14 changes: 13 additions & 1 deletion versioned_docs/version-v1.3/host/host.md
@@ -25,6 +25,16 @@ For admin users, you can click **Enable Maintenance Mode** to evict all VMs from

![node-maintenance.png](/img/v1.2/host/node-maintenance.png)

After a while, the target node enters maintenance mode successfully.

![node-enter-maintenance-mode.png](/img/v1.3/troubleshooting/node-enter-maintenance-mode.png)

:::note important

Review the [known limitations and workarounds](../troubleshooting/host.md#a-node-enabling-maintenance-mode-is-stuck-in-the-cordoned-state) before you click this menu, or if you run into issues afterwards.

:::

## Cordoning a Node

Cordoning a node marks it as unschedulable. This feature is useful for performing short tasks on the node during small maintenance windows, like reboots, upgrades, or decommissions. When you’re done, power back on and make the node schedulable again by uncordoning it.
@@ -42,6 +52,8 @@ Before removing a node from a Harvester cluster, determine if the remaining node

If the remaining nodes do not have enough resources, VMs might fail to migrate and volumes might degrade when you remove a node.

If you have volumes created from a customized `StorageClass` with the [Number of Replicas](../advanced/storageclass.md#number-of-replicas) set to **1**, it is recommended to back up those single-replica volumes in advance. They cannot be rebuilt or restored from other nodes after this node is removed.

:::

### 1. Check if the node can be removed from the cluster.
@@ -522,4 +534,4 @@ status:
```

The `harvester-node-manager` pod(s) in the `harvester-system` namespace may also contain some hints as to why it is not rendering a file to a node.
This pod is part of a daemonset, so it may be worth checking the pod that is running on the node of interest.
134 changes: 134 additions & 0 deletions versioned_docs/version-v1.3/troubleshooting/host.md
@@ -0,0 +1,134 @@
---
sidebar_position: 6
sidebar_label: Host
title: "Host"
---

<head>
<link rel="canonical" href="https://docs.harvesterhci.io/v1.3/troubleshooting/host"/>
</head>

## A Node Enabling Maintenance Mode Is Stuck in the Cordoned State

After you click **Enable Maintenance Mode** on a Harvester host, the target host may get stuck in the `Cordoned` state: the **Enable Maintenance Mode** menu becomes available again, while the expected **Disable Maintenance Mode** menu does not appear.

![node-stuck-cordoned.png](/img/v1.3/troubleshooting/node-stuck-cordoned.png)

When you check the Harvester pod log, you see repeated messages like the following:

```
time="2024-08-05T19:03:02Z" level=info msg="evicting pod longhorn-system/instance-manager-68cd2514dd3f6d59b95cbd865d5b08f7"
time="2024-08-05T19:03:02Z" level=info msg="error when evicting pods/\"instance-manager-68cd2514dd3f6d59b95cbd865d5b08f7\" -n \"longhorn-system\" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget."
time="2024-08-05T19:03:07Z" level=info msg="evicting pod longhorn-system/instance-manager-68cd2514dd3f6d59b95cbd865d5b08f7"
time="2024-08-05T19:03:07Z" level=info msg="error when evicting pods/\"instance-manager-68cd2514dd3f6d59b95cbd865d5b08f7\" -n \"longhorn-system\" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget."
time="2024-08-05T19:03:12Z" level=info msg="evicting pod longhorn-system/instance-manager-68cd2514dd3f6d59b95cbd865d5b08f7"
time="2024-08-05T19:03:12Z" level=info msg="error when evicting pods/\"instance-manager-68cd2514dd3f6d59b95cbd865d5b08f7\" -n \"longhorn-system\" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget."
```

The Longhorn `instance-manager` uses a PodDisruptionBudget (PDB) to protect itself from accidental eviction and to prevent volume data loss. When this error occurs, the `instance-manager` pod is still serving some volumes or replicas.
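
You can inspect those PodDisruptionBudgets directly. The following is a minimal sketch, assuming `kubectl` access to the cluster; the PDB name in the second command is a placeholder:

```
# List the PodDisruptionBudgets protecting Longhorn instance-manager pods
kubectl get pdb -n longhorn-system

# Inspect a specific PDB (replace the name with one returned above)
kubectl describe pdb <instance-manager-pdb-name> -n longhorn-system
```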

There are several known causes, each with a workaround.

### The Manually Attached Volume

When a Longhorn volume is attached to a host from the [Embedded Longhorn UI](./harvester.md#access-embedded-rancher-and-longhorn-dashboards), that volume causes the error above.

You can check this in the [Embedded Longhorn UI](./harvester.md#access-embedded-rancher-and-longhorn-dashboards).

![attached-volume.png](/img/v1.3/troubleshooting/attached-volume.png)

A manually attached volume is attached to a node name instead of a pod name.

You can also check the `VolumeAttachment` CRD objects from the CLI.
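
For example, a minimal sketch, assuming `kubectl` access to the cluster (the Longhorn CRD group is spelled out to avoid clashing with the Kubernetes `storage.k8s.io` `VolumeAttachment`):

```
# List Longhorn VolumeAttachment objects
kubectl get volumeattachments.longhorn.io -n longhorn-system

# Dump one object to see whether its ticket comes from longhorn-ui or the CSI driver
kubectl get volumeattachments.longhorn.io <volume-attachment-name> -n longhorn-system -o yaml
```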

A volume attached from the Longhorn UI:

```
- apiVersion: longhorn.io/v1beta2
  kind: VolumeAttachment
  ...
  spec:
    attachmentTickets:
      longhorn-ui:
        id: longhorn-ui
        nodeID: node-name
        ...
    volume: pvc-9b35136c-f59e-414b-aa55-b84b9b21ff89
```

A volume attached by the CSI driver:

```
- apiVersion: longhorn.io/v1beta2
  kind: VolumeAttachment
  spec:
    attachmentTickets:
      csi-b5097155cddde50b4683b0e659923e379cbfc3873b5b2ee776deb3874102e9bf:
        id: csi-b5097155cddde50b4683b0e659923e379cbfc3873b5b2ee776deb3874102e9bf
        nodeID: node-name
        ...
    volume: pvc-3c6403cd-f1cd-4b84-9b46-162f746b9667
```

:::note

It is not recommended to attach a volume to the host manually.

:::

#### Workaround 1: Set the Longhorn Option `Detach Manually Attached Volumes When Cordoned` to `true`

The Longhorn option [Detach Manually Attached Volumes When Cordoned](https://longhorn.io/docs/1.6.0/references/settings/#detach-manually-attached-volumes-when-cordoned) defaults to `false`, which blocks the node drain when a manually attached volume exists. Setting it to `true` lets Longhorn detach such volumes automatically when the node is cordoned.

This option is available starting with Harvester v1.3.1, which embeds Longhorn v1.6.0.

Starting with Harvester v1.4.0, this option is set to `true` by default.
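
If you prefer the CLI, the option can also be read and changed through the Longhorn `Setting` CRD. The setting name used below, `detach-manually-attached-volumes-when-cordoned`, is an assumption derived from the UI label; confirm it with the first command before patching:

```
# Confirm the exact setting name (assumed below) and its current value
kubectl get settings.longhorn.io -n longhorn-system | grep -i detach

# Set the assumed setting to true so manually attached volumes are detached on cordon
kubectl patch settings.longhorn.io detach-manually-attached-volumes-when-cordoned \
  -n longhorn-system --type merge -p '{"value":"true"}'
```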

#### Workaround 2: Manually Detach the Volume

Detach the volume from the [Embedded Longhorn UI](./harvester.md#access-embedded-rancher-and-longhorn-dashboards).

![detached-volume.png](/img/v1.3/troubleshooting/detached-volume.png)

After that, the node will enter maintenance mode successfully.

![node-enter-maintenance-mode.png](/img/v1.3/troubleshooting/node-enter-maintenance-mode.png)

### The Single-replica Volume

Harvester supports defining a customized `StorageClass`, and the [Number of Replicas](../advanced/storageclass.md#number-of-replicas) can even be set to **1** in some scenarios.

When such a volume has been attached to a host, whether by the CSI driver or by other means, the last and only replica stays on that node even after the volume is detached.

You can check this on the `Volume` CRD object:

```
- apiVersion: longhorn.io/v1beta2
  kind: Volume
  ...
  spec:
    ...
    numberOfReplicas: 1 # the replica number
    ...
  status:
    ...
    ownerID: nodeName
    ...
    state: attached
```
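
To spot single-replica volumes from the CLI, here is a minimal sketch, assuming `kubectl` access; the field paths mirror the example above:

```
# List Longhorn volumes with their replica count, owning node, and state
kubectl get volumes.longhorn.io -n longhorn-system \
  -o custom-columns=NAME:.metadata.name,REPLICAS:.spec.numberOfReplicas,OWNER:.status.ownerID,STATE:.status.state
```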

#### Workaround 1: Set Longhorn option `Node Drain Policy`

The Longhorn [Node Drain Policy](https://longhorn.io/docs/1.6.0/references/settings/#node-drain-policy) defaults to `block-if-contains-last-replica`, which blocks the drain when the node contains the last healthy replica of a volume.

Setting this option to `allow-if-replica-is-stopped` from the [Embedded Longhorn UI](./harvester.md#access-embedded-rancher-and-longhorn-dashboards) resolves this issue.
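
Alternatively, you can patch the corresponding Longhorn `node-drain-policy` setting from the CLI; a hedged sketch, assuming `kubectl` access:

```
# Check the current node drain policy
kubectl get settings.longhorn.io node-drain-policy -n longhorn-system -o jsonpath='{.value}'

# Allow draining a node whose last replica is stopped
kubectl patch settings.longhorn.io node-drain-policy \
  -n longhorn-system --type merge -p '{"value":"allow-if-replica-is-stopped"}'
```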

:::note important

If you plan to remove this node after it enters maintenance mode, it is recommended that you back up those single-replica volumes, or redeploy the related workloads to other nodes in advance so that the volumes are scheduled elsewhere. Those volumes cannot be rebuilt or restored from other nodes after this node is removed.

:::

Starting with Harvester v1.4.0, this option is set to `allow-if-replica-is-stopped` by default.
