Comparing changes

This is a direct comparison between two commits made in this repository.
base repository: harvester/docs
base: 69fb03b988e56fb4e57c3bd3a9451d7e05f4202a
head repository: harvester/docs
compare: 5f0ec4c2cdc65373ff41aacd8fd9207bc37cbaa9
14 changes: 14 additions & 0 deletions docs/install/management-address.md
@@ -13,12 +13,18 @@ Description: The Harvester provides a virtual IP as the management address.

Harvester provides a fixed virtual IP (VIP) as the management address. The VIP must be different from any node IP. You can find the management address on the console dashboard after the installation.

The VIP is configured when the **first node** of the cluster is installed.

For example: ![Configure the VIP mode and IP address in ISO Installation](/img/v1.2/install/config-virtual-ip.png)

:::note

If you chose to configure the IP address via DHCP, you must configure a static MAC-to-IP address mapping on your DHCP server to obtain a persistent virtual IP.

:::
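For example, with a dnsmasq-based DHCP server (dnsmasq is an assumption here; the exact syntax depends on your DHCP server, and the MAC address and IP below are hypothetical), a static mapping might look like:

```conf
# /etc/dnsmasq.conf
# Always lease the same address to the NIC that answers for the VIP,
# so the VIP survives DHCP renewals and reboots.
dhcp-host=52:54:00:ab:cd:ef,192.168.0.131
```

Other DHCP servers (for example, ISC dhcpd) offer an equivalent host-reservation mechanism under a different syntax.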

After the node starts successfully, both the VIP and the node IP are shown on the console.

![](/img/v1.2/install/iso-installed.png)

## How to get the VIP MAC address
@@ -39,3 +45,11 @@ The management address:
- Allows access to the Harvester API/UI via the `HTTPS` protocol.
- Allows other nodes to join the cluster.
![](/img/v1.2/install/config-virtual-ip.png)

:::note

After the first node of the Harvester cluster is installed, you can configure the [ssl-certificates](../advanced/settings.md#ssl-certificates) setting, after which the cluster can be accessed via both the VIP and an FQDN.

Nodes installed afterward can also join the cluster via either the VIP or the FQDN. When using an FQDN, note the known issue [Unable to join the new node](https://github.com/harvester/harvester/issues/4511) and its workaround: https://github.com/harvester/harvester/issues/4511#issuecomment-1761047115

:::
6 changes: 3 additions & 3 deletions docs/install/requirements.md
@@ -44,13 +44,13 @@ Harvester nodes have the following network requirements for installation.

### IP Address Requirements for Harvester Nodes

Harvester is built on top of Kubernetes, each Node needs an independent IP address, this IP is used for the Node identity, it cannot change during the lifecycle of a Harvester cluster.
Harvester is built on top of Kubernetes, and each node needs an independent IP address. Harvester uses this IP address as the node's identity, and it cannot change during the lifecycle of a Harvester cluster.

### IP Address Requirements for Harvester Cluster

The Harvester cluster needs an additional IP address, it is used for the management IP of the whole cluster, which is also called Virtual IP (VIP).
The Harvester cluster needs an additional IP address called the Virtual IP (VIP), which serves as the management IP of the whole cluster.

For more details, please refer [management address](./management-address.md)
Please refer to [Management Address](./management-address.md) for more details.

### Port Requirements for Harvester Nodes

76 changes: 76 additions & 0 deletions docs/upgrade/v1-1-2-to-v1-2-0.md
@@ -407,3 +407,79 @@ Harvester does not package the `registry.suse.com/harvester-beta/vmdp:latest` im
- [[BUG] VMDP Image wrong after upgrade to Harvester 1.2.0](https://github.com/harvester/harvester/issues/4534)
---
### 9. Upgrade stuck in the Post-draining state
The node might be stuck in the OS upgrade process if you encounter the **Post-draining** state, as shown below.
![](/img/v1.2/upgrade/known_issues/stuck-in-post-draining.png)
Harvester uses `elemental upgrade` to upgrade the OS. You can check the `elemental upgrade` logs for errors with the following commands:
```bash
# View the post-drain job, which should be named `hvst-upgrade-xxx-post-drain-xxx`
$ kubectl get pod --selector=harvesterhci.io/upgradeJobType=post-drain -n harvester-system
# Check the logs with the following command
$ kubectl logs -n harvester-system pods/hvst-upgrade-xxx-post-drain-xxx
```

If you see the following error in the logs, the issue is caused by an incomplete `state.yaml`.

```bash
Flag --directory has been deprecated, 'directory' is deprecated please use 'system' instead
INFO[2023-09-13T12:02:42Z] Starting elemental version 0.3.1
INFO[2023-09-13T12:02:42Z] reading configuration form '/tmp/tmp.N6rn4F6mKM'
ERRO[2023-09-13T12:02:42Z] Invalid upgrade command setup undefined state partition
elemental upgrade failed with return code: 33
+ ret=33
+ '[' 33 '!=' 0 ']'
+ echo 'elemental upgrade failed with return code: 33'
+ cat /host/usr/local/upgrade_tmp/elemental-upgrade-20230913120242.log
```

In this case, Harvester upgrades elemental-cli to the latest version, which tries to find the `state` partition from `state.yaml`. If `state.yaml` is incomplete, it may fail to find the `state` partition.

An incomplete `state.yaml` looks like the following:

```yaml
# Autogenerated file by elemental client, do not edit

date: "2023-09-13T08:31:42Z"
state:
    # we are missing `label` here.
    active:
        source: dir:///tmp/tmp.01deNrXNEC
        label: COS_ACTIVE
        fs: ext2
    passive: null
```
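Before removing anything, you can sanity-check whether your file is the incomplete variant. The snippet below is a sketch, not part of the official workaround: it relies on the heuristic that a complete `state.yaml` carries a `label:` entry for the state partition itself in addition to the one nested under `active:`.

```shell
#!/bin/bash
# Heuristic check (a sketch): the incomplete state.yaml from this
# known issue contains only the `label:` line under `active:`, so a
# file with fewer than two `label:` lines is likely affected.
check_state_yaml() {
    if [ "$(grep -c 'label:' "$1")" -lt 2 ]; then
        echo "state.yaml looks incomplete"
    else
        echo "state.yaml looks complete"
    fi
}

f=/run/initramfs/cos-state/state.yaml
if [ -f "$f" ]; then
    check_state_yaml "$f"
fi
```

A complete file with more nesting could in principle carry additional `label:` lines, so treat the output as a hint rather than proof.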
To work around this issue, remove the incomplete `state.yaml` file. (Post-draining retries every 10 minutes.)

1. Remount the `state` partition to RW.

```bash
$ mount -o remount,rw /run/initramfs/cos-state
```

1. Remove the `state.yaml`.

```bash
$ rm -f /run/initramfs/cos-state/state.yaml
```

1. Remount the `state` partition to RO.

```bash
$ mount -o remount,ro /run/initramfs/cos-state
```

After performing the steps above, post-draining should succeed on the next retry.

- Related issues:
- [[BUG] Upgrade stuck with first node in Post-draining state](https://github.com/harvester/harvester/issues/4526)
- [A potential bug in NewElementalPartitionsFromList which caused upgrade error code 33](https://github.com/rancher/elemental-toolkit/issues/1827)
- Workaround:
- https://github.com/harvester/harvester/issues/4526#issuecomment-1732853216
17 changes: 16 additions & 1 deletion versioned_docs/version-v1.2/install/management-address.md
@@ -13,12 +13,18 @@ Description: The Harvester provides a virtual IP as the management address.

Harvester provides a fixed virtual IP (VIP) as the management address. The VIP must be different from any node IP. You can find the management address on the console dashboard after the installation.

The VIP is configured when the **first node** of the cluster is installed.

For example: ![Configure the VIP mode and IP address in ISO Installation](/img/v1.2/install/config-virtual-ip.png)

:::note

If you chose to configure the IP address via DHCP, you must configure a static MAC-to-IP address mapping on your DHCP server to obtain a persistent virtual IP.

:::

After the node starts successfully, both the VIP and the node IP are shown on the console.

![](/img/v1.2/install/iso-installed.png)

## How to get the VIP MAC address
@@ -34,8 +40,17 @@ Example of output:
```

## Usages

The management address:

- Allows access to the Harvester API/UI via the `HTTPS` protocol.
- Allows other nodes to join the cluster.
![](/img/v1.2/install/config-virtual-ip.png)
![](/img/v1.2/install/configure-management-address.png)

:::note

After the first node of the Harvester cluster is installed, you can configure the [ssl-certificates](../advanced/settings.md#ssl-certificates) setting, after which the cluster can be accessed via both the VIP and an FQDN.

Nodes installed afterward can also join the cluster via either the VIP or the FQDN. When using an FQDN, note the known issue [Unable to join the new node](https://github.com/harvester/harvester/issues/4511) and its workaround: https://github.com/harvester/harvester/issues/4511#issuecomment-1761047115

:::
6 changes: 3 additions & 3 deletions versioned_docs/version-v1.2/install/requirements.md
@@ -44,13 +44,13 @@ Harvester nodes have the following network requirements for installation.

### IP Address Requirements for Harvester Nodes

Harvester is built on top of Kubernetes, each Node needs an independent IP address, this IP is used for the Node identity, it cannot change during the lifecycle of a Harvester cluster.
Harvester is built on top of Kubernetes, and each node needs an independent IP address. Harvester uses this IP address as the node's identity, and it cannot change during the lifecycle of a Harvester cluster.

### IP Address Requirements for Harvester Cluster

The Harvester cluster needs an additional IP address, it is used for the management IP of the whole cluster, which is also called Virtual IP (VIP).
The Harvester cluster needs an additional IP address called the Virtual IP (VIP), which serves as the management IP of the whole cluster.

For more details, please refer [management address](./management-address.md)
Please refer to [Management Address](./management-address.md) for more details.

### Port Requirements for Harvester Nodes

77 changes: 77 additions & 0 deletions versioned_docs/version-v1.2/upgrade/v1-1-2-to-v1-2-0.md
@@ -407,3 +407,80 @@ Harvester does not package the `registry.suse.com/harvester-beta/vmdp:latest` im
- [[BUG] VMDP Image wrong after upgrade to Harvester 1.2.0](https://github.com/harvester/harvester/issues/4534)
---
### 9. Upgrade stuck in the Post-draining state
The node might be stuck in the OS upgrade process if you encounter the **Post-draining** state, as shown below.
![](/img/v1.2/upgrade/known_issues/stuck-in-post-draining.png)
Harvester uses `elemental upgrade` to upgrade the OS. You can check the `elemental upgrade` logs for errors with the following commands:
```bash
# View the post-drain job, which should be named `hvst-upgrade-xxx-post-drain-xxx`
$ kubectl get pod --selector=harvesterhci.io/upgradeJobType=post-drain -n harvester-system
# Check the logs with the following command
$ kubectl logs -n harvester-system pods/hvst-upgrade-xxx-post-drain-xxx
```

If you see the following error in the logs, the issue is caused by an incomplete `state.yaml`.

```bash
Flag --directory has been deprecated, 'directory' is deprecated please use 'system' instead
INFO[2023-09-13T12:02:42Z] Starting elemental version 0.3.1
INFO[2023-09-13T12:02:42Z] reading configuration form '/tmp/tmp.N6rn4F6mKM'
ERRO[2023-09-13T12:02:42Z] Invalid upgrade command setup undefined state partition
elemental upgrade failed with return code: 33
+ ret=33
+ '[' 33 '!=' 0 ']'
+ echo 'elemental upgrade failed with return code: 33'
+ cat /host/usr/local/upgrade_tmp/elemental-upgrade-20230913120242.log
```

In this case, Harvester upgrades elemental-cli to the latest version, which tries to find the `state` partition from `state.yaml`. If `state.yaml` is incomplete, it may fail to find the `state` partition.

An incomplete `state.yaml` looks like the following:

```yaml
# Autogenerated file by elemental client, do not edit

date: "2023-09-13T08:31:42Z"
state:
    # we are missing `label` here.
    active:
        source: dir:///tmp/tmp.01deNrXNEC
        label: COS_ACTIVE
        fs: ext2
    passive: null
```
To work around this issue, remove the incomplete `state.yaml` file. (Post-draining retries every 10 minutes.)

1. Remount the `state` partition to RW.

```bash
$ mount -o remount,rw /run/initramfs/cos-state
```

1. Remove the `state.yaml`.

```bash
$ rm -f /run/initramfs/cos-state/state.yaml
```

1. Remount the `state` partition to RO.

```bash
$ mount -o remount,ro /run/initramfs/cos-state
```

After performing the steps above, post-draining should succeed on the next retry.

- Related issues:
- [[BUG] Upgrade stuck with first node in Post-draining state](https://github.com/harvester/harvester/issues/4526)
- [A potential bug in NewElementalPartitionsFromList which caused upgrade error code 33](https://github.com/rancher/elemental-toolkit/issues/1827)
- Workaround:
- https://github.com/harvester/harvester/issues/4526#issuecomment-1732853216