fix: fix note block
Signed-off-by: PoAn Yang <[email protected]>
FrankYang0529 committed Jul 4, 2024
1 parent 729e5c8 commit 7a26ae2
Showing 1 changed file with 4 additions and 4 deletions.
docs/rancher/cloud-provider.md
@@ -59,7 +59,7 @@ When spinning up an RKE2 cluster using the Harvester node driver, select the `Ha

![](/img/v1.2/rancher/rke2-cloud-provider.png)

-### Deploying to the Custom RKE2 Cluster with Harvester Cloud Provider (Experimental)
+### Deploying to the RKE2 custom cluster (experimental)

![](/img/v1.2/rancher/custom.png)
1. Generate cloud config data using the script `generate_addon.sh`, and then place the data on every custom node (directory: `/etc/kubernetes/cloud-config`).
@@ -68,7 +68,7 @@ When spinning up an RKE2 cluster using the Harvester node driver, select the `Ha
curl -sfL https://raw.githubusercontent.com/harvester/cloud-provider-harvester/master/deploy/generate_addon.sh | bash -s <serviceaccount name> <namespace>
```

-:::note
+:::note

The script depends on `kubectl` and `jq` when operating the Harvester cluster, and functions only when given access to the `Harvester Cluster` kubeconfig file.

@@ -88,7 +88,7 @@ When spinning up an RKE2 cluster using the Harvester node driver, select the `Ha

You must specify the namespace in which the guest cluster will be created.

-:::
+:::
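
For instance, assuming the Harvester cluster kubeconfig is saved at `~/.kube/harvester.yaml` and the guest cluster will be created in the `default` namespace (both values are placeholders for illustration), the invocation might look like this:

```
# Point kubectl at the Harvester cluster first (the path below is a placeholder).
export KUBECONFIG=~/.kube/harvester.yaml

# Generate the cloud config for a service account named "harvester-cloud-provider"
# in the "default" namespace; substitute your own service account name and namespace.
curl -sfL https://raw.githubusercontent.com/harvester/cloud-provider-harvester/master/deploy/generate_addon.sh | bash -s harvester-cloud-provider default
```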

Example of output:

@@ -256,7 +256,7 @@ RKE/K3s upgrade cloud provider via the Rancher UI, as follows:

:::info

-The upgrade process for a [single-node guest cluster](../advanced/singlenodeclusters) may stall when the new `harvester-cloud-provider` pod is stuck in the *Pending* state. This issue is caused by a section in the `harvester-cloud-provider` deployment that describes the rolling update strategy. Specifically, the default value conflicts with the `podAntiAffinity` configuration in single-node clusters.
+The upgrade process for a [single-node guest cluster](../advanced/singlenodeclusters) may stall when the new `harvester-cloud-provider` pod is stuck in the *Pending* state. This issue is caused by a section in the `harvester-cloud-provider` deployment that describes the rolling update strategy. Specifically, the default value conflicts with the `podAntiAffinity` configuration in single-node clusters.

For more information, see [this GitHub issue comment](https://github.com/harvester/harvester/issues/5348#issuecomment-2055453709). To address the issue, manually delete the old `harvester-cloud-provider` pod. You might need to do this multiple times until the new pod can be successfully scheduled.
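
As a rough sketch, deleting the old pod could look like the following; the `kube-system` namespace is an assumption and may differ in your cluster:

```
# List the harvester-cloud-provider pods, then delete the old (still Running) pod
# so that the new Pending pod can be scheduled. Repeat if necessary.
kubectl -n kube-system get pods | grep harvester-cloud-provider
kubectl -n kube-system delete pod <old-harvester-cloud-provider-pod>
```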

