- {this.checkKotsVeleroIncompatibility(selectedVersions.velero.version, selectedVersions.kotsadm.version) &&
- Version {selectedVersions.velero.version} is not compatible with KOTS {selectedVersions.kotsadm.version} }
-
+ );
+ }
+}
+
+export default AddOnWrapper;
diff --git a/src/components/shared/SidebarFileTree.js b/src/components/shared/SidebarFileTree.js
index 9880ac95..3741f1a7 100644
--- a/src/components/shared/SidebarFileTree.js
+++ b/src/components/shared/SidebarFileTree.js
@@ -156,7 +156,7 @@ export default class SidebarFileTree extends Component {
onClick={this.onLinkClick}
data-path={entry.path}
>
- {entry.linktitle || entry.title} {entry.isAlpha && alpha} {entry.isBeta && beta}
+ {entry.linktitle || entry.title} {entry.isAlpha && alpha} {entry.isBeta && beta} {entry.isDeprecated && deprecated}
);
diff --git a/src/markdown-pages/add-on-author/index.md b/src/markdown-pages/add-on-author/index.md
deleted file mode 100644
index 1628eefb..00000000
--- a/src/markdown-pages/add-on-author/index.md
+++ /dev/null
@@ -1,63 +0,0 @@
----
-path: "/docs/add-on-author/"
-date: "2019-10-15"
-linktitle: "Overview"
-weight: 40
-title: "Contributing an Add-On"
----
-
-## Structure
-
-New add-ons should be added to a source directory following the format `/addons//`.
-That directory must have at least two files: `install.sh` and `Manifest`.
-
-The Manifest file specifies a list of images required for the add-on.
-These will be pulled during CI and saved to the directory `/addons///images/`.
-
-The install.sh script must define a function named `` that will perform the install.
-For example, `/addons/weave/2.5.2/install.sh` defines the function named `weave`.
-
-Most add-ons include yaml files that will be applied to the cluster.
-These should be copied to the directory `kustomize/` and applied with `kubectl apply -k` rather than applied directly with `kubectl apply -f`.
-This will allow users to easily review all applied yaml, add their own patches and re-apply after the script completes.
-
-All files and directories in the add-on's source directory will be included in the package built for the add-on.
-The package will be built and uploaded to `s3://kurl-sh/dist/-.tar.gz` during CI.
-It can be downloaded directly from S3 or by redirect from `https://kurl.sh/dist/-.tar.gz`.
-The built package will include the images from the Manifest saved as tar archives.
-
-The install.sh script may also define the functions `_pre_init` and `_join`.
-The pre_init function will be called prior to initializing the Kubernetes cluster.
-This may be used to modify the configuration that will be passed to `kubeadm init`.
-The join function will be called on remote nodes before they join the cluster and is useful for host configuration tasks such as loading a kernel module.
-
-## Runtime
-
-For online installs, the add-on package will be downloaded and extracted at runtime.
-For airgap installs, the add-on package will already be included in the installer bundle.
-
-The [add-on](https://github.com/replicatedhq/kurl/blob/master/scripts/common/addon.sh) function in kURL will first load all images from the add-on's `images/` directory and create the directory `/kustomize/`.
-It will then dynamically source the `install.sh` script and execute the function named ``.
-
-## Developing Add-ons
-
-The `DIR` env var will be defined to the install root.
-Any yaml that is ready to be applied unmodified should be copied from the add-on directory to the kustomize directory.
-```
-cp "$DIR/addons/weave/2.5.2/kustomization.yaml" "$DIR/kustomize/weave/kustomization.yaml"
-```
-
-The [`insert_resources`](https://github.com/replicatedhq/kurl/blob/5e6c9549ad6410df1f385444b83eabaf42a7e244/scripts/common/yaml.sh#L29) function can be used to add an item to the resources object of a kustomization.yaml:
-```
-insert_resources "$DIR/kustomize/weave/kustomization.yaml" secret.yaml
-```
-
-The [`insert_patches_strategic_merge`](https://github.com/replicatedhq/kurl/blob/5e6c9549ad6410df1f385444b83eabaf42a7e244/scripts/common/yaml.sh#L18) function can be used to add an item to the `patchesStrategicMerge` object of a kustomization.yaml:
-```
-insert_patches_strategic_merge "$DIR/kustomize/weave/kustomization.yaml" ip-alloc-range.yaml
-```
-
-The [`render_yaml_file`](https://github.com/replicatedhq/kurl/blob/5e6c9549ad6410df1f385444b83eabaf42a7e244/scripts/common/yaml.sh#L14) function can be used to substitute env vars in a yaml file at runtime:
-```
-render_yaml_file "$DIR/addons/weave/2.5.2/tmpl-secret.yaml" > "$DIR/kustomize/weave/secret.yaml"
-```
diff --git a/src/markdown-pages/add-ons/antrea.md b/src/markdown-pages/add-ons/antrea.md
deleted file mode 100644
index 2969a685..00000000
--- a/src/markdown-pages/add-ons/antrea.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-path: "/docs/add-ons/antrea"
-date: "2020-05-13"
-linktitle: "Antrea"
-weight: 30
-title: "Antrea Add-On"
-addOn: "antrea"
----
-
-[Antrea](https://antrea.io/) implements the Container Network Interface (CNI) to enable pod networking in a Kubernetes cluster.
-It also functions as a NetworkPolicy controller to optionally enforce security at the network layer.
-Antrea is implemented with Open vSwitch and IPSec.
-
-By default, Antrea [encrypts traffic](https://antrea.io/docs/v1.4.0/docs/traffic-encryption/) between nodes.
-kURL does not install the necessary kernel modules to enable traffic encryption.
-An installer is blocked if encryption is enabled and the host does not have the required `wireguard` module installed.
-If you do not want to install `wireguard` manually, you can disable encryption by setting `isEncryptionDisabled` to `true`.
-
-## Advanced Install Options
-
-```yaml
-spec:
- antrea:
- version: "1.4.0"
- isEncryptionDisabled: true
- podCIDR: "10.32.0.0/22"
- podCidrRange: "/22"
-```
-
-flags-table
diff --git a/src/markdown-pages/add-ons/aws.md b/src/markdown-pages/add-ons/aws.md
deleted file mode 100644
index 46c26975..00000000
--- a/src/markdown-pages/add-ons/aws.md
+++ /dev/null
@@ -1,84 +0,0 @@
----
-path: "/docs/add-ons/aws"
-date: "2022-05-10"
-linktitle: "AWS"
-weight: 31
-title: "AWS Add-On"
-addOn: "aws"
-isBeta: true
----
-
-The AWS add-on enables the Kubernetes control plane to be configured to use the [Amazon Web Services (AWS) cloud provider integration](https://cloud-provider-aws.sigs.k8s.io/). For more information about these components, see the [Kubernetes `cloud-provider-aws`](https://github.com/kubernetes/cloud-provider-aws/#components) repository. For information about the kubeadm add-on, see [Kubernetes (kubeadm) Add-On](/docs/addon-ons/kubernetes).
-
-This integration, provided by Kubernetes, creates an interface between the Kubernetes cluster and specific AWS APIs. This enables the:
-
-- Dynamic provisioning of Elastic Block Store (EBS) volumes. See [Amazon Elastic Block Store (EBS)](https://aws.amazon.com/ebs/) in the AWS documentation.
-- Image retrieval from Elastic Container Registry (ECR). See [Amazon Elastic Container Registry](https://aws.amazon.com/ecr/) in the AWS documentation.
-- Dynamic provisioning and configuration of Elastic Load Balancers (ELBs) for exposing Kubernetes Service objects. [Amazon Elastic Load Balancers](https://aws.amazon.com/elasticloadbalancing/) in the AWS documentation.
-
-For more information about the AWS cloud provider, see [AWS Cloud Provider](https://cloud-provider-aws.sigs.k8s.io/) in the Kubernetes documentation.
-
-## Prerequisite
-### IAM Roles and Policies
-The AWS cloud provider performs some tasks on behalf of the operator, such as creating an ELB or an EBS volume. Considering this, you must create identity and access management (IAM) policies in your AWS account.
-
-For more information about AWS IAM, see [AWS Identity and Access Management (IAM)](https://aws.amazon.com/iam/) in the AWS documentation.
-
-For more information about the required permissions for Amazon Web Services (AWS) cloud provider integration, see the [Prerequisites](https://kubernetes.github.io/cloud-provider-aws/prerequisites/) section in the Kubernetes documentation.
-
-### Applying Policies by Tagging AWS Resources
-After the prerequisite policies are created, you must assign them to the appropriate resources in your AWS account.
-
-For more information about AWS policies prerequisites, see [Prerequisites](https://kubernetes.github.io/cloud-provider-aws/prerequisites/) in the Kubernetes documentation.
-
-For more information about tagging, see [AWS Documentation: Tagging your Amazon EC2 Resources](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html) in the AWS documentation.
-
-The following resources are discovered and managed only after the tags are assigned:
-
-- **EC2 instances:** The Elastic Compute Cloud (EC2) instances used for the kURL cluster. See [Elastic Compute Cloud (EC2)](https://aws.amazon.com/ec2/) in the AWS documentation.
-- **Security Groups:** The security groups used by the nodes in the kURL cluster.
-- **Subnet:** The subnets used by the kURL cluster.
-- **VPC:** The VPC used by the kURL cluster.
-
-These resources must have a tag with the key of `kubernetes.io/cluster/`. By default, the Kubernetes add-on uses the cluster name `kubernetes`. The value for this key is `owned`. Alternatively, if you choose to share resources between clusters, the value `shared` can be used. For more information about the Kubernetes add-on, see [Advanced Install Options](https://kurl.sh/docs/add-ons/kubernetes#advanced-install-options) in _Kubernetes add-on_.
-
-
-## Requirements and Limitations
-### Supported Configurations
-
-The AWS add-on is supported only:
-
-- When the cluster created by kURL is installed on an AWS EC2 instance.
-- With the Kubernetes (kubeadm) add-on. See [Kubernetes (kubeadm) add-on](/docs/addon-ons/kubernetes).
-
-The AWS add-on is **not** supported for the K3s or RKE2 add-ons. See [K3s](/docs/addon-ons/k3s) and [RKE2](/docs/addon-ons/rke2).
-
-### AWS ELB and Kubernetes LoadBalancer Service Requirements
-
-There are additional requirements when creating a `LoadBalancer` service:
-
-- The AWS cloud provider requires that a minimum of two nodes are available in the cluster and that one of the nodes is assigned the `worker` node role to use this integration. See [Kubernetes AWS Cloud Provider](https://cloud-provider-aws.sigs.k8s.io/) in the Kubernetes documentation. **Failure to have two nodes available, one of which is a worker node, will require an AWS administrator for your account to manually register the ELB in the AWS management console.**
-
-- When creating a `LoadBalancer` service where there is more than one security group attached to your cluster nodes, you must tag only one of the security groups as `owned` so that Kubernetes knows which group to add and remove rules from. A single, untagged security group is allowed, however, sharing this untagged security group between clusters is not recommended.
-
-- Kubernetes uses subnet tagging to attempt to discover the correct subnet for the `LoadBalancer` service. This requires that these internet-facing and internal AWS ELB resources are properly tagged in your AWS account to operate successfully. For more information about AWS subnet tagging, see [AWS Documentation: Subnet tagging for load balancers](https://docs.aws.amazon.com/eks/latest/userguide/load-balancing.html#subnet-tagging-for-load-balancers) in the AWS documentation.
-
-
-## Advanced Install Options
-
-The following example shows the exclusion of AWS-EBS provisioner storage class provided by the AWS add-on:
-
-```yaml
-spec:
- aws:
- version: 0.1.0
- excludeStorageClass: false
-```
-
-flags-table
-
-## Using a Volume Provisioner with the AWS Add-On
-
-When the AWS add-on is enabled, you do not need to add a volume provisioner add-on to the kURL specification because you can use the default AWS EBS volume provisioner.
-
-For more information about the AWS EBS volume provisioner, see [Amazon Elastic Block Store (EBS)](https://aws.amazon.com/ebs/) in the the AWS documentation.
diff --git a/src/markdown-pages/add-ons/collectd.md b/src/markdown-pages/add-ons/collectd.md
index cf7c307e..79ec386e 100644
--- a/src/markdown-pages/add-ons/collectd.md
+++ b/src/markdown-pages/add-ons/collectd.md
@@ -8,6 +8,23 @@ addOn: "collectd"
---
[collectd](https://collectd.org/) gathers system statistics on kURL hosts to track system health and find performance bottlenecks.
+## Host Package Requirements
+
+The following host packages are required for Red Hat Enterprise Linux 9 and Rocky Linux 9:
+
+- bash
+- glibc
+- libcurl
+- libcurl-minimal
+- libgcrypt
+- libgpg-error
+- libmnl
+- openssl-libs
+- rrdtool
+- systemd
+- systemd-libs
+- yajl
+
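+As a quick way to check a host before installing, the following sketch queries the RPM database for the packages listed above; any package reported as "not installed" can then be added with `dnf install`.
+
+```bash
+# Check which of the required collectd host packages are present (RHEL 9 / Rocky Linux 9).
+rpm -q bash glibc libcurl libcurl-minimal libgcrypt libgpg-error \
+  libmnl openssl-libs rrdtool systemd systemd-libs yajl
+```
+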
## Advanced Install Options
```yaml
diff --git a/src/markdown-pages/add-ons/containerd.md b/src/markdown-pages/add-ons/containerd.md
index a1e1098a..d8f4aa18 100644
--- a/src/markdown-pages/add-ons/containerd.md
+++ b/src/markdown-pages/add-ons/containerd.md
@@ -13,6 +13,19 @@ As CentOS, RHEL and Oracle Linux 8.x do not support Docker, the Containerd CRI i
Containerd 1.4.8+ has dropped support for Ubuntu 16.04.
+### About Containerd Upgrades
+
+If you plan to upgrade Containerd, see [About Containerd Upgrades](/docs/install-with-kurl/upgrading#about-containerd-upgrades) for guidance and best practices.
+
+## Host Package Requirements
+
+The following host packages are required for Red Hat Enterprise Linux 9 and Rocky Linux 9:
+
+- bash
+- libseccomp
+- libzstd
+- systemd
+
## Advanced Install Options
```yaml
diff --git a/src/markdown-pages/add-ons/docker.md b/src/markdown-pages/add-ons/docker.md
index d075d944..be529893 100644
--- a/src/markdown-pages/add-ons/docker.md
+++ b/src/markdown-pages/add-ons/docker.md
@@ -5,12 +5,23 @@ linktitle: "Docker"
weight: 36
title: "Docker Add-On"
addOn: "docker"
+isDeprecated: true
---
+## Deprecation Notice
+
+### This add-on is deprecated.
+
+As of March 27, 2023, the Docker add-on is deprecated. The Docker add-on might be removed from kURL after September 30, 2023. Existing installations that use the Docker add-on are supported during this deprecation window. Kubernetes 1.24.0 and later does not support Docker. We recommend that you remove the Docker add-on on or before September 30, 2023 and instead use the [Containerd](https://kurl.sh/docs/add-ons/containerd) add-on.
+
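+For reference, a minimal kURL spec that uses Containerd as the container runtime in place of the Docker add-on might look like the following sketch; the version pins are illustrative.
+
+```yaml
+spec:
+  kubernetes:
+    version: "1.26.x"
+  flannel:
+    version: latest
+  containerd:
+    version: latest
+```
+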
+## Summary
+
+
Docker is a CRI (Container Runtime Interface).
If Docker is not used, an alternative CRI must be used in its place.
See [containerd documentation](/docs/add-ons/containerd) for more information.
-Kubenetes 1.24.0+ does not support Dockershim, therefore you must use an alternative CRI, such as [containerd](/docs/add-ons/containerd), instead.
+
+For disk requirements, see [Add-on Directory Disk Space Requirements](/docs/install-with-kurl/system-requirements/#add-on-directory-disk-space-requirements).
## Advanced Install Options
diff --git a/src/markdown-pages/add-ons/ekco.md b/src/markdown-pages/add-ons/ekco.md
index 57dc3eb3..0ab8a64a 100644
--- a/src/markdown-pages/add-ons/ekco.md
+++ b/src/markdown-pages/add-ons/ekco.md
@@ -9,7 +9,7 @@ addOn: "ekco"
The [EKCO](https://github.com/replicatedhq/ekco) add-on is a utility tool to perform maintenance operations on a kURL cluster.
-The kURL add-on installs the EKCO operator into the kURL namespace.
+Since kURL v2023.04.10-0, the latest version of EKCO is installed in every kURL cluster, even if EKCO is not included in the spec or an older version is specified.
## Advanced Install Options
@@ -29,6 +29,8 @@ spec:
enableInternalLoadBalancer: true
shouldDisableRestartFailedEnvoyPods: false
envoyPodsNotReadyDuration: 5m
+ minioShouldDisableManagement: false
+ kotsadmShouldDisableManagement: false
```
flags-table
@@ -90,6 +92,8 @@ Global Flags:
--log_level string Log level (default "info")
```
+⚠️ _**Warning**:_ Purging a node is an irrevocable operation: it permanently removes the node from the cluster with the expectation that it will never become a member again.
+
### Rook
The EKCO operator is responsible for appending nodes to the CephCluster `storage.nodes` setting to include the node in the list of nodes used by Ceph for storage. This operation only appends nodes. Removing nodes is done during the purge.
@@ -110,9 +114,31 @@ This has been added to work around a [known issue](https://github.com/projectcon
This functionality can be disabled by setting the `ekco.shouldDisableRestartFailedEnvoyPods` property to `true`.
The duration can be adjusted by changing the `ekco.envoyPodsNotReadyDuration` property.
+### MinIO
+
+When you install kURL with `ekco.minioShouldDisableManagement` set to `false`, the EKCO operator manages data in the MinIO deployment to ensure that the data is properly replicated and has high availability.
+
+To manage data in MinIO, the EKCO operator first enables a high availability six-replica StatefulSet when at least three nodes are healthy and the OpenEBS localpv storage class is available.
+
+Then, EKCO migrates data from the original MinIO deployment to the StatefulSet before deleting the data.
+MinIO is temporarily unavailable while the data migration is in progress.
+If this migration fails, it will be retried but MinIO will remain offline until it succeeds.
+
+After data has been migrated to the StatefulSet, EKCO ensures that replicas are evenly distributed across nodes.
+
+To disable EKCO's management of data in MinIO, set `ekco.minioShouldDisableManagement` to `true`.
+
+### Kotsadm
+
+When you install kURL with `ekco.kotsadmShouldDisableManagement` set to `false`, the EKCO operator ensures that necessary KOTS components run with multiple replicas for high availability.
+
+For Kotsadm v1.89.0+, the EKCO operator enables a high availability three-replica StatefulSet for the database when at least three nodes are healthy and the OpenEBS localpv storage class is available.
+
+To disable EKCO's management of Kotsadm components, set `ekco.kotsadmShouldDisableManagement` to `true`.
+
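+For example, a spec that opts out of both behaviors includes the following; this is a sketch of just the relevant EKCO fields rather than a complete installer.
+
+```yaml
+spec:
+  ekco:
+    version: latest
+    minioShouldDisableManagement: true
+    kotsadmShouldDisableManagement: true
+```
+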
### TLS Certificate Rotation
-EKCO supports automatic certificate rotation for the [registry add-on](/docs/install-with-kurl/setup-tls-certs#registry) and the [Kubernetes control plane](/docs/install-with-kurl/setup-tls-certs#kubernetes-control-plane) since version 0.5.0 and for the [KOTS add-on](/docs/install-with-kurl/setup-tls-certs#kots-tls-certificate-renewal) since version 0.7.0.
+EKCO supports automatic certificate rotation for the [registry add-on](/docs/install-with-kurl/setup-tls-certs#registry) and the [Kubernetes control plane](/docs/install-with-kurl/setup-tls-certs#kubernetes-control-plane) since version 0.5.0 and for the KOTS add-on since version 0.7.0. For more information about automatic certificate rotation for the KOTS add-on, which is used by the Replicated app manager, see [Using TLS Certificates](https://docs.replicated.com/vendor/packaging-using-tls-certs) in the Replicated documentation.
### Internal Load Balancer
diff --git a/src/markdown-pages/add-ons/flannel.md b/src/markdown-pages/add-ons/flannel.md
new file mode 100644
index 00000000..26ae2c42
--- /dev/null
+++ b/src/markdown-pages/add-ons/flannel.md
@@ -0,0 +1,122 @@
+---
+path: "/docs/add-ons/flannel"
+date: "2022-10-19"
+linktitle: "Flannel"
+weight: 38
+title: "Flannel Add-On"
+addOn: "flannel"
+---
+
+[Flannel](https://github.com/flannel-io/flannel) implements the Container Network Interface (CNI) to enable pod networking in a Kubernetes cluster.
+Flannel runs a small, single-binary agent called flanneld in a Pod on each host, which is responsible for allocating a subnet lease to each host out of a larger, preconfigured address space.
+Flannel uses the Kubernetes API directly to store the network configuration, the allocated subnets, and any auxiliary data (such as the host's public IP).
+Packets are forwarded using VXLAN encapsulation.
+
+## Advanced Install Options
+
+```yaml
+spec:
+ flannel:
+ version: "0.20.0"
+ podCIDR: "10.32.0.0/22"
+ podCIDRRange: "/22"
+```
+
+flags-table
+
+## System Requirements
+
+The following additional ports must be open between nodes for multi-node clusters:
+
+#### Primary Nodes:
+
+| Protocol | Direction | Port Range | Purpose | Used By |
+| ------- | --------- | ---------- | ----------------------- | ------- |
+| UDP | Inbound | 8472 | Flannel VXLAN | All |
+
+#### Secondary Nodes:
+
+| Protocol | Direction | Port Range | Purpose | Used By |
+| ------- | --------- | ---------- | ----------------------- | ------- |
+| UDP | Inbound | 8472 | Flannel VXLAN | All |
+
+## Firewalls
+
+When using a stateless packet filtering firewall, it is common to allow return traffic for outgoing TCP connections by accepting packets with the TCP "ack" flag and a destination port in the range 32768-65535, which is the default range specified by the kernel in `/proc/sys/net/ipv4/ip_local_port_range`.
+
+Flannel uses a larger range when doing SNAT, so this range must be expanded to 1024-65535.
+
+An example rule is as follows:
+
+| Name | Source IP | Destination IP | Source port | Destination port | Protocol | TCP flags | Action |
+| ---- | --------- | -------------- | ----------- | ---------------- | -------- | --------- | ------ |
+| Allow outgoing TCP | 0.0.0.0/0 | 0.0.0.0/0 | 0-65535 | 1024-65535 | tcp | ack | accept |
+
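+As a sketch only, and assuming an `iptables`-based host firewall, the rule above could be expressed as follows; adapt it to your own firewall tooling and existing rule set.
+
+```bash
+# Accept return traffic for outgoing TCP connections on the expanded 1024-65535 range.
+iptables -A INPUT -p tcp -m tcp --dport 1024:65535 --tcp-flags ACK ACK -j ACCEPT
+```
+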
+## Custom Pod Subnet
+
+The Pod subnet defaults to `10.32.0.0/20` if that range is available.
+If it is not available, the installer attempts to find an available `/20` range in the `10.32.0.0/16` or `10.0.0.0/8` address spaces.
+This can be overridden using `podCIDR` to specify an explicit address space, or `podCIDRRange` to specify a different prefix length.
+
+## Limitations
+
+* Flannel is not compatible with the Docker container runtime
+* Network Policies are not supported
+* IPv6 and dual stack networks are not supported
+* Encryption is not supported
+
+## Migration from Weave
+
+You must use the Containerd CRI runtime when migrating from Weave to Flannel. For more information about limitations, see [Limitations](#limitations).
+
+The migration process results in downtime for the entire cluster because Weave must be removed before Flannel can be installed.
+Every pod in the cluster is also deleted and then recreated, to receive new IP addresses allocated by Flannel.
+
+The migration is performed by rerunning the installer with Flannel v0.20.2+ as the configured CNI.
+The user is presented with a prompt to confirm the migration:
+
+```bash
+The migration from Weave to Flannel will require whole-cluster downtime.
+Would you like to continue? (Y/n)
+```
+
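+For reference, rerunning the installer for the migration means using a spec in which Flannel replaces Weave as the CNI; a minimal sketch (versions are illustrative) looks like the following.
+
+```yaml
+spec:
+  kubernetes:
+    version: "1.26.x"
+  flannel:
+    version: "0.20.2" # must be 0.20.2 or later for the migration
+  containerd:
+    version: latest
+```
+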
+If there are additional nodes in the cluster, the user is prompted to run a command on each node.
+
+For additional primary nodes, the command looks similar to the following example:
+
+```bash
+Moving primary nodes from Weave to Flannel requires removing certain weave files and restarting kubelet.
+Please run the following command on each of the listed primary nodes:
+
+
+
+
+ curl -fsSL https://kurl.sh/version///tasks.sh | sudo bash -s weave-to-flannel-primary cert-key=
+
+Once this has been run on all nodes, press enter to continue.
+```
+
+For secondary nodes, the command looks similar to the following example:
+
+```bash
+Moving from Weave to Flannel requires removing certain weave files and restarting kubelet.
+Please run the following command on each of the listed secondary nodes:
+
+
+
+
+
+ curl -fsSL https://kurl.sh/version///tasks.sh | sudo bash -s weave-to-flannel-secondary
+
+Once this has been run on all nodes, press enter to continue.
+```
+
+After these scripts run, the migration takes several additional minutes to recreate the pods in the cluster.
+
+**Important:** Migrating from Weave to Flannel requires careful attention. During the migration there is a period when the cluster operates without a Container Network Interface.
+If an unexpected error interrupts the migration, the cluster can be left in an unhealthy state.
+In that case, re-run the installation to complete any pending tasks.
+If the migration fails and re-running the installation also fails during the initial cluster health assessment, use the `-s host-preflight-ignore` option.
+This option instructs the installer to disregard host preflight failures and warnings, which can help the migration proceed.
diff --git a/src/markdown-pages/add-ons/fluentd.md b/src/markdown-pages/add-ons/fluentd.md
index 2c8ea96f..5548ffc5 100644
--- a/src/markdown-pages/add-ons/fluentd.md
+++ b/src/markdown-pages/add-ons/fluentd.md
@@ -15,6 +15,8 @@ To use a vendor supplied fluent.conf, simply create a configuration and include
There is also an optional [Elasticsearch](https://www.elastic.co/elasticsearch/) and [Kibana](https://www.elastic.co/kibana) integration for complete EFK logging stack and visualization.
+The kURL Fluentd implementation does not support plugins. You can use it to forward logs to a syslog sink or to explore logs in Elasticsearch.
+
Elasticsearch requires 1gb of memory for stability. Default storage is set to 20GB. Log rotation is not done by default. It uses the existing Rook/Ceph setups to handle the persistent volume claims.
## Advanced Install Options
diff --git a/src/markdown-pages/add-ons/k3s.md b/src/markdown-pages/add-ons/k3s.md
deleted file mode 100644
index 5fa5a413..00000000
--- a/src/markdown-pages/add-ons/k3s.md
+++ /dev/null
@@ -1,47 +0,0 @@
----
-path: "/docs/add-ons/k3s"
-date: "2021-02-18"
-linktitle: "K3s"
-weight: 42
-title: "K3s Add-On"
-addOn: "k3s"
-isBeta: true
----
-
-[K3s](https://k3s.io/) is a lightweight Kubernetes distribution built by Rancher for Internet of Things (IoT) and edge computing.
-
-Rather than using the [Kubernetes add-on](/docs/add-ons/kubernetes), which uses `kubeadm` to install Kubernetes, the K3s add-on can be used to install the K3s distribution. This distribution includes Kubernetes as well as several add-ons for networking, ingress, and more.
-
-There are several reasons to use K3s instead of Kubernetes (`kubeadm`). The main reason is that K3s is simpler than upstream Kubernetes, so it is easier to support. K3s is packaged as a single binary that is less than 50MB. It reduces the dependencies and steps needed to install, run, and update Kubernetes, as compared to `kubeadm`. In addition, K3s has lower [CPU and RAM requirements](https://rancher.com/docs/k3s/latest/en/installation/installation-requirements/#hardware).
-
-By default, K3s uses `sqlite3` as the storage backend instead of etcd.
-
-## Operating System Compatibility
-The K3s add-on is currently supported on CentOS and Ubuntu.
-
-## Add-On Compatibility
-The following are included by default with K3s:
-* Flannel (CNI)
-* CoreDNS
-* Traefik (Ingress)
-* Kube-proxy
-* [Metrics Server](/docs/add-ons/metrics-server)
-* [containerd](/docs/add-ons/containerd) (CRI)
-* Local path provisioner (CSI)
-
-K3s has been tested with the following add-ons:
-* [KOTS](/docs/add-ons/kotsadm)
-* [MinIO](/docs/add-ons/minio)
-* [OpenEBS](/docs/add-ons/openebs)
-* [Registry](/docs/add-ons/registry)
-* [Rook](/docs/add-ons/rook)
-
-## Limitations
-Because K3s support is currently in beta, there are several limitations:
-* Joining additional nodes is not supported.
-* Upgrading from one version of K3s to another is not supported.
-* Selecting a CRI or CNI provider is not supported because containerd and Flannel are already included.
-* The NodePort range in K3s is 30000-32767, so `kotsadm.uiBindPort` must be set to something in this range.
-* While Rook and OpenEBS have been tested, they are not recommended.
-* Due to limitations with Velero and Restic, volumes provisioned using the default Local Path Provisioner (or volumes based on host paths) cannot be snapshotted.
-* While K3s has experimental support for SELinux, this cannot currently be enabled through kURL, so SELinux should be disabled on the host.
diff --git a/src/markdown-pages/add-ons/kotsadm.md b/src/markdown-pages/add-ons/kotsadm.md
index 63dfa229..64d4ac04 100644
--- a/src/markdown-pages/add-ons/kotsadm.md
+++ b/src/markdown-pages/add-ons/kotsadm.md
@@ -56,7 +56,7 @@ spec:
version: latest
containerd:
version: latest
- weave:
+ flannel:
version: latest
rook:
version: latest
@@ -78,12 +78,12 @@ spec:
version: latest
containerd:
version: latest
- weave:
+ flannel:
version: latest
openebs:
version: latest
isLocalPVEnabled: true
- localPVStorageClassName: default
+ localPVStorageClassName: local
minio:
version: latest
registry:
@@ -102,10 +102,12 @@ spec:
version: latest
docker:
version: latest
- weave:
+ flannel:
version: latest
- longhorn:
+ openebs:
version: latest
+ isLocalPVEnabled: true
+ localPVStorageClassName: local
registry:
version: latest
kotsadm:
diff --git a/src/markdown-pages/add-ons/kubernetes.md b/src/markdown-pages/add-ons/kubernetes.md
index 966858e9..51fb4bdc 100644
--- a/src/markdown-pages/add-ons/kubernetes.md
+++ b/src/markdown-pages/add-ons/kubernetes.md
@@ -10,8 +10,17 @@ addOn: "kubernetes"
[Kubernetes](https://kubernetes.io/) is installed using [`kubeadm`](https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/), the cluster management tool built by the core Kubernetes team and owned by `sig-cluster-lifecycle`.
`kubeadm` brings up the Kubernetes control plane before other add-ons are applied.
-In addition to supporting Kubernetes using `kubeadm`, kURL can install [RKE2](/docs/add-ons/rke2) and [K3s](/docs/add-ons/k3s).
-Support for both of these distributions is in beta. For more information about limitations and instructions, see the respective add-on pages.
+## Host Package Requirements
+
+The following host packages are required for Red Hat Enterprise Linux 9 and Rocky Linux 9:
+
+- conntrack-tools
+- ethtool
+- glibc
+- iproute
+- iptables-nft
+- socat
+- util-linux
## Advanced Install Options
diff --git a/src/markdown-pages/add-ons/kurl.md b/src/markdown-pages/add-ons/kurl.md
index b35fc3c5..7e993852 100644
--- a/src/markdown-pages/add-ons/kurl.md
+++ b/src/markdown-pages/add-ons/kurl.md
@@ -6,7 +6,7 @@ weight: 45
title: "Kurl Add-On"
addOn: "kurl"
---
-
+For disk space requirements, see [Core Directory Disk Space Requirements](/docs/install-with-kurl/system-requirements#core-directory-disk-space-requirements).
## Advanced Install Options
diff --git a/src/markdown-pages/add-ons/longhorn.md b/src/markdown-pages/add-ons/longhorn.md
index 7e4cfc30..142e15c7 100644
--- a/src/markdown-pages/add-ons/longhorn.md
+++ b/src/markdown-pages/add-ons/longhorn.md
@@ -5,14 +5,31 @@ linktitle: "Longhorn"
weight: 46
title: "Longhorn Add-On"
addOn: "longhorn"
+isDeprecated: true
---
+## Deprecation Notice
+
+### This add-on is deprecated.
+
+As of March 27, 2023, the Longhorn add-on is deprecated. The Longhorn add-on might be removed from kURL after September 30, 2023. Existing installations that use the Longhorn add-on are supported during this deprecation window. We recommend that you migrate from the Longhorn add-on to either the [OpenEBS](https://kurl.sh/docs/add-ons/openebs) or [Rook](https://kurl.sh/docs/add-ons/rook) add-on on or before September 30, 2023. For more information about migrating from Longhorn, see [Migrating to Change CSI Add-On](https://kurl.sh/docs/install-with-kurl/migrating-csi).
+
+
+## Summary
+
[Longhorn](https://longhorn.io/) is a CNCF Sandbox Project originally developed by Rancher labs as a “lightweight, reliable and easy-to-use distributed block storage system for Kubernetes”. Longhorn uses a microservice-based architecture to create a pod for every Custom Resource in the Longhorn ecosystem: Volumes, Replicas, a control plane, a data plane, etc.
-Longhorn uses the `/var/lib/longhorn` directory for storage on all nodes.
-This directory should have enough space to hold a complete copy of every PersistentVolumeClaim that will be in the cluster.
+Longhorn uses the `/var/lib/longhorn` directory for storage on all nodes. For disk requirements, see [Add-on Directory Disk Space Requirements](/docs/install-with-kurl/system-requirements/#add-on-directory-disk-space-requirements).
+
For production installs, an SSD should be mounted at `/var/lib/longhorn`.
+## Host Package Requirements
+
+The following host packages are required for Red Hat Enterprise Linux 9 and Rocky Linux 9:
+
+- iscsi-initiator-utils
+- nfs-utils
+
## Advanced Install Options
```yaml
diff --git a/src/markdown-pages/add-ons/minio.md b/src/markdown-pages/add-ons/minio.md
index c22d1eca..349ccbfc 100644
--- a/src/markdown-pages/add-ons/minio.md
+++ b/src/markdown-pages/add-ons/minio.md
@@ -26,4 +26,12 @@ flags-table
If Rook was previously installed but is no longer specified in the kURL spec and MinIO is specified instead, MinIO will migrate data from Rook's object store to MinIO.
-If Longhorn is also specified in the new kURL spec and completes its migration process successfully, Rook will be removed to free up resources.
+If OpenEBS or Longhorn is also specified in the new kURL spec and completes its migration process successfully, Rook will be removed to free up resources.
+
+## High Availability
+
+By default, and upon initial installation, MinIO runs as a single replica and relies on its storage being available on every node.
+
+When non-distributed storage is available on at least three nodes, [EKCO](/docs/add-ons/ekco#minio) upgrades MinIO to run six replicas in a highly available configuration.
+While this upgrade is in process, MinIO will temporarily go offline.
+If this migration fails, it will be retried but MinIO will remain offline until it succeeds.
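+
+To observe the current mode, check the MinIO workloads; this sketch assumes the add-on's default `minio` namespace.
+
+```bash
+# A single-replica Deployment indicates the default mode; a six-replica StatefulSet
+# indicates that EKCO has migrated MinIO to its highly available layout.
+kubectl -n minio get deployments,statefulsets
+kubectl -n minio get pods -o wide
+```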
diff --git a/src/markdown-pages/add-ons/openebs.md b/src/markdown-pages/add-ons/openebs.md
index b3559308..7ce6a377 100644
--- a/src/markdown-pages/add-ons/openebs.md
+++ b/src/markdown-pages/add-ons/openebs.md
@@ -7,10 +7,13 @@ title: "OpenEBS Add-On"
addOn: "openebs"
---
-The [OpenEBS](https://openebs.io) add-on includes two options for provisioning volumes for PVCs: LocalPV and cStor.
+The [OpenEBS](https://openebs.io) add-on creates a Storage Class that provisions [Local](https://openebs.io/docs#local-volumes) Persistent Volumes for stateful workloads.
-Either provisioner may be selected as the default provisioner for the cluster by naming its storageclass `default`.
-In this example, the localPV provisioner would be used as provisioner for any PVCs that did not explicitly specify a storageClassName.
+## Host Package Requirements
+
+The following host packages are required for Red Hat Enterprise Linux 9 and Rocky Linux 9 for versions 1.x and 2.x of the OpenEBS add-on:
+
+- iscsi-initiator-utils
## Advanced Install Options
@@ -18,123 +21,31 @@ In this example, the localPV provisioner would be used as provisioner for any PV
spec:
openebs:
version: latest
- namespace: "space"
+ namespace: openebs
isLocalPVEnabled: true
- localPVStorageClassName: default
- isCstorEnabled: true
- cstorStorageClassName: cstor
+ localPVStorageClassName: local
```
flags-table
-## LocalPV
-
-The [LocalPV provisioner](https://docs.openebs.io/docs/next/localpv.html) uses the host filesystem directory `/var/openebs/local` for storage.
-PersistentVolumes provisioned with localPV will not be relocatable to a new node if a pod gets rescheduled.
-Data in these PersistentVolumes will not be replicated across nodes to protect against data loss.
-The localPV provisioner is suitable as the default provisioner for single-node clusters.
-
-## cStor
-
-The [cStor provisioner](https://docs.openebs.io/docs/next/ugcstor.html) relies on block devices for storage.
-The OpenEBS NodeDeviceManager runs a DaemonSet to automatically incorporate available block devices into a storage pool named `cstor-disk`.
-The first available block device on each node in the cluster will automatically be added to this pool.
-
-### Limitations
-
-cStor is no longer supported for OpenEBS add-on versions 2.12.9+.
-
-
-### Adding Disks
+## Local Volumes
-After joining more nodes with disks to your cluster you can re-run the kURL installer to re-configure the `cstor-disk` storagepoolclaim, the storageclass, and the replica count of any existing volumes.
+The [Local PV](https://openebs.io/docs/#local-volumes) provisioner uses the host filesystem directory `/var/openebs/local` for storage.
+Local Volumes are accessible only from a single node in the cluster.
+Pods using a Local Volume must be scheduled on the node where the volume is provisioned.
+Persistent Volumes provisioned as Local Volumes are not relocatable to a new node if a pod is rescheduled.
+Data in these Persistent Volumes is not replicated across nodes to protect against data loss.
+The Local PV provisioner is suitable as the default provisioner for single-node clusters.
+Additionally, Local Volumes are typically preferred for workloads such as Cassandra, MongoDB, and Elasticsearch that are distributed in nature and have high availability built in.
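+
+For example, a workload can request a Local Volume by referencing the storage class configured above; in this sketch the class is `local` (per `localPVStorageClassName`) and the claim name is illustrative.
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: local-pvc
+spec:
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 10Gi
+  storageClassName: local
+```
+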
### Storage Class
-The kURL installer will create a StorageClass that initially configures cStor to provision volumes with a single replica.
-After adding more nodes with disks to the cluster, re-running the kURL installer will increase the replica count up to a maximum of three.
-The kURL installer will also check for any PVCs that were created at a lower ReplicaCount and will add additional replicas to bring those volumes up to the new ReplicaCount.
+The OpenEBS Storage Class will be set as the default if:
-```
-apiVersion: storage.k8s.io/v1
-kind: StorageClass
-metadata:
- name: cstor
- annotations:
- openebs.io/cas-type: cstor
- cas.openebs.io/config: |
- - name: StoragePoolClaim
- value: "cstor-disk"
- - name: ReplicaCount
- value: "1"
-provisioner: openebs.io/provisioner-iscsi
-```
-
-### CustomResources
-
-#### disks.openebs.io
-
-Use `kubectl get disks` to list all the disks detected by the Node Device Manager.
-The Node Device Manager will ignore any disks that are already mounted or that match `loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-`.
+1. Rook is not included in the spec.
+2. There is no existing default Storage Class in the cluster.
+3. Longhorn is not included in the spec OR the `openebs.localPVStorageClassName` property is set to `"default"`.
-It is critical to ensure that disks are attached with a serial number and that this serial number is unique across all disks in the cluster.
-On GCP, for example, this can be accomplished with the `--device-name` flag.
-```
-gcloud compute instances attach-disk my-instance --disk=my-disk-1 --device-name=my-instance-my-disk-1
-```
-
-Use the `lsblk` command with the `SERIAL` output columnm to verify on the host that a disk has a unique serial number:
-```
-lsblk -o NAME,SERIAL
-```
-
-#### blockdevices.openebs.io
-
-For each disk in `kubectl get disks` there should be a corresponding blockdevice resource in `kubectl -n openebs get blockdevices`.
-(It is possible to manually configure multiple blockdevice resources for a partitioned disk but that is not supported by the kURL installer.)
-
-#### blockdeviceclaims.openebs.io
-
-For each blockdevice that is actually being used by cStor for storage there will be a resource listed under `kubectl -n openebs get blockdeviceclaims`.
-The kURL add-on uses an automatic striped storage pool, which can make use of no more than one blockdevice per node in the cluster.
-Attaching a 2nd disk to a node, for example, would trigger creation of `disk` and `blockdevice` resources, but not a `blockdeviceclaim`.
-
-#### storagepoolclaim.openebs.io
-
-The kURL installer will create a `storagepoolclaim` resource named `cstor-disk`.
-For the initial install, kURL will use this spec for the storagepoolclaim:
-
-```yaml
-spec:
- blockDevices:
- blockDeviceList: null
- maxPools: 1
- minPools: 1
- name: cstor-disk
- poolSpec:
- cacheFile: ""
- overProvisioning: false
- poolType: striped
- type: disk
-```
-
-The `blockDeviceList: null` setting indicates to OpenEBS that this is an automatic pool.
-Blockdevices will automatically be claimed for the pool up to the value of `maxPools`.
-If no blockdevices are available, the kURL installer will prompt for one to be attached and wait.
-After joining more nodes with disks to the cluster, re-running the kURL installer will increase the `maxPools` level.
-
-#### cstorvolumes.openebs.io
-
-Each PVC provisioned by cStor will have a corresponding cstorvolume resource in the `openebs` namespace.
-```
-kubectl -n openebs get cstorvolumes
-```
-The cstorvolume name will be identical to the PersistentVolume name created for the PVC once bound.
-
-#### cstorvolumereplicas.openebs.io
+### Limitations
-For each cstorvolume there will be 1 to 3 cstorvolumereplicas in the `openebs` namespace.
-```
-kubectl -n openebs get cstorvolumereplicas
-```
-The number of replicas should match the `ReplicaCount` configured in the StorageClass, which kURL increases as more nodes with disks are added to the clsuter.
+Other [Replicated Volume](https://openebs.io/docs/#replicated-volumes) provisioners provided by the OpenEBS project, including cStor, are not supported.
diff --git a/src/markdown-pages/add-ons/rke2.md b/src/markdown-pages/add-ons/rke2.md
deleted file mode 100644
index 8aa68e78..00000000
--- a/src/markdown-pages/add-ons/rke2.md
+++ /dev/null
@@ -1,45 +0,0 @@
----
-path: "/docs/add-ons/rke2"
-date: "2021-02-18"
-linktitle: "RKE2"
-weight: 52
-title: "RKE2 Add-On"
-addOn: "rke2"
-isBeta: true
----
-
-[RKE2](https://rke2.io/), also known as RKE Government, is a fully-conformant Kubernetes distribution from Rancher.
-
-Rather than using the [Kubernetes add-on](/docs/add-ons/kubernetes), which uses `kubeadm` to install Kubernetes, the RKE2 add-on can be used to install the RKE2 distribution. This distribution includes Kubernetes as well as several add-ons for networking, ingress, and more.
-
-There are several reasons to use RKE2 instead of Kubernetes (`kubeadm`). The main reason is that RKE2 is simpler than upstream Kubernetes, so it is easier to support. RKE2 is packaged as a single binary, reducing the dependencies and steps needed to install, run, and update Kubernetes, in comparison to kubeadm.
-
-In contrast to K3s, which deviates from upstream Kubernetes to better support edge deployments, RKE2 maintains closer alignment with upstream Kubernetes.
-
-## Operating System Compatibility
-The RKE2 add-on is currently only supported on CentOS 7.
-
-## Add-On Compatibility
-The following are included by default with RKE2:
-* Canal (CNI)
-* CoreDNS
-* NGINX (Ingress)
-* Kube-proxy
-* [Metrics Server](/docs/add-ons/metrics-server)
-* [containerd](/docs/add-ons/containerd) (CRI)
-
-RKE2 has been tested with the following add-ons:
-* [KOTS](/docs/add-ons/kotsadm)
-* [MinIO](/docs/add-ons/minio)
-* [OpenEBS](/docs/add-ons/openebs)
-* [Velero](/docs/add-ons/velero)
-* [Registry](/docs/add-ons/registry)
-* [Rook](/docs/add-ons/rook)
-
-## Limitations
-Because RKE2 support is currently in beta, there are several limitations:
-* Joining additional nodes is not supported.
-* Upgrading from one version of RKE2 to another is not supported.
-* Selecting a CRI or CNI provider is not supported because containerd and Canal are already included.
-* The NodePort range in RKE2 is 30000-32767, so `kotsadm.uiBindPort` must be set to something in this range.
-* While Rook has been tested, it is not recommended.
diff --git a/src/markdown-pages/add-ons/rook.md b/src/markdown-pages/add-ons/rook.md
index 11c27b37..bd531ff4 100644
--- a/src/markdown-pages/add-ons/rook.md
+++ b/src/markdown-pages/add-ons/rook.md
@@ -10,13 +10,13 @@ addOn: "rook"
The [Rook](https://rook.io/) add-on creates and manages a Ceph cluster along with a storage class for provisioning PVCs.
It also runs the Ceph RGW object store to provide an S3-compatible store in the cluster.
-By default the cluster uses the filesystem for storage. Each node in the cluster will have a single OSD backed by a directory in `/opt/replicated/rook`. Nodes with a Ceph Monitor also utilize `/var/lib/rook`.
+The [EKCO](/docs/add-ons/ekco) add-on is recommended when installing Rook. EKCO is responsible for performing various operations to maintain the health of a Ceph cluster.
-**Note**: At minimum, 10GB of disk space should be available to `/var/lib/rook` for the Ceph Monitors and other configs. We recommend a separate partition to prevent a disruption in Ceph's operation as a result of `/var` or the root partition running out of space.
+## Host Package Requirements
-**Note**: All disks used for storage in the cluster should be of similar size. A cluster with large discrepancies in disk size may fail to replicate data to all available nodes.
+The following host packages are required for Red Hat Enterprise Linux 9 and Rocky Linux 9:
-The [EKCO](/docs/add-ons/ekco) add-on is recommended when installing Rook. EKCO is responsible for performing various operations to maintain the health of a Ceph cluster.
+- lvm2
## Advanced Install Options
@@ -30,17 +30,39 @@ spec:
storageClassName: "storage"
hostpathRequiresPrivileged: false
bypassUpgradeWarning: false
+ minimumNodeCount: 3
```
flags-table
+## System Requirements
+
+The following ports must be open between nodes for multi-node clusters:
+
+| Protocol | Direction | Port Range | Purpose | Used By |
+| ------- | --------- | ---------- | ----------------------- | ------- |
+| TCP | Inbound | 9090 | CSI RBD Plugin Metrics | All |
+
+The `/var/lib/rook/` directory requires at least 10 GB of space available for Ceph monitor metadata.
+
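+As a quick check on a prospective node, confirm that the filesystem backing `/var/lib/rook/` has enough free space; the sketch below checks `/var/lib`, assuming `/var/lib/rook/` is not a separate mount.
+
+```bash
+# Show free space on the filesystem that will hold /var/lib/rook/.
+df -h /var/lib
+```
+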
## Block Storage
-For production clusters, Rook should be configured to use block devices rather than the filesystem.
-Enabling block storage is required with version 1.4.3+. Therefore, the `isBlockStorageEnabled` option will always be set to true when using version 1.4.3+.
-The following spec enables block storage for the Rook add-on and automatically uses disks matching the regex `/sd[b-z]/`.
-Rook will start an OSD for each discovered disk, which could result in multiple OSDs running on a single node.
-Rook will ignore block devices that already have a filesystem on them.
+Rook versions 1.4.3 and later require a dedicated block device attached to each node in the cluster.
+The block device must be unformatted and dedicated for use by Rook only.
+The device cannot be used for other purposes, such as being part of a Raid configuration.
+If the device is used for purposes other than Rook, then the installer fails, indicating that it cannot find an available block device for Rook.
+
+For Rook versions earlier than 1.4.3, a dedicated block device is recommended in production clusters.
+
+For disk requirements, see [Add-on Directory Disk Space Requirements](/docs/install-with-kurl/system-requirements/#add-on-directory-disk-space-requirements).
+
+You can enable and disable block storage for Rook versions earlier than 1.4.3 with the `isBlockStorageEnabled` field in the kURL spec.
+
+When the `isBlockStorageEnabled` field is set to `true`, or when using Rook versions 1.4.3 and later, Rook starts an OSD for each discovered disk.
+This can result in multiple OSDs running on a single node.
+Rook ignores block devices that already have a filesystem on them.
+
+The following provides an example of a kURL spec with block storage enabled for Rook:
```yaml
spec:
@@ -50,14 +72,30 @@ spec:
blockDeviceFilter: sd[b-z]
```
-The Rook add-on will wait for a disk before continuing.
-If you have attached a disk to your node but the installer is still waiting at the Rook add-on installation step, refer to the [troubleshooting guide](https://rook.io/docs/rook/v1.0/ceph-common-issues.html#osd-pods-are-not-created-on-my-devices) for help with diagnosing and fixing common issues.
+In the example above, the `isBlockStorageEnabled` field is set to `true`.
+Additionally, `blockDeviceFilter` instructs Rook to use only block devices that match the specified regex.
+For more information about the available options, see [Advanced Install Options](#advanced-install-options) above.
+
+The Rook add-on waits for the dedicated disk that you attached to your node before continuing with installation.
+If you attached a disk to your node, but the installer is waiting at the Rook add-on installation step, see [OSD pods are not created on my devices](https://rook.io/docs/rook/v1.10/Troubleshooting/ceph-common-issues/#osd-pods-are-not-created-on-my-devices) in the Rook documentation for troubleshooting information.
+
+## Filesystem Storage
+
+By default, for Rook versions earlier than 1.4.3, the cluster uses the filesystem for Rook storage.
+However, block storage is recommended for Rook in production clusters.
+For more information, see [Block Storage](#block-storage) above.
+
+When using the filesystem for storage, each node in the cluster has a single OSD backed by a directory in `/opt/replicated/rook/`.
+We recommend a separate disk or partition at `/opt/replicated/rook/` to prevent a disruption in Ceph's operation as a result of the root partition running out of space.
+
+**Note**: All disks used for storage in the cluster should be of similar size.
+A cluster with large discrepancies in disk size may fail to replicate data to all available nodes.
## Shared Filesystem
-The [Ceph filesystem](https://rook.io/docs/rook/v1.4/ceph-filesystem.html) is supported with version 1.4.3+.
+The [Ceph filesystem](https://rook.io/docs/rook/v1.10/Storage-Configuration/Shared-Filesystem-CephFS/filesystem-storage/) is supported with version 1.4.3+.
This allows the use of PersistentVolumeClaims with access mode `ReadWriteMany`.
-Set the storage class to `rook-cephfs` in the pvc spec to use this feature.
+Set the storage class to `rook-cephfs` in the PVC spec to use this feature.
```yaml
apiVersion: v1
@@ -73,14 +111,54 @@ spec:
storageClassName: rook-cephfs
```
+## Per-Node Storage Configuration
+
+By default, Rook is configured to consume all [nodes](https://rook.io/docs/rook/v1.11/CRDs/Cluster/ceph-cluster-crd/#cluster-settings:~:text=for%20specific%20nodes.-,useAllNodes,-%3A%20true%20or) and [devices](https://rook.io/docs/rook/v1.11/CRDs/Cluster/ceph-cluster-crd/#node-settings:~:text=in%20the%20cluster.-,useAllDevices,-%3A%20true%20or) found on those nodes for Ceph storage.
+This can be overridden with configuration per-node using the `rook.nodes` property of the spec.
+This string must adhere to the `nodes` storage configuration spec in the CephCluster CRD.
+See the Rook CephCluster CRD [Node Settings](https://rook.io/docs/rook/v1.11/CRDs/Cluster/ceph-cluster-crd/#node-settings) documentation for more information.
+
+For example:
+
+```yaml
+spec:
+ rook:
+ nodes: |
+ - name: node-01
+ devices:
+ - name: sdb
+ - name: node-02
+ devices:
+ - name: sdb
+ - name: sdc
+ - name: node-03
+ devices:
+ - name: sdb
+ - name: sdc
+```
+
+To override this property at install time, see [Modifying an Install Using a YAML Patch File](/docs/install-with-kurl#modifying-an-install-using-a-yaml-patch-file-at-runtime) for more details on using patch files.
+
## Upgrades
-It is not possible to upgrade multiple minor versions of the Rook add-on at once.
-Individual upgrades from one version to the next are required for upgrading multiple minor versions.
-For example, to upgrade from `1.4.3` to `1.6.11`, you must first install `1.5.10`, `1.5.11` or `1.5.12`.
+It is now possible to upgrade multiple minor versions of the Rook add-on at once.
+This upgrade process will step through minor versions one at a time.
+For example, upgrades from Rook 1.0.x to 1.5.x will step through Rook versions 1.1.9, 1.2.7, 1.3.11 and 1.4.9 before installing 1.5.x.
+Upgrades without internet access may prompt the end-user to download supplemental packages.
-If the currently installed Rook version is `1.0.x`, upgrades to both `1.4.9` and `1.5.x` are supported through the main installer.
-Alternatively, the upgrade from `1.0.x` to `1.4.9` can be triggered independently with `curl https://k8s.kurl.sh/latest/tasks.sh | sudo bash -s rook_10_to_14`.
-This upgrade migrates data off of any hostpath-based OSDs in favor of block device-based OSDs and upgrades through Rook `1.1.9`, `1.2.7` and `1.3.11` before installing `1.4.9` (and then optionally `1.5.x`).
+Rook upgrades from 1.0.x migrate data off of any filesystem-based OSDs in favor of block device-based OSDs.
The upstream Rook project introduced a requirement for block storage in versions 1.3.x and later.
-In instances without internet access, this requires supplying an additional file when prompted.
+
+## Monitoring
+
+For Rook version 1.9.12 and later, when you install with both the Rook add-on and the Prometheus add-on, kURL enables Ceph metrics collection and creates a Ceph cluster statistics Grafana dashboard.
+
+The Ceph cluster statistics dashboard in Grafana displays metrics that help you monitor the health of the Rook Ceph cluster, including the status of the Ceph object storage daemons (OSDs), the available cluster capacity, the OSD commit and apply latency, and more.
+
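+For reference, a spec that enables this dashboard includes both add-ons; the versions below are illustrative, provided the Rook version is 1.9.12 or later.
+
+```yaml
+spec:
+  rook:
+    version: "1.10.x"
+  prometheus:
+    version: latest
+```
+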
+The following shows an example of the Ceph cluster dashboard in Grafana:
+
+![Graphs and metrics on the Ceph Grafana dashboard](/ceph-grafana-dashboard.png)
+
+To access the Ceph cluster dashboard, log in to Grafana in the `monitoring` namespace of the kURL cluster using your Grafana admin credentials.
+
+For more information about installing with the Prometheus add-on and updating the Grafana credentials, see [Prometheus Add-on](/docs/add-ons/prometheus).
diff --git a/src/markdown-pages/add-ons/selinux.md b/src/markdown-pages/add-ons/selinux.md
index 46a96750..429385e8 100644
--- a/src/markdown-pages/add-ons/selinux.md
+++ b/src/markdown-pages/add-ons/selinux.md
@@ -11,6 +11,11 @@ Security-Enhanced Linux (SELinux) is a security architecture for Linux systems t
This add-on allows for configuration of system SELinux policies, such as setting the desired state of SELinux, and running chcon and semanage commands in a sanitized manner.
This add-on will be skipped if SELinux is not installed or is disabled.
+Many SELinux configurations will break Kubernetes.
+Many more will break applications running within Kubernetes.
+We strongly recommend testing configurations extensively.
+Replicated will not take ownership of problems caused by overly restrictive SELinux configurations, and our first request when an instance encounters issues with SELinux enabled will often be to set SELinux to `permissive`.
+
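+Outside of the kURL spec, setting SELinux to permissive on a host uses standard OS tooling; the following sketch applies the change immediately and persists it across reboots.
+
+```bash
+# Switch to permissive mode for the current boot.
+sudo setenforce 0
+# Persist the setting across reboots.
+sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
+```
+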
## Advanced Install Options
```yaml
@@ -27,3 +32,21 @@ spec:
```
flags-table
+
+## End User Patching
+
+Occasionally end users will wish to enable SELinux and take responsibility for configuring it.
+This can be done with the `installer-spec-file` kURL installer option, even when SELinux is not included in the vendor's kURL spec.
+They can run `curl https://kurl.sh/somebigbank | sudo bash -s installer-spec-file="./patch.yaml"` with an appropriate patch file:
+
+```yaml
+apiVersion: "cluster.kurl.sh/v1beta1"
+kind: "Installer"
+metadata:
+ name: "preserve-system-selinux"
+spec:
+ selinuxConfig:
+ preserveConfig: true
+```
+
+The process of using a patch spec at runtime is expanded upon [here](/docs/install-with-kurl/#modifying-an-install-using-a-yaml-patch-file-at-runtime).
diff --git a/src/markdown-pages/add-ons/sonobuoy.md b/src/markdown-pages/add-ons/sonobuoy.md
index 3bc0cf97..b2adf312 100644
--- a/src/markdown-pages/add-ons/sonobuoy.md
+++ b/src/markdown-pages/add-ons/sonobuoy.md
@@ -13,6 +13,10 @@ It is a customizable, extendable, and cluster-agnostic way to generate clear, in
This makes Sonobuoy assets available in the cluster, but does not run conformance tests.
The `sonobuoy` binary will be installed in the directory `/usr/local/bin`.
+## Limitations
+
+This add-on does not work when Kubernetes 1.27.x and Prometheus are installed, because `sonobuoy` panics with the error `panic: runtime error: invalid memory address or nil pointer dereference`. This problem is being [addressed](https://github.com/vmware-tanzu/sonobuoy/pull/1909) in the upstream Sonobuoy project.
+
## Advanced Install Options
```yaml
diff --git a/src/markdown-pages/add-ons/velero.md b/src/markdown-pages/add-ons/velero.md
index c490b01c..e72b574e 100644
--- a/src/markdown-pages/add-ons/velero.md
+++ b/src/markdown-pages/add-ons/velero.md
@@ -14,8 +14,14 @@ The Kurl add-on installs:
* The velero CLI onto the host
* CRDs for configuring backups and restores
+## Host Package Requirements
+
+The following host packages are required for Red Hat Enterprise Linux 9 and Rocky Linux 9:
+
+- nfs-utils
+
## Limitations
-The limitations of Velero apply to this add-on. For more information, see [Limitations](https://github.com/vmware-tanzu/velero/blob/master/site/docs/master/restic.md#limitations) in the Velero GitHub repository.
+The limitations of Velero apply to this add-on. For more information, see [Limitations](https://github.com/vmware-tanzu/velero/blob/master/site/docs/master/restic.md#limitations) in the Velero GitHub repository.
## Advanced Install Options
@@ -29,6 +35,10 @@ spec:
localBucket: "local"
resticRequiresPrivileged: true
resticTimeout: "12h0m0s"
+ serverFlags:
+ - --log-level
+ - debug
+ - --default-repo-maintain-frequency=12h
```
flags-table
diff --git a/src/markdown-pages/add-ons/weave.md b/src/markdown-pages/add-ons/weave.md
index d9f58d05..5e5c5a59 100644
--- a/src/markdown-pages/add-ons/weave.md
+++ b/src/markdown-pages/add-ons/weave.md
@@ -5,8 +5,17 @@ linktitle: "Weave"
weight: 59
title: "Weave Add-On"
addOn: "weave"
+isDeprecated: true
---
+## Deprecation Notice
+
+### This add-on is deprecated.
+
+As of March 27, 2023, the Weave add-on is deprecated. The Weave add-on might be removed from kURL after September 30, 2023. Existing installations that use the Weave add-on are supported during this deprecation window. We recommend that you remove the Weave add-on on or before September 30, 2023 and instead use the [Flannel](https://kurl.sh/docs/add-ons/flannel) add-on. For more information about how to migrate from Weave, see [Migration from Weave](https://kurl.sh/docs/add-ons/flannel#migration-from-weave).
+
+## Summary
+
Weave Net creates a virtual network that connects containers across multiple hosts and enables their automatic discovery. With Weave Net, portable microservices-based applications consisting of multiple containers can run anywhere: on one host, multiple hosts or even across cloud providers and data centers.
## Advanced Install Options
@@ -22,3 +31,21 @@ spec:
```
flags-table
+
+## System Requirements
+
+The following additional ports must be open between nodes for multi-node clusters:
+
+#### Primary Nodes:
+
+| Protocol | Direction | Port Range | Purpose | Used By |
+| ------- | --------- | ---------- | ----------------------- | ------- |
+| TCP | Inbound | 6783 | Weave Net control | All |
+| UDP | Inbound | 6783-6784 | Weave Net data | All |
+
+#### Secondary Nodes:
+
+| Protocol | Direction | Port Range | Purpose | Used By |
+| ------- | --------- | ---------- | ----------------------- | ------- |
+| TCP | Inbound | 6783 | Weave Net control | All |
+| UDP | Inbound | 6783-6784 | Weave Net data | All |
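+
+As a rough connectivity check between nodes, you can use a tool such as `netcat` (this is only an illustrative sketch; the placeholder address `10.0.0.10` and the exact flags, which vary between netcat implementations, are assumptions):
+
+```bash
+# On the node that should receive traffic, listen on the Weave Net control port
+nc -l 6783
+
+# From another node, test whether the port is reachable (10.0.0.10 is a placeholder address)
+nc -vz 10.0.0.10 6783
+```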
diff --git a/src/markdown-pages/create-installer/choosing-a-pv-provisioner.md b/src/markdown-pages/create-installer/choosing-a-pv-provisioner.md
index e995e88f..57d91d1b 100644
--- a/src/markdown-pages/create-installer/choosing-a-pv-provisioner.md
+++ b/src/markdown-pages/create-installer/choosing-a-pv-provisioner.md
@@ -26,7 +26,7 @@ spec:
openebs:
version: "3.3.x"
isLocalPVEnabled: true
- localPVStorageClassName: "default"
+ localPVStorageClassName: "local"
```
Conversely, [Rook](/docs/add-ons/rook) provides dynamic PV provisioning of distributed [Ceph](https://ceph.io/) storage.
@@ -62,7 +62,7 @@ spec:
openebs:
version: "3.3.x"
isLocalPVEnabled: true
- localPVStorageClassName: "default"
+ localPVStorageClassName: "local"
minio:
version: "2022-09-07T22-25-02Z"
```
diff --git a/src/markdown-pages/create-installer/host-preflights/index.md b/src/markdown-pages/create-installer/host-preflights/index.md
index 19ccfde4..970c31c5 100644
--- a/src/markdown-pages/create-installer/host-preflights/index.md
+++ b/src/markdown-pages/create-installer/host-preflights/index.md
@@ -44,6 +44,7 @@ For each of the add-ons that you are using that have default host preflight chec
For example, if your installer includes KOTS version 1.59.0, you would find the host preflight file at this link: https://github.com/replicatedhq/kURL/blob/main/addons/kotsadm/1.59.0/host-preflight.yaml.
+Flannel: https://github.com/replicatedhq/kURL/tree/main/addons/flannel
Weave: https://github.com/replicatedhq/kURL/tree/main/addons/weave
Rook: https://github.com/replicatedhq/kURL/tree/main/addons/rook
OpenEBS: https://github.com/replicatedhq/kURL/tree/main/addons/openebs
diff --git a/src/markdown-pages/create-installer/host-preflights/system-packages.md b/src/markdown-pages/create-installer/host-preflights/system-packages.md
index 905a0a79..13a156e0 100644
--- a/src/markdown-pages/create-installer/host-preflights/system-packages.md
+++ b/src/markdown-pages/create-installer/host-preflights/system-packages.md
@@ -47,6 +47,14 @@ An array of the names of packages to collect information about if the operating
An array of the names of packages to collect information about if the operating system is `RHEL` version `8.x`.
+#### `rhel9` (Optional)
+
+An array of the names of packages to collect information about if the operating system is `RHEL` version `9.x`.
+
+#### `rocky9` (Optional)
+
+An array of the names of packages to collect information about if the operating system is `Rocky Linux` version `9.x`.
+
#### `centos` (Optional)
An array of the names of packages to collect information about if the operating system is `CentOS`, regardless of the version.
diff --git a/src/markdown-pages/create-installer/index.md b/src/markdown-pages/create-installer/index.md
index 87d65c12..e56c82ab 100644
--- a/src/markdown-pages/create-installer/index.md
+++ b/src/markdown-pages/create-installer/index.md
@@ -16,8 +16,8 @@ metadata:
spec:
kubernetes:
version: "1.25.x"
- weave:
- version: "2.6.x"
+ flannel:
+ version: "0.20.x"
contour:
version: "1.22.x"
minio:
diff --git a/src/markdown-pages/install-with-kurl/adding-nodes.md b/src/markdown-pages/install-with-kurl/adding-nodes.md
index ae1690d5..fb993bd6 100644
--- a/src/markdown-pages/install-with-kurl/adding-nodes.md
+++ b/src/markdown-pages/install-with-kurl/adding-nodes.md
@@ -5,89 +5,31 @@ weight: 19
linktitle: "Adding Nodes"
title: "Adding Nodes"
---
-At the end of the install process, the install script will print out commands for adding nodes.
-Commands to add new secondary nodes last 24 hours, and commands to add additional primary nodes in HA mode last for 2 hours.
-To get new commands, run `tasks.sh join_token` with the relevant parameters (`airgap` and `ha`) on a primary node.
-For instance, on an airgapped HA installation you would run `cat ./tasks.sh | sudo bash -s join_token ha airgap`, while on a single-primary online installation you would run `curl -sSL https://kurl.sh/latest/tasks.sh | sudo bash -s join_token`.
-## Standard Installations
-The install script will print the command that can be run on **secondary** nodes to join them to your new cluster.
-
-![add-nodes](/add-nodes.png)
-
-## HA Installations
-For HA clusters, the install script will print out separate commands for joining **secondaries** and joining additional **primary** nodes.
-See [Highly Available K8s](/docs/install-with-kurl/#highly-available-k8s-ha) for HA install details.
-
-![add-nodes-ha](/add-nodes-ha.png)
-
-## Resetting a Node
-
-It is possible to reset a node using the following script:
+This topic describes how to add nodes to kURL clusters.
+For information about managing nodes on kURL clusters, including removing, rebooting, and resetting nodes, see [Managing Nodes](/docs/install-with-kurl/managing-nodes).
-*NOTE: The provided script is best effort and is not guaranteed to fully reset the node.*
+## About Adding Nodes
+At the end of the install process, the install script will print out commands for adding nodes. For example:
-```bash
-curl -sSL https://kurl.sh/latest/tasks.sh | sudo bash -s reset
```
-
-Or for airgap:
-
-```bash
-cat ./tasks.sh | sudo bash -s reset
+To add worker nodes to this installation, run the following script on your other nodes:
+ curl -fsSL https://kurl.sh/version/v2023.01.13-1/95569f3/join.sh | sudo bash -s kubernetes-master-address=10.154.15.203:6443 kubeadm-token=pjxtic.8jrj88214t1tcyfq kubeadm-token-ca-hash=sha256:7f3374d6e8f1971d33c6a9edb16bac5bc6e2c98d2f7f6fa4209a8178b749d462 kubernetes-version=1.19.16 docker-registry-ip=10.96.2.26 primary-host=10.154.15.203
```
-## Rebooting a Node
-
-To safely reboot a node, use the following steps:
-
-*NOTE: In order to safely reboot a node it is required to have the [EKCO add-on](/docs/add-ons/ekco) installed.*
-
-1. Run `/opt/ekco/shutdown.sh` on the node.
-1. Reboot the node
-
-## Removing a Node
-
-It is possible to remove a node from a multi-node cluster.
-In order to safely remove a node it is required to have the [EKCO add-on](/docs/add-ons/ekco) installed.
-When removing a node it is always safest to add back a node or check the health of the cluster before removing an additional node.
-
-When removing a node it is safest to use the following steps:
+Be aware that the commands to add new secondary nodes are valid for 24 hours, and the commands to add additional primary nodes in HA mode are valid for 2 hours. Therefore,
+to get new commands, run `tasks.sh join_token` with the relevant parameters (`airgap` and `ha`) on a primary node, as shown in the following examples.
-1. Run `/opt/ekco/shutdown.sh` on the node.
-1. Power down the node or run the [reset script](/docs/install-with-kurl/adding-nodes#resetting-a-node)
-1. Run `ekco-purge-node [node name]` on another primary.
+- **For single-primary online installation:** run `curl -sSL https://kurl.sh/latest/tasks.sh | sudo bash -s join_token`
+- **For airgapped HA installation:** run `cat ./tasks.sh | sudo bash -s join_token ha airgap`
-### Etcd Cluster Health
-
-When removing a primary node extra precautions must be taken to maintain etcd quorom.
-
-First, there must always be at least one primary node.
-
-Once scaled up to three primary nodes, a minimum of three primaries must be maintained to maintain quorom.
-If the cluster is scaled down to two primaries, a third primary should be added back to prevent loss of quorom.
-
-### Ceph Cluster Health
-
-When using the [Rook add-on](/docs/add-ons/rook) extra precautions must be taken to avoid data loss.
-
-On a one or two node cluster, the size of the Ceph cluster will always be one.
-
-Once the cluster is scaled up to three nodes, the Ceph cluster must be maintained at three nodes.
-If a Ceph Object Storage Daemon (OSD) is scheduled on a node that is removed, Ceph cluster health must be regained before removing any additional nodes.
-Once the node is removed, Ceph will begin replicating its data to OSDs on remaining nodes.
-If the cluster is scaled below three, a new node must be added to regain cluster health.
-
-It is possible to check the Ceph cluster health by running the command `ceph status` in the `rook-ceph-tools` or `rook-ceph-operator` Pod in the `rook-ceph` namespace for Rook version 1.0.x or 1.4.x respectively.
-
-**Rook 1.4.x**
+### Standard Installations
+The install script will print the command that can be run on **secondary** nodes to join them to your new cluster.
-```
-kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status
-```
+![add-nodes](/add-nodes.png)
-**Rook 1.0.x**
+### HA Installations
+For HA clusters, the install script will print out separate commands for joining **secondaries** and joining additional **primary** nodes.
+See [Highly Available K8s](/docs/install-with-kurl/#highly-available-k8s-ha) for HA install details.
-```
-kubectl -n rook-ceph exec deploy/rook-ceph-operator -- ceph status
-```
+![add-nodes-ha](/add-nodes-ha.png)
diff --git a/src/markdown-pages/install-with-kurl/advanced-options.md b/src/markdown-pages/install-with-kurl/advanced-options.md
index 12b3b601..ebd93969 100644
--- a/src/markdown-pages/install-with-kurl/advanced-options.md
+++ b/src/markdown-pages/install-with-kurl/advanced-options.md
@@ -81,16 +81,6 @@ The install scripts are idempotent. Re-run the scripts with different flags to c
curl https://kurl.sh/latest | sudo bash -s exclude-builtin-host-preflights
-
-
force-reapply-addons
-
Reinstall add-ons, whether or not they have changed since the last time kurl was run.
-
-
- Example:
-
- curl https://kurl.sh/latest | sudo bash -s force-reapply-addons
-
-
ha
Install will require a load balancer to allow for a highly available Kubernetes Control Plane.
@@ -264,6 +254,16 @@ The install scripts are idempotent. Re-run the scripts with different flags to c
curl https://kurl.sh/latest | sudo bash -s velero-restic-timeout=12h0m0s
+
+
velero-server-flags
+
Additional flags to pass to the Velero server. This is a comma-separated list of arguments.
+
+
+ Example:
+
+ curl https://kurl.sh/latest | sudo bash -s velero-server-flags=--log-level=debug,--default-repo-maintain-frequency=12h
+
+
ekco-enable-internal-load-balancer
Run an internal load balancer with HAProxy listening on localhost:6444 on all nodes.
@@ -274,4 +274,24 @@ The install scripts are idempotent. Re-run the scripts with different flags to c
curl https://kurl.sh/latest | sudo bash -s ekco-enable-internal-load-balancer
+
+
kubernetes-upgrade-ignore-preflight-errors
+
Bypass kubeadm upgrade preflight errors and warnings. See the kubeadm upgrade documentation for more information.
+
+
+ Example:
+
+ curl https://kurl.sh/latest | sudo bash -s kubernetes-upgrade-ignore-preflight-errors=CoreDNSUnsupportedPlugins
+
+
+
+
kubernetes-max-pods-per-node
+
The maximum number of Pods that can run on each node (default 110).
+
+
+ Example:
+
+ curl https://kurl.sh/latest | sudo bash -s kubernetes-max-pods-per-node=200
+
+
diff --git a/src/markdown-pages/install-with-kurl/changing-storage.md b/src/markdown-pages/install-with-kurl/changing-storage.md
new file mode 100644
index 00000000..757e1da0
--- /dev/null
+++ b/src/markdown-pages/install-with-kurl/changing-storage.md
@@ -0,0 +1,218 @@
+---
+path: "/docs/install-with-kurl/changing-storage"
+date: "2022-10-10"
+weight: 25
+linktitle: "Changing Storage"
+title: "Changing Storage"
+isAlpha: false
+---
+
+It is relatively common to initially allocate less (or more) storage than is required for an installation in practice.
+Some storage providers allow this to be done easily, while others require far more effort.
+
+This guide is not for changing the size of an individual PVC, but rather the storage available to the entire PV provisioner.
+
+# OpenEBS LocalPV
+
+OpenEBS LocalPV consumes a host's storage at `/var/openebs/local`.
+If you increase the amount of storage available here, OpenEBS will use it.
+If it shrinks, OpenEBS will have less storage to use.
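+
+For example, a quick way to see how much space is currently available to OpenEBS LocalPV is to check the filesystem backing that path (a simple sketch, assuming the default `/var/openebs/local` location):
+
+```bash
+# Show the size and free space of the filesystem that backs OpenEBS LocalPV volumes
+df -h /var/openebs/local
+```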
+
+# Rook-Ceph
+
+## Expanding Storage
+
+Rook does not support expanding the storage of existing block devices, only adding new ones.
+However, a 100GB block device added to a node that already had a 100GB block device used by rook will be treated similarly to a freshly created instance with a single 200GB disk.
+In general, all nodes in a cluster should have the same amount of storage, and lopsided amounts of storage can lead to inefficiencies.
+
+Rook will use all block devices attached to the host unless a `blockDeviceFilter` is set, explained [here](/docs/add-ons/rook#block-storage).
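+
+As a rough sketch of verifying that a newly attached block device has been picked up, you can list the host's block devices and then check the OSD tree from the toolbox (the assumption here is that the new device is raw and unformatted, so Rook can claim it):
+
+```bash
+# On the host: confirm the new, unformatted block device is visible
+lsblk
+
+# From the cluster: confirm a new OSD has been created for it
+kubectl -n rook-ceph exec deployment/rook-ceph-tools -- ceph osd tree
+```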
+
+## Contracting Storage
+
+Rook does not support shrinking a block device once it has been allocated to Rook.
+However, entire block devices can be removed from the cluster.
+This procedure is not intended to be used on a regular basis, but can be very helpful if you have accidentally added a disk to Rook that was intended for something else.
+
+Removing all OSDs from a node will fail if fewer than three nodes with sufficient storage for your usage would remain.
+
+### Identifying OSDs
+The first step is to determine what OSD number corresponds to the block device you wish to be freed up.
+
+With a shell into the tools deployment from `kubectl exec -it -n rook-ceph deployment/rook-ceph-tools -- bash`, we can get a list of disks that ceph is using, and which disk is on which host.
+
+```
+[rook@rook-ceph-tools-6fb84b545-js4rg /]$ ceph osd tree
+ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
+-1 0.58597 root default
+-3 0.39067 host laverya-rook-main
+ 0 ssd 0.09769 osd.0 up 1.00000 1.00000
+ 1 ssd 0.09769 osd.1 up 1.00000 1.00000
+ 2 ssd 0.19530 osd.2 up 1.00000 1.00000
+-5 0.19530 host laverya-rook-worker
+ 3 ssd 0.04880 osd.3 up 1.00000 1.00000
+ 4 ssd 0.14650 osd.4 up 1.00000 1.00000
+[rook@rook-ceph-tools-6fb84b545-js4rg /]$ ceph osd df
+ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS
+ 0 ssd 0.09769 1.00000 100 GiB 13 GiB 13 GiB 0 B 52 MiB 87 GiB 13.45 0.76 37 up
+ 1 ssd 0.09769 1.00000 100 GiB 25 GiB 25 GiB 0 B 93 MiB 75 GiB 25.38 1.44 47 up
+ 2 ssd 0.19530 1.00000 200 GiB 32 GiB 32 GiB 0 B 124 MiB 168 GiB 16.17 0.92 85 up
+ 3 ssd 0.04880 1.00000 50 GiB 9.4 GiB 9.4 GiB 0 B 39 MiB 41 GiB 18.88 1.07 45 up
+ 4 ssd 0.14650 1.00000 150 GiB 25 GiB 25 GiB 0 B 100 MiB 125 GiB 16.75 0.95 115 up
+ TOTAL 600 GiB 106 GiB 105 GiB 0 B 407 MiB 494 GiB 17.62
+MIN/MAX VAR: 0.76/1.44 STDDEV: 4.05
+```
+If this is enough to identify your disk already, fantastic!
+For instance, a 150GB disk on `laverya-rook-worker` would be osd.4 above.
+However, a 100GB disk on `laverya-rook-main` could be either osd.0 or osd.1.
+
+If this is not enough - for instance if you have multiple disks of the same size on the same instance - you can use kubectl from the host.
+You can get the list of rook-ceph OSDs with `kubectl get pods -n rook-ceph -l ceph_daemon_type=osd`, and then check each one by searching the describe output for `ROOK_BLOCK_PATH`.
+
+```
+kubectl describe pod -n rook-ceph rook-ceph-osd-1-6cf7c5cb7-z7c8p | grep ROOK_BLOCK_PATH
+ DEVICE="$ROOK_BLOCK_PATH"
+ ROOK_BLOCK_PATH: /dev/sdc
+ ROOK_BLOCK_PATH: /dev/sdc
+```
+From this we can tell that the OSD using /dev/sdc is osd.1.
+
+### Draining OSDs
+Once this is known, there are many commands to be run within the rook-ceph-tools deployment.
+You can get a shell to this deployment with `kubectl exec -it -n rook-ceph deployment/rook-ceph-tools -- bash`.
+
+Once you have determined the OSD to be removed, you can tell ceph to relocate data off of this disk with `ceph osd reweight 1 0`.
+```
+[rook@rook-ceph-tools-6fb84b545-js4rg /]$ ceph osd reweight 1 0
+reweighted osd.1 to 0 (0)
+```
+
+Immediately after running this, if we run ceph status there will be a progress bar:
+```
+[rook@rook-ceph-tools-6fb84b545-js4rg /]$ ceph status
+ cluster:
+ id: be2c8681-a8a8-4f84-bf78-5afe2d88e48e
+ health: HEALTH_WARN
+ Degraded data redundancy: 6685/32898 objects degraded (20.320%), 39 pgs degraded, 13 pgs undersized
+
+ services:
+ mon: 1 daemons, quorum a (age 37m)
+ mgr: a(active, since 35m)
+ mds: 1/1 daemons up, 1 hot standby
+ osd: 5 osds: 5 up (since 20m), 4 in (since 21s); 32 remapped pgs
+ rgw: 1 daemon active (1 hosts, 1 zones)
+
+ data:
+ volumes: 1/1 healthy
+ pools: 11 pools, 177 pgs
+ objects: 16.45k objects, 62 GiB
+ usage: 95 GiB used, 405 GiB / 500 GiB avail
+ pgs: 6685/32898 objects degraded (20.320%)
+ 4418/32898 objects misplaced (13.429%)
+ 126 active+clean
+ 15 active+recovery_wait+undersized+degraded+remapped
+ 14 active+recovery_wait+degraded
+ 5 active+recovery_wait+remapped
+ 5 active+recovery_wait
+ 5 active+recovery_wait+degraded+remapped
+ 3 active+undersized+degraded+remapped+backfill_wait
+ 2 active+remapped+backfill_wait
+ 2 active+recovering+undersized+degraded+remapped
+
+ io:
+ client: 18 MiB/s rd, 35 MiB/s wr, 79 op/s rd, 18 op/s wr
+ recovery: 46 MiB/s, 0 keys/s, 11 objects/s
+
+ progress:
+ Global Recovery Event (0s)
+ [............................]
+
+```
+
+At this point, ceph is removing data from the OSD to be removed and moving it to the OSD(s) to be kept.
+
+When completed, every PG should be active+clean, and none should be in any other status.
+
+There should be no progress bar at the bottom, and it will look similar to this:
+```
+[rook@rook-ceph-tools-6fb84b545-js4rg /]$ ceph status
+ cluster:
+ id: be2c8681-a8a8-4f84-bf78-5afe2d88e48e
+ health: HEALTH_OK
+
+ services:
+ mon: 1 daemons, quorum a (age 112m)
+ mgr: a(active, since 111m)
+ mds: 1/1 daemons up, 1 hot standby
+ osd: 5 osds: 5 up (since 95m), 4 in (since 75m)
+ rgw: 1 daemon active (1 hosts, 1 zones)
+
+ data:
+ volumes: 1/1 healthy
+ pools: 11 pools, 177 pgs
+ objects: 18.02k objects, 68 GiB
+ usage: 137 GiB used, 363 GiB / 500 GiB avail
+ pgs: 177 active+clean
+
+ io:
+ client: 136 MiB/s rd, 51 KiB/s wr, 580 op/s rd, 1 op/s wr
+```
+
+You can then run `ceph osd df` to ensure that the OSD to be removed is empty:
+```
+[rook@rook-ceph-tools-6fb84b545-js4rg /]$ ceph osd df
+ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS
+ 0 ssd 0.09769 1.00000 100 GiB 26 GiB 25 GiB 0 B 225 MiB 74 GiB 25.64 0.93 56 up
+ 1 ssd 0.09769 0 0 B 0 B 0 B 0 B 0 B 0 B 0 0 0 up
+ 2 ssd 0.19530 1.00000 200 GiB 43 GiB 43 GiB 0 B 423 MiB 157 GiB 21.51 0.78 121 up
+ 3 ssd 0.04880 1.00000 50 GiB 15 GiB 15 GiB 0 B 133 MiB 35 GiB 29.91 1.09 49 up
+ 4 ssd 0.14650 1.00000 150 GiB 54 GiB 53 GiB 0 B 470 MiB 96 GiB 35.77 1.30 128 up
+ TOTAL 500 GiB 137 GiB 136 GiB 0 B 1.2 GiB 363 GiB 27.46
+MIN/MAX VAR: 0.78/1.30 STDDEV: 5.34
+```
+
+And additionally run `ceph osd safe-to-destroy osd.<ID>` to ensure that ceph really is done with the drive:
+```
+[rook@rook-ceph-tools-6fb84b545-js4rg /]$ ceph osd safe-to-destroy osd.1
+OSD(s) 1 are safe to destroy without reducing data durability.
+```
+
+### Disabling OSDs
+The next set of commands will need to be run with kubectl, so you can exit the rook-ceph-tools shell with `exit`.
+
+First, scale the rook-ceph-operator deployment to 0 replicas to keep it from undoing the following changes: `kubectl -n rook-ceph scale deployment rook-ceph-operator --replicas=0`.
+Then, scale down the OSD you wish to remove with `kubectl -n rook-ceph scale deployment rook-ceph-osd-<ID> --replicas=0`.
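+
+For example, assuming the OSD being removed is osd.1 (matching the running example above), the two commands would be:
+
+```bash
+# Stop the operator so that it does not recreate or reconcile the OSD deployment
+kubectl -n rook-ceph scale deployment rook-ceph-operator --replicas=0
+
+# Scale down the deployment for the OSD being removed (osd.1 in this example)
+kubectl -n rook-ceph scale deployment rook-ceph-osd-1 --replicas=0
+```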
+
+After this, it's time to TEST AND MAKE SURE NOTHING IS BROKEN before actually deleting things for real.
+
+* Does kotsadm still work?
+* Can you make a support bundle through the web UI?
+* Can you browse the files of a release?
+* Does the application itself still function?
+
+If the answer to any of the above is "no", you should scale the rook-ceph-operator and rook-ceph-osd-<ID> deployments back to 1 replica.
+
+### Destroying OSDs
+
+Once again, you will need to get a toolbox shell with `kubectl exec -it -n rook-ceph deployment/rook-ceph-tools -- bash`.
+
+An OSD can be destroyed with `ceph osd purge 1 --yes-i-really-mean-it`.
+If you do not in fact "mean it", please do not run this.
+
+```
+[rook@rook-ceph-tools-6fb84b545-js4rg /]$ ceph osd purge 1 --yes-i-really-mean-it
+purged osd.1
+[rook@rook-ceph-tools-6fb84b545-js4rg /]$ ceph osd df
+ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS
+ 0 ssd 0.09769 1.00000 100 GiB 26 GiB 25 GiB 0 B 287 MiB 74 GiB 25.72 0.93 58 up
+ 2 ssd 0.19530 1.00000 200 GiB 43 GiB 43 GiB 0 B 500 MiB 157 GiB 21.63 0.78 119 up
+ 3 ssd 0.04880 1.00000 50 GiB 15 GiB 15 GiB 0 B 162 MiB 35 GiB 29.97 1.09 48 up
+ 4 ssd 0.14650 1.00000 150 GiB 54 GiB 53 GiB 0 B 573 MiB 96 GiB 35.99 1.30 127 up
+ TOTAL 500 GiB 138 GiB 136 GiB 0 B 1.5 GiB 362 GiB 27.59
+MIN/MAX VAR: 0.78/1.30 STDDEV: 5.37
+```
+
+Once the OSD has been purged, you can `exit` the toolbox again and reformat the freed disk for your purposes, or remove it from the instance entirely.
+
+After the free block device has been made unavailable for use by rook-ceph, you can restore the operator with `kubectl -n rook-ceph scale deployment rook-ceph-operator --replicas=1`, because the operator is needed for normal use.
+
diff --git a/src/markdown-pages/install-with-kurl/cis-compliance.md b/src/markdown-pages/install-with-kurl/cis-compliance.md
index 3ded5678..c0bf7d22 100644
--- a/src/markdown-pages/install-with-kurl/cis-compliance.md
+++ b/src/markdown-pages/install-with-kurl/cis-compliance.md
@@ -1,12 +1,12 @@
---
path: "/docs/install-with-kurl/cis-compliance"
date: "2022-03-23"
-weight: 26
+weight: 27
linktitle: "CIS Compliance"
title: "CIS Compliance"
isAlpha: false
---
-You can configure the kURL installer to be Center for Internet Security (CIS) compliant. Opt-in to this feature by setting the `kurl.cisCompliance` field to `true` in the kURL specification. For information about known limitations, see [Known Limitations](#known-limitations). For more information about CIS security compliance for Kubernetes, see the [CIS benchmark information](https://www.cisecurity.org/benchmark/kubernetes).
+You can configure the kURL installer to be Center for Internet Security (CIS) compliant for CIS 1.8 or earlier. Opt-in to this feature by setting the `kurl.cisCompliance` field to `true` in the kURL specification. For information about known limitations, see [Known Limitations](#known-limitations). For more information about CIS security compliance for Kubernetes, see the [CIS benchmark information](https://www.cisecurity.org/benchmark/kubernetes).
When `cisCompliance` is set to `true`, the following settings are changed from the default settings:
@@ -39,8 +39,8 @@ spec:
kubernetes:
version: "1.23.x"
cisCompliance: true
- weave:
- version: "2.6.x"
+ flannel:
+ version: "0.20.x"
contour:
version: "1.20.x"
prometheus:
@@ -62,15 +62,26 @@ spec:
* The [EKCO add-on](/docs/add-ons/ekco) v0.19.0 and later is required to use this feature.
* This feature works with the [Kubernetes (kubeadm) add-on](https://kurl.sh/docs/add-ons/kubernetes) only.
-* To meet CIS compliance, admin.conf permissions are changed from the default `root:sudo 440` to `root:root 400`.
+* To meet CIS compliance, admin.conf and super-admin.conf permissions are changed from the default `root:sudo 440` to `root:root 400` and `root:root 600` respectively.
* Kubelet no longer attempts to change kernel parameters at runtime. Using kernel parameters other than those expected by Kubernetes can block kubelet from initializing and causes the installation to fail.
* This feature has been tested with kURL upgrades, however we strongly recommend testing this with your development environments prior to upgrading production.
-* The following failure was identified in kURL testing with `kube-bench` v0.6.8 and is believed to be due to the etcd user not being listed in /etc/passwd mounted from the host:
- ```bash
- [FAIL] 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)
- ```
- * **Note:** This check only fails when `kube-bench` is deployed as a Kubernetes job running on a control plane node.
- * For more information about the etcd data directory ownership check failure issue, see [this issue in GitHub](https://github.com/aquasecurity/kube-bench/issues/1221).
+
+## Running kube-bench
+
+Below are instructions for running the CIS 1.8 Kubernetes Benchmark checks for Kubernetes versions 1.26 through 1.31 using kube-bench.
+
+Download the kube-bench binary:
+
+```bash
+curl -LO https://github.com/aquasecurity/kube-bench/releases/download/v0.8.0/kube-bench_0.8.0_linux_amd64.tar.gz
+tar xzvf kube-bench_0.8.0_linux_amd64.tar.gz
+```
+
+Run kube-bench:
+
+```bash
+sudo KUBECONFIG=/etc/kubernetes/admin.conf ./kube-bench run --config-dir=./cfg --benchmark cis-1.8
+```
## AWS Amazon Linux 2 (AL2) Considerations
The kernel defaults of this Amazon Machine Image (AMI) are not set properly for CIS compliance. CIS compliance does not allow Kubernetes to change kernel settings itself. You must change the kernel defaults to the following settings before installing with kURL:
diff --git a/src/markdown-pages/install-with-kurl/connecting-remotely.md b/src/markdown-pages/install-with-kurl/connecting-remotely.md
index 154840f1..8a17e15c 100644
--- a/src/markdown-pages/install-with-kurl/connecting-remotely.md
+++ b/src/markdown-pages/install-with-kurl/connecting-remotely.md
@@ -16,7 +16,7 @@ sudo bash tasks.sh generate-admin-user
```
This will use the load balancer or public address for the Kubernetes API server and generate a new user with full admin privileges, and save the configuration into a file `$USER.conf`.
-You can then copy to another machine and use with:
+You can then copy this file to another machine (for example, by viewing its contents with `cat $USER.conf` and pasting them into a file on that machine) and use it with:
```
kubectl --kubeconfig=$USER.conf
@@ -36,7 +36,7 @@ Or merge them into your main config with:
```
cp $HOME/.kube/config $HOME/.kube/config.bak
-KUBECONFIG=$HOME/.kube/config.bak:$USER.conf kubectl config view --merge --flatten > $HOME/.kube/config
+KUBECONFIG=$USER.conf:$HOME/.kube/config.bak kubectl config view --merge --flatten > $HOME/.kube/config
```
You can choose the kurl context with:
diff --git a/src/markdown-pages/install-with-kurl/dedicated-primary.md b/src/markdown-pages/install-with-kurl/dedicated-primary.md
index 0d7a8ec9..57443d43 100644
--- a/src/markdown-pages/install-with-kurl/dedicated-primary.md
+++ b/src/markdown-pages/install-with-kurl/dedicated-primary.md
@@ -1,7 +1,7 @@
---
path: "/docs/install-with-kurl/dedicated-primary"
date: "2021-05-14"
-weight: 20
+weight: 21
linktitle: "Dedicated Primary"
title: "Dedicated Primary"
---
diff --git a/src/markdown-pages/install-with-kurl/host-preflights.md b/src/markdown-pages/install-with-kurl/host-preflights.md
index 6a5769bd..9f3ffeca 100644
--- a/src/markdown-pages/install-with-kurl/host-preflights.md
+++ b/src/markdown-pages/install-with-kurl/host-preflights.md
@@ -12,12 +12,12 @@ These checks can also run conditionally, depending on whether the installer is p
The installer has default host preflight checks that run to ensure that certain conditions are met (such as supported operating systems, disk usage, and so on).
The default host preflight checks are designed and maintained to help ensure the successful installation and ongoing health of the cluster.
-The default preflight checks are also customizable. New host preflight checks can be added to run in addition to the defaults, or the default checks can be disabled to allow for a new set of host preflight checks to run instead.
+The default preflight checks are also customizable. New host preflight checks can be added to run in addition to the defaults, or the default checks can be disabled to allow for a new set of host preflight checks to run instead.
For more information, see [customizing host preflight checks](/docs/create-installer/host-preflights).
-Host prelight failures block the installation from continuing and exit with a non-zero return code.
+Host preflight failures block the installation from continuing and exit with a non-zero return code.
This behavior can be changed as follows:
-* Failures can be bypassed with the [`host-preflight-ignore` flag](/docs/install-with-kurl/advanced-options).
+* Failures can be bypassed with the [`host-preflight-ignore` flag](/docs/install-with-kurl/advanced-options).
* For a more conservative approach, the [`host-preflight-enforce-warnings` flag](/docs/install-with-kurl/advanced-options) can be used to block the installation on warnings.
* The [`exclude-builtin-host-preflights` flag](/docs/install-with-kurl/advanced-options) can be used to skip the default host preflight checks and run only the custom checks.
@@ -39,7 +39,7 @@ The following checks run on all nodes during installations and upgrades:
* Firewalld is disabled.
* SELinux is disabled.
* At least one nameserver is accessible on a non-loopback address.
-* /var/lib/kubelet has at least 30GiB total space and is less than 80% full. (Warns when less than 10GiB available or when it is more than 60% full.)
+* Minimum disk space for the /var/lib/kubelet directory. (Warns when less than 10GiB available or when it is more than 60% full.) For information about disk space requirements, see [Core Directory Disk Space Requirements](/docs/install-with-kurl/system-requirements#core-directory-disk-space-requirements).
* The system clock is synchronized and the time zone is set to UTC.
#### Installations Only
@@ -69,6 +69,12 @@ These checks run on all primary and secondary nodes joining an existing cluster:
Some checks only run when certain add-ons are enabled or configured in a certain way in the installer:
+#### Flannel
+
+These checks only run on installations with Flannel:
+
+* UDP port 8472 is available on the current host.
+
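+A minimal way to check whether another process is already using UDP port 8472 on a host (an illustrative sketch using `ss`, which is available on most modern Linux distributions):
+
+```bash
+# If this prints nothing, no socket is currently bound to UDP 8472
+sudo ss -lnup | grep -w 8472
+```
+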
#### Weave
These checks only run on installations with Weave:
diff --git a/src/markdown-pages/install-with-kurl/index.md b/src/markdown-pages/install-with-kurl/index.md
index 59d2512b..1bd03bd7 100644
--- a/src/markdown-pages/install-with-kurl/index.md
+++ b/src/markdown-pages/install-with-kurl/index.md
@@ -131,29 +131,29 @@ While the `latest` specification can be suitable for some situations, Replicated
An example of how `latest` can be used in a spec is:
```yaml
- apiVersion: "cluster.kurl.sh/v1beta1"
- kind: "Installer"
- metadata:
- name: ""
- spec:
- kubernetes:
- version: "1.25.x"
- weave:
- version: "2.6.x"
- contour:
- version: "1.22.x"
- minio:
- version: "latest"
- registry:
- version: "latest"
- prometheus:
- version: "latest"
- containerd:
- version: "1.5.x"
- longhorn:
- version: "1.3.x"
- ekco:
- version: "latest"
+apiVersion: "cluster.kurl.sh/v1beta1"
+kind: "Installer"
+metadata:
+ name: "my-installer"
+spec:
+ kubernetes:
+ version: "1.26.x"
+ flannel:
+ version: "0.21.x"
+ contour:
+ version: "1.24.x"
+ prometheus:
+ version: "0.63.x"
+ registry:
+ version: "2.8.x"
+ containerd:
+ version: "1.6.x"
+ ekco:
+ version: "latest"
+ openebs:
+ version: "3.5.x"
+ isLocalPVEnabled: true
+ localPVStorageClassName: "local"
```
## Using the kURL Installer CRD
@@ -164,14 +164,14 @@ install, the install time options can be easily viewed via kubectl.
For example, if the install was done using the following command:
```
-curl https://kurl.sh/latest
+curl https://kurl.sh/latest | sudo bash
```
Once the install is complete you can view the current state of the cluster and every option that was
changed in the kURL YAML spec with the following command.
```
-kubectl get installer latest
+kubectl get installer latest -n default
```
## Modifying an Install Using a YAML Patch File at Runtime.
@@ -211,7 +211,7 @@ Once the install is finished, the merged YAML that represents the install can be
viewed by running the following command to show the current state of the cluster.
```
-kubectl get installer merged
+kubectl get installer merged -n default
```
## Select Examples of Using a Patch YAML File
@@ -249,10 +249,9 @@ replace, not add to commands that may exist on the base YAML.
- ["-A", "INPUT", "-s", "1.1.1.1", "-j", "DROP"]
```
-The following patch YAML can be used to configure the IP adddress ranges
-of Pods and Services. Note that the installer will attempt to default to `10.32.0.0/16`
-for Pods and `10.96.0.0/16` for Services. If those ranges aren't available per the routing table,
-the installer will fallback to searching for available subnets in `10.0.0.0/8`.
+The following patch YAML can be used to configure the IP address ranges of Pods and Services.
+Note that the installer will attempt to default to `10.32.0.0/20` for Pods and `10.96.0.0/22` for Services.
+If not available, the installer will attempt to find an available range with prefix bits of 20 and 22 respectively in the `10.32.0.0/16` or `10.0.0.0/8` address spaces.
```yaml
apiVersion: cluster.kurl.sh/v1beta1
@@ -262,7 +261,7 @@ the installer will fallback to searching for available subnets in `10.0.0.0/8`.
spec:
kubernetes:
serviceCIDR: ""
- weave:
+ flannel:
podCIDR: ""
```
diff --git a/src/markdown-pages/install-with-kurl/ipv6.md b/src/markdown-pages/install-with-kurl/ipv6.md
deleted file mode 100644
index a0e1cacc..00000000
--- a/src/markdown-pages/install-with-kurl/ipv6.md
+++ /dev/null
@@ -1,101 +0,0 @@
----
-path: "/docs/install-with-kurl/ipv6"
-date: "2021-12-14"
-weight: 24
-linktitle: "IPv6"
-title: "IPv6"
-isAlpha: true
----
-kURL can be installed on IPv6 enabled hosts by passing the `ipv6` flag to the installer or by setting the `kurl.ipv6` field to `true` in the yaml spec.
-
-```
-sudo bash install.sh ipv6
-```
-
-This example shows a valid spec for ipv6.
-
-```
-apiVersion: "cluster.kurl.sh/v1beta1"
-kind: "Installer"
-metadata:
- name: "ipv6"
-spec:
- kurl:
- ipv6: true
- kubernetes:
- version: "1.23.x"
- kotsadm:
- version: "latest"
- antrea:
- version: "latest"
- contour:
- version: "1.20.x"
- prometheus:
- version: "0.53.x"
- registry:
- version: "2.7.x"
- containerd:
- version: "1.4.x"
- ekco:
- version: "latest"
- minio:
- version: "2020-01-25T02-50-51Z"
- longhorn:
- version: "1.2.x"
-```
-
-There is no auto-detection of ipv6 or fall-back to ipv4 when ipv6 is not enabled on the host.
-
-
-## Current Limitations
-
-* Dual-stack is not supported. Resources will have only an IPv6 address when IPv6 is enabled. The host can be dual-stack, but control plane servers, pods, and cluster services will use IPv6. Node port services must be accessed on the hosts' IPv6 address.
-* The only supported operating systems are: Ubuntu 18.04, Ubuntu 20.04, CentOS 8, and RHEL 8.
-* Antrea is the only supported CNI (1.4.0+).
-* Antrea with encryption requires the kernel wireguard module to be available. The installer will bail if wireguard module cannot be loaded. Follow this guide for your OS, then reboot before running the kURL installer: https://www.wireguard.com/install/.
-* Rook is the only supported CSI (1.5.12+).
-* Snapshots require velero 1.7.1+.
-* External load balancer requires a DNS name. You cannot enter an IPv6 IP at the load balancer prompt. (The internal load balancer is not affected since it automatically uses `localhost`).
-
-## Host Requirements
-
-* IPv6 forwarding must be enabled and bridge-call-nf6tables must be enabled. The installer does this automatically and configures this to persist after reboots.
-
-* Using antrea, TCP 8091 and UDP 6081 have to be open between nodes instead of the ports used by weave (6784 and 6783). Antrea with encryption requires UDP port 51820 be open between nodes for wireguard and that the wireguard kernel module be available.
-
-* The ip6_tables kernel module must be available. The installer configures this to be loaded automatically.
-
-
-## Troubleshooting
-
-### Joining 2nd Node to Cluster Fails
-
-If nodes in the cluster can't `ping6` each other and the commmand `ip -6 route` shows no default route, you may need to add a default route to your primary interface, for example: `ip -6 route add default dev ens5`
-
-### Upload Kotsadm License Fails
-
-If an application license fails to upload, click the more details link to view the error.
-An error like this indicates a DNS failure:
-```
-failed to execute get request: Get "https://replicated.app/license/ipv6": dial tcp: lookup replicated.app on [fd00:c00b:2::a]:53: server misbehaving`
-```
-
-This is caused by a lack of AAAA records for replicated.app.
-The solution is to deploy a NAT64 server that can translate `A` records into `AAAA` records.
-Another solution is to switch to an airgap install or to temporarily set the env var "DISABLE_OUTBOUND_CONNECTIONS=1" on the kotsadm deployment.
-A third option is to perform a [proxy install](/docs/install-with-kurl/proxy-installs).
-
-### Networking Check Fails in kURL Installer
-
-The kURL installer includes a networking check after antrea is installed.
-If this fails, check the logs for the antrea-agent daemonset in the kube-system namespace.
-An error like the following indicates the ip6_tables kernel module is not available:
-```
-E1210 19:44:12.494994 1 route_linux.go:119] Failed to initialize iptables: error checking if chain ANTREA-PREROUTING exists in table raw: running [/usr/sbin/ip6tables -t raw -S ANTREA-PREROUTING 1
---wait]: exit status 3: modprobe: FATAL: Module ip6_tables not found in directory /lib/modules/4.18.0-193.19.1.el8_2.x86_64
-ip6tables v1.8.4 (legacy): can't initialize ip6tables table `raw': Table does not exist (do you need to insmod?)
-Perhaps ip6tables or your kernel needs to be upgraded.
-```
-
-Verify that `lsmod | grep ip6_tables` is empty and then run `modprobe ip6_tables` to load the required module.
-Since the antrea add-on install script persists this under `/etc/modules-load.d` there may be another host agent interfering if this module is not loaded after reboots.
diff --git a/src/markdown-pages/install-with-kurl/managing-nodes.md b/src/markdown-pages/install-with-kurl/managing-nodes.md
new file mode 100644
index 00000000..c0a4ccb3
--- /dev/null
+++ b/src/markdown-pages/install-with-kurl/managing-nodes.md
@@ -0,0 +1,451 @@
+---
+path: "/docs/install-with-kurl/managing-nodes"
+date: "2022-10-14"
+weight: 20
+linktitle: "Managing Nodes"
+title: "Managing Nodes"
+---
+
+This topic describes how to manage nodes on kURL clusters.
+It includes procedures for how to safely reset, reboot, and remove nodes when performing maintenance tasks.
+
+See the following sections:
+
+* [EKCO Add-On Prerequisite](#ekco-add-on-prerequisite)
+* [Reset a Node](#reset-a-node)
+* [Reboot a Node](#reboot-a-node)
+* [Remove a Node from Rook Ceph Clusters](#remove-a-node-from-rook-ceph-clusters)
+ * [Rook Ceph and etcd Node Removal Requirements](#rook-ceph-and-etcd-node-removal-requirements)
+ * [Rook Ceph Cluster Prerequisites](#rook-ceph-cluster-prerequisites)
+ * [(Recommended) Manually Rebalance Ceph and Remove a Node](#recommended-manually-rebalance-ceph-and-remove-a-node)
+ * [Remove Nodes with EKCO](#remove-nodes-with-ekco)
+* [Troubleshoot Node Removal](#troubleshoot-node-removal)
+
+## EKCO Add-On Prerequisite
+
+Before you manage a node on a kURL cluster, you must install the Embedded kURL Cluster Operator (EKCO) add-on on the cluster.
+The EKCO add-on is a utility tool used to perform maintenance operations on a kURL cluster.
+
+For information on how to install the EKCO add-on to a kURL cluster, see [EKCO Add-on](/docs/add-ons/ekco).
+
+## Reset a Node
+
+Resetting a node is the process of attempting to remove all Kubernetes packages and host files from the node.
+
+Resetting a node can be useful if you are creating and testing a kURL specification in a non-production environment.
+Some larger changes to a kURL specification cannot be deployed for testing by rerunning the kURL installation script on an existing node.
+In this case, you can attempt to reset the node so that you can reinstall kURL to test the change to the kURL specification.
+
+_**Warning**_: Do not attempt to reset a node on a cluster in a production environment.
+Attempting to reset a node can permanently damage the cluster, which makes any data from the cluster irretrievable.
+Reset a node on a cluster only if you are able to delete the host VM and provision a new VM if the reset script does not successfully complete.
+
+To reset a node on a cluster managed by kURL:
+
+1. Run the kURL reset script on a VM that you are able to delete if the script is unsuccessful.
+The kURL reset script first runs the EKCO shutdown script to cordon the node. Then, it attempts to remove all Kubernetes packages and host files from the node.
+
+ * **Online**:
+
+ ```bash
+ curl -sSL https://kurl.sh/latest/tasks.sh | sudo bash -s reset
+ ```
+
+ * **Air Gapped**:
+
+ ```bash
+ cat ./tasks.sh | sudo bash -s reset
+ ```
+
+1. If the reset does not complete, delete the host VM and provision a new VM.
+
+ The reset script might not complete successfully if the removal of the Kubernetes packages and host files from the node also damages the cluster itself.
+ When the cluster is damaged, the tools used by the reset script, such as the kubectl command-line tool, can no longer communicate with the cluster and the script cannot complete.
+
+## Reboot a Node
+
+Rebooting a node is useful when you are performing maintenance on the operating system (OS) level of the node.
+For example, after you perform a kernel update, you can reboot the node to apply the change to the OS.
+
+To reboot a node on a cluster managed by kURL:
+
+1. Run the EKCO shutdown script on the node:
+
+ ```
+ /opt/ekco/shutdown.sh
+ ```
+
+ The shutdown script deletes any Pods on the node that mount volumes provisioned by Rook. It also cordons the node, so that the node is marked as unschedulable and kURL does not start any new containers on the node. For more information, see [EKCO Add-on](/docs/add-ons/ekco).
+
+1. Reboot the node.
+
+## Remove a Node from Rook Ceph Clusters
+
+As part of performing maintenance on a multi-node cluster managed by kURL, it is often required to remove a node from the cluster and replicate its data to a new node. For example, you might need to remove one or more nodes during hardware maintenance.
+
+This section describes how to safely remove nodes from a kURL cluster that uses the Rook add-on for Rook Ceph storage. For more information about the Rook add-on, see [Rook Add-on](/docs/add-ons/rook).
+
+For information about how to remove a node from a cluster that does not use Rook Ceph, see [kubectl drain](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/) in the Kubernetes documentation.
+
+### Rook Ceph and etcd Node Removal Requirements
+
+Review the following requirements and considerations before you remove one or more nodes from Rook Ceph and etcd clusters:
+
+* **etcd cluster health**: To remove a primary node from etcd clusters, you must meet the following requirements to maintain etcd quorum:
+ * You must have at least one primary node.
+ * If you scale the etcd cluster to three primary nodes, you must then maintain a minimum of three primary nodes to maintain quorum.
+* **Rook Ceph cluster health**: When you scale a Ceph Storage Cluster to three or more Ceph Object Storage Daemons (OSDs), such as when you add additional manager or worker nodes to the cluster, the Ceph Storage Cluster can no longer have fewer than three OSDs. If you reduce the number of OSDs to less than three in this case, then the Ceph Storage Cluster loses quorum.
+* **Add a node before removing a node**: To remove and replace a node, it is recommended that you add a new node before removing the node.
+For example, to remove one node in a three-node cluster, first add a new node to scale the cluster to four nodes. Then, remove the desired node to scale the cluster back down to three nodes.
+* **Remove one node at a time**: If you need to remove multiple nodes from a cluster, remove one node at a time.
+
+### Rook Ceph Cluster Prerequisites
+
+Complete the following prerequisites before you remove one or more nodes from a Rook Ceph cluster:
+
+* Upgrade Rook Ceph to v1.4 or later.
+
+ The two latest minor releases of Rook Ceph are actively maintained. It is recommended to upgrade to the latest stable release available. For more information, see [Release Cycle](https://rook.io/docs/rook/v1.10/Getting-Started/release-cycle/) in the Rook Ceph documentation.
+
+  Attempting to remove a node from a cluster that uses a Rook Ceph version earlier than v1.4 can cause Ceph to enter an unhealthy state. For example, see [Rook Ceph v1.0.4 is Unhealthy with Mon Pod Pending](#rook-ceph-v104-is-unhealthy-with-mon-pod-pending) under _Troubleshoot Node Removal_ below.
+
+* In the kURL specification, set `isBlockStorageEnabled` to `true`. This is the default for Rook Ceph v1.4 and later.
+
+* Ensure that you can access the ceph CLI from a Pod that can communicate with the Ceph Storage Cluster. To access the ceph CLI, you can do one of the following:
+
+ * (Recommended) Use the `rook-ceph-tools` Pod to access the ceph CLI.
+ Use the same version of the Rook toolbox as the version of Rook Ceph that is installed in the cluster.
+ By default, the `rook-ceph-tools` Pod is included on kURL clusters with Rook Ceph v1.4 and later.
+ For more information about `rook-ceph-tools` Pods, see [Rook Toolbox](https://rook.io/docs/rook/v1.10/Troubleshooting/ceph-toolbox/) in the Rook Ceph documentation.
+
+ * Use `kubectl exec` to enter the `rook-ceph-operator` Pod, where the ceph CLI is available.
+
+* (Optional) Open an interactive shell in the `rook-ceph-tools` or `rook-ceph-operator` Pod to run multiple ceph CLI commands in a row. For example:
+
+ ```
+ kubectl exec -it -n rook-ceph deployment/rook-ceph-tools -- bash
+ ```
+
+ If you do not create an interactive shell, precede each ceph CLI command in the `rook-ceph-tools` or `rook-ceph-operator` Pod with `kubectl exec`. For example:
+
+ ```
+ kubectl exec -it -n rook-ceph deployment/rook-ceph-tools -- ceph status
+ ```
+
+* Verify that Ceph is in a healthy state by running one of the following `ceph status` commands in the `rook-ceph-tools` Pod in the `rook-ceph` namespace:
+
+ * **Rook Ceph v1.4.0 or later**:
+
+ ```
+ kubectl -n rook-ceph exec deployment/rook-ceph-tools -- ceph status
+ ```
+
+ * **Rook Ceph v1.0.0 to 1.3.0**:
+
+ ```
+ kubectl -n rook-ceph exec deployment/rook-ceph-operator -- ceph status
+ ```
+
+ **Note**: It is not recommended to use versions of Rook Ceph earlier than v1.4.0.
+
+ The output of the command shows `health: HEALTH_OK` if Ceph is in a healthy state.
+
+### (Recommended) Manually Rebalance Ceph and Remove a Node
+
+This procedure ensures that the data held in Rook Ceph is safely replicated to a new node before you remove a node.
+Rebalancing your data is critical for preventing data loss that can occur when removing a node if the data stored in Ceph has not been properly replicated.
+
+To manually remove a node, you first use the Ceph CLI to reweight the Ceph OSD to `0` on the node that you want to remove and wait for Ceph to rebalance the data across OSDs.
+Then, you can remove the OSD from the node, and finally remove the node.
+
+**Note**: The commands in this procedure assume that you created an interactive shell in the `rook-ceph-tools` or `rook-ceph-operator` Pod. It also helps to have another shell to use `kubectl` commands at the same time.
+For more information, see [Rook Ceph Cluster Prerequisites](#rook-ceph-cluster-prerequisites) above.
+
+To manually rebalance data and remove a node:
+
+1. Review the [Rook Ceph and etcd Node Removal Requirements](#rook-ceph-and-etcd-node-removal-requirements) above.
+
+1. Complete the [Rook Ceph Cluster Prerequisites](#rook-ceph-cluster-prerequisites) above.
+
+1. Add the same number of new nodes to the cluster that you intended to remove.
+For example, if you intend to remove a total of two nodes, add two new nodes.
+
+ Ceph rebalances the existing placement groups to the new OSDs.
+
+1. After Ceph completes rebalancing, run the following command to verify that Ceph is in a healthy state:
+
+ ```
+ ceph status
+ ```
+
+1. Run the following command to display a list of all the OSDs in the cluster and their associated nodes:
+
+ ```
+ ceph osd tree
+ ```
+
+ **Example output**:
+
+ ```
+ [root@rook-ceph-tools-54ff78f9b6-gqsfm /]# ceph osd tree
+
+ ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
+ -1 0.97649 root default
+ -3 0.19530 host node00.foo.com
+ 0 hdd 0.19530 osd.0 up 1.00000 1.00000
+ -7 0.19530 host node01.foo.com
+ 2 hdd 0.19530 osd.1 up 1.00000 1.00000
+ -5 0.19530 host node02.foo.com
+ 1 hdd 0.19530 osd.2 up 1.00000 1.00000
+ -9 0.19530 host node03.foo.com
+ 3 hdd 0.19530 osd.3 up 1.00000 1.00000
+ -11 0.19530 host node04.foo.com
+ 4 hdd 0.19530 osd.4 up 1.00000 1.00000
+ ```
+
+1. Run the following command to reweight the OSD to `0` on the first node that you intend to remove:
+
+ ```
+ ceph osd reweight OSD_ID 0
+ ```
+
+ Replace `OSD_ID` with the Ceph OSD on the node that you intend to remove. For example, `ceph osd reweight 1 0`.
+
+ Ceph rebalances the placement groups off the OSD that you specify in the `ceph osd reweight` command.
+ To view progress, run `ceph status`, or `watch ceph status`. Ceph may display a HEALTH_WARN state during the rebalance, but will return to HEALTH_OK once complete.
+
+ **Example output**:
+
+ ```
+ [root@rook-ceph-tools-54ff78f9b6-gqsfm /]# watch ceph status
+ cluster:
+ id: 5f0d6e3f-7388-424d-942b-4bab37f94395
+ health: HEALTH_WARN
+ Degraded data redundancy: 1280/879 objects degraded (145.620%), 53 pgs degraded
+ ...
+ progress:
+ Rebalancing after osd.2 marked out (15s)
+ [=====================.......] (remaining: 4s)
+ Rebalancing after osd.1 marked out (5s)
+ [=============...............] (remaining: 5s)
+ ```
+
+1. After the `ceph osd reweight` command completes, run the following command to verify that Ceph is in a healthy state:
+
+ ```
+ ceph status
+ ```
+
+1. Then, run the following command to mark the OSD as `down`:
+
+ ```
+ ceph osd down OSD_ID
+ ```
+
+ Replace `OSD_ID` with the Ceph OSD on the node that you intend to remove. For example, `ceph osd down 1`. Note: it may not report as down until after the next step.
+
+1. In another terminal, outside of the `rook-ceph-tools` pod, run the following kubectl command to scale the corresponding OSD deployment to 0 replicas:
+
+ ```
+ kubectl scale deployment -n rook-ceph OSD_DEPLOYMENT --replicas 0
+ ```
+
+ Replace `OSD_DEPLOYMENT` with the name of the Ceph OSD deployment. For example, `kubectl scale deployment -n rook-ceph rook-ceph-osd-1 --replicas 0`.
+
+1. Back in the `rook-ceph-tools` pod, run the following command to ensure that the OSD is safe to remove:
+
+ ```
+ ceph osd safe-to-destroy osd.OSD_ID
+ ```
+
+ Replace `OSD_ID` with the ID of the OSD. For example, `ceph osd safe-to-destroy osd.1`.
+
+ **Example output**:
+
+ ```
+ OSD(s) 1 are safe to destroy without reducing data durability.
+ ```
+
+1. Purge the OSD from the Ceph cluster:
+
+ ```
+ ceph osd purge OSD_ID --yes-i-really-mean-it
+ ```
+
+ Replace `OSD_ID` with the ID of the OSD. For example, `ceph osd purge 1 --yes-i-really-mean-it`.
+
+ **Example output**:
+
+ ```
+ purged osd.1
+ ```
+
+1. Outside of the `rook-ceph-tools` pod, delete the OSD deployment:
+
+ ```
+ kubectl delete deployment -n rook-ceph OSD_DEPLOYMENT
+ ```
+
+ Replace `OSD_DEPLOYMENT` with the name of the Ceph OSD deployment. For example, `kubectl delete deployment -n rook-ceph rook-ceph-osd-1`.
+
+1. Remove the node.
+
+Repeat the steps in this procedure for any remaining nodes that you want to remove. Always verify that Ceph is in a HEALTH_OK state before making changes to Ceph.
+
+### Remove Nodes with EKCO
+
+You can use EKCO add-on scripts to programmatically cordon and purge a node so that you can then remove the node from the cluster.
+
+_**Warnings**_: Consider the following warnings about data loss before you proceed with this procedure:
+
+* **Ceph health**: The EKCO scripts in this procedure provide a quick method for cordoning a node and purging Ceph OSDs so that you can remove the node. This procedure is _not_ recommended unless you are able to confirm that Ceph is in a healthy state. If Ceph is not in a healthy state before you remove a node, you risk data loss.
+
+ To verify that Ceph is in a healthy state, run the following `ceph status` command in the `rook-ceph-tools` or `rook-ceph-operator` Pod in the `rook-ceph` namespace for Rook Ceph v1.4 or later:
+
+ ```
+ kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status
+ ```
+
+* **Data replication**: A common Ceph configuration is three data replicas across three Ceph OSDs.
+ It is possible for Ceph to report a healthy status without data being replicated properly across all OSDs.
+ For example, in a single-node cluster, there are not multiple machines where Ceph can replicate data.
+ In this case, even if Ceph reports healthy, removing a node results in data loss because the data was not properly replicated across multiple OSDs on multiple machines.
+
+ If you are not certain that Ceph data replication was configured and completed properly, or if Ceph is not in a healthy state, it is recommended that you first rebalance the data off the node that you intend to remove to avoid data loss.
+ For more information, see [(Recommended) Manually Rebalance Ceph and Remove a Node](#recommended-manually-rebalance-ceph-and-remove-a-node) above.
+
+To use the EKCO add-on to remove a node:
+
+1. Review the [Rook Ceph and etcd Node Removal Requirements](#rook-ceph-and-etcd-node-removal-requirements) above.
+
+1. Complete the [Rook Ceph Cluster Prerequisites](#rook-ceph-cluster-prerequisites) above.
+
+1. Verify that Ceph is in a healthy state before you proceed. Run the following `ceph status` command in the `rook-ceph-tools` or `rook-ceph-operator` Pod in the `rook-ceph` namespace for Rook Ceph v1.4 or later:
+
+ ```
+ kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status
+ ```
+
+1. Run the EKCO shutdown script on the node:
+
+ ```
+ /opt/ekco/shutdown.sh
+ ```
+
+ The shutdown script deletes any Pods on the node that mount volumes provisioned by Rook.
+ It also cordons the node, so that the node is marked as unschedulable and kURL does not start any new containers on the node. For more information, see [EKCO Add-on](/docs/add-ons/ekco).
+
+1. Power down the node.
+
+1. On another primary node in the cluster, run the EKCO purge script for the node that you intend to remove:
+
+ ```
+ ekco-purge-node NODE_NAME
+ ```
+
+ Replace `NODE_NAME` with the name of the node that you powered down in the previous step.
+
+   For information about the EKCO purge script, see [Purge Nodes](/docs/add-ons/ekco#purge-nodes) in _EKCO Add-on_.
+
+1. Remove the node from the cluster.
+
+## Troubleshoot Node Removal
+
+This section includes information about troubleshooting issues with node removal in Rook Ceph clusters.
+
+### Rook Ceph v1.0.4 is Unhealthy with Mon Pod Pending
+
+#### Symptom
+
+After you remove a node from a Rook Ceph v1.0.4 cluster and you run `kubectl -n rook-ceph exec deployment.apps/rook-ceph-operator -- ceph status`, you see that Ceph is in an unhealthy state where a Ceph monitor (mon) is down.
+
+For example:
+
+```
+health: HEALTH_WARN
+ 1/3 mons down, quorum a,c
+```
+
+Additionally, under `services`, one or more are out of quorum:
+
+```
+services:
+ mon: 3 daemons, quorum a,c (age 5min), out of quorum: b
+```
+
+When you run `kubectl -n rook-ceph get pod -l app=rook-ceph-mon`, you see that the mon pod is in a Pending state.
+
+For example:
+
+```
+NAME READY STATUS RESTARTS AGE
+rook-ceph-mon-a 1/1 Running 0 20m
+rook-ceph-mon-b 0/1 Pending 0 9m45s
+rook-ceph-mon-c 1/1 Running 0 13m
+```
+
+#### Cause
+
+This is caused by an issue in Rook Ceph v1.0.4 where the rook-ceph-mon-endpoints ConfigMap still maps a node that was removed.
+
+#### Workaround
+
+To address this issue, you must return the Ceph cluster to a healthy state and upgrade to Rook Ceph v1.4 or later.
+
+To return Ceph to a healthy state so that you can upgrade, manually delete the mapping to the removed node from the rook-ceph-mon-endpoints ConfigMap then rescale the operator.
+
+To return Ceph to a healthy state and upgrade:
+
+1. Stop the Rook Ceph operator:
+
+ ```
+ kubectl -n rook-ceph scale --replicas=0 deployment.apps/rook-ceph-operator
+ ```
+
+1. Edit the rook-ceph-mon-endpoints ConfigMap to delete the removed node from the `mapping`:
+
+ ```
+   kubectl -n rook-ceph edit configmaps rook-ceph-mon-endpoints
+ ```
+
+ _**Warning**_: Ensure that you remove the correct rook-ceph-mon-endpoint from the `mapping` field in the ConfigMap. Removing the wrong rook-ceph-mon-endpoint can cause unexpected behavior, including data loss.
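+
+   For reference, the `mapping` field in that ConfigMap is a JSON object that maps each mon name to the node it runs on. A hypothetical example is shown below; the exact structure can vary by Rook version, and the entry to delete is the one whose `Name`/`Hostname` matches the removed node:
+
+   ```
+   mapping: '{"node":{"a":{"Name":"node-1","Hostname":"node-1","Address":"10.0.0.1"},"b":{"Name":"node-2","Hostname":"node-2","Address":"10.0.0.2"},"c":{"Name":"node-3","Hostname":"node-3","Address":"10.0.0.3"}}}'
+   ```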
+
+1. Find the name of the Pending mon pod:
+
+ ```
+ kubectl -n rook-ceph get pod -l app=rook-ceph-mon
+ ```
+
+1. Delete the Pending mon pod:
+
+ ```
+ kubectl -n rook-ceph delete pod MON_POD_NAME
+ ```
+
+ Replace `MON_POD_NAME` with the name of the mon pod that is in a Pending state from the previous step.
+
+1. Rescale the operator:
+
+ ```
+ kubectl -n rook-ceph scale --replicas=1 deployment.apps/rook-ceph-operator
+ ```
+
+1. Verify that all mon pods are running:
+
+ ```
+ kubectl -n rook-ceph get pod -l app=rook-ceph-mon
+ ```
+
+ The output of this command shows that each mon pod has a `Status` of `Running`.
+
+1. Verify that Ceph is in a healthy state:
+
+ ```
+ kubectl -n rook-ceph exec deployment.apps/rook-ceph-operator -- ceph status
+ ```
+
+ The output of this command shows `health: HEALTH_OK` if Ceph is in a healthy state.
+
+1. After confirming that Ceph is in a healthy state, upgrade Rook Ceph to v1.4 or later before attempting to manage nodes in the cluster.
+
+For more information about these steps, see [Managing nodes when the previous Rook version is in use might leave Ceph in an unhealthy state where mon pods are not rescheduled](https://community.replicated.com/t/managing-nodes-when-the-previous-rook-version-is-in-use-might-leave-ceph-in-an-unhealthy-state-where-mon-pods-are-not-rescheduled/1099/1) in _Replicated Community_.
diff --git a/src/markdown-pages/install-with-kurl/migrating-csi.md b/src/markdown-pages/install-with-kurl/migrating-csi.md
index d49ba1a7..ae2580a9 100644
--- a/src/markdown-pages/install-with-kurl/migrating-csi.md
+++ b/src/markdown-pages/install-with-kurl/migrating-csi.md
@@ -1,50 +1,243 @@
---
path: "/docs/install-with-kurl/migrating-csi"
date: "2021-06-30"
-weight: 22
+weight: 23
linktitle: "Migrating CSI"
-title: "Migrating to Change kURL CSI Add-Ons"
+title: "Migrating to Change CSI Add-On"
---
-As of kURL [v2021.07.30-0](https://kurl.sh/release-notes/v2021.07.30-0), MinIO and Longhorn support migrating data from Rook.
-If you need to migrate to a different storage provider than Longhorn, check out [Migrating](/docs/install-with-kurl/migrating)
+This topic describes how to change the Container Storage Interface (CSI) provisioner add-on in your kURL cluster. It includes information about how to use the kURL installer to automatically migrate data to the new provisioner during upgrade. It also includes prerequisites that you must complete before attempting to change CSI add-ons to reduce the risk of errors during data migration.
-When initially installing the following kURL spec:
+* [Supported CSI Migrations](#supported-csi-migrations)
+* [About Changing the CSI Add-on](#about-changing-the-csi-add-on)
+* [Prerequisites](#prerequisites)
+ * [General Prerequisites](#general-prerequisites)
+ * [Longhorn Prerequisites](#longhorn-prerequisites)
+* [Change the CSI Add-on in a Cluster](#change-the-csi-add-on-in-a-cluster)
+* [Automated Local to Distributed Storage Migrations](#automated-local-to-distributed-storage-migrations)
+* [Troubleshoot Longhorn Data Migration](#troubleshoot-longhorn-data-migration)
+
+## Supported CSI Migrations
+
+_**Important**_: kURL does not support Longhorn. If you are currently using Longhorn, you must migrate data from Longhorn to either OpenEBS or Rook. kURL v2023.02.17-0 and later supports automatic data migration from Longhorn to OpenEBS or Rook. For more information, see [Longhorn Prerequisites](#longhorn-prerequisites) below.
+
+This table describes the CSI add-on migration paths that kURL supports:
+
+| From | To | Notes |
+|-----------|-----------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Longhorn | OpenEBS 3.3.0 and later | Migrating from Longhorn to OpenEBS 3.3.0 or later is recommended for single-node installations. |
+| Longhorn | Rook 1.10.8 and later | Migrating from Longhorn to Rook 1.10.8 or later is recommended for clusters with three or more nodes where data replication and availability are requirements. Compared to OpenEBS, Rook requires more resources from your cluster, including a dedicated block device. Single-node installations of Rook are not recommended. Migrating from Longhorn to Rook is not supported for single-node clusters. |
+| Rook | OpenEBS 3.3.0 and later | Migrating from Rook to OpenEBS 3.3.0 or later is strongly recommended for single-node installations, or for applications that do not require data replication. Compared to Rook, OpenEBS requires significantly fewer hardware resources from your cluster. |
+
+For more information about how to choose between Rook or OpenEBS, see [Choosing a PV Provisioner](/docs/create-installer/choosing-a-pv-provisioner).
+
+## About Changing the CSI Add-on
+
+You can change the CSI provisioner that your kURL cluster uses by updating the CSI add-on in your kURL specification file. Then, when you upgrade a kURL cluster using the new specification, the kURL installer detects the change that you made to the CSI add-on and begins automatically migrating data from the current provisioner to the new one.
+
+kURL supports data migration when you change your CSI provisioner from Rook to OpenEBS, or when you change from Longhorn to Rook or OpenEBS.
+
+The following describes the automatic data migration process when you change the CSI provisioner add-on, where _source_ is the CSI provisioner currently installed in the cluster and _target_ is the desired CSI provisioner:
+
+1. The kURL installer temporarily shuts down all pods mounting volumes backed by the _source_ provisioner. This ensures that the data being migrated is not in use and can be safely copied to the new storage system.
+
+1. kURL recreates all PVCs provided by the _source_ provisioner using the _target_ provisioner as backing storage. Data is then copied from the source PVC to the destination PVC.
+
+1. If you are migrating from Rook or Longhorn to OpenEBS in a cluster that has more than two nodes, then the kURL installer attempts to create local OpenEBS volumes on the same nodes where the original Rook or Longhorn volumes were referenced.
+
+1. When the data migration is complete, the pods are restarted using the new PVCs.
+
+1. kURL uninstalls the _source_ provisioner from the cluster.
+
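+Before and after the migration, you can verify which storage class, and therefore which provisioner, backs each volume in the cluster (a quick check using standard kubectl commands):
+
+```
+kubectl get storageclass
+kubectl get pvc -A
+```
+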
+## Prerequisites
+
+This section includes prerequisites that you must complete before changing the CSI provisioner in your kURL cluster. These prerequisites help you identify and address the most common causes for a data migration failure so that you can reduce the risk of issues.
+
+### General Prerequisites
+
+Before you attempt to change the CSI provisioner in your cluster, complete the following prerequisites:
+
+- Take a snapshot or backup of the relevant data. This helps ensure you can recover your data if the data migration fails.
+
+- Schedule downtime for the migration. During the automated migration process, there is often a period of time where the application is unavailable. The duration of this downtime depends on the amount of data to migrate. Proper planning and scheduling is necessary to minimize the impact of downtime.
+
+- Verify that the version of Kubernetes you are upgrading to supports both the current CSI provisioner and the new provisioner that you want to use. Running incompatible versions causes an error during data migration.
+
+- Ensure that your cluster has adequate hardware resources to run both the current and the new CSI provisioner simultaneously. During the data migration process, the cluster uses twice as much storage capacity as usual due to duplicate volumes, so the Rook dedicated storage device or the OpenEBS volume must have sufficient disk space available to handle this increase.
+
+ After kURL completes the data migration, storage consumption in the cluster returns to normal because the volumes from the previous CSI provisioner are deleted.
+
+ To ensure that your cluster has adequate resources, review the following for requirements specific to Rook or OpenEBS:
+ - **Rook Ceph**: See [Rook Add-on](/add-ons/rook) in the kURL documentation and [Hardware Recommendations](https://docs.ceph.com/en/quincy/start/hardware-recommendations/) in the Ceph documentation.
+ - **OpenEBS**: See [OpenEBS Add-on](/add-ons/openebs) in the kURL documentation and [What are the minimum requirements and supported container orchestrators?](https://openebs.io/docs/faqs/general#what-are-the-minimum-requirements-and-supported-container-orchestrators) in the OpenEBS documentation.
+
+- If you are migrating from Longhorn, complete the additional [Longhorn Prerequisites](#longhorn-prerequisites) below.
+
+- The KOTS and Registry add-ons require an object storage API (similar to AWS S3) to be available in the cluster. This API can be provided either by Rook directly, or by OpenEBS combined with MinIO (OpenEBS provides the storage and MinIO provides the object storage API). If you use the KOTS or Registry add-ons in your cluster, the available migration paths are to Rook or to OpenEBS with MinIO.
+
+### Longhorn Prerequisites
+
+If you are migrating from Longhorn to a different CSI provisioner, you must complete the following prerequisites in addition to the [General Prerequisites](#general-prerequisites) above:
+
+- Upgrade your cluster to kURL [v2023.02.17-0](https://github.com/replicatedhq/kurl/tree/v2023.02.17-0) or later. Automatic data migration from Longhorn to Rook or OpenEBS is not available in kURL versions earlier than v2023.02.17-0.
+
+- Upgrade the version of Longhorn installed in your cluster to 1.2.x or 1.3.x. Longhorn versions 1.2.x and 1.3.x support Kubernetes versions 1.24 and earlier.
+
+- Confirm that the Longhorn volumes are in a healthy state. Run the following command to check the status of the volumes:
+
+ ```
+ kubectl get volumes.longhorn.io -A
+ ```
+
+  If any volumes are reported as not healthy in the `Robustness` column in the output of this command, resolve the issue before proceeding.
+
+ To learn more about volume health, you can also inspect each volume individually:
+
+ ```
+ kubectl get volumes.longhorn.io -n longhorn-system -o yaml
+ ```
+
+  In many cases, unhealthy volumes are caused by issues with volume replication, specifically when multiple replicas are configured for a volume but not all of them have been scheduled.
+
+ _**Note**_: During the data migration process in single-node clusters, the system automatically scales down the number of replicas to 1 in all Longhorn volumes to ensure the volumes are in a healthy state before beginning the data transfer. This is done to minimize the risk of a migration failure.
+
+- Confirm that Longhorn nodes are in a healthy state. The nodes must be healthy to ensure they are not over-provisioned and can handle scheduled workloads. Run the following command to check the status of the Longhorn nodes:
+
+ ```
+ kubectl get nodes.longhorn.io -A
+ ```
+
+ If any node is not reported as "Ready" and "Schedulable" in the output of this command, resolve the issue before proceeding.
+
+ To learn more, you can also inspect each node individually and view its "Status" property:
+
+ ```
+ kubectl get nodes.longhorn.io -n longhorn-system -o yaml
+ ```
+
+- (OpenEBS Only) Before you migrate from Longhorn to OpenEBS:
+ - Ensure the filesystem on the node has adequate space to accommodate twice the amount of data currently stored by Longhorn. This is important because both OpenEBS and Longhorn use the node's filesystem for data storage.
+  - Ensure that an additional 2 GB of memory and 2 CPUs are available for OpenEBS. For more information, see [What are the minimum requirements and supported container orchestrators?](https://openebs.io/docs/faqs/general#what-are-the-minimum-requirements-and-supported-container-orchestrators) in the OpenEBS documentation.
+
+- (Rook Only) Before you migrate from Longhorn to Rook, ensure that the dedicated block device for Rook attached to each node has enough space to host all data currently stored in Longhorn.
+
+## Change the CSI Add-on in a Cluster
+
+This procedure describes how to update the kURL specification file to use a new CSI provisioner add-on. Then, upgrade your kURL cluster to automatically migrate data to the new provisioner.
+
+For more information about the supported migration paths for CSI provisioners, see [Supported CSI Migrations](#supported-csi-migrations) above.
+
+_**Warning**_: When you change the CSI provisioner in your cluster, the data migration process causes some amount of downtime for the application. It is important to plan accordingly to minimize the impact on users.
+
+To migrate to a new CSI provisioner in a kURL cluster:
+
+1. Complete the [Prerequisites](#prerequisites) above.
+
+1. Update the kURL specification to remove the current CSI add-on and add the new CSI add-on that you want to use (either Rook or OpenEBS). For information about the options for the Rook or OpenEBS kURL add-ons, see [Rook Add-on](/add-ons/rook) or [OpenEBS Add-on](/add-ons/openebs).
+
+ **Example:**
+
+ This example shows how to update a kURL specification to change the CSI provisioner add-on from Rook to OpenEBS Local PV.
+
+ Given the following `my-current-installer` file, which specifies Rook as the CSI provisioner:
+
+ ```
+ apiVersion: cluster.kurl.sh/v1beta1
+ kind: Installer
+ metadata:
+ name: my-current-installer
+ spec:
+ kubernetes:
+ version: 1.19.12
+ docker:
+ version: 20.10.5
+ flannel:
+ version: 0.20.2
+ rook:
+ version: 1.0.4
+ ```
+
+   You can remove `rook` and add `openebs` with `isLocalPVEnabled: true` to migrate data from Rook to OpenEBS Local PV, as shown in the following `my-new-installer` file:
+
+ ```
+ apiVersion: cluster.kurl.sh/v1beta1
+ kind: Installer
+ metadata:
+ name: my-new-installer
+ spec:
+ kubernetes:
+ version: 1.19.12
+ docker:
+ version: 20.10.5
+ flannel:
+ version: 0.20.2
+ openebs:
+ version: 3.3.0
+ isLocalPVEnabled: true
+ localPVStorageClassName: local
+ ```
+
+1. Upgrade your kURL cluster to use the updated specification by rerunning the kURL installation script. For more information about how to upgrade a kURL cluster, see [Upgrading](/install-with-kurl/upgrading).
+
+ During the cluster upgrade, the kURL installer detects that the CSI add-on has changed. kURL automatically begins the process of migrating data from the current CSI provisioner to the provisioner in the updated specification. For more information about the data migration process, see [About Changing the CSI Add-on](#about-changing-the-csi-add-on) above.
+
+## Automated Local to Distributed Storage Migrations
+
+You can use the `minimumNodeCount` field to configure kURL to automatically migrate clusters from local (non-HA) storage to distributed (HA) storage when the node count increases to a minimum of _three_ nodes.
+
+When you include the `minimumNodeCount` field and the cluster meets the minimum node count specified, one of the following must occur for kURL to migrate to distributed storage:
+
+- The user joins the third (or a subsequent) node to the cluster using the kURL join script, and accepts a prompt to migrate storage.
+
+- The user runs the kURL install.sh script on a primary node, and accepts a prompt to migrate storage.
+
+- The user runs the `migrate-multinode-storage` command in the kURL tasks.sh script from a primary node.
+
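+For reference, tasks.sh is typically run by piping it to bash with the task name as an argument. A hedged example for an online installation using the `latest` channel (adjust the URL to match your installer):
+
+```
+curl -sSL https://kurl.sh/latest/tasks.sh | sudo bash -s migrate-multinode-storage
+```
+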
+### Implementation
+
+The following example spec uses the `minimumNodeCount` field to configure kURL to run local storage with OpenEBS until the cluster increases to three nodes. When the cluster increases to three nodes, kURL automatically migrates to distributed storage with Rook:
```
-apiVersion: cluster.kurl.sh/v1beta1
-kind: Installer
-metadata:
- name: old
-spec:
- kubernetes:
- version: 1.19.12
- docker:
- version: 20.10.5
- weave:
- version: 2.6.5
- rook:
- version: 1.0.4
+ rook:
+ version: "1.11.x"
+ minimumNodeCount: 3
+ openebs:
+ version: "3.7.x"
+ isLocalPVEnabled: true
+ localPVStorageClassName: "local"
```
-and then upgrading to:
+### Requirements
+
+The `minimumNodeCount` field has the following requirements:
+
+* Distributed storage requires a node count equal to or greater than three. This means that you can set the `minimumNodeCount` field to `3` or greater.
+
+* Automated local to distributed storage migrations require the following:
+
+ - Rook 1.11.7 or later
+ - OpenEBS 3.6.0 or later
+ - Block storage devices for Rook
+
+### Limitation
+
+There is downtime while kURL migrates data from local to distributed storage.
+
+## Troubleshoot Longhorn Data Migration
+
+This section describes how to troubleshoot known issues in migrating data from Longhorn to Rook or OpenEBS.
+
+### Pods Stuck in Terminating or Creating State
+
+One of the most common problems that can arise during the migration is Pods getting stuck in the Terminating or Creating state. This can happen when Pods are being scaled down or up but cannot complete the operation due to an underlying issue with Longhorn. In this case, it is recommended to restart the kubelet service on all nodes by opening a new session to each node and running the following command:
```
-apiVersion: cluster.kurl.sh/v1beta1
-kind: Installer
-metadata:
- name: new
-spec:
- kubernetes:
- version: 1.19.12
- docker:
- version: 20.10.5
- weave:
- version: 2.6.5
- longhorn:
- version: 1.1.2
- minio:
- version: 2020-01-25T02-50-51Z
+sudo systemctl restart kubelet
```
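+
+To identify affected Pods, you can list Pods in all namespaces and filter on their status (a quick check using standard kubectl output):
+
+```
+kubectl get pods -A | grep -E 'Terminating|ContainerCreating'
+```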
-All PVCs created using Rook will be recreated (with the same name and contents) on Longhorn, all data within the Rook object store will be copied to MinIO, and Rook will be uninstalled from the cluster.
+### Restore the Original Number of Volume Replicas
+
+To ensure a smooth migration process when executed on a single-node cluster, all Longhorn volumes are scaled down to 1 replica. This makes it easier to identify any issues that arise during the migration, because scaling up the number of replicas can mask the underlying problem. If the migration fails, the volumes remain at 1 replica so that the root cause of the failure can be identified. If necessary, you can restore the original number of replicas by running the following command:
+```
+kurl longhorn rollback-migration-replicas
+```
diff --git a/src/markdown-pages/install-with-kurl/migrating.md b/src/markdown-pages/install-with-kurl/migrating.md
index d6c66e00..2e7a0d27 100644
--- a/src/markdown-pages/install-with-kurl/migrating.md
+++ b/src/markdown-pages/install-with-kurl/migrating.md
@@ -1,13 +1,14 @@
---
path: "/docs/install-with-kurl/migrating"
date: "2021-06-30"
-weight: 21
+weight: 22
linktitle: "Migrating"
title: "Migrating to Change kURL Add-Ons"
---
Changing the CRI, CSI, or CNI provider on a kURL install is possible by migrating a [KOTS](https://kots.io/) application to a new cluster.
-If you are only changing the CSI from Rook to Longhorn, check out [Migrating CSI](/docs/install-with-kurl/migrating-csi).
+
+If you want to move from one CSI provisioner (Longhorn, OpenEBS, or Rook) to another, see [Migrating CSI](/docs/install-with-kurl/migrating-csi).
## Requirements
@@ -37,7 +38,7 @@ If you are only changing the CSI from Rook to Longhorn, check out [Migrating CSI
## Example Old and New Specs
-In the new spec, the Kubernetes version has been upgraded to 1.21, Rook has been replaced with Longhorn and Minio, Weave has been replaced with Antrea, and docker has been replaced with containerd.
+In the new spec, the Kubernetes version has been upgraded to 1.21, Longhorn has been replaced with OpenEBS, Weave has been replaced with Flannel, and docker has been replaced with containerd.
### Old
@@ -53,8 +54,10 @@ spec:
version: 20.10.5
weave:
version: 2.6.5
- rook:
- version: 1.0.4
+ longhorn:
+ version: 1.3.1
+ minio:
+ version: 2020-01-25T02-50-51Z
registry:
version: 2.7.1
kotsadm:
@@ -72,19 +75,21 @@ metadata:
name: new
spec:
kubernetes:
- version: 1.21.2
+ version: 1.21.14
containerd:
- version: 1.4.6
- antrea:
- version: 1.1.0
- longhorn:
- version: 1.1.1
+ version: 1.6.15
+ flannel:
+ version: 0.20.2
minio:
- version: 2020-01-25T02-50-51Z
+ version: 2023-01-25T00-19-54Z
+ openebs:
+ version: 3.3.0
+ isLocalPVEnabled: true
+ localPVStorageClassName: local
registry:
- version: 2.7.1
+ version: 2.8.1
kotsadm:
- version: 1.45.0
+ version: 1.93.0
velero:
- version: 1.6.1
+ version: 1.9.5
```
diff --git a/src/markdown-pages/install-with-kurl/proxy-installs.md b/src/markdown-pages/install-with-kurl/proxy-installs.md
index 7d4ea436..f2a7411c 100644
--- a/src/markdown-pages/install-with-kurl/proxy-installs.md
+++ b/src/markdown-pages/install-with-kurl/proxy-installs.md
@@ -20,13 +20,18 @@ spec:
```
The proxy configuration will be used to download packages required for the installation script to complete and will be applied to the docker and KOTS add-ons.
+The provided proxy will be configured and used for HTTP and HTTPS access.
See [Modifying an Install Using a YAML Patch File](/docs/install-with-kurl#modifying-an-install-using-a-yaml-patch-file-at-runtime) for more details on using patch files.
## Proxy Environment Variables
-If a `proxyAddress` is not configured in the installer spec, the following environment variables will be checked in order: `HTTP_PROXY`, `http_proxy`, `HTTPS_PROXY`, `https_proxy`.
+If a `proxyAddress` is not configured in the installer spec, the following environment variables will be used instead:
-Any addresses set in either the `NO_PROXY` or `no_proxy` environment variable will be added to the list of no proxy addresses.
+| Environment variable | Description |
+|-----------------------------|-------------------------------------------------------------------------|
+| `HTTP_PROXY`/`http_proxy` | Will be configured and used for HTTP access |
+| `HTTPS_PROXY`/`https_proxy` | Will be configured and used for HTTPS access |
+| `NO_PROXY`/`no_proxy` | Defines the host names/IP addresses that shouldn't go through the proxy |
## No Proxy Addresses
diff --git a/src/markdown-pages/install-with-kurl/removing-object-storage.md b/src/markdown-pages/install-with-kurl/removing-object-storage.md
index 20b9e3b3..bf47b7bb 100644
--- a/src/markdown-pages/install-with-kurl/removing-object-storage.md
+++ b/src/markdown-pages/install-with-kurl/removing-object-storage.md
@@ -1,7 +1,7 @@
---
path: "/docs/install-with-kurl/removing-object-storage"
date: "2021-12-17"
-weight: 23
+weight: 24
linktitle: "Removing Object Storage"
title: "Removing Object Storage Dependencies"
---
@@ -24,19 +24,21 @@ metadata:
name: no-object-storage
spec:
kubernetes:
- version: 1.21.x
+ version: 1.26.x
containerd:
- version: 1.4.x
- weave:
- version: 2.6.5
- longhorn:
- version: 1.2.x
+ version: 1.6.x
+ flannel:
+ version: 0.20.x
+ openebs:
+ version: 3.3.x
+ isLocalPVEnabled: true
+ localPVStorageClassName: local
registry:
- version: 2.5.7
+ version: 2.8.x
velero:
- version: 1.7.x
+ version: 1.9.x
kotsadm:
- version: 1.58.x
+ version: 1.93.x
disableS3: true
```
@@ -58,8 +60,8 @@ When you re-install or upgrade using the updated installer spec (see the [New In
To fully remove object storage from the cluster, the current provider must be removed from your installer spec.
In the case of MinIO, it is a straightforward removal of the add-on.
-For clusters using the Rook add-on, another CSI such as Longhorn or OpenEBS is required for storage.
-Data can be migrated to Longhorn automatically using existing [CSI Migrations](/docs/install-with-kurl/migrating-csi). If you want to use another storage provider like OpenEBS, read about [migrating with snapshots](/docs/install-with-kurl/migrating).
+For clusters using the Rook add-on, another CSI such as OpenEBS is required for storage.
+Data can be migrated to OpenEBS automatically using existing [CSI Migrations](/docs/install-with-kurl/migrating-csi).
When you re-install or upgrade using the new updated spec (see the [New Installations](/docs/install-with-kurl/removing-object-storage#new-installations) section for a sample), you should expect:
1. **Registry**: A persistent volume (PV) will be added for storage. In order to trigger a migration from object storage into the attached PV, see [Setting `disableS3` to `true` in the KOTS Add-On](/docs/install-with-kurl/removing-object-storage#setting-disables3-to-true-in-the-kots-add-on).
diff --git a/src/markdown-pages/install-with-kurl/setup-tls-certs.md b/src/markdown-pages/install-with-kurl/setup-tls-certs.md
index d666bee1..8445ce80 100644
--- a/src/markdown-pages/install-with-kurl/setup-tls-certs.md
+++ b/src/markdown-pages/install-with-kurl/setup-tls-certs.md
@@ -6,50 +6,6 @@ linktitle: "TLS Certificates"
title: "Setting up TLS Certificates"
---
-After kURL install has completed, you'll be prompted to set up the KOTS Admin Console by directing your browser to `http://:8800`. Only after initial install you'll be presented a warning page:
-
-![tls-certs-insecure](/tls-certs-insecure.png)
-
-
-The next page allows you to configure your TLS certificates:
-
-![tls-certs-setup](/tls-certs-setup.png)
-
-To continue with the preinstalled self-signed TLS certificates, click "skip & continue". Otherwise upload your signed TLS certificates as describe on this page. The hostname is an optional field, and when its specified, its used to redirect your browser to the specified host.
-
-Once you complete this process then you'll no longer be presented this page when logging into the KOTS Admin Console. If you direct your browser to `http://:8800` you'll always be redirected to `https://:8800`.
-
-## KOTS TLS Secret
-
-kURL will set up a Kubernetes secret called `kotsadm-tls`. The secret stores the TLS certificate, key, and hostname. Initially the secret will have an annotation set called `acceptAnonymousUploads`. This indicates that a new TLS certificate can be uploaded as described above.
-
-## Uploading new TLS Certs
-
-If you've already gone through the setup process once, and you want to upload new TLS certificates, you must run this command to restore the ability to upload new TLS certificates:
-
-`kubectl -n default annotate secret kotsadm-tls acceptAnonymousUploads=1 --overwrite`
-
-**Warning: adding this annotation will temporarily create a vulnerability for an attacker to maliciously upload TLS certificates. Once TLS certificates have been uploaded then the vulnerability is closed again.**
-
-After adding the annotation, you will need to restart the kurl proxy server. The simplest way is to delete the kurl-proxy pod (the pod will automatically get restarted):
-
-`kubectl delete pods PROXY_SERVER`
-
-The following command should provide the name of the kurl-proxy server:
-
-`kubectl get pods -A | grep kurl-proxy | awk '{print $2}'`
-
-After the pod has been restarted direct your browser to `http://:8800/tls` and run through the upload process as described above.
-
-It's best to complete this process as soon as possible to avoid anyone from nefariously uploading TLS certificates. After this process has completed, the vulnerability will be closed, and uploading new TLS certificates will be disallowed again. In order to upload new TLS certificates you must repeat the steps above.
-
-
-### KOTS TLS Certificate Renewal
-
-The certificate used to serve the KOTS Admin Console will be renewed automatically at thirty days prior to expiration if the [EKCO add-on](/docs/add-ons/ekco) is enabled with version 0.7.0+.
-Only the default self-signed certificate will be renewed.
-If a custom certificate has been uploaded then no renewal will be attempted, even if the certificate is expired.
-
## Registry
The TLS certificate for the [registry add-on](/docs/add-ons/registry) will be renewed automatically at thirty days prior to expiration if the [EKCO add-on](/docs/add-ons/ekco) is enabled with version 0.5.0+.
diff --git a/src/markdown-pages/install-with-kurl/system-requirements.md b/src/markdown-pages/install-with-kurl/system-requirements.md
index 3ec125ad..f1d8bc0f 100644
--- a/src/markdown-pages/install-with-kurl/system-requirements.md
+++ b/src/markdown-pages/install-with-kurl/system-requirements.md
@@ -8,74 +8,164 @@ title: "System Requirements"
## Supported Operating Systems
-* Ubuntu 18.04
-* Ubuntu 20.04 (Docker version >= 19.03.10)
-* Ubuntu 22.04 (Requires Containerd version >= 1.5.10 or Docker version >= 20.10.17. Collectd add-ons are not supported.)
-* CentOS 7.4\*, 7.5\*, 7.6\*, 7.7\*, 7.8\*, 7.9, 8.0\*, 8.1\*, 8.2\*, 8.3\*, 8.4\* (CentOS 8.x requires Containerd)
-* RHEL 7.4\*, 7.5\*, 7.6\*, 7.7\*, 7.8\*, 7.9, 8.0\*, 8.1, 8.2, 8.3, 8.4, 8.5, 8.6 (RHEL 8.x requires Containerd)
-* Oracle Linux 7.4\*, 7.5\*, 7.6\*, 7.7\*, 7.8\*, 7.9, 8.0\*, 8.1, 8.2, 8.3, 8.4, 8.5, 8.6 (OL 8.x requires Containerd)
+* Ubuntu 18.04\*
+* Ubuntu 20.04 (Requires Docker version >= 19.03.10)
+* Ubuntu 22.04 (Requires Containerd version >= 1.5.10 or Docker version >= 20.10.17)
+* Ubuntu 24.04 (Requires Containerd package preinstalled on the host)
+* CentOS 7.4\*, 7.5\*, 7.6\*, 7.7\*, 7.8\*, 7.9\*, 8.0\*, 8.1\*, 8.2\*, 8.3\*, 8.4\* (CentOS 8.x requires Containerd)
+* RHEL 7.4\*, 7.5\*, 7.6\*, 7.7\*, 7.8\*, 7.9\*, 8.0\*, 8.1\*, 8.2\*, 8.3\*, 8.4\*, 8.5\*, 8.6, 8.7\*, 8.8, 8.9, 8.10, 9.0, 9.1\*, 9.2, 9.3, 9.4 (RHEL 8.x and 9.x require Containerd)
+* Rocky Linux 9.0\*, 9.1\*, 9.2, 9.3, 9.4 (Rocky Linux 9.x requires Containerd)
+* Oracle Linux 7.4\*, 7.5\*, 7.6\*, 7.7\*, 7.8\*, 7.9, 8.0\*, 8.1\*, 8.2\*, 8.3\*, 8.4\*, 8.5\*, 8.6\*, 8.7\*, 8.8\*, 8.9\*, 8.10 (OL 8.x requires Containerd)
* Amazon Linux 2
+* Amazon Linux 2023 (Requires Containerd package preinstalled on the host)
-*: This version is deprecated since it is no longer supported by its creator. We continue to support it, but support will be removed in the future.
+\* *This version is deprecated since it is no longer supported by its creator. We continue to support it, but support will be removed in the future.*
## Minimum System Requirements
* 4 AMD64 CPUs or equivalent per machine
* 8 GB of RAM per machine
-* 40 GB of Disk Space per machine.
- * **Note**: 10GB of the total 40GB should be available to `/var/lib/rook`. For more information see [Rook](/docs/add-ons/rook)
-* TCP ports 2379, 2380, 6443, 6783, 10250, 10251 and 10252 open between cluster nodes
-* UDP ports 6783 and 6784 open between cluster nodes
-
-## kURL Dependencies Directory
-
-kURL will install additional dependencies in the directory /var/lib/kurl/.
-These dependencies include utilities as well as system packages and container images.
-This directory must be writeable by the kURL installer and must have sufficient disk space (5 GB).
-This directory can be overridden with the flag `kurl-install-directory`.
-For more information see [kURL Advanced Install Options](/docs/install-with-kurl/advanced-options).
+* 256 GB of Disk Space per machine
+ *(For more specific requirements see [Disk Space Requirements](#disk-space-requirements) below)*
+* TCP ports 2379, 2380, 6443, 10250, 10257 and 10259 and UDP port 8472 (Flannel VXLAN) open between cluster nodes
+ *(For more specific add-on requirements see [Networking Requirements](#networking-requirements) below)*
+
+## Host Package Requirements
+
+Host packages are bundled and installed by kURL without the need for external package repositories, except in the case of Red Hat Enterprise Linux 9 and Rocky Linux 9.
+
+For these OSes, the following packages are required per add-on:
+
+| Add-on | Packages |
+| -------------------------------- | -------- |
+| * kURL Core | curl openssl tar |
+| Collectd | bash glibc libcurl libcurl-minimal libgcrypt libgpg-error libmnl openssl-libs rrdtool systemd systemd-libs yajl |
+| Containerd | bash libseccomp libzstd systemd |
+| Kubernetes | conntrack-tools ethtool glibc iproute iptables-nft socat util-linux |
+| Longhorn | iscsi-initiator-utils nfs-utils |
+| OpenEBS *\*versions 1.x and 2.x* | iscsi-initiator-utils |
+| Rook | lvm2 |
+| Velero | nfs-utils |
+
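+For example, on Red Hat Enterprise Linux 9 or Rocky Linux 9, the kURL core and Kubernetes packages from the table above could be preinstalled with `dnf` (a sketch; add the rows for whichever add-ons your spec includes):
+
+```
+sudo dnf install -y curl openssl tar conntrack-tools ethtool glibc iproute iptables-nft socat util-linux
+```
+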
+## Disk Space Requirements
+
+### Per Node Disk Space
+
+256 GB of disk space per node is strongly recommended to accommodate growth and ensure optimal performance. At minimum, kURL requires 100 GB of disk space per node.
+It is important to note that disk usage can vary based on container image sizes, ephemeral data, and specific application requirements.
+
+### Storage Provisioner Add-Ons
+
+Note: **OpenEBS** allocates its Persistent Volumes within the `/var/openebs/local/` directory, which means this location is used to store the data of applications running in the kURL cluster.
+
+The **Rook** add-on, starting with version 1.4.3, requires each node in the cluster to have an unformatted storage device dedicated to the storage of Ceph volumes.
+Comprehensive information and guidelines about this setup are available in the [Rook Block Storage](https://kurl.sh/docs/add-ons/rook#block-storage) documentation.
+For Rook versions 1.0.x, Persistent Volumes are provisioned in the `/opt/replicated/rook/` directory.
+
+### On Disk Partitioning
+
+We advise against configuring the system with multiple mount points.
+Experience has shown that using distinct partitions for directories such as `/var` often leads to unnecessary complications.
+Using *symbolic links* **is not recommended** in any scenario.
+
+If required, the directories used by the selected storage provisioner (for example, `/var/openebs/local` in the case of OpenEBS) can be mounted from a separate partition.
+This configuration must be established **prior to the installation**. Note that storage provisioners are not compatible with *symbolic links*.
+
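+For example, to mount `/var/openebs/local` from a dedicated partition before installing kURL, you might do something like the following (a sketch; `/dev/sdb1` is a hypothetical device name and XFS is only one suitable filesystem choice):
+
+```
+sudo mkfs.xfs /dev/sdb1
+sudo mkdir -p /var/openebs/local
+echo '/dev/sdb1  /var/openebs/local  xfs  defaults  0 0' | sudo tee -a /etc/fstab
+sudo mount /var/openebs/local
+```
+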
## Networking Requirements
+
+### Hostnames, DNS, and IP Address
+
+#### All hosts in the cluster must have valid DNS records and hostnames
+
+The fully-qualified domain name (FQDN) of any host used with kURL **must** be a valid DNS subdomain name, and its name records **must** be resolvable by DNS.
+
+A valid DNS name must:
+- contain no more than 253 characters
+- contain only lowercase alphanumeric characters, '-' or '.'
+- start with an alphanumeric character
+- end with an alphanumeric character
+
+For more information, see [DNS Subdomain Names](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names) in the Kubernetes documentation.
+
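+A quick way to check the host's FQDN and confirm that it resolves (standard Linux commands; note that `getent` also consults `/etc/hosts`, so confirm the record actually exists in DNS for your environment):
+
+```
+hostname -f
+getent hosts "$(hostname -f)"
+```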
+
+#### All hosts in the cluster must have static IP address assignments on a network interface that will be used for routing to containers
+
+A host in a Kubernetes cluster must have a network interface that can be used for bridging traffic to Kubernetes pods. In order for Pod traffic to work, the host must act as a Layer 3 router to route and switch packets to the right destination. Therefore, a network interface should exist on the host (common names are `eth0`, `enp0s1`, etc.) with an IPv4 address & subnet in a publicly-routable or [private network ranges](https://en.wikipedia.org/wiki/Private_network), and [must be non-overlapping with the subnets used by Kubernetes](https://kubernetes.io/docs/concepts/cluster-administration/networking/#kubernetes-ip-address-ranges). It must *not* be a [link-local address](https://en.wikipedia.org/wiki/Link-local_address#IPv4).
+
+> Note: Removing the primary network interface on a node is *not* a supported configuration for deploying an airgap cluster. An interface must exist for routing, so airgaps should be implemented "on the wire" - in the switch/router/VLAN configuration, by firewalls or network ACLs, or by physical disconnection.
+
+After a host is added to a Kubernetes cluster, Kubernetes assumes that the hostname and IP address of the host **will not change.**
+If you need to change the hostname or IP address of a node, you must first remove the node from the cluster.
+
+To change the hostname or IP address of a node in clusters that do not have three or more nodes, use snapshots to move the application to a new cluster before you attempt to remove the node. For more information about using snapshots, see [Velero Add-on](/add-ons/velero).
+
+For more information about the requirements for naming nodes, see [Node naming uniqueness](https://kubernetes.io/docs/concepts/architecture/nodes/#node-name-uniqueness) in the Kubernetes documentation.
+
+#### All hosts in the cluster must not occupy Kubernetes Pod or Service CIDR ranges
+
+Kubernetes also requires exclusive use of two IP subnets (also known as CIDR ranges) for Pod and Service traffic within the cluster. These subnets **must not** overlap with the subnets used in your local network or routing errors will result.
+
+| Subnet | Description |
+|--------------|-------------------------------------|
+| 10.96.0.0/16 | Kubernetes Service IPs |
+| 10.32.0.0/20 | [Flannel CNI Pod IPs](https://kurl.sh/docs/add-ons/flannel#custom-pod-subnet) |
+| 10.10.0.0/16 | [Weave CNI (deprecated) Pod IPs](https://kurl.sh/docs/add-ons/weave#advanced-install-options) |
+
+These ranges can be customized by setting the appropriate add-on options directly in a kURL spec:
+```yaml
+spec:
+ kubernetes:
+ serviceCIDR: ""
+ flannel:
+ podCIDR: ""
+```
+
+Alternatively, the ranges can be customized with a [patch file](https://kurl.sh/docs/install-with-kurl/#select-examples-of-using-a-patch-yaml-file).
+
### Firewall Openings for Online Installations
-The following domains need to accessible from servers performing online kURL installs.
+The following domains need to be accessible from servers performing online kURL installs.
IP addresses for these services can be found in [replicatedhq/ips](https://github.com/replicatedhq/ips/blob/master/ip_addresses.json).
| Host | Description |
|---------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| amazonaws.com | tar.gz packages are downloaded from Amazon S3 during embedded cluster installations. The IP ranges to allowlist for accessing these can be scraped dynamically from the [AWS IP Address](https://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html#aws-ip-download) Ranges documentation. |
-| k8s.gcr.io | Images for the Kubernetes control plane are downloaded from the [Google Container Registry](https://cloud.google.com/container-registry) repository used to publish official container images for Kubernetes. For more information on the Kubernetes control plane components, see the [Kubernetes documentation](https://kubernetes.io/docs/concepts/overview/components/#control-plane-components). |
-| k8s.kurl.sh | Kubernetes cluster installation scripts and artifacts are served from [kurl.sh](https://kurl.sh). Bash scripts and binary executables are served from kurl.sh. This domain is owned by Replicated, Inc which is headquartered in Los Angeles, CA. |
+| k8s.gcr.io, registry.k8s.io | Images for the Kubernetes control plane are downloaded from the [Google Container Registry](https://cloud.google.com/container-registry) repository used to publish official container images for Kubernetes. Starting March 20, 2023, these requests are proxied to the new address `registry.k8s.io`. Both of these URLs must be allowed network traffic using firewall rules. For more information on the Kubernetes control plane components, see the [Kubernetes documentation](https://kubernetes.io/docs/concepts/overview/components/#control-plane-components). |
+| k8s.kurl.sh, s3.kurl.sh | Kubernetes cluster installation scripts and artifacts are served from [kurl.sh](https://kurl.sh). Bash scripts and binary executables are served from kurl.sh. This domain is owned by Replicated, Inc which is headquartered in Los Angeles, CA. |
No outbound internet access is required for airgapped installations.
### Host Firewall Rules
-The kURL install script will prompt to disable firewalld.
+The kURL install script will prompt to disable firewalld.
Note that firewall rules can affect communications between containers on the **same** machine, so it is recommended to disable these rules entirely for Kubernetes.
-Firewall rules can be added after or preserved during an install, but because installation parameters like pod and service CIDRs can vary based on local networking conditions, there is no general guidance available on default requirements.
+Firewall rules can be added after or preserved during an install, but because installation parameters like pod and service CIDRs can vary based on local networking conditions, there is no general guidance available on default requirements.
See [Advanced Options](/docs/install-with-kurl/advanced-options) for installer flags that can preserve these rules.
The following ports must be open between nodes for multi-node clusters:
-
#### Primary Nodes:
-| Protocol | Direction | Port Range | Purpose | Used By |
-| ------- | --------- | ---------- | ----------------------- | ------- |
-| TCP | Inbound | 6443 | Kubernetes API server | All |
-| TCP | Inbound | 2379-2380 | etcd server client API | Primary |
-| TCP | Inbound | 10250 | kubelet API | Primary |
-| TCP | Inbound | 6783 | Weave Net control | All |
-| UDP | Inbound | 6783-6784 | Weave Net data | All |
+| Protocol | Direction | Port Range | Purpose | Used By |
+| ------- | --------- | ---------- | ---------------------------- | ------- |
+| TCP | Inbound | 6443 | Kubernetes API server | All |
+| TCP | Inbound | 2379-2380 | etcd server client API | Primary |
+| TCP | Inbound | 10250 | kubelet API | Primary |
+| UDP | Inbound | 8472 | Flannel VXLAN | All |
+| TCP | Inbound | 6783 | Weave Net control | All |
+| UDP | Inbound | 6783-6784 | Weave Net data | All |
+| TCP | Inbound | 9090 | Rook CSI RBD Plugin Metrics | All |
#### Secondary Nodes:
-| Protocol | Direction | Port Range | Purpose | Used By |
-| ------- | --------- | ---------- | ----------------------- | ------- |
-| TCP | Inbound | 10250 | kubelet API | Primary |
-| TCP | Inbound | 6783 | Weave Net control | All |
-| UDP | Inbound | 6783-6784 | Weave Net data | All |
+| Protocol | Direction | Port Range | Purpose | Used By |
+| ------- | --------- | ---------- | ---------------------------- | ------- |
+| TCP | Inbound | 10250 | kubelet API | Primary |
+| UDP | Inbound | 8472 | Flannel VXLAN | All |
+| TCP | Inbound | 6783 | Weave Net control | All |
+| UDP | Inbound | 6783-6784 | Weave Net data | All |
+| TCP | Inbound | 9090 | Rook CSI RBD Plugin Metrics | All |
-These ports are required for [Kubernetes](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#control-plane-node-s) and [Weave Net](https://www.weave.works/docs/net/latest/faq/#ports).
+These ports are required for [Kubernetes](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#control-plane-node-s), [Flannel](https://github.com/flannel-io/flannel/blob/master/Documentation/backends.md#vxlan), and [Weave Net](https://www.weave.works/docs/net/latest/faq/#ports).
### Ports Available
@@ -92,6 +182,16 @@ In addition to the ports listed above that must be open between nodes, the follo
| 10257 | kube-controller-manager health server |
| 10259 | kube-scheduler health server |
+### Additional Firewall Rules
+
+When using the Flannel CNI, to allow outgoing TCP connections from pods, you must configure stateless packet filtering firewalls to accept all packets with the TCP "ack" flag set and a destination port in the range 1024-65535.
+For more information, see the Flannel [Firewalls](/docs/add-ons/flannel#firewalls) add-on documentation.
+
+| Name | Source IP | Destination IP | Source port | Destination port | Protocol | TCP flags | Action |
+| ---- | --------- | -------------- | ----------- | ---------------- | -------- | --------- | ------ |
+| Allow outgoing TCP | 0.0.0.0/0 | 0.0.0.0/0 | 0-65535 | 1024-65535 | tcp | ack | accept |
+
## High Availability Requirements
@@ -100,15 +200,15 @@ In addition to the networking requirements described in the previous section, op
### Control Plane HA
-To operate the Kubernetes control plane in HA mode, it is recommended to have a minimum of 3 primary nodes.
-In the event that one of these nodes becomes unavailable, the remaining two will still be able to function with an etcd quorom.
+To operate the Kubernetes control plane in HA mode, it is recommended to have a minimum of 3 primary nodes.
+In the event that one of these nodes becomes unavailable, the remaining two will still be able to function with an etcd quorum.
As the cluster scales, dedicating these primary nodes to control-plane only workloads using the `noSchedule` taint should be considered.
This will affect the number of nodes that need to be provisioned.
### Worker Node HA
The number of required secondary nodes is primarily a function of the desired application availability and throughput.
-By default, primary nodes in kURL also run application workloads.
+By default, primary nodes in kURL also run application workloads.
At least 2 nodes should be used for data durability for applications that use persistent storage (i.e. databases) deployed in-cluster.
### Load Balancers
@@ -122,16 +222,16 @@ graph TB
A -->|Port 6443| D[Primary Node]
```
-Highly available cluster setups that do not leverage EKCO's [internal load balancing capability](/docs/add-ons/ekco#internal-load-balancer) require a load balancer to route requests to healthy nodes.
+Highly available cluster setups that do not leverage EKCO's [internal load balancing capability](/docs/add-ons/ekco#internal-load-balancer) require a load balancer to route requests to healthy nodes.
The following requirements need to be met for load balancers used on the control plane (primary nodes):
1. The load balancer must be able to route TCP traffic, as opposed to Layer 7/HTTP traffic.
-1. The load balancer must support hairpinning, i.e. nodes referring to eachother through the load balancer IP.
+1. The load balancer must support hairpinning, i.e. nodes referring to each other through the load balancer IP.
* **Note**: On AWS, only internet-facing Network Load Balancers (NLBs) and internal AWS NLBs **using IP targets** (not Instance targets) support this.
1. Load balancer health checks should be configured using TCP probes of port 6443 on each primary node.
1. The load balancer should target each primary node on port 6443.
1. In accordance with the above firewall rules, port 6443 should be open on each primary node.
-The IP or DNS name and port of the load balancer should be provided as an argument to kURL during the HA setup.
+The IP or DNS name and port of the load balancer should be provided as an argument to kURL during the HA setup.
See [Highly Available K8s](/docs/install-with-kurl/#highly-available-k8s-ha) for more install information.
For more information on configuring load balancers in the public cloud for kURL installs see [Public Cloud Load Balancing](/docs/install-with-kurl/public-cloud-load-balancing).
@@ -142,7 +242,7 @@ Load balancer requirements for application workloads vary depending on workload.
The following example cloud VM instance/disk combinations are known to provide sufficient performance for etcd and will pass the write latency preflight.
-* AWS m4.xlarge with 80 GB standard EBS root device
+* AWS m4.xlarge with 100 GB standard EBS root device
* Azure D4ds_v4 with 8 GB ultra disk mounted at /var/lib/etcd provisioned with 2400 IOPS and 128 MB/s throughput
-* Google Cloud Platform n1-standard-4 with 50 GB pd-ssd boot disk
+* Google Cloud Platform n1-standard-4 with 100 GB pd-ssd boot disk
* Google Cloud Platform n1-standard-4 with 500 GB pd-standard boot disk
diff --git a/src/markdown-pages/install-with-kurl/upgrading.md b/src/markdown-pages/install-with-kurl/upgrading.md
index 820c3825..78ebd2a5 100644
--- a/src/markdown-pages/install-with-kurl/upgrading.md
+++ b/src/markdown-pages/install-with-kurl/upgrading.md
@@ -6,7 +6,7 @@ linktitle: "Upgrading"
title: "Upgrading"
---
-To upgrade Kubernetes or an add-on in a kURL cluster, generate a new installer script and run it on any primary in the cluster.
+To upgrade Kubernetes or an add-on in a kURL cluster, increase the versions in the kURL installer spec as desired and then re-run the install script on any primary node in the cluster.
## Kubernetes
@@ -15,13 +15,43 @@ Then if there are any remote primaries to upgrade, the script will drain each se
The script will detect when Kubernetes has been upgraded on the remote node and proceed to drain the next node.
After upgrading all primaries the same operation will be performed sequentially on all remote secondaries.
-The install script supports upgrading at most two minor versions of Kubernetes.
-When upgrading two minor versions, the skipped minor version will be installed before proceeding to the desired version.
-For example, it is possible to upgrade directly from Kubernetes 1.22 to 1.24, but the install script completes the installation of 1.23 before proceeding to 1.24.
+It is possible to upgrade from any Kubernetes minor version to the latest supported Kubernetes version using a single spec.
+This upgrade process steps through the minor versions one at a time.
+For example, an upgrade from Kubernetes 1.19.x to 1.26.x steps through versions 1.20.x, 1.21.x, 1.22.x, 1.23.x, 1.24.x, and 1.25.x before installing 1.26.x.
+
+To upgrade Kubernetes by more than one minor version in airgapped instances without internet access, the end user must provide a supplemental package that includes the assets required for the upgrade. The script prompts the user to provide the required package during the upgrade. For example:
+
+```bash
+⚙ Upgrading Kubernetes from 1.23.17 to 1.26.3
+This involves upgrading from 1.23 to 1.24, 1.24 to 1.25, and 1.25 to 1.26.
+This may take some time.
+⚙ Downloading assets required for Kubernetes 1.23.17 to 1.26.3 upgrade
+The following packages are not available locally, and are required:
+ kubernetes-1.24.12.tar.gz
+ kubernetes-1.25.8.tar.gz
+
+You can download them with the following command:
+
+ curl -LO https://kurl.sh/bundle/version/v2023.04.24-0/19d41b7/packages/kubernetes-1.24.12,kubernetes-1.25.8.tar.gz
+
+Please provide the path to the file on the server.
+Absolute path to file:
+```
+
+Alternatively, to avoid this prompt, end-users can download the required Kubernetes package before beginning the upgrade process and move it to the `/var/lib/kurl/assets/` directory.
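+
+For example, after downloading the supplemental package with the `curl` command shown in the prompt, it could be staged like this (hedged example using the file name from the prompt above):
+
+```bash
+sudo mv kubernetes-1.24.12,kubernetes-1.25.8.tar.gz /var/lib/kurl/assets/
+```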
## Container Runtimes
-Existing versions of docker and containerd will never be upgraded by the install script.
+If the install script detects that an upgrade of a container runtime (Docker or Containerd) is required, it installs the new version provided in the spec.
+For example, if you have a cluster using Containerd version `1.6.4` and you modify the version in your spec to `1.6.18` and re-run the kURL script, the installer upgrades Containerd to the newly specified version.
+
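+For example, the Containerd upgrade described above only requires bumping the version in the spec; a minimal excerpt might look like this (the rest of the spec stays unchanged):
+
+```yaml
+spec:
+  containerd:
+    version: 1.6.18
+```
+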
+Also, be aware that Docker is not supported with Kubernetes versions 1.24+. Therefore, it is recommended to use Containerd instead. You can upgrade your installation by replacing Docker in your spec with Containerd. If the install script detects a change from Docker to Containerd, it will install Containerd, load the images found in Docker, and remove Docker.
+
+### About Containerd Upgrades
+
+Although Containerd does not officially support upgrades that span more than one minor release, the kURL installer automates this upgrade path: kURL can seamlessly upgrade a Containerd installation that spans two minor releases, streamlining the transition to a newer version of Containerd.
+
+Note that while it is possible to upgrade Containerd directly from version 1.3.x to 1.5.x, attempting to upgrade across more than two minor releases, such as from version 1.3.x to 1.6.x, results in an upgrade error. Follow the supported upgrade paths to ensure a successful upgrade of your cluster.
## Airgap
diff --git a/src/markdown-pages/introduction/index.md b/src/markdown-pages/introduction/index.md
index 7f9043ac..985a97a4 100644
--- a/src/markdown-pages/introduction/index.md
+++ b/src/markdown-pages/introduction/index.md
@@ -6,16 +6,12 @@ linktitle: "Overview"
title: "Introduction to kURL"
---
-The Kubernetes URL Creator is a framework for creating custom Kubernetes distributions. These distros can then be shared as URLs (to install via `curl` and `bash`) or as downloadable packages (to install in airgapped environments). kURL relies on [kubeadm](https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/) to bring up the Kubernetes control plane, but there are a variety of tasks a system administrator must perform both before and after running kubeadm init in order to have a production-ready Kubernetes cluster. kURL is open source, with a growing list of [add-on components](/add-ons) (including Rook, Weave, Contour, Prometheus, and more) which is easily extensible by [contributing additional add-ons](/docs/add-on-author/).
-
-As an alternative to using `kubeadm`, kURL has beta support for [K3s](/add-ons/k3s) and [RKE2](/add-ons/rke2), Kubernetes distributions from Rancher.
+The Kubernetes URL Creator is a framework for creating custom Kubernetes distributions. These distros can then be shared as URLs (to install via `curl` and `bash`) or as downloadable packages (to install in airgapped environments). kURL relies on [kubeadm](https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/) to bring up the Kubernetes control plane, but there are a variety of tasks a system administrator must perform both before and after running kubeadm init in order to have a production-ready Kubernetes cluster. kURL is open source, with a growing list of [add-on components](/add-ons) (including Rook, Flannel, Contour, Prometheus, and more) which is easily extensible by [contributing additional add-ons](/docs/add-on-author/).
## kURL vs. Standard Distros
### Production Grade Upstream Kubernetes
At its core, kURL is based on [kubeadm](https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/), the cluster management tool built by the core Kubernetes team and owned by sig-cluster-lifecycle. This means it benefits from the latest Kubernetes updates, patches and security hot-fixes as they are shipped by Kubernetes maintainers. kURL is a framework for declaring the layers that exist before and after the services that kubeadm provides.
-kURL can also leverage the K3s and RKE2 distributions instead of `kubeadm`. kURL support of these distributions is currently in beta. Several components are already prepackaged with these distributions, such as for networking, storage, and ingress, but other kURL add-ons can still be included.
-
### Flexible
Compared to standard Kubernetes distributions, it's worth emphasizing that kURL is actually a flexible Kubernetes distribution creator. Most distributions make decisions about CRI, CNI, Storage, Ingress, etc. out of the box. Comparatively, [kurl.sh](https://kurl.sh) allows you to choose your own providers and versions of these components.
diff --git a/src/markdown-pages/introduction/what-kurl-does.md b/src/markdown-pages/introduction/what-kurl-does.md
index 1a8326f6..712dc303 100644
--- a/src/markdown-pages/introduction/what-kurl-does.md
+++ b/src/markdown-pages/introduction/what-kurl-does.md
@@ -30,12 +30,12 @@ kURL will perform the following steps on the host prior to delegating to `kubead
* Configure Docker/containerd and Kubernetes to work behind a proxy if detected
## After kubeadm (Adding Add-Ons)
-Once kubeadm gets the cluster running, it’s not ready for an application yet. A cluster will need networking, storage and more. These services are provided by other other open source components, including a lot of the CNCF ecosystem. In a kURL installation manifest, you can specify the additional add-ons that are installed after kubectl starts the cluster. For example, you can include Weave for a CNI plugin, Rook for distributed storage, Prometheus for monitoring and Fluentd for log aggregation. In addition to specifying the add-ons and versions, most add-ons include advanced options that allow you to specify the initial configuration.
+Once kubeadm gets the cluster running, it’s not ready for an application yet. A cluster will need networking, storage and more. These services are provided by other open source components, many of them from the CNCF ecosystem. In a kURL installation manifest, you can specify the additional add-ons that are installed after kubeadm starts the cluster. For example, you can include Flannel for a CNI plugin, Rook for distributed storage, Prometheus for monitoring and Fluentd for log aggregation. In addition to specifying the add-ons and versions, most add-ons include advanced options that allow you to specify the initial configuration.
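+A minimal installer spec along those lines is sketched below. The version numbers are placeholders, and each add-on accepts additional fields for its advanced options, which are documented on the individual add-on pages.
+
+```
+apiVersion: cluster.kurl.sh/v1beta1
+kind: Installer
+metadata:
+  name: example-installer
+spec:
+  kubernetes:
+    version: 1.26.x
+  flannel:       # CNI plugin
+    version: latest
+  rook:          # distributed storage
+    version: latest
+  prometheus:    # monitoring
+    version: latest
+  fluentd:       # log aggregation
+    version: latest
+```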
After `kubeadm init` has brought up the Kubernetes control plane, kURL will install add-ons into the cluster.
The available add-ons are:
-* [Weave](https://www.weave.works/oss/net/)
+* [Flannel](https://github.com/flannel-io/flannel/releases)
* [Contour](https://projectcontour.io/)
* [OpenEBS](https://openebs.io/)
* [Rook](https://rook.io/)
@@ -49,7 +49,4 @@ The available add-ons are:
* [Velero](https://velero.io/)
* [EKCO](https://github.com/replicatedhq/ekco)
* [Sonobuoy](https://github.com/vmware-tanzu/sonobuoy/releases)
-* [Cert Mnager](https://github.com/cert-manager/cert-manager)
-
-## kURL and K3s/RKE2 (Beta)
-As an alternative to using `kubeadm`, kURL can use the RKE2 or K3s Kubernetes distributions. These distributions are more opinionated and include several add-ons by default, such as for networking, storage, and ingress. Other kURL add-ons can be included in the kURL specification if they are within the limitations specified in the [K3s](/add-ons/k3s#limitations) and [RKE2](/add-ons/rke2#limitations) add-on documentation.
+* [Cert Manager](https://github.com/cert-manager/cert-manager)
diff --git a/src/scss/utilities/base.scss b/src/scss/utilities/base.scss
index f6369486..acb9efd7 100644
--- a/src/scss/utilities/base.scss
+++ b/src/scss/utilities/base.scss
@@ -82,7 +82,7 @@ body a {
.prerelease-tag {
font-weight: 400;
- font-size: 10px;
+ font-size: 9px;
line-height: 1;
letter-spacing: 1px;
padding: 4px 8px;
@@ -103,6 +103,9 @@ body a {
&.beta {
background-color: #F9485C;
}
+ &.deprecated {
+ background-color: #77000e;
+ }
}
.suite-banner {
diff --git a/src/scss/utilities/icons.scss b/src/scss/utilities/icons.scss
index b4cd5d58..e47f38a1 100644
--- a/src/scss/utilities/icons.scss
+++ b/src/scss/utilities/icons.scss
@@ -318,4 +318,4 @@
background-position: -270px -9px;
margin-left: 10px;
cursor: pointer;
-}
\ No newline at end of file
+}
diff --git a/src/templates/DocsTemplate.js b/src/templates/DocsTemplate.js
index 3388de27..429a4f5d 100644
--- a/src/templates/DocsTemplate.js
+++ b/src/templates/DocsTemplate.js
@@ -40,10 +40,10 @@ export default function Template({