8 changes: 7 additions & 1 deletion docs/ADMIN_GUIDE.md
@@ -422,7 +422,13 @@ How to perform the operation in the web interface:
The VirtualMachineClass resource is designed for centralized configuration of preferred virtual machine settings. It allows you to define the CPU instruction set and sizing policies for the CPU and memory resources of virtual machines, as well as the ratios of these resources. In addition, VirtualMachineClass manages the placement of virtual machines across platform nodes, which lets administrators use virtualization platform resources effectively and place virtual machines optimally.

During installation, a single VirtualMachineClass `generic` resource is automatically created. It represents a universal CPU type based on the older, but widely supported, Nehalem architecture. This enables running VMs on any node in the cluster and allows live migration.

The administrator can modify the parameters of the `generic` VirtualMachineClass resource (except for the `.spec.cpu` section) or delete this resource.
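
For orientation, below is a minimal sketch of a VirtualMachineClass with the `Discovery` CPU type. The exact field names (in particular `discovery` and `sizingPolicies`) are assumptions here; refer to the vCPU Discovery configuration example linked below for the authoritative schema.

```yaml
# Sketch only: field names are assumptions based on the structure of the
# built-in `generic` class; see the vCPU Discovery configuration example
# for the authoritative schema.
apiVersion: virtualization.deckhouse.io/v1alpha2
kind: VirtualMachineClass
metadata:
  name: cpu-discovery
spec:
  cpu:
    # Discovery derives a common set of CPU features from the selected nodes,
    # so virtual machines can migrate freely between them.
    type: Discovery
    discovery:
      nodeSelector:
        matchExpressions:
          - key: node-role.kubernetes.io/control-plane
            operator: DoesNotExist
  # Optional sizing policies that constrain vCPU and memory combinations.
  sizingPolicies:
    - cores:
        min: 1
        max: 4
      memory:
        min: 1Gi
        max: 8Gi
```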

{{< alert level="info" >}}

It is not recommended to use the `generic` VirtualMachineClass for running workloads in production environments, since this class corresponds to a CPU model with the most limited feature set.

It is recommended to create at least one VirtualMachineClass resource with the `Discovery` type as soon as all nodes have been configured and added to the cluster. This gives virtual machines a common CPU model with the best possible characteristics for the CPUs present on the cluster nodes, letting them use the maximum CPU capabilities and migrate seamlessly between cluster nodes when necessary.

For a configuration example, see [vCPU Discovery configuration example](#vcpu-discovery-configuration-example).
@@ -974,7 +980,7 @@ Live migration of virtual machines between cluster nodes is used for rebalancing
After the module is enabled, the system automatically monitors the distribution of virtual machines and maintains optimal node utilization. The main features of the module are:

- Load balancing: The system monitors CPU reservation on each node. If more than 80% of CPU resources are reserved on a node, some virtual machines will be automatically migrated to less-loaded nodes. This helps avoid overloads and ensures stable VM operation.
- Correct placement: The system checks whether the current node meets the mandatory requirements of the virtual machine's requests, as well as rules regarding their relative placement. For example, if rules prohibit placing certain VMs on the same node, the module will automatically move them to a suitable server.
- Correct placement: The system checks whether the current node meets the mandatory requirements of the virtual machine's requests, as well as rules regarding their relative placement. For example, if rules prohibit placing certain VMs on the same node, the module will automatically move them to a suitable server (see the placement-rules sketch below).
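
For illustration, relative-placement rules of this kind are typically declared in the virtual machine specification as anti-affinity. The sketch below is hypothetical: the affinity section and its field names are an assumption modeled on Kubernetes pod anti-affinity, so check the VirtualMachine CRD reference for the exact schema.

```yaml
# Hypothetical sketch: keep VMs carrying the same `app: db` label on different nodes.
# The affinity structure is assumed to mirror Kubernetes pod anti-affinity.
apiVersion: virtualization.deckhouse.io/v1alpha2
kind: VirtualMachine
metadata:
  name: db-replica-1
  labels:
    app: db
spec:
  affinity:
    virtualMachineAndPodAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: db
          topologyKey: kubernetes.io/hostname
```

Under such a rule, the scheduler and the balancer avoid co-locating virtual machines that match the selector on the same node.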

### ColdStandby

8 changes: 6 additions & 2 deletions docs/ADMIN_GUIDE.ru.md
@@ -429,9 +429,13 @@ d8 k describe cvi ubuntu-22-04

During installation, a VirtualMachineClass resource named `generic` is created automatically. It represents a universal CPU type based on the older, but widely supported, Nehalem architecture. This allows virtual machines to run on any node in the cluster and supports their live migration.

The administrator can modify the parameters of the `generic` VirtualMachineClass resource (except for the `.spec.cpu` section) or delete this resource.

{{< alert level="info" >}}
It is not recommended to use the `generic` VirtualMachineClass for running workloads in production environments, since this class corresponds to a CPU model with the most limited feature set.

It is recommended to create at least one VirtualMachineClass resource with the `Discovery` type as soon as all nodes have been configured and added to the cluster. This ensures that the best available CPU configuration is selected based on all CPUs in your cluster, lets virtual machines make full use of the CPU capabilities, and enables seamless migration between cluster nodes.

For a configuration example, see [vCPU Discovery configuration example](#пример-конфигурации-vcpu-discovery).
{{< /alert >}}
112 changes: 27 additions & 85 deletions docs/INSTALL.md
@@ -16,89 +16,20 @@ The module supports the following configuration:
- Maximum number of nodes: `1000`.
- Maximum number of virtual machines: `50000`.

The module has no additional restrictions and is compatible with any hardware that is supported by [operating systems](#supported-os-for-platform-nodes) on which it can be installed.

## Hardware requirements

1. A dedicated **machine for installation**.

This machine will run the Deckhouse installer. For example, it can be an administrator's laptop or any other computer that is not intended to be added to the cluster. Requirements for this machine:

- OS: Windows 10+, macOS 10.15+, Linux (Ubuntu 18.04+, Fedora 35+).
- Installed Docker Engine or Docker Desktop (instructions for [Ubuntu](https://docs.docker.com/engine/install/ubuntu/), [macOS](https://docs.docker.com/desktop/mac/install/), [Windows](https://docs.docker.com/desktop/windows/install/)).
- HTTPS access to the container image registry at `registry.deckhouse.io`.
- SSH-key-based access to the node that will serve as the **master node** of the future cluster.
- SSH-key-based access to the node that will serve as the **worker node** of the future cluster (if the cluster will consist of more than one master node).

1. **Server for the master node**

There can be multiple servers running the cluster’s control plane components, but only one server is required for installation. The others can be added later via node management mechanisms.

Requirements for a physical bare-metal server:

- Resources:
- CPU:
- x86-64 architecture.
- Support for Intel-VT (VMX) or AMD-V (SVM) instructions.
- At least 4 cores.
- RAM: At least 8 GB.
- Disk space:
- At least 60 GB.
- High-speed disk (400+ IOPS).
- OS [from the list of supported ones](#supported-os-for-platform-nodes):
- Linux kernel version `5.7` or newer.
- **Unique hostname** across all servers in the future cluster.
- Network access:
- HTTPS access to the container image registry at `registry.deckhouse.io`.
- Access to the package repositories of the chosen OS.
- SSH key-based access from the **installation machine** (see item 1).
- Network access from the **installation machine** (see item 1) on port `22322/TCP`.
- Required software:
- The `cloud-utils` and `cloud-init` packages must be installed (package names may vary depending on the chosen OS).

> The container runtime will be installed automatically, so there's no need to install any `containerd` or `docker` packages.

1. **Servers for worker nodes**

These nodes will run virtual machines, so the servers must have enough resources to handle the planned number of VMs. Additional disks may be required if you deploy a software-defined storage solution.

Requirements for a physical bare-metal server:

- Resources:
- CPU:
- x86-64 architecture.
- Support for Intel-VT (VMX) or AMD-V (SVM) instructions.
- At least 4 cores.
- RAM: At least 8 GB.
- Disk space:
- At least 60 GB.
- High-speed disk (400+ IOPS).
- Additional disks for software-defined storage.
- OS [from the list of supported ones](#supported-os-for-platform-nodes):
- Linux kernel version `5.7` or newer;
- **Unique hostname** across all servers in the future cluster.
- Network access:
- HTTPS access to the container image registry at `registry.deckhouse.io`.
- Access to the package repositories of the chosen OS.
- SSH key-based access from the **installation machine** (see item 1).
- Required software:
- The `cloud-utils` and `cloud-init` packages must be installed (package names may vary depending on the chosen OS).

> The container runtime will be installed automatically, so there's no need to install any `containerd` or `docker` packages.

1. **Storage hardware**

Depending on the chosen storage solution, additional resources may be required. For details, refer to [Storage Management](/products/virtualization-platform/documentation/admin/platform-management/storage/sds/lvm-local.html).

## Supported OS for platform nodes

| Linux distribution | Supported versions |
| ------------------ | ------------------- |
| CentOS | 7, 8, 9 |
| Debian | 10, 11, 12 |
| Ubuntu | 20.04, 22.04, 24.04 |

{{< alert level="warning">}}
The module has no additional restrictions and is compatible with any hardware that is supported by operating systems on which it can be installed.

## Hardware and software requirements

Hardware requirements for the virtualization module match the requirements for [Deckhouse Kubernetes Platform](https://deckhouse.io/products/kubernetes-platform/gs/), with one additional requirement: hosts that will run virtual machines must support hardware CPU virtualization.

### Additional requirements for virtualization support

On all cluster nodes that will run virtual machines, hardware virtualization support must be ensured (a quick check is shown below):

- Processor: support for Intel-VT (VMX) or AMD-V (SVM) instructions.
- BIOS/UEFI: hardware virtualization support enabled in the BIOS/UEFI settings.
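
A minimal sketch of such a check on a node, using only standard Linux tools:

```bash
# Count the CPU flags that indicate hardware virtualization support
# (vmx for Intel VT-x, svm for AMD-V). A result of 0 means the flags are
# not exposed: either the CPU lacks support or it is disabled in BIOS/UEFI.
grep -cEw 'vmx|svm' /proc/cpuinfo

# Live migration also expects identical kernel versions on all nodes
# (see the warning below), so record each node's kernel version as well.
uname -r
```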

{{< alert level="warning" >}}
Stable operation of the live migration mechanism requires that all cluster nodes run the same Linux kernel version.

This is because differences in kernel versions can lead to incompatible interfaces, system calls, and resource handling, which can disrupt the virtual machine migration process.
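
In a running cluster, one way to confirm that kernel versions match is to print the kernel version reported in each node's status (a sketch; `d8 k` is the Deckhouse CLI wrapper around kubectl):

```bash
# Print the kernel version of every node; all values should be identical.
d8 k get nodes -o custom-columns='NAME:.metadata.name,KERNEL:.status.nodeInfo.kernelVersion'
```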
@@ -212,9 +143,13 @@ The distribution of components across cluster nodes depends on the cluster's configuration:
- master nodes, system nodes, and worker nodes;
- other combinations (depending on the architecture).

The table lists the management plane components and the node types for their placement. Components are distributed by priority only if the corresponding nodes are available in the cluster configuration.
{{< alert level="warning" >}}
Here, worker nodes are nodes without restrictions (taints) that would prevent running regular workloads (pods, virtual machines).
{{< /alert >}}

The table lists the main virtualization management plane components and the nodes where they can be placed. Components are placed by priority: if a suitable node type exists in the cluster, the component is scheduled on it. A command for checking the actual placement is shown after the tables.

| Name | Node group for running components | Comment |
| Component Name | Node group for running components | Comment |
| ----------------------------- | --------------------------------- | -------------------------------------------- |
| `cdi-operator-*` | system/worker | |
| `cdi-apiserver-*` | master | |
@@ -229,6 +164,13 @@ The table lists the management plane components and the node types for their placement.
| `virt-handler-*` | All cluster nodes | |
| `vm-route-forge-*` | All cluster nodes | |

Components used for creating and loading (importing) virtual machine images or disks (they run only for the duration of the creation or import operation):

| Component Name | Node group for running components | Comment |
| ------------------------------ | --------------------------------- | -------------------------------------------- |
| `importer-*` | system/worker | |
| `uploader-*` | system/worker | |
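
To check where these components actually run in your cluster, list the pods together with their nodes. The namespace name below is an assumption (Deckhouse modules typically run in a `d8-<module-name>` namespace); adjust it if your installation differs:

```bash
# List virtualization control-plane pods with the nodes they are scheduled on.
d8 k get pods -n d8-virtualization -o wide
```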

## Module update

The `virtualization` module uses five update channels designed for use in different environments that have different requirements in terms of reliability: