# WIP: Fix remark lint warnings in KKP docs #1954

`content/kubermatic/main/_index.en.md`

KKP is the easiest and most effective software for managing cloud native IT infrastructure.

## Features

### Powerful & Intuitive Dashboard to Visualize your Kubernetes Deployment

Manage your [projects and clusters with the KKP dashboard]({{< ref "./tutorials-howtos/project-and-cluster-management/" >}}). Scale your cluster by adding and removing nodes in just a few clicks. As an admin, the dashboard also allows you to [customize the theme]({{< ref "./tutorials-howtos/dashboard-customization/" >}}) and disable theming options for other users.

### Deploy, Scale & Update Multiple Kubernetes Clusters

Kubernetes environments must be highly distributed to meet the performance demands of modern cloud native applications. Organizations can ensure consistent operations across all environments with effective cluster management. KKP empowers you to take advantage of all the advanced features that Kubernetes has to offer and increases the speed, flexibility and scalability of your cloud deployment workflow.

At Kubermatic, we have chosen to do multi-cluster management with Kubernetes Operators. Operators (a method of packaging, deploying and managing a Kubernetes application) allow KKP to automate creation as well as the full lifecycle management of clusters. With KKP you can create a cluster for each need, fine-tune it, reuse it and continue this process hassle-free. This results in:
- Smaller individual clusters being more adaptable than one big cluster.
- Faster development thanks to less complex environments.

### Kubernetes Autoscaler Integration

Autoscaling in Kubernetes refers to the ability to increase or decrease the number of nodes as service demand changes. Without autoscaling, teams would manually provision and then scale resources up or down every time conditions change. This means that either services fail at peak demand due to insufficient resources, or you pay for peak capacity at all times to ensure availability.

[The Kubernetes Autoscaler in a cluster created by KKP]({{< ref "./tutorials-howtos/kkp-autoscaler/cluster-autoscaler/" >}}) can automatically scale up/down when one of the following conditions is satisfied (a configuration sketch follows the list):

1. Some pods fail to run in the cluster due to insufficient resources.
1. There are nodes in the cluster that have been underutilized for an extended period (10 minutes by default) and pods running on those nodes can be rescheduled to other existing nodes.
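
As a sketch of how a node group is marked for autoscaling: the autoscaler's cluster-api integration discovers size bounds from annotations on MachineDeployment objects in the user cluster. The annotation keys below follow the cluster-autoscaler convention, while the resource name is hypothetical; consult the linked autoscaler documentation for authoritative details.

```yaml
# Hypothetical MachineDeployment excerpt: the min/max annotations mark this
# node group as managed by the cluster autoscaler.
apiVersion: cluster.k8s.io/v1alpha1
kind: MachineDeployment
metadata:
  name: my-workers                # example name
  namespace: kube-system
  annotations:
    cluster.k8s.io/cluster-api-autoscaler-node-group-min-size: "1"
    cluster.k8s.io/cluster-api-autoscaler-node-group-max-size: "5"
```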

### Manage all KKP Users Directly from a Single Panel

The admin panel allows KKP administrators to manage the global settings that impact all KKP users directly. As an administrator, you can do the following:

- Customize the way custom links (e.g. Twitter, GitHub, Slack) are displayed in the Kubermatic dashboard.
- Define Preset types in a Kubernetes Custom Resource Definition (CRD), allowing the assignment of new credential types to supported providers (see the sketch after this list).
- Enable and configure etcd backups for your clusters through Backup Buckets.
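
For illustration, a Preset granting AWS credentials might look roughly like the sketch below; the exact credential fields depend on the provider and KKP version, so treat the names here as assumptions and the values as placeholders.

```yaml
# Hedged sketch of a Preset custom resource (field names assumed, values fake).
apiVersion: kubermatic.k8c.io/v1
kind: Preset
metadata:
  name: example-aws-preset
spec:
  aws:
    accessKeyID: AKIAEXAMPLE           # placeholder credential
    secretAccessKey: example-secret    # placeholder credential
```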

### Manage Worker Nodes via the UI or the CLI

Worker nodes can be managed [via the KKP web dashboard]({{< ref "./tutorials-howtos/manage-workers-node/via-ui/" >}}). Once you have installed kubectl, you can also manage them [via CLI]({{< ref "./tutorials-howtos/manage-workers-node/via-command-line" >}}) to automate the creation, deletion, and upgrade of nodes.
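
Since worker nodes are represented as MachineDeployment objects inside the user cluster, CLI automation amounts to applying manifests such as the sketch below; the provider-specific fields shown are illustrative assumptions.

```yaml
# Illustrative MachineDeployment applied with kubectl (provider fields assumed).
apiVersion: cluster.k8s.io/v1alpha1
kind: MachineDeployment
metadata:
  name: example-workers
  namespace: kube-system
spec:
  replicas: 3
  selector:
    matchLabels:
      workerset: example-workers
  template:
    metadata:
      labels:
        workerset: example-workers
    spec:
      providerSpec:
        value:
          cloudProvider: aws
          cloudProviderSpec:
            instanceType: t3.medium   # example instance type
          operatingSystem: ubuntu
      versions:
        kubelet: 1.31.1               # must stay compatible with the control plane
```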

### Monitoring, Logging & Alerting

When it comes to monitoring, no single approach fits all use cases. KKP allows you to adjust the setup to your needs through customizations that enable easy and tactical monitoring.
KKP provides two different levels of Monitoring, Logging, and Alerting.

1. The first targets only the management components (master, seed, CRDs) and is independent. This is the Master/Seed Cluster MLA Stack and only the KKP Admins can access this monitoring data.

1. The other component is the User Cluster MLA Stack which is a true multi-tenancy solution for all your end-users as well as a comprehensive overview for the KKP Admin. It helps to speed up individual progress but lets the Admin keep an overview of the big picture. It can be configured per seed to match the requirements of the organizational structure. All users can access monitoring data of the user clusters under the projects that they are members of.

Integrated Monitoring, Logging and Alerting functionality for applications and services in KKP user clusters is built using Prometheus, Loki, Cortex and Grafana. Furthermore, it can be enabled with a single click on the KKP UI.
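
As a rough sketch of what that single click toggles under the hood, the user cluster MLA options live on the Cluster object; the field names below are assumptions and may differ between KKP versions.

```yaml
# Hedged sketch: enabling user cluster MLA on a KKP Cluster object.
apiVersion: kubermatic.k8c.io/v1
kind: Cluster
metadata:
  name: example-cluster
spec:
  mla:
    monitoringEnabled: true   # ship metrics to the user cluster MLA stack
    loggingEnabled: true      # ship logs to the user cluster MLA stack
```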

### OIDC Provider Configuration

Since Kubernetes does not provide an OpenID Connect (OIDC) Identity Provider, KKP allows the user to configure a custom OIDC provider. This way you can grant access and information to the right stakeholders and fulfill security requirements by managing user access in a central identity provider across your whole infrastructure.
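
A hedged sketch of such a per-cluster OIDC configuration is shown below; the field names are assumptions, and the issuer values are placeholders for your identity provider.

```yaml
# Hedged sketch: pointing a cluster at a custom OIDC provider.
apiVersion: kubermatic.k8c.io/v1
kind: Cluster
metadata:
  name: example-cluster
spec:
  oidc:
    issuerURL: https://login.example.com   # hypothetical identity provider
    clientID: kubermatic
    clientSecret: example-secret           # placeholder
```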

### Easily Upgrading Control Plane and Nodes

A specific version of the Kubernetes control plane typically supports a specific range of kubelet versions connected to it. KKP enforces the rule “kubelet must not be newer than kube-apiserver, and may be up to two minor versions older” on its own. KKP ensures this rule is followed by checking it during each upgrade of a cluster’s control plane or a node’s kubelet. Additionally, only compatible versions are listed in the UI as available for upgrades.
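
The set of versions offered for clusters, and therefore for upgrades, is driven by the KKP configuration; the sketch below assumes a `spec.versions` layout and uses example version numbers.

```yaml
# Hedged sketch: Kubernetes versions offered by KKP (layout assumed).
apiVersion: kubermatic.k8c.io/v1
kind: KubermaticConfiguration
metadata:
  name: kubermatic
  namespace: kubermatic
spec:
  versions:
    default: "1.31.1"
    versions:
      - "1.30.5"   # still offered, so 1.30 -> 1.31 upgrades remain possible
      - "1.31.1"
```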

### Open Policy Agent (OPA)

To enforce policies and improve governance in Kubernetes, the Open Policy Agent (OPA) can be used. KKP integrates it using OPA Gatekeeper, a Kubernetes-native policy engine supporting OPA policies. As an admin, you can enable and enforce the OPA integration by default during cluster creation via the UI.
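
As a sketch, enabling the integration on an individual cluster might look like this; the `opaIntegration` field name is an assumption for illustration.

```yaml
# Hedged sketch: turning on the OPA integration for one cluster.
apiVersion: kubermatic.k8c.io/v1
kind: Cluster
metadata:
  name: example-cluster
spec:
  opaIntegration:
    enabled: true   # deploys OPA Gatekeeper into the user cluster
```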

### Cluster Templates

Clusters can be created in a few clicks with the UI. To take the user experience one step further and make repetitive tasks redundant, cluster templates allow you to save the data entered into the wizard and create multiple clusters from a single template at once. Saved templates can be reused for subsequent cluster creation.

### Use Default Addons to Extend the Functionality of Kubernetes

[Addons]({{< ref "./architecture/concept/kkp-concepts/addons/" >}}) are specific services and tools extending the functionality of Kubernetes. Default addons are installed in each user cluster in KKP. The KKP Operator comes with a tool to output full default KKP configuration, serving as a starting point for adjustments. Accessible addons can be installed in each user cluster in KKP on user demand.

---

KKP supports a multitude of operating systems.

The following operating systems are currently supported by Kubermatic:

- Ubuntu 20.04, 22.04 and 24.04
- RHEL beginning with 8.0 (support is cloud provider-specific)
- Flatcar (Stable channel)
- Rocky Linux beginning with 8.0
- Amazon Linux 2

**Note:** CentOS was removed as a supported OS in KKP 2.26.3.

This table shows the combinations of operating systems and cloud providers that KKP supports:

---

We recommend upgrading to a supported Kubernetes release as soon as possible. Refer to the [Kubernetes website](https://kubernetes.io/releases/) for more information on the supported releases.

Upgrades from a previous Kubernetes version are generally supported whenever a version is marked as supported; for example, KKP 2.27 supports updating clusters from Kubernetes 1.30 to 1.31.

## Provider Incompatibilities

---

In general, we recommend the usage of Applications for workloads running inside user clusters.

Default addons are installed in each user-cluster in KKP. The default addons are:

- [Canal](https://github.com/projectcalico/canal): policy based networking for cloud native applications
- [Dashboard](https://github.com/kubernetes/dashboard): General-purpose web UI for Kubernetes clusters
- [kube-proxy](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/): Kubernetes network proxy
- [rbac](https://kubernetes.io/docs/reference/access-authn-authz/rbac/): Kubernetes Role-Based Access Control, needed for
[TLS node bootstrapping](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/)
- [OpenVPN client](https://openvpn.net/index.php/open-source/overview.html): virtual private network (VPN). Lets the control
plane access the Pod & Service network. Required for functionality like `kubectl proxy` & `kubectl port-forward`.
- pod-security-policy: Policies to configure KKP access when PSPs are enabled
- default-storage-class: A cloud provider specific StorageClass
- kubeadm-configmap & kubelet-configmap: A set of ConfigMaps used by kubeadm

Installation and configuration of these addons is done by two controllers that are part of the KKP
seed-controller-manager:

- `addon-installer-controller`: Ensures a given set of addons will be installed in all clusters
- `addon-controller`: Templates the addons & applies the manifests in the user clusters

The KKP binaries come with a `kubermatic-installer` tool, which can output a full default
`KubermaticConfiguration` (`kubermatic-installer print`). This will also include the default configuration for addons and can serve as a starting point for adjustments.
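
For orientation, the addon-related excerpt of that printed output might look roughly like the sketch below; the field paths mirror the ones referenced elsewhere in these docs, but exact names vary between KKP versions.

```yaml
# Hedged sketch of the addon section in a printed KubermaticConfiguration.
apiVersion: kubermatic.k8c.io/v1
kind: KubermaticConfiguration
metadata:
  name: kubermatic
  namespace: kubermatic
spec:
  userClusters:
    addons:
      kubernetes:
        default:          # assumed list of default addons
          - canal
          - dashboard
          - kube-proxy
          - rbac
```
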
In contrast to regular addons, which are always installed and cannot be removed by the user, an addon that is both default and accessible will be installed in the user cluster, but also be visible to the user, who can manage it from the KKP dashboard like the other accessible addons. The accessible addons are:

- [node-exporter](https://github.com/prometheus/node_exporter): Exports metrics from the node

Accessible addons can be managed in the UI from the cluster details view:

```yaml
spec:
  # ... (the full AddonConfig example is truncated here)
```

Here is a short explanation of the individual `formSpec` fields:

- `displayName` is the name that is displayed in the UI as the control label.
- `internalName` is the name used internally. It can be referenced with template variables (see the description below).
- `required` indicates if the control should be required in the UI.
Addon manifests are Go templates; see the Go [text/template documentation](https://pkg.go.dev/text/template) for the exact templating syntax.
KKP injects an instance of the `TemplateData` struct into each template. The following
Go snippet shows the available information:

```plaintext
{{< readfile "kubermatic/main/data/addondata.go" >}}
```
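
For instance, an addon manifest could consume the injected data roughly as follows; this sketch assumes `TemplateData` exposes the cluster name and that a `formSpec` control with `internalName: logLevel` was defined.

```yaml
# Hedged sketch of an addon manifest template using injected TemplateData.
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-addon-settings
  namespace: kube-system
data:
  clusterName: '{{ .Cluster.Name }}'      # assumed TemplateData field
  logLevel: '{{ .Variables.logLevel }}'   # assumed formSpec variable lookup
```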

---

AWS node termination handler is deployed with any AWS user cluster created by KKP, so that workloads can be rescheduled within the cluster once a spot instance is interrupted.

## AWS Spot Instances Creation

To create a user cluster that runs spot instance machines, specify whether the machine type is a spot instance at step four (Initial Nodes) by checking the checkbox labeled "Spot Instance".
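
When driving the same setup through manifests instead of the UI, the spot flag would sit in the machine's cloud provider spec; the `isSpotInstance` field name below is an assumption based on the machine-controller AWS provider.

```yaml
# Hedged sketch: marking an AWS worker set as spot instances.
apiVersion: cluster.k8s.io/v1alpha1
kind: MachineDeployment
metadata:
  name: example-spot-workers
  namespace: kube-system
spec:
  template:
    spec:
      providerSpec:
        value:
          cloudProvider: aws
          cloudProviderSpec:
            instanceType: t3.medium   # example type
            isSpotInstance: true      # assumed flag enabling spot provisioning
```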

---

Before this addon can be deployed in a KKP user cluster, the KKP installation has to be configured to offer it as an [accessible addon](../#accessible-addons). This needs to be done by the KKP installation administrator, once per KKP installation.

- Request the KKP addon Docker image with Kubeflow Addon matching your KKP version from Kubermatic
(or [build it yourself](../#creating-a-docker-image) from the [Flowmatic repository](https://github.com/kubermatic/flowmatic)).
- Configure KKP - edit `KubermaticConfiguration` as follows (see the sketch after this list):
  - modify `spec.userClusters.addons.kubernetes.dockerRepository` to point to the provided addon Docker image repository,
  - add `kubeflow` into `spec.api.accessibleAddons`.
- Apply the [AddonConfig from the Flowmatic repository](https://raw.githubusercontent.com/kubermatic/flowmatic/master/addon/addonconfig.yaml) in your KKP installation.
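
A hedged sketch of the resulting `KubermaticConfiguration` (the repository value is hypothetical):

```yaml
# Hedged sketch combining the two edits described above.
apiVersion: kubermatic.k8c.io/v1
kind: KubermaticConfiguration
metadata:
  name: kubermatic
  namespace: kubermatic
spec:
  api:
    accessibleAddons:
      - kubeflow
  userClusters:
    addons:
      kubernetes:
        dockerRepository: quay.io/example/kkp-addons   # provided addon image repo
```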

### Kubeflow prerequisites

For a LoadBalancer service, an external IP address will be assigned by the cloud provider.
This address can be retrieved by reviewing the `istio-ingressgateway` Service in `istio-system` Namespace, e.g.:

```bash
kubectl get service istio-ingressgateway -n istio-system

NAME TYPE CLUSTER-IP EXTERNAL-IP
istio-ingressgateway LoadBalancer 10.240.28.214 a286f5a47e9564e43ab4165039e58e5e-1598660756.eu-central-1.elb.amazonaws.com
```
This section contains a list of known issues in different Kubeflow components:

**Kubermatic Kubernetes Platform**

- Not all GPU instances of various providers can be started from the KKP UI:
<https://github.com/kubermatic/kubermatic/issues/6433>

**Istio RBAC in Kubeflow:**

- If enabled, this issue can be hit in the pipelines:
<https://github.com/kubeflow/pipelines/issues/4976>

**Kubeflow UI issues:**

- Error by adding notebook server: 500 Internal Server Error:
<https://github.com/kubeflow/kubeflow/issues/5518>
- Experiment run status shows as unknown:
<https://github.com/kubeflow/pipelines/issues/4972>

**Kale Pipeline:**

* "Namespace is empty" exception:
- "Namespace is empty" exception:
<https://github.com/kubeflow-kale/kale/issues/210>

**NVIDIA GPU Operator**

- Please see the official NVIDIA GPU documentation for known limitations:
<https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/release-notes.html#operator-known-limitations>

**AMD GPU Support**

- The latest AMD GPU-enabled instances in AWS ([EC2 G4ad](https://aws.amazon.com/blogs/aws/new-amazon-ec2-g4ad-instances-featuring-amd-gpus-for-graphics-workloads/))
featuring Radeon Pro V520 GPUs do not seem to be working with Kubeflow (yet). The GPUs are successfully attached
to the pods but the notebook runtime does not seem to recognize them.

---

Currently, Helm is exclusively supported as a templating method, but integration of further templating methods is planned.
Helm Applications can both be installed from helm registries directly or from a git repository.

## Concepts

KKP manages Applications using two key mechanisms: [ApplicationDefinitions]({{< ref "./application-definition" >}}) and [ApplicationInstallations]({{< ref "./application-installation" >}}).

`ApplicationDefinitions` are managed by KKP Admins and contain all the necessary information for an application's installation.
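
As an illustration, a minimal `ApplicationDefinition` for a Helm chart might look roughly like this sketch; the chart details and registry URL are placeholders.

```yaml
# Hedged sketch of an ApplicationDefinition pointing at a Helm chart.
apiVersion: apps.kubermatic.k8c.io/v1
kind: ApplicationDefinition
metadata:
  name: example-nginx
spec:
  method: helm
  versions:
    - version: "1.0.0"
      template:
        source:
          helm:
            chartName: nginx
            chartVersion: "15.1.0"                  # placeholder chart version
            url: https://charts.example.com/stable  # hypothetical Helm registry
```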