diff --git a/README.md b/README.md
index edfb1949..165859dc 100644
--- a/README.md
+++ b/README.md
@@ -6,7 +6,7 @@ Copyright (c) 2019-2020 Intel Corporation
# OpenNESS Quick Start
## Network Edge
- ### Step 1. Get Hardware ► Step 2. [Getting started](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/network-edge/controller-edge-node-setup.md) ► Step 3. [Applications Onboarding](https://github.com/open-ness/ido-specs/blob/master/doc/applications-onboard/network-edge-applications-onboarding.md)
+ ### Step 1. Get Hardware ► Step 2. [Getting started](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/openness-cluster-setup.md) ► Step 3. [Applications Onboarding](https://github.com/open-ness/ido-specs/blob/master/doc/applications-onboard/network-edge-applications-onboarding.md)
# OpenNESS solution documentation index
@@ -20,10 +20,12 @@ Below is the complete list of OpenNESS solution documentation
## Getting Started - Setup
* [getting-started: Folder containing how to get started with installing and trying OpenNESS Network Edge solutions](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started)
- * [openness-experience-kits.md: Overview of the OpenNESS Experience kits that are used to install the Network Edge solutions](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/openness-experience-kits.md)
- * [network-edge: Folder containing how to get started with installing and trying OpenNESS Network Edge](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/network-edge)
- * [controller-edge-node-setup.md: Started here for installing and trying OpenNESS Network Edge](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/network-edge/controller-edge-node-setup.md)
- * [supported-epa.md: List of Silicon and Software EPA that are features that are supported in OpenNESS Network Edge](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/network-edge/supported-epa.md)
+  * [openness-cluster-setup.md: Start here for installing and trying OpenNESS Network Edge](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/openness-cluster-setup.md)
+ * [converged-edge-experience-kits.md: Overview of the Converged Edge Experience Kits that are used to install the Network Edge solutions](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/converged-edge-experience-kits.md)
+ * [non-root-user.md: Using the non-root user on the OpenNESS Platform](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/non-root-user.md)
+ * [offline-edge-deployment.md: Setting up OpenNESS in an air-gapped, offline environment](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/offline-edge-deployment.md)
+ * [harbor-registry.md: Enabling Harbor Registry service in OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/harbor-registry.md)
+ * [kubernetes-dashboard.md: Installing Kubernetes Dashboard for OpenNESS Network Edge cluster](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/kubernetes-dashboard.md)
## Application onboarding - Deployment
@@ -57,11 +59,11 @@ Below is the complete list of OpenNESS solution documentation
* [openness-sriov-multiple-interfaces.md: Dedicated Physical Network interface allocation support for Edge Applications and Network Functions](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-sriov-multiple-interfaces.md)
* [openness-dedicated-core.md: Dedicated CPU core allocation support for Edge Applications and Network Functions](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-dedicated-core.md)
* [openness-bios.md: Edge platform BIOS and Firmware and configuration support in OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-bios.md)
+ * [openness-qat.md: Resource allocation & configuration of Intel® QuickAssist Adapter](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-qat.md)
* [openness-fpga.md: Dedicated FPGA IP resource allocation support for Edge Applications and Network Functions](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-fpga.md)
* [openness_hddl.md: Using Intel® Movidius™ Myriad™ X High Density Deep Learning (HDDL) solution in OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness_hddl.md)
* [openness-topology-manager.md: Resource Locality awareness support through Topology manager in OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-topology-manager.md)
* [openness-vca.md: Visual Compute Accelerator Card - Analytics (VCAC-A)](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-vcac-a.md)
- * [openness-kubernetes-dashboard.md: Kubernetes Dashboard in OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-kubernetes-dashboard.md)
* [openness-rmd.md: Cache Allocation using Resource Management Daemon(RMD) in OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-rmd.md)
* [openness-telemetry: Telemetry Support in OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-telemetry.md)
@@ -78,6 +80,8 @@ Below is the complete list of OpenNESS solution documentation
* [openness_appguide.md: How to develop or Port existing cloud application to the Edge cloud based on OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/applications/openness_appguide.md)
* [openness_ovc.md: Open Visual Cloud Smart City reference Application for OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/applications/openness_ovc.md)
* [openness_openvino.md: AI inference reference Edge application for OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/applications/openness_openvino.md)
+ * [openness_va_services.md: Video Analytics Services for OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/applications/openness_va_services.md)
+ * [openness_service_mesh.md: Service Mesh support in OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/applications/openness_service_mesh.md)
## Cloud Adapters
@@ -155,5 +159,5 @@ Below is the complete list of OpenNESS solution documentation
- DU: Distributed Unit of RAN
- CU: Centralized Unit of RAN
- SBI: Service Based Interfaces
-- OEK: OpenNESS Experience Kit
+- CEEK: Converged Edge Experience Kits
- IDO: Intel Distribution of OpenNESS
diff --git a/_data/navbars/building-blocks.yml b/_data/navbars/building-blocks.yml
index 2bb56cfd..1f3cd9a4 100644
--- a/_data/navbars/building-blocks.yml
+++ b/_data/navbars/building-blocks.yml
@@ -71,6 +71,11 @@ section:
meta_title: Visual Compute Accelerator Card - Analytics (VCAC-A)
meta_description: The Visual Cloud Accelerator Card - Analytics (VCAC-A) equips Intel® Xeon® Scalable Processor based platforms and Intel Movidius™ VPU to enhance the video codec, computer vision, and inference capabilities.
+ - title: Intel® QuickAssist Adapter
+ path: /doc/building-blocks/enhanced-platform-awareness/openness-qat
+      meta_title: Using Intel® QuickAssist Adapter in OpenNESS - Resource Allocation and Configuration
+ meta_description: Intel® QuickAssist Adapter plays a key role in accelerating cryptographic operations in 5G networking.
+
- title: Topology Manager Support
path: /doc/building-blocks/enhanced-platform-awareness/openness-topology-manager
meta_title: Topology Manager Support in OpenNESS, Resource Locality Awareness
@@ -86,11 +91,6 @@ section:
meta_title: Telemetry support in OpenNESS
meta_description: OpenNESS supports platform and application telemetry allowing users to retrieve information about the platform, the underlying hardware, cluster and applications deployed.
- - title: Kubernetes Dashboard in OpenNESS
- path: /doc/building-blocks/enhanced-platform-awareness/openness-kubernetes-dashboard
- meta_title: Kubernetes Dashboard in OpenNESS
- meta_description: OpenNESS supports Kubernetes Dashboard that can be used to inspect and manage Kubernetes cluster.
-
- title: Multi-Cluster Orchestration
path:
section:
diff --git a/_data/navbars/getting-started.yml b/_data/navbars/getting-started.yml
index 7307c878..4024ec4e 100644
--- a/_data/navbars/getting-started.yml
+++ b/_data/navbars/getting-started.yml
@@ -5,20 +5,32 @@ title: "Getting Started"
path: /getting-started/
order: 1
section:
- - title: OpenNESS Experience Kits
- path: /doc/getting-started/openness-experience-kits
+ - title: OpenNESS Cluster Setup
+ path: /doc/getting-started/openness-cluster-setup
+ meta_title: Controller and Edge Node Setup
+      meta_description: OpenNESS Network Edge Controller and Edge Nodes must be set up on different machines, and the machines provided in the inventory may reboot during the installation.
+
+ - title: Converged Edge Experience Kits
+ path: /doc/getting-started/converged-edge-experience-kits
meta_title: OpenNESS Experience Kits Easy Setup of OpenNESS in Network Edge
meta_description: OpenNESS Experience Kits repository contains easy setup of OpenNESS in Network Edge mode.
- - title: Network Edge
- path:
- section:
- - title: Controller & Edge Node Setup
- path: /doc/getting-started/network-edge/controller-edge-node-setup
- meta_title: Controller and Edge Node Setup
- meta_description: OpenNESS Network Edge Controller and Edge nodes must be set up on different machines and provided in the inventory may reboot during the installation.
+ - title: OpenNESS Offline Deployment
+ path: /doc/getting-started/offline-edge-deployment
+ meta_title: OpenNESS Offline Deployment
+      meta_description: The OpenNESS project supports deployment of the solution in an air-gapped, offline environment.
+
+ - title: Non-root User in OpenNESS
+ path: /doc/getting-started/non-root-user
+ meta_title: The non-root user on the OpenNESS Platform
+      meta_description: OpenNESS provides the possibility to install all required files on the Kubernetes control plane and nodes with or without root privileges.
+
+ - title: Harbor Registry Service
+ path: /doc/getting-started/harbor-registry
+ meta_title: Harbor Registry Service in OpenNESS
+ meta_description: Enabling Harbor registry service in OpenNESS
- - title: Enhanced Platform Awareness Features Supported
- path: /doc/getting-started/network-edge/supported-epa
- meta_title: OpenNESS Network Edge - Enhanced Platform Awareness Features Supported
- meta_description: Enhanced Platform Awareness features supported for network edge is to expose capability to edge cloud orchestrator for better performance, consistency, and reliability.
+ - title: Kubernetes Dashboard in OpenNESS
+ path: /doc/getting-started/kubernetes-dashboard
+ meta_title: Kubernetes Dashboard in OpenNESS
+ meta_description: OpenNESS supports Kubernetes Dashboard that can be used to inspect and manage Kubernetes cluster.
diff --git a/_data/navbars/reference-architectures.yml b/_data/navbars/reference-architectures.yml
index b98555e3..4e177550 100644
--- a/_data/navbars/reference-architectures.yml
+++ b/_data/navbars/reference-architectures.yml
@@ -52,6 +52,6 @@ section:
meta_description: Reference architecture combines wireless and high performance compute for IoT, AI, video and other services.
- title: Converged Edge Reference Architecture for SD-WAN
- path: /doc/reference-architectures/openness_sdwan
+ path: /doc/reference-architectures/cera_sdwan
meta_title: Converged Edge Reference Architecture for SD-WAN
meta_description: OpenNESS provides a reference solution for SD-WAN consisting of building blocks for cloud-native deployments.
diff --git a/_includes/header.html b/_includes/header.html
index b1705e3c..92db4424 100644
--- a/_includes/header.html
+++ b/_includes/header.html
@@ -36,7 +36,7 @@
Getting Started
Documentation
diff --git a/doc/applications-onboard/network-edge-applications-onboarding.md b/doc/applications-onboard/network-edge-applications-onboarding.md
index b3bbd3fd..cfa1aebd 100644
--- a/doc/applications-onboard/network-edge-applications-onboarding.md
+++ b/doc/applications-onboard/network-edge-applications-onboarding.md
@@ -31,13 +31,11 @@ Copyright (c) 2019-2020 Intel Corporation
- [Troubleshooting](#troubleshooting)
- [Useful Commands:](#useful-commands)
-
-
# Introduction
This document aims to familiarize users with the Open Network Edge Services Software (OpenNESS) application on-boarding process for the Network Edge. This document provides instructions on how to deploy an application from the Edge Controller to Edge Nodes in the cluster; it also provides sample deployment scenarios and traffic configuration for the application. The applications will be deployed from the Edge Controller via the Kubernetes `kubectl` command-line utility. Sample specification files for application onboarding are also provided.
# Installing OpenNESS
-The following application onboarding steps assume that OpenNESS was installed through [OpenNESS playbooks](https://github.com/open-ness/specs/blob/master/doc/getting-started/network-edge/controller-edge-node-setup.md).
+The following application onboarding steps assume that OpenNESS was installed through [OpenNESS playbooks](../getting-started/openness-cluster-setup.md).
# Building applications
Users must provide the application to be deployed on the OpenNESS platform for Network Edge. The application must be provided in a Docker\* image format that is available either from an external Docker repository (Docker Hub) or a locally built Docker image. The image must be available on the Edge Node, which the application will be deployed on.
@@ -51,7 +49,7 @@ This document explains the build and deployment of two applications:
2. OpenVINO™ application: A close to real-world inference application
## Building sample application images
-The sample application is available in [the edgeapps repository](https://github.com/open-ness/edgeapps/tree/master/sample-app); further information about the application is contained within the `Readme.md` file.
+The sample application is available in [the edgeapps repository](https://github.com/open-ness/edgeapps/tree/master/applications/sample-app); further information about the application is contained within the `Readme.md` file.
The following steps are required to build the sample application Docker images for testing the OpenNESS Edge Application Agent (EAA) with consumer and producer applications:
@@ -66,7 +64,7 @@ The following steps are required to build the sample application Docker images f
docker images | grep consumer
```
## Building the OpenVINO application images
-The OpenVINO application is available in [the EdgeApps repository](https://github.com/open-ness/edgeapps/tree/master/openvino); further information about the application is contained within `Readme.md` file.
+The OpenVINO application is available in [the EdgeApps repository](https://github.com/open-ness/edgeapps/tree/master/applications/openvino); further information about the application is contained within `Readme.md` file.
The following steps are required to build the sample application Docker images for testing OpenVINO consumer and producer applications:
@@ -116,7 +114,7 @@ To verify that the images for sample application consumer and producer are [buil
## Applying Kubernetes network policies
Kubernetes NetworkPolicy is a mechanism that enables control over how pods are allowed to communicate with each other and other network endpoints. By default, in the Network Edge environment, all *ingress* traffic is blocked (services running inside of deployed applications are not reachable) and all *egress* traffic is enabled (pods can reach the internet).
-1. To apply a network policy for the sample application allowing ingress traffic, create a `sample_policy.yml` file that specifies the network policy:
+1. To apply a network policy for the sample application allowing ingress traffic, create a `sample_policy.yml` file that specifies the network policy (in the example network policy, the `cidr` field contains the Calico CNI CIDR; for other CNIs, use the CNI-specific CIDR, e.g., `10.16.0.0/16` for the Kube-OVN CNI):
```yml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
@@ -130,7 +128,7 @@ Kubernetes NetworkPolicy is a mechanism that enables control over how pods are a
ingress:
- from:
- ipBlock:
- cidr: 10.16.0.0/16
+ cidr: 10.245.0.0/16
ports:
- protocol: TCP
port: 80
@@ -251,9 +249,9 @@ Kubernetes NetworkPolicy is a mechanism that enables control over how pods are a
- name: producer
image: producer:1.0
imagePullPolicy: Never
- volumeMounts:
- - name: certs
- mountPath: /home/sample/certs/
+ volumeMounts:
+ - name: certs
+ mountPath: /home/sample/certs/
ports:
- containerPort: 443
volumes:
@@ -398,9 +396,9 @@ Kubernetes NetworkPolicy is a mechanism that enables control over how pods are a
- name: consumer
image: consumer:1.0
imagePullPolicy: Never
- volumeMounts:
- - name: certs
- mountPath: /home/sample/certs/
+ volumeMounts:
+ - name: certs
+ mountPath: /home/sample/certs/
ports:
- containerPort: 443
volumes:
@@ -444,7 +442,7 @@ This section guides users through the complete process of onboarding the OpenVIN
## Prerequisites
-* OpenNESS for Network Edge is fully installed and set up.
+* OpenNESS for Network Edge is fully installed and set up (with Kube-OVN as the CNI to support the interface service, an OpenNESS-developed kubectl plugin).
* The Docker images for OpenVINO are available on the Edge Node.
* A separate host used for generating traffic via Client Simulator is set up.
* The Edge Node host and traffic generating host are connected point to point via unused physical network interfaces.
@@ -506,12 +504,8 @@ This section guides users through the complete process of onboarding the OpenVIN
3. Verify that no errors show up in the logs of the OpenVINO consumer application:
```
kubectl logs openvino-cons-app
- ```
-4. Log into the consumer application pod and modify `analytics.openness` entry in `/etc/hosts` with the IP address set in step one of [Setting up Networking Interfaces](#Setting-up-Networking-Interfaces) (192.168.1.10 by default, the physical interface connected to traffic generating host).
- ```
- kubectl exec -it openvino-cons-app /bin/sh
- apt-get install vim
- vim /etc/hosts
+ kubectl get po -o custom-columns=NAME:.metadata.name,IP:.status.podIP | grep cons-app | awk '{print $2}'
+
```
## Applying Kubernetes network policies
@@ -542,7 +536,7 @@ By default, in a Network Edge environment, all *ingress* traffic is blocked (ser
spec:
podSelector:
matchLabels:
- name: openvino-cons-app
+ app: openvino-cons-app
policyTypes:
- Ingress
ingress:
@@ -594,7 +588,7 @@ The following is an example of how to set up DNS resolution for OpenVINO consume
Add to the file:
nameserver
```
-2. Verify that `openvino.openness` is correctly resolved (“ANSWER” section should contain IP of Edge DNS).
+2. Verify that `openvino.openness` is correctly resolved (“ANSWER” section should contain IP of Consumer pod).
```
dig openvino.openness
```
@@ -728,15 +722,16 @@ kubectl interfaceservice get
## Inter application communication
The IAC is available via the default overlay network used by Kubernetes - Kube-OVN.
-For more information on Kube-OVN, refer to the Kube-OVN support in OpenNESS [documentation](https://github.com/open-ness/ido-specs/blob/master/doc/dataplane/openness-interapp.md#interapp-communication-support-in-openness-network-edge)
+
+For more information on Kube-OVN, refer to the Kube-OVN support in OpenNESS [documentation](../building-blocks/dataplane/openness-interapp.md#interapp-communication-support-in-openness-network-edge)
# Enhanced Platform Awareness
-Enhanced platform awareness (EPA) is supported in OpenNESS via the use of the Kubernetes NFD plugin. This plugin is enabled in OpenNESS for Network Edge by default. Refer to the [NFD whitepaper](https://github.com/open-ness/ido-specs/blob/master/doc/enhanced-platform-awareness/openness-node-feature-discovery.md) for information on how to make your application pods aware of the supported platform capabilities.
+Enhanced platform awareness (EPA) is supported in OpenNESS via the use of the Kubernetes NFD plugin. This plugin is enabled in OpenNESS for Network Edge by default. Refer to the [NFD whitepaper](../building-blocks/enhanced-platform-awareness/openness-node-feature-discovery.md) for information on how to make your application pods aware of the supported platform capabilities.
-Refer to [supported-epa.md](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/network-edge/supported-epa.md) for the list of supported EPA features on OpenNESS network edge.
+Refer to the Building Blocks / Enhanced Platform Awareness section for the list of supported EPA features on the OpenNESS network edge.
# VM support for Network Edge
-Support for VM deployment on OpenNESS for Network Edge is available and enabled by default, where certain configuration and prerequisites may need to be fulfilled to use all capabilities. For information on application deployment in VM, see [openness-network-edge-vm-support.md](https://github.com/open-ness/ido-specs/blob/master/doc/applications-onboard/openness-network-edge-vm-support.md).
+Support for VM deployment on OpenNESS for Network Edge is available and enabled by default, although certain configuration and prerequisites may need to be fulfilled to use all capabilities. For information on application deployment in a VM, see the [VM support in OpenNESS for Network Edge](../applications-onboard/openness-network-edge-vm-support.md) section.
# Troubleshooting
This section covers steps for debugging edge applications in Network Edge.
diff --git a/doc/applications-onboard/openness-interface-service.md b/doc/applications-onboard/openness-interface-service.md
index df049f2f..aa715621 100644
--- a/doc/applications-onboard/openness-interface-service.md
+++ b/doc/applications-onboard/openness-interface-service.md
@@ -21,7 +21,7 @@ Copyright (c) 2019-2020 Intel Corporation
Interface service is an application running in the Kubernetes\* pod on each node of the OpenNESS Kubernetes cluster. It allows users to attach additional network interfaces of the node to the provided OVS bridge, enabling external traffic scenarios for applications deployed in the Kubernetes\* pods. Services on each node can be controlled from the control plane using kubectl plugin.
-Interface service can attach both kernel and user space (DPDK) network interfaces to the appropriate OVS bridges.
+Interface service can attach both kernel and user space (DPDK) network interfaces to the appropriate OVS bridges. To perform this operation, Kube-OVN must be set as the main CNI.
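As a minimal sketch (assuming the inventory layout used elsewhere in this guide; adjust to your deployment flavor), selecting Kube-OVN as the primary CNI in the CEEK group vars could look like:

```yaml
# inventory/default/group_vars/all/10-open.yml (sketch)
kubernetes_cnis:
  - kubeovn
```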
## Traffic from the external host
@@ -78,7 +78,7 @@ Currently, interface service supports the following values of the `driver` param
## Userspace (DPDK) bridge
-The default DPDK-enabled bridge `br-userspace` is only available if OpenNESS is deployed with support for [Userspace CNI](https://github.com/open-ness/ido-specs/blob/master/doc/dataplane/openness-userspace-cni.md) and at least one pod was deployed using the Userspace CNI. You can check if the `br-userspace` bridge exists by running the following command on your node:
+The default DPDK-enabled bridge `br-userspace` is only available if OpenNESS is deployed with support for [Userspace CNI](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/dataplane/openness-userspace-cni.md) and at least one pod was deployed using the Userspace CNI. You can check if the `br-userspace` bridge exists by running the following command on your node:
```shell
ovs-vsctl list-br
@@ -104,19 +104,19 @@ ovs-vsctl add-br br-userspace -- set bridge br-userspace datapath_type=netdev
DPDK apps require a specific amount of HugePages\* enabled. By default, the Ansible scripts will enable 1024 of 2M HugePages in a system, and then start OVS-DPDK with 1GB of those HugePages reserved for NUMA node 0. To change this setting to reflect specific requirements, set the Ansible variables as defined in the following example. This example enables four of 1GB HugePages and appends 2GB to OVS-DPDK, leaving two pages for DPDK applications that run in the pods. This example uses the Edge Node with 2 NUMA nodes, each one with 1GB of HugePages reserved.
```yaml
-# group_vars/controller_group/10-default.yml
+# inventory/default/group_vars/controller_group/10-open.yml
hugepage_size: "1G"
hugepage_amount: "4"
```
```yaml
-# group_vars/edgenode_group/10-default.yml
+# inventory/default/group_vars/edgenode_group/10-open.yml
hugepage_size: "1G"
hugepage_amount: "4"
```
```yaml
-# group_vars/all/10-default.yml
+# inventory/default/group_vars/all/10-open.yml
kubeovn_dpdk_socket_mem: "1024,1024" # Will reserve 1024MB of hugepages for NUNA node 0 and NUMA node 1, respectively.
kubeovn_dpdk_hugepage_size: "1Gi" # This is the size of single hugepage to be used by DPDK. Can be 1Gi or 2Mi.
kubeovn_dpdk_hugepages: "2Gi" # This is overall amount of hugepags available to DPDK.
diff --git a/doc/applications-onboard/openness-network-edge-vm-support.md b/doc/applications-onboard/openness-network-edge-vm-support.md
index 852b8b55..d58dcc21 100644
--- a/doc/applications-onboard/openness-network-edge-vm-support.md
+++ b/doc/applications-onboard/openness-network-edge-vm-support.md
@@ -77,38 +77,39 @@ docker build -t centosimage:1.0 .
```
## Enabling in OpenNESS
-The KubeVirt role responsible for bringing up KubeVirt components is enabled by default in the OpenNESS experience kit via Ansible\* automation. In this default state, it does not support SRIOV in a VM and additional steps are required to enable it. The following is a complete list of steps to bring up all components related to VM support in Network Edge. VM support also requires Virtualization and VT-d to be enabled in the BIOS of the Edge Node.
+The KubeVirt role responsible for bringing up KubeVirt components is enabled by default in the Converged Edge Experience Kits via Ansible\* automation. In this default state, it does not support SRIOV in a VM and additional steps are required to enable it. The following is a complete list of steps to bring up all components related to VM support in Network Edge. VM support also requires Virtualization and VT-d to be enabled in the BIOS of the Edge Node.
1. Configure Ansible for KubeVirt:
KubeVirt is deployed by default. To provide SRIOV support, configure the following settings:
- - Enable kubeovn CNI and SRIOV:
+    - Enable Calico CNI and SRIOV:
```yaml
- # group_vars/all/10-default.yml
+ # inventory/default/group_vars/all/10-open.yml
kubernetes_cnis:
- - kubeovn
+ - calico
- sriov
```
- Enable SRIOV for KubeVirt:
```yaml
- # group_vars/all/10-default.yml
+ # inventory/default/group_vars/all/10-open.yml
# SR-IOV support for kube-virt based Virtual Machines
sriov_kubevirt_enable: true
```
- Enable necessary Network Interfaces with SRIOV:
```yaml
- # host_vars/node01.yml
+ # inventory/default/host_vars/node01/10-open.yml
sriov:
network_interfaces: {<interface_name>: 1}
```
- Set up the maximum number of stateful VMs and directory where the Virtual Disks will be stored on Edge Node:
```yaml
- # group_vars/all/10-default.yml
+ # inventory/default/group_vars/all/10-open.yml
kubevirt_default_pv_dir: /var/vd/
kubevirt_default_pv_vol_name: vol
kubevirt_pv_vm_max_num: 64
```
- 2. Set up other common configurations for the cluster and enable other EPA features as needed and deploy the cluster using the `deploy_ne.sh` script in the OpenNESS experience kit top-level directory.
+ 2. Set up other common configurations for the cluster, enable other EPA features as needed, and deploy the cluster using the `deploy.py` script in the Converged Edge Experience Kits top-level directory.
+ > **NOTE**: For more details about deployment, refer to the [CEEK](../getting-started/converged-edge-experience-kits.md#converged-edge-experience-kit-explained) getting started page.
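For illustration only, and assuming the default `inventory.yml` described in the CEEK getting-started page referenced above, the deployment could then be launched with:

```shell
# run from the Converged Edge Experience Kits top-level directory (sketch)
python3 deploy.py
```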
3. On successful deployment, the following pods will be in a running state:
```shell
@@ -132,14 +133,14 @@ The KubeVirt role responsible for bringing up KubeVirt components is enabled by
## VM deployment
Provided below are sample deployment instructions for different types of VMs.
-Please use sample `.yaml` specification files provided in the OpenNESS Edge Controller directory, [edgenode/edgecontroller/kubevirt/examples/](https://github.com/open-ness/edgenode/tree/master/edgecontroller/kubevirt/examples), to deploy the workloads. Some of the files require modification to suit the environment they will be deployed in. Specific instructions on modifications are provided in the following steps:
+Please use sample `.yaml` specification files provided in the OpenNESS Edge Controller directory, [edgeservices/edgecontroller/kubevirt/examples/](https://github.com/open-ness/edgeservices/tree/master/edgecontroller/kubevirt/examples), to deploy the workloads. Some of the files require modification to suit the environment they will be deployed in. Specific instructions on modifications are provided in the following steps:
### Stateless VM deployment
To deploy a sample stateless VM with containerDisk storage:
1. Deploy the VM:
```shell
- [root@controller ~]# kubectl create -f /opt/openness/edgenode/edgecontroller/kubevirt/examples/statelessVM.yaml
+ [root@controller ~]# kubectl create -f /opt/openness/edgeservices/edgecontroller/kubevirt/examples/statelessVM.yaml
```
2. Start the VM:
```shell
@@ -164,13 +165,13 @@ To deploy a sample stateful VM with persistent storage and additionally use a Ge
>**NOTE**: Each stateful VM with a new Persistent Volume Claim (PVC) requires a new Persistent Volume (PV) to be created. See more in the [limitations section](#limitations). Also, CDI needs two PVs when creating a PVC and loading a VM image from the qcow2 file: one PV for the actual PVC to be created and one PV to translate the qcow2 image to raw input.
->**NOTE**: An issue appears when the CDI upload pod is deployed with Kube-OVN CNI, the deployed pods readiness probe fails and pod is never in ready state. It is advised that the user uses other CNI such as Calico CNI when using CDI with OpenNESS.
+>**NOTE**: An issue appears when the CDI upload pod is deployed with the Calico CNI: the deployed pod's readiness probe fails and the pod never reaches the ready state. It is advised to use a different CNI when using CDI with OpenNESS.
1. Create a persistent volume for the VM:
- Edit the sample yaml with the hostname of the node:
```yaml
- # /opt/openness/edgenode/edgecontroller/kubevirt/examples/persistentLocalVolume.yaml
+ # /opt/openness/edgeservices/edgecontroller/kubevirt/examples/persistentLocalVolume.yaml
# For both kv-pv0 and kv-pv1, enter the correct hostname:
- key: kubernetes.io/hostname
operator: In
@@ -179,7 +180,7 @@ To deploy a sample stateful VM with persistent storage and additionally use a Ge
```
- Create the PV:
```shell
- [root@controller ~]# kubectl create -f /opt/openness/edgenode/edgecontroller/kubevirt/examples/persistentLocalVolume.yaml
+ [root@controller ~]# kubectl create -f /opt/openness/edgeservices/edgecontroller/kubevirt/examples/persistentLocalVolume.yaml
```
- Check that PV is created:
```shell
@@ -232,7 +233,7 @@ To deploy a sample stateful VM with persistent storage and additionally use a Ge
```
8. Edit the .yaml file for the VM with the updated public key:
```yaml
- # /opt/openness/edgenode/edgecontroller/kubevirt/examples/cloudGenericVM.yaml
+ # /opt/openness/edgeservices/edgecontroller/kubevirt/examples/cloudGenericVM.yaml
users:
- name: root
password: root
@@ -242,7 +243,7 @@ To deploy a sample stateful VM with persistent storage and additionally use a Ge
```
9. Deploy the VM:
```shell
- [root@controller ~]# kubectl create -f /opt/openness/edgenode/edgecontroller/kubevirt/examples/cloudGenericVM.yaml
+ [root@controller ~]# kubectl create -f /opt/openness/edgeservices/edgecontroller/kubevirt/examples/cloudGenericVM.yaml
```
10. Start the VM:
```shell
@@ -294,7 +295,7 @@ To deploy a VM requesting SRIOV VF of NIC:
```
4. Deploy the VM requesting the SRIOV device (if a smaller amount is available on the platform, adjust the number of HugePages required in the .yaml file):
```shell
- [root@controller ~]# kubectl create -f /opt/openness/edgenode/edgecontroller/kubevirt/examples/sriovVM.yaml
+ [root@controller ~]# kubectl create -f /opt/openness/edgeservices/edgecontroller/kubevirt/examples/sriovVM.yaml
```
5. Start the VM:
```shell
@@ -400,7 +401,7 @@ kubectl apply -f cdiUploadCentosDvToleration.yaml
sleep 5
-kubectl create -f /opt/openness/edgenode/edgecontroller/kubevirt/examples/persistentLocalVolume.yaml
+kubectl create -f /opt/openness/edgeservices/edgecontroller/kubevirt/examples/persistentLocalVolume.yaml
```
## Useful Commands and Troubleshooting
@@ -431,9 +432,9 @@ Check that the IP address of the `cdi-upload-proxy` is correct and that the Netw
```
2. Cannot SSH to stateful VM with Cloud Generic Image due to the public key being denied.
-Confirm that the public key provided in `/opt/openness/edgenode/edgecontroller/kubevirt/examples/cloudGenericVM.yaml` is valid and in a correct format. Example of a correct format:
+Confirm that the public key provided in `/opt/openness/edgeservices/edgecontroller/kubevirt/examples/cloudGenericVM.yaml` is valid and in a correct format. Example of a correct format:
```yaml
- # /opt/openness/edgenode/edgecontroller/kubevirt/examples/cloudGenericVM.yaml
+ # /opt/openness/edgeservices/edgecontroller/kubevirt/examples/cloudGenericVM.yaml
users:
- name: root
password: root
@@ -450,7 +451,7 @@ Delete VM, DV, PV, PVC, and the Virtual Disk related to VM from the Edge Node:
[node]# rm /var/vd/vol/disk.img
```
-4. Cleanup script `cleanup_ne.sh` does not properly clean up KubeVirt/CDI components, if the user has intentionally/unintentionally deleted one of these components outside the script.
+4. The cleanup command `deploy.py --clean` does not properly clean up KubeVirt/CDI components if the user has intentionally or unintentionally deleted one of these components outside the script.
The KubeVirt/CDI components must be cleaned up/deleted in a specific order to wipe them successfully and the cleanup script does that for the user. When a user tries to delete the KubeVirt/CDI operator in the wrong order, the namespace for the component may be stuck indefinitely in a `terminating` state. This is not an issue if the user runs the script to completely clean the cluster but might be troublesome if the user wants to run cleanup for KubeVirt only. To fix this, use:
1. Check which namespace is stuck in a `terminating` state:
@@ -475,7 +476,7 @@ The KubeVirt/CDI components must be cleaned up/deleted in a specific order to wi
3. Run clean up script for kubeVirt again:
```shell
- [controller]# ./cleanup_ne.sh
+ [controller]# python3 deploy.py --clean
```
## Helpful Links
diff --git a/doc/applications-onboard/using-openness-cnca.md b/doc/applications-onboard/using-openness-cnca.md
index 68b31d5b..40827379 100644
--- a/doc/applications-onboard/using-openness-cnca.md
+++ b/doc/applications-onboard/using-openness-cnca.md
@@ -46,7 +46,7 @@ Available management with `kube-cnca` against LTE CUPS OAM agent are:
2. Deletion of LTE CUPS userplanes
3. Updating (patching) LTE CUPS userplanes
-The `kube-cnca` plugin is installed automatically on the control plane during the installation phase of the [OpenNESS Experience Kit](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/openness-experience-kits.md).
+The `kube-cnca` plugin is installed automatically on the control plane during the installation phase of the [Converged Edge Experience Kits](../getting-started/converged-edge-experience-kits.md).
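Since `kube-cnca` is delivered as a kubectl plugin, one way to confirm that it was installed on the control plane is to list the available plugins (a sketch; the plugin binary name is assumed to contain `cnca`):

```shell
kubectl plugin list | grep cnca
```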
In the following sections, a detailed explanation with examples is provided about the CNCA management.
Creation of the LTE CUPS userplane is performed based on the configuration provided by the given YAML file. The YAML configuration should follow the provided sample YAML in [Sample YAML LTE CUPS userplane configuration](#sample-yaml-lte-cups-userplane-configuration) section. Use the `apply` command to post a userplane creation request onto Application Function (AF):
@@ -116,7 +116,7 @@ policy:
# 5G NGC components bring up and Configuration using CNCA
-OpenNESS provides Ansible\* scripts for setting up NGC components for two scenarios. Each of the scenarios is supported by a separate role in the OpenNESS Experience Kit:
+OpenNESS provides Ansible\* scripts for setting up NGC components for two scenarios. Each of the scenarios is supported by a separate role in the Converged Edge Experience Kits:
Role "ngc"
This role brings up the 5g OpenNESS setup in the loopback mode for testing and demonstrating its usability. The Ansible scripts that are part of the "ngc" role build, configure, and start AF, Network Exposure Function (NEF), OAM, and Core Network Test Function (CNTF) in the Network Edge mode. Within this role, AF and OAM are set up on the controller node. NEF and CNTF are set up on the edge node. The description of the configuration and setup of the NGC components provided in the next sections of this document refers to the ngc role. The NGC components set up within the ngc role can be fully integrated and tested with the provided kubectl plugin or CNCA UI.
@@ -125,11 +125,31 @@ This role brings up the 5g OpenNESS setup in the loopback mode for testing and d
### Bring up of NGC components in Network Edge mode
-- If OpenNESS (Edge Controller + Edge Node) is not yet deployed through openness-experience-kit, then:
- Enable the role for ngc by changing the `ne_ngc_enable` variable to `true` in `group_vars/all/20-enhanced.yml` before running `deploy_ne.sh` or `deploy_ne.sh single`, as described in the [OpenNESS Network Edge: Controller and Edge node setup](../getting-started/network-edge/controller-edge-node-setup.md) document. If not, skip this step.
+- If OpenNESS (Edge Controller + Edge Node) is not yet deployed through converged-edge-experience-kits, then:
+  Set `flavor` to `core-cplane` in `inventory.yml` (a sample `inventory.yml` is shown below) before running `deploy.py`, as described in the [OpenNESS Network Edge: Controller and Edge node setup](../getting-started/openness-cluster-setup.md) document. If OpenNESS is already deployed, skip this step.
+
+ ```yaml
+ ---
+ all:
+ vars:
+ cluster_name: cluster_test # NOTE: Use `_` instead of spaces.
+ flavor: core-cplane # NOTE: Flavors can be found in `flavors` directory.
+ single_node_deployment: true # Request single node deployment (true/false).
+ limit: # Limit ansible deployment to certain inventory group or hosts
+ controller_group:
+ hosts:
+ controller:
+ ansible_host: 172.16.0.1
+ ansible_user: openness
+ edgenode_group:
+ hosts:
+ node01:
+ ansible_host: 172.16.0.1
+ ansible_user: openness
+ ```
- If OpenNESS Edge Controller + Edge Node is already deployed (but without enabling the ngc role) and at a later stage you want to enable NGC components then:
- Enable the role for ngc by changing the `ne_ngc_enable` variable to `true` in `group_vars/all/20-enhanced.yml` and then re-run `deploy_ne.sh` or `deploy_ne.sh single` as described in the [OpenNESS Network Edge: Controller and Edge node setup](../getting-started/network-edge/controller-edge-node-setup.md) document.
+  Enable the ngc role by changing the `ne_ngc_enable` variable to `true` in `inventory/default/group_vars/all/20-enhanced.yml`, and then re-run `deploy.py` with the `limit: controller` variable specified in `inventory.yml` (define only one cluster on which the role should be enabled), as described in the [OpenNESS Network Edge: Controller and Edge node setup](../getting-started/openness-cluster-setup.md) document.
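For illustration, the variable change described above is a one-line edit (the `limit` field itself appears in the sample `inventory.yml` earlier in this section):

```yaml
# inventory/default/group_vars/all/20-enhanced.yml (sketch)
ne_ngc_enable: true
```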
>**NOTE**: In addition to the OpenNESS controller bring up, by enabling the ngc role, the playbook scripts performs:
@@ -386,7 +406,7 @@ Modifying the certificates. Complete the following steps:
For Network Edge mode, the CNCA provides a kubectl plugin to configure the 5G Core network. Kubernetes adopted plugin concepts to extend its functionality. The `kube-cnca` plugin executes CNCA related functions within the Kubernetes ecosystem. The plugin performs remote callouts against NGC OAM and AF microservice on the controller itself.
-The `kube-cnca` plugin is installed automatically on the control plane node during the installation phase of the [OpenNESS Experience Kit](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/network-edge/controller-edge-node-setup.md)
+The `kube-cnca` plugin is installed automatically on the control plane node during the installation phase of the [Converged Edge Experience Kits](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/openness-cluster-setup.md)
#### Edge Node services operations with 5G Core (through OAM interface)
@@ -597,19 +617,20 @@ Sample yaml file for updating a single application:
apiVersion: v1
kind: ngc_pfd
policy:
- externalAppID: afApp01
- allowedDelay: 1000
- cachingTime: 1000
- pfds:
- - pfdID: pfdId01
- flowDescriptions:
- - "permit in ip from 10.11.12.123 80 to any"
- - pfdID: pfdId02
- urls:
- - "^http://test.example2.net(/\\S*)?$"
- - pfdID: pfdId03
- domainNames:
- - "www.latest_example.com"
+ pfdDatas:
+ - externalAppID: afApp01
+ allowedDelay: 1000
+ cachingTime: 1000
+ pfds:
+ - pfdID: pfdId01
+ flowDescriptions:
+ - "permit in ip from 10.11.12.123 80 to any"
+ - pfdID: pfdId02
+ urls:
+ - "^http://test.example2.net(/\\S*)?$"
+ - pfdID: pfdId03
+ domainNames:
+ - "www.latest_example.com"
```
#### Policy Authorization operations with 5G Core (through AF interface)
diff --git a/doc/applications/openness_openvino.md b/doc/applications/openness_openvino.md
index 9688283a..6d2ee25e 100644
--- a/doc/applications/openness_openvino.md
+++ b/doc/applications/openness_openvino.md
@@ -133,7 +133,7 @@ openvino-prod-app 1.0
### Streaming & Displaying the Augmented Video
-The OpenVINO edge application accepts a UDP video stream. This video stream can
+The OpenVINO edge application accepts a TCP video stream. This video stream can
be from any video source such as an IP camera. The Client Simulator provided in
this project uses a sample mp4 video file to continuously transmit the video
stream to the OpenNESS Edge Node. Object detection is executed on this video
diff --git a/doc/applications/openness_ovc.md b/doc/applications/openness_ovc.md
index 68786b9f..5da74965 100644
--- a/doc/applications/openness_ovc.md
+++ b/doc/applications/openness_ovc.md
@@ -38,7 +38,7 @@ OpenNESS provides the underpinning network edge infrastructure which comprises t
![Smart City Architecure Deployed with OpenNESS](ovc-images/smart-city-architecture.png)
-The Open Visual Cloud website is located at the [Open Visual Cloud project](https://01.org/openvisualcloud). Smart City sample source code and documentation are available on [GitHub](https://github.com/OpenVisualCloud/Smart-City-Sample) and its integration with OpenNESS is available at [OpenNESS branch](https://github.com/OpenVisualCloud/Smart-City-Sample/tree/openness).
+The Open Visual Cloud website is located at the [Open Visual Cloud project](https://01.org/openvisualcloud). Smart City sample source code and documentation are available on [GitHub](https://github.com/OpenVisualCloud/Smart-City-Sample) and its integration with OpenNESS is available at [v20.10 branch](https://github.com/OpenVisualCloud/Smart-City-Sample/tree/v20.10).
## The Smart City Building Blocks
The Smart City sample consists of the following major building blocks:
diff --git a/doc/applications/openness_service_mesh.md b/doc/applications/openness_service_mesh.md
index 59f25e57..9e5b9a2f 100644
--- a/doc/applications/openness_service_mesh.md
+++ b/doc/applications/openness_service_mesh.md
@@ -20,7 +20,7 @@ Copyright (c) 2020 Intel Corporation
- [NGC Edge Control Plane Functions Enablement via OpenNESS Service Mesh](#ngc-edge-control-plane-functions-enablement-via-openness-service-mesh)
- [Prometheus, Grafana & Kiali integration](#prometheus-grafana--kiali-integration)
- [Getting Started](#getting-started)
- - [Enabling Service Mesh through the Service Mesh Flavor](#enabling-service-mesh-through-the-service-mesh-flavor)
+ - [Enabling Service Mesh through enabling the Service Mesh Role](#enabling-service-mesh-through-enabling-the-service-mesh-role)
- [Enabling Service Mesh with the Media Analytics Flavor](#enabling-service-mesh-with-the-media-analytics-flavor)
- [Enabling 5GC Service Mesh with the Core Control Plane Flavor](#enabling-5gc-service-mesh-with-the-core-control-plane-flavor)
- [References](#references)
@@ -36,7 +36,7 @@ With the Service Mesh approach, the applications do not decide which service end
## OpenNESS Service Mesh Enablement through Istio
-[Istio](https://istio.io/) is a feature-rich, cloud-native service mesh platform that provides a collection of key capabilities such as: [Traffic Management](https://istio.io/latest/docs/concepts/traffic-management/), [Security](https://istio.io/latest/docs/concepts/security/) and [Observability](https://istio.io/latest/docs/concepts/observability/) uniformly across a network of services. OpenNESS integrates natively with the Istio service mesh to help reduce the complexity of large scale edge applications, services, and network functions. The Istio service mesh is deployed automatically through OpenNESS Experience Kits (OEK) with an option to onboard the media analytics services on the service mesh.
+[Istio](https://istio.io/) is a feature-rich, cloud-native service mesh platform that provides a collection of key capabilities such as: [Traffic Management](https://istio.io/latest/docs/concepts/traffic-management/), [Security](https://istio.io/latest/docs/concepts/security/) and [Observability](https://istio.io/latest/docs/concepts/observability/) uniformly across a network of services. OpenNESS integrates natively with the Istio service mesh to help reduce the complexity of large scale edge applications, services, and network functions. The Istio service mesh is deployed automatically through Converged Edge Experience Kits (CEEK) with an option to onboard the media analytics services on the service mesh.
Istio mandates injecting [Envoy sidecars](https://istio.io/latest/docs/ops/deployment/architecture/#envoy) into the applications and services pods to become part of the service mesh. The Envoy sidecars intercept all inter-pod traffic, making it easy to manage, secure, and observe. Sidecar injection is automatically enabled to the `default` namespace in the OpenNESS cluster. This is done by applying the label `istio-injection=enabled` to the `default` namespace.
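The CEEK automation applies this label during deployment; for reference, the equivalent manual command on an existing cluster would be:

```shell
kubectl label namespace default istio-injection=enabled
```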
@@ -64,7 +64,7 @@ The service mesh framework takes care of provisioning, monitoring, and routing t
## Video Analytics Service Mesh Deployment
-The media analytics services can be automatically deployed on the Istio service mesh using the OEK. To do so, the entry `ne_istio_enable` in the file `flavors/media-analytics/all.yml` needs to be set to `true`. After running the `deploy.sh` script, the output should include the following pods in the `default` and `istio-system` namespaces on the cluster:
+The media analytics services can be automatically deployed on the Istio service mesh using the CEEK. To do so, the entry `ne_istio_enable` in the file `flavors/media-analytics/all.yml` needs to be set to `true`. After running the `deploy.py` script, the output should include the following pods in the `default` and `istio-system` namespaces on the cluster:
```shell
$ kubectl get pods -A
@@ -503,15 +503,28 @@ _Figure - Istio Telemetry with Grafana_
## Getting Started
-### Enabling Service Mesh through the Service Mesh Flavor
+### Enabling Service Mesh through enabling the Service Mesh Role
-Istio service mesh can be deployed with OpenNESS using the OEK through the pre-defined *service-mesh* flavor as described in [Service Mesh Flavor](../flavors.md#service-mesh-flavor) section. Istio is installed with `default` profile by default (for Istio installation profiles refer to: https://istio.io/latest/docs/setup/additional-setup/config-profiles/).
-The Istio management console, [Kiali](https://kiali.io/), is deployed alongside Istio with the default credentials:
+Istio service mesh can be deployed with OpenNESS using the CEEK through the defined Istio role. The Istio role is enabled by setting the parameter `ne_istio_enable: true`. Istio is installed with the `default` profile by default (for Istio installation profiles, refer to: https://istio.io/latest/docs/setup/additional-setup/config-profiles/).
+The Istio management console, [Kiali](https://kiali.io/), is deployed alongside Istio with the default credentials:
* Username: `admin`
* Nodeport set to `30001`
-To get the randomly generated password run the following command on Kubernetes controller:
+The above settings can be customized by adjusting the following parameters in `inventory/default/group_vars/all/10-default.yml`:
+
+```yml
+# Istio deployment profile possible values: default, demo, minimal, remote
+istio_deployment_profile: "default"
+# Istio is deployed to "default" namespace in the cluster
+istio_deployment_namespace: "default"
+# Kiali
+istio_kiali_username: "admin"
+istio_kiali_password: "{{ lookup('password', '/dev/null length=16') }}"
+istio_kiali_nodeport: 30001
+```
+
+To get the randomly generated password, run the following command on the Kubernetes controller:
`kubectl get secrets/kiali -n istio-system -o json | jq -r '.data.passphrase' | base64 -d`
Prometheus and Grafana are deployed in the OpenNESS platform as part of the telemetry role and are integrated with the Istio service mesh.
@@ -552,12 +565,12 @@ Status: Active
```
Users can change the namespace labeled with istio label using the parameter `istio_deployment_namespace`
-* in `flavors/service-mesh/all.yml` for deployment with service-mesh flavor
* in `flavors/media-analytics/all.yml` for deployment with media-analytics flavor
+* in `inventory/default/group_vars/all/10-default.yml` for deployment with any flavor (with the Istio role enabled)
> **NOTE**: The default OpenNESS network policy applies to pods in the `default` namespace and blocks all ingress traffic. Users must remove the default policy and apply custom network policy when deploying applications in the `default` namespace. Refer to the [Kubernetes NetworkPolicies](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/network-edge-applications-onboarding.md#applying-kubernetes-network-policies) for an example policy allowing ingress traffic from `192.168.1.0/24` subnet on a specific port.
-Kiali console is accessible from a browser using `http://:30001` and credentials defined in OpenNESS Experience Kits:
+Kiali console is accessible from a browser using `http://<node-ip>:30001` and the credentials defined in the Converged Edge Experience Kits:
![Kiali Dashboard Login](./service-mesh-images/kiali-login.png)
@@ -565,11 +578,11 @@ _Figure - Kiali Dashboard Login_
### Enabling Service Mesh with the Media Analytics Flavor
-The Istio service mesh is not enabled by default in OpenNESS. It can be installed alongside the video analytics services by setting the flag `ne_istio_enable` to `true` in the *media-analytics* flavor. The media analytics services are installed with the OpenNESS service mesh through the OEK playbook as described in the [Media Analytics](../flavors.md#media-analytics-flavor) section.
+The Istio service mesh is not enabled by default in OpenNESS. It can be installed alongside the video analytics services by setting the flag `ne_istio_enable` to `true` in the *media-analytics* flavor. The media analytics services are installed with the OpenNESS service mesh through the CEEK playbook as described in the [Media Analytics](../flavors.md#media-analytics-flavor) section.
### Enabling 5GC Service Mesh with the Core Control Plane Flavor
-The Istio service mesh is integrated with the NGC core control plane and can be deployed through the pre-defined *core-cplane* deployment flavor in OEK playbook as described in the [Core Control Plane Flavor](../flavors.md#core-control-plane-flavor) section. The Istio service mesh flag `ne_istio_enable` is enabled by default.
+The Istio service mesh is integrated with the NGC core control plane and can be deployed through the pre-defined *core-cplane* deployment flavor in CEEK playbook as described in the [Core Control Plane Flavor](../flavors.md#core-control-plane-flavor) section. The Istio service mesh flag `ne_istio_enable` is enabled by default.
## References
diff --git a/doc/applications/openness_va_services.md b/doc/applications/openness_va_services.md
index f943ba85..0a6a28a7 100644
--- a/doc/applications/openness_va_services.md
+++ b/doc/applications/openness_va_services.md
@@ -16,12 +16,12 @@ OpenNESS furnishes the Video Analytics Services to enable third-party edge appli
## Getting Started with Video Analytics Services
-To get started with deploying Video Analytics Services through OpenNESS Experience Kits (OEK), refer to [Media Analytics Flavor](../flavors.md#media-analytics-flavor) and [Media Analytics Flavor with VCAC-A](../flavors.md#media-analytics-flavor-with-vcac-a).
+To get started with deploying Video Analytics Services through Converged Edge Experience Kits (CEEK), refer to [Media Analytics Flavor](../flavors.md#media-analytics-flavor).
> **NOTE**: If creating a customized flavor, the *Video Analytics Services* role can be included in the Ansible\* playbook by setting the flag `video_analytics_services_enable: true` in the flavor file.
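As a sketch, a hypothetical custom flavor file enabling the role could contain (the flavor name is a placeholder, not part of the CEEK):

```yaml
# flavors/my-custom-flavor/all.yml (hypothetical)
video_analytics_services_enable: true
```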
## Video Analytics Services Deployment
-Video Analytics Services are installed by the OEK when `media-services` or `media-services-vca` flavors are deployed. These flavors include the *Video Analytics Services* role in the Ansible playbook by turning on the flag `video_analytics_services_enable: true` under the hood. When the role is included, multiple Video Analytics Services are deployed. One instance of the Video Analytics Services consists of two containers:
+Video Analytics Services are installed by the CEEK when the `media-services` flavor is deployed. This flavor includes the *Video Analytics Services* role in the Ansible playbook by turning on the flag `video_analytics_services_enable: true` under the hood. When the role is included, multiple Video Analytics Services are deployed. One instance of the Video Analytics Services consists of two containers:
1. Video analytics serving gateway (VAS gateway)
2. Video analytics serving sidecar (VAS sidecar)
@@ -29,13 +29,15 @@ The *VAS gateway* is the artifact created when [building the VAS](https://github
The *VAS sidecar* interfaces with the Edge Application Agent (EAA) to register a Video Analytics Service whereby it becomes discoverable by third-party (consumer) applications. The service registration phase provides information about the service such as:
1. Service endpoint URI, e.g., `http://analytics-ffmpeg.media:8080`
-2. Acceleration used: `Xeon`, `HDDL`, or `VCAC-A`
+2. Acceleration used: `Xeon`, `HDDL`\*, or `VCAC-A`\*
3. Underpinning multimedia framework: `GStreamer` or `FFmpeg`
4. Available pipelines: `emotion_recoginition`, `object_detection`, and other custom pipelines
![Video Analytics Services Deployment](va-service-images/va-services-deployment.png)
-_Figure - Video Analytics Services Deployment_
+_Figure - Video Analytics Services Deployment\*_
+
+> **\*NOTE**: Video Analytics Services acceleration through HDDL & VCAC-A are directional and are not currently supported in OpenNESS.
Multiple instances of the Video Analytics Service can co-exist in an OpenNESS cluster depending on the available hardware resources, as depicted in the figure above. Standalone service endpoints are created for every multimedia framework and acceleration type.
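As an illustrative sketch only, a consumer application that has discovered a service via EAA could query the advertised endpoint; the `/pipelines` listing path below follows the upstream Video Analytics Serving REST API and is an assumption here:

```shell
# query the FFmpeg-based service instance registered above (sketch)
curl http://analytics-ffmpeg.media:8080/pipelines
```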
diff --git a/doc/applications/va-service-images/va-services-deployment.png b/doc/applications/va-service-images/va-services-deployment.png
index 63320114..695e869c 100644
Binary files a/doc/applications/va-service-images/va-services-deployment.png and b/doc/applications/va-service-images/va-services-deployment.png differ
diff --git a/doc/architecture.md b/doc/architecture.md
index a6c506de..1910f522 100644
--- a/doc/architecture.md
+++ b/doc/architecture.md
@@ -156,7 +156,10 @@ OpenNESS supports the following accelerator microservices.
- **FPGA/eASIC/NIC**: Software that enables AI inferencing for applications, high-performance and low-latency packet pre-processing on network cards, and offloading for network functions such as eNB/gNB offloading Forward Error Correction (FEC). It consists of:
- FPGA device plugin for inferencing
- SR-IOV device plugin for FPGA/eASIC
- - Dynamic Device Profile for Network Interface Cards (NIC)
+ - Dynamic Device Profile for Network Interface Cards (NIC)
+- **Intel® QuickAssist Technology (Intel® QAT)**: Software that enables offloading of security and compression tasks on data at rest or in motion for the cloud, networking, big data, and storage applications:
+ - Kubernetes CRD operator for discrete and on-board Intel® QAT devices
+ - Intel QuickAssist Technology (QAT) device plugin for Kubernetes
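+
+A minimal sketch of how a workload container might request a QAT resource advertised by the device plugin; the resource name `qat.intel.com/generic` is the plugin's default and is assumed here, it may differ per deployment:
+```yaml
+# Illustrative container resources fragment requesting one QAT VF
+resources:
+  requests:
+    qat.intel.com/generic: 1
+  limits:
+    qat.intel.com/generic: 1
+```
+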
### Dataplane/Container Network Interfaces
@@ -261,7 +264,7 @@ CERA SD-WAN Edge flavor provides a reference deployment with Kubernetes enhancem
CERA SD-WAN Edge flavor provides a reference deployment with Kubernetes enhancements for High performance compute and networking for a SD-WAN node that runs SD-WAN CNF.
-Link: [CERA SD-WAN](https://github.com/open-ness/ido-specs/blob/master/doc/reference-architectures/openness_sdwan.md)
+Link: [CERA SD-WAN](https://github.com/open-ness/ido-specs/blob/master/doc/reference-architectures/cera_sdwan.md)
### CERA Media Analytics Flavor with VCAC-A
@@ -364,7 +367,7 @@ This devkit supports the installation of an OpenNESS Kubernetes cluster on a Mic
| NRF | Network function Repository Function |
| NUMA | NonUniform Memory Access |
| OAM | Operations, Administration and Maintenance |
-| OEK | OpenNESS Experience Kit |
+| CEEK | Converged Edge Experience Kits |
| OpenNESS | Open Network Edge Services Software |
| PCF | Policy Control Function |
| PDN | Packet Data Network |
@@ -384,4 +387,4 @@ This devkit supports the installation of an OpenNESS Kubernetes cluster on a Mic
| UE | User Equipment (in the context of LTE) |
| UPF | User Plane Function |
| UUID | Universally Unique IDentifier |
-| VIM | Virtual Infrastructure Manager |
+| VIM | Virtual Infrastructure Manager |
\ No newline at end of file
diff --git a/doc/building-blocks/dataplane/openness-interapp.md b/doc/building-blocks/dataplane/openness-interapp.md
index ee605dc7..eaa9d1e0 100644
--- a/doc/building-blocks/dataplane/openness-interapp.md
+++ b/doc/building-blocks/dataplane/openness-interapp.md
@@ -15,7 +15,9 @@ Multi-core edge cloud platforms typically host multiple containers or virtual ma
## InterApp Communication support in OpenNESS Network Edge
-InterApp communication on the OpenNESS Network Edge is supported using Open Virtual Network for Open vSwitch [OVN/OVS](https://github.com/open-ness/ido-specs/blob/master/doc/dataplane/openness-ovn.md) as the infrastructure. OVN/OVS in the network edge is supported through the Kubernetes kube-OVN Container Network Interface (CNI).
+InterApp communication on the OpenNESS Network Edge is supported using Open Virtual Network for Open vSwitch [OVN/OVS](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/dataplane/openness-ovn.md) as the infrastructure. OVN/OVS in the network edge is supported through the Kubernetes kube-OVN Container Network Interface (CNI).
+
+>**NOTE**: InterApp communication also works with the Calico CNI. Calico is supported as the default CNI in OpenNESS from the 21.03 release.
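+
+A minimal sketch of selecting Calico as the CNI in the CEEK group vars (the variable is the same `kubernetes_cnis` used elsewhere in the kits; treat the snippet as illustrative):
+```yaml
+# inventory/default/group_vars/all/10-open.yml - illustrative CNI selection
+kubernetes_cnis:
+  - calico
+```
+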
OVN/OVS is used as a default networking infrastructure for:
- Data plane interface: User data transmission between User Equipment (UE) and edge applications
diff --git a/doc/building-blocks/dataplane/openness-ovn.md b/doc/building-blocks/dataplane/openness-ovn.md
index b519bad2..cafe7d15 100644
--- a/doc/building-blocks/dataplane/openness-ovn.md
+++ b/doc/building-blocks/dataplane/openness-ovn.md
@@ -9,7 +9,7 @@ Copyright (c) 2019-2020 Intel Corporation
- [Summary](#summary)
## OVN Introduction
-Open Virtual Network (OVN) is an open-source solution based on the Open vSwitch-based (OVS) software-defined networking (SDN) solution for providing network services to instances. OVN adds to the capabilities of OVS to provide native support for virtual network abstractions, such as virtual L2 and L3 overlays and security groups. Further information about the OVN architecture can be found [here](https://www.openvswitch.org/support/dist-docs/ovn-architecture.7.html)
+Open Virtual Network (OVN) is an open-source solution based on the Open vSwitch (OVS) software-defined networking (SDN) solution for providing network services to instances. OVN adds to the capabilities of OVS to provide native support for virtual network abstractions, such as virtual L2 and L3 overlays and security groups. Further information about the OVN architecture can be found [here](http://www.openvswitch.org/support/dist-docs-2.5/ovn-architecture.7.html)
## OVN/OVS support in OpenNESS Network Edge
The primary objective of supporting OVN/OVS in OpenNESS is to demonstrate the capability of using a standard dataplane such as OVS for an Edge Compute platform. Using OVN/OVS further provides standard SDN-based flow configuration for the edge Dataplane.
@@ -18,7 +18,7 @@ The diagram below shows OVS as a dataplane and OVN overlay. This mode of deploym
![OpenNESS with NTS as dataplane overview](ovn_images/openness_ovn.png)
-[Kube-OVN](https://github.com/alauda/kube-ovn) has been chosen as the CNI implementation for OpenNESS. Additionally, in the following configuration, OpenNESS applications on Edge Nodes are deployed as DaemonSet Pods (in separate "openness" namespace) and exposed to client applications by k8s services.
+[Kube-OVN](https://github.com/alauda/kube-ovn) can be chosen as the CNI implementation for OVN/OVS in OpenNESS. Additionally, in the following configuration, OpenNESS applications on Edge Nodes are deployed as DaemonSet Pods (in separate "openness" namespace) and exposed to client applications by k8s services.
OVN/OVS is used as the default networking infrastructure for:
- Dataplane Interface: UE's to edge applications
diff --git a/doc/building-blocks/dataplane/openness-userspace-cni.md b/doc/building-blocks/dataplane/openness-userspace-cni.md
index ee17ab91..803f5b7a 100644
--- a/doc/building-blocks/dataplane/openness-userspace-cni.md
+++ b/doc/building-blocks/dataplane/openness-userspace-cni.md
@@ -16,17 +16,17 @@ Userspace CNI is a Container Network Interface (CNI) Kubernetes\* plugin that wa
## Setup Userspace CNI
-OpenNESS for Network Edge has been integrated with Userspace CNI to allow users to easily run DPDK- based applications inside Kubernetes pods. To install OpenNESS Network Edge with Userspace CNI support, add the value `userspace` to variable `kubernetes_cnis` in `group_vars/all/10-default.yml` and set value of the variable `kubeovn_dpdk` in `group_vars/all/10-default.yml` to `true`:
+OpenNESS for Network Edge has been integrated with Userspace CNI to allow users to easily run DPDK-based applications inside Kubernetes pods. To install OpenNESS Network Edge with Userspace CNI support, add the value `userspace` to the variable `kubernetes_cnis` in `inventory/default/group_vars/all/10-open.yml` and set the value of the variable `kubeovn_dpdk` in `inventory/default/group_vars/all/10-open.yml` to `true`:
```yaml
-# group_vars/all/10-default.yml
+# inventory/default/group_vars/all/10-open.yml
kubernetes_cnis:
- kubeovn
- userspace
```
```yaml
-# group_vars/all/10-default.yml
+# inventory/default/group_vars/all/10-open.yml
kubeovn_dpdk: true
```
@@ -35,19 +35,21 @@ kubeovn_dpdk: true
DPDK apps require that a specific number of HugePages are enabled. By default, the Ansible\* scripts will enable 1024 of 2M HugePages on a system and then start OVS-DPDK with 1Gb of those HugePages. To change this setting to reflect your specific requirements, set the Ansible variables as defined in the example below. This example enables 4 of 1GB HugePages and appends 1 GB to OVS-DPDK, leaving 3 pages for DPDK applications that will be running in the pods.
```yaml
-# group_vars/controller_group/10-default.yml
+# inventory/default/group_vars/controller_group/10-open.yml
hugepage_size: "1G"
hugepage_amount: "4"
+default_grub_params: "default_hugepagesz={{ hugepage_size }} hugepagesz={{ hugepage_size }} hugepages={{ hugepage_amount }} intel_iommu=on iommu=pt"
```
```yaml
-# group_vars/edgenode_group/10-default.yml
+# inventory/default/group_vars/edgenode_group/10-open.yml
hugepage_size: "1G"
hugepage_amount: "4"
+default_grub_params: "default_hugepagesz={{ hugepage_size }} hugepagesz={{ hugepage_size }} hugepages={{ hugepage_amount }} intel_iommu=on iommu=pt"
```
```yaml
-# group_vars/all/10-default.yml
+# inventory/default/group_vars/all/10-open.yml
# Hugepage size to be used with DPDK: 2Mi or 1Gi
kubeovn_dpdk_hugepage_size: "1Gi"
# Overall amount of hugepages available to DPDK
diff --git a/doc/building-blocks/emco/openness-emco-images/openness-emco-smtc-hpa-setup.png b/doc/building-blocks/emco/openness-emco-images/openness-emco-smtc-hpa-setup.png
new file mode 100644
index 00000000..a74772fc
Binary files /dev/null and b/doc/building-blocks/emco/openness-emco-images/openness-emco-smtc-hpa-setup.png differ
diff --git a/doc/building-blocks/emco/openness-emco.md b/doc/building-blocks/emco/openness-emco.md
index 7ffd4ac0..07f644f6 100644
--- a/doc/building-blocks/emco/openness-emco.md
+++ b/doc/building-blocks/emco/openness-emco.md
@@ -1,6 +1,6 @@
```text
SPDX-License-Identifier: Apache-2.0
-Copyright (c) 2020 Intel Corporation
+Copyright (c) 2020-2021 Intel Corporation
```
# Edge Multi-Cluster Orchestrator (EMCO)
@@ -18,11 +18,12 @@ Copyright (c) 2020 Intel Corporation
- [Lifecycle Operations](#lifecycle-operations-2)
- [Level-1 Logical Clouds](#level-1-logical-clouds)
- [Level-0 Logical Clouds](#level-0-logical-clouds)
+ - [Hardware Platform Awareness](#hardware-platform-awareness)
- [OVN Action Controller](#ovn-action-controller)
- [Traffic Controller](#traffic-controller)
- [Generic Action Controller](#generic-action-controller)
- [Resource Synchronizer](#resource-synchronizer)
- - [Placment and Action Controllers in EMCO](#placment-and-action-controllers-in-emco)
+ - [Placement and Action Controllers in EMCO](#placement-and-action-controllers-in-emco)
- [Status Monitoring and Queries in EMCO](#status-monitoring-and-queries-in-emco)
- [EMCO Terminology](#emco-terminology-1)
- [EMCO API](#emco-api)
@@ -34,6 +35,9 @@ Copyright (c) 2020 Intel Corporation
- [Logical Cloud Setup](#logical-cloud-setup)
- [Deploy SmartCity Application](#deploy-smartcity-application)
- [SmartCity Termination](#smartcity-termination)
+  - [Deploy SmartCity Application With HPA Intent](#deploy-smartcity-application-with-hpa-intent)
+    - [HPA intent based on allocatable resource requirements - CPU](#hpa-intent-based-on-allocatable-resource-requirements---cpu)
+    - [HPA intent based on non-allocatable resource requirements - VCAC-A](#hpa-intent-based-on-non-allocatable-resource-requirements---vcac-a)
## Background
Edge Multi-Cluster Orchestration(EMCO), an OpenNESS Building Block, is a Geo-distributed application orchestrator for Kubernetes\*. EMCO operates at a higher level than Kubernetes\* and interacts with multiple of edges and clouds running Kubernetes. The main objective of EMCO is automation of the deployment of applications and services across multiple clusters. It acts as a central orchestrator that can manage edge services and network functions across geographically distributed edge clusters from different third parties.
@@ -69,14 +73,14 @@ The following figure shows the topology overview for the OpenNESS EMCO orchestra
_Figure 2 - Topology Overview with OpenNESS EMCO_
All the managed edge clusters and cloud clusters are connected with the EMCO cluster through the WAN network.
-- The central orchestration (EMCO) cluster can be installed and provisioned by using the [OpenNESS Central Orchestrator Flavor](https://github.com/open-ness/specs/blob/master/doc/flavors.md).
-- The edge clusters and the cloud cluster can be installed and provisioned by using the [OpenNESS Flavor](https://github.com/open-ness/specs/blob/master/doc/flavors.md).
+- The central orchestration (EMCO) cluster can be installed and provisioned by using the [OpenNESS Central Orchestrator Flavor](../../flavors.md).
+- The edge clusters and the cloud cluster can be installed and provisioned by using the [OpenNESS Flavor](../../flavors.md).
- The composite application - [SmartCity](https://github.com/OpenVisualCloud/Smart-City-Sample) is composed of two parts: edge application and cloud (web) application.
- The edge application executes media processing and analytics on multiple edge clusters to reduce latency.
- The cloud application is like a web application for additional post-processing, such as calculating statistics and display/visualization on the cloud cluster side.
- The EMCO user can deploy the SmartCity applications across the clusters. Besides that, EMCO allows the operator to override configurations and profiles to satisfy deployment needs.
-This document aims to familiarize the user with EMCO and [OpenNESS deployment flavor](https://github.com/open-ness/specs/blob/master/doc/flavors.md) for EMCO installation and provision, and provide instructions accordingly.
+This document aims to familiarize the user with EMCO and [OpenNESS deployment flavor](../../flavors.md) for EMCO installation and provision, and provide instructions accordingly.
## EMCO Introduction
@@ -186,6 +190,115 @@ Logical Clouds were introduced to group and partition clusters in a multi-tenant
##### Level-0 Logical Clouds
In some use cases, and in the administrative domains where it makes sense, a project may want to access raw, unmodified, administrator-level clusters. For such cases, no namespaces need to be created and no new users need to be created or authenticated in the API. To solve this, the Distributed Cloud Manager introduces Level-0 Logical Clouds, which offer the same consistent interface as Level-1 Logical Clouds to the Distributed Application Scheduler. Being of type Level-0 means "the lowest-level", or the administrator level. As such, no changes will be made to the clusters themselves. Instead, the only operation that takes place is the reuse of credentials already provided via the Cluster Registration API for the clusters assigned to the Logical Cloud (instead of generating new credentials, namespace/resources and kubeconfig files).
+#### Hardware Platform Awareness
+The Hardware Platform Awareness (HPA) is a feature that enables placement
+of workloads in different Kubernetes clusters based on availability of hardware
+resources in those clusters. Some examples of hardware resources are CPU,
+memory, devices such as GPUs, and PCI Virtual Functions (VFs) in SR-IOV
+capable PCI devices. HPA Intents can be added to the deployment intent
+group to express hardware resource requirements for individual
+microservices within an application.
+
+To elaborate, HPA tracks two kinds of resources:
+
+ A. Capabilities, also called Non-Allocatable Resources: A workload may
+ need CPUs with specific instruction sets such as AVX512, or a node in
+ which Huge Pages are enabled for memory. Such capabilities are
+ expressed in Kubernetes as a label on the node. Since capabilities
+ are properties rather than quantities, HPA models them as resources
+ for which one cannot specify how many of them are needed: they are not
+ allocatable.
+
+ B. Capacities, also called Allocatable Resources: A workload may need,
+ say, 2 CPUs, 4 GB RAM and 1 GPU. HPA Intents for such quantifiable
+ resources state how many of each resource type is needed. So they are
+ called allocatable resources.
+
+Every HPA resource has a name and one or more values. The name is exactly
+the same as the one used by Kubernetes. For example, the name
+`feature.node.kubernetes.io/cpu-cpuid.AVX512BW` identifies nodes with CPUs
+that have the AVX512 instruction set. A resource specification for it would
+look like this:
+ ```
+ resource: {"key":"feature.node.kubernetes.io/cpu-cpuid.AVX512BW", "value":"true"}
+ ```
+
+For non-allocatable resources, the `key` is the resource name as reported
+by the [Node Feature Discovery](https://docs.01.org/kubernetes/nfd/overview.html)
+feature in Kubernetes. The value would be the same as what one would use in
+the `nodeSelector` field of a Kubernetes pod manifest for that resource.
+For the example above, the `value` would be `true`.
+
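+For illustration, the same non-allocatable requirement expressed directly in a plain Kubernetes pod spec would be a `nodeSelector` entry (a sketch, not part of an HPA intent):
+```yaml
+spec:
+  nodeSelector:
+    feature.node.kubernetes.io/cpu-cpuid.AVX512BW: "true"
+```
+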
+Allocatable resources fall into two categories: (a) those treated by
+Kubernetes as distinct types, namely, `cpu` and `memory`, and (b) generic
+resources, such as devices reported by device plugins in the cluster nodes.
+For each of these, as per the Kubernetes model, one can assign a `requests`
+parameter, which is the minimum resource amount that needs to be guaranteed
+for the workload to function. Optionally, one can also assign a `limits`
+parameter, which is the maximum amount of that resource that can be
+assigned. Both parameters in the HPA intent get added to the pod manifest
+of the microservice specified in the HPA intent, so that the scheduler of
+the Kubernetes cluster on which the microservice gets placed can act on
+them for node-level placement.
+
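+As a sketch (values are hypothetical), an HPA intent carrying `requests` and `limits` for `cpu` and `memory` would translate into a container `resources` section like this in the pod manifest:
+```yaml
+resources:
+  requests:
+    cpu: "2"
+    memory: 4Gi
+  limits:
+    cpu: "4"
+    memory: 8Gi
+```
+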
+Only the `requests` field is used for placement decisions; the `limits`
+parameter (if present) is passed transparently to Kubernetes but otherwise
+ignored. The HPA placement tracks the total capacity of each resource in
+each cluster, and subtracts the number guaranteed to each microservice
+(i.e. `requests`) to determine the free number of each resource in each
+cluster. If the application's Helm chart specifies default resources, the
+HPA intent values will override them.
+
+Resource specifications in Kubernetes are made at the level of containers.
+HPA intents therefore require the container name to be specified. However,
+non-allocatable resources often correspond to node-level properties or
+capabilities, and they would be common to all containers within a pod.
+
+The intent author should note that Kubernetes has many implicit semantics for
+[CPU management policy](https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/)
+based on `requests` and `limits` fields for `cpu` and `memory`. In
+particular, these fields can be used to decide the QoS class of the pod
+and its CPU affinity. Specifically, to get exclusive CPUs for a pod, the
+following need to be done:
+ * In each node of the relevant Kubernetes clusters, set the kubelet option
+ `--cpu-manager-policy=static`. This enables the static CPU manager
+ policy in those nodes.
+ * In the HPA intent, specify both `requests` and `limits` for `cpu`
+ and ensure they are equal. Do the same for `memory`. This puts the
+ pod in Guaranteed QoS class.
+ * In the HPA intent, ensure the CPU counts are integers. This enables
+ exclusive CPU access.
+
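+A sketch of the matching HPA resource-requirement entry (values are hypothetical): an integer CPU count with `requests` equal to `limits`; memory would be pinned the same way in its own entry.
+```yaml
+# one resource-requirement entry per resource; repeat similarly for memory
+resource: {"name":"cpu", "requests":4, "limits":4}
+```
+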
+To arrange for a microservice to get access to a specific PCI device or
+PCI Virtual Functions (VFs) from an SR-IOV device, it is assumed that
+the necessary system prerequisites, such as installing device plugins, have
+been addressed in each relevant cluster. Often, the hardware requirements
+of a microservice have two parts: (a) the type and count of needed devices
+and (b) a specific version of the device driver to operate those devices.
+HPA expects that the appropriate driver version has been published
+as a node label (non-allocatable resource in HPA terms). Then the HPA
+intent would have two parts:
+ * A non-allocatable resource requirement, for the driver/software version.
+ Example: `resource: {"key": "foo.driver.version", "value": "10.1.0"}`
+ * An allocatable resource requirement, specifying the device resource
+ name, requests and limits. Example:
+ `resource: {"name": "myvendor.com/foo", "requests": "2", "limits": "2"}`
+
+In general, the HPA resource specifications and semantics are based on the
+corresponding Kubernetes concepts, consistent with the principle that EMCO
+automates Kubernetes deployments rather than pose yet another layer for the
+user to learn. So, please consult the Kubernetes documentation for further
+details.
+
+Examples of HPA Intents can be seen in the repository within the folder
+`src/placement-controllers/hpa/examples`.
+
+In the context of EMCO architecture, HPA provides a placement controller
+and an action controller. The HPA placement controller always runs after
+the generic placement controller.
+
+Please read the release notes regarding caveats and known limitations.
+
#### OVN Action Controller
The OVN Action Controller (ovnaction) microservice is an action controller which may be registered and added to a deployment intent group to apply specific network intents to resources in the composite application. It provides the following functionalities:
- Network intent APIs which allow specification of network connection intents for resources within applications.
@@ -215,7 +328,7 @@ To achieve both the usecases, the controller exposes RESTful APIs to create, upd
#### Resource Synchronizer
This microservice is the one which deploys the resources in edge/cloud clusters. 'Resource contexts' created by various microservices are used by this microservice. It takes care of retrying, in case the remote clusters are not reachable temporarily.
-#### Placment and Action Controllers in EMCO
+#### Placement and Action Controllers in EMCO
This section illustrates some key aspects of the EMCO controller architecture. Depending on the needs of a composite application, intents that handle specific operations for application resources (e.g. addition, modification, etc.) can be created via the APIs provided by the corresponding controller API. The following diagram shows the sequence of interactions to register controllers with EMCO.
![OpenNESS EMCO](openness-emco-images/emco-register-controllers.png)
@@ -262,7 +375,7 @@ _Figure 8 - Status Monitoring and Query Sequence_
### EMCO API
-For user interaction, EMCO provides [RESTful API](https://github.com/open-ness/EMCO/blob/main/docs/emco_apis.yaml). Apart from that, EMCO also provides CLI. For the detailed usage, refer to [EMCO CLI](https://github.com/open-ness/EMCO/tree/main/src/tools/emcoctl)
+For user interaction, EMCO provides [RESTful API](https://github.com/open-ness/IDO-EMCO/blob/main/docs/user/ido-emco-hpa-api.yaml). Apart from that, EMCO also provides CLI. For the detailed usage, refer to [EMCO CLI](https://github.com/open-ness/EMCO/tree/main/src/tools/emcoctl)
> **NOTE**: The EMCO RESTful API is the foundation for the other interaction facilities like the EMCO CLI, EMCO GUI (available in the future) and other orchestrators.
### EMCO Authentication and Authorization
@@ -301,11 +414,37 @@ Steps for EMCO Authentication and Authorization Setup:
- Apply Authentication and Authorization Policies
### EMCO Installation With OpenNESS Flavor
-EMCO supports [multiple deployment options](https://github.com/open-ness/EMCO/tree/main/deployments). [OpenNESS Experience Kit](https://github.com/open-ness/specs/blob/master/doc/getting-started/openness-experience-kits.md) offers the `central_orchestrator` flavor to automate EMCO build and deployment as mentioned below.
-- The first step is to prepare one server environment which needs to fulfill the [preconditions](https://github.com/open-ness/specs/blob/master/doc/getting-started/network-edge/controller-edge-node-setup.md#preconditions).
-- Then place the EMCO server hostname in `[controller_group]` group in `inventory.ini` file of openness-experience-kit.
-> **NOTE**: `[edgenode_group]` and `[edgenode_vca_group]` are not required for configuration, since EMCO micro services just need to be deployed on the Kubernetes* control plane node.
-- Run script `./deploy_ne.sh -f central_orchestrator`. Deployment should complete successfully. In the flavor, harbor registry is deployed to provide images services as well.
+EMCO supports [multiple deployment options](https://github.com/open-ness/IDO-EMCO/tree/main/deployments). [Converged Edge Experience Kits](../../getting-started/converged-edge-experience-kits.md) offers the `central_orchestrator` flavor to automate EMCO build and deployment as mentioned below.
+- The first step is to prepare one server environment which needs to fulfill the [preconditions](../../getting-started/openness-cluster-setup.md#preconditions).
+- Place the EMCO server hostname in the `controller_group/hosts/ctrl.openness.org:` dictionary in the `inventory.yml` file of the converged-edge-experience-kits.
+- Update the `inventory.yml` file by setting the deployment flavor to `central_orchestrator`:
+ ```yaml
+ ---
+ all:
+ vars:
+ cluster_name: central_orchestrator_cluster
+ flavor: central_orchestrator
+ single_node_deployment: false
+ limit: controller_group
+ controller_group:
+ hosts:
+ ctrl.openness.org:
+ ansible_host:
+ ansible_user: openness
+ edgenode_group:
+ hosts:
+ edgenode_vca_group:
+ hosts:
+ ptp_master:
+ hosts:
+ ptp_slave_group:
+ hosts:
+ ...
+ ```
+> **NOTE**: `edgenode_group:` and `edgenode_vca_group:` are not required for configuration, since the EMCO microservices only need to be deployed on the Kubernetes* control plane node.
+
+> **NOTE**: For more details about deployment and defining the inventory, please refer to the [CEEK](../../getting-started/converged-edge-experience-kits.md#converged-edge-experience-kit-explained) getting-started page.
+- Run the script `python3 deploy.py`. Deployment should complete successfully. This flavor also deploys the Harbor registry to provide image services.
```shell
# kubectl get pods -n emco
@@ -322,6 +461,8 @@ emco ovnaction-5d8d4447f9-nn7l6 1/1 Running 0 14m
emco rsync-99b85b4x88-ashmc 1/1 Running 0 14m
```
+Besides that, OpenNESS EMCO also provides Azure templates and supports deployment automation for an EMCO cluster on the Azure public cloud. For more details, refer to [OpenNESS Development Kit for Microsoft Azure](https://github.com/open-ness/ido-specs/blob/master/doc/devkits/openness-azure-devkit.md).
+
## EMCO Example: SmartCity Deployment
- The [SmartCity application](https://github.com/OpenVisualCloud/Smart-City-Sample) is a sample application that is built on top of the OpenVINO™ and Open Visual Cloud software stacks for media processing and analytics. The composite application is composed of two parts: EdgeApp + WebApp (cloud application for additional post-processing such as calculating statistics and display/visualization)
- The edge cluster (representing regional office), the cloud cluster and the EMCO are connected with each other.
@@ -333,21 +474,27 @@ _Figure 11 - SmartCity Deployment Architecture Overview_
The example steps are shown as follows:
- Prerequisites
- Make one edge cluster and one cloud cluster ready by using OpenNESS Flavor.
- - Prepare one server with a vanilla CentOS\* 7.8.2003 for EMCO installation.
+  - If testing with the HPA intent, prepare two edge clusters.
+ - Prepare one server with a vanilla CentOS\* 7.9.2009 for EMCO installation.
- EMCO installation
- Cluster setup
- Project setup
- Logical cloud Setup
- Deploy SmartCity application
+### EMCO installation
+Follow the guidance in [EMCO Installation With OpenNESS Flavor](#emco-installation-with-openness-flavor): log on to the EMCO host server and make sure that Harbor and the EMCO microservices are running.
+
### Cluster Setup
-In the step, cluster provider will be created. And both the edge cluster and the cloud cluster will be registered in the EMCO.
+This step includes:
+- Prepare the edge and cloud clusters' kubeconfig files, the SmartCity Helm charts, and other relevant artifacts.
+- Register the cluster provider using the [EMCO CLI](https://github.com/open-ness/EMCO/tree/main/src/tools/emcoctl).
+- Register the provider's clusters using the [EMCO CLI](https://github.com/open-ness/EMCO/tree/main/src/tools/emcoctl).
+- Register the EMCO controllers and the resource synchronizer using the [EMCO CLI](https://github.com/open-ness/EMCO/tree/main/src/tools/emcoctl).
-1. After [EMCO Installation With OpenNESS Flavor](#emco-installation-with-openness-flavor), logon to the EMCO host server and maker sure that Harbor and EMCO microservices are in running status.
-
-2. On the edge and cloud cluster, run the following command to make Docker logon to the Harbor deployed on the EMCO server, thus the clusters can pull SmartCity images from the Harbor:
+1. On the edge and cloud clusters, run the following commands to log Docker in to the Harbor registry deployed on the EMCO server so that the clusters can pull SmartCity images from it:
```shell
- HARBORRHOST=
+    HARBORRHOST=<EMCO server IP>:30003
cd /etc/docker/certs.d/
mkdir ${HARBORRHOST}
@@ -356,18 +503,22 @@ In the step, cluster provider will be created. And both the edge cluster and the
HARBORRPW=Harbor12345
docker login ${HARBORRHOST} -u admin -p ${HARBORRPW}
```
+
> **NOTE**: `HARBORRHOST` should be `<EMCO server IP>:30003`.
-3. On the EMCO server, download the [scripts,profiles and configmap JSON files](https://github.com/open-ness/edgeapps/tree/master/applications/smart-city-app/emco).
+2. On the EMCO server, download the [scripts, profiles, and configmap JSON files](https://github.com/open-ness/edgeapps/tree/master/applications/smart-city-app/emco).
-4. Run the command for the environment setup with success return as below:
+3. Prepare the artifacts: the clusters' kubeconfig files, the SmartCity Helm charts, and other relevant artifacts.
+   Run the command for the environment setup; a successful run returns output as below:
```shell
# cd cli-scripts/
- # ./setup_env.sh
+ # ./setup_env.sh -e -d -c -r
```
- > **NOTE**: [SmartCity application](https://github.com/OpenVisualCloud/Smart-City-Sample) secrets need the specific information only accessiable by the edge cluster and the cloud cluster. `setup_env.sh` will automate it.
-5. Run the command for the clusters setup with expected result as below:
+   > **NOTE**: The setup script uses the EMCO CLI. Its steps include cloning the SmartCity GitHub repo, building Docker images, preparing Helm charts, and gathering cluster configuration information.
+
+
+4. Run the command for the clusters setup with expected result as below:
```shell
# cd cli-scripts/
# ./01_apply.sh
@@ -376,37 +527,50 @@ In the step, cluster provider will be created. And both the edge cluster and the
URL: cluster-providers/smartcity-cluster-provider/clusters/cloud01/labels Response Code: 201 Response: {"label-name":"LabelSmartCityCloud"}
```
+   > **NOTE**: The cluster setup steps include cluster provider registration, cluster registration, adding labels to the clusters, and EMCO controller creation and registration.
+
+   > **NOTE**: The `01_apply.sh` script invokes the EMCO CLI tool `emcoctl` and applies the resource template file `01_clusters_template.yaml`, which contains the cluster-related resources to create in EMCO (for example, cluster providers and labels).
+
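+An illustrative fragment of `01_clusters_template.yaml` (names taken from the responses above; the full file also registers the clusters, their labels, and the EMCO controllers):
+```yaml
+version: emco/v2
+resourceContext:
+  anchor: cluster-providers
+metadata:
+  name: smartcity-cluster-provider
+```
+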
### Project Setup
+This step invokes the EMCO CLI and registers a project that groups the SmartCity application under a common tenant.
Run the command for the project setup with expected result as below:
-```shell
-# cd cli-scripts/
-# ./02_apply.sh
+ ```shell
+ # cd cli-scripts/
+ # ./02_apply.sh
-Using config file: emco_cfg.yaml
-http://localhost:31298/v2
-URL: projects Response Code: 201 Response: {"metadata":{"name":"project_smtc","description":"","UserData1":"","UserData2":""}}
-```
+ Using config file: emco_cfg.yaml
+ http://localhost:31298/v2
+ URL: projects Response Code: 201 Response: {"metadata":{"name":"project_smtc","description":"","UserData1":"","UserData2":""}}
+ ```
+The `02_apply.sh` script invokes the EMCO CLI tool `emcoctl` and applies the resource template file `02_project_template.yaml`, which contains the project-related resources to create in EMCO.
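+
+An illustrative fragment of `02_project_template.yaml` (the project name is taken from the response above):
+```yaml
+version: emco/v2
+resourceContext:
+  anchor: projects
+metadata:
+  name: project_smtc
+```
+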
### Logical Cloud Setup
+This step invokes the EMCO CLI and registers a logical cloud associated with the physical clusters.
Run the command for the logical cloud setup with expected result as below:
-```shell
-# cd cli-scripts/
-# ./03_apply.sh
-
-Using config file: emco_cfg.yaml
-http://localhost:31877/v2
-URL: projects/project_smtc/logical-clouds Response Code: 201 Response: {"metadata":{"name":"default","description":"","userData1":"","userData2":""},"spec":{"namespace":"","level":"0","user":{"user-name":"","type":"","user-permissions":null}}}
-http://localhost:31877/v2
-URL: projects/project_smtc/logical-clouds/default/cluster-references Response Code: 201 Response: {"metadata":{"name":"lc-edge01","description":"","userData1":"","userData2":""},"spec":{"cluster-provider":"smartcity-cluster-provider","cluster-name":"edge01","loadbalancer-ip":"0.0.0.0","certificate":""}}
-http://localhost:31877/v2
-URL: projects/project_smtc/logical-clouds/default/instantiate Response Code: 200 Response:
-```
+ ```shell
+ # cd cli-scripts/
+ # ./03_apply.sh
+
+ Using config file: emco_cfg.yaml
+ http://localhost:31877/v2
+ URL: projects/project_smtc/logical-clouds Response Code: 201 Response: {"metadata":{"name":"default","description":"","userData1":"","userData2":""},"spec":{"namespace":"","level":"0","user":{"user-name":"","type":"","user-permissions":null}}}
+ http://localhost:31877/v2
+ URL: projects/project_smtc/logical-clouds/default/cluster-references Response Code: 201 Response: {"metadata":{"name":"lc-edge01","description":"","userData1":"","userData2":""},"spec":{"cluster-provider":"smartcity-cluster-provider","cluster-name":"edge01","loadbalancer-ip":"0.0.0.0","certificate":""}}
+ http://localhost:31877/v2
+ URL: projects/project_smtc/logical-clouds/default/instantiate Response Code: 200 Response:
+ ```
+The `03_apply.sh` script invokes the EMCO CLI tool `emcoctl` and applies the resource template file `03_logical_cloud_template.yaml`, which contains the logical cloud-related resources to create in EMCO.
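+
+An illustrative fragment of `03_logical_cloud_template.yaml` (names and fields follow the responses above): a level-0 logical cloud plus a cluster reference for the edge cluster.
+```yaml
+version: emco/v2
+resourceContext:
+  anchor: projects/project_smtc/logical-clouds
+metadata:
+  name: default
+---
+version: emco/v2
+resourceContext:
+  anchor: projects/project_smtc/logical-clouds/default/cluster-references
+metadata:
+  name: lc-edge01
+spec:
+  cluster-provider: smartcity-cluster-provider
+  cluster-name: edge01
+  loadbalancer-ip: "0.0.0.0"
+```
+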
### Deploy SmartCity Application
+The setup includes:
+- Onboard the SmartCity application Helm charts and profiles.
+- Create a generic placement intent to specify the edge/cloud cluster locations for each application of SmartCity.
+- Create deployment intent references to the generic placement intent and the generic action intents for SmartCity's generic Kubernetes resources: configmap, secret, etc.
+- Approve and instantiate the SmartCity deployment.
1. Run the command for the SmartCity application deployment with expected result as below:
```shell
@@ -418,8 +582,11 @@ URL: projects/project_smtc/logical-clouds/default/instantiate Response Code: 200
http://localhost:31298/v2
URL: projects/project_smtc/composite-apps/composite_smtc/v1/deployment-intent-groups/smtc-deployment-intent-group/instantiate Response Code: 202 Response:
```
+
> **NOTE**: EMCO supports generic K8S resource configuration including configmap, secret,etc. The example offers the usage about [configmap configuration](https://github.com/open-ness/edgeapps/blob/master/applications/smart-city-app/emco/cli-scripts/04_apps_template.yaml) to the clusters.
+   > **NOTE**: The `04_apply.sh` script invokes the EMCO CLI tool `emcoctl` and applies the resource template file `04_apps_template.yaml`, which contains the application-related resources to create in EMCO, for example the deployment intent, application Helm chart entries, override profiles, and configmaps. The placement intent for this use case is the cluster label name and provider name.
+
2. Verify SmartCity Application Deployment Information.
The pods on the edge cluster are in the running status as shown as below:
@@ -456,13 +623,158 @@ _Figure 12 - SmartCity UI_
### SmartCity Termination
Run the command for the SmartCity termination with expected result as below:
-```shell
-# cd cli-scripts/
-# ./88_terminate.sh
+ ```shell
+ # cd cli-scripts/
+ # ./88_terminate.sh
-Using config file: emco_cfg.yaml
-http://localhost:31298/v2
-URL: projects/project_smtc/composite-apps/composite_smtc/v1/deployment-intent-groups/smtc-deployment-intent-group/terminate Response Code: 202 Response:
-```
+ Using config file: emco_cfg.yaml
+ http://localhost:31298/v2
+ URL: projects/project_smtc/composite-apps/composite_smtc/v1/deployment-intent-groups/smtc-deployment-intent-group/terminate Response Code: 202 Response:
+ ```
After termination, the SmartCity application will be deleted from the clusters.
+
+
+### Deploy SmartCity Application With HPA Intent
+OpenNESS EMCO supports Hardware Platform Awareness (HPA) based Placement Intent.
+- An application developer (for example, of SmartCity) can state that a certain microservice needs a specific list of resources.
+- EMCO can pass that requirement to each appropriate K8s cluster so that the K8s scheduler can place the microservice on a node that has that specific list of resources.
+ - There are two kinds of resources:
+    - Allocatable resources, which can be quantified and allocated to containers in specific quantities, such as cpu and memory.
+    - Non-allocatable resources, which are properties of CPUs, hosts, etc., such as the availability of a specific instruction set like AVX512.
+ - Each resource requirement in the intent shall be stated using the same name as in Kubernetes, such as cpu, memory, and intel.com/gpu, for both allocatable and non-allocatable resources.
+- For more details about EMCO HPA, refer to the [EMCO HPA Design](https://github.com/open-ness/IDO-EMCO/blob/main/docs/developer/hpa-design.md).
+
+
+OpenNESS EMCO offers an example of an HPA-based SmartCity application deployment. To obtain all the deployment-related scripts, contact your Intel representative. The following gives an overview of how to enable the HPA intent through the resource template files of the EMCO CLI tool `emcoctl`.
+
+The overall setup topology looks like:
+
+![OpenNESS EMCO](openness-emco-images/openness-emco-smtc-hpa-setup.png)
+
+_Figure 13 - SmartCity HPA Setup_
+
+
+#### HPA intent based on allocatable resource requirements - CPU
+
+- Two edge clusters and one cloud cluster need to be prepared beforehand.
+- Register the HPA-related controllers as in the example below:
+```yaml
+---
+#creating placement controller entries for determining a suitable cluster based on the hardware requirements for each microservice
+version: emco/v2
+resourceContext:
+ anchor: controllers
+metadata :
+ name: hpa-placement-controller-1
+ description: test
+ userData1: test1
+ userData2: test2
+spec:
+ host: {{ .HpaPlacementIP }}
+ port: {{ .HpaPlacementPort }}
+ type: placement
+ priority: 1
+
+---
+#creating action controller entries for modifying the Kubernetes objects corresponding to the app or microservice, so that the Kubernetes controller in the target cluster can satisfy those requirements.
+version: emco/v2
+resourceContext:
+ anchor: controllers
+metadata :
+ name: hpa-action-controller-1
+spec:
+ host: {{ .HpaActionIP }}
+ port: {{ .HpaActionPort }}
+ type: action
+ priority: 1
+
+---
+#creating clm controller entries
+version: emco/v2
+resourceContext:
+ anchor: clm-controllers
+metadata :
+ name: hpa-placement-controller-1
+ description: test
+ userData1: test1
+ userData2: test2
+spec:
+ host: {{ .HpaPlacementIP }}
+ port: {{ .HpaPlacementPort }}
+ priority: 1
+```
+
+> **NOTE**: To test with multiple edge clusters, add more edge cluster registrations in `01_clusters_template.yaml` and add a cluster reference for each new edge cluster to the logical cloud in `03_logical_cloud_template.yaml`.
+
+
+- Create the HPA intent and the consumer application context as in the example below:
+
+```yaml
+---
+#create app hpa placement intent
+version: emco/v2
+resourceContext:
+ anchor: projects/{{ .ProjectName }}/composite-apps/{{ .CompositeApp }}/v1/deployment-intent-groups/{{ .DeploymentIntent }}/hpa-intents
+metadata:
+ name: hpa-placement-intent-1
+ description: "smtc app hpa placement intent"
+ userData1: test1
+ userData2: test2
+spec:
+ app-name: {{ .AppEdge }}
+
+---
+#add consumer 1 to app hpa placement intent. A resource consumer for an allocatable resource is a container within a pod, and the resource consumer is expressed in terms of these Kubernetes objects.
+version: emco/v2
+resourceContext:
+ anchor: projects/{{ .ProjectName }}/composite-apps/{{ .CompositeApp }}/v1/deployment-intent-groups/{{ .DeploymentIntent }}/hpa-intents/hpa-placement-intent-1/hpa-resource-consumers
+metadata:
+ name: hpa-placement-consumer-1
+spec:
+ api-version: apps/v1
+ kind: Deployment
+ name: traffic-office1-analytics-traffic
+ container-name: traffic-office1-analytics-traffic
+
+---
+#add allocatable-resource to app hpa placement consumer
+version: emco/v2
+resourceContext:
+ anchor: projects/{{ .ProjectName }}/composite-apps/{{ .CompositeApp }}/v1/deployment-intent-groups/{{ .DeploymentIntent }}/hpa-intents/hpa-placement-intent-1/hpa-resource-consumers/hpa-placement-consumer-1/resource-requirements
+metadata:
+ name: hpa-placement-allocatable-resource-1
+ description: "resource requirements"
+spec:
+ allocatable : true
+ mandatory : true
+ weight : 1
+ resource : {"name":"cpu", "requests":8, "limits":9}
+```
+
+> **NOTE**: `traffic-office1-analytics-traffic` is the Kubernetes Deployment name and container name of the SmartCity analytics microservice.
+
+
+- After the SmartCity application is instantiated, the expected result is that the edge application is deployed on the edge cluster that satisfies the CPU resource requirement intent.
+
+
+#### HPA intent based on non-allocatable resource requirements - VCAC-A
+The Visual Cloud Accelerator Card - Analytics (VCAC-A) equips 2nd Generation Intel® Xeon® processor-based platforms with Iris® Pro Graphics and Intel® Movidius™ VPUs to enhance video codec, computer vision, and inference capabilities. Refer to the details in [OpenNESS VCAC-A](../enhanced-platform-awareness/openness-vcac-a.md)
+
+During the VCAC-A installation, the VCA nodes are labeled with `vcac-zone=yes` and their features are advertised with NFD. For the non-allocatable resource requirement intent, refer to the example below:
+```yaml
+---
+# add non-allocatable-resource to app hpa placement consumer
+version: emco/v2
+resourceContext:
+ anchor: projects/{{ .ProjectName }}/composite-apps/{{ .CompositeApp }}/v1/deployment-intent-groups/{{ .DeploymentIntent }}/hpa-intents/hpa-placement-intent-1/hpa-resource-consumers/hpa-placement-consumer-1/resource-requirements
+metadata:
+ name: hpa-placement-nonallocatable-resource-1
+ description: description of hpa placement_nonallocatable_resource
+spec:
+ allocatable: false
+ mandatory: true
+ weight: 1
+ resource: {"key":"vcac-zone", "value":"yes"}
+```
+After the SmartCity application is instantiated, the expected result is that the edge application is deployed only on the edge cluster that contains a VCAC-A accelerator.
diff --git a/doc/building-blocks/enhanced-platform-awareness/openness-acc100.md b/doc/building-blocks/enhanced-platform-awareness/openness-acc100.md
index 4985c27a..236379ec 100644
--- a/doc/building-blocks/enhanced-platform-awareness/openness-acc100.md
+++ b/doc/building-blocks/enhanced-platform-awareness/openness-acc100.md
@@ -5,11 +5,11 @@ Copyright (c) 2020 Intel Corporation
# Using ACC100 eASIC in OpenNESS: Resource Allocation, and Configuration
- [Overview](#overview)
-- [Intel® vRAN Dedicated Accelerator ACC100 FlexRAN Host Interface Overview](#intel-vran-dedicated-accelerator-acc100-flexran-host-interface-overview)
-- [Intel® vRAN Dedicated Accelerator ACC100 Orchestration and Deployment with Kubernetes\* for FlexRAN](#intel-vran-dedicated-accelerator-acc100-orchestration-and-deployment-with-kubernetes-for-flexran)
-- [Using the Intel® vRAN Dedicated Accelerator ACC100 on OpenNESS](#using-the-intel-vran-dedicated-accelerator-acc100-on-openness)
+- [Intel vRAN Dedicated Accelerator ACC100 FlexRAN Host Interface Overview](#intel-vran-dedicated-accelerator-acc100-flexran-host-interface-overview)
+- [Intel vRAN Dedicated Accelerator ACC100 Orchestration and Deployment with Kubernetes\* for FlexRAN](#intel-vran-dedicated-accelerator-acc100-orchestration-and-deployment-with-kubernetes-for-flexran)
+- [Using the Intel vRAN Dedicated Accelerator ACC100 on OpenNESS](#using-the-intel-vran-dedicated-accelerator-acc100-on-openness)
- [ACC100 (FEC) Ansible Installation for OpenNESS Network Edge](#acc100-fec-ansible-installation-for-openness-network-edge)
- - [OpenNESS Experience Kit](#openness-experience-kit)
+ - [Converged Edge Experience Kits](#converged-edge-experience-kits)
- [FEC VF configuration for OpenNESS Network Edge](#fec-vf-configuration-for-openness-network-edge)
- [Requesting Resources and Running Pods for OpenNESS Network Edge](#requesting-resources-and-running-pods-for-openness-network-edge)
- [Verifying Application POD Access and Usage of FPGA on OpenNESS Network Edge](#verifying-application-pod-access-and-usage-of-fpga-on-openness-network-edge)
@@ -51,7 +51,7 @@ This document explains how the ACC100 resource can be used on the Open Network E
FlexRAN is a reference layer 1 pipeline of 4G eNb and 5G gNb on Intel® architecture. The FlexRAN reference pipeline consists of an L1 pipeline, optimized L1 processing modules, BBU pooling framework, cloud and cloud-native deployment support, and accelerator support for hardware offload. Intel® vRAN Dedicated Accelerator ACC100 card is used by FlexRAN to offload FEC (Forward Error Correction) for 4G and 5G.
-## Intel® vRAN Dedicated Accelerator ACC100 FlexRAN Host Interface Overview
+## Intel vRAN Dedicated Accelerator ACC100 FlexRAN Host Interface Overview
Intel® vRAN Dedicated Accelerator ACC100 card used in the FlexRAN solution exposes the following physical functions to the CPU host:
- One FEC interface that can be used of 4G or 5G FEC acceleration
- The LTE FEC IP components have turbo encoder/turbo decoder and rate matching/de-matching
@@ -59,14 +59,14 @@ Intel® vRAN Dedicated Accelerator ACC100 card used in the FlexRAN solution expo
![Intel® vRAN Dedicated Accelerator ACC100 support](acc100-images/acc100-diagram.png)
-## Intel® vRAN Dedicated Accelerator ACC100 Orchestration and Deployment with Kubernetes\* for FlexRAN
+## Intel vRAN Dedicated Accelerator ACC100 Orchestration and Deployment with Kubernetes\* for FlexRAN
FlexRAN is a low-latency network function that implements the FEC. FlexRAN uses the FEC resources from the ACC100 using POD resource allocation and the Kubernetes\* device plugin framework. Kubernetes* provides a device plugin framework that is used to advertise system hardware resources to the Kubelet. Instead of customizing the code for Kubernetes* (K8s) itself, vendors can implement a device plugin that can be deployed either manually or as a DaemonSet. The targeted devices include GPUs, high-performance NICs, FPGAs, InfiniBand\* adapters, and other similar computing resources that may require vendor-specific initialization and setup.
![Intel® vRAN Dedicated Accelerator ACC100 Orchestration and deployment with OpenNESS Network Edge for FlexRAN](acc100-images/acc100-k8s.png)
_Figure - Intel® vRAN Dedicated Accelerator ACC100 Orchestration and deployment with OpenNESS Network Edge for FlexRAN_
-## Using the Intel® vRAN Dedicated Accelerator ACC100 on OpenNESS
+## Using the Intel vRAN Dedicated Accelerator ACC100 on OpenNESS
Further sections provide instructions on how to use the ACC100 eASIC features: configuration and accessing from an application on the OpenNESS Network Edge.
When the Intel® vRAN Dedicated Accelerator ACC100 is available on the Edge Node platform it exposes the Single Root I/O Virtualization (SRIOV) Virtual Function (VF) devices which can be used to accelerate the FEC in the vRAN workload. To take advantage of this functionality for a cloud-native deployment, the PF (Physical Function) of the device must be bound to the DPDK IGB_UIO userspace driver to create several VFs (Virtual Functions). Once the VFs are created, they must also be bound to a DPDK userspace driver to allocate them to specific K8s pods running the vRAN workload.
@@ -79,10 +79,10 @@ The full pipeline of preparing the device for workload deployment and deploying
- Simple sample BBDEV application to validate the pipeline (i.e., SRIOV creation - Queue configuration - Device orchestration - Pod deployment): Script delivery and instructions to build Docker image for sample application delivered as part of Edge Apps package.
### ACC100 (FEC) Ansible Installation for OpenNESS Network Edge
-To run the OpenNESS package with ACC100 (FEC) functionality, the feature needs to be enabled on both Edge Controller and Edge Node. It can be deployed via the ["flexran" flavor of OpenNESS](https://github.com/open-ness/ido-openness-experience-kits/tree/master/flavors/flexran).
+To run the OpenNESS package with ACC100 (FEC) functionality, the feature needs to be enabled on both Edge Controller and Edge Node. It can be deployed via the ["flexran" flavor of OpenNESS](https://github.com/open-ness/ido-converged-edge-experience-kits/tree/master/flavors/flexran).
-#### OpenNESS Experience Kit
-To enable ACC100 support from OEK, SRIOV must be enabled in OpenNESS:
+#### Converged Edge Experience Kits
+To enable ACC100 support from CEEK, SRIOV must be enabled in OpenNESS:
```yaml
# flavors/flexran/all.yml
kubernetes_cnis:
@@ -106,20 +106,17 @@ acc100_userspace_vf:
vf_driver: "vfio-pci"
```
-Run setup script `deploy_ne.sh -f flexran`.
+Run the setup script `deploy.py` with the `flexran` flavor defined in `inventory.yml` for the specific cluster.
+
+> **NOTE**: For more details about deployment and defining the inventory, please refer to the [CEEK](../../getting-started/converged-edge-experience-kits.md#converged-edge-experience-kit-explained) getting-started page.
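+
+A minimal sketch of the relevant `inventory.yml` fields (cluster name and topology values are hypothetical), following the inventory structure used by the CEEK:
+```yaml
+all:
+  vars:
+    cluster_name: flexran_cluster      # hypothetical cluster name
+    flavor: flexran
+    single_node_deployment: true       # illustrative; set according to your topology
+```
+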
After a successful deployment, the following pods will be available in the cluster:
```shell
kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
-kube-ovn kube-ovn-cni-hdgrl 1/1 Running 0 3d19h
-kube-ovn kube-ovn-cni-px79b 1/1 Running 0 3d18h
-kube-ovn kube-ovn-controller-578786b499-74vzm 1/1 Running 0 3d19h
-kube-ovn kube-ovn-controller-578786b499-j22gl 1/1 Running 0 3d19h
-kube-ovn ovn-central-5f456db89f-z7d6x 1/1 Running 0 3d19h
-kube-ovn ovs-ovn-46k8f 1/1 Running 0 3d18h
-kube-ovn ovs-ovn-5r2p6 1/1 Running 0 3d19h
+kube-system calico-kube-controllers-646546699f-wl6rn 1/1 Running 0 3d19h
+kube-system calico-node-hrtn4 1/1 Running 0 3d19h
kube-system coredns-6955765f44-mrc82 1/1 Running 0 3d19h
kube-system coredns-6955765f44-wlvhc 1/1 Running 0 3d19h
kube-system etcd-silpixa00394960 1/1 Running 0 3d19h
@@ -148,7 +145,7 @@ To configure the VFs with the necessary number of queues for the vRAN workload,
Sample configMap, which can be configured by changing values, if other than typical config is required, with a profile for the queue configuration is provided as part of Helm chart template `/opt/openness/helm-charts/bb_config/templates/acc100-config.yaml` populated with values from `/opt/openness/helm-charts/bb_config/values.yaml`. Helm chart installation requires a provision of hostname for the target node during job deployment. Additionally, the default values in Helm chart will deploy FPGA config, a flag needs to be provided to invoke ACC100 config.
-Install the Helm chart by providing configmap and BBDEV config utility job with the following command from `/opt/openness/helm-charts/` on Edge Controller:
+Install the Helm chart by providing configmap and BBDEV config utility job with the following command from `/opt/openness/helm-charts/` on Edge Controller (this job needs to be re-run on each node reboot):
```shell
helm install --set nodeName=<node_name> --set device=ACC100 intel-acc100-cfg bb_config
@@ -182,7 +179,7 @@ kubectl get node -o json | jq '.status.allocatable'
```
To request the device as a resource in the pod, add the request for the resource into the pod specification file by specifying its name and the amount of resources required. If the resource is not available or the amount of resources requested is greater than the number of resources available, the pod status will be “Pending” until the resource is available.
-**NOTE**: The name of the resource must match the name specified in the configMap for the K8s devices plugin [configMap.yml](https://github.com/open-ness/openness-experience-kits/blob/master/roles/kubernetes/cni/sriov/controlplane/files/sriov/templates/configMap.yml).
+**NOTE**: The name of the resource must match the name specified in the configMap for the K8s devices plugin [configMap.yml](https://github.com/open-ness/converged-edge-experience-kits/blob/master/roles/kubernetes/cni/sriov/controlplane/templates/configMap.yml.j2).
A sample pod requesting the ACC100 (FEC) VF may look like this:
diff --git a/doc/building-blocks/enhanced-platform-awareness/openness-bios.md b/doc/building-blocks/enhanced-platform-awareness/openness-bios.md
index 9ea4f6be..edd96ad4 100644
--- a/doc/building-blocks/enhanced-platform-awareness/openness-bios.md
+++ b/doc/building-blocks/enhanced-platform-awareness/openness-bios.md
@@ -41,13 +41,13 @@ Intel SYSCFG must be manually downloaded by the user after accepting the license
### Setup
To enable BIOSFW, perform the following steps:
-1. The SYSCFG package must be downloaded and stored inside OpenNESS Experience Kits' `biosfw/` directory as a `syscfg_package.zip`:
-`ido-openness-experience-kits/biosfw/syscfg_package.zip`
-2. Change the variable `ne_biosfw_enable` in `group_vars/all/10-open.yml` to “true”:
+1. The SYSCFG package must be downloaded and stored inside Converged Edge Experience Kits' `biosfw/` directory as a `syscfg_package.zip`:
+`ido-converged-edge-experience-kits/ceek/biosfw/syscfg_package.zip`
+2. Change the variable `ne_biosfw_enable` in `inventory/default/group_vars/all/10-open.yml` to “true”:
```yaml
ne_biosfw_enable: true
```
-3. OpenNESS Experience Kits' NetworkEdge deployment for both controller and nodes can be started.
+3. Converged Edge Experience Kits' NetworkEdge deployment for both controller and nodes can be started.
### Usage
diff --git a/doc/building-blocks/enhanced-platform-awareness/openness-dedicated-core-cmk-deprecated.md b/doc/building-blocks/enhanced-platform-awareness/openness-dedicated-core-cmk-deprecated.md
new file mode 100644
index 00000000..cf12441a
--- /dev/null
+++ b/doc/building-blocks/enhanced-platform-awareness/openness-dedicated-core-cmk-deprecated.md
@@ -0,0 +1,141 @@
+```text
+SPDX-License-Identifier: Apache-2.0
+Copyright (c) 2019-2020 Intel Corporation
+```
+
+# Dedicated CPU core for workload support in OpenNESS
+**CMK support was deprecated in OpenNESS release 21.03 and replaced with the Kubernetes native CPU Manager.**
+
+- [Overview](#overview)
+- [Details - CPU Manager support in OpenNESS](#details---cpu-manager-support-in-openness)
+ - [Setup](#setup)
+ - [Usage](#usage)
+- [Reference](#reference)
+
+## Overview
+Multi-core, commercial, off-the-shelf platforms are typical in any cloud or cloud-native deployment. Running processes in parallel on multiple cores helps achieve a better density of processes per platform. On a multi-core platform, one challenge for applications and network functions that are latency and throughput dependent is deterministic compute. It is important to achieve deterministic compute that can allocate dedicated resources. Dedicated resource allocation avoids interference with other applications (noisy neighbor). When deploying on a cloud-native platform, applications are deployed as PODs. And providing required information to the container orchestrator on dedicated CPU cores is key. CPU manager allows provisioning of a POD to dedicated cores.
+
+![CPU Manager - CMK ](cmk-images/cmk1.png)
+
+_Figure - CPU Manager - CMK_
+
+The following are typical usages of this feature.
+
+- Consider an edge application that uses an AI library such as OpenVINO™ for inference. This library uses a special instruction set on the CPU to get a higher performance for the AI algorithm. To achieve a deterministic inference rate, the application thread executing the algorithm needs a dedicated CPU core so that there is no interference from other threads or other application pods (noisy neighbor).
+
+![CPU Manager support on OpenNESS ](cmk-images/cmk2.png)
+
+_Figure - CPU Manager support on OpenNESS_
+
+>**NOTE**: With Linux CPU isolation and CPU Manager for Kubernetes\* (CMK), a certain amount of isolation can be achieved but not all the kernel threads can be moved away.
+
+What is CMK?
+The following section outlines some considerations for using CMK:
+
+- If the workload already uses a threading library (e.g., pthread) and uses set affinity like APIs, CMK may not be needed. For such workloads, to provide cores to use for deployment, Kubernetes ConfigMaps are the recommended methodology. ConfigMaps can be used to pass the CPU core mask to the application for use.
+- The workload is a medium to long-lived process with interarrival times on the order of ones to tens of seconds or greater.
+- After a workload has started executing, there is no need to dynamically update its CPU assignments.
+- Machines running workloads explicitly isolated by CMK must be guarded against other workloads that do not consult the CMK toolchain. The recommended way to do this is for the operator to taint the node. The provided cluster-init sub-command automatically adds such a taint.
+- CMK does not need to perform additional tuning to IRQ affinity, CFS settings, or process scheduling classes.
+- The preferred mode of deploying additional infrastructure components is to run them in containers on top of Kubernetes.
+
+CMK accomplishes core isolation by controlling what logical CPUs each container may use for execution by wrapping target application commands with the CMK command-line program. The CMK wrapper program maintains state in a directory hierarchy on disk that describes pools from which user containers can acquire available CPU lists. These pools can be exclusive (only one container per CPU list) or non-exclusive (multiple containers can share a CPU list.) Each CPU list directory contains a task file that tracks process IDs of the container subcommand(s) that acquired the CPU list. When the child process exits, the CMK wrapper program clears its PID from the tasks file. If the wrapper program is killed before it can perform this cleanup step, a separate periodic reconciliation program detects this condition and cleans the tasks file accordingly. A file system lock guards against conflicting concurrent modifications.
+
+## Details - CPU Manager support in OpenNESS
+
+[CPU Manager for Kubernetes (CMK)](https://github.com/intel/CPU-Manager-for-Kubernetes) is a Kubernetes plugin that provides core affinity for applications deployed as Kubernetes pods. It is advised to use “isolcpus” for core isolation when using CMK (otherwise full isolation cannot be guaranteed).
+
+CMK is a command-line program that wraps the target application to provide core isolation (an example pod with an application wrapped by CMK is given in the [Usage](#usage-3) section).
+
+CMK documentation available on GitHub\* includes:
+
+- [operator manual](https://github.com/intel/CPU-Manager-for-Kubernetes/blob/master/docs/operator.md)
+- [user manual](https://github.com/intel/CPU-Manager-for-Kubernetes/blob/master/docs/user.md)
+
+CMK can be deployed using a [Helm chart](https://helm.sh/). The CMK Helm chart used in OpenNESS deployment is available on the following GitHub repository: [container-experience-kits](https://github.com/intel/container-experience-kits/tree/master/roles/cmk_install).
+
+### Setup
+
+**Edge Controller / Kubernetes control plane**
+
+1. In `inventory/default/group_vars/all/10-open.yml`, change `ne_cmk_enable` to `true` and adjust the settings if needed.
+ CMK default settings are:
+ ```yaml
+ # CMK - Number of cores in exclusive pool
+ cmk_num_exclusive_cores: "4"
+ # CMK - Number of cores in shared pool
+ cmk_num_shared_cores: "1"
+ # CMK - Comma separated list of nodes' hostnames
+ cmk_host_list: "node01,node02"
+ ```
+2. Deploy the controller with `deploy_ne.sh -f controller`.
+
+**Edge Node / Kubernetes node**
+
+1. In `inventory/default/group_vars/all/10-open.yml`, change `ne_cmk_enable` to `true`.
+2. To change core isolation, set the isolated cores as `additional_grub_params` for your node in `inventory/default/group_vars/edgenode_group/10-default.yml`, e.g., set `additional_grub_params: "isolcpus=1-10,49-58"`.
+3. Deploy the node with `deploy_ne.sh -f node`.
+
+The environment setup can be validated using steps from the [CMK operator manual](https://github.com/intel/CPU-Manager-for-Kubernetes/blob/master/docs/operator.md#validating-the-environment).
+
+**Note:**
+Up to release 20.12, choosing a flavor was optional. From release 21.03 onward, this parameter is mandatory. To learn more about flavors, go to [this page](https://github.com/open-ness/specs/blob/master/doc/flavors.md).
+
+### Usage
+
+The following example creates a `Pod` that can be used to deploy an application pinned to a core:
+
+1. `DEPLOYED-APP` in `args` should be changed to the name of the deployed application (likewise for labels and names).
+2. The `image` value `DEPLOYED-APP-IMG:latest` should be changed to a valid application image available in Docker\* (if the image is to be downloaded, change `imagePullPolicy` to `Always`):
+
+```bash
+cat <<EOF | kubectl create -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  labels:
+    app: cmk-DEPLOYED-APP
+  name: cmk-DEPLOYED-APP
+spec:
+  containers:
+  - args:
+    - "/opt/bin/cmk isolate --conf-dir=/etc/cmk --pool=exclusive DEPLOYED-APP"
+    command:
+    - "/bin/bash"
+    - "-c"
+    env:
+    - name: CMK_PROC_FS
+      value: "/host/proc"
+    image: DEPLOYED-APP-IMG:latest
+    imagePullPolicy: "Never"
+    name: cmk-DEPLOYED-APP
+    resources:
+      limits:
+        cmk.intel.com/exclusive-cores: 1
+      requests:
+        cmk.intel.com/exclusive-cores: 1
+  restartPolicy: Never
+EOF
+```
+
+>**NOTE**: CMK requires a modification of the pod manifest for all deployed pods:
+> - `nodeName:` must be added under the pod `spec:` section before deploying the application (to indicate the node on which the pod is to be deployed),
+>
+> alternatively,
+> - a toleration must be added to the deployed pod under `spec:`:
+>
+> ```yaml
+> ...
+> tolerations:
+> - ...
+> - effect: NoSchedule
+>   key: cmk
+>   operator: Exists
+> ```
+
+## Reference
+- [CPU Manager Repo](https://github.com/intel/CPU-Manager-for-Kubernetes)
+- More examples of Kubernetes manifests are available in the [CMK repository](https://github.com/intel/CPU-Manager-for-Kubernetes/tree/master/resources/pods) and [documentation](https://github.com/intel/CPU-Manager-for-Kubernetes/blob/master/docs/user.md).
diff --git a/doc/building-blocks/enhanced-platform-awareness/openness-dedicated-core.md b/doc/building-blocks/enhanced-platform-awareness/openness-dedicated-core.md
index 39f60f84..3bdece64 100644
--- a/doc/building-blocks/enhanced-platform-awareness/openness-dedicated-core.md
+++ b/doc/building-blocks/enhanced-platform-awareness/openness-dedicated-core.md
@@ -5,132 +5,157 @@ Copyright (c) 2019-2020 Intel Corporation
# Dedicated CPU core for workload support in OpenNESS
- [Overview](#overview)
+ - [What is Kubernetes Native CPU management?](#what-is-kubernetes-native-cpu-management)
- [Details - CPU Manager support in OpenNESS](#details---cpu-manager-support-in-openness)
- [Setup](#setup)
- - [Usage](#usage)
-- [Reference](#reference)
+ - [CPU Manager QoS classes](#cpu-manager-qos-classes)
+ - [POD definitions](#pod-definitions)
+ - [Examples](#examples)
## Overview
Multi-core, commercial, off-the-shelf platforms are typical in any cloud or cloud-native deployment. Running processes in parallel on multiple cores helps achieve a better density of processes per platform. On a multi-core platform, one challenge for applications and network functions that are latency and throughput dependent is deterministic compute. It is important to achieve deterministic compute that can allocate dedicated resources. Dedicated resource allocation avoids interference with other applications (noisy neighbor). When deploying on a cloud-native platform, applications are deployed as PODs. And providing required information to the container orchestrator on dedicated CPU cores is key. CPU manager allows provisioning of a POD to dedicated cores.
-![CPU Manager - CMK ](cmk-images/cmk1.png)
-
-_Figure - CPU Manager - CMK_
+OpenNESS release 21.03 deprecated Intel CMK in favor of Kubernetes native CPU Management.
The following are typical usages of this feature.
- Consider an edge application that uses an AI library such as OpenVINO™ for inference. This library uses a special instruction set on the CPU to get a higher performance for the AI algorithm. To achieve a deterministic inference rate, the application thread executing the algorithm needs a dedicated CPU core so that there is no interference from other threads or other application pods (noisy neighbor).
-![CPU Manager support on OpenNESS ](cmk-images/cmk2.png)
-
-_Figure - CPU Manager support on OpenNESS_
-
->**NOTE**: With Linux CPU isolation and CPU Manager for Kubernetes\* (CMK), a certain amount of isolation can be achieved but not all the kernel threads can be moved away.
-What is CMK?
-The following section outlines some considerations for using CMK:
+### What is Kubernetes Native CPU management?
-- If the workload already uses a threading library (e.g., pthread) and uses set affinity like APIs, CMK may not be needed. For such workloads, to provide cores to use for deployment, Kubernetes ConfigMaps are the recommended methodology. ConfigMaps can be used to pass the CPU core mask to the application for use.
+- If the workload already uses a threading library (e.g., pthread) and set-affinity-like APIs, Kubernetes CPU Management may not be needed. For such workloads, Kubernetes ConfigMaps are the recommended way to provide the cores to use for deployment; a ConfigMap can pass the CPU core mask to the application. However, Kubernetes CPU Management offers transparent, out-of-the-box support for CPU management that does not need any additional configuration. The only issue is that threading-aware software can interfere with Kubernetes when Kubernetes is configured to use the CPU Manager.
- The workload is a medium to long-lived process with interarrival times on the order of ones to tens of seconds or greater.
- After a workload has started executing, there is no need to dynamically update its CPU assignments.
-- Machines running workloads explicitly isolated by CMK must be guarded against other workloads that do not consult the CMK toolchain. The recommended way to do this is for the operator to taint the node. The provided cluster-init sub-command automatically adds such a taint.
-- CMK does not need to perform additional tuning to IRQ affinity, CFS settings, or process scheduling classes.
+- Kubernetes CPU management does not need to perform additional tuning to IRQ affinity, CFS settings, or process scheduling classes.
- The preferred mode of deploying additional infrastructure components is to run them in containers on top of Kubernetes.
-CMK accomplishes core isolation by controlling what logical CPUs each container may use for execution by wrapping target application commands with the CMK command-line program. The CMK wrapper program maintains state in a directory hierarchy on disk that describes pools from which user containers can acquire available CPU lists. These pools can be exclusive (only one container per CPU list) or non-exclusive (multiple containers can share a CPU list.) Each CPU list directory contains a task file that tracks process IDs of the container subcommand(s) that acquired the CPU list. When the child process exits, the CMK wrapper program clears its PID from the tasks file. If the wrapper program is killed before it can perform this cleanup step, a separate periodic reconciliation program detects this condition and cleans the tasks file accordingly. A file system lock guards against conflicting concurrent modifications.
+The default kubelet configuration uses the [CFS quota](https://en.wikipedia.org/wiki/Completely_Fair_Scheduler) to manage PODs' execution times and enforce the imposed CPU limits. With this approach, individual PODs may be moved between different CPUs as circumstances on the Kubernetes node change: when certain PODs end their lifespan or CPU throttling kicks in, a POD can be moved to another CPU.
+
+Another solution supported by Kubernetes, and the default for OpenNESS, is the CPU Manager. The CPU Manager uses the [Linux CPUSET](https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt) mechanism to schedule PODs to individual CPUs. Kubernetes defines a shared pool of CPUs, which initially contains all the system CPUs minus the CPUs reserved for the system and the kubelet itself. The CPU selection is configurable with kubelet options. Kubernetes uses the shared CPU pool to schedule PODs with three QoS classes: `BestEffort`, `Burstable`, and `Guaranteed`.
+When a POD qualifies for the `Guaranteed` QoS class, the kubelet removes the requested amount of CPUs from the shared pool and assigns the POD exclusively to those CPUs.
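+
+For reference, a minimal sketch of the upstream kubelet settings that the Ansible variables in the Setup section below presumably map to (illustrative, not an OpenNESS file):
+
+```yaml
+apiVersion: kubelet.config.k8s.io/v1beta1
+kind: KubeletConfiguration
+# "static" enables exclusive CPU assignment for Guaranteed PODs with integral CPU requests
+cpuManagerPolicy: static
+# CPUs kept out of the shared pool for the OS and the kubelet itself
+reservedSystemCPUs: "0,1"
+```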
## Details - CPU Manager support in OpenNESS
-[CPU Manager for Kubernetes (CMK)](https://github.com/intel/CPU-Manager-for-Kubernetes) is a Kubernetes plugin that provides core affinity for applications deployed as Kubernetes pods. It is advised to use “isolcpus” for core isolation when using CMK (otherwise full isolation cannot be guaranteed).
+### Setup
-CMK is a command-line program that wraps target application to provide core isolation (an example pod with an application wrapped by CMK is given in [Usage](#usage-3) section).
+**Deployment setup**
-CMK documentation available on GitHub\* includes:
+1. Kubernetes CPU Management needs the CPU Manager Policy to be set to `static`, which is the default option in OpenNESS. This can be examined in the `inventory/default/group_vars/all/10-open.yml` file.
+ ```yaml
+ # CPU policy - possible values: none (disabled), static (default)
+ policy: "static"
+ ```
+2. The number of CPUs reserved for Kubernetes and the operating system is defined in the `inventory/default/group_vars/all/10-open.yml` file.
+ ```yaml
+ # Reserved CPUs for K8s and OS daemons - list of reserved CPUs
+ reserved_cpus: "0,1"
+ ```
+3. Deploy the node with `deploy.py`.
+> **NOTE**: For more details about deployment and defining the inventory, please refer to the [CEEK](../../getting-started/converged-edge-experience-kits.md#converged-edge-experience-kit-explained) getting started page.
-- [operator manual](https://github.com/intel/CPU-Manager-for-Kubernetes/blob/master/docs/operator.md)
-- [user manual](https://github.com/intel/CPU-Manager-for-Kubernetes/blob/master/docs/user.md)
+**Edge Controller / Kubernetes control plane**
-CMK can be deployed using a [Helm chart](https://helm.sh/). The CMK Helm chart used in OpenNESS deployment is available on the following GitHub repository: [container-experience-kits](https://github.com/intel/container-experience-kits/tree/master/roles/cmk-install).
+No setup needed.
-### Setup
+**Edge Node / Kubernetes node**
-**Edge Controller / Kubernetes control plane**
+No setup needed.
-1. In `group_vars/all/10-open.yml`, change `ne_cmk_enable` to `true` and adjust the settings if needed.
- CMK default settings are:
- ```yaml
- # CMK - Number of cores in exclusive pool
- cmk_num_exclusive_cores: "4"
- # CMK - Number of cores in shared pool
- cmk_num_shared_cores: "1"
- # CMK - Comma separated list of nodes' hostnames
- cmk_host_list: "node01,node02"
- ```
-2. Deploy the controller with `deploy_ne.sh controller`.
+### CPU Manager QoS classes
+Kubernetes defines three quality of service (QoS) classes for PODs, which the CPU Manager takes into account:
+- Best effort
+ The `BestEffort` QoS class is assigned to PODs that do not define any memory and CPU limits and requests. PODs from this QoS class run in the shared pool.
+- Burstable
+ The `Burstable` QoS class is assigned to PODs that define memory or CPU limits and requests that do not match. PODs from the `Burstable` QoS class run in the shared pool.
+- Guaranteed
+ The `Guaranteed` QoS class is assigned to PODs that define memory and CPU limits and requests where those two values are equal. The values set for CPU limits and requests have to be integral; specifying a fractional CPU causes the POD to be run on the shared pool.
-**Edge Node / Kubernetes node**
+### POD definitions
+A POD defined without any constraints. This will be assigned the `BestEffort` QoS class and will run on the shared pool.
+```yaml
+spec:
+ containers:
+ - name: nginx
+ image: nginx
+```
-1. In `group_vars/all/10-open.yml`, change `ne_cmk_enable` to “true”.
-2. To change core isolation set isolated cores in `group_vars/edgenode_group/10-open.yml` as `additional_grub_params` for your node e.g. in `group_vars/edgenode_group/10-open.yml`, set `additional_grub_params: "isolcpus=1-10,49-58"`.
-3. Deploy the node with `deploy_ne.sh node`.
+A POD defined with some constraints. This will be assigned the `Burstable` QoS class and will run on the shared pool.
+```yaml
+spec:
+ containers:
+ - name: nginx
+ image: nginx
+ resources:
+ limits:
+ memory: "200Mi"
+ cpu: "2"
+ requests:
+ memory: "100Mi"
+ cpu: "1"
+```
-The environment setup can be validated using steps from the [CMK operator manual](https://github.com/intel/CPU-Manager-for-Kubernetes/blob/master/docs/operator.md#validating-the-environment).
+A POD defined with constraints, where limits are equal to requests and the CPU is an integral value greater than or equal to one. This will be assigned the `Guaranteed` QoS class and will run exclusively on CPUs assigned by Kubernetes.
+```yaml
+spec:
+ containers:
+ - name: nginx
+ image: nginx
+ resources:
+ limits:
+ memory: "200Mi"
+ cpu: "2"
+ requests:
+ memory: "200Mi"
+ cpu: "2"
+```
-### Usage
+A POD defined with constraints where limits are equal to requests but the CPU is specified as a fractional number will not get exclusive CPUs and will run on the shared pool. Still, the QoS class for such a POD is `Guaranteed`.
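+
+A minimal sketch of such a definition (values are illustrative; `1500m` denotes 1.5 CPU):
+
+```yaml
+spec:
+  containers:
+  - name: nginx
+    image: nginx
+    resources:
+      limits:
+        memory: "200Mi"
+        cpu: "1500m"
+      requests:
+        memory: "200Mi"
+        cpu: "1500m"
+```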
-The following example creates a `Pod` that can be used to deploy an application pinned to a core:
-1. `DEPLOYED-APP` in `args` should be changed to deployed application name (the same for labels and names)
-2. `image` value `DEPLOYED-APP-IMG:latest` should be changed to valid application image available in Docker\* (if the image is to be downloaded, change `ImagePullPolicy` to `Always`):
+### Examples
-```bash
-cat <
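+The manifest header below (apiVersion, kind, metadata) is a minimal sketch added for completeness; the pod name `test-pod` matches the `kubectl describe` command used later in this section.
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: test-pod
+spec: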
containers:
- - args:
- - "/opt/bin/cmk isolate --conf-dir=/etc/cmk --pool=exclusive DEPLOYED-APP"
- command:
- - "/bin/bash"
- - "-c"
- env:
- - name: CMK_PROC_FS
- value: "/host/proc"
- image: DEPLOYED-APP-IMG:latest
- imagePullPolicy: "Never"
- name: cmk-DEPLOYED-APP
+ - name: nginx
+ image: nginx
+ imagePullPolicy: "IfNotPresent"
resources:
limits:
- cmk.intel.com/exclusive-cores: 1
+ cpu: 1
+ memory: "200Mi"
requests:
- cmk.intel.com/exclusive-cores: 1
+ cpu: 1
+ memory: "200Mi"
restartPolicy: Never
-EOF
+ ```
+
+
+ The scheduled POD is assigned the `Guaranteed` quality of service class; this can be examined by issuing `kubectl describe pod/test-pod`.
+
+Part of a sample output is:
+ ```yaml
+ QoS Class: Guaranteed
+ ```
+
+The processor affinity of individual processes/threads can be checked on the node where the POD was scheduled with the `taskset` command.
+A process started by a container in a `Guaranteed` QoS class POD has its CPU affinity set according to the POD definition: it runs exclusively on CPUs removed from the shared pool. All processes spawned from a POD assigned to the `Guaranteed` QoS class are scheduled to run on the same exclusive CPU. Processes from `Burstable` and `BestEffort` QoS class PODs are scheduled to run on the shared pool CPUs. This can be examined with the example nginx container.
+
+```bash
+[root@vm ~]# for p in `top -n 1 -b|grep nginx|gawk '{print $1}'`; do taskset -c -p $p; done
+pid 5194's current affinity list: 0,1,3-7
+pid 5294's current affinity list: 0,1,3-7
+pid 7187's current affinity list: 0,1,3-7
+pid 7232's current affinity list: 0,1,3-7
+pid 17715's current affinity list: 2
+pid 17757's current affinity list: 2
```
->**NOTE**: CMK requires modification of deployed pod manifest for all deployed pods:
-> - nodeName: must be added under pod spec section before deploying application (to point node on which pod is to be deployed)
->
-> alternatively,
-> - toleration must be added to deployed pod under spec:
->
-> ```yaml
-> ...
-> tolerations:
->
-> - ...
->
-> - effect: NoSchedule
-> key: cmk
-> operator: Exists
-> ```
-
-## Reference
-- [CPU Manager Repo](https://github.com/intel/CPU-Manager-for-Kubernetes)
-- More examples of Kubernetes manifests are available in the [CMK repository](https://github.com/intel/CPU-Manager-for-Kubernetes/tree/master/resources/pods) and [documentation](https://github.com/intel/CPU-Manager-for-Kubernetes/blob/master/docs/user.md).
+
diff --git a/doc/building-blocks/enhanced-platform-awareness/openness-fpga.md b/doc/building-blocks/enhanced-platform-awareness/openness-fpga.md
index ef302795..82534786 100644
--- a/doc/building-blocks/enhanced-platform-awareness/openness-fpga.md
+++ b/doc/building-blocks/enhanced-platform-awareness/openness-fpga.md
@@ -10,7 +10,7 @@ Copyright (c) 2019-2020 Intel Corporation
- [Intel(R) FPGA PAC N3000 remote system update flow in OpenNESS Network edge Kubernetes](#intelr-fpga-pac-n3000-remote-system-update-flow-in-openness-network-edge-kubernetes)
- [Using an FPGA on OpenNESS](#using-an-fpga-on-openness)
- [FPGA (FEC) Ansible installation for OpenNESS Network Edge](#fpga-fec-ansible-installation-for-openness-network-edge)
- - [OpenNESS Experience Kit](#openness-experience-kit)
+ - [Converged Edge Experience Kits](#converged-edge-experience-kits)
- [FPGA programming and telemetry on OpenNESS Network Edge](#fpga-programming-and-telemetry-on-openness-network-edge)
- [Telemetry monitoring](#telemetry-monitoring)
- [FEC VF configuration for OpenNESS Network Edge](#fec-vf-configuration-for-openness-network-edge)
@@ -84,25 +84,25 @@ For information on how to update and flash the MAX10 to supported version see [I
### FPGA (FEC) Ansible installation for OpenNESS Network Edge
To run the OpenNESS package with FPGA (FEC) functionality, the feature needs to be enabled on both Edge Controller and Edge Node.
-#### OpenNESS Experience Kit
-To enable FPGA support from OEK, change the variable `ne_opae_fpga_enable` in `group_vars/all/10-default.yml` (or flavour alternative file) to `true`:
+#### Converged Edge Experience Kits
+To enable FPGA support from CEEK, change the variable `ne_opae_fpga_enable` in `inventory/default/group_vars/all/10-open.yml` (or flavor alternative file) to `true`:
```yaml
-# group_vars/all/10-default.yml
+# inventory/default/group_vars/all/10-open.yml
ne_opae_fpga_enable: true
```
Additionally, SRIOV must be enabled in OpenNESS:
```yaml
-# group_vars/all/10-default.yml
+# inventory/default/group_vars/all/10-open.yml
kubernetes_cnis:
-
- sriov
```
-Also, enable the following options in `group_vars/all/10-default.yml`:
+Also, enable the following options in `inventory/default/group_vars/all/10-open.yml`:
The following device config is the default config for the Intel® FPGA PAC N3000 with a 5GNR vRAN user image tested (this configuration is common to both the EdgeNode and EdgeController setup).
```yaml
-# group_var/all/10-default.yml
+# inventory/default/group_vars/all/10-open.yml
fpga_sriov_userspace_enable: true
@@ -117,22 +117,19 @@ fpga_userspace_vf:
The following packages need to be placed into specific directories for the feature to work:
-1. The OPAE package `OPAE_SDK_1.3.7-5_el7.zip` needs to be placed inside the `ido-openness-experience-kits/opae_fpga` directory. The package can be obtained as part of Intel® FPGA PAC N3000 OPAE beta release. To obtain the package, contact your Intel representative.
+1. The OPAE package `OPAE_SDK_1.3.7-5_el7.zip` needs to be placed inside the `converged-edge-experience-kits/opae_fpga` directory. The package can be obtained as part of Intel® FPGA PAC N3000 OPAE beta release. To obtain the package, contact your Intel representative.
-Run setup script `deploy_ne.sh`.
+Run the setup script `deploy.py` with the defined `inventory.yml` file.
+
+> **NOTE**: For more details about deployment and defining the inventory, please refer to the [CEEK](../../getting-started/converged-edge-experience-kits.md#converged-edge-experience-kit-explained) getting started page.
After a successful deployment, the following pods will be available in the cluster (CNI pods may vary depending on deployment):
```shell
kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
-kube-ovn kube-ovn-cni-hdgrl 1/1 Running 0 3d19h
-kube-ovn kube-ovn-cni-px79b 1/1 Running 0 3d18h
-kube-ovn kube-ovn-controller-578786b499-74vzm 1/1 Running 0 3d19h
-kube-ovn kube-ovn-controller-578786b499-j22gl 1/1 Running 0 3d19h
-kube-ovn ovn-central-5f456db89f-z7d6x 1/1 Running 0 3d19h
-kube-ovn ovs-ovn-46k8f 1/1 Running 0 3d18h
-kube-ovn ovs-ovn-5r2p6 1/1 Running 0 3d19h
+kube-system calico-kube-controllers-646546699f-wl6rn 1/1 Running 0 3d19h
+kube-system calico-node-hrtn4 1/1 Running 0 3d19h
kube-system coredns-6955765f44-mrc82 1/1 Running 0 3d19h
kube-system coredns-6955765f44-wlvhc 1/1 Running 0 3d19h
kube-system etcd-silpixa00394960 1/1 Running 0 3d19h
@@ -159,7 +156,7 @@ openness syslog-ng-br92z 1/1 Running 0
### FPGA programming and telemetry on OpenNESS Network Edge
It is expected the the factory image of the Intel® FPGA PAC N3000 is of version 2.0.x. To program the user image (5GN FEC vRAN) of the Intel® FPGA PAC N3000 via OPAE a `kubectl` plugin for K8s is provided - it is expected that the provided user image is signed or un-signed (development purposes) by the user, see the [documentation](https://www.intel.com/content/www/us/en/programmable/documentation/pei1570494724826.html) for more information on how to sign/un-sign the image file. The plugin also allows for obtaining basic FPGA telemetry. This plugin will deploy K8s jobs that run to completion on the desired host and display the logs/output of the command.
-The following are the operations supported by the `kubectl rsu` K8s plugin. They are run from the Edge Controller:
+The following are the operations supported by the `kubectl rsu` K8s plugin. They are run from the Edge Controller (the user who runs the commands needs to be a privileged user):
1. To check the version of the MAX10 image and FW run:
```
@@ -224,7 +221,7 @@ To run vRAN workloads on the Intel® FPGA PAC N3000, the FPGA must be programmed
#### Telemetry monitoring
- Support for monitoring temperature and power telemetry of the Intel® FPGA PAC N3000 is also provided from OpenNESS with a CollectD collector that is configured for the `flexran` flavor. Intel® FPGA PAC N3000 telemetry monitoring is provided to CollectD as a plugin. It collects the temperature and power metrics from the card and exposes them to Prometheus\* from which the user can easily access the metrics. For more information on how to enable telemetry for FPGA in OpenNESS, see the [telemetry whitepaper](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-telemetry.md#collectd).
+ Support for monitoring temperature and power telemetry of the Intel® FPGA PAC N3000 is also provided from OpenNESS with a CollectD collector that is configured for the `flexran` flavor. Intel® FPGA PAC N3000 telemetry monitoring is provided to CollectD as a plugin. It collects the temperature and power metrics from the card and exposes them to Prometheus\* from which the user can easily access the metrics. For more information on how to enable telemetry for FPGA in OpenNESS, see the [telemetry whitepaper](../../building-blocks/enhanced-platform-awareness/openness-telemetry.md#collectd).
![PACN3000 telemetry](fpga-images/openness-fpga4.png)
@@ -233,7 +230,7 @@ To configure the VFs with the necessary number of queues for the vRAN workload t
Sample configMap, which can be configured by changing values if other than typical configuration is required, with a profile for the queue configuration, is provided as part of Helm chart template `/opt/openness/helm-charts/bb_config/templates/fpga-config.yaml` populated with values from `/opt/openness/helm-charts/bb_config/values.yaml`. Helm chart installation requires a provision of hostname for the target node during job deployment.
-Install the Helm chart by providing configmap and BBDEV config utility job with the following command from `/opt/openness/helm-charts/` on Edge Controller:
+Install the Helm chart by providing configmap and BBDEV config utility job with the following command from `/opt/openness/helm-charts/` on Edge Controller (this job needs to be re-run on each node reboot):
```shell
helm install --set nodeName= intel-fpga-cfg bb_config
@@ -267,7 +264,7 @@ kubectl get node -o json | jq '.status.allocatable'
```
To request the device as a resource in the pod, add the request for the resource into the pod specification file by specifying its name and amount of resources required. If the resource is not available or the amount of resources requested is greater than the number of resources available, the pod status will be “Pending” until the resource is available.
-**NOTE**: The name of the resource must match the name specified in the configMap for the K8s devices plugin [configMap.yml](https://github.com/open-ness/openness-experience-kits/blob/master/roles/kubernetes/cni/sriov/controlplane/files/sriov/templates/configMap.yml).
+**NOTE**: The name of the resource must match the name specified in the configMap for the K8s devices plugin [configMap.yml](https://github.com/open-ness/converged-edge-experience-kits/blob/master/roles/kubernetes/cni/sriov/controlplane/templates/configMap.yml.j2).
A sample pod requesting the FPGA (FEC) VF may look like this:
@@ -321,7 +318,7 @@ Build the image:
`./build-image.sh`
-From the Edge Controlplane, deploy the application pod. The pod specification is located at `/opt/openness/edgenode/edgecontroller/fpga/fpga-sample-app.yaml`:
+From the Edge Controlplane, deploy the application pod. The pod specification is located at `/opt/openness/edgeservices/edgecontroller/fpga/fpga-sample-app.yaml`:
```
kubectl create -f fpga-sample-app.yaml
diff --git a/doc/building-blocks/enhanced-platform-awareness/openness-hugepage.md b/doc/building-blocks/enhanced-platform-awareness/openness-hugepage.md
index e8bbe4f8..c02b6f0e 100644
--- a/doc/building-blocks/enhanced-platform-awareness/openness-hugepage.md
+++ b/doc/building-blocks/enhanced-platform-awareness/openness-hugepage.md
@@ -19,25 +19,25 @@ Both applications and network functions can improve performance using HugePages.
## Details of HugePage support on OpenNESS
-OpenNESS deployment enables hugepages by default and provides parameters for tuning hugepages:
+Deployment of OpenNESS' minimal flavor does not enable hugepages.
+To enable hugepages, either use a flavor that supports hugepages (e.g., flexran) or enable them by editing the `default_grub_params` variable in `group_vars` and/or `host_vars`. The suggested value for hugepage enablement is `default_hugepagesz={{ hugepage_size }} hugepagesz={{ hugepage_size }} hugepages={{ hugepage_amount }}`.
+
+Next, the following parameters can be used for tuning hugepages:
* `hugepage_size` - size, which can be either `2M` or `1G`
* `hugepage_amount` - amount
-By default, these variables have values:
-
-| Mode | Machine type | `hugepage_amount` | `hugepage_size` | Comments |
-| ------------ | ------------ | :---------------: | :-------------: | -------------------------------------------- |
-| Network Edge | Controller | `1024` | `2M` | |
-| | Node | `1024` | `2M` | |
+Previously, the default values were:
+| Machine type | `hugepage_amount` | `hugepage_size` |
+|--------------|-------------------|-----------------|
+| Controller | `1024` | `2M` |
+| Node | `1024` | `2M` |
Find below a guide on changing these values. Customizations must be made before OpenNESS deployment.
Variables for hugepage customization can be placed in several files:
-
-* `group_vars/controller_group/10-open.yml` and `group_vars/edgenode_group/10-open.yml` will affect Edge Controller and Edge Nodes respectively in every mode
-* `host_vars/.yml` will only affect `` host present in `inventory.ini` (in all modes)
-* Hugepages can be also specified for mode and machine type, e.g. hugepages for NetworkEdge Edge Node can be set in `network_edge.yml` in a play for Edge Nodes:
-
+* `inventory/default/group_vars/controller_group/10-open.yml` and `inventory/default/group_vars/edgenode_group/10-open.yml` will affect Edge Controller and Edge Nodes
+* `inventory/default/host_vars//10-open.yml` will only affect `` host present in `inventory.yml`
+* Hugepages can also be specified inside the playbook; however, due to Ansible's\* variable precedence, this is not recommended (it will override both `group_vars` and `host_vars`). For example:
```yaml
# network_edge.yml
@@ -45,52 +45,54 @@ Variables for hugepage customization can be placed in several files:
vars:
hugepage_amount: "5000"
```
- >**NOTE**: Due to Ansible’s\* variable precedence, configuring hugepages in `network_edge.yml` is not recommended because it overrides customization in `group_vars` and `host_vars`.
The usage is summarized in the following table:
-| File | Network Edge | Native On Premises | Edge Controller | Edge Node | Comment |
-| --------------------------------------------- | :----------: | :---------: | :------------------------------------: | :-----------------------------------------------: | :-----------------------------------------------------------------------------: |
-| `group_vars/controller_group/10-open.yml` | yes | yes | yes | | |
-| `group_vars/edgenode_group/10-open.yml` | yes | yes | | yes - every node | |
-| `host_vars//10-open.yml` | yes | yes | yes | yes | affects machine specified in `inventory.ini` with name `` |
-| `network_edge.yml` | yes | | `vars` under `hosts: controller_group` | `vars` under `hosts: edgenode_group` - every node | not recommended |
+| File | Edge Controller | Edge Node | Comment |
+|--------------------------------------------------------------------|----------------------------------------|---------------------------------------------------|---------------------------------------------------------------------------------|
+| `inventory/default/group_vars/controller_group/10-open.yml` | yes | | |
+| `inventory/default/group_vars/edgenode_group/10-open.yml` | | yes - every node | |
+| `inventory/default/host_vars//10-open.yml` | yes | yes | affects machine specified in `inventory.yml` with name `` |
+| `network_edge.yml` | `vars` under `hosts: controller_group` | `vars` under `hosts: edgenode_group` - every node | not recommended |
Note that variables have precedence:
1. **not recommended:** `network_edge.yml` will always take precedence for files from this list (overrides every other var)
-2. `host_vars/`
-3. `group_vars/edgenode_group/20-enhanced.yml` and `group_vars/controller_group/20-enhanced.yml`
-4. `group_vars/edgenode_group/10-open.yml` and `group_vars/controller_group/10-open.yml`
-5. `group_vars/all/20-enhanced.yml`
-6. `group_vars/all/10-open.yml`
+2. `inventory/default/host_vars/`
+3. `inventory/default/group_vars/edgenode_group/20-enhanced.yml` and `inventory/default/group_vars/controller_group/20-enhanced.yml`
+4. `inventory/default/group_vars/edgenode_group/10-open.yml` and `inventory/default/group_vars/controller_group/10-open.yml`
+5. `inventory/default/group_vars/all/20-enhanced.yml`
+6. `inventory/default/group_vars/all/10-open.yml`
7. `default/main.yml` in roles' directory
### Examples
#### Changing size and amount of the hugepages for both controller and nodes
-Change the following lines in the `group_vars/edgenode_group/10-open.yml` or `group_vars/controller_group/10-open.yml`:
+Change the following lines in the `inventory/default/group_vars/edgenode_group/10-open.yml` or `inventory/default/group_vars/controller_group/10-open.yml`:
* To set 1500 of the hugepages with the page size of 2 MB (which is the default value) for the Edge Controller:
```yaml
- # group_vars/controller_group/10-open.yml
+ # inventory/default/group_vars/controller_group/10-open.yml
hugepage_size: "2M"
hugepage_amount: "1500"
+ default_grub_params: "default_hugepagesz={{ hugepage_size }} hugepagesz={{ hugepage_size }} hugepages={{ hugepage_amount }}"
```
* To set 10 of the hugepages with the page size of 1GB for the Edge Nodes:
```yaml
- # group_vars/edgenode_group/10-open.yml
+ # inventory/default/group_vars/edgenode_group/10-open.yml
hugepage_size: "1G"
hugepage_amount: "10"
+ default_grub_params: "default_hugepagesz={{ hugepage_size }} hugepagesz={{ hugepage_size }} hugepages={{ hugepage_amount }}"
```
#### Customizing hugepages for specific machine
-To specify the size or amount only for a specific machine, `hugepage_size` and/or `hugepage_amount` can be provided in `host_vars//10-open.yml` (i.e., if host is named `node01`, then the file is `host_vars/node01/open-10.yml`). For example:
+To specify the size or amount only for a specific machine, `hugepage_size` and/or `hugepage_amount` can be provided in `inventory/default/host_vars//10-open.yml` (i.e., if host is named `node01`, then the file is `inventory/default/host_vars/node01/10-open.yml`). For example:
```yaml
-# host_vars/node01/10-open.yml
+# inventory/default/host_vars/node01/10-open.yml
hugepage_size: "2M"
hugepage_amount: "1500"
+default_grub_params: "default_hugepagesz={{ hugepage_size }} hugepagesz={{ hugepage_size }} hugepages={{ hugepage_amount }}"
```
## Reference
diff --git a/doc/building-blocks/enhanced-platform-awareness/openness-node-feature-discovery.md b/doc/building-blocks/enhanced-platform-awareness/openness-node-feature-discovery.md
index 98742fd7..90572ad7 100644
--- a/doc/building-blocks/enhanced-platform-awareness/openness-node-feature-discovery.md
+++ b/doc/building-blocks/enhanced-platform-awareness/openness-node-feature-discovery.md
@@ -48,11 +48,11 @@ _Figure - CDN app deployment with NFD Features_
### Node Feature Discovery support in OpenNESS Network Edge
-Node Feature Discovery is enabled by default. It does not require any configuration or user input. It can be disabled by changing the `ne_nfd_enable` variable to `false` in the `group_vars/all/10-default.yml` before the OpenNESS installation.
+Node Feature Discovery is enabled by default. It does not require any configuration or user input. It can be disabled by changing the `ne_nfd_enable` variable to `false` in the `inventory/default/group_vars/all/10-open.yml` before the OpenNESS installation.
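+
+For example, to disable NFD, the relevant line in that file would look as follows (a minimal sketch of the single variable named above):
+
+```yaml
+ne_nfd_enable: false
+```
+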
The connection between `nfd-nodes` and `nfd-control-plane` is secured by certificates generated before running NFD pods.
-Node Feature Discovery is deployed in OpenNESS using a Helm chart downloaded from [container-experience-kits](https://github.com/intel/container-experience-kits/tree/master/roles/nfd-install/charts/node-feature-discovery) repository.
+Node Feature Discovery is deployed in OpenNESS using a Helm chart downloaded from [container-experience-kits](https://github.com/intel/container-experience-kits/tree/master/roles/nfd_install/charts/node-feature-discovery) repository.
#### Usage
diff --git a/doc/building-blocks/enhanced-platform-awareness/openness-qat.md b/doc/building-blocks/enhanced-platform-awareness/openness-qat.md
new file mode 100644
index 00000000..6e65e8a7
--- /dev/null
+++ b/doc/building-blocks/enhanced-platform-awareness/openness-qat.md
@@ -0,0 +1,151 @@
+```text
+SPDX-License-Identifier: Apache-2.0
+Copyright (c) 2020 Intel Corporation
+```
+
+# Using Intel® QuickAssist Adapter in OpenNESS: Resource Allocation, and Configuration
+- [Overview](#overview)
+- [Intel QuickAssist Adapter CU/DU Host Interface Overview](#intel-quickassist-adapter-cudu-host-interface-overview)
+- [Intel QuickAssist Adapter Device Plugin Deployment with Kubernetes\* for CU/DU](#intel-quickassist-adapter-device-plugin-deployment-with-kubernetes-for-cudu)
+- [Using the Intel QuickAssist Adapter on OpenNESS](#using-the-intel-quickassist-adapter-on-openness)
+ - [Intel QuickAssist Adapter for OpenNESS Network Edge](#intel-quickassist-adapter-for-openness-network-edge)
+ - [Converged Edge Experience Kits (CEEK)](#converged-edge-experience-kits-ceek)
+ - [Requesting Resources and Running Pods for OpenNESS Network Edge](#requesting-resources-and-running-pods-for-openness-network-edge)
+- [Reference](#reference)
+
+## Overview
+
+Intel® QuickAssist Adapter plays a key role in accelerating cryptographic operations in 5G networking.
+
+Intel® QuickAssist Adapter provides the following features:
+
+- Symmetric (Bulk) Cryptography:
+ - Ciphers (AES, 3DES/DES, RC4, KASUMI, ZUC, Snow 3G)
+ - Message digest/hash (MD5, SHA1, SHA2, SHA3) and authentication (HMAC, AES-XCBC)
+ - Algorithm chaining (one cipher and one hash in a single operation)
+ - Authenticated encryption (AES-GCM, AES-CCM)
+- Asymmetric (Public Key) Cryptography:
+ - Modular exponentiation for Diffie-Hellman (DH)
+ - RSA key generation, encryption/decryption and digital signature generation/verification
+ - DSA parameter generation and digital signature generation/verification
+ - Elliptic Curve Cryptography: ECDSA, ECDHE, Curve25519
+
+Intel® QuickAssist Adapter benefits include:
+- Reduced platform power, E2E latency and Intel® CPU core count requirements
+- Accelerates wireless data encryption and authentication
+- Accommodates space-constrained implementations via a low-profile PCIe* card form factor
+
+For more information, see product brief in [Intel® QuickAssist Adapter](https://www.intel.com/content/dam/www/public/us/en/documents/product-briefs/quickassist-adapter-8960-8970-brief.pdf).
+
+This document explains how the Intel® QuickAssist (QAT) device plugin is enabled and used on the Open Network Edge Services Software (OpenNESS) platform for accelerating network functions and edge application workloads. The Intel® QuickAssist Adapter is used to accelerate the LTE/5G encryption tasks in the CU/DU.
+
+## Intel QuickAssist Adapter CU/DU Host Interface Overview
+The Intel® QuickAssist Adapter used in the CU/DU solution exposes the following Physical Functions (PFs) to the CPU host:
+- Three interfaces that can provide 16 Virtual Functions each.
+
+## Intel QuickAssist Adapter Device Plugin Deployment with Kubernetes\* for CU/DU
+CU/DU applications use the `qat.intel.com/generic` resources from the Intel® QuickAssist Adapter using POD resource allocation and the Kubernetes\* device plugin framework. Kubernetes* provides a device plugin framework that is used to advertise system hardware resources to the Kubelet. Instead of customizing the code for Kubernetes* (K8s) itself, vendors can implement a device plugin that can be deployed either manually or as a DaemonSet. The targeted devices include GPUs, high-performance NICs, FPGAs, InfiniBand\* adapters, and other similar computing resources that may require vendor-specific initialization and setup.
+
+## Using the Intel QuickAssist Adapter on OpenNESS
+Further sections provide instructions on how to use the Intel® QuickAssist Adapter features: configuration and accessing from an application on the OpenNESS Network Edge.
+
+When the Intel® QuickAssist Adapter is available on the Edge Node platform, it exposes three Single Root I/O Virtualization (SR-IOV) Physical Function (PF) devices which can be used to create Virtual Functions. To take advantage of this functionality for a cloud-native deployment, the PF (Physical Function) of the device must be bound to the DPDK IGB_UIO userspace driver to create several VFs (Virtual Functions). Once the VFs are created, they must also be bound to a DPDK userspace driver to allocate them to specific K8s pods running the vRAN workload.
+
+The full pipeline of preparing the device for workload deployment and deploying the workload can be divided into the following stages:
+
+- Enabling SRIOV, binding devices to appropriate drivers, and the creation of VFs: delivered as part of the Edge Nodes Ansible automation.
+- QAT Device Plugin deployment.
+- Queue configuration of QAT's PFs/VFs.
+- Binding QAT's PFs/VFs to igb_uio driver.
+
+### Intel QuickAssist Adapter for OpenNESS Network Edge
+To run the OpenNESS package with Intel® QuickAssist Adapter Device Plugin functionality, the feature needs to be enabled on both Edge Controller and Edge Node. It can be deployed by setting the following variable in the flavor or *group_vars/all* file in Converged Edge Experience Kits:
+```yaml
+qat_device_plugin_enable: true
+```
+
+#### Converged Edge Experience Kits (CEEK)
+To enable Intel® QuickAssist Adapter Device Plugin support from CEEK, SRIOV must be enabled in OpenNESS:
+```yaml
+kubernetes_cnis:
+-
+- sriov
+```
+
+> **NOTE**: `sriov` cannot be the primary CNI.
+
+Intel® QuickAssist Adapter Device Plugin is enabled by default in the `cera_5g_on_prem` flavor.
+
+After a successful deployment, the following pods will be available in the cluster:
+```shell
+kubectl get pods -n kube-system
+
+NAME READY STATUS RESTARTS AGE
+intel-qat-plugin-dl42c 1/1 Running 0 7d9h
+```
+
+### Requesting Resources and Running Pods for OpenNESS Network Edge
+As part of the OpenNESS Ansible automation, a K8s SRIOV device plugin to orchestrate the Intel® QuickAssist Adapter VFs (bound to the userspace driver) is deployed and running. This enables the scheduling of pods requesting this device. To check the number of devices available on the Edge Node from Edge Controller, run:
+
+```shell
+kubectl get node $(hostname) -o json | jq '.status.allocatable'
+
+"qat.intel.com/generic": "48"
+```
+
+To request the QAT VFs as a resource in the pod, add the request for the resource into the pod specification file by specifying its name and the amount of resources required. If the resource is not available or the amount of resources requested is greater than the number of resources available, the pod status will be “Pending” until the resource is available.
+
+A sample pod requesting the Intel® QuickAssist Adapter VF may look like this:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: test
+ labels:
+ env: test
+spec:
+ containers:
+ - name: test
+ image: centos:latest
+ command: [ "/bin/bash", "-c", "--" ]
+ args: [ "while true; do sleep 300000; done;" ]
+ resources:
+ requests:
+ qat.intel.com/generic: 1
+ limits:
+ qat.intel.com/generic: 1
+```
+
+To test the resource allocation to the pod, save the above code snippet to the `sample.yaml` file and create the pod.
+```
+kubectl create -f sample.yaml
+```
+Once the pod is in the 'Running' state, check that the device was allocated to the pod (a uioX device and an environment variable with a device PCI address should be available):
+```
+kubectl exec -it test -- ls /dev
+kubectl exec -it test -- printenv | grep QAT
+```
+Sample output:
+```shell
+[...]
+crw------- 1 root root 241, 18 Mar 22 14:11 uio18
+crw------- 1 root root 241, 39 Mar 22 14:11 uio39
+crw------- 1 root root 241, 46 Mar 22 14:11 uio46
+crw------- 1 root root 241, 8 Mar 22 14:11 uio8
+[...]
+```
+```shell
+QAT3=0000:1e:02.6
+QAT2=0000:1c:01.2
+QAT1=0000:1e:01.7
+QAT0=0000:1a:02.0
+```
+To check the number of devices currently allocated to pods, run (and search for 'Allocated Resources'):
+
+```shell
+kubectl describe node $(hostname)
+```
+
+## Reference
+- [Intel® QuickAssist Adapter](https://www.intel.com/content/dam/www/public/us/en/documents/product-briefs/quickassist-adapter-8960-8970-brief.pdf)
diff --git a/doc/building-blocks/enhanced-platform-awareness/openness-rmd.md b/doc/building-blocks/enhanced-platform-awareness/openness-rmd.md
index 2a0242a8..a4d2f440 100644
--- a/doc/building-blocks/enhanced-platform-awareness/openness-rmd.md
+++ b/doc/building-blocks/enhanced-platform-awareness/openness-rmd.md
@@ -53,10 +53,10 @@ For more information about cache allocation and available cache pools, refer to
This feature is for the OpenNESS Network Edge deployment mode.
## Usage
-Enable the RMD feature in *group_vars/all/10-default.yml* when installing OpenNESS (Under the Network Edge section):
+Enable the RMD feature in *inventory/default/group_vars/all/10-open.yml* when installing OpenNESS (Under the Network Edge section):
> rmd_operator_enable: True
>
-This will install the underlying infrastructure.
+This will install the underlying infrastructure. Please note you need to use the FlexRAN flavor for RMD to work.
Next, use the following shell function to determine which cores are used by your container:
```bash
#!/bin/sh
diff --git a/doc/building-blocks/enhanced-platform-awareness/openness-sriov-multiple-interfaces.md b/doc/building-blocks/enhanced-platform-awareness/openness-sriov-multiple-interfaces.md
index b018817d..e761157d 100644
--- a/doc/building-blocks/enhanced-platform-awareness/openness-sriov-multiple-interfaces.md
+++ b/doc/building-blocks/enhanced-platform-awareness/openness-sriov-multiple-interfaces.md
@@ -52,16 +52,16 @@ _Figure - SR-IOV Device plugin_
## Details - Multiple Interface and PCIe\* SRIOV support in OpenNESS
-In Network Edge mode, the Multus CNI, which provides the possibility for attaching multiple interfaces to the pod, is deployed automatically when the `kubernetes_cnis` variable list (in the `group_vars/all/10-open.yml` file) contains at least two elements, e.g.,:
+In Network Edge mode, the Multus CNI, which provides the possibility for attaching multiple interfaces to the pod, is deployed automatically when the `kubernetes_cnis` variable list (in the `inventory/default/group_vars/all/10-open.yml` file) contains at least two elements, e.g.,:
```yaml
kubernetes_cnis:
-- kubeovn
+- calico
- sriov
```
### Multus usage
-Multus CNI is deployed in OpenNESS using a Helm chart. The Helm chart is available in [openness-experience-kits](https://github.com/open-ness/openness-experience-kits/tree/master/roles/kubernetes/cni/multus/master/files/multus-cni). The Multus image is pulled by Ansible\* Multus role and pushed to a local Docker\* registry on Edge Controller.
+Multus CNI is deployed in OpenNESS using a Helm chart. The Helm chart is available in [converged-edge-experience-kits](https://github.com/open-ness/converged-edge-experience-kits/tree/master/roles/kubernetes/cni/multus/controlplane/files/multus-cni). The Multus image is pulled by Ansible\* Multus role and pushed to a local Docker\* registry on Edge Controller.
[Custom resource definition](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#custom-resources) (CRD) is used to define an additional network that can be used by Multus.
@@ -117,29 +117,29 @@ EOF
valid_lft forever preferred_lft forever
308: eth0@if309: mtu 1400 qdisc noqueue state UP
link/ether 0a:00:00:10:00:12 brd ff:ff:ff:ff:ff:ff link-netnsid 0
- inet 10.16.0.17/16 brd 10.16.255.255 scope global eth0
+ inet 10.245.0.17/16 brd 10.245.255.255 scope global eth0
valid_lft forever preferred_lft forever
```
### SR-IOV configuration and usage
-To deploy the OpenNESS' Network Edge with SR-IOV, `sriov` must be added to the `kubernetes_cnis` list in `group_vars/all/10-default.yml`:
+To deploy the OpenNESS' Network Edge with SR-IOV, `sriov` must be added to the `kubernetes_cnis` list in `inventory/default/group_vars/all/10-open.yml`:
```yaml
kubernetes_cnis:
-- kubeovn
+- calico
- sriov
```
-SR-IOV CNI and device plugin are deployed in OpenNESS using Helm chart. The Helm chart is available in [openness-experience-kits](https://github.com/open-ness/openness-experience-kits/tree/master/roles/kubernetes/cni/sriov/master/files/sriov). Additional chart templates for SR-IOV device plugin can be downloaded from [container-experience-kits repository](https://github.com/intel/container-experience-kits/tree/master/roles/sriov-dp-install/charts/sriov-net-dp/templates). SR-IOV images are built from source by the Ansible SR-IOV role and pushed to a local Docker registry on Edge Controller.
+SR-IOV CNI and device plugin are deployed in OpenNESS using Helm chart. The Helm chart is available in [converged-edge-experience-kits](https://github.com/open-ness/converged-edge-experience-kits/tree/master/roles/kubernetes/cni/sriov/controlplane/files/sriov). Additional chart templates for SR-IOV device plugin can be downloaded from [container-experience-kits repository](https://github.com/intel/container-experience-kits/tree/master/roles/sriov_dp_install/charts/sriov-net-dp/templates). SR-IOV images are built from source by the Ansible SR-IOV role and pushed to a local Docker registry on Edge Controller.
#### Edge Node SR-IOV interfaces configuration
-For the installer to turn on the specified number of SR-IOV VFs for a selected network interface of node, provide that information in the format `{interface_name: VF_NUM, ...}` in the `sriov.network_interfaces` variable inside the config files in `host_vars` Ansible directory.
-For technical reasons, each node must be configured separately. Copy the example file `host_vars/node01/10-open.yml` and then create a similar one for each node being deployed.
+For the installer to turn on the specified number of SR-IOV VFs for a selected network interface of node, provide that information in the format `{interface_name: VF_NUM, ...}` in the `sriov.network_interfaces` variable inside the config files in `inventory/default/host_vars` Ansible directory.
+For technical reasons, each node must be configured separately. Copy the example file `inventory/default/host_vars/node01/10-open.yml` and then create a similar one for each node being deployed.
-Also, each node must be added to the Ansible inventory file `inventory.ini`.
+Also, each node must be added to the Ansible inventory file `inventory/default/inventory.yml`.
-For example providing `host_vars/node01/10-open.yml` (for Single Node deployment create and edit `host_vars//20-enhanced.yml`) with:
+For example providing `inventory/default/host_vars/node01/10-open.yml` (for Single Node deployment create and edit `inventory/default/host_vars//20-enhanced.yml`) with:
```yaml
sriov:
@@ -207,7 +207,7 @@ spec:
valid_lft forever preferred_lft forever
169: eth0@if170: mtu 1400 qdisc noqueue state UP group default
link/ether 0a:00:00:10:00:0b brd ff:ff:ff:ff:ff:ff link-netnsid 0
- inet 10.16.0.10/16 brd 10.16.255.255 scope global eth0
+ inet 10.245.0.10/16 brd 10.245.255.255 scope global eth0
valid_lft forever preferred_lft forever
```
@@ -217,7 +217,7 @@ SR-IOV device plugin image building requires downloading the ddptool from `downl
```shell
TASK [kubernetes/cni/sriov/master : build device plugin image] *****************************************************
-task path: /root/testy/openness-experience-kits/roles/kubernetes/cni/sriov/master/tasks/main.yml:52
+task path: /root/testy/converged-edge-experience-kits/roles/kubernetes/cni/sriov/master/tasks/main.yml:52
...
STDERR:
The command '/bin/sh -c apk add --update --virtual build-dependencies build-base linux-headers && cd /usr/src/sriov-network-device-plugin && make clean && make build && cd /tmp/ddptool && tar zxvf ddptool-1.0.0.0.tar.gz && make' returned a non-zero code: 1
diff --git a/doc/building-blocks/enhanced-platform-awareness/openness-telemetry.md b/doc/building-blocks/enhanced-platform-awareness/openness-telemetry.md
index 0ef6db58..d2caaea7 100644
--- a/doc/building-blocks/enhanced-platform-awareness/openness-telemetry.md
+++ b/doc/building-blocks/enhanced-platform-awareness/openness-telemetry.md
@@ -47,14 +47,14 @@ Depending on the role of the component, it is deployed as either a `Deployment`
## Flavors and configuration
-The deployment of telemetry components in OpenNESS is easily configurable from the OpenNESS Experience Kit (OEK). The deployment of the Grafana dashboard and PCM (Performance Counter Monitoring) collector is optional (`telemetry_grafana_enable` enabled by default, `telemetry_pcm_enable` disabled by default). There are four distinctive flavors for the deployment of the CollectD collector, enabling the respective set of plugins (`telemetry_flavor`):
+The deployment of telemetry components in OpenNESS is easily configurable from the Converged Edge Experience Kits (CEEK). The deployment of the Grafana dashboard and PCM (Performance Counter Monitoring) collector is optional (`telemetry_grafana_enable` enabled by default, `telemetry_pcm_enable` disabled by default). There are four distinctive flavors for the deployment of the CollectD collector, enabling the respective set of plugins (`telemetry_flavor`):
- common (default)
- flexran
- smartcity
- corenetwork
-Further information on what plugins each flavor enables can be found in the [CollectD section](#collectd). All flags can be changed in `./group_vars/all/10-default.yml` for the default configuration or in `./flavors` in a configuration for a specific platform flavor.
+Further information on what plugins each flavor enables can be found in the [CollectD section](#collectd). All flags can be changed in `./inventory/default/group_vars/all/10-open.yml` for the default configuration or in `./flavors` in a configuration for a specific platform flavor.
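+
+For example, a minimal sketch of these flags (default values as described above; the flavor value is illustrative):
+
+```yaml
+# inventory/default/group_vars/all/10-open.yml (or a ./flavors file)
+telemetry_grafana_enable: true   # Grafana dashboard, enabled by default
+telemetry_pcm_enable: false      # PCM collector, disabled by default
+telemetry_flavor: common         # one of: common, flexran, smartcity, corenetwork
+```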
## Telemetry features
@@ -69,7 +69,7 @@ Prometheus is an open-source, community-driven toolkit for systems monitoring an
The main idea behind Prometheus is that it defines a unified metrics data format that can be hosted as part of any application that incorporates a simple web server. The data can be then scraped (downloaded) and processed by Prometheus using a simple HTTP/HTTPS connection.
-In OpenNESS, Prometheus is deployed as a K8s Deployment with a single pod/replica on the Edge Controller node. It is configured out of the box to scrape all other telemetry endpoints/collectors enabled in OpenNESS and gather data from them. Prometheus is enabled in the OEK by default with the `telemetry/prometheus` role.
+In OpenNESS, Prometheus is deployed as a K8s Deployment with a single pod/replica on the Edge Controller node. It is configured out of the box to scrape all other telemetry endpoints/collectors enabled in OpenNESS and gather data from them. Prometheus is enabled in the CEEK by default with the `telemetry/prometheus` role.
#### Usage
@@ -89,7 +89,7 @@ In OpenNESS, Prometheus is deployed as a K8s Deployment with a single pod/replic
### Grafana
-Grafana is an open-source visualization and analytics software. It takes the data provided from external sources and displays relevant data to the user via dashboards. It enables the user to create customized dashboards based on the information the user wants to monitor and allows for the provision of additional data sources. In OpenNESS, the Grafana pod is deployed on a control plane as a K8s `Deployment` type and is by default provisioned with data from Prometheus. It is enabled by default in OEK and can be enabled/disabled by changing the `telemetry_grafana_enable` flag.
+Grafana is an open-source visualization and analytics software. It takes the data provided from external sources and displays relevant data to the user via dashboards. It enables the user to create customized dashboards based on the information the user wants to monitor and allows for the provision of additional data sources. In OpenNESS, the Grafana pod is deployed on a control plane as a K8s `Deployment` type and is by default provisioned with data from Prometheus. It is enabled by default in CEEK and can be enabled/disabled by changing the `telemetry_grafana_enable` flag.
#### Usage
@@ -139,7 +139,7 @@ Grafana is an open-source visualization and analytics software. It takes the dat
### Node Exporter
-Node Exporter is a Prometheus exporter that exposes hardware and OS metrics of *NIX kernels. The metrics are gathered within the kernel and exposed on a web server so they can be scraped by Prometheus. In OpenNESS, the Node Exporter pod is deployed as a K8s `Daemonset`; it is a privileged pod that runs on every Edge Node in the cluster. It is enabled by default by OEK.
+Node Exporter is a Prometheus exporter that exposes hardware and OS metrics of *NIX kernels. The metrics are gathered within the kernel and exposed on a web server so they can be scraped by Prometheus. In OpenNESS, the Node Exporter pod is deployed as a K8s `Daemonset`; it is a privileged pod that runs on every Edge Node in the cluster. It is enabled by default by CEEK.
#### Usage
@@ -169,7 +169,7 @@ CollectD is a daemon/collector enabling the collection of hardware metrics from
#### Plugins
There are four distinct sets of plugins (flavors) enabled for CollectD deployment that can be used depending on the use-case/workload being deployed on OpenNESS. `Common` is the default flavor in OpenNESS. The flavors available are: `common`, `corenetwork`, `flexran`, and `smartcity`. Below is a table specifying which CollectD plugins are enabled for each flavor.
-The various OEK flavors are enabled for CollectD deployment as follows:
+The various CEEK flavors are enabled for CollectD deployment as follows:
| Common | Core Network | FlexRAN | SmartCity |
@@ -187,9 +187,9 @@ The various OEK flavors are enabled for CollectD deployment as follows:
#### Usage
-1. Select the flavor for the deployment of CollectD from the OEK during OpenNESS deployment; the flavor is to be selected with `telemetry_flavor: `.
+1. Select the flavor for the deployment of CollectD from the CEEK during OpenNESS deployment; the flavor is to be selected with `telemetry_flavor: <flavor name>`.
- In the event of using the `flexran` profile, `OPAE_SDK_1.3.7-5_el7.zip` needs to be available in `./ido-openness-experience-kits/opae_fpga` directory; for details about the packages, see [FPGA support in OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-fpga.md#edge-controller)
+ In the event of using the `flexran` profile, `OPAE_SDK_1.3.7-5_el7.zip` needs to be available in `./ido-converged-edge-experience-kits/ceek/opae_fpga` directory; for details about the packages, see [FPGA support in OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-fpga.md#edge-controller)
2. To access metrics available from CollectD, connect to the Prometheus [dashboard](#prometheus).
3. Look up an example the CollectD metric by specifying the metric name (ie. `collectd_cpufreq`) and pressing `execute` under the `graph` tab.
![CollectD Metric](telemetry-images/collectd_metric.png)
@@ -215,7 +215,7 @@ OpenCensus exporter/receiver is used in the default OpenNESS configuration for a
./build.sh push
```
-3. Create a secret using a root-ca created as part of OEK telemetry deployment (this will authorize against the Collector certificates).
+3. Create a secret using a root-ca created as part of CEEK telemetry deployment (this will authorize against the Collector certificates).
```shell
cd edgeapps/applications/telemetry-sample-app/
@@ -245,7 +245,8 @@ OpenCensus exporter/receiver is used in the default OpenNESS configuration for a
### PCM
Processor Counter Monitor (PCM) is an application programming interface (API) and a set of tools based on the API to monitor performance and energy metrics of Intel® Core™, Xeon®, Atom™ and Xeon Phi™ processors. In OpenNESS, the PCM pod is deployed as a K8s `Daemonset` on every available node. PCM metrics are exposed to Prometheus via the Host's NodePort on each EdgeNode.
->**NOTE**: The PCM feature is intended to run on physical hardware (i.e., no support for VM virtualized Edge Nodes in OpenNESS). Therefore, this feature is disabled by default. The feature can be enabled by setting the `telemetry_pcm_enable` flag in OEK. Additionally, a preset dashboard is created for PCM in Grafana visualizing the most crucial metrics.
+>**NOTE**: The PCM feature is intended to run on physical hardware (i.e., no support for VM virtualized Edge Nodes in OpenNESS). Therefore, this feature is disabled by default. The feature can be enabled by setting the `telemetry_pcm_enable` flag in CEEK. Additionally, a preset dashboard is created for PCM in Grafana visualizing the most crucial metrics.
+>**NOTE**: There is currently a limitation in OpenNESS where a conflict between the deployment of CollectD and PCM prevents the PCM server from starting successfully; it is advised to run PCM with CollectD disabled at this time.
#### Usage
@@ -263,7 +264,7 @@ Processor Counter Monitor (PCM) is an application programming interface (API) an
[Telemetry Aware Scheduler](https://github.com/intel/telemetry-aware-scheduling) enables the user to make K8s scheduling decisions based on the metrics available from telemetry. This is crucial for a variety of Edge use-cases and workloads where it is critical that the workloads are balanced and deployed on the best suitable node based on hardware ability and performance. The user can create a set of policies defining the rules to which pod placement must adhere. Functionality to de-schedule pods from given nodes if a rule is violated is also provided. TAS consists of a TAS Extender which is an extension to the K8s scheduler. It correlates the scheduling policies with deployment strategies and returns decisions to the K8s Scheduler. It also consists of a TAS Controller that consumes TAS policies and makes them locally available to TAS components. A metrics pipeline that exposes metrics to a K8s API must be established for TAS to be able to read in the metrics. In OpenNESS, the metrics pipeline consists of:
- Prometheus: responsible for collecting and providing metrics.
- Prometheus Adapter: exposes the metrics from Prometheus to a K8s API and is configured to provide metrics from Node Exporter and CollectD collectors.
-TAS is enabled by default in OEK, a sample scheduling policy for TAS is provided for [VCAC-A node deployment](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-vcac-a.md#telemetry-support).
+TAS is enabled by default in CEEK; a sample scheduling policy for TAS is provided for [VCAC-A node deployment](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-vcac-a.md#telemetry-support).
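+
+For illustration only, a rough sketch of a TAS scheduling policy is shown below; the field names follow the upstream Telemetry Aware Scheduling examples and the VCAC-A sample policy referenced above, while the metric name and threshold are placeholders:
+
+```yaml
+apiVersion: telemetry.intel.com/v1alpha1
+kind: TASPolicy
+metadata:
+  name: example-policy
+  namespace: default
+spec:
+  strategies:
+    scheduleonmetric:
+      rules:
+      - metricname: example_node_metric
+        operator: LessThan
+    dontschedule:
+      rules:
+      - metricname: example_node_metric
+        operator: GreaterThan
+        target: 80
+```
+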
#### Usage
diff --git a/doc/building-blocks/enhanced-platform-awareness/openness-topology-manager.md b/doc/building-blocks/enhanced-platform-awareness/openness-topology-manager.md
index e5f60e13..5a36722c 100644
--- a/doc/building-blocks/enhanced-platform-awareness/openness-topology-manager.md
+++ b/doc/building-blocks/enhanced-platform-awareness/openness-topology-manager.md
@@ -8,13 +8,21 @@ Copyright (c) 2019 Intel Corporation
- [Edge use case](#edge-use-case)
- [Details - Topology manager support in OpenNESS](#details---topology-manager-support-in-openness)
- [Usage](#usage)
+- [OpenNESS NUMA and DPDK Cheat Sheet](#openness-numa-and-dpdk-cheat-sheet)
+  - [DPDK in OpenNESS](#dpdk-in-openness)
+  - [What is NUMA and Why You Care](#what-is-numa-and-why-you-care)
+  - [Determine NIC NUMA placement](#determine-nic-numa-placement)
+  - [CPU Mask Calculations](#cpu-mask-calculations)
+  - [Hugepages and NUMA](#hugepages-and-numa)
+  - [Script finding CPUs in NIC NUMA Node](#script-finding-cpus-in-nic-numa-node)
+  - [TuneD and DPDK CPU Bindings](#tuned-and-dpdk-cpu-bindings)
- [Reference](#reference)
## Overview
Multi-core and Multi-Socket commercial, off-the-shelf (COTS) systems are widely used for the deployment of application and network functions. COTS systems provide a variety of IO and memory features. In order to achieve determinism and high performance, mechanisms like CPU isolation, IO device locality, and socket memory allocation are critical. Cloud-native stacks such as Kubernetes\* are beginning to leverage resources such as CPU, hugepages, and I/O, but are agnostic to the Non-Uniform Memory Access (NUMA) alignment of these. Non-optimal, topology-aware NUMA resource allocation can severely impact the performance of latency-sensitive workloads.
-To address this requirement, OpenNESS uses the Topology manager. The topology manager is now supported by Kubernetes. Topology Manager is a solution that permits k8s components (e.g., CPU Manager and Device Manager) to coordinate the resources allocated to a workload.
+To address this requirement, OpenNESS uses the Topology manager. The topology manager is now supported by Kubernetes. Topology Manager is a solution that permits k8s components (e.g., CPU Manager and Device Manager) to coordinate the resources allocated to a workload.
### Edge use case
@@ -32,7 +40,7 @@ Topology Manager is a Kubelet component that aims to co-ordinate the set of comp
## Details - Topology manager support in OpenNESS
-Topology Manager is enabled by default with a `best-effort` policy. You can change the settings before OpenNESS installation by editing the `group_vars/all/10-default.yml` file:
+Topology Manager is enabled by default with a `best-effort` policy. You can change the settings before OpenNESS installation by editing the `inventory/default/group_vars/all/10-open.yml` file:
```yaml
### Kubernetes Topology Manager configuration (for a node)
@@ -53,6 +61,7 @@ Where `` can be `none`, `best-effort`, `restricted` or `single-
You can also set `reserved_cpus` to a number that suits you best. This parameter specifies the logical CPUs that will be reserved for a Kubernetes system Pods and OS daemons.
### Usage
+
To use Topology Manager create a Pod with a `guaranteed` QoS class (requests equal to limits). For example:
```yaml
@@ -82,6 +91,274 @@ Nov 05 09:22:52 tmanager kubelet[64340]: I1105 09:22:52.550016 64340 topology_
Nov 05 09:22:52 tmanager kubelet[64340]: I1105 09:22:52.550171 64340 topology_hints.go:60] [cpumanager] TopologyHints generated for pod 'examplePod', container 'example': [{0000000000000000000000000000000000000000000000000000000000000001 true} {0000000000000000000000000000000000000000000000000000000000000010 true} {0000000000000000000000000000000000000000000000000000000000000011 false}]
Nov 05 09:22:52 tmanager kubelet[64340]: I1105 09:22:52.550204 64340 topology_manager.go:285] [topologymanager] ContainerTopologyHint: {0000000000000000000000000000000000000000000000000000000000000010 true}
Nov 05 09:22:52 tmanager kubelet[64340]: I1105 09:22:52.550216 64340 topology_manager.go:329] [topologymanager] Topology Affinity for Pod: 4ad6fb37-509d-4ea6-845c-875ce41049f9 are map[example:{0000000000000000000000000000000000000000000000000000000000000010 true}]
+
+```
+
+# OpenNESS NUMA and DPDK Cheat Sheet
+
+_Disclaimer: this document is not intended to serve as a comprehensive guide to DPDK, optimizing DPDK deployments, or as a replacement for the extensive DPDK documentation. This guide is intended as a supplement to the OpenNESS/CERA experience kits in order to adapt them to a platform that is different than those used in the OpenNESS development and validation labs._
+
+_The examples in this document are taken from a DELL R740 server, which happens to enumerate CPUs very differently than the Intel WolfPass S2600WFQ reference platform commonly utilized within Intel labs._
+
+
+## DPDK in OpenNESS
+
+When OpenNESS is deployed with the KubeOVN CNI, the DPDK optimizations are included. This is managed through the _flavor_ group_vars via the following flag:
+
+```bash
+kubeovn_dpdk: true
+```
+
+When the flag is set to `true`, OVS is deployed with DPDK bindings. If the deployment succeeds, this can be verified with the following command:
+
+```bash
+ovs-vsctl get Open_vSwitch . dpdk_initialized
+```
+
+Example:
+
+```bash
+[root@edgenode ~]# ovs-vsctl get Open_vSwitch . dpdk_initialized
+true
+```
+
+## What is NUMA and Why You Care
+
+The [Wikipedia entry for NUMA](https://en.wikipedia.org/wiki/Non-uniform_memory_access#:~:text=Non-uniform%20memory%20access%20%28%20NUMA%29%20is%20a%20computer,to%20another%20processor%20or%20memory%20shared%20between%20processors%29.) states:
+
+> **Non-Uniform Memory Access** (**NUMA**) is a [computer memory](https://en.wikipedia.org/wiki/Computer_storage) design used in [multiprocessing](https://en.wikipedia.org/wiki/Multiprocessing), where the memory access time depends on the memory location relative to the processor. Under NUMA, a processor can access its own [local memory](https://en.wikipedia.org/wiki/Local_memory) faster than non-local memory (memory local to another processor or memory shared between processors). The benefits of NUMA are limited to particular workloads, notably on servers where the data is often associated strongly with certain tasks or users.
+
+In summary, each physical CPU core has memory that is _more_ local than other memory, and accessing that local memory has lower latency. When maximizing throughput, bandwidth, and latency, memory access latency has an increasing impact on the observed performance. This applies to the queues of networking (and NVMe storage) devices as well: to maximize performance, the execution threads should have affinity to the same memory space so they benefit from this reduced local access latency. It is therefore recommended to place latency-sensitive applications within the same NUMA node as the network interfaces (and storage devices) they will utilize in order to see the full capability of the hardware.
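+
+A quick way to see the overall NUMA layout of a host before digging into individual devices is `lscpu`; filtering its output for the NUMA lines shows the node count and which CPU IDs belong to each node:
+
+```bash
+# show only the NUMA-related summary lines
+lscpu | grep -i numa
+```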
+
+Intel publishes various resources for [optimizing applications for NUMA](https://software.intel.com/content/www/us/en/develop/articles/optimizing-applications-for-numa.html) including a guide on assessing the effect of NUMA using [Intel® VTune™ Amplifier](https://software.intel.com/content/www/us/en/develop/videos/how-numa-affects-your-workloads-intel-vtune-amplifier.html).
+
+Within OpenNESS, we implement [CPU management extensions](https://www.openness.org/docs/doc/building-blocks/enhanced-platform-awareness/openness-dedicated-core#details---cpu-manager-support-in-openness) to Kubernetes to allow scheduling workloads with NUMA awareness. In the context of DPDK, we bind the dataplane descriptors to the NUMA node of the network interface(s).
+
+## Determine NIC NUMA placement
+
+While this currently focuses on the NIC, the same applies to decoding the NUMA affinity of any PCI resource, including accelerator cards.
+
+Find the NIC's PCI address (in this example we know we are using an Intel X710 NIC, but an XXV710 would also be matched):
+
+```bash
+[root@edgenode ~]# lspci | grep 710
+86:00.0 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 02)
+86:00.1 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 02)
+```
+
+The NIC PCI location can also be found by running `ethtool -i <interface name>` against the interface name shown in `ip link show` or `ifconfig`:
+
+```bash
+[root@edgenode ~]# ethtool -i eth0 | grep bus-info
+bus-info: 86:00.0
+```
+
+Find NUMA node of NICs:
+
+```bash
+[root@edgenode ~]# lspci -vmms 86:00.0
+Slot: 86:00.0
+Class: Ethernet controller
+Vendor: Intel Corporation
+Device: Ethernet Controller X710 for 10GbE SFP+
+SVendor: Intel Corporation
+SDevice: Ethernet Converged Network Adapter X710-2
+Rev: 02
+NUMANode: 1
+```
+
+`lspci` command options selected here are:
+
+```bash
+Basic display modes:
+-mm Dump PCI device data in a machine readable form for easy parsing by scripts. See below for details.
+
+Display options:
+-v Be verbose and display detailed information about all devices. (-vv for very verbose)
+
+Selection of devices:
+-s [[[[<domain>]:]<bus>]:][<device>][.[<func>]] Show only devices in the specified domain (in case your machine has several host bridges, they can either share a common bus number space or each of them can address a PCI domain of its own; domains are numbered from 0 to ffff), bus (0 to ff), device (0 to 1f) and function (0 to 7). Each component of the device address can be omitted or set to "*", both meaning "any value". All numbers are hexadecimal. E.g., "0:" means all devices on bus 0, "0" means all functions of device 0 on any bus, "0.3" selects third function of device 0 on all buses and ".4" shows only the fourth function of each device.
+```
+
+As an alternative to `lspci -vmms` (which may not work on all platforms), this can also be found by reading the PCI device properties:
+
+```bash
+[root@edgenode ~]# cat /sys/bus/pci/devices/0000\:86\:00.0/numa_node
+1
+```
+
+You can also read the local CPU list of the PCI device:
+
+```bash
+[root@edgenode ~]# cat /sys/bus/pci/devices/0000\:86\:00.0/local_cpulist
+1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47
+```
+
+When we proceed to selecting CPUs for the `pmd_cpu_mask`, the above CPU list is critical: we must select a CPU that is within the _same_ NUMA node as the NIC(s).
+
+## CPU Mask Calculations
+
+CPU masks are defined within `inventory/default/group_vars/all/10-open.yml` as `kubeovn_dpdk_pmd_cpu_mask` and `kubeovn_dpdk_lcore_mask`.
+
+CPU Mask is used to assign cores to DPDK.
+
+`numactl -H` will provide a list of CPUs; however, it doesn't show which threads are "peers" on the same physical core. On some platforms the CPU IDs are enumerated sequentially within NUMA node 0 and then continue into NUMA node 1 (e.g. `0`,`1`,`2`,`3`, etc. would all be in NUMA node 0):
+
+```bash
+[root@edgenode ~]# numactl -H
+available: 2 nodes (0-1)
+node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
+node 0 size: 95128 MB
+node 0 free: 71820 MB
+node 1 cpus: 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95
+node 1 size: 96729 MB
+node 1 free: 71601 MB
+node distances:
+node 0 1
+ 0: 10 21
+ 1: 21 10
+```
+
+Other platforms enumerate across the NUMA nodes, where _even_ CPU IDs are within one NUMA node while _odd_ CPU IDs are on the opposing NUMA node, as shown below:
+
+```bash
+[root@edgenode ~]# numactl -H
+available: 2 nodes (0-1)
+node 0 cpus: 0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 32 34 36 38 40 42 44 46
+node 0 size: 96749 MB
+node 0 free: 86064 MB
+node 1 cpus: 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 41 43 45 47
+node 1 size: 98304 MB
+node 1 free: 86713 MB
+node distances:
+node 0 1
+ 0: 10 21
+ 1: 21 10
+```
+
+Within the DPDK source code there is a tool to show the CPU topology; the CEEK places this in `/opt/dpdk-18.11.6/usertools`. The advantage of this output is that it shows the peer threads (Hyper-Threading siblings) for the same physical core.
+
+For example, CPUs `0` and `24` execute on the same physical core.
+
+```bash
+[root@edgenode ~]# /opt/dpdk-18.11.6/usertools/cpu_layout.py
+======================================================================
+Core and Socket Information (as reported by '/sys/devices/system/cpu')
+======================================================================
+
+cores = [0, 5, 1, 4, 2, 3, 8, 13, 9, 12, 10, 11]
+sockets = [0, 1]
+
+ Socket 0 Socket 1
+ -------- --------
+Core 0 [0, 24] [1, 25]
+Core 5 [2, 26] [3, 27]
+Core 1 [4, 28] [5, 29]
+Core 4 [6, 30] [7, 31]
+Core 2 [8, 32] [9, 33]
+Core 3 [10, 34] [11, 35]
+Core 8 [12, 36] [13, 37]
+Core 13 [14, 38] [15, 39]
+Core 9 [16, 40] [17, 41]
+Core 12 [18, 42] [19, 43]
+Core 10 [20, 44] [21, 45]
+Core 11 [22, 46] [23, 47]
+```
+
+The CPU mask is calculated as a bitmask, where the bit location corresponds to the CPU ID in a numerical list. If we list the CPU IDs in a row:
+
+```bash
+0 1 2 3 4 5 6 7 8 9 10 11 12... 24 25 26 27 28 29 30 31 32 33 34 35 36...
+```
+
+The bit placement is the position of the CPU ID, so CPU ID `2` is the _third_ bit placement, or `0100` in binary, and CPU ID `8` is the _ninth_ bit placement, or `1 0000 0000` in binary. We can then convert from binary to hex: `0100` = `0x4` and `1 0000 0000` = `0x100`.
+
+This can also be found via:
+
+```bash
+echo "ibase=10; obase=16; 2^($CPUID)" | bc
+```
+
+Example for CPU ID `3` (as it is local to the same NUMA node as the NIC above):
+
+```bash
+[root@edgenode ~]# echo "ibase=10; obase=16; 2^(3)" | bc
+8
+```
+
+In this case we would set the CPU mask to `0x8`.
+
+A web-based tool to calculate the CPU mask can be found [here](https://bitsum.com/tools/cpu-affinity-calculator/).
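+
+When more than one CPU needs to be included in a mask, shell arithmetic can be used to OR the individual bits together; for example (CPU IDs 3 and 5 chosen purely for illustration):
+
+```bash
+# combined mask for CPU IDs 3 and 5: (1<<3) | (1<<5) = 0x28
+printf '0x%x\n' $(( (1 << 3) | (1 << 5) ))
+```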
+
+**Setting CPU bindings to CPU0 (`0x1`) will fail; CPU0 is reserved for the system kernel.**
+
+## Hugepages and NUMA
+
+Hugepages are allocated per NUMA node. When DPDK binds to a CPU, there **must be hugepages within the corresponding NUMA node** to support DPDK. The _first_ NUMA node cannot have `0` hugepages, so the socket memory must be specified as either `<socket0>,0` or `<socket0>,<socket1>` (e.g. `1024,0` or `1024,1024`).
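+
+The current per-NUMA-node hugepage allocation can be inspected through sysfs; a quick check (assuming 2MB hugepages; adjust the size directory for 1GB pages) might look like this:
+
+```bash
+# prints one line per NUMA node in the form <path>:<number of hugepages>
+grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
+```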
+
+If there are no hugepages in the NUMA node where DPDK is set to bind, it will fail with an error similar to the following in `/var/log/openvswitch/ovs-vswitchd.log`. In this example, the assigned `kubeovn_dpdk_pmd_cpu_mask` resides on NUMA socket 1, but the hugepages were only allocated on socket 0 (`--socket-mem 1024,0`):
+
+Example error message from `ovs-vswitchd.log`:
+
+```ovs-vswitchd.log
+2020-09-29T21:07:46.401Z|00001|vlog|INFO|opened log file /var/log/openvswitch/ovs-vswitchd.log
+2020-09-29T21:07:46.415Z|00002|ovs_numa|INFO|Discovered 24 CPU cores on NUMA node 0
+2020-09-29T21:07:46.415Z|00003|ovs_numa|INFO|Discovered 24 CPU cores on NUMA node 1
+2020-09-29T21:07:46.415Z|00004|ovs_numa|INFO|Discovered 2 NUMA nodes and 48 CPU cores
+2020-09-29T21:07:46.415Z|00005|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connecting...
+2020-09-29T21:07:46.415Z|00006|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connected
+2020-09-29T21:07:46.418Z|00007|dpdk|INFO|Using DPDK 18.11.6
+2020-09-29T21:07:46.418Z|00008|dpdk|INFO|DPDK Enabled - initializing...
+2020-09-29T21:07:46.418Z|00009|dpdk|INFO|No vhost-sock-dir provided - defaulting to /var/run/openvswitch
+2020-09-29T21:07:46.418Z|00010|dpdk|INFO|IOMMU support for vhost-user-client disabled.
+2020-09-29T21:07:46.418Z|00011|dpdk|INFO|POSTCOPY support for vhost-user-client disabled.
+2020-09-29T21:07:46.418Z|00012|dpdk|INFO|Per port memory for DPDK devices disabled.
+2020-09-29T21:07:46.418Z|00013|dpdk|INFO|EAL ARGS: ovs-vswitchd -c 0x2 --huge-dir /hugepages --socket-mem 1024,0 --socket-limit 1024,0.
+2020-09-29T21:07:46.426Z|00014|dpdk|INFO|EAL: Detected 48 lcore(s)
+2020-09-29T21:07:46.426Z|00015|dpdk|INFO|EAL: Detected 2 NUMA nodes
+2020-09-29T21:07:46.431Z|00016|dpdk|INFO|EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
+2020-09-29T21:07:46.497Z|00017|dpdk|WARN|EAL: No free hugepages reported in hugepages-1048576kB
+2020-09-29T21:07:46.504Z|00018|dpdk|INFO|EAL: Probing VFIO support...
+2020-09-29T21:07:46.504Z|00019|dpdk|ERR|EAL: no supported IOMMU extensions found!
+2020-09-29T21:07:46.504Z|00020|dpdk|INFO|EAL: VFIO support could not be initialized
+2020-09-29T21:08:05.475Z|00002|daemon_unix|ERR|fork child died before signaling startup (killed (Bus error), core dumped)
+2020-09-29T21:08:05.475Z|00003|daemon_unix|EMER|could not initiate process monitoring
+```
+
+This error was corrected by adding the following to a node-specific yaml under `~/node_var/`:
+
+```yaml
+kubeovn_dpdk_socket_mem: "1024,1024"
+kubeovn_dpdk_pmd_cpu_mask: "0x8"
+kubeovn_dpdk_lcore_mask: "0x20"
```
+
+These settings bind `kubeovn_dpdk_pmd_cpu_mask` to CPU 3 (binary `1000`) and `kubeovn_dpdk_lcore_mask` to CPU 5 (binary `10 0000`).
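+
+After redeployment, the values picked up by OVS-DPDK can be double-checked on the node; assuming the masks are applied to the usual OVS `other_config` column, a check might look like this:
+
+```bash
+# dump the OVS other_config column; with DPDK enabled it should include pmd-cpu-mask and dpdk-lcore-mask
+ovs-vsctl get Open_vSwitch . other_config
+```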
+
+## Script finding CPUs in NIC NUMA Node
+
+This script can be executed to determine the CPUs that are local to a targeted network interface.
+
+```bash
+echo "What network interface is the target (e.g. as output in 'ip link show' or 'nmcli dev status')" &&
+read interfacename &&
+pcibus=`ethtool -i $interfacename | grep bus-info | cut -d " " -f 2 | sed 's,:,\\\:,g'` &&
+echo "*** The following CPUs are NUMA adjacent to network interface $interfacename ***" &&
+eval cat /sys/bus/pci/devices/$pcibus/local_cpulist
+```
+
+## TuneD and DPDK CPU Bindings
+
+[TBA]
+
## Reference
+
- [Topology Manager](https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/)
+
+- [DPDK Programmers Guide](https://doc.dpdk.org/guides/prog_guide/index.html)
+
+- [DPDK Getting Started Guide for Linux](https://doc.dpdk.org/guides/linux_gsg/index.html)
+
+- [How to get best performance with NICs on Intel platforms](https://doc.dpdk.org/guides/linux_gsg/nic_perf_intel_platform.html#configurations-before-running-dpdk)
diff --git a/doc/building-blocks/enhanced-platform-awareness/openness-vcac-a.md b/doc/building-blocks/enhanced-platform-awareness/openness-vcac-a.md
index 52a77028..9b4be173 100644
--- a/doc/building-blocks/enhanced-platform-awareness/openness-vcac-a.md
+++ b/doc/building-blocks/enhanced-platform-awareness/openness-vcac-a.md
@@ -1,6 +1,6 @@
```text
SPDX-License-Identifier: Apache-2.0
-Copyright (c) 2020 Intel Corporation
+Copyright (c) 2020-2021 Intel Corporation
```
# Using Visual Compute Accelerator Card - Analytics (VCAC-A) in OpenNESS
@@ -15,7 +15,7 @@ Copyright (c) 2020 Intel Corporation
- [References](#references)
## Overview
-The Visual Cloud Accelerator Card - Analytics (VCAC-A) equips 2nd Generation Intel® Xeon® processor- based platforms with Iris® Pro Graphics and Intel® Movidius™ VPUs to enhance video codec, computer vision, and inference capabilities. Comprised of one Intel i3-7100U CPU and 12 Intel® Movidius™ VPUs, this PCIe add-in card delivers competent stream inference capability and outstanding total cost of ownership. Provisioning the network edge with VCAC-A acceleration through the OpenNESS Experience Kits (OEK) enables dense and performant media analytics and transcoding pipelines.
+The Visual Cloud Accelerator Card - Analytics (VCAC-A) equips 2nd Generation Intel® Xeon® processor-based platforms with Iris® Pro Graphics and Intel® Movidius™ VPUs to enhance video codec, computer vision, and inference capabilities. Comprised of one Intel i3-7100U CPU and 12 Intel® Movidius™ VPUs, this PCIe add-in card delivers competent stream inference capability and outstanding total cost of ownership. Provisioning the network edge with VCAC-A acceleration through the Converged Edge Experience Kits (CEEK) enables dense and performant media analytics and transcoding pipelines.
## Architecture
@@ -27,10 +27,10 @@ Equipped with a CPU, the VCAC-A card is installed with a standalone operating sy
> * The full acronym *VCAC-A* is loosely used when talking about the PCIe card.
The VCAC-A installation involves a [two-stage build](https://github.com/OpenVisualCloud/VCAC-SW-Analytics/):
-1. VCA host kernel build and configuration: this stage patches the CentOS\* 7.8 kernel and builds the necessary modules and dependencies.
+1. VCA host kernel build and configuration: this stage patches the CentOS\* 7.9 kernel and builds the necessary modules and dependencies.
2. VCAC-A system image (VCAD) generation: this stage builds an Ubuntu\*-based (VCAD) image that is loaded on the VCAC-A card.
-The OEK automates the overall build and installation process of the VCAC-A card by joining it as a standalone logical node to the OpenNESS cluster. The OEK supports force build VCAC-A system image (VCAD) via flag (force\_build\_enable: true (default value)), it also allows the customer to disable the flag to re-use last system image built. When successful, the OpenNESS controller is capable of selectively scheduling workloads on the "VCA node" for proximity to the hardware acceleration.
+The CEEK automates the overall build and installation process of the VCAC-A card by joining it as a standalone logical node to the OpenNESS cluster. The CEEK supports force-building the VCAC-A system image (VCAD) via the flag `force_build_enable: true` (the default value); it also allows the customer to disable the flag to re-use the last system image built. When successful, the OpenNESS controller is capable of selectively scheduling workloads on the "VCA node" for proximity to the hardware acceleration.
When onboarding applications such as [Open Visual Cloud Smart City Sample](https://github.com/open-ness/edgeapps/tree/master/applications/smart-city-app) with the existence of VCAC-A, the OpenNESS controller schedules all the application pods onto the edge node except the *video analytics* processing that is scheduled on the VCA node as shown in the figure below.
@@ -94,7 +94,7 @@ $ kubectl get no -o json | jq '.items[].metadata.labels'
```
## VPU, GPU Device Plugins, and HDDL Daemonset
-Kubernetes provides the [Device Plugins framework](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/) that is used to advertise system hardware resources. The device plugins of interest for VCAC-A are: [VPU](https://github.com/intel/intel-device-plugins-for-kubernetes/blob/master/cmd/vpu_plugin/README.md) and [GPU](https://github.com/intel/intel-device-plugins-for-kubernetes/blob/master/cmd/gpu_plugin/README.md). They are installed as part of the VCAC-A install sequence that is performed by the OEK.
+Kubernetes provides the [Device Plugins framework](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/) that is used to advertise system hardware resources. The device plugins of interest for VCAC-A are: [VPU](https://github.com/intel/intel-device-plugins-for-kubernetes/blob/master/cmd/vpu_plugin/README.md) and [GPU](https://github.com/intel/intel-device-plugins-for-kubernetes/blob/master/cmd/gpu_plugin/README.md). They are installed as part of the VCAC-A install sequence that is performed by the CEEK.
Another ingredient involved in the inference execution through VCAC-A VPUs is the *HDDL-daemon* that is deployed as a [Kubernetes Daemonset](https://github.com/OpenVisualCloud/Dockerfiles/blob/master/VCAC-A/script/setup_hddl_daemonset.yaml). It acts as an arbiter for the various applications/Pods trying to gain access to VPU resources. Therefore, the OpenNESS cluster is ready for onboarding applications and availing of VCAC-A acceleration without worrying about other dependencies.
@@ -106,7 +106,7 @@ default intel-vpu-plugin 1 1 1 1 1
kube-system intel-vpu-hddl 1 1 1 1 1 vcac-zone=yes 31h
...
```
-> VPU and GPU device plugins as well as HDDL Daemonset are deployed in the OpenNESS cluster as part of the VCAC-A installation sequence that is performed by the OEK.
+> VPU and GPU device plugins as well as HDDL Daemonset are deployed in the OpenNESS cluster as part of the VCAC-A installation sequence that is performed by the CEEK.
## Telemetry Support
VCAC-A telemetry is an integral part of the OpenNESS telemetry suite that enables the Kubernetes scheduler to perform telemetry-aware scheduling decisions. The following metrics are exported:
@@ -118,13 +118,13 @@ The VCAC-A VPU metrics are exported by the *NodeExporter* that integrates with P
```
$ /opt/intel/vcaa/vpu_metric/run.sh start
```
-> The VPU metrics exporter script is executed as part of the VCAC-A install sequence that is performed by the OEK.
+> The VPU metrics exporter script is executed as part of the VCAC-A install sequence that is performed by the CEEK.
![Exporting VCAC-A VPU Metrics to OpenNESS Telemetry](vcaca-images/vcac-a-vpu-metrics.png)
_Figure - Exporting VCAC-A VPU Metrics to OpenNESS Telemetry_
-Telemetry-Aware Scheduling (TAS) is the mechanism of defining policies that the controller aims to fulfill at run-time (based on the collected real-time metrics). A sample VCAC-A VPU telemetry policy is given below that is applied by default as part of the install sequence performed by the OEK.
+Telemetry-Aware Scheduling (TAS) is the mechanism of defining policies that the controller aims to fulfill at run-time (based on the collected real-time metrics). A sample VCAC-A VPU telemetry policy is given below that is applied by default as part of the install sequence performed by the CEEK.
```yaml
apiVersion: telemetry.intel.com/v1alpha1
@@ -152,7 +152,7 @@ spec:
- metricname: vpu_device_utilization
operator: LessThan
```
-> The above telemetry policy is applied by default as part of the VCAC-A install sequence performed by OEK.
+> The above telemetry policy is applied by default as part of the VCAC-A install sequence performed by CEEK.
The diagram below demonstrates an example use of the VCAC-A telemetry within the OpenNESS context:
@@ -166,11 +166,11 @@ _Figure - Using VCAC-A Telemetry with OpenNESS_
4. Now that the VPU device usage became 60, when the `OpenVINO` application turns up, it gets scheduled on VCA pool B in fulfillment of the policy.
## Media-Analytics-VCA Flavor
-The pre-defined OpenNESS flavor *media-analytics-vca* is provided to provision an optimized system configuration for media analytics workloads leveraging VCAC-A acceleration. This flavor is applied through the OEK playbook as described in the [OpenNESS Flavors](../flavors.md#media-analytics-flavor-with-vcac-a) document and encompasses the VCAC-A installation.
+The pre-defined OpenNESS flavor *media-analytics-vca* is provided to provision an optimized system configuration for media analytics workloads leveraging VCAC-A acceleration. This flavor is applied through the CEEK playbook as described in the [OpenNESS Flavors](../flavors.md#media-analytics-flavor-with-vcac-a) document and encompasses the VCAC-A installation.
-The VCAC-A installation in OEK performs the following tasks:
+The VCAC-A installation in CEEK performs the following tasks:
- Pull the release package from [Open Visual Cloud VCAC-A card media analytics software](https://github.com/OpenVisualCloud/VCAC-SW-Analytics) and the required dependencies
-- Apply CentOS 7.8 kernel patches and build kernel RPM
+- Apply CentOS 7.9 kernel patches and build kernel RPM
- Apply module patches and build driver RPM
- Build daemon utilities RPM
- Install docker-ce and kubernetes on the VCA host
diff --git a/doc/building-blocks/enhanced-platform-awareness/openness_hddl.md b/doc/building-blocks/enhanced-platform-awareness/openness_hddl.md
index 5dfca7b6..77f7af31 100644
--- a/doc/building-blocks/enhanced-platform-awareness/openness_hddl.md
+++ b/doc/building-blocks/enhanced-platform-awareness/openness_hddl.md
@@ -27,10 +27,10 @@ Each implementation for each hardware is an inference engine plugin.
The plugin for the Intel® Movidius™ Myriad™ X HDDL solution, or IE HDDL plugin for short, supports the Intel® Movidius™ Myriad™ X HDDL Solution hardware PCIe card. It communicates with the Intel® Movidius™ Myriad™ X HDDL HAL API to manage multiple Intel® Movidius™ Myriad™ X devices in the card, and it schedules deep-learning neural networks and inference tasks to these devices.
## HDDL OpenNESS Integration
-OpenNESS provides support for the deployment of OpenVINO™ applications and workloads accelerated through Intel® Vision Accelerator Design with the Intel® Movidius™ VPU HDDL-R add-in card. As a prerequisite for enabling the support, it is required for the HDDL add-in card to be inserted into the PCI slot of the Edge Node platform. The support is then enabled by setting the appropriate flag - 'ne_hddl_enable' in the '/group_vars/all/10-default.yml' before running OEK playbooks.
+OpenNESS provides support for the deployment of OpenVINO™ applications and workloads accelerated through Intel® Vision Accelerator Design with the Intel® Movidius™ VPU HDDL-R add-in card. As a prerequisite for enabling the support, the HDDL add-in card must be inserted into the PCI slot of the Edge Node platform. The support is then enabled by setting the appropriate flag - 'ne_hddl_enable' - in the '/inventory/default/group_vars/all/10-open.yml' file before running the CEEK playbooks.
> **NOTE** No pre-defined flavor is provided for HDDL. If user wants to enable HDDL with flavor, can set flag - 'ne_hddl_enable' in the 'flavors//all.yml'. The node with HDDL card inserted will be labelled as 'hddl-zone=true'.
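+
+A minimal sketch of enabling the support (flag name and file paths as given above):
+
+```yaml
+# /inventory/default/group_vars/all/10-open.yml (or flavors/<flavor>/all.yml)
+ne_hddl_enable: true
+```
+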
-The OEK automation script for HDDL will involve the following steps:
+The CEEK automation script for HDDL will involve the following steps:
- Download the HDDL DaemonSet yaml file from [Open Visual Cloud dockerfiles software](https://github.com/OpenVisualCloud/Dockerfiles) and templates it with specific configuration to satifiy OpenNESS need such as OpenVINO version...etc.
- Download the OpenVINO™, install kernel-devel and then install HDDL dependencies.
- Build the HDDLDdaemon image.
@@ -38,7 +38,7 @@ The OEK automation script for HDDL will involve the following steps:
- HDDL Daemon automatically brings up on the node with label 'hddl-zone=true'.
The HDDL Daemon provides the backend service to manage VPUs and dispatch inference tasks to VPUs. OpenVINO™-based applications that utilizes HDDL hardware need to access the device node '/dev/ion' and domain socket under '/var/tmp' to communicate with the kernel and HDDL service.
-> **NOTE** With the default kernel used by OpenNESS OEK, the ion driver will not enabled by OpenVINO™ toolkits, and the shared memory - '/dev/shm' will be used as fallback. More details refer to [installing_openvino_docker_linux](https://docs.openvinotoolkit.org/2020.2/_docs_install_guides_installing_openvino_docker_linux.html)
+> **NOTE** With the default kernel used by the OpenNESS CEEK, the ion driver will not be enabled by the OpenVINO™ toolkit, and shared memory - '/dev/shm' - will be used as a fallback. For more details, refer to [installing_openvino_docker_linux](https://docs.openvinotoolkit.org/2020.2/_docs_install_guides_installing_openvino_docker_linux.html)
![HDDL-Block-Diagram](hddl-images/hddlservice.png)
diff --git a/doc/building-blocks/index.html b/doc/building-blocks/index.html
index f1499d27..4acbbda9 100644
--- a/doc/building-blocks/index.html
+++ b/doc/building-blocks/index.html
@@ -10,5 +10,5 @@
---
You are being redirected to the OpenNESS Docs.
diff --git a/doc/cloud-adapters/openness_baiducloud.md b/doc/cloud-adapters/openness_baiducloud.md
index 57619722..5083d4ab 100644
--- a/doc/cloud-adapters/openness_baiducloud.md
+++ b/doc/cloud-adapters/openness_baiducloud.md
@@ -1,6 +1,6 @@
```text
SPDX-License-Identifier: Apache-2.0
-Copyright (c) 2019 Intel Corporation
+Copyright (c) 2019-2021 Intel Corporation
```
# OpenNESS Integration with Baidu OpenEdge
@@ -322,7 +322,7 @@ The scripts can be found in the release package with the subfolder name `setup_b
└── measure_rtt_openedge.py
```
-Before running the scripts, install python3.6 and paho mqtt on a CentOS\* Linux\* machine, where the recommended version is CentOS Linux release 7.8.2003 (Core).
+Before running the scripts, install python3.6 and paho mqtt on a CentOS\* Linux\* machine, where the recommended version is CentOS Linux release 7.9.2009 (Core).
The following are recommended install commands:
```docker
diff --git a/doc/devkits/openness-azure-devkit.md b/doc/devkits/openness-azure-devkit.md
index 889f9af5..0850bc74 100644
--- a/doc/devkits/openness-azure-devkit.md
+++ b/doc/devkits/openness-azure-devkit.md
@@ -14,4 +14,4 @@ for automated depoyment, and supports deployment using Porter. It enables cloud
## Getting Started
Following document contains steps for quick deployment on Azure:
-* [openness-experience-kits/cloud/README.md: Deployment and setup guide](https://github.com/open-ness/openness-experience-kits/blob/master/cloud/README.md)
+* [converged-edge-experience-kits/cloud/README.md: Deployment and setup guide](https://github.com/open-ness/converged-edge-experience-kits/blob/master/cloud/README.md)
diff --git a/doc/flavors.md b/doc/flavors.md
index cbb84c1b..160c8c4d 100644
--- a/doc/flavors.md
+++ b/doc/flavors.md
@@ -3,56 +3,82 @@ SPDX-License-Identifier: Apache-2.0
Copyright (c) 2020 Intel Corporation
```
-- [OpenNESS Deployment Flavors](#openness-deployment-flavors)
- - [CERA Minimal Flavor](#cera-minimal-flavor)
- - [CERA Access Edge Flavor](#cera-access-edge-flavor)
- - [CERA Media Analytics Flavor](#cera-media-analytics-flavor)
- - [CERA Media Analytics Flavor with VCAC-A](#cera-media-analytics-flavor-with-vcac-a)
- - [CERA CDN Transcode Flavor](#cera-cdn-transcode-flavor)
- - [CERA CDN Caching Flavor](#cera-cdn-caching-flavor)
- - [CERA Core Control Plane Flavor](#cera-core-control-plane-flavor)
- - [CERA Core User Plane Flavor](#cera-core-user-plane-flavor)
- - [CERA Untrusted Non3gpp Access Flavor](#cera-untrusted-non3gpp-access-flavor)
- - [CERA Near Edge Flavor](#cera-near-edge-flavor)
- - [CERA 5G On-Prem Flavor](#cera-5g-on-prem-flavor)
- - [Reference Service Mesh](#reference-service-mesh)
- - [Central Orchestrator Flavor](#central-orchestrator-flavor)
-
# OpenNESS Deployment Flavors
-
-This document introduces the supported deployment flavors that are deployable through OpenNESS Experience Kits (OEKs.
+This document introduces the supported deployment flavors that are deployable through the Converged Edge Experience Kits (CEEK).
+
+- [CERA Minimal Flavor](#cera-minimal-flavor)
+- [CERA Access Edge Flavor](#cera-access-edge-flavor)
+- [CERA Media Analytics Flavor](#cera-media-analytics-flavor)
+- [CERA Media Analytics Flavor with VCAC-A](#cera-media-analytics-flavor-with-vcac-a)
+- [CERA CDN Transcode Flavor](#cera-cdn-transcode-flavor)
+- [CERA CDN Caching Flavor](#cera-cdn-caching-flavor)
+- [CERA Core Control Plane Flavor](#cera-core-control-plane-flavor)
+- [CERA Core User Plane Flavor](#cera-core-user-plane-flavor)
+- [CERA Untrusted Non3gpp Access Flavor](#cera-untrusted-non3gpp-access-flavor)
+- [CERA Near Edge Flavor](#cera-near-edge-flavor)
+- [CERA 5G On-Prem Flavor](#cera-5g-on-prem-flavor)
+- [CERA 5G Central Office Flavor](#cera-5g-central-office-flavor)
+- [Central Orchestrator Flavor](#central-orchestrator-flavor)
+- [CERA SD-WAN Edge Flavor](#cera-sd-wan-edge-flavor)
+- [CERA SD-WAN Hub Flavor](#cera-sd-wan-hub-flavor)
## CERA Minimal Flavor
The pre-defined *minimal* deployment flavor provisions the minimal set of configurations for bringing up the OpenNESS network edge deployment.
The following are steps to install this flavor:
-1. Configure the OEK as described in the [OpenNESS Getting Started Guide for Network Edge](getting-started/network-edge/controller-edge-node-setup.md).
-2. Run the OEK deployment script:
+1. Configure the CEEK as described in the [OpenNESS Getting Started Guide for Network Edge](getting-started/openness-cluster-setup.md).
+2. Update the `inventory.yaml` file by setting the deployment flavor as `minimal`
+ ```yaml
+ ---
+ all:
+ vars:
+ cluster_name: minimal_cluster
+ flavor: minimal
+ ...
+ ```
+3. Run CEEK deployment script:
```shell
- $ deploy_ne.sh -f minimal
+ $ python3 deploy.py
```
This deployment flavor enables the following ingredients:
* Node feature discovery
-* The default Kubernetes CNI: `kube-ovn`
+* The default Kubernetes CNI: `calico`
* Telemetry
+To customize this flavor, we recommend creating an additional file in converged-edge-experience-kits that overrides any variables used in the previous configuration. This file should be placed in `converged-edge-experience-kits/inventory/default/group_vars/all`, and its filename should start with a number greater than the highest value currently present (e.g. `40-overrides.yml`).
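+
+A minimal sketch of such an override file (the variable shown is only an example taken from this document):
+
+```yaml
+# converged-edge-experience-kits/inventory/default/group_vars/all/40-overrides.yml
+telemetry_grafana_enable: false
+```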
+
## CERA Access Edge Flavor
The pre-defined *flexran* deployment flavor provisions an optimized system configuration for vRAN workloads on Intel® Xeon® platforms. It also provisions for deployment of Intel® FPGA Programmable Acceleration Card (Intel® FPGA PAC) N3000 tools and components to enable offloading for the acceleration of FEC (Forward Error Correction) to the FPGA.
The following are steps to install this flavor:
-1. Configure the OEK as described in the [OpenNESS Getting Started Guide for Network Edge](getting-started/network-edge/controller-edge-node-setup.md).
+1. Configure the CEEK as described in the [OpenNESS Getting Started Guide for Network Edge](getting-started/openness-cluster-setup.md).
2. Configure the flavor file to reflect desired deployment.
- - Configure the CPUs selected for isolation and OS/K8s processes from command line in files [controller_group.yml](https://github.com/open-ness/openness-experience-kits/blob/master/flavors/flexran/controller_group.yml) and [edgenode_group.yml](https://github.com/open-ness/openness-experience-kits/blob/master/flavors/flexran/edgenode_group.yml) - please note that in single node mode the edgenode_group.yml is used to configure the CPU isolation.
- - Configure the amount of CPUs reserved for K8s and OS from K8s level with `reserved_cpu` flag in [all.yml](https://github.com/open-ness/openness-experience-kits/blob/master/flavors/flexran/all.yml) file.
- - Configure whether the FPGA or eASIC support for FEC is desired or both in [all.yml](https://github.com/open-ness/openness-experience-kits/blob/master/flavors/flexran/all.yml) file.
-
-3. Run OEK deployment script:
+ - Configure the CPUs selected for isolation and OS/K8s processes from command line in files [controller_group.yml](https://github.com/open-ness/ido-converged-edge-experience-kits/blob/master/flavors/flexran/controller_group.yml) and [edgenode_group.yml](https://github.com/open-ness/ido-converged-edge-experience-kits/blob/master/flavors/flexran/edgenode_group.yml) - please note that in single node mode the edgenode_group.yml is used to configure the CPU isolation.
+ - Configure which CPUs are to be reserved for K8s and OS from K8s level with `reserved_cpu` flag in [all.yml](https://github.com/open-ness/ido-converged-edge-experience-kits/blob/master/flavors/flexran/all.yml) file.
+ - Configure whether the FPGA or eASIC support for FEC is desired or both in [all.yml](https://github.com/open-ness/ido-converged-edge-experience-kits/blob/master/flavors/flexran/all.yml) file.
+
+3. Provide the necessary files (see the staging sketch after these steps):
+ - Create the `ido-converged-edge-experience-kits/ceek/biosfw` directory and copy the `syscfg_package.zip` file to the directory (can be disabled with `ne_biosfw_enable` flag).
+ - Create the `ido-converged-edge-experience-kits/ceek/opae_fpga` directory and copy the OPAE_SDK_1.3.7-5_el7.zip to the directory (can be disabled with `ne_opae_fpga_enable` flag)
+ - Create the `ido-converged-edge-experience-kits/ceek/nic_drivers` directory and copy the `ice-1.3.2.tar.gz` and `iavf-4.0.2.tar.gz` files to the directory (can be disabled with `e810_driver_enable` flag).
+
+4. Update the `inventory.yaml` file by setting the deployment flavor as `flexran`
+ ```yaml
+ ---
+ all:
+ vars:
+ cluster_name: flexran_cluster
+ flavor: flexran
+ ...
+ ```
+
+5. Run CEEK deployment script:
```shell
- $ deploy_ne.sh -f flexran
+ $ python3 deploy.py
```
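+
+As referenced in step 3, a minimal sketch of staging the required files from the root of the `ido-converged-edge-experience-kits` checkout (the source paths are placeholders):
+
+```shell
+mkdir -p ceek/biosfw ceek/opae_fpga ceek/nic_drivers
+cp /path/to/syscfg_package.zip ceek/biosfw/
+cp /path/to/OPAE_SDK_1.3.7-5_el7.zip ceek/opae_fpga/
+cp /path/to/ice-1.3.2.tar.gz /path/to/iavf-4.0.2.tar.gz ceek/nic_drivers/
+```
+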
This deployment flavor enables the following ingredients:
* Node Feature Discovery
@@ -62,6 +88,7 @@ This deployment flavor enables the following ingredients:
* FPGA remote system update through OPAE
* FPGA configuration
* eASIC ACC100 configuration
+* E810 and IAVF kernel driver update
* RT Kernel
* Topology Manager
* RMD operator
@@ -71,18 +98,28 @@ This deployment flavor enables the following ingredients:
The pre-defined *media-analytics* deployment flavor provisions an optimized system configuration for media analytics workloads on Intel® Xeon® platforms. It also provisions a set of video analytics services based on the [Video Analytics Serving](https://github.com/intel/video-analytics-serving) for analytics pipeline management and execution.
The following are steps to install this flavor:
-1. Configure the OEK as described in the [OpenNESS Getting Started Guide for Network Edge](getting-started/network-edge/controller-edge-node-setup.md).
-2. Run the OEK deployment script:
+1. Configure the CEEK as described in the [OpenNESS Getting Started Guide for Network Edge](getting-started/openness-cluster-setup.md).
+2. Update the `inventory.yaml` file by setting the deployment flavor as `media-analytics`
+ ```yaml
+ ---
+ all:
+ vars:
+ cluster_name: media_analytics_cluster
+ flavor: media-analytics
+ ...
+ ```
+3. Run CEEK deployment script:
```shell
- $ deploy_ne.sh -f media-analytics
+ $ python3 deploy.py
```
> **NOTE:** The video analytics services integrates with the OpenNESS service mesh when the flag `ne_istio_enable: true` is set.
> **NOTE:** Kiali management console username can be changed by editing the variable `istio_kiali_username`. By default `istio_kiali_password` is randomly generated and can be retirieved by running `kubectl get secrets/kiali -n istio-system -o json | jq -r '.data.passphrase' | base64 -d` on the Kubernetes controller.
+> **NOTE:** Istio deployment can be customized using parameters in `flavors/media-analytics/all.yml` (parameters set in the flavor file override the default parameters set in `inventory/default/group_vars/all/10-open.yml`).
This deployment flavor enables the following ingredients:
* Node feature discovery
-* The default Kubernetes CNI: `kube-ovn`
+* The default Kubernetes CNI: `calico`
* Video analytics services
* Telemetry
* Istio service mesh - conditional
@@ -90,32 +127,43 @@ This deployment flavor enables the following ingredients:
## CERA Media Analytics Flavor with VCAC-A
-The pre-defined *media-analytics-vca* deployment flavor provisions an optimized system configuration for media analytics workloads leveraging Visual Cloud Accelerator Card – Analytics (VCAC-A) acceleration. It also provisions a set of video analytics services based on the [Video Analytics Serving](https://github.com/intel/video-analytics-serving) for analytics pipeline management and execution.
+The pre-defined *media-analytics-vca* deployment flavor provisions an optimized system configuration for media analytics workloads leveraging Visual Cloud Accelerator Card for Analytics (VCAC-A) acceleration.
The following are steps to install this flavor:
-1. Configure the OEK as described in the [OpenNESS Getting Started Guide for Network Edge](getting-started/network-edge/controller-edge-node-setup.md).
-2. Add the VCA hostname in the `[edgenode_vca_group]` group in `inventory.ini` file of the OEK, for example:
+1. Configure the CEEK as described in the [OpenNESS Getting Started Guide for Network Edge](getting-started/openness-cluster-setup.md).
+2. Add the VCA host name in the `edgenode_vca_group:` group in `inventory.yml` file of the CEEK, e.g:
+ ```yaml
+ edgenode_vca_group:
+ hosts:
+ vca-node01.openness.org:
+ ansible_host: 172.16.0.1
+ ansible_user: openness
```
- [edgenode_vca_group]
- silpixa00400194
+ > **NOTE:** The VCA host name should *only* be placed once in the `inventory.yml` file and under the `edgenode_vca_group:` group.
+
+3. Update the `inventory.yaml` file by setting the deployment flavor as `media-analytics-vca`
+ ```yaml
+ ---
+ all:
+ vars:
+ cluster_name: media_analytics_vca_cluster
+ flavor: media-analytics-vca
+ ...
```
- > **NOTE:** The VCA host name should *only* be placed once in the `inventory.ini` file and under the `[edgenode_vca_group]` group.
-
-3. Run the OEK deployment script:
+4. Run CEEK deployment script:
```shell
- $ deploy_ne.sh -f media-analytics-vca
+ $ python3 deploy.py
```
-> **NOTE:** At the time of writing this document, *Weave Net*\* is the only supported CNI for network edge deployments involving VCAC-A acceleration. The `weavenet` CNI is automatically selected by the *media-analytics-vca*.
-> **NOTE:** The flag `force_build_enable` (default true) supports force build VCAC-A system image (VCAD) by default, it is defined in flavors/media-analytics-vca/all.yml. By setting the flag as false, OEK will not rebuild the image and re-use the last system image built during deployment. If the flag is true, OEK will force build VCA host kernel and node system image which will take several hours.
+> **NOTE:** At the time of writing this document, *Weave Net*\* is the only supported CNI for network edge deployments involving VCAC-A acceleration. The `weavenet` CNI is automatically selected by the *media-analytics-vca*.
+> **NOTE:** The flag `force_build_enable` (default `true`) enables force-building the VCAC-A system image (VCAD) by default; it is defined in `flavors/media-analytics-vca/all.yml`. By setting the flag to `false`, CEEK will not rebuild the image and will re-use the last system image built during deployment. If the flag is `true`, CEEK will force-build the VCA host kernel and node system image, which will take several hours.
This deployment flavor enables the following ingredients:
* Node feature discovery
* VPU and GPU device plugins
* HDDL daemonset
* The `weavenet` Kubernetes CNI
-* Video analytics services
* Telemetry
## CERA CDN Transcode Flavor
@@ -123,15 +171,24 @@ This deployment flavor enables the following ingredients:
The pre-defined *cdn-transcode* deployment flavor provisions an optimized system configuration for Content Delivery Network (CDN) transcode sample workloads on Intel® Xeon® platforms.
The following are steps to install this flavor:
-1. Configure the OEK as described in the [OpenNESS Getting Started Guide for Network Edge](getting-started/network-edge/controller-edge-node-setup.md).
-2. Run the OEK deployment script:
+1. Configure the CEEK as described in the [OpenNESS Getting Started Guide for Network Edge](getting-started/openness-cluster-setup.md).
+2. Update the `inventory.yaml` file by setting the deployment flavor as `cdn-transcode`
+ ```yaml
+ ---
+ all:
+ vars:
+ cluster_name: cdn_transcode_cluster
+ flavor: cdn-transcode
+ ...
+ ```
+3. Run CEEK deployment script:
```shell
- $ deploy_ne.sh -f cdn-transcode
+ $ python3 deploy.py
```
This deployment flavor enables the following ingredients:
* Node feature discovery
-* The default Kubernetes CNI: `kube-ovn`
+* The default Kubernetes CNI: `calico`
* Telemetry
## CERA CDN Caching Flavor
@@ -139,10 +196,19 @@ This deployment flavor enables the following ingredients:
The pre-defined *cdn-caching* deployment flavor provisions an optimized system configuration for CDN content delivery workloads on Intel® Xeon® platforms.
The following are steps to install this flavor:
-1. Configure the OEK as described in the [OpenNESS Getting Started Guide for Network Edge](getting-started/network-edge/controller-edge-node-setup.md).
-2. Run the OEK deployment script:
+1. Configure the CEEK as described in the [OpenNESS Getting Started Guide for Network Edge](getting-started/openness-cluster-setup.md).
+2. Update the `inventory.yaml` file by setting the deployment flavor as `cdn-caching`
+ ```yaml
+ ---
+ all:
+ vars:
+ cluster_name: cdn_caching_cluster
+ flavor: cdn-caching
+ ...
+ ```
+3. Run CEEK deployment script:
```shell
- $ deploy_ne.sh -f cdn-caching
+ $ python3 deploy.py
```
This deployment flavor enables the following ingredients:
@@ -157,17 +223,25 @@ The pre-defined Core Control Plane flavor provisions the minimal set of configur
The following are steps to install this flavor:
-1. Configure the OEK as described in the [OpenNESS Getting Started Guide for Network Edge](getting-started/network-edge/controller-edge-node-setup.md).
-
-2. Run the x-OEK deployment script:
- ```
- $ ido-openness-experience-kits# deploy_ne.sh -f core-cplane
- ```
+1. Configure the CEEK as described in the [OpenNESS Getting Started Guide for Network Edge](getting-started/openness-cluster-setup.md).
+2. Update the `inventory.yaml` file by setting the deployment flavor as `core-cplane`
+ ```yaml
+ ---
+ all:
+ vars:
+ cluster_name: core_cplane_cluster
+ flavor: core-cplane
+ ...
+ ```
+3. Run ido-CEEK deployment script:
+ ```shell
+ $ python3 deploy.py
+ ```
This deployment flavor enables the following ingredients:
- Node feature discovery
-- The default Kubernetes CNI: kube-ovn
+- The default Kubernetes CNIs: calico, sriov
- Telemetry
- OpenNESS 5G Microservices
- OAM(Operation, Administration, Maintenance) and AF(Application Function) on the OpenNESS Controller/K8S Master.
@@ -175,7 +249,7 @@ This deployment flavor enables the following ingredients:
- Istio service mesh
- Kiali management console
-> **NOTE:** It is an expectation that the `core-cplane` deployment flavor is done for a setup consisting of *at least one* OpenNESS edge node, i.e: the `inventory.ini` must contain at least one host name under the `edgenode_group` section.
+> **NOTE:** The `core-cplane` deployment flavor is expected to be deployed with *at least one* OpenNESS edge node, i.e., the `inventory/default/inventory.ini` must contain at least one host name under the `edgenode_group` section.
> **NOTE:** For a real deployment with the 5G Core Network Functions the NEF and CNTF can be uninstalled using helm charts. Refer to [OpenNESS using CNCA](applications-onboard/using-openness-cnca.md)
@@ -186,17 +260,25 @@ This deployment flavor enables the following ingredients:
The pre-defined Core Control Plane flavor provisions the minimal set of configurations for a 5G User Plane Function on Intel® Xeon® platforms.
The following are steps to install this flavor:
-1. Configure the OEK as described in the [OpenNESS Getting Started Guide for Network Edge](getting-started/network-edge/controller-edge-node-setup.md).
-
-2. Run the x-OEK deployment script:
- ```
- $ ido-openness-experience-kits# deploy_ne.sh -f core-uplane
- ```
+1. Configure the CEEK as described in the [OpenNESS Getting Started Guide for Network Edge](getting-started/openness-cluster-setup.md).
+2. Update the `inventory.yaml` file by setting the deployment flavor as `core-uplane`
+ ```yaml
+ ---
+ all:
+   vars:
+     cluster_name: core_uplane_cluster
+     flavor: core-uplane
+ ...
+ ```
+3. Run the ido-CEEK deployment script:
+ ```shell
+ $ python3 deploy.py
+ ```
This deployment flavor enables the following ingredients:
- Node feature discovery
-- Kubernetes CNI: kube-ovn and SRIOV.
+- Kubernetes CNI: calico and SRIOV.
- CPU Manager for Kubernetes (CMK) with 4 exclusive cores (1 to 4) and 1 core in shared pool.
- Kubernetes Device Plugin
- Telemetry
@@ -210,13 +292,20 @@ The pre-defined Untrusted Non3pp Access flavor provisions the minimal set of con
The following are steps to install this flavor:
-1. Configure the OEK as described in the [OpenNESS Getting Started Guide for Network Edge](getting-started/network-edge/controller-edge-node-setup.md).
-
-2. Run the x-OEK deployment script:
-
- ```bash
- $ ido-openness-experience-kits# deploy_ne.sh -f untrusted-non3pp-access
- ```
+1. Configure the CEEK as described in the [OpenNESS Getting Started Guide for Network Edge](getting-started/openness-cluster-setup.md).
+2. Update the `inventory.yaml` file by setting the deployment flavor as `untrusted-non3pp-access`
+ ```yaml
+ ---
+ all:
+   vars:
+     cluster_name: untrusted_non3pp_access_cluster
+     flavor: untrusted-non3pp-access
+ ...
+ ```
+3. Run the ido-CEEK deployment script:
+ ```shell
+ $ python3 deploy.py
+ ```
This deployment flavor enables the following ingredients:
@@ -231,12 +320,21 @@ This deployment flavor enables the following ingredients:
The pre-defined CERA Near Edge flavor provisions the required set of configurations for a 5G Converged Edge Reference Architecture for Near Edge deployments on Intel® Xeon® platforms.
The following are steps to install this flavor:
-1. Configure the OEK under CERA repository as described in the [Converged Edge Reference Architecture Near Edge](https://github.com/open-ness/ido-specs/blob/master/doc/reference-architectures/CERA-Near-Edge.md).
-
-2. Run the x-OEK for CERA deployment script:
- ```shell
- $ ido-converged-edge-experience-kits# deploy_openness_for_cera.sh
- ```
+1. Configure the CEEK under CERA repository as described in the [Converged Edge Reference Architecture Near Edge](https://github.com/open-ness/ido-specs/blob/master/doc/reference-architectures/CERA-Near-Edge.md).
+2. Update the `inventory.yaml` file by setting the deployment flavor as `cera_5g_near_edge`
+ ```yaml
+ ---
+ all:
+   vars:
+     cluster_name: cera_5g_near_edge_cluster
+     flavor: cera_5g_near_edge
+     single_node_deployment: true
+ ...
+ ```
+3. Run the ido-CEEK deployment script:
+ ```shell
+ $ python3 deploy.py
+ ```
This deployment flavor enables the following ingredients:
@@ -255,12 +353,21 @@ This deployment flavor enables the following ingredients:
The pre-defined CERA Near Edge flavor provisions the required set of configurations for a 5G Converged Edge Reference Architecture for On Premises deployments on Intel® Xeon® platforms. It also provisions for deployment of Intel® FPGA Programmable Acceleration Card (Intel® FPGA PAC) N3000 tools and components to enable offloading for the acceleration of FEC (Forward Error Correction) to the FPGA.
The following are steps to install this flavor:
-1. Configure the OEK under CERA repository as described in the [Converged Edge Reference Architecture On Premises Edge](https://github.com/open-ness/ido-specs/blob/master/doc/reference-architectures/CERA-5G-On-Prem.md).
-
-2. Run the x-OEK for CERA deployment script:
- ```shell
- $ ido-converged-edge-experience-kits# deploy_openness_for_cera.sh
- ```
+1. Configure the CEEK under CERA repository as described in the [Converged Edge Reference Architecture On Premises Edge](https://github.com/open-ness/ido-specs/blob/master/doc/reference-architectures/CERA-5G-On-Prem.md).
+2. Update the `inventory.yaml` file by setting the deployment flavor as `cera_5g_on_premise`
+ ```yaml
+ ---
+ all:
+   vars:
+     cluster_name: cera_5g_on_premise_cluster
+     flavor: cera_5g_on_premise
+     single_node_deployment: true
+ ...
+ ```
+3. Run the ido-CEEK deployment script:
+ ```shell
+ $ python3 deploy.py
+ ```
This deployment flavor enables the following ingredients:
@@ -277,43 +384,35 @@ This deployment flavor enables the following ingredients:
- HugePages of size 1Gi and the amount of HugePages as 40G for the nodes
- RMD operator
-## Reference Service Mesh
-
-Service Mesh technology enables services discovery and sharing of data between application services. This technology can be useful in any CERA. Customers will find Service Mesh under flavors directory as a reference to quickly try out the technology and understand the implications. In future OpenNESS releases this Service Mesh will not be a dedicated flavor.
-
-The pre-defined *service-mesh* deployment flavor installs the OpenNESS service mesh that is based on [Istio](https://istio.io/).
+## CERA 5G Central Office Flavor
-> **NOTE**: When deploying Istio Service Mesh in VMs, a minimum of 8 CPU core and 16GB RAM must be allocated to each worker VM so that Istio operates smoothly
+The pre-defined CERA 5G Central Office flavor provisions the required set of configurations for a 5G Converged Edge Reference Architecture for Core Network application deployments on Intel® Xeon® platforms.
-Steps to install this flavor are as follows:
-1. Configure OEK as described in the [OpenNESS Getting Started Guide for Network Edge](getting-started/network-edge/controller-edge-node-setup.md).
-2. Run OEK deployment script:
+The following are steps to install this flavor:
+1. Configure the CEEK under CERA repository as described in the [Converged Edge Reference Architecture On Premises Edge](reference-architectures/CERA-5G-On-Prem.md) or [Converged Edge Reference Architecture Near Edge](reference-architectures/CERA-Near-Edge.md).
+2. Update the `inventory.yaml` file by setting the deployment flavor as `cera_5g_central_office`
+ ```yaml
+ ---
+ all:
+   vars:
+     cluster_name: cera_5g_central_office_cluster
+     flavor: cera_5g_central_office
+     single_node_deployment: true
+ ...
+ ```
+3. Run the ido-CEEK deployment script:
```shell
- $ deploy_ne.sh -f service-mesh
+ $ python3 deploy.py
```
This deployment flavor enables the following ingredients:
-* Node Feature Discovery
-* The default Kubernetes CNI: `kube-ovn`
-* Istio service mesh
-* Kiali management console
-* Telemetry
-
-> **NOTE:** Kiali management console username can be changed by editing the variable `istio_kiali_username`. By default `istio_kiali_password` is randomly generated and can be retirieved by running `kubectl get secrets/kiali -n istio-system -o json | jq -r '.data.passphrase' | base64 -d` on the Kubernetes controller.
-
-Following parameters in the flavor/all.yaml can be customize for Istio deployment:
-
-```code
-# Istio deployment profile possible values: default, demo, minimal, remote
-istio_deployment_profile: "default"
-# Kiali
-istio_kiali_username: "admin"
-istio_kiali_password: "{{ lookup('password', '/dev/null length=16') }}"
-istio_kiali_nodeport: 30001
-```
-
-> **NOTE:** If creating a customized flavor, the Istio service mesh installation can be included in the Ansible playbook by setting the flag `ne_istio_enable: true` in the flavor file.
+- Kubernetes CNI: Calico and SRIOV.
+- SRIOV device plugin
+- Virtual Functions
+- Kubernetes Device Plugin
+- BIOSFW feature
+- HugePages of size 8Gi and the amount of HugePages as 40G for the nodes
## Central Orchestrator Flavor
@@ -321,14 +420,95 @@ Central Orchestrator Flavor is used to deploy EMCO.
The pre-defined *orchestration* deployment flavor provisions an optimized system configuration for emco (central orchestrator) workloads on Intel Xeon servers. It also provisions a set of central orchestrator services for [edge, multiple clusters orchestration](building-blocks/emco/openness-emco.md).
-Steps to install this flavor are as follows:
-1. Configure OEK as described in the [OpenNESS Getting Started Guide for Network Edge](getting-started/network-edge/controller-edge-node-setup.md).
-2. Run OEK deployment script:
+The following are steps to install this flavor:
+1. Configure the CEEK as described in the [OpenNESS Getting Started Guide for Network Edge](getting-started/openness-cluster-setup.md).
+2. Update the `inventory.yaml` file by setting the deployment flavor as `central_orchestrator`
+ ```yaml
+ ---
+ all:
+   vars:
+     cluster_name: central_orchestrator_cluster
+     flavor: central_orchestrator
+ ...
+ ```
+3. Run the CEEK deployment script:
```shell
- $ deploy_ne.sh -f central_orchestrator
+ $ python3 deploy.py
```
This deployment flavor enables the following ingredients:
* Harbor Registry
-* The default Kubernetes CNI: `kube-ovn`
-* EMCO services
\ No newline at end of file
+* The default Kubernetes CNI: `calico`
+* EMCO services
+
+## CERA SD-WAN Edge Flavor
+
+CERA SD-WAN Edge flavor is used to deploy SD-WAN on the OpenNESS cluster acting as an Edge platform. This CERA flavor only supports single-node OpenNESS deployments. It provides a configuration that supports running SD-WAN CNFs on the OpenNESS cluster, enables hardware accelerators with the HDDL plugin, and adds support for service mesh and node feature discovery to aid other applications and services running on the Edge node. This CERA flavor disables the EAA, Kafka, and Edge DNS services for platform optimization.
+
+The following are steps to install this flavor:
+1. Configure the CEEK as described in the [OpenNESS Getting Started Guide for Network Edge](getting-started/openness-cluster-setup.md).
+2. Configure the CNF as described in [Converged Edge Reference Architecture for SD-WAN](reference-architectures/cera_sdwan.md#ewo-configuration).
+3. Update the `inventory.yaml` file by setting the deployment flavor as `sdewan-edge`
+ ```yaml
+ ---
+ all:
+   vars:
+     cluster_name: sdewan_edge_cluster
+     flavor: sdewan-edge
+     single_node_deployment: true
+ ...
+ ```
+4. Run the CEEK deployment script:
+ ```shell
+ $ python3 deploy.py
+ ```
+
+This CERA flavor enables the following deployment configuration:
+* Istio service mesh in the `default` namespace
+* Node Feature Discovery
+* The primary K8s CNI: `calico`
+* The secondary K8s CNI: `ovn4nfv`
+* HDDL support
+* Telemetry
+* Reserved CPUs for K8s and OS daemons
+* Kiali management console
+
+This CERA flavor disables the following deployment configuration:
+* EAA service with Kafka
+* Edge DNS
+
+## CERA SD-WAN Hub Flavor
+
+CERA SD-WAN Hub flavor is used to deploy SD-WAN on the OpenNESS cluster acting as a Hub for Edge clusters. It only supports single-node OpenNESS deployments. This CERA flavor disables the EAA, Kafka, and Edge DNS services for platform optimization.
+
+The following are steps to install this flavor:
+1. Configure the CEEK as described in the [OpenNESS Getting Started Guide for Network Edge](getting-started/openness-cluster-setup.md).
+2. Configure the CNF as described in [Converged Edge Reference Architecture for SD-WAN](reference-architectures/cera_sdwan.md#ewo-configuration).
+3. Update the `inventory.yaml` file by setting the deployment flavor as `sdewan-hub`
+ ```yaml
+ ---
+ all:
+   vars:
+     cluster_name: sdewan_hub_cluster
+     flavor: sdewan-hub
+     single_node_deployment: true
+ ...
+ ```
+4. Run the CEEK deployment script:
+ ```shell
+ $ python3 deploy.py
+ ```
+
+This CERA flavor enables the following deployment configuration:
+* The primary CNI: `calico`
+* The secondary CNI: `ovn4nfv`
+* Telemetry
+* Reserved CPUs for K8s and OS daemons
+* Kiali management console
+
+
+This CERA flavor disables the following deployment configuration:
+* Node Feature Discovery
+* EAA service with Kafka
+* Edge DNS
+* HDDL support
diff --git a/doc/getting-started/converged-edge-experience-kits.md b/doc/getting-started/converged-edge-experience-kits.md
new file mode 100644
index 00000000..d6db91b0
--- /dev/null
+++ b/doc/getting-started/converged-edge-experience-kits.md
@@ -0,0 +1,416 @@
+```text
+SPDX-License-Identifier: Apache-2.0
+Copyright (c) 2019-2021 Intel Corporation
+```
+
+# Converged Edge Experience Kits
+- [Purpose](#purpose)
+- [Converged Edge Experience Kit explained](#converged-edge-experience-kit-explained)
+- [The inventory file](#the-inventory-file)
+- [Sample Deployment Definitions](#sample-deployment-definitions)
+ - [Single Cluster Deployment](#single-cluster-deployment)
+ - [Single-node Cluster Deployment](#single-node-cluster-deployment)
+ - [Multi-cluster deployment](#multi-cluster-deployment)
+- [Deployment customization](#deployment-customization)
+- [Customizing kernel, grub parameters, and tuned profile & variables per host](#customizing-kernel-grub-parameters-and-tuned-profile--variables-per-host)
+ - [IP address range allocation for various CNIs and interfaces](#ip-address-range-allocation-for-various-cnis-and-interfaces)
+ - [Default values](#default-values)
+ - [Use different realtime kernel (3.10.0-1062)](#use-different-realtime-kernel-3100-1062)
+ - [Use different non-rt kernel (3.10.0-1062)](#use-different-non-rt-kernel-3100-1062)
+ - [Use tuned 2.9](#use-tuned-29)
+ - [Default kernel and configure tuned](#default-kernel-and-configure-tuned)
+ - [Change amount of HugePages](#change-amount-of-hugepages)
+ - [Change size of HugePages](#change-size-of-hugepages)
+ - [Change amount and size of HugePages](#change-amount-and-size-of-hugepages)
+ - [Remove input output memory management unit (IOMMU) from grub params](#remove-input-output-memory-management-unit-iommu-from-grub-params)
+ - [Add custom GRUB parameter](#add-custom-grub-parameter)
+ - [Configure OVS-DPDK in kube-ovn](#configure-ovs-dpdk-in-kube-ovn)
+- [Adding new CNI plugins for Kubernetes (Network Edge)](#adding-new-cni-plugins-for-kubernetes-network-edge)
+
+## Purpose
+
+The Converged Edge Experience Kit is a refreshed repository of Ansible\* playbooks for automated deployment of Converged Edge Reference Architectures.
+
+The Converged Edge Experience Kit introduces the following capabilities:
+1. Wide range of deployments from individual building blocks to full end-to-end reference deployments
+2. Minimal to near-zero user intervention. Typically, the user provides the details of the nodes that constitute the OpenNESS edge cluster and executes the deployment script
+3. More advanced deployments can be customized in the form of Ansible\* group and host variables. This mode requires users with in-depth knowledge and expertise of the subject edge deployment
+4. Enablement of end-to-end multi-cluster deployments such as Near Edge and On-premises reference architectures
+
+## Converged Edge Experience Kit explained
+The Converged Edge Experience Kit repository is organized as detailed in the following structure:
+```
+├── cloud
+├── flavors
+├── inventory
+│   ├── automated
+│   └── default
+│       ├── group_vars
+│       │   └── all
+│       │       └── 10-default.yml
+│       └── host_vars
+├── playbooks
+│   ├── infrastructure.yml
+│   ├── kubernetes.yml
+│   └── applications.yml
+├── roles
+│   ├── applications
+│   ├── infrastructure
+│   ├── kubernetes
+│   └── telemetry
+├── scripts
+├── tasks
+├── inventory.yml
+├── network_edge_cleanup.yml
+└── deploy.py
+```
+
+* `flavors`: definition variables of pre-defined deployment flavors
+* `inventory`: definition of default & generated Ansible\* variables
+* `inventory/default/group_vars/all/10-default.yml`: definition of default variables for all deployments
+* `inventory/automated`: inventory files that were automatically generated by the deployment helper script
+* `playbooks`: Ansible\* playbooks for infrastructure, Kubernetes and applications
+* `roles`: Ansible roles for infrastructure, Kubernetes, applications and telemetry
+* `scripts`: utility scripts
+* `inventory.yml`: definition of the clusters, their controller & edge nodes and respective deployment flavors
+* `deploy.py`: the deployment helper script
+
+
+## The inventory file
+The inventory file defines the group of physical nodes that constitute the edge cluster which will be deployed by the Converged Edge Experience Kits. The inventory file YAML specification allows deploying multiple edge clusters in one command run. Multiple clusters must be separated by the 3 dashes `---` directive.
+
+> **NOTE**: for multi-cluster deployments, user must assign distinct names to the controller and the edge nodes, i.e., no hostname repetitions.
+
+The following variables must be defined:
+
+* `cluster_name`: a given name for the OpenNESS edge cluster deployment - separated by underscores `_` instead of spaces.
+* `flavor`: the deployment flavor applicable for the OpenNESS edge deployment as defined in the [Deployment flavors](../flavors.md) document.
+* `single_node_deployment`: If set to `true`, a single-node cluster is deployed. The following conditions must be satisfied:
+ - IP address (`ansible_host`) for both controller and node must be the same
+ - `controller_group` and `edgenode_group` groups must contain exactly one host
+* `limit` -- **OPTIONAL**: constrains the deployment to a specific Ansible\* group, e.g., `controller`, `edgenode`, `edgenode_vca_group` or just a particular hostname. This is passed as a `--limit` command-line option when executing `ansible-playbook`.
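+
+For illustration only, below is a sketch of the `vars` section with a non-empty `limit`, reusing one of the example values listed above:
+
+```yaml
+all:
+  vars:
+    cluster_name: 5g_near_edge
+    flavor: cera_5g_near_edge
+    single_node_deployment: false
+    limit: controller   # constrain this deployment to the controller only
+```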
+
+## Sample Deployment Definitions
+### Single Cluster Deployment
+Set the `single_node_deployment` flag to `false` in the inventory file and provide the controller node name under the `controller_group` and the edge node names under the `edgenode_group`.
+
+Example:
+
+```yaml
+---
+all:
+  vars:
+    cluster_name: 5g_near_edge
+    flavor: cera_5g_near_edge
+    single_node_deployment: false
+    limit:
+controller_group:
+  hosts:
+    ctrl.openness.org:
+      ansible_host: 10.102.227.154
+      ansible_user: openness
+edgenode_group:
+  hosts:
+    node01.openness.org:
+      ansible_host: 10.102.227.11
+      ansible_user: openness
+    node02.openness.org:
+      ansible_host: 10.102.227.79
+      ansible_user: openness
+edgenode_vca_group:
+  hosts:
+ptp_master:
+  hosts:
+ptp_slave_group:
+  hosts:
+```
+
+### Single-node Cluster Deployment
+Set the `single_node_deployment` flag to `true` in the inventory file and provide the node name in the `controller_group` and the `edgenode_group`.
+
+Example:
+
+```yaml
+---
+all:
+  vars:
+    cluster_name: 5g_central_office
+    flavor: cera_5g_central_office
+    single_node_deployment: true
+    limit:
+controller_group:
+  hosts:
+    node.openness.org:
+      ansible_host: 10.102.227.234
+      ansible_user: openness
+edgenode_group:
+  hosts:
+    node.openness.org:
+      ansible_host: 10.102.227.234
+      ansible_user: openness
+edgenode_vca_group:
+  hosts:
+ptp_master:
+  hosts:
+ptp_slave_group:
+  hosts:
+```
+
+### Multi-cluster deployment
+Provide multiple cluster YAML specifications separated by the three-dash `---` directive in `inventory.yml`. A node name should be used only once across the inventory file, i.e., node names must be distinct.
+
+Example:
+
+```yaml
+---
+all:
+  vars:
+    cluster_name: 5g_near_edge
+    flavor: cera_5g_near_edge
+    single_node_deployment: true
+    limit:
+controller_group:
+  hosts:
+    node.openness01.org:
+      ansible_host: 10.102.227.154
+      ansible_user: openness
+edgenode_group:
+  hosts:
+    node.openness01.org:
+      ansible_host: 10.102.227.154
+      ansible_user: openness
+edgenode_vca_group:
+  hosts:
+ptp_master:
+  hosts:
+ptp_slave_group:
+  hosts:
+---
+all:
+  vars:
+    cluster_name: 5g_central_office
+    flavor: cera_5g_central_office
+    single_node_deployment: true
+    limit:
+controller_group:
+  hosts:
+    node.openness02.org:
+      ansible_host: 10.102.227.234
+      ansible_user: openness
+edgenode_group:
+  hosts:
+    node.openness02.org:
+      ansible_host: 10.102.227.234
+      ansible_user: openness
+edgenode_vca_group:
+  hosts:
+ptp_master:
+  hosts:
+ptp_slave_group:
+  hosts:
+```
+
+## Deployment customization
+The `deploy.py` script creates a new inventory for each cluster to be deployed in the `inventory/automated` directory. These inventories are based on `inventory/default` - all directories and files are symlinked. Additionally, relevant flavor files are symlinked.
+
+Customizations made to `inventory/default/group_vars` and `inventory/default/host_vars` will affect every deployment performed by `deploy.py` (because these files are symlinked, not copied). Therefore, these directories are a good place to provide changes relevant to the nodes of the cluster.
+
+## Customizing kernel, grub parameters, and tuned profile & variables per host
+
+CEEKs allow a user to customize the kernel, grub parameters, and tuned profiles by leveraging Ansible's `host_vars` feature.
+
+> **NOTE**: The `inventory/default/group_vars/[edgenode|controller|edgenode_vca]_group` directories contain variables applicable to the respective groups; they can be overridden on a per-node basis in `inventory/default/host_vars`, while `inventory/default/group_vars/all` contains cluster-wide variables.
+
+CEEKs contain an `inventory/default/host_vars/` directory in which we can create another directory (`nodes-inventory-name`) and place a YAML file (`10-open.yml`, e.g., `node01/10-open.yml`). The file would contain variables that override the roles' default values.
+
+> **NOTE**: Despite the ability to customize parameters (kernel), it is required to have a clean CentOS\* 7.9.2009 operating system installed on hosts (from a minimal ISO image) that will be later deployed from Ansible scripts. This OS shall not have any user customizations.
+
+To override the default value, place the variable's name and new value in the host's vars file. For example, the contents of `inventory/default/host_vars/node01/10-open.yml` that would result in skipping kernel customization on that node:
+
+```yaml
+kernel_skip: true
+```
+
+The following are several common customization scenarios.
+
+### IP address range allocation for various CNIs and interfaces
+
+The Converged Edge Experience Kits deployment allocates and reserves a set of IP address ranges for the different CNIs and interfaces. The server or host IP address should not conflict with the default address allocation.
+If the server IP address must fall within a range used by the OpenNESS default deployment, the default addresses used by OpenNESS need to be modified.
+
+The following files specify the CIDRs for the CNIs and interfaces. These are the IP address ranges allocated and used by default, listed here for reference.
+
+```yaml
+flavors/media-analytics-vca/all.yml:19:vca_cidr: "172.32.1.0/12"
+inventory/default/group_vars/all/10-open.yml:90:calico_cidr: "10.245.0.0/16"
+inventory/default/group_vars/all/10-open.yml:93:flannel_cidr: "10.244.0.0/16"
+inventory/default/group_vars/all/10-open.yml:96:weavenet_cidr: "10.32.0.0/12"
+inventory/default/group_vars/all/10-open.yml:99:kubeovn_cidr: "10.16.0.0/16,100.64.0.0/16,10.96.0.0/12"
+roles/kubernetes/cni/kubeovn/controlplane/templates/crd_local.yml.j2:13: cidrBlock: "192.168.{{ loop.index0 + 1 }}.0/24"
+```
+
+The `192.168.*.*` range is used for SRIOV and Interface Service IP address allocation in the Kube-OVN CNI, so a server IP address that conflicts with this range is not allowed.
+Avoid the entire address range defined by the netmask, as it may conflict with routing rules.
+
+For example, if the server/host IP address must use `192.168.*.*` while this range is used by default for SRIOV interfaces in OpenNESS, the `cidrBlock` value in the `roles/kubernetes/cni/kubeovn/controlplane/templates/crd_local.yml.j2` file can be changed to `192.167.{{ loop.index0 + 1 }}.0/24` so that a different IP segment is used for SRIOV interfaces.
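+
+For instance, the modified template line would then read as follows (a sketch based on the default value shown above):
+
+```yaml
+# roles/kubernetes/cni/kubeovn/controlplane/templates/crd_local.yml.j2
+cidrBlock: "192.167.{{ loop.index0 + 1 }}.0/24"
+```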
+
+
+### Default values
+Here are several default values:
+
+```yaml
+# --- machine_setup/custom_kernel
+kernel_skip: false # use this variable to disable custom kernel installation for host
+
+kernel_repo_url: http://linuxsoft.cern.ch/cern/centos/7.9.2009/rt/CentOS-RT.repo
+kernel_repo_key: http://linuxsoft.cern.ch/cern/centos/7.9.2009/os/x86_64/RPM-GPG-KEY-cern
+kernel_package: kernel-rt-kvm
+kernel_devel_package: kernel-rt-devel
+kernel_version: 3.10.0-1160.11.1.rt56.1145.el7.x86_64
+
+kernel_dependencies_urls: []
+kernel_dependencies_packages: []
+
+# --- machine_setup/grub
+hugepage_size: "2M" # Or 1G
+hugepage_amount: "5000"
+
+default_grub_params: "hugepagesz={{ hugepage_size }} hugepages={{ hugepage_amount }} intel_iommu=on iommu=pt"
+additional_grub_params: ""
+
+# --- machine_setup/configure_tuned
+tuned_skip: false # use this variable to skip tuned profile configuration for host
+tuned_packages:
+ - tuned-2.11.0-9.el7
+ - http://ftp.scientificlinux.org/linux/scientific/7/x86_64/os/Packages/tuned-profiles-realtime-2.11.0-9.el7.noarch.rpm
+tuned_profile: realtime
+tuned_vars: |
+ isolated_cores=2-3
+ nohz=on
+ nohz_full=2-3
+```
+
+### Use different realtime kernel (3.10.0-1062)
+By default, `kernel-rt-kvm-3.10.0-1160.11.1.rt56.1145.el7.x86_64` from the built-in repository is installed.
+
+To use another version (e.g., `kernel-rt-kvm-3.10.0-1062.9.1.rt56.1033.el7.x86_64`), create a `host_var` file for the host with content:
+```yaml
+kernel_version: 3.10.0-1062.9.1.rt56.1033.el7.x86_64
+```
+
+### Use different non-rt kernel (3.10.0-1062)
+The CEEK installs a real-time kernel by default. However, the non-rt kernel is present in the official CentOS repository. Therefore, to use a different non-rt kernel, the following overrides must be applied:
+```yaml
+kernel_repo_url: "" # package is in default repository, no need to add new repository
+kernel_package: kernel # instead of kernel-rt-kvm
+kernel_devel_package: kernel-devel # instead of kernel-rt-devel
+kernel_version: 3.10.0-1062.el7.x86_64
+
+dpdk_kernel_devel: "" # kernel-devel is in the repository, no need for url with RPM
+
+# Since we're not using the rt kernel, we don't need tuned-profiles-realtime but want to keep tuned 2.11
+tuned_packages:
+- http://linuxsoft.cern.ch/scientific/7x/x86_64/os/Packages/tuned-2.11.0-8.el7.noarch.rpm
+tuned_profile: balanced
+tuned_vars: ""
+```
+
+### Use tuned 2.9
+```yaml
+tuned_packages:
+- tuned-2.9.0-1.el7fdp
+- tuned-profiles-realtime-2.9.0-1.el7fdp
+```
+
+### Default kernel and configure tuned
+```yaml
+kernel_skip: true # skip kernel customization altogether
+
+# update tuned to 2.11 but don't install tuned-profiles-realtime since we're not using rt kernel
+tuned_packages:
+- http://linuxsoft.cern.ch/scientific/7x/x86_64/os/Packages/tuned-2.11.0-8.el7.noarch.rpm
+tuned_profile: balanced
+tuned_vars: ""
+```
+
+### Change amount of HugePages
+```yaml
+hugepage_amount: "1000" # default is 5000
+```
+
+### Change size of HugePages
+```yaml
+hugepage_size: "1G" # default is 2M
+```
+
+### Change amount and size of HugePages
+```yaml
+hugepage_amount: "10" # default is 5000
+hugepage_size: "1G" # default is 2M
+```
+
+### Remove input output memory management unit (IOMMU) from grub params
+```yaml
+default_grub_params: "hugepagesz={{ hugepage_size }} hugepages={{ hugepage_amount }}"
+```
+
+### Add custom GRUB parameter
+```yaml
+additional_grub_params: "debug"
+```
+
+### Configure OVS-DPDK in kube-ovn
+By default, OVS-DPDK is disabled (because Calico is set as the default CNI). To enable it, set the flag:
+```yaml
+kubeovn_dpdk: true
+```
+
+> **NOTE**: This flag should be set in `roles/kubernetes/cni/kubeovn/common/defaults/main.yml` or added to `inventory/default/group_vars/all/10-open.yml`.
+
+Additionally, HugePages in the OVS pod can be adjusted once default HugePage settings are changed.
+```yaml
+kubeovn_dpdk_socket_mem: "1024,0" # Amount of hugepages reserved for OVS per NUMA node (node 0, node 1, ...) in MB
+kubeovn_dpdk_hugepage_size: "2Mi" # Default size of hugepages, can be 2Mi or 1Gi
+kubeovn_dpdk_hugepages: "1Gi" # Total amount of hugepages that can be used by OVS-OVN pod
+```
+
+> **NOTE**: If the machine has multiple NUMA nodes, remember that HugePages must be allocated for **each NUMA node**. For example, if a machine has two NUMA nodes, `kubeovn_dpdk_socket_mem: "1024,1024"` or similar should be specified.
+
+>**NOTE**: If `kubeovn_dpdk_socket_mem` is changed, set the value of `kubeovn_dpdk_hugepages` to be equal to or greater than the sum of `kubeovn_dpdk_socket_mem` values. For example, for `kubeovn_dpdk_socket_mem: "1024,1024"`, set `kubeovn_dpdk_hugepages` to at least `2Gi` (equal to 2048 MB).
+
+>**NOTE**: `kubeovn_dpdk_socket_mem`, `kubeovn_dpdk_pmd_cpu_mask`, and `kubeovn_dpdk_lcore_mask` can be set on per node basis but the HugePage amount allocated with `kubeovn_dpdk_socket_mem` cannot be greater than `kubeovn_dpdk_hugepages`, which is the same for the whole cluster.
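+
+As a minimal sketch combining the notes above, an override for a machine with two NUMA nodes could look like:
+
+```yaml
+kubeovn_dpdk_socket_mem: "1024,1024"  # HugePages reserved for OVS on each NUMA node, in MB
+kubeovn_dpdk_hugepages: "2Gi"         # at least the sum of the socket_mem values
+```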
+
+OVS pods limits are configured by:
+```yaml
+kubeovn_dpdk_resources_requests: "1Gi" # OVS-OVN pod RAM memory (requested)
+kubeovn_dpdk_resources_limits: "1Gi" # OVS-OVN pod RAM memory (limit)
+```
+CPU settings can be configured using:
+```yaml
+kubeovn_dpdk_pmd_cpu_mask: "0x4" # DPDK PMD CPU mask
+kubeovn_dpdk_lcore_mask: "0x2" # DPDK lcore mask
+```
+
+## Adding new CNI plugins for Kubernetes (Network Edge)
+
+* The role that handles CNI deployment must be placed in the `roles/kubernetes/cni/` directory (e.g., `roles/kubernetes/cni/kube-ovn/`).
+* Subroles for control plane and node (if needed) should be placed in the `controlplane/` and `node/` directories (e.g., `roles/kubernetes/cni/kube-ovn/{controlplane,node}`).
+* If part of the setup is common to both the control plane and the node, an additional `common` sub-role can be created (e.g., `roles/kubernetes/cni/sriov/common`).
+>**NOTE**: The automatic inclusion of the `common` role should be handled by Ansible mechanisms (e.g., usage of meta's `dependencies` or `include_role` module)
+* The name of the main role must be added to the `available_kubernetes_cnis` variable in `roles/kubernetes/cni/defaults/main.yml` (see the sketch after this list).
+* If additional requirements must be checked before running the playbook (to avoid errors during execution), they can be placed in the `roles/kubernetes/cni/tasks/precheck.yml` file, which is included as a pre_task in plays for both the Edge Controller and the Edge Node.
+The following are basic prechecks that are currently executed:
+ * Check if any CNI is requested (i.e., `kubernetes_cni` is not empty).
+ * Check if `sriov` is not requested as primary (first on the list) or standalone (only on the list).
+ * Check if `calico` is requested as a primary (first on the list).
+ * Check if `kubeovn` is requested as a primary (first on the list).
+ * Check if the requested CNI is available (check if some CNI is requested that isn't present in the `available_kubernetes_cnis` list).
+* CNI roles should be as self-contained as possible (unless necessary, CNI-specific tasks should not be present in `kubernetes/{controlplane,node,common}` or `openness/network_edge/{controlplane,node}`).
+* If the CNI needs a custom OpenNESS service (e.g., Interface Service in case of `kube-ovn`), it can be added to the `openness/network_edge/{controlplane,node}`.
+ Preferably, such tasks would be contained in a separate task file (e.g., `roles/openness/controlplane/tasks/kube-ovn.yml`) and executed only if the CNI is requested. For example:
+ ```yaml
+ - name: deploy interface service for kube-ovn
+   include_tasks: kube-ovn.yml
+   when: "'kubeovn' in kubernetes_cnis"
+ ```
+* If the CNI is used as an additional CNI (with Multus\*), the network attachment definition must be supplied ([refer to Multus docs for more info](https://github.com/intel/multus-cni/blob/master/docs/quickstart.md#storing-a-configuration-as-a-custom-resource)).
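+
+As a purely illustrative sketch of the registration step referenced above, a hypothetical new role `mycni` would be appended to `roles/kubernetes/cni/defaults/main.yml`; the existing entries shown here are assumptions based on the CNIs mentioned in this document, not the actual file contents:
+
+```yaml
+available_kubernetes_cnis:
+  - calico
+  - kubeovn
+  - flannel
+  - weavenet
+  - sriov
+  - mycni   # hypothetical new CNI role located under roles/kubernetes/cni/mycni/
+```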
diff --git a/doc/getting-started/harbor-registry.md b/doc/getting-started/harbor-registry.md
new file mode 100644
index 00000000..11469d84
--- /dev/null
+++ b/doc/getting-started/harbor-registry.md
@@ -0,0 +1,206 @@
+```text
+SPDX-License-Identifier: Apache-2.0
+Copyright (c) 2019-2021 Intel Corporation
+```
+
+# Harbor Registry Service in OpenNESS
+- [Deploy Harbor registry](#deploy-harbor-registry)
+ - [System Prerequisite](#system-prerequisite)
+ - [Ansible Playbooks](#ansible-playbooks)
+ - [Projects](#projects)
+- [Harbor login](#harbor-login)
+- [Harbor registry image push](#harbor-registry-image-push)
+- [Harbor registry image pull](#harbor-registry-image-pull)
+- [Harbor UI](#harbor-ui)
+- [Harbor CLI](#harbor-cli)
+ - [CLI - List Project](#cli---list-project)
+ - [CLI - List Image Repositories](#cli---list-image-repositories)
+ - [CLI - Delete Image](#cli---delete-image)
+
+Harbor registry is an open-source, cloud-native registry that stores images and relevant artifacts with extended functionality, as described at [Harbor](https://goharbor.io/). In the OpenNESS environment, the Harbor registry service is installed on the control plane node by the Harbor Helm chart [github](https://github.com/goharbor/harbor-helm/releases/tag/v1.5.1). Harbor registry authentication is enabled with self-signed certificates, and all nodes as well as the control plane have access to the Harbor registry.
+
+## Deploy Harbor registry
+
+### System Prerequisite
+* At least 20G of the available system disk should be reserved for Harbor PV/PVC usage. The default disk PV/PVC total size is 20G; the values can be configured in ```roles/harbor_registry/controlplane/defaults/main.yaml```.
+* If huge pages are enabled, 1G (with a 1G hugepage size) or 300M (with a 2M hugepage size) must be reserved for Harbor usage.
+
+### Ansible Playbooks
+Ansible `harbor_registry` roles are defined in the Converged Edge Experience Kits. To deploy a Harbor registry on Kubernetes, the control plane roles are enabled in the main `network_edge.yml` playbook file.
+
+```ini
+role: harbor_registry/controlplane
+role: harbor_registry/node
+```
+
+The following steps are processed by converged-edge-experience-kits during the Harbor registry installation on the OpenNESS control plane node.
+
+* Download Harbor Helm Charts on the Kubernetes Control plane Node.
+* Check whether huge pages are enabled and template the values.yaml file accordingly.
+* Create namespace and disk PV for Harbor Services (The default disk PV/PVC total size is 20G. The values can be configured in the `roles/kubernetes/harbor_registry/controlplane/defaults/main.yaml`).
+* Install Harbor on the control plane node using the Helm Charts (The CA crt will be generated by Harbor itself).
+* Create the new project ```intel``` for storing OpenNESS microservice and Kubernetes enhanced add-on images.
+* Log in to the Harbor registry with Docker, which enables pulling, pushing, and tagging images with the Harbor registry.
+
+
+On the OpenNESS edge nodes, converged-edge-experience-kits will conduct the following steps:
+* Get harbor.crt from the OpenNESS control plane node and save it into the host location
+  /etc/docker/certs.d/
+* Log in to the Harbor registry with Docker, which enables pulling, pushing, and tagging images with the Harbor registry.
+* After the above steps, the node and the Ansible host can access the private Harbor registry.
+* The IP address of the Harbor registry will be: "Kubernetes_Control_Plane_IP"
+* The port number of the Harbor registry will be: 30003
+
+
+### Projects
+Two Harbor projects will be created by CEEK as below:
+- ```library```: this registry project can be used by edge application developers as the default image registry.
+- ```intel```: this registry project contains the repositories for the OpenNESS microservices and relevant Kubernetes add-on images. It can also be used for OpenNESS sample application images.
+
+## Harbor login
+For the nodes inside the OpenNESS cluster, the converged-edge-experience-kits Ansible playbooks automatically log in and prepare the Harbor CA certificates needed to access Harbor services.
+
+A host outside of the OpenNESS cluster can use the following commands to access the Harbor registry:
+
+```shell
+# create directory for harbor's CA crt
+mkdir /etc/docker/certs.d/${Kubernetes_Control_Plane_IP}:${port}/
+
+# get EMCO harbor CA.crt
+set -o pipefail && echo -n | openssl s_client -showcerts -connect ${Kubernetes_Control_Plane_IP}:${port} 2>/dev/null | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > /etc/docker/certs.d/${Kubernetes_Control_Plane_IP}:${port}/harbor.crt
+
+# docker login harbor registry
+docker login ${Kubernetes_Control_Plane_IP}:${port} -uadmin -p${harborAdminPassword}
+```
+The default access configuration for the Harbor Registry is:
+ ```ini
+port: 30003 (default)
+harborAdminPassword: Harbor12345 (default)
+ ```
+
+## Harbor registry image push
+Use the `docker tag` command to create an alias of the image with the fully qualified path to your Harbor registry, and then push the image to the Harbor registry:
+
+ ```shell
+ docker tag nginx:latest {Kubernetes_Control_Plane_IP}:30003/intel/nginx:latest
+ docker push {Kubernetes_Control_Plane_IP}:30003/intel/nginx:latest
+ ```
+Once the image is tagged with the fully qualified path to your private registry, you can push it to the registry using the `docker push` command, as shown above.
+
+## Harbor registry image pull
+Use the `docker pull` command to pull the image from Harbor registry:
+
+ ```shell
+ docker pull {Kubernetes_Control_Plane_IP}:30003/intel/nginx:latest
+ ```
+
+## Harbor UI
+Open https://{Kubernetes_Control_Plane_IP}:30003 and log in with username ```admin``` and password ```Harbor12345```:
+![](openness-cluster-setup-images/harbor_ui.png)
+
+You can see the two projects ```intel``` and ```library``` in the web UI. For more details about Harbor usage, refer to the [Harbor docs](https://goharbor.io/docs/2.1.0/working-with-projects/).
+
+## Harbor CLI
+Apart from the Harbor UI, you can also use ```curl``` to check Harbor projects and images. Examples are shown below.
+```text
+In the examples, 10.240.224.172 is the IP address of {Kubernetes_Control_Plane_IP}.
+If there is a proxy connection issue with the curl command, add --proxy to the command options.
+```
+
+### CLI - List Project
+Use the following example command to check the list of projects:
+ ```shell
+ # curl -X GET "https://10.240.224.172:30003/api/v2.0/projects" -H "accept: application/json" -k --cacert /etc/docker/certs.d/10.240.224.172:30003/harbor.crt -u "admin:Harbor12345" | jq
+ [
+ {
+ "creation_time": "2020-11-26T08:47:31.626Z",
+ "current_user_role_id": 1,
+ "current_user_role_ids": [
+ 1
+ ],
+ "cve_allowlist": {
+ "creation_time": "2020-11-26T08:47:31.628Z",
+ "id": 1,
+ "items": [],
+ "project_id": 2,
+ "update_time": "2020-11-26T08:47:31.628Z"
+ },
+ "metadata": {
+ "public": "true"
+ },
+ "name": "intel",
+ "owner_id": 1,
+ "owner_name": "admin",
+ "project_id": 2,
+ "repo_count": 3,
+ "update_time": "2020-11-26T08:47:31.626Z"
+ },
+ {
+ "creation_time": "2020-11-26T08:39:13.707Z",
+ "current_user_role_id": 1,
+ "current_user_role_ids": [
+ 1
+ ],
+ "cve_allowlist": {
+ "creation_time": "0001-01-01T00:00:00.000Z",
+ "items": [],
+ "project_id": 1,
+ "update_time": "0001-01-01T00:00:00.000Z"
+ },
+ "metadata": {
+ "public": "true"
+ },
+ "name": "library",
+ "owner_id": 1,
+ "owner_name": "admin",
+ "project_id": 1,
+ "update_time": "2020-11-26T08:39:13.707Z"
+ }
+ ]
+
+ ```
+
+### CLI - List Image Repositories
+Use the following example command to check the image repository list of the ```intel``` project:
+ ```shell
+ # curl -X GET "https://10.240.224.172:30003/api/v2.0/projects/intel/repositories" -H "accept: application/json" -k --cacert /etc/docker/certs.d/10.240.224.172:30003/harbor.crt -u "admin:Harbor12345" | jq
+ [
+ {
+ "artifact_count": 1,
+ "creation_time": "2020-11-26T08:57:43.690Z",
+ "id": 3,
+ "name": "intel/sriov-device-plugin",
+ "project_id": 2,
+ "pull_count": 1,
+ "update_time": "2020-11-26T08:57:55.240Z"
+ },
+ {
+ "artifact_count": 1,
+ "creation_time": "2020-11-26T08:56:16.565Z",
+ "id": 2,
+ "name": "intel/sriov-cni",
+ "project_id": 2,
+ "update_time": "2020-11-26T08:56:16.565Z"
+ },
+ {
+ "artifact_count": 1,
+ "creation_time": "2020-11-26T08:49:25.453Z",
+ "id": 1,
+ "name": "intel/multus",
+ "project_id": 2,
+ "update_time": "2020-11-26T08:49:25.453Z"
+ }
+ ]
+
+ ```
+
+### CLI - Delete Image
+Use the following example command to delete an image repository from the ```intel``` project:
+ ```shell
+ # curl -X DELETE "https://10.240.224.172:30003/api/v2.0/projects/intel/repositories/nginx" -H "accept: application/json" -k --cacert /etc/docker/certs.d/10.240.224.172:30003/harbor.crt -u "admin:Harbor12345"
+ ```
+
+Use the following example command to delete a specific image version:
+ ```sh
+ # curl -X DELETE "https://10.240.224.172:30003/api/v2.0/projects/intel/repositories/nginx/artifacts/1.14.2" -H "accept: application/json" -k --cacert /etc/docker/certs.d/10.240.224.172:30003/harbor.crt -u "admin:Harbor12345"
+ ```
diff --git a/doc/getting-started/index.html b/doc/getting-started/index.html
index c4b6d93b..35875c79 100644
--- a/doc/getting-started/index.html
+++ b/doc/getting-started/index.html
@@ -10,5 +10,5 @@
---
You are being redirected to the OpenNESS Docs.
diff --git a/doc/building-blocks/enhanced-platform-awareness/openness-kubernetes-dashboard.md b/doc/getting-started/kubernetes-dashboard.md
similarity index 80%
rename from doc/building-blocks/enhanced-platform-awareness/openness-kubernetes-dashboard.md
rename to doc/getting-started/kubernetes-dashboard.md
index 56482394..17a2e94a 100644
--- a/doc/building-blocks/enhanced-platform-awareness/openness-kubernetes-dashboard.md
+++ b/doc/getting-started/kubernetes-dashboard.md
@@ -17,7 +17,7 @@ Kubernetes Dashboard is a web user interface for Kubernetes. User can use Dashbo
## Details - Kubernetes Dashboard support in OpenNESS
-Kubernetes Dashboard is disabled by default in OpenNESS Experience Kits. It can be enabled by setting variable `kubernetes_dashboard_enable` in `group_vars/all/10-default.yml` file to `true` value:
+Kubernetes Dashboard is disabled by default in Converged Edge Experience Kits. It can be enabled by setting variable `kubernetes_dashboard_enable` in `inventory/default/group_vars/all/10-open.yml` file to `true` value:
```yaml
# Kubernetes Dashboard
@@ -26,7 +26,7 @@ kubernetes_dashboard_enable: false # set to true to enable Kubernetes Dashboard
### TLS encryption
-TLS for Kubernetes dashboard is enabled by default. User can disable TLS encryption using variable `disable_dashboard_tls` in `group_vars/all/10-default.yml`:
+TLS for Kubernetes dashboard is enabled by default. User can disable TLS encryption using variable `disable_dashboard_tls` in `inventory/default/group_vars/all/10-open.yml`:
```yaml
disable_dashboard_tls: false # set to true to disable TLS
@@ -34,7 +34,7 @@ disable_dashboard_tls: false # set to true to disable TLS
### Usage
-User can use Kubernetes Dashboard by browsing `https://:30443` if TLS is enabled or `http://:30443` if TLS is disabled.
+User can use Kubernetes Dashboard by browsing `https://:30553` if TLS is enabled or `http://:30553` if TLS is disabled.
With TLS enabled Kubernetes Dashboard will prompt for `Kubernetes Service Account token` to log in user. You can get the token by executing the following command on your controller:
@@ -42,11 +42,11 @@ With TLS enabled Kubernetes Dashboard will prompt for `Kubernetes Service Accoun
kubectl describe secret -n kube-system $(kubectl get secret -n kube-system | grep 'kubernetes-dashboard-token' | awk '{print $1}') | grep 'token:' | awk '{print $2}'
```
-> NOTE: To use Kubernetes Dashboard with TLS encryption user will have to add `https://:30443` to web browser's list of security exceptions.
+> NOTE: To use Kubernetes Dashboard with TLS encryption user will have to add `https://:30553` to web browser's list of security exceptions.
### Access rights
-By default OpenNESS will deploy Kubernetes Dashboard with read-only access to every information except Kubernetes' secrets. To change access rights (for example hide information about persistent volumes claims, etc.) please modify cluster role defined in `roles/kubernetes/dashboard/files/clusterrole.yml` of OpenNESS Experience Kits.
+By default OpenNESS will deploy Kubernetes Dashboard with read-only access to every information except Kubernetes' secrets. To change access rights (for example hide information about persistent volumes claims, etc.) please modify cluster role defined in `roles/kubernetes/dashboard/files/clusterrole.yml` of Converged Edge Experience Kits.
## Reference
- [Kubernetes Dashboard](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/)
diff --git a/doc/getting-started/network-edge/controller-edge-node-setup.md b/doc/getting-started/network-edge/controller-edge-node-setup.md
deleted file mode 100644
index 8836fff0..00000000
--- a/doc/getting-started/network-edge/controller-edge-node-setup.md
+++ /dev/null
@@ -1,612 +0,0 @@
-```text
-SPDX-License-Identifier: Apache-2.0
-Copyright (c) 2019-2020 Intel Corporation
-```
-
-# OpenNESS Network Edge: Controller and Edge node setup
-- [Quickstart](#quickstart)
-- [Preconditions](#preconditions)
-- [Running playbooks](#running-playbooks)
- - [Deployment scripts](#deployment-scripts)
- - [Network Edge playbooks](#network-edge-playbooks)
- - [Cleanup playbooks](#cleanup-playbooks)
- - [Supported EPA features](#supported-epa-features)
- - [VM support for Network Edge](#vm-support-for-network-edge)
- - [Application on-boarding](#application-on-boarding)
- - [Single-node Network Edge cluster](#single-node-network-edge-cluster)
- - [Harbor registry](#harbor-registry)
- - [Deploy Harbor registry](#deploy-harbor-registry)
- - [Harbor login](#harbor-login)
- - [Harbor registry image push](#harbor-registry-image-push)
- - [Harbor registry image pull](#harbor-registry-image-pull)
- - [Harbor UI](#harbor-ui)
- - [Harbor CLI](#harbor-registry-CLI)
- - [Kubernetes cluster networking plugins (Network Edge)](#kubernetes-cluster-networking-plugins-network-edge)
- - [Selecting cluster networking plugins (CNI)](#selecting-cluster-networking-plugins-cni)
- - [Adding additional interfaces to pods](#adding-additional-interfaces-to-pods)
-- [Q&A](#qa)
- - [Configuring time](#configuring-time)
- - [Setup static hostname](#setup-static-hostname)
- - [Configuring inventory](#configuring-inventory)
- - [Exchanging SSH keys between hosts](#exchanging-ssh-keys-between-hosts)
- - [Setting proxy](#setting-proxy)
- - [Obtaining installation files](#obtaining-installation-files)
- - [Setting Git](#setting-git)
- - [GitHub token](#github-token)
- - [Customize tag/branch/sha to checkout](#customize-tagbranchsha-to-checkout)
- - [Customization of kernel, grub parameters, and tuned profile](#customization-of-kernel-grub-parameters-and-tuned-profile)
-
-# Quickstart
-The following set of actions must be completed to set up the Open Network Edge Services Software (OpenNESS) cluster.
-
-1. Fulfill the [Preconditions](#preconditions).
-2. Become familiar with [supported features](#supported-epa-features) and enable them if desired.
-3. Run the [deployment helper script](#running-playbooks) for the Ansible\* playbook:
-
- ```shell
- ./deploy_ne.sh
- ```
-
-# Preconditions
-
-To use the playbooks, several preconditions must be fulfilled. These preconditions are described in the [Q&A](#qa) section below. The preconditions are:
-
-- CentOS\* 7.8.2003 must be installed on hosts where the product is deployed. It is highly recommended to install the operating system using a minimal ISO image on nodes that will take part in deployment (obtained from inventory file). Also, do not make customizations after a fresh manual install because it might interfere with Ansible scripts and give unpredictable results during deployment.
-
-- Hosts for the Edge Controller (Kubernetes control plane) and Edge Nodes (Kubernetes nodes) must have proper and unique hostnames (i.e., not `localhost`). This hostname must be specified in `/etc/hosts` (refer to [Setup static hostname](#setup-static-hostname)).
-
-- SSH keys must be exchanged between hosts (refer to [Exchanging SSH keys between hosts](#exchanging-ssh-keys-between-hosts)).
-
-- A proxy may need to be set (refer to [Setting proxy](#setting-proxy)).
-
-- If a private repository is used, a Github\* token must be set up (refer to [GitHub token](#github-token)).
-
-- Refer to the [Configuring time](#configuring-time) section for how to enable Network Time Protocol (NTP) clients.
-
-- The Ansible inventory must be configured (refer to [Configuring inventory](#configuring-inventory)).
-
-# Running playbooks
-
-The Network Edge deployment and cleanup is carried out via Ansible playbooks. The playbooks are run from the Ansible host (it might be the same machine as the Edge Controller). Before running the playbooks, an inventory file `inventory.ini` must be configured.
-
-The following subsections describe the playbooks in more detail.
-
-## Deployment scripts
-
-For convenience, playbooks can be executed by running helper deployment scripts from the Ansible host. These scripts require that the Edge Controller and Edge Nodes be configured on different hosts (for deployment on a single node, refer to [Single-node Network Edge cluster](#single-node-network-edge-cluster)). This is done by configuring the Ansible playbook inventory, as described later in this document.
-
-The command syntax for the scripts is: `action_mode.sh [-f flavor] [group]`, i.e.,
-
- - `deploy_ne.sh [-f flavor] [ controller | nodes ]`
- - `cleanup_ne.sh [-f flavor] [ controller | nodes ] `
-
-The parameter `controller` or `nodes` in each case deploys or cleans up the Edge Controller or the Edge Nodes, respectively.
-
-For an initial installation, `deploy_ne.sh controller` must be run before `deploy_ne.sh nodes`. During the initial installation, the hosts may reboot. After reboot, the deployment script that was last run should be run again.
-
-The `cleanup_ne.sh` script is used when a configuration error in the Edge Controller or Edge Nodes must be fixed. The script causes the appropriate installation to be reverted, so that the error can be fixed and `deploy_ne.sh` rerun. `cleanup_ne.sh` does not do a comprehensive cleanup (e.g., installation of DPDK or Golang will not be rolled back).
-
-## Network Edge playbooks
-
-The `network_edge.yml` and `network_edge_cleanup.yml` files contain playbooks for Network Edge mode.
-Playbooks can be customized by enabling and configuring features in the `group_vars/all/10-open.yml` file.
-
-### Cleanup playbooks
-
-The role of the cleanup playbook is to revert changes made by deploy playbooks.
-Changes are reverted by going step-by-step in reverse order and undoing the steps.
-
-For example, when installing Docker\*, the RPM repository is added and Docker is installed. When cleaning up, Docker is uninstalled and the repository is removed.
-
->**NOTE**: There may be leftovers created by the installed software. For example, DPDK and Golang installations, found in `/opt`, are not rolled back.
-
-### Supported EPA features
-
-Several enhanced platform capabilities and features are available in OpenNESS for Network Edge. For the full list of supported features, see [Enhanced Platform Awareness Features](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/network-edge/supported-epa.md). The documents referenced in this list provide a detailed description of the features, and step-by-step instructions for enabling them. Users should become familiar with available features before executing the deployment playbooks.
-
-### VM support for Network Edge
-Support for VM deployment on OpenNESS for Network Edge is available and enabled by default. Certain configurations and prerequisites may need to be satisfied to use all VM capabilities. The user is advised to become familiar with the VM support documentation before executing the deployment playbooks. See [openness-network-edge-vm-support](https://github.com/open-ness/ido-specs/blob/master/doc/applications-onboard/openness-network-edge-vm-support.md) for more information.
-
-### Application on-boarding
-
-Refer to the [network-edge-applications-onboarding](https://github.com/open-ness/ido-specs/blob/master/doc/applications-onboard/network-edge-applications-onboarding.md) document for instructions on how to deploy edge applications for OpenNESS Network Edge.
-
-### Single-node Network Edge cluster
-
-Network Edge can be deployed on just a single machine working as a control plane & node.
-To deploy Network Edge in a single-node cluster scenario, follow the steps below:
-1. Modify `inventory.ini`
- > Rules for inventory:
- > - IP address (`ansible_host`) for both controller and node must be the same
- > - `edgenode_group` and `controller_group` groups must contain exactly one host
-
- Example of a valid inventory:
- ```ini
- [all]
- controller ansible_ssh_user=root ansible_host=192.168.0.11
- node01 ansible_ssh_user=root ansible_host=192.168.0.11
-
- [controller_group]
- controller
-
- [edgenode_group]
- node01
-
- [edgenode_vca_group]
- ```
-2. Features can be enabled in the `group_vars/all/10-open.yml` file by tweaking the configuration variables.
-3. Settings regarding the kernel, grub, HugePages\*, and tuned can be customized in `group_vars/edgenode_group/10-open.yml`.
- > Default settings in the single-node cluster mode are those of the Edge Node (i.e., kernel and tuned customization enabled).
-4. Single-node cluster can be deployed by running command: `./deploy_ne.sh single`
-
-## Harbor registry
-
-Harbor registry is an open source cloud native registry which can support images and relevant artifacts with extended functionalities as described in [Harbor](https://goharbor.io/). On the OpenNESS environment, Harbor registry service is installed on Control plane Node by Harbor Helm Chart [github](https://github.com/goharbor/harbor-helm/releases/tag/v1.5.1). Harbor registry authentication enabled with self-signed certificates as well as all nodes and control plane will have access to the Harbor registry.
-
-### Deploy Harbor registry
-
-#### System Prerequisite
-* The available system disk should be reserved at least 20G for Harbor PV/PVC usage. The defaut disk PV/PVC total size is 20G. The values can be configurable in the ```roles/harbor_registry/controlplane/defaults/main.yaml```.
-* If huge pages enabled, need 1G(hugepage size 1G) or 300M(hugepage size 2M) to be reserved for Harbor usage.
-
-#### Ansible Playbooks
-Ansible "harbor_registry" roles created on openness-experience-kits. For deploying a Harbor registry on Kubernetes, control plane roles are enabled on the openness-experience-kits "network_edge.yml" file.
-
- ```ini
- role: harbor_registry/controlplane
- role: harbor_registry/node
- ```
-
-The following steps are processed by openness-experience-kits during the Harbor registry installation on the OpenNESS control plane node.
-
-* Download Harbor Helm Charts on the Kubernetes Control plane Node.
-* Check whether huge pages is enabled and templates values.yaml file accordingly.
-* Create namespace and disk PV for Harbor Services (The defaut disk PV/PVC total size is 20G. The values can be configurable in the ```roles/harbor_registry/controlplane/defaults/main.yaml```).
-* Install Harbor on the control plane node using the Helm Charts (The CA crt will be generated by Harbor itself).
-* Create the new project - ```intel``` for OpenNESS microservices, Kurbernetes enhanced add-on images storage.
-* Docker login the Harbor Registry, thus enable pulling, pushing and tag images with the Harbor Registry
-
-
-On the OpenNESS edge nodes, openness-experience-kits will conduct the following steps:
-* Get harbor.crt from the OpenNESS control plane node and save into the host location
- /etc/docker/certs.d/
-* Docker login the Harbor Registry, thus enable pulling, pushing and tag images with the Harbor Registry
-* After above steps, the Node and Ansible host can access the private Harbor registry.
-* The IP address of the Harbor registry will be: "Kubernetes_Control_Plane_IP"
-* The port number of the Harbor registry will be: 30003
-
-
-#### Projects
-Two Harbor projects will be created by OEK as below:
-- ```library``` The registry project can be used by edge application developer as default images registries.
-- ```intel``` The registry project contains the registries for the OpenNESS microservices and relevant kubernetes addon images. Can also be used for OpenNESS sample application images.
-
-### Harbor login
-For the nodes inside of the OpenNESS cluster, openness-experience-kits ansible playbooks automatically login and prepare harbor CA certifications to access Harbor services.
-
-For the external host outside of the OpenNESS cluster, can use following commands to access the Harbor Registry:
-
-```shell
-# create directory for harbor's CA crt
-mkdir /etc/docker/certs.d/${Kubernetes_Control_Plane_IP}:${port}/
-
-# get EMCO harbor CA.crt
-set -o pipefail && echo -n | openssl s_client -showcerts -connect ${Kubernetes_Control_Plane_IP}:${port} 2>/dev/null | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > /etc/docker/certs.d/${Kubernetes_Control_Plane_IP}:${port}/harbor.crt
-
-# docker login to the Harbor registry
-docker login ${Kubernetes_Control_Plane_IP}:${port} -uadmin -p${harborAdminPassword}
-```
-The default access configuration for the Harbor Registry is:
- ```ini
-port: 30003 (default)
-harborAdminPassword: Harbor12345 (default)
- ```
-
-### Harbor registry image push
-Use `docker tag` to create an alias of the image with the fully qualified path to your Harbor registry, and then use `docker push` to push the image to the Harbor registry:
-
- ```shell
- docker tag nginx:latest {Kubernetes_Control_Plane_IP}:30003/intel/nginx:latest
- docker push {Kubernetes_Control_Plane_IP}:30003/intel/nginx:latest
- ```
-The tag includes the fully qualified path to your private registry; the `docker push` command then uploads the tagged image to that registry.
-
-### Harbor registry image pull
-Use the `docker pull` command to pull the image from Harbor registry:
-
- ```shell
- docker pull {Kubernetes_Control_Plane_IP}:30003/intel/nginx:latest
- ```
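-
-Images stored in the Harbor registry can then be consumed by cluster workloads by referencing the fully qualified registry path. A minimal, illustrative example (the pod name `harbor-test` is hypothetical):
-
- ```shell
- kubectl run harbor-test --image={Kubernetes_Control_Plane_IP}:30003/intel/nginx:latest
- ```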
-
-### Harbor UI
-Open https://{Kubernetes_Control_Plane_IP}:30003 in a browser and log in with the username ```admin``` and password ```Harbor12345```:
-![](controller-edge-node-setup-images/harbor_ui.png)
-
-Two projects, ```intel``` and ```library```, are visible in the web UI. For more details about Harbor usage, refer to the [Harbor docs](https://goharbor.io/docs/2.1.0/working-with-projects/).
-
-### Harbor CLI
-Apart from the Harbor UI, you can also use ```curl``` to check Harbor projects and images. Examples are shown below.
-```text
-In the examples, 10.240.224.172 is the IP address of {Kubernetes_Control_Plane_IP}.
-If there is a proxy connection issue with the curl command, add the --proxy option to the command.
-```
-
-#### CLI - List Project
-Use the following example command to list the projects:
- ```shell
- # curl -X GET "https://10.240.224.172:30003/api/v2.0/projects" -H "accept: application/json" -k --cacert /etc/docker/certs.d/10.240.224.172:30003/harbor.crt -u "admin:Harbor12345" | jq
- [
- {
- "creation_time": "2020-11-26T08:47:31.626Z",
- "current_user_role_id": 1,
- "current_user_role_ids": [
- 1
- ],
- "cve_allowlist": {
- "creation_time": "2020-11-26T08:47:31.628Z",
- "id": 1,
- "items": [],
- "project_id": 2,
- "update_time": "2020-11-26T08:47:31.628Z"
- },
- "metadata": {
- "public": "true"
- },
- "name": "intel",
- "owner_id": 1,
- "owner_name": "admin",
- "project_id": 2,
- "repo_count": 3,
- "update_time": "2020-11-26T08:47:31.626Z"
- },
- {
- "creation_time": "2020-11-26T08:39:13.707Z",
- "current_user_role_id": 1,
- "current_user_role_ids": [
- 1
- ],
- "cve_allowlist": {
- "creation_time": "0001-01-01T00:00:00.000Z",
- "items": [],
- "project_id": 1,
- "update_time": "0001-01-01T00:00:00.000Z"
- },
- "metadata": {
- "public": "true"
- },
- "name": "library",
- "owner_id": 1,
- "owner_name": "admin",
- "project_id": 1,
- "update_time": "2020-11-26T08:39:13.707Z"
- }
- ]
-
- ```
-
-#### CLI - List Image Repositories
-Use the following example command to list the image repositories of the ```intel``` project:
- ```shell
- # curl -X GET "https://10.240.224.172:30003/api/v2.0/projects/intel/repositories" -H "accept: application/json" -k --cacert /etc/docker/certs.d/10.240.224.172:30003/harbor.crt -u "admin:Harbor12345" | jq
- [
- {
- "artifact_count": 1,
- "creation_time": "2020-11-26T08:57:43.690Z",
- "id": 3,
- "name": "intel/sriov-device-plugin",
- "project_id": 2,
- "pull_count": 1,
- "update_time": "2020-11-26T08:57:55.240Z"
- },
- {
- "artifact_count": 1,
- "creation_time": "2020-11-26T08:56:16.565Z",
- "id": 2,
- "name": "intel/sriov-cni",
- "project_id": 2,
- "update_time": "2020-11-26T08:56:16.565Z"
- },
- {
- "artifact_count": 1,
- "creation_time": "2020-11-26T08:49:25.453Z",
- "id": 1,
- "name": "intel/multus",
- "project_id": 2,
- "update_time": "2020-11-26T08:49:25.453Z"
- }
- ]
-
- ```
-
-#### CLI - Delete Image
-Use the following example command to delete an image repository from the ```intel``` project:
- ```shell
- # curl -X DELETE "https://10.240.224.172:30003/api/v2.0/projects/intel/repositories/nginx" -H "accept: application/json" -k --cacert /etc/docker/certs.d/10.240.224.172:30003/harbor.crt -u "admin:Harbor12345"
- ```
-
-Use the following example command to delete a specific image version:
- ```sh
- # curl -X DELETE "https://10.240.224.172:30003/api/v2.0/projects/intel/repositories/nginx/artifacts/1.14.2" -H "accept: application/json" -k --cacert /etc/docker/certs.d/10.240.224.172:30003/harbor.crt -u "admin:Harbor12345"
- ```
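-
-The artifacts (tags) remaining in a repository can also be listed, for example to confirm a deletion or to identify which version to remove. The query below is illustrative only and assumes an existing ```nginx``` repository in the ```intel``` project; it uses the same Harbor v2.0 API and credentials as the commands above:
- ```shell
- # curl -X GET "https://10.240.224.172:30003/api/v2.0/projects/intel/repositories/nginx/artifacts" -H "accept: application/json" -k --cacert /etc/docker/certs.d/10.240.224.172:30003/harbor.crt -u "admin:Harbor12345" | jq
- ```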
-
-## Kubernetes cluster networking plugins (Network Edge)
-
-Kubernetes uses 3rd party networking plugins to provide [cluster networking](https://kubernetes.io/docs/concepts/cluster-administration/networking/).
-These plugins are based on the [CNI (Container Network Interface) specification](https://github.com/containernetworking/cni).
-
-OpenNESS Experience Kits provide several ready-to-use Ansible roles deploying CNIs.
-The following CNIs are currently supported:
-
-* [kube-ovn](https://github.com/alauda/kube-ovn)
- * **Only as primary CNI**
- * CIDR: 10.16.0.0/16
-* [flannel](https://github.com/coreos/flannel)
- * IPAM: host-local
- * CIDR: 10.244.0.0/16
- * Network attachment definition: openness-flannel
-* [calico](https://github.com/projectcalico/cni-plugin)
- * IPAM: host-local
- * CIDR: 10.243.0.0/16
- * Network attachment definition: openness-calico
-* [weavenet](https://github.com/weaveworks/weave)
- * CIDR: 10.32.0.0/12
-* [SR-IOV](https://github.com/intel/sriov-cni) (cannot be used as a standalone or primary CNI - [sriov setup](https://github.com/open-ness/ido-specs/blob/master/doc/enhanced-platform-awareness/openness-sriov-multiple-interfaces.md))
-* [Userspace](https://github.com/intel/userspace-cni-network-plugin) (cannot be used as a standalone or primary CNI - [Userspace CNI setup](https://github.com/open-ness/ido-specs/blob/master/doc/dataplane/openness-userspace-cni.md))
-
-Multiple CNIs can be set up for the cluster. To provide this functionality, [the Multus CNI](https://github.com/intel/multus-cni) is used.
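-
-After deployment, the network attachment definitions created for the secondary CNIs (e.g., `openness-calico`, `openness-flannel` listed above) can be inspected with kubectl; a quick, illustrative check:
-
-```shell
-# kubectl get network-attachment-definitions -A
-```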
-
->**NOTE**: For a guide on how to add new a CNI role to the OpenNESS Experience Kits, refer to [the OpenNESS Experience Kits guide](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/openness-experience-kits.md#adding-new-cni-plugins-for-kubernetes-network-edge).
-
-### Selecting cluster networking plugins (CNI)
-
-The default CNI for OpenNESS is kube-ovn. Non-default CNIs may be configured with OpenNESS by editing the file `group_vars/all/10-open.yml`.
-To add a non-default CNI, the following edits must be carried out:
-
-- The CNI name is added to the `kubernetes_cnis` variable. The CNIs are applied in the order in which they appear in the file. By default, `kube-ovn` is defined. That is,
-
- ```yaml
- kubernetes_cnis:
- - kubeovn
- ```
-
-- To add a CNI, such as SR-IOV, the `kubernetes_cnis` variable is edited as follows:
-
- ```yaml
- kubernetes_cnis:
- - kubeovn
- - sriov
- ```
-
-- The Multus CNI is added by the Ansible playbook when two or more CNIs are defined in the `kubernetes_cnis` variable.
-- The CNI's networks (CIDR for pods, and other CIDRs used by the CNI) are added to the `proxy_noproxy` variable by Ansible playbooks.
-
-### Adding additional interfaces to pods
-
-To add additional interfaces from secondary CNIs, an annotation is required.
-Below is an example pod yaml file for a scenario with `kube-ovn` as a primary CNI along with `calico` and `flannel` as additional CNIs.
-Multus\* will create an interface named `calico` using the network attachment definition `openness-calico` and interface `flannel` using the network attachment definition `openness-flannel`.
->**NOTE**: Additional annotations such as `openness-calico@calico` are required only if the CNI is secondary. If the CNI is primary, the interface will be added automatically by Multus\*.
-
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
- name: cni-test-pod
- annotations:
- k8s.v1.cni.cncf.io/networks: openness-calico@calico, openness-flannel@flannel
-spec:
- containers:
- - name: cni-test-pod
- image: docker.io/centos/tools:latest
- command:
- - /sbin/init
-```
-
-The following is an example output of the `ip a` command run in a pod after the CNIs have been applied. Some lines in the command output were omitted for readability.
-
-The following interfaces are available: `calico@if142`, `flannel@if143`, and `eth0@if141` (`kubeovn`).
-
-```shell
-# kubectl exec -ti cni-test-pod ip a
-
-1: lo:
- inet 127.0.0.1/8 scope host lo
-
-2: tunl0@NONE:
- link/ipip 0.0.0.0 brd 0.0.0.0
-
-4: calico@if142:
- inet 10.243.0.3/32 scope global calico
-
-6: flannel@if143:
- inet 10.244.0.3/16 scope global flannel
-
-140: eth0@if141:
- inet 10.16.0.5/16 brd 10.16.255.255 scope global eth0
-```
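-
-The same attachments can be cross-checked from the Kubernetes side by looking at the CNI annotations that Multus\* sets on the pod; an illustrative check (the exact annotation content depends on the Multus version):
-
-```shell
-# kubectl describe pod cni-test-pod | grep -A 3 "k8s.v1.cni.cncf.io"
-```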
-
-# Q&A
-
-## Configuring time
-
-To allow for correct certificate verification, OpenNESS requires system time to be synchronized among all nodes and controllers in a system.
-
-OpenNESS provides the possibility to synchronize a machine's time with the NTP server.
-To enable NTP synchronization, change `ntp_enable` in `group_vars/all/10-open.yml`:
-```yaml
-ntp_enable: true
-```
-
-Servers to be used instead of default ones can be provided using the `ntp_servers` variable in `group_vars/all/10-open.yml`:
-```yaml
-ntp_servers: ["ntp.local.server"]
-```
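-
-Whether the time is actually synchronized can then be spot-checked on a node. The check below assumes the classic `ntpd`/`ntpq` tooling is in place (which time service the playbooks configure is not restated here):
-
-```shell
-ntpq -p
-```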
-
-## Setup static hostname
-
-The following command is used in CentOS\* to set a static hostname:
-
-```shell
-hostnamectl set-hostname <host_name>
-```
-
-As shown in the following example, the hostname must also be defined in `/etc/hosts`:
-
-```shell
-127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 <host_name>
-::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 <host_name>
-```
-
-In addition to being unique within the cluster, the hostname must also follow Kubernetes naming conventions: it may contain only lower-case alphanumeric characters, "-", or ".", and must start and end with an alphanumeric character. Refer to
-[K8s naming restrictions](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names) for additional details on these conventions.
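-
-As an illustrative sanity check, the current hostname can be tested against the pattern described above:
-
-```shell
-# prints OK only if the hostname satisfies the lower-case alphanumeric / "-" / "." rules
-hostname | grep -qE '^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$' && echo OK || echo "invalid hostname"
-```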
-
-## Configuring inventory
-
-To execute playbooks, `inventory.ini` must be configured to specify the hosts on which the playbooks are executed.
-
-The OpenNESS inventory contains three groups: `all`, `controller_group`, and `edgenode_group`.
-
-- `all` contains all the hosts (with configuration) used in any playbook.
-- `controller_group` contains the host to be set up as the Kubernetes control plane / OpenNESS Edge Controller \
->**NOTE**: Because only one controller is supported, the `controller_group` can contain only one host.
-- `edgenode_group` contains hosts to be set up as Kubernetes nodes / OpenNESS Edge Nodes. \
->**NOTE**: All nodes will be joined to the control plane specified in `controller_group`.
-
-In the `all` group, users can specify all of the hosts for usage in other groups.
-For example, the `all` group looks like:
-
-```ini
-[all]
-ctrl ansible_ssh_user=root ansible_host=<controller_ip_address>
-node1 ansible_ssh_user=root ansible_host=<node1_ip_address>
-node2 ansible_ssh_user=root ansible_host=<node2_ip_address>
-```
-
-The user can then use the specified hosts in `edgenode_group` and `controller_group`. That is,
-
-```ini
-[edgenode_group]
-node1
-node2
-
-[controller_group]
-ctrl
-```
-
-## Exchanging SSH keys between hosts
-
-Exchanging SSH keys between hosts permits a password-less SSH connection from the host running Ansible to the hosts being set up.
-
-In the first step, the host running Ansible (usually the Edge Controller host) must have a generated SSH key. The SSH key can be generated by executing `ssh-keygen` and obtaining the key from the output of the command.
-
-The following is an example of a key generation, in which the key is placed in the default directory (`/root/.ssh/id_rsa`), and an empty passphrase is used.
-
-```shell
-# ssh-keygen
-
-Generating public/private rsa key pair.
-Enter file in which to save the key (/root/.ssh/id_rsa):
-Enter passphrase (empty for no passphrase):
-Enter same passphrase again:
-Your identification has been saved in /root/.ssh/id_rsa.
-Your public key has been saved in /root/.ssh/id_rsa.pub.
-The key fingerprint is:
-SHA256:vlcKVU8Tj8nxdDXTW6AHdAgqaM/35s2doon76uYpNA0 root@host
-The key's randomart image is:
-+---[RSA 2048]----+
-| .oo.==*|
-| . . o=oB*|
-| o . . ..o=.=|
-| . oE. . ... |
-| ooS. |
-| ooo. . |
-| . ...oo |
-| . .*o+.. . |
-| =O==.o.o |
-+----[SHA256]-----+
-```
-
-In the second step, the generated key must be copied to **every host from the inventory**, including the host on which the key was generated, if it appears in the inventory (e.g., if the playbooks are executed from the Edge Controller host, the host must also have a copy of its key). It is done by running `ssh-copy-id`. For example:
-
-```shell
-# ssh-copy-id root@host
-
-/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
-The authenticity of host '<host> (<IP address>)' can't be established.
-ECDSA key fingerprint is SHA256:c7EroVdl44CaLH/IOCBu0K0/MHl8ME5ROMV0AGzs8mY.
-ECDSA key fingerprint is MD5:38:c8:03:d6:5a:8e:f7:7d:bd:37:a0:f1:08:15:28:bb.
-Are you sure you want to continue connecting (yes/no)? yes
-/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
-/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
-root@host's password:
-
-Number of key(s) added: 1
-
-Now, try logging into the machine, with: "ssh 'root@host'"
-and check to make sure that only the key(s) you wanted were added.
-```
-
-To make sure the key is copied successfully, try to SSH into the host: `ssh 'root@host'`. It should not ask for the password.
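-
-When the inventory contains several hosts, the copy step can be repeated in a short loop; the host names below are the illustrative ones from the inventory example above (replace them with the actual IP addresses or resolvable names):
-
-```shell
-for host in ctrl node1 node2; do
-  ssh-copy-id root@"${host}"
-done
-```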
-
-## Setting proxy
-
-If a proxy is required to connect to the Internet, it is configured via the following steps:
-
-- Edit the `proxy_` variables in the `group_vars/all/10-open.yml` file.
-- Set the `proxy_enable` variable in `group_vars/all/10-open.yml` file to `true`.
-- Append the network CIDR (e.g., `192.168.0.0/24`) to the `proxy_noproxy` variable in `group_vars/all/10-open.yml`.
-
-Sample configuration of `group_vars/all/10-open.yml`:
-
-```yaml
-# Setup proxy on the machine - required if the Internet is accessible via proxy
-proxy_enable: true
-# Clear previous proxy settings
-proxy_remove_old: true
-# Proxy URLs to be used for HTTP, HTTPS and FTP
-proxy_http: "http://proxy.example.org:3128"
-proxy_https: "http://proxy.example.org:3129"
-proxy_ftp: "http://proxy.example.org:3128"
-# Proxy to be used by YUM (/etc/yum.conf)
-proxy_yum: "{{ proxy_http }}"
-# No proxy setting contains addresses and networks that should not be accessed using proxy (e.g., local network and Kubernetes CNI networks)
-proxy_noproxy: ""
-```
-
-Sample definition of `no_proxy` environmental variable for Ansible host (to allow Ansible host to connect to other hosts):
-
-```shell
-export no_proxy="localhost,127.0.0.1,10.244.0.0/24,10.96.0.0/12,192.168.0.0/24"
-```
-
-## Obtaining installation files
-
-There are no specific restrictions on the directory into which the OpenNESS directories are cloned. When OpenNESS is built, additional directories will be installed in `/opt`. It is recommended to clone the project into a directory such as `/home`.
-
-## Setting Git
-
-### GitHub token
-
->**NOTE**: Only required when cloning private repositories. Not needed when using github.com/open-ness repositories.
-
-To clone private repositories, a GitHub token must be provided.
-
-To generate a GitHub token, refer to [GitHub help - Creating a personal access token for the command line](https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token-for-the-command-line).
-
-To provide the token, edit the value of `git_repo_token` variable in `group_vars/all/10-open.yml`.
-
-### Customize tag/branch/sha to checkout
-
-A specific tag, branch, or commit SHA can be checked out by setting the `controller_repository_branch` and the `edgenode_repository_branch` variables in `group_vars/all/10-open.yml` for the Kubernetes control plane / Edge Controller and the Edge Nodes, respectively.
-
-```yaml
-controller_repository_branch: master
-edgenode_repository_branch: master
-# or
-controller_repository_branch: openness-20.03
-edgenode_repository_branch: openness-20.03
-```
-
-## Customization of kernel, grub parameters, and tuned profile
-
-OpenNESS Experience Kits provide an easy way to customize the kernel version, grub parameters, and tuned profile. For more information, refer to [the OpenNESS Experience Kits guide](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/openness-experience-kits.md).
diff --git a/doc/getting-started/network-edge/index.html b/doc/getting-started/network-edge/index.html
deleted file mode 100644
index c74035cb..00000000
--- a/doc/getting-started/network-edge/index.html
+++ /dev/null
@@ -1,14 +0,0 @@
-
-
----
-title: OpenNESS Documentation
-description: Home
-layout: openness
----
-You are being redirected to the OpenNESS Docs.
-
diff --git a/doc/getting-started/network-edge/offline-edge-deployment.md b/doc/getting-started/network-edge/offline-edge-deployment.md
deleted file mode 100644
index 0b1fb4b3..00000000
--- a/doc/getting-started/network-edge/offline-edge-deployment.md
+++ /dev/null
@@ -1,157 +0,0 @@
-```text
-SPDX-License-Identifier: Apache-2.0
-Copyright (c) 2019-2020 Intel Corporation
-```
-
-- [OpenNESS Network Edge: Offline Deployment](#openness-network-edge-offline-deployment)
- - [OpenNESS support in offline environment](#openness-support-in-offline-environment)
- - [Setup prerequisites](#setup-prerequisites)
- - [Creating the offline package from an online node](#creating-the-offline-package-from-an-online-node)
- - [Placing the complete offline package in offline environment](#placing-the-complete-offline-package-in-offline-environment)
- - [Deployment in offline environment](#deployment-in-offline-environment)
-# OpenNESS Network Edge: Offline Deployment
-
-## OpenNESS support in offline environment
-
-The OpenNESS project supports deployment of the solution in an air-gapped, offline environment. The support is currently limited to the ["flexran" deployment flavor of OpenNESS Experience Kit](https://github.com/open-ness/ido-openness-experience-kits/tree/master/flavors/flexran) only, and it allows for offline deployment of vRAN specific components. An Internet connection is needed to create the offline package: a script downloads and builds all necessary components and creates an archive of all the necessary files. Once the offline package is created, the installation of OpenNESS Experience Kits proceeds as usual, in the same way as the default online installation would.
-
-It can be deployed in two different scenarios. The first scenario is to deploy the OpenNESS Experience Kits from the online "jumper" node which is being used to create the offline package; this Internet-connected node must have a network connection to the air-gapped/offline nodes. The second scenario is to copy the whole OpenNESS Experience Kits directory with the already archived packages to the air-gapped/offline environment (for example, via USB or other media or means) and run the OpenNESS Experience Kits from within the offline environment. All the nodes within the air-gapped/offline cluster need to be able to SSH into each other.
-
-Figure 1. Scenario one - online node connected to the air-gapped network
-![Scenario one - online node connected to the air-gapped network](offline-images/offline-ssh.png)
-Figure 2. Scenario two - OEK copied to the air-gapped network
-![Scenario two - OEK copied to the air-gapped network](offline-images/offline-copy.png)
-
-## Setup prerequisites
-
-* A node with access to internet to create the offline package.
-* Cluster set up in an air-gapped environment.
-* Clean setup, see [pre-requisites](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/network-edge/controller-edge-node-setup.md#preconditions)
-* [Optional] If OEK is run from an online jumper node, the node needs to be able to SSH into each machine in air-gapped environment.
-* [Optional] A media such as USB drive to copy the offline OEK package to the air-gapped environment if there is no connection from online node.
-* All the nodes in air-gapped environment must be able to SSH to each other without requiring password input, see [getting-started.md](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/network-edge/controller-edge-node-setup.md#exchanging-ssh-keys-between-hosts).
-* The control plane node needs to be able to SSH itself.
-* The time and date of the nodes in offline environment is manually synchronized by the cluster's admin.
-* User provided files - OPAE_SDK_1.3.7-5_el7.zip and syscfg_package.zip
-
-## Creating the offline package from an online node
-
-To create the offline package, the user must have access to an online node from which the offline package creator can download all necessary files and build Docker images. The list of files to be downloaded/built is provided in the form of a package definition list (only the package definition list for the "flexran" flavor of OpenNESS is provided at the time of writing). Various categories of files to be downloaded are provided within this list, including: RPMs, PIP packages, Helm charts, Dockerfiles, Go modules, and miscellaneous downloads. The offline package creator script handles the download/build according to the category of each file. Some files, such as proprietary packages, need to be provided by the user in specified directories (see the following steps). Once the offline package creator collects all necessary components, it packs them into an archive and places them in the appropriate place within the OpenNESS Experience Kits directory. Once the packages are archived, the OpenNESS Experience Kits are ready to be deployed in the air-gapped environment. The following diagram illustrates the workflow of the offline package creator. Additional information regarding the offline package creator can be found in the [README.md file](https://github.com/open-ness/openness-experience-kits/blob/master/offline_package_creator/README.md).
-
-Figure 3. Offline package creator workflow
-![OPC flow](offline-images/offline-flow.png)
-
-To run the offline package creator, follow the steps below (the user should not be "root" but does need "sudo" privileges to create the package; RT components will require installation of the RT kernel on the node by the OPC):
-
-Clone the OpenNESS Experience Kits repo to an online node:
-
-```shell
-# git clone https://github.com/open-ness/ido-openness-experience-kits.git
-```
-
-Navigate to offline package creator directory:
-
-```shell
-# cd ido-openness-experience-kits/oek/offline_package_creator/
-```
-
-Create a directory from which user provided files can be accessed:
-
-```shell
-# mkdir /<path>/<to>/<dir>/
-```
-
-Copy the 'OPAE_SDK_1.3.7-5_el7.zip' file (optional but necessary by default - to be done when OPAE is enabled in the "flexran" flavor of OEK) and syscfg_package.zip (optional but necessary by default - to be done when BIOS config is enabled in the "flexran" flavor of OEK) to the provided directory:
-
-```shell
-# cp OPAE_SDK_1.3.7-5_el7.zip /<path>/<to>/<dir>/
-# cp syscfg_package.zip /<path>/<to>/<dir>/
-```
-
-Edit [ido-openness-experience-kits/oek/offline_package_creator/scripts/initrc](https://github.com/open-ness/openness-experience-kits/blob/master/offline_package_creator/scripts/initrc) file and update with GitHub username/token if necessary, HTTP/GIT proxy if behind firewall and provide paths to file dependencies.
-
-```shell
-# open-ness token
-GITHUB_USERNAME=""
-GITHUB_TOKEN=""
-
-# User add ones
-HTTP_PROXY="http://:" #Add proxy first
-GIT_PROXY="http://:"
-
-# location of OPAE_SDK_1.3.7-5_el7.zip
-BUILD_OPAE=disable
-DIR_OF_OPAE_ZIP="/<path>/<to>/<dir>/"
-
-# location of syscfg_package.zip
-BUILD_BIOSFW=disable
-DIR_OF_BIOSFW_ZIP="/<path>/<to>/<dir>/"
-
-# location of the zip packages for collectd-fpga
-BUILD_COLLECTD_FPGA=disable
-DIR_OF_FPGA_ZIP="/<path>/<to>/<dir>/"
-```
-
-Start the offline package creator script [ido-openness-experience-kits/oek/offline_package_creator/offline_package_creator.sh](https://github.com/open-ness/openness-experience-kits/blob/master/offline_package_creator/offline_package_creator.sh)
-
-```shell
-# bash offline_package_creator.sh all
-```
-
-The script will download all the files defined in [pdl_flexran.yml](https://github.com/open-ness/openness-experience-kits/blob/master/offline_package_creator/package_definition_list/pdl_flexran.yml) and build other necessary images, then copy them to a designated directory. Once the script has finished executing, the user should expect three files under the `ido-openness-experience-kits/roles/offline_roles/unpack_offline_package/files` directory:
-
-```shell
-# ls ido-openness-experience-kits/roles/offline_roles/unpack_offline_package/files
-
-checksum.txt prepackages.tar.gz opcdownloads.tar.gz
-```
-
-Once the archive packages are created and placed in the OEK, the OEK is ready to be configured for offline/air-gapped installation.
-
-## Placing the complete offline package in offline environment
-
-The user has two options for deploying the OEK in an offline/air-gapped environment. Please refer to Figure 1 and Figure 2 of this document for diagrams.
-
-Scenario 1: The user deploys the OEK from an online node with a network connection to the offline/air-gapped environment. In this case, if the online node is the same one on which the offline package creator was run and created the archive files for the OEK, then the OEK directory does not need to be moved and is used as is. The online node is expected to have a password-less SSH connection with all the offline nodes enabled - all the offline nodes are expected to have a password-less SSH connection between control plane and node and vice-versa, and the control plane node needs to be able to SSH into itself.
-
-Scenario 2: The user deploys the OEK from a node within the offline/air-gapped environment. In this case, the user needs to copy the whole OEK directory containing the archived files from the [previous section](#creating-the-offline-package-from-an-online-node) from the online node to one of the nodes in the offline environment via a USB drive or alternative media. It is advisable that the offline node used to run the OEK is a separate node from the actual cluster; if the node is also used as part of the cluster, it will reboot during the script run due to the kernel upgrade and the OEK will need to be run again - this may have unforeseen consequences. All the offline nodes are expected to have a password-less SSH connection between control plane and node and vice-versa, and the control plane node needs to be able to SSH into itself.
-
-Regardless of the scenario in which the OEK is deployed, the deployment method is the same.
-
-## Deployment in offline environment
-
-Once all the previous steps provided within this document are completed and the OEK with offline archives is placed on the node which will run the OEK automation, the user should get familiar with the ["Running-playbooks"](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/network-edge/controller-edge-node-setup.md#running-playbooks) and ["Preconditions"](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/network-edge/controller-edge-node-setup.md#preconditions) sections of the getting started guide and deploy OpenNESS as per the usual deployment steps. Please note that only deployment of the "flexran" flavour is supported for the offline/air-gapped environment; other flavours/configurations and the default deployment may fail due to missing dependencies. Support for the ACC100 accelerator is not available for offline deployment of the "flexran" flavour at the time of writing. Both multi-node and single-node modes are supported.
-
-During the deployment of the offline version of the OEK, the archived files created by the offline package creator will be extracted and placed in the appropriate directory. The OEK will set up a local file share server on the control plane node and move the files to that server. The OEK will also create a local yum repo. All the files and packages will be pulled from this file share server by nodes across the air-gapped OpenNESS cluster. During the execution of the OEK, the Ansible scripts will follow the same logic as in the online mode, with the difference that all the components will be pulled locally from the file share server instead of the Internet.
-
-The following are the specific steps to enable offline/air-gapped deployment from the OEK:
-
-Enable the offline deployment in [ido-openness-experience-kits/group_vars/all/10-open.yml](https://github.com/open-ness/ido-openness-experience-kits/blob/master/group_vars/all/10-open.yml)
-
-```yaml
-## Offline Mode support
-offline_enable: True
-```
-
-Make sure the time on offline nodes is synchronized.
-
-Make sure the nodes can access each other through SSH without a password.
-Make sure the control plane node can SSH into itself, i.e.:
-
-```shell
-# hostname -I
-
-# ssh-copy-id root@<controller_ip>
-```
-
-Make sure the CPUs allocation in "flexran" flavor is configured as desired, [see configs in flavor directory](https://github.com/open-ness/ido-openness-experience-kits/tree/master/flavors/flexran).
-
-Deploy OpenNESS using FlexRAN flavor for multi or single node:
-
-```shell
-# ./deploy_ne.sh -f flexran
-```
-OR
-```shell
-# ./deploy_ne.sh -f flexran single
-```
diff --git a/doc/getting-started/network-edge/supported-epa.md b/doc/getting-started/network-edge/supported-epa.md
deleted file mode 100644
index 4b9dd639..00000000
--- a/doc/getting-started/network-edge/supported-epa.md
+++ /dev/null
@@ -1,22 +0,0 @@
-```text
-SPDX-License-Identifier: Apache-2.0
-Copyright (c) 2019 Intel Corporation
-```
-
-# OpenNESS Network Edge - Enhanced Platform Awareness Features Supported
-- [Overview](#overview)
-- [Features](#features)
-
-## Overview
-Enhanced Platform Awareness (EPA) features are supported in OpenNESS Network Edge using Kubernetes\* infrastructure. Some of the EPA features are supported as Kubernetes jobs, some as daemon sets, some as normal pods, and some as Kubernetes device plugins. The overall objective of EPA for the network edge is to expose the platform capability to the edge cloud orchestrator for better performance, consistency, and reliability.
-
-## Features
-The following EPA features are supported in Open Network Edge Services Software (OpenNESS) Network Edge:
- * [openness-hugepage.md: Hugepages support for edge applications and network functions](https://github.com/open-ness/ido-specs/blob/master/doc/enhanced-platform-awareness/openness-hugepage.md)
- * [openness-node-feature-discovery.md: Edge node hardware and software feature discovery support in OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/enhanced-platform-awareness/openness-node-feature-discovery.md)
- * [openness-sriov-multiple-interfaces.md: Dedicated physical network interface allocation support for edge applications and network functions](https://github.com/open-ness/ido-specs/blob/master/doc/enhanced-platform-awareness/openness-sriov-multiple-interfaces.md)
- * [openness-dedicated-core.md: Dedicated CPU core allocation support for edge applications and network functions](https://github.com/open-ness/ido-specs/blob/master/doc/enhanced-platform-awareness/openness-dedicated-core.md)
- * [openness-bios.md: Edge platform BIOS and firmware and configuration support in OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/enhanced-platform-awareness/openness-bios.md)
- * [openness-fpga.md: Dedicated FPGA IP resource allocation support for edge applications and network functions](https://github.com/open-ness/ido-specs/blob/master/doc/enhanced-platform-awareness/openness-fpga.md)
- * [openness-topology-manager.md: Resource locality awareness support through topology manager in OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/enhanced-platform-awareness/openness-topology-manager.md)
-
diff --git a/doc/getting-started/non-root-user.md b/doc/getting-started/non-root-user.md
new file mode 100644
index 00000000..80722b70
--- /dev/null
+++ b/doc/getting-started/non-root-user.md
@@ -0,0 +1,66 @@
+```text
+SPDX-License-Identifier: Apache-2.0
+Copyright (c) 2021 Intel Corporation
+```
+
+# The non-root user on the OpenNESS Platform
+- [Overview](#overview)
+- [Steps on K8s nodes](#steps-on-k8s-nodes)
+- [Repository modification](#repository-modification)
+
+## Overview
+
+OpenNESS provides the possibility to install all required files on the Kubernetes control plane and nodes with or without the root user. From a security perspective, it is advised to use the non-root user installation of the OpenNESS platform, where all tasks are executed with the non-root user's permissions. Tasks that require root privileges use the privilege escalation property "become".
+
+ ```yml
+ - name: Run a command as root
+ command: whoami
+ become: yes
+ ```
+>**NOTE**: For more about privilege escalation in Ansible, please refer to https://docs.ansible.com/ansible/latest/user_guide/become.html#
+
+## Steps on K8s nodes
+
+Before the Ansible installation is started, a non-root user needs to be created on the machines defined in `inventory.yml`. To create the user `openness`, execute the following command:
+
+```bash
+adduser "openness"
+```
+
+A password for the user is required and can be set with:
+
+```bash
+passwd "openness"
+```
+
+As some tasks require root privileges, the non-root user needs to be able to become root. For the user `openness`, the following command must be executed:
+
+```bash
+echo "openness ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/openness
+```
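+
+As a quick sanity check (not part of the official procedure), passwordless privilege escalation for the new user can be verified as follows; the expected output is `root`:
+
+```bash
+su - openness -c "sudo -n whoami"
+```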
+
+## Repository modification
+
+To run Ansible as a non-root user, a modification to `inventory.yml` is required. Setting the `ansible_user` variable to the already created non-root user causes all tasks to be executed as that user.
+
+Example:
+
+```yaml
+---
+all:
+ vars:
+ cluster_name: minimal_cluster
+ flavor: minimal
+ single_node_deployment: false
+ limit:
+controller_group:
+ hosts:
+ ctrl.openness.org:
+ ansible_host: 172.16.0.1
+ ansible_user: openness
+edgenode_group:
+ hosts:
+ node01.openness.org:
+ ansible_host: 172.16.0.2
+ ansible_user: openness
+```
diff --git a/doc/getting-started/offline-edge-deployment.md b/doc/getting-started/offline-edge-deployment.md
new file mode 100644
index 00000000..ca0856a8
--- /dev/null
+++ b/doc/getting-started/offline-edge-deployment.md
@@ -0,0 +1,187 @@
+```text
+SPDX-License-Identifier: Apache-2.0
+Copyright (c) 2019-2020 Intel Corporation
+```
+
+- [OpenNESS Network Edge: Offline Deployment](#openness-network-edge-offline-deployment)
+ - [OpenNESS support in offline environment](#openness-support-in-offline-environment)
+ - [Setup prerequisites](#setup-prerequisites)
+ - [Creating the offline package from an online node](#creating-the-offline-package-from-an-online-node)
+ - [Placing the complete offline package in offline environment](#placing-the-complete-offline-package-in-offline-environment)
+ - [Deployment in offline environment](#deployment-in-offline-environment)
+# OpenNESS Network Edge: Offline Deployment
+
+## OpenNESS support in offline environment
+
+The OpenNESS project supports deployment of the solution in an air-gapped, offline environment. The support is currently limited to the ["flexran" deployment flavor of Converged Edge Experience Kits](https://github.com/open-ness/ido-converged-edge-experience-kits/tree/master/flavors/flexran) only, and it allows for offline deployment of vRAN specific components. An Internet connection is needed to create the offline package: a script downloads and builds all necessary components and creates an archive of all the necessary files. Once the offline package is created, the installation of Converged Edge Experience Kits proceeds as usual, in the same way as the default online installation would.
+
+It can be deployed in two different scenarios. The first scenario is to deploy the Converged Edge Experience Kits from the online "jumper" node which is being used to create the offline package; this Internet-connected node must have a network connection to the air-gapped/offline nodes. The second scenario is to copy the whole Converged Edge Experience Kits directory with the already archived packages to the air-gapped/offline environment (for example, via USB or other media or means) and run the Converged Edge Experience Kits from within the offline environment. All the nodes within the air-gapped/offline cluster need to be able to SSH into each other.
+
+Figure 1. Scenario one - online node connected to the air-gapped network
+![Scenario one - online node connected to the air-gapped network](offline-images/offline-ssh.png)
+Figure 2. Scenario two - CEEK copied to the air-gapped network
+![Scenario two - CEEK copied to the air-gapped network](offline-images/offline-copy.png)
+
+## Setup prerequisites
+
+* A node with access to internet to create the offline package.
+* Cluster set up in an air-gapped environment.
+* Clean setup, see [pre-requisites](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/openness-cluster-setup.md#preconditions)
+* [Optional] If CEEK is run from an online jumper node, the node needs to be able to SSH into each machine in air-gapped environment.
+* [Optional] A media such as USB drive to copy the offline CEEK package to the air-gapped environment if there is no connection from online node.
+* All the nodes in air-gapped environment must be able to SSH to each other without requiring password input, see [getting-started.md](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/openness-cluster-setup.md#exchanging-ssh-keys-between-hosts).
+* The control plane node needs to be able to SSH itself.
+* The time and date of the nodes in offline environment is manually synchronized by the cluster's admin.
+* User provided files - OPAE_SDK_1.3.7-5_el7.zip and syscfg_package.zip.
+* User provided files - [ice-1.3.2.tar.gz](https://downloadcenter.intel.com/download/29746/Intel-Network-Adapter-Driver-for-E810-Series-Devices-under-Linux-) and [iavf-4.0.2.tar.gz](https://downloadcenter.intel.com/download/24693?v=t) when `e810_driver_enable` flag is set (default setting).
+
+## Creating the offline package from an online node
+
+To create the offline package, the user must have access to an online node from which the offline package creator can download all necessary files and build Docker images. The list of files to be downloaded/built is provided in the form of a package definition list (only the package definition list for the "flexran" flavor of OpenNESS is provided at the time of writing). Various categories of files to be downloaded are provided within this list, including: RPMs, PIP packages, Helm charts, Dockerfiles, Go modules, and miscellaneous downloads. The offline package creator script handles the download/build according to the category of each file. Some files, such as proprietary packages, need to be provided by the user in specified directories (see the following steps). Once the offline package creator collects all necessary components, it packs them into an archive and places them in the appropriate place within the Converged Edge Experience Kits directory. Once the packages are archived, the Converged Edge Experience Kits are ready to be deployed in the air-gapped environment. The following diagram illustrates the workflow of the offline package creator. Additional information regarding the offline package creator can be found in the [README.md file](https://github.com/open-ness/ido-converged-edge-experience-kits/blob/master/offline_package_creator/README.md).
+
+Figure 3. Offline package creator workflow
+![OPC flow](offline-images/offline-flow.png)
+
+To run the offline package creator, follow the steps below:
+> **NOTE:** RT components will require installation of RT kernel on the node by the OPC
+
+Clone the Converged Edge Experience Kits repo to an online node:
+
+```shell
+# git clone https://github.com/open-ness/ido-converged-edge-experience-kits.git
+```
+
+Navigate to offline package creator directory:
+
+```shell
+# cd ido-converged-edge-experience-kits/offline_package_creator/
+```
+
+Create a directory from which user provided files can be accessed:
+
+```shell
+# mkdir /<path>/<to>/<dir>/
+```
+
+Copy the 'OPAE_SDK_1.3.7-5_el7.zip' file (optional but necessary by default - to be done when OPAE is enabled in the "flexran" flavor of CEEK) and syscfg_package.zip (optional but necessary by default - to be done when BIOS config is enabled in the "flexran" flavor of CEEK) to the provided directory:
+
+```shell
+# cp OPAE_SDK_1.3.7-5_el7.zip /<path>/<to>/<dir>/
+# cp syscfg_package.zip /<path>/<to>/<dir>/
+```
+
+Create the `ido-converged-edge-experience-kits/ceek/nic_drivers` directory and copy the `ice-1.3.2.tar.gz` and `iavf-4.0.2.tar.gz` files (optional but necessary by default - to be done when `e810_driver_enable` is enabled in "flexran" flavor of CEEK) to the directory.
+
+```shell
+# mkdir ./ceek/nic_drivers
+# cp ice-1.3.2.tar.gz ./ceek/nic_drivers
+# cp iavf-4.0.2.tar.gz ./ceek/nic_drivers
+```
+
+Edit [ido-converged-edge-experience-kits/offline_package_creator/scripts/initrc](https://github.com/open-ness/ido-converged-edge-experience-kits/blob/master/offline_package_creator/scripts/initrc) file and update with GitHub username/token if necessary, HTTP/GIT proxy if behind firewall and provide paths to file dependencies.
+
+```shell
+# open-ness token
+GITHUB_TOKEN=""
+
+# User add ones
+GIT_PROXY="http://:"
+
+# location of OPAE_SDK_1.3.7-5_el7.zip
+BUILD_OPAE=disable
+DIR_OF_OPAE_ZIP="/<path>/<to>/<dir>/"
+
+# location of syscfg_package.zip
+BUILD_BIOSFW=disable
+DIR_OF_BIOSFW_ZIP="/<path>/<to>/<dir>/"
+
+# location of the zip packages for collectd-fpga
+BUILD_COLLECTD_FPGA=disable
+DIR_OF_FPGA_ZIP="/<path>/<to>/<dir>/"
+```
+
+Start the offline package creator script [ido-converged-edge-experience-kits/offline_package_creator/offline_package_creator.sh](https://github.com/open-ness/ido-converged-edge-experience-kits/blob/master/offline_package_creator/offline_package_creator.sh)
+
+```shell
+# bash offline_package_creator.sh all
+```
+
+The script will download all the files defined in [pdl_flexran.yml](https://github.com/open-ness/ido-converged-edge-experience-kits/blob/master/offline_package_creator/package_definition_list/pdl_flexran.yml) and build other necessary images, then copy them to a designated directory. Once the script has finished executing, the user should expect three files under the `ido-converged-edge-experience-kits/roles/offline_roles/unpack_offline_package/files` directory:
+
+```shell
+# ls ido-converged-edge-experience-kits/roles/offline_roles/unpack_offline_package/files
+
+checksum.txt prepackages.tar.gz opcdownloads.tar.gz
+```
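+
+If `checksum.txt` uses the standard `sha256sum` output format (an assumption here, not confirmed by this guide), the integrity of the generated archives can be verified in that directory before they are moved:
+
+```shell
+# cd ido-converged-edge-experience-kits/roles/offline_roles/unpack_offline_package/files
+# sha256sum -c checksum.txt
+```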
+
+Once the archive packages are created and placed in the CEEK, the CEEK is ready to be configured for offline/air-gapped installation.
+
+## Placing the complete offline package in offline environment
+
+The user has two options for deploying the CEEK in an offline/air-gapped environment. Please refer to Figure 1 and Figure 2 of this document for diagrams.
+
+Scenario 1: The user deploys the CEEK from an online node with a network connection to the offline/air-gapped environment. In this case, if the online node is the same one on which the offline package creator was run and created the archive files for the CEEK, then the CEEK directory does not need to be moved and is used as is. The online node is expected to have a password-less SSH connection with all the offline nodes enabled - all the offline nodes are expected to have a password-less SSH connection between control plane and node and vice-versa, and the control plane node needs to be able to SSH into itself.
+
+Scenario 2: The user deploys the CEEK from a node within the offline/air-gapped environment. In this case, the user needs to copy the whole CEEK directory containing the archived files from the [previous section](#creating-the-offline-package-from-an-online-node) from the online node to one of the nodes in the offline environment via a USB drive or alternative media. It is advisable that the offline node used to run the CEEK is a separate node from the actual cluster; if the node is also used as part of the cluster, it will reboot during the script run due to the kernel upgrade and the CEEK will need to be run again - this may have unforeseen consequences. All the offline nodes are expected to have a password-less SSH connection between control plane and node and vice-versa, and the control plane node needs to be able to SSH into itself.
+
+Regardless of the scenario in which the CEEK is deployed, the deployment method is the same.
+
+## Deployment in offline environment
+
+Once all the previous steps provided within this document are completed and the CEEK with offline archives is placed on the node which will run the CEEK automation, the user should get familiar with the ["Running-playbooks"](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/openness-cluster-setup.md#running-playbooks) and ["Preconditions"](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/openness-cluster-setup.md#preconditions) sections of the getting started guide and deploy OpenNESS as per the usual deployment steps. Please note that only deployment of the "flexran" flavour is supported for the offline/air-gapped environment; other flavours/configurations and the default deployment may fail due to missing dependencies. Support for the ACC100 accelerator is not available for offline deployment of the "flexran" flavour at the time of writing. Both multi-node and single-node modes are supported.
+
+During the deployment of the offline version of the CEEK, the archived files created by the offline package creator will be extracted and placed in the appropriate directory. The CEEK will set up a local file share server on the control plane node and move the files to that server. The CEEK will also create a local yum repo. All the files and packages will be pulled from this file share server by nodes across the air-gapped OpenNESS cluster. During the execution of the CEEK, the Ansible scripts will follow the same logic as in the online mode, with the difference that all the components will be pulled locally from the file share server instead of the Internet.
+
+The following are the specific steps to enable offline/air-gapped deployment from the CEEK:
+
+Enable the offline deployment in [ido-converged-edge-experience-kits/inventory/default/group_vars/all/10-open.yml](https://github.com/open-ness/ido-converged-edge-experience-kits/blob/master/inventory/default/group_vars/all/10-open.yml)
+
+```yaml
+## Offline Mode support
+offline_enable: True
+```
+
+Make sure the time on offline nodes is synchronized.
+
+Make sure the nodes can access each other through SSH without a password.
+Make sure the control plane node can SSH into itself, i.e.:
+
+```shell
+# hostname -I | awk '{print $1}'
+
+# ssh-copy-id root@<controller_ip>
+```
+If a non-root user (e.g., openness) is being used to deploy the cluster, a rule needs to be added allowing the controller node to access itself through SSH without a password.
+```shell
+# hostname -I | awk '{print $1}'
+
+# ssh-copy-id root@<controller_ip>
+$ ssh-copy-id openness@<controller_ip>
+```
+
+Make sure the CPUs allocation in "flexran" flavor is configured as desired, [see configs in flavor directory](https://github.com/open-ness/ido-converged-edge-experience-kits/tree/master/flavors/flexran).
+
+Deploy OpenNESS using FlexRAN flavor for multi or single node:
+
+1. Update the `inventory.yaml` file by setting the deployment flavor as `flexran` and set single node deployment flag to `true` for single node deployment:
+ ```yaml
+ ---
+ all:
+ vars:
+ cluster_name: offline_flexran_cluster
+ flavor: flexran
+ single_node_deployment: false # set true for single node
+ ...
+ ```
+ > **NOTE:** set `single_node_deployment:` to `true` for single node
+
+2. Install the pre-requisites.
+```shell
+# ./scripts/ansible-precheck.sh
+```
+
+3. Run deployment:
+```shell
+# python3 deploy.py
+```
+> **NOTE**: for more details about deployment and defining inventory please refer to [CEEK](../../getting-started/converged-edge-experience-kits.md#converged-edge-experience-kit-explained) getting started page.
diff --git a/doc/getting-started/network-edge/offline-images/offline-copy.png b/doc/getting-started/offline-images/offline-copy.png
similarity index 100%
rename from doc/getting-started/network-edge/offline-images/offline-copy.png
rename to doc/getting-started/offline-images/offline-copy.png
diff --git a/doc/getting-started/network-edge/offline-images/offline-flow.png b/doc/getting-started/offline-images/offline-flow.png
similarity index 100%
rename from doc/getting-started/network-edge/offline-images/offline-flow.png
rename to doc/getting-started/offline-images/offline-flow.png
diff --git a/doc/getting-started/network-edge/offline-images/offline-ssh.png b/doc/getting-started/offline-images/offline-ssh.png
similarity index 100%
rename from doc/getting-started/network-edge/offline-images/offline-ssh.png
rename to doc/getting-started/offline-images/offline-ssh.png
diff --git a/doc/getting-started/network-edge/controller-edge-node-setup-images/dashboard-login.png b/doc/getting-started/openness-cluster-setup-images/dashboard-login.png
similarity index 100%
rename from doc/getting-started/network-edge/controller-edge-node-setup-images/dashboard-login.png
rename to doc/getting-started/openness-cluster-setup-images/dashboard-login.png
diff --git a/doc/getting-started/network-edge/controller-edge-node-setup-images/harbor_ui.png b/doc/getting-started/openness-cluster-setup-images/harbor_ui.png
similarity index 100%
rename from doc/getting-started/network-edge/controller-edge-node-setup-images/harbor_ui.png
rename to doc/getting-started/openness-cluster-setup-images/harbor_ui.png
diff --git a/doc/getting-started/openness-cluster-setup.md b/doc/getting-started/openness-cluster-setup.md
new file mode 100644
index 00000000..fe079122
--- /dev/null
+++ b/doc/getting-started/openness-cluster-setup.md
@@ -0,0 +1,519 @@
+```text
+SPDX-License-Identifier: Apache-2.0
+Copyright (c) 2019-2021 Intel Corporation
+```
+
+# OpenNESS Network Edge: Controller and Edge node setup
+- [Quickstart](#quickstart)
+- [Preconditions](#preconditions)
+- [Running playbooks](#running-playbooks)
+ - [Deployment scripts](#deployment-scripts)
+ - [Network Edge playbooks](#network-edge-playbooks)
+ - [Cleanup procedure](#cleanup-procedure)
+ - [Supported EPA features](#supported-epa-features)
+ - [VM support for Network Edge](#vm-support-for-network-edge)
+ - [Application on-boarding](#application-on-boarding)
+ - [Single-node Network Edge cluster](#single-node-network-edge-cluster)
+ - [Kubernetes cluster networking plugins (Network Edge)](#kubernetes-cluster-networking-plugins-network-edge)
+ - [Selecting cluster networking plugins (CNI)](#selecting-cluster-networking-plugins-cni)
+ - [Adding additional interfaces to pods](#adding-additional-interfaces-to-pods)
+- [Q&A](#qa)
+ - [Configuring time](#configuring-time)
+ - [Setup static hostname](#setup-static-hostname)
+ - [Configuring the Inventory file](#configuring-the-inventory-file)
+ - [Exchanging SSH keys between hosts](#exchanging-ssh-keys-between-hosts)
+ - [Setting proxy](#setting-proxy)
+ - [Obtaining installation files](#obtaining-installation-files)
+ - [Setting Git](#setting-git)
+ - [GitHub token](#github-token)
+ - [Customize tag/branch/sha to checkout on edgeservices repository](#customize-tagbranchsha-to-checkout-on-edgeservices-repository)
+ - [Customization of kernel, grub parameters, and tuned profile](#customization-of-kernel-grub-parameters-and-tuned-profile)
+
+# Quickstart
+The following set of actions must be completed to set up the Open Network Edge Services Software (OpenNESS) cluster.
+
+1. Fulfill the [Preconditions](#preconditions).
+2. Become familiar with [supported features](#supported-epa-features) and enable them if desired.
+3. Clone [Converged Edge Experience Kits](https://github.com/open-ness/converged-edge-experience-kits)
+4. Install deployment helper script pre-requisites (first time only)
+
+ ```shell
+ $ sudo scripts/ansible-precheck.sh
+ ```
+
+5. Run the [deployment helper script](#running-playbooks) for the Ansible\* playbook:
+
+ ```shell
+ $ python3 deploy.py
+ ```
+
+# Preconditions
+
+To use the playbooks, several preconditions must be fulfilled. These preconditions are described in the [Q&A](#qa) section below. The preconditions are:
+
+- CentOS\* 7.9.2009 must be installed on all the nodes (the controller and edge nodes) where the product is deployed. It is highly recommended to install the operating system using a minimal ISO image on the nodes that will take part in the deployment (as listed in the inventory file). Also, do not make customizations after a fresh manual install because they might interfere with the Ansible scripts and give unpredictable results during deployment.
+- Hosts for the Edge Controller (Kubernetes control plane) and Edge Nodes (Kubernetes nodes) must have proper and unique hostnames (i.e., not `localhost`). This hostname must be specified in `/etc/hosts` (refer to [Setup static hostname](#setup-static-hostname)).
+- SSH keys must be exchanged between hosts (refer to [Exchanging SSH keys between hosts](#exchanging-ssh-keys-between-hosts)).
+- A proxy may need to be set (refer to [Setting proxy](#setting-proxy)).
+- If a private repository is used, a Github\* token must be set up (refer to [GitHub token](#github-token)).
+- Refer to the [Configuring time](#configuring-time) section for how to enable Network Time Protocol (NTP) clients.
+- The Ansible inventory must be configured (refer to [Configuring the Inventory file](#configuring-the-inventory-file)).
+
+# Running playbooks
+
+The Network Edge deployment and cleanup is carried out via Ansible playbooks. The playbooks are run from the Ansible host. Before running the playbooks, an inventory file `inventory.yml` must be defined. The provided [deployment helper scripts](#deployment-scripts) support deploying multiple clusters as defined in the Inventory file.
+
+The following subsections describe the playbooks in more details.
+
+## Deployment scripts
+
+For convenience, playbooks can be executed by running helper deployment scripts from the Ansible host. These scripts require that the Edge Controller and Edge Nodes be configured on different hosts (for deployment on a single node, refer to [Single-node Network Edge cluster](#single-node-network-edge-cluster)). This is done by configuring the Ansible playbook inventory, as described later in this document.
+
+To get started with deploying an OpenNESS edge cluster using the Converged Edge Experience Kit:
+
+1. Install pre-requisite tools for the deployment script
+
+ ```shell
+ $ sudo scripts/ansible-precheck.sh
+ ```
+
+2. Edit the `inventory.yml` file by providing information about the cluster nodes and the intended deployment flavor
+
+ Example:
+
+ ```yaml
+ ---
+ all:
+ vars:
+ cluster_name: 5g_near_edge
+ flavor: cera_5g_near_edge
+ single_node_deployment: false
+ limit:
+ controller_group:
+ hosts:
+ ctrl.openness.org:
+ ansible_host: 10.102.227.154
+ ansible_user: openness
+ edgenode_group:
+ hosts:
+ node01.openness.org:
+ ansible_host: 10.102.227.11
+ ansible_user: openness
+ node02.openness.org:
+ ansible_host: 10.102.227.79
+ ansible_user: openness
+ edgenode_vca_group:
+ hosts:
+ ptp_master:
+ hosts:
+ ptp_slave_group:
+ hosts:
+ ```
+
+ > **NOTE**: To deploy multiple clusters in one command run, append the same set of YAML specs separated by `---`
+
+3. Additional configurations should be applied to the default group_vars file: `inventory/default/group_vars/all/10-default.yml`. More details on the default values are explained in the [Getting Started Guide](../converged-edge-experience-kits.md#default-values).
+
+4. Get the deployment started by executing the deploy script
+
+ ```shell
+ $ python3 deploy.py
+ ```
+ > **NOTE**: This script parses the values provided in the inventory.yml file.
+
+ > **NOTE**: If you want to enforce deployment termination in case of any failure, use the arguments `-f` or `--any-errors-fatal`, e.g.:
+ > ```shell
+ > $ python3 deploy.py --any-errors-fatal
+ > ```
+
+5. To cleanup an existing deployment, execute with `-c` or `--clean`, e.g:
+
+ ```shell
+ $ python3 deploy.py --clean
+ ```
+ > **NOTE**: If it is intended to do the cleanup manually, i.e: one cluster at a time, update the `inventory.yml` with only the intended cluster configuration
+
+For an initial installation, `deploy.py` with `all/vars/limit: controller` must be run before `deploy.py` with `all/vars/limit: nodes`. During the initial installation, the hosts may reboot. After reboot, the deployment script that was last run should be run again.
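+
+For illustration, the two passes could be selected by changing only the `limit` value in the inventory from the earlier example (a sketch; all other fields stay as previously defined):
+
+```yaml
+---
+all:
+  vars:
+    cluster_name: 5g_near_edge
+    flavor: cera_5g_near_edge
+    single_node_deployment: false
+    limit: controller   # first run; change to "nodes" for the second run
+```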
+
+
+## Network Edge playbooks
+
+The `network_edge.yml` and `network_edge_cleanup.yml` files contain playbooks for Network Edge mode.
+Playbooks can be customized by enabling and configuring features in the `inventory/default/group_vars/all/10-open.yml` file.
+
+### Cleanup procedure
+
+The cleanup procedure is used when a configuration error in the Edge Controller or Edge Nodes must be fixed. The script causes the appropriate installation to be reverted, so that the error can be fixed and `deploy.py` can be re-run. The cleanup procedure does not do a comprehensive cleanup (e.g., installation of DPDK or Golang will not be rolled back).
+
+The cleanup procedure calls a set of cleanup roles that revert the changes resulting from the cluster deployment. The changes are reverted by going through the deployment steps in reverse order and undoing them.
+
+For example, when installing Docker\*, the RPM repository is added and Docker is installed. When cleaning up, Docker is uninstalled and the repository is removed.
+
+To execute the cleanup procedure:
+
+```shell
+$ python3 deploy.py --clean
+```
+
+> **NOTE**: There may be leftovers created by the installed software. For example, DPDK and Golang installations, found in `/opt`, are not rolled back.
+
+### Supported EPA features
+
+Several enhanced platform capabilities and features are available in OpenNESS for Network Edge. For the full list of supported features, refer to the Building Blocks / Enhanced Platform Awareness section. The referenced documents provide a detailed description of the features and step-by-step instructions for enabling them. Users should become familiar with the available features before executing the deployment playbooks.
+
+### VM support for Network Edge
+Support for VM deployment on OpenNESS for Network Edge is available and enabled by default. Certain configurations and prerequisites may need to be satisfied to use all VM capabilities. The user is advised to become familiar with the VM support documentation before executing the deployment playbooks. See [openness-network-edge-vm-support](../../applications-onboard/openness-network-edge-vm-support.md) for more information.
+
+### Application on-boarding
+
+Refer to the [network-edge-applications-onboarding](../../applications-onboard/network-edge-applications-onboarding.md) document for instructions on how to deploy edge applications for OpenNESS Network Edge.
+
+### Single-node Network Edge cluster
+
+Network Edge can be deployed on a single machine acting as both the control plane and a node.
+
+To deploy Network Edge in a single-node cluster scenario, follow the steps below:
+
+1. Modify `inventory.yml`
+ > Rules for inventory:
+ > - IP address (`ansible_host`) for both controller and node must be the same
+ > - `controller_group` and `edgenode_group` groups must contain exactly one host
+ > - `single_node_deployment` flag set to `true`
+
+ Example of a valid inventory:
+
+ ```yaml
+ ---
+ all:
+ vars:
+ cluster_name: 5g_central_office
+ flavor: cera_5g_central_office
+ single_node_deployment: true
+ limit:
+ controller_group:
+ hosts:
+ controller:
+ ansible_host: 10.102.227.234
+ ansible_user: openness
+ edgenode_group:
+ hosts:
+ node01:
+ ansible_host: 10.102.227.234
+ ansible_user: openness
+ edgenode_vca_group:
+ hosts:
+ ptp_master:
+ hosts:
+ ptp_slave_group:
+ hosts:
+ ```
+
+2. Features can be enabled in the `inventory/default/group_vars/all/10-default.yml` file by tweaking the configuration variables.
+
+3. Settings regarding the kernel, grub, HugePages\*, and tuned can be customized in `inventory/default/group_vars/edgenode_group/10-default.yml`.
+
+ > **NOTE**: Default settings in the single-node cluster mode are those of the Edge Node (i.e., kernel and tuned customization enabled).
+
+4. A single-node cluster can be deployed by running the command:
+ ```shell
+ $ python3 deploy.py
+ ```
+
+## Kubernetes cluster networking plugins (Network Edge)
+
+Kubernetes uses 3rd party networking plugins to provide [cluster networking](https://kubernetes.io/docs/concepts/cluster-administration/networking/).
+These plugins are based on the [CNI (Container Network Interface) specification](https://github.com/containernetworking/cni).
+
+Converged Edge Experience Kits provide several ready-to-use Ansible roles deploying CNIs.
+The following CNIs are currently supported:
+
+* [kube-ovn](https://github.com/alauda/kube-ovn)
+ * **Only as primary CNI**
+ * CIDR: 10.16.0.0/16
+* [calico](https://github.com/projectcalico/cni-plugin)
+ * **Only as primary CNI**
+ * IPAM: host-local
+ * CIDR: 10.245.0.0/16
+ * Network attachment definition: openness-calico
+* [flannel](https://github.com/coreos/flannel)
+ * IPAM: host-local
+ * CIDR: 10.244.0.0/16
+ * Network attachment definition: openness-flannel
+* [weavenet](https://github.com/weaveworks/weave)
+ * CIDR: 10.32.0.0/12
+* [SR-IOV](https://github.com/intel/sriov-cni) (cannot be used as a standalone or primary CNI - [sriov setup](../../building-blocks/enhanced-platform-awareness/openness-sriov-multiple-interfaces.md))
+* [Userspace](https://github.com/intel/userspace-cni-network-plugin) (cannot be used as a standalone or primary CNI - [Userspace CNI setup](../../building-blocks/dataplane/openness-userspace-cni.md))
+
+Multiple CNIs can be set up for the cluster; [the Multus CNI](https://github.com/intel/multus-cni) is used to provide this functionality.
+
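+Once a cluster with more than one CNI (and therefore Multus) is deployed, one way to verify the secondary networks is to list the network attachment definitions; the `openness-calico` and `openness-flannel` definitions from the list above should appear if those CNIs were requested:
+
+```shell
+# List Multus network attachment definitions in all namespaces
+kubectl get network-attachment-definitions -A
+```
+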
+>**NOTE**: For a guide on how to add a new CNI role to the Converged Edge Experience Kits, refer to [the Converged Edge Experience Kits guide](../../getting-started/converged-edge-experience-kits.md#adding-new-cni-plugins-for-kubernetes-network-edge).
+
+### Selecting cluster networking plugins (CNI)
+
+The default CNI for OpenNESS is calico. Non-default CNIs may be configured with OpenNESS by editing the file `inventory/default/group_vars/all/10-open.yml`.
+To add a non-default CNI, the following edits must be carried out:
+
+- The CNI name is added to the `kubernetes_cnis` variable. The CNIs are applied in the order in which they appear in the file. By default, `calico` is defined. That is,
+
+ ```yaml
+ kubernetes_cnis:
+ - calico
+ ```
+
+- To add a CNI, such as SR-IOV, the `kubernetes_cnis` variable is edited as follows:
+
+ ```yaml
+ kubernetes_cnis:
+ - calico
+ - sriov
+ ```
+
+- The Multus CNI is added by the Ansible playbook when two or more CNIs are defined in the `kubernetes_cnis` variable
+- The CNI's networks (CIDR for pods, and other CIDRs used by the CNI) are added to the `proxy_noproxy` variable by Ansible playbooks.
+
+### Adding additional interfaces to pods
+
+To add an additional interface from a secondary CNI, an annotation is required.
+Below is an example pod yaml file for a scenario with `kube-ovn` as a primary CNI along with `calico` and `flannel` as additional CNIs.
+Multus\* will create an interface named `calico` using the network attachment definition `openness-calico` and interface `flannel` using the network attachment definition `openness-flannel`.
+>**NOTE**: Additional annotations such as `openness-calico@calico` are required only if the CNI is secondary. If the CNI is primary, the interface will be added automatically by Multus\*.
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: cni-test-pod
+ annotations:
+ k8s.v1.cni.cncf.io/networks: openness-calico@calico, openness-flannel@flannel
+spec:
+ containers:
+ - name: cni-test-pod
+ image: docker.io/centos/tools:latest
+ command:
+ - /sbin/init
+```
+
+The following is example output of the `ip a` command run in a pod after the CNIs have been applied. Some lines in the command output have been omitted for readability.
+
+The following interfaces are available: `calico@if142`, `flannel@if143`, and `eth0@if141` (`kubeovn`).
+
+```shell
+# kubectl exec -ti cni-test-pod ip a
+
+1: lo:
+ inet 127.0.0.1/8 scope host lo
+
+2: tunl0@NONE:
+ link/ipip 0.0.0.0 brd 0.0.0.0
+
+4: calico@if142:
+ inet 10.243.0.3/32 scope global calico
+
+6: flannel@if143:
+ inet 10.244.0.3/16 scope global flannel
+
+140: eth0@if141:
+ inet 10.16.0.5/16 brd 10.16.255.255 scope global eth0
+```
+
+# Q&A
+
+## Configuring time
+
+To allow for correct certificate verification, OpenNESS requires system time to be synchronized among all nodes and controllers in a system.
+
+OpenNESS provides the possibility to synchronize a machine's time with an NTP server.
+To enable NTP synchronization, change `ntp_enable` in `inventory/default/group_vars/all/10-open.yml`:
+```yaml
+ntp_enable: true
+```
+
+Servers to be used instead of default ones can be provided using the `ntp_servers` variable in `inventory/default/group_vars/all/10-open.yml`:
+```yaml
+ntp_servers: ["ntp.local.server"]
+```
+
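+A quick way to confirm on a node that time synchronization is active (assuming systemd's `timedatectl` is available, as it is on CentOS 7):
+
+```shell
+# "NTP synchronized: yes" indicates the clock is currently disciplined by NTP
+timedatectl status
+```
+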
+## Setup static hostname
+
+The following command is used in CentOS\* to set a static hostname:
+
+```shell
+hostnamectl set-hostname <host_name>
+```
+
+As shown in the following example, the hostname must also be defined in `/etc/hosts`:
+
+```shell
+127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 <host_name>
+::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 <host_name>
+```
+
+In addition to being unique within the cluster, the hostname must also follow Kubernetes naming conventions: it may contain only lower-case alphanumeric characters, "-", or ".", and must start and end with an alphanumeric character. Refer to
+[K8s naming restrictions](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names) for additional details on these conventions.
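+
+A short, illustrative sequence using the node name from the inventory examples in this guide:
+
+```shell
+# Set a Kubernetes-compliant static hostname and verify it
+hostnamectl set-hostname node01.openness.org
+hostnamectl status
+```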
+
+## Configuring the Inventory file
+
+To execute playbooks, an inventory file `inventory.yml` must be defined in order to specify the target nodes on which the OpenNESS cluster(s) will be deployed.
+
+The OpenNESS inventory contains three groups: `all`, `controller_group`, and `edgenode_group`.
+
+- `all` contains all the variable definitions relevant to the cluster:
+ > `cluster_name`: defines the name of the OpenNESS edge cluster
+ > `flavor`: the deployment flavor to be deployed to the OpenNESS edge cluster
+ > `single_node_deployment`: when set to `true`, mandates a single-node cluster deployment
+- `controller_group` defines the node to be set up as the OpenNESS Edge Controller
+ > **NOTE**: Because only one controller is supported, the `controller_group` can contain only one host.
+- `edgenode_group` defines the group of nodes that constitute the OpenNESS Edge Nodes.
+ > **NOTE**: All nodes will be joined to the OpenNESS Edge Controller defined in `controller_group`.
+
+Example:
+
+```yaml
+---
+all:
+ vars:
+ cluster_name: 5g_near_edge
+ flavor: cera_5g_near_edge
+ single_node_deployment: false
+ limit:
+controller_group:
+ hosts:
+ ctrl.openness.org:
+ ansible_host: 10.102.227.154
+ ansible_user: openness
+edgenode_group:
+ hosts:
+ node01.openness.org:
+ ansible_host: 10.102.227.11
+ ansible_user: openness
+ node02.openness.org:
+ ansible_host: 10.102.227.79
+ ansible_user: openness
+edgenode_vca_group:
+ hosts:
+ptp_master:
+ hosts:
+ptp_slave_group:
+ hosts:
+```
+
+In this example, a cluster named `5g_near_edge` is deployed using the pre-defined deployment flavor `cera_5g_near_edge`. The cluster is composed of one controller node (`ctrl.openness.org`) and two edge nodes (`node01.openness.org` and `node02.openness.org`).
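+
+Before starting a deployment, the inventory can be sanity-checked from the Ansible host. This is a hedged example: it assumes Ansible is already installed (e.g., by `scripts/ansible-precheck.sh`) and that SSH keys have been exchanged with the listed hosts:
+
+```shell
+# Confirm that every host defined in inventory.yml is reachable
+ansible -i inventory.yml all -m ping
+```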
+
+## Exchanging SSH keys between hosts
+
+Exchanging SSH keys between hosts permits a password-less SSH connection from the host running Ansible to the hosts being set up.
+
+In the first step, the host running Ansible (usually the Edge Controller host) must have a generated SSH key. The SSH key can be generated by executing `ssh-keygen`.
+
+The following is an example of a key generation, in which the key is placed in the default directory (`/root/.ssh/id_rsa`), and an empty passphrase is used.
+
+```shell
+# ssh-keygen
+
+Generating public/private rsa key pair.
+Enter file in which to save the key (/root/.ssh/id_rsa):
+Enter passphrase (empty for no passphrase):
+Enter same passphrase again:
+Your identification has been saved in /root/.ssh/id_rsa.
+Your public key has been saved in /root/.ssh/id_rsa.pub.
+The key fingerprint is:
+SHA256:vlcKVU8Tj8nxdDXTW6AHdAgqaM/35s2doon76uYpNA0 root@host
+The key's randomart image is:
++---[RSA 2048]----+
+| .oo.==*|
+| . . o=oB*|
+| o . . ..o=.=|
+| . oE. . ... |
+| ooS. |
+| ooo. . |
+| . ...oo |
+| . .*o+.. . |
+| =O==.o.o |
++----[SHA256]-----+
+```
+
+In the second step, the generated key must be copied to **every host from the inventory**, including the host on which the key was generated, if it appears in the inventory (e.g., if the playbooks are executed from the Edge Controller host, the host must also have a copy of its key). It is done by running `ssh-copy-id`. For example:
+
+```shell
+# ssh-copy-id root@host
+
+/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
+The authenticity of host '<host> (<IP>)' can't be established.
+ECDSA key fingerprint is SHA256:c7EroVdl44CaLH/IOCBu0K0/MHl8ME5ROMV0AGzs8mY.
+ECDSA key fingerprint is MD5:38:c8:03:d6:5a:8e:f7:7d:bd:37:a0:f1:08:15:28:bb.
+Are you sure you want to continue connecting (yes/no)? yes
+/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
+/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
+root@host's password:
+
+Number of key(s) added: 1
+
+Now, try logging into the machine, with: "ssh 'root@host'"
+and check to make sure that only the key(s) you wanted were added.
+```
+
+To make sure the key was copied successfully, try to SSH into the host: `ssh 'root@host'`. It should not prompt for a password.
+
+>**NOTE**: Where a non-root user is used, for example `openness`, the command should be replaced with `ssh openness@host`. For more information about the non-root user, refer to
+[The non-root user on the OpenNESS Platform](../../building-blocks/enhanced-platform-awareness/openness-nonroot.md).
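+
+When several hosts are listed in the inventory, the copy step can be repeated in a small loop (a sketch only; the host names below come from the inventory example above and should be replaced with your own):
+
+```shell
+# Copy the Ansible host's public key to every machine in the inventory
+for host in ctrl.openness.org node01.openness.org node02.openness.org; do
+    ssh-copy-id openness@"${host}"
+done
+```
+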
+## Setting proxy
+
+If a proxy is required to connect to the Internet, it is configured via the following steps:
+
+- Edit the `proxy_` variables in the `inventory/default/group_vars/all/10-open.yml` file.
+- Set the `proxy_enable` variable in `inventory/default/group_vars/all/10-open.yml` file to `true`.
+- Append the network CIDR (e.g., `192.168.0.1/24`) to the `proxy_noproxy` variable in `inventory/default/group_vars/all/10-open.yml`.
+
+Sample configuration of `inventory/default/group_vars/all/10-open.yml`:
+
+```yaml
+# Setup proxy on the machine - required if the Internet is accessible via proxy
+proxy_enable: true
+# Clear previous proxy settings
+proxy_remove_old: true
+# Proxy URLs to be used for HTTP, HTTPS and FTP
+proxy_http: "http://proxy.example.org:3128"
+proxy_https: "http://proxy.example.org:3129"
+proxy_ftp: "http://proxy.example.org:3128"
+# Proxy to be used by YUM (/etc/yum.conf)
+proxy_yum: "{{ proxy_http }}"
+# No proxy setting contains addresses and networks that should not be accessed using proxy (e.g., local network and Kubernetes CNI networks)
+proxy_noproxy: ""
+```
+
+Sample definition of `no_proxy` environmental variable for Ansible host (to allow Ansible host to connect to other hosts):
+
+```shell
+export no_proxy="localhost,127.0.0.1,10.244.0.0/24,10.96.0.0/12,192.168.0.0/24"
+```
+
+## Obtaining installation files
+
+There are no specific restrictions on the directory into which the OpenNESS directories are cloned. When OpenNESS is built, additional directories will be installed in `/opt`. It is recommended to clone the project into a directory such as `/home`.
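+
+A possible clone sequence following the recommendation above (the URL is the public Converged Edge Experience Kits repository referenced elsewhere in this documentation; the submodule step is only needed if the cloned kit uses submodules, as the IDO variant does):
+
+```shell
+cd /home
+git clone https://github.com/open-ness/converged-edge-experience-kits.git
+cd converged-edge-experience-kits
+# Pull in any nested repositories referenced by the kit
+git submodule update --init --recursive
+```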
+
+## Setting Git
+
+### GitHub token
+
+>**NOTE**: Only required when cloning private repositories. Not needed when using github.com/open-ness repositories.
+
+To clone private repositories, a GitHub token must be provided.
+
+To generate a GitHub token, refer to [GitHub help - Creating a personal access token for the command line](https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token-for-the-command-line).
+
+To provide the token, edit the value of `git_repo_token` variable in `inventory/default/group_vars/all/10-open.yml`.
+
+### Customize tag/branch/sha to checkout on edgeservices repository
+
+A specific tag, branch, or commit SHA of the edgeservices repository can be checked out by setting the `git_repo_branch` variable in `inventory/default/group_vars/all/10-open.yml`.
+
+```yaml
+git_repo_branch: master
+# or
+git_repo_branch: openness-20.03
+```
+
+## Customization of kernel, grub parameters, and tuned profile
+
+Converged Edge Experience Kits provide an easy way to customize the kernel version, grub parameters, and tuned profile. For more information, refer to the [Converged Edge Experience Kits guide](../../getting-started/converged-edge-experience-kits.md).
+
diff --git a/doc/getting-started/openness-experience-kits.md b/doc/getting-started/openness-experience-kits.md
deleted file mode 100644
index 32559f7a..00000000
--- a/doc/getting-started/openness-experience-kits.md
+++ /dev/null
@@ -1,229 +0,0 @@
-```text
-SPDX-License-Identifier: Apache-2.0
-Copyright (c) 2019 Intel Corporation
-```
-
-# OpenNESS Experience Kits
-- [Purpose](#purpose)
-- [OpenNESS setup playbooks](#openness-setup-playbooks)
-- [Customizing kernel, grub parameters, and tuned profile & variables per host](#customizing-kernel-grub-parameters-and-tuned-profile--variables-per-host)
- - [IP address range allocation for various CNIs and interfaces](#ip-address-range-allocation-for-various-cnis-and-interfaces)
- - [Default values](#default-values)
- - [Use newer realtime kernel (3.10.0-1062)](#use-newer-realtime-kernel-3100-1062)
- - [Use newer non-rt kernel (3.10.0-1062)](#use-newer-non-rt-kernel-3100-1062)
- - [Use tuned 2.9](#use-tuned-29)
- - [Default kernel and configure tuned](#default-kernel-and-configure-tuned)
- - [Change amount of HugePages](#change-amount-of-hugepages)
- - [Change size of HugePages](#change-size-of-hugepages)
- - [Change amount and size of HugePages](#change-amount-and-size-of-hugepages)
- - [Remove input output memory management unit (IOMMU) from grub params](#remove-input-output-memory-management-unit-iommu-from-grub-params)
- - [Add custom GRUB parameter](#add-custom-grub-parameter)
- - [Configure OVS-DPDK in kube-ovn](#configure-ovs-dpdk-in-kube-ovn)
-- [Adding new CNI plugins for Kubernetes (Network Edge)](#adding-new-cni-plugins-for-kubernetes-network-edge)
-
-## Purpose
-
-The OpenNESS Experience Kit (OEK) repository contains a set of Ansible\* playbooks for the easy setup of OpenNESS in **Network Edge** mode.
-
-## OpenNESS setup playbooks
-
-## Customizing kernel, grub parameters, and tuned profile & variables per host
-
-OEKs allow a user to customize kernel, grub parameters, and tuned profiles by leveraging Ansible's feature of `host_vars`.
-
-> **NOTE**: `groups_vars/[edgenode|controller|edgenode_vca]_group` directories contain variables applicable for the respective groups and they can be used in `host_vars` to change on per node basis while `group_vars/all` contains cluster wide variables.
-
-OEKs contain a `host_vars/` directory that can be used to place a YAML file (`nodes-inventory-name.yml`, e.g., `node01.yml`). The file would contain variables that would override roles' default values.
-
-> **NOTE**: Despite the ability to customize parameters (kernel), it is required to have a clean CentOS\* 7.8.2003 operating system installed on hosts (from a minimal ISO image) that will be later deployed from Ansible scripts. This OS shall not have any user customizations.
-
-To override the default value, place the variable's name and new value in the host's vars file. For example, the contents of `host_vars/node01.yml` that would result in skipping kernel customization on that node:
-
-```yaml
-kernel_skip: true
-```
-
-The following are several common customization scenarios.
-
-### IP address range allocation for various CNIs and interfaces
-
-The OpenNESS Experience kits deployment uses/allocates/reserves a set of IP address ranges for different CNIs and interfaces. The server or host IP address should not conflict with the default address allocation.
-In case if there is a critical need for the server IP address used by the OpenNESS default deployment, it would require to modify the default addresses used by the OpenNESS.
-
-Following files specify the CIDR for CNIs and interfaces. These are the IP address ranges allocated and used by default just for reference.
-
-```yaml
-flavors/media-analytics-vca/all.yml:19:vca_cidr: "172.32.1.0/12"
-group_vars/all/10-open.yml:90:calico_cidr: "10.243.0.0/16"
-group_vars/all/10-open.yml:93:flannel_cidr: "10.244.0.0/16"
-group_vars/all/10-open.yml:96:weavenet_cidr: "10.32.0.0/12"
-group_vars/all/10-open.yml:99:kubeovn_cidr: "10.16.0.0/16,100.64.0.0/16,10.96.0.0/12"
-roles/kubernetes/cni/kubeovn/controlplane/templates/crd_local.yml.j2:13: cidrBlock: "192.168.{{ loop.index0 + 1 }}.0/24"
-```
-
-The 192.168.x.y is used for SRIOV and interface service IP address allocation in Kube-ovn CNI. So it is not allowed for the server IP address, which conflicting with this range.
-Completely avoid the range of address defined as per the netmask as it may conflict in routing rules.
-
-Eg. If the server/host IP address is required to use 192.168.x.y while this range by default used for SRIOV interfaces in OpenNESS. The IP address range for cidrBlock in roles/kubernetes/cni/kubeovn/controlplane/templates/crd_local.yml.j2 file can be changed to 192.167.{{ loop.index0 + 1 }}.0/24 to use some other IP segment for SRIOV interfaces.
-
-
-### Default values
-Here are several default values:
-
-```yaml
-# --- machine_setup/custom_kernel
-kernel_skip: false # use this variable to disable custom kernel installation for host
-
-kernel_repo_url: http://linuxsoft.cern.ch/cern/centos/7.8.2003/rt/CentOS-RT.repo
-kernel_repo_key: http://linuxsoft.cern.ch/cern/centos/7.8.2003/os/x86_64/RPM-GPG-KEY-cern
-kernel_package: kernel-rt-kvm
-kernel_devel_package: kernel-rt-devel
-kernel_version: 3.10.0-1127.19.1.rt56.1116.el7.x86_64
-
-kernel_dependencies_urls: []
-kernel_dependencies_packages: []
-
-
-# --- machine_setup/grub
-hugepage_size: "2M" # Or 1G
-hugepage_amount: "5000"
-
-default_grub_params: "hugepagesz={{ hugepage_size }} hugepages={{ hugepage_amount }} intel_iommu=on iommu=pt"
-additional_grub_params: ""
-
-
-# --- machine_setup/configure_tuned
-tuned_skip: false # use this variable to skip tuned profile configuration for host
-tuned_packages:
-- tuned-2.11.0-8.el7
-- http://linuxsoft.cern.ch/scientific/7.8/x86_64/os/Packages/tuned-profiles-realtime-2.11.0-8.el7.noarch.rpm
-tuned_profile: realtime
-tuned_vars: |
- isolated_cores=2-3
- nohz=on
- nohz_full=2-3
-```
-
-### Use different realtime kernel (3.10.0-1062)
-By default, `kernel-rt-kvm-3.10.0-1127.19.1.rt56.1116.el7.x86_64` from buil-in repository is installed.
-
-To use another version (e.g., `kernel-rt-kvm-3.10.0-1062.9.1.rt56.1033.el7.x86_64`), create a `host_var` file for the host with content:
-```yaml
-kernel_version: 3.10.0-1062.9.1.rt56.1033.el7.x86_64
-```
-
-### Use different non-rt kernel (3.10.0-1062)
-The OEK installs a real-time kernel by default. However, the non-rt kernel is present in the official CentOS repository. Therefore, to use a different non-rt kernel, the following overrides must be applied:
-```yaml
-kernel_repo_url: "" # package is in default repository, no need to add new repository
-kernel_package: kernel # instead of kernel-rt-kvm
-kernel_devel_package: kernel-devel # instead of kernel-rt-devel
-kernel_version: 3.10.0-1062.el7.x86_64
-
-dpdk_kernel_devel: "" # kernel-devel is in the repository, no need for url with RPM
-
-# Since, we're not using rt kernel, we don't need a tuned-profiles-realtime but want to keep the tuned 2.11
-tuned_packages:
-- http://linuxsoft.cern.ch/scientific/7x/x86_64/os/Packages/tuned-2.11.0-8.el7.noarch.rpm
-tuned_profile: balanced
-tuned_vars: ""
-```
-
-### Use tuned 2.9
-```yaml
-tuned_packages:
-- tuned-2.9.0-1.el7fdp
-- tuned-profiles-realtime-2.9.0-1.el7fdp
-```
-
-### Default kernel and configure tuned
-```yaml
-kernel_skip: true # skip kernel customization altogether
-
-# update tuned to 2.11 but don't install tuned-profiles-realtime since we're not using rt kernel
-tuned_packages:
-- http://linuxsoft.cern.ch/scientific/7x/x86_64/os/Packages/tuned-2.11.0-8.el7.noarch.rpm
-tuned_profile: balanced
-tuned_vars: ""
-```
-
-### Change amount of HugePages
-```yaml
-hugepage_amount: "1000" # default is 5000
-```
-
-### Change size of HugePages
-```yaml
-hugepage_size: "1G" # default is 2M
-```
-
-### Change amount and size of HugePages
-```yaml
-hugepage_amount: "10" # default is 5000
-hugepage_size: "1G" # default is 2M
-```
-
-### Remove input output memory management unit (IOMMU) from grub params
-```yaml
-default_grub_params: "hugepagesz={{ hugepage_size }} hugepages={{ hugepage_amount }}"
-```
-
-### Add custom GRUB parameter
-```yaml
-additional_grub_params: "debug"
-```
-
-### Configure OVS-DPDK in kube-ovn
-By default, OVS-DPDK is enabled. To disable it, set a flag:
-```yaml
-kubeovn_dpdk: false
-```
-
->**NOTE**: This flag should be set in `roles/kubernetes/cni/kubeovn/common/defaults/main.ym` or added to `group_vars/all/10-default.yml`.
-
-Additionally, HugePages in the OVS pod can be adjusted once default HugePage settings are changed.
-```yaml
-kubeovn_dpdk_socket_mem: "1024,0" # Amount of hugepages reserved for OVS per NUMA node (node 0, node 1, ...) in MB
-kubeovn_dpdk_hugepage_size: "2Mi" # Default size of hugepages, can be 2Mi or 1Gi
-kubeovn_dpdk_hugepages: "1Gi" # Total amount of hugepages that can be used by OVS-OVN pod
-```
-
-> **NOTE**: If the machine has multiple NUMA nodes, remember that HugePages must be allocated for **each NUMA node**. For example, if a machine has two NUMA nodes, `kubeovn_dpdk_socket_mem: "1024,1024"` or similar should be specified.
-
->**NOTE**: If `kubeovn_dpdk_socket_mem` is changed, set the value of `kubeovn_dpdk_hugepages` to be equal to or greater than the sum of `kubeovn_dpdk_socket_mem` values. For example, for `kubeovn_dpdk_socket_mem: "1024,1024"`, set `kubeovn_dpdk_hugepages` to at least `2Gi` (equal to 2048 MB).
-
->**NOTE**: `kubeovn_dpdk_socket_mem`, `kubeovn_dpdk_pmd_cpu_mask`, and `kubeovn_dpdk_lcore_mask` can be set on per node basis but the HugePage amount allocated with `kubeovn_dpdk_socket_mem` cannot be greater than `kubeovn_dpdk_hugepages`, which is the same for the whole cluster.
-
-OVS pods limits are configured by:
-```yaml
-kubeovn_dpdk_resources_requests: "1Gi" # OVS-OVN pod RAM memory (requested)
-kubeovn_dpdk_resources_limits: "1Gi" # OVS-OVN pod RAM memory (limit)
-```
-CPU settings can be configured using:
-```yaml
-kubeovn_dpdk_pmd_cpu_mask: "0x4" # DPDK PMD CPU mask
-kubeovn_dpdk_lcore_mask: "0x2" # DPDK lcore mask
-```
-
-## Adding new CNI plugins for Kubernetes (Network Edge)
-
-* The role that handles CNI deployment must be placed in the `roles/kubernetes/cni/` directory (e.g., `roles/kubernetes/cni/kube-ovn/`).
-* Subroles for control plane and node (if needed) should be placed in the `controlplane/` and `node/` directories (e.g., `roles/kubernetes/cni/kube-ovn/{controlplane,node}`).
-* If there is a part of common command for both control plane and node, additional sub-roles can be created: `common` (e.g., `roles/kubernetes/cni/sriov/common`).
->**NOTE**: The automatic inclusion of the `common` role should be handled by Ansible mechanisms (e.g., usage of meta's `dependencies` or `include_role` module)
-* Name of the main role must be added to the `available_kubernetes_cnis` variable in `roles/kubernetes/cni/defaults/main.yml`.
-* If additional requirements must checked before running the playbook (to not have errors during execution), they can be placed in the `roles/kubernetes/cni/tasks/precheck.yml` file, which is included as a pre_task in plays for both Edge Controller and Edge Node.
-The following are basic prechecks that are currently executed:
- * Check if any CNI is requested (i.e., `kubernetes_cni` is not empty).
- * Check if `sriov` is not requested as primary (first on the list) or standalone (only on the list).
- * Check if `kubeovn` is requested as a primary (first on the list).
- * Check if the requested CNI is available (check if some CNI is requested that isn't present in the `available_kubernetes_cnis` list).
-* CNI roles should be as self-contained as possible (unless necessary, CNI-specific tasks should not be present in `kubernetes/{controlplane,node,common}` or `openness/network_edge/{controlplane,node}`).
-* If the CNI needs a custom OpenNESS service (e.g., Interface Service in case of `kube-ovn`), it can be added to the `openness/network_edge/{controlplane,node}`.
- Preferably, such tasks would be contained in a separate task file (e.g., `roles/openness/controlplane/tasks/kube-ovn.yml`) and executed only if the CNI is requested. For example:
- ```yaml
- - name: deploy interface service for kube-ovn
- include_tasks: kube-ovn.yml
- when: "'kubeovn' in kubernetes_cnis"
- ```
-* If the CNI is used as an additional CNI (with Multus\*), the network attachment definition must be supplied ([refer to Multus docs for more info](https://github.com/intel/multus-cni/blob/master/doc/quickstart.md#storing-a-configuration-as-a-custom-resource)).
diff --git a/doc/orchestration/openness-helm.md b/doc/orchestration/openness-helm.md
index 393cf68d..91a59c8d 100644
--- a/doc/orchestration/openness-helm.md
+++ b/doc/orchestration/openness-helm.md
@@ -12,7 +12,7 @@ Copyright (c) 2020 Intel Corporation
- [References](#references)
## Introduction
-Helm is a package manager for Kubernetes\*. It allows developers and operators to easily package, configure, and deploy applications and services onto Kubernetes clusters. For details refer to the [Helm Website](https://helm.sh). With OpenNESS, Helm is used to extend the [OpenNESS Experience Kits](https://github.com/open-ness/openness-experience-kits) Ansible\* playbooks to deploy Kubernetes packages. Helm adds considerable flexibility. It enables users to upgrade an existing installation without requiring a re-install. It provides the option to selectively deploy individual microservices if a full installation of OpenNESS is not needed. And it provides a standard process to deploy different applications or network functions. This document aims to familiarize the user with Helm and provide instructions on how to use the specific Helm charts available for OpenNESS.
+Helm is a package manager for Kubernetes\*. It allows developers and operators to easily package, configure, and deploy applications and services onto Kubernetes clusters. For details refer to the [Helm Website](https://helm.sh). With OpenNESS, Helm is used to extend the [Converged Edge Experience Kits](https://github.com/open-ness/converged-edge-experience-kits) Ansible\* playbooks to deploy Kubernetes packages. Helm adds considerable flexibility. It enables users to upgrade an existing installation without requiring a re-install. It provides the option to selectively deploy individual microservices if a full installation of OpenNESS is not needed. And it provides a standard process to deploy different applications or network functions. This document aims to familiarize the user with Helm and provide instructions on how to use the specific Helm charts available for OpenNESS.
## Architecture
The below figure shows the architecture for the OpenNESS Helm in this document.
@@ -22,7 +22,7 @@ _Figure - Helm Architecture in OpenNESS_
## Helm Installation
-Helm 3 is used for OpenNESS. The installation is automatically conducted by the [OpenNESS Experience Kits](https://github.com/open-ness/openness-experience-kits) Ansible playbooks as below:
+Helm 3 is used for OpenNESS. The installation is automatically conducted by the [Converged Edge Experience Kits](https://github.com/open-ness/converged-edge-experience-kits) Ansible playbooks as below:
```yaml
- role: kubernetes/helm
```
@@ -39,7 +39,7 @@ OpenNESS provides the following helm charts:
- CNI plugins including Multus\* and SRIOV CNI
- Video analytics service
- 5G control plane pods including AF, NEF, OAM, and CNTF
-> **NOTE**: NFD, CMK, Prometheus, NodeExporter, and Grafana leverage existing third-party helm charts: [Container Experience Kits](https://github.com/intel/container-experience-kits) and [Helm GitHub\* Repo](https://github.com/helm/charts). For other helm charts, [OpenNESS Experience Kits](https://github.com/open-ness/openness-experience-kits) Ansible playbooks perform automatic charts generation and deployment.
+> **NOTE**: NFD, CMK, Prometheus, NodeExporter, and Grafana leverage existing third-party helm charts: [Container Experience Kits](https://github.com/intel/container-experience-kits) and [Helm GitHub\* Repo](https://github.com/helm/charts). For other helm charts, [Converged Edge Experience Kits](https://github.com/open-ness/converged-edge-experience-kits) Ansible playbooks perform automatic charts generation and deployment.
- Sample applications, network functions, and services that can be deployed and verified on the OpenNESS platform:
- Applications
@@ -49,11 +49,11 @@ OpenNESS provides the following helm charts:
- [Telemetry Sample Application Helm Charts](https://github.com/open-ness/edgeapps/tree/master/applications/telemetry-sample-app)
- [EIS Sample Application Helm Charts](https://github.com/open-ness/edgeapps/tree/master/applications/eis-experience-kit)
- Network Functions
- - [FlexRAN Helm Charts](https://github.com/open-ness/edgeapps/tree/master/network-functions/ran/charts/flexran)
+ - [FlexRAN Helm Charts](https://github.com/open-ness/edgeapps/tree/master/network-functions/ran/charts/du-dev)
- [xRAN Helm Charts](https://github.com/open-ness/edgeapps/tree/master/network-functions/xran/helmcharts/xranchart)
- [UPF Helm Charts](https://github.com/open-ness/edgeapps/tree/master/network-functions/core-network/charts/upf)
-The EPA, Telemetry, and k8s plugins helm chart files will be saved in a specific directory on the OpenNESS controller. To modify the directory, change the following variable `ne_helm_charts_default_dir` in the `group_vars/all/10-default.yml` file:
+The EPA, Telemetry, and k8s plugins helm chart files will be saved in a specific directory on the OpenNESS controller. To modify the directory, change the following variable `ne_helm_charts_default_dir` in the `inventory/default/group_vars/all/10-open.yml` file:
```yaml
ne_helm_charts_default_dir: /opt/openness/helm-charts/
```
diff --git a/doc/reference-architectures/CERA-5G-On-Prem.md b/doc/reference-architectures/CERA-5G-On-Prem.md
index 399b52a8..6ebbbb43 100644
--- a/doc/reference-architectures/CERA-5G-On-Prem.md
+++ b/doc/reference-architectures/CERA-5G-On-Prem.md
@@ -1,6 +1,6 @@
```text
SPDX-License-Identifier: Apache-2.0
-Copyright (c) 2020 Intel Corporation
+Copyright (c) 2020-2021 Intel Corporation
```
# Converged Edge Reference Architecture 5G On Premises Edge
@@ -11,7 +11,7 @@ The Converged Edge Reference Architectures (CERA) are a set of pre-integrated HW
- [CERA 5G On Prem OpenNESS Configuration](#cera-5g-on-prem-openness-configuration)
- [CERA 5G On Prem Deployment Architecture](#cera-5g-on-prem-deployment-architecture)
- [CERA 5G On Prem Experience Kit Deployments](#cera-5g-on-prem-experience-kit-deployments)
- - [Edge Service Applications Supported on CERA 5G On Prem](#edge-service-applications-supported-on-cera-5g-on-prem)
+ - [Edge Service Applications Supported by CERA 5G On Prem](#edge-service-applications-supported-by-cera-5g-on-prem)
- [OpenVINO™](#openvino)
- [Edge Insights Software](#edge-insights-software)
- [CERA 5G On Prem Hardware Platform](#cera-5g-on-prem-hardware-platform)
@@ -21,7 +21,6 @@ The Converged Edge Reference Architectures (CERA) are a set of pre-integrated HW
- [BIOS Setup](#bios-setup)
- [Setting up Machine with Ansible](#setting-up-machine-with-ansible)
- [Steps to be performed on the machine, where the Ansible playbook is going to be run](#steps-to-be-performed-on-the-machine-where-the-ansible-playbook-is-going-to-be-run)
- - [CERA 5G On Premise Experience Kit Deployment](#cera-5g-on-premise-experience-kit-deployment)
- [5G Core Components](#5g-core-components)
- [dUPF](#dupf)
- [Overview](#overview)
@@ -231,26 +230,59 @@ The BIOS settings on the edge node must be properly set in order for the OpenNES
git submodule update --init --recursive
```
-4. Provide target machines IP addresses for OpenNESS deployment in `ido-converged-edge-experience-kits/openness_inventory.ini`. For Singlenode setup, set the same IP address for both `controller` and `node01`, the line with `node02` should be commented by adding # at the beginning.
-Example:
- ```ini
- [all]
- controller ansible_ssh_user=root ansible_host=192.168.1.43 # First server NE
- node01 ansible_ssh_user=root ansible_host=192.168.1.43 # First server NE
- ; node02 ansible_ssh_user=root ansible_host=192.168.1.12
- ```
- At that stage provide IP address only for `CERA 5G NE` server.
-
- If the GMC device is available, the node server can be synchronized. In the `ido-converged-edge-experience-kits/openness_inventory.ini`, `node01` should be added to `ptp_slave_group`. The default value `controller` for `[ptp_master]` should be removed or commented.
- ```ini
- [ptp_master]
- #controller
-
- [ptp_slave_group]
- node01
+4. Provide the target machines' IP addresses for the CEEK deployment in `ido-converged-edge-experience-kits/inventory.yml`. For a single-node setup, set the same IP address for both `controller` and `node01`. In the same file, define the details for the Central Office cluster deployment.
+ Example:
+ ```yaml
+ all:
+ vars:
+ cluster_name: on_premises_cluster
+ flavor: cera_5g_on_premise
+ single_node_deployment: true
+ limit:
+ controller_group:
+ hosts:
+ controller:
+ ansible_host: 172.16.0.1
+ ansible_user: root
+ edgenode_group:
+ hosts:
+ node01:
+ ansible_host: 172.16.0.1
+ ansible_user: root
+ edgenode_vca_group:
+ hosts:
+ ptp_master:
+ hosts:
+ ptp_slave_group:
+ hosts:
+
+ ---
+ all:
+ vars:
+ cluster_name: central_office_cluster
+ flavor: cera_5g_central_office
+ single_node_deployment: true
+ limit:
+ controller_group:
+ hosts:
+ co_controller:
+ ansible_host: 172.16.1.1
+ ansible_user: root
+ edgenode_group:
+ hosts:
+ co_node1:
+ ansible_host: 172.16.1.1
+ ansible_user: root
+ edgenode_vca_group:
+ hosts:
+ ptp_master:
+ hosts:
+ ptp_slave_group:
+ hosts:
+
```
-5. Edit `ido-converged-edge-experience-kits/openness/group_vars/all/10-open.yml` and provide some correct settings for deployment.
+5. Edit `ido-converged-edge-experience-kits/inventory/default/group_vars/all/10-open.yml` and provide the correct settings for the deployment.
Git token.
```yaml
@@ -280,213 +312,115 @@ Example:
ntp_servers: ['ntp.server.com']
```
-6. Edit file `ido-converged-edge-experience-kits/openness/flavors/cera_5g_on_premise/edgenode_group.yml` and provide correct CPU settings.
+6. Edit the file `ido-converged-edge-experience-kits/flavors/cera_5g_on_premise/all.yml` and provide the On Premise deployment configuration.
+   Choose the Edge Application that will be deployed:
```yaml
- tuned_vars: |
- isolated_cores=2-23,26-47
- nohz=on
- nohz_full=2-23,26-47
-
- # CPUs to be isolated (for RT procesess)
- cpu_isol: "2-23,26-47"
- # CPUs not to be isolate (for non-RT processes) - minimum of two OS cores necessary for controller
- cpu_os: "0-1,24-25"
+ # Choose which demo will be launched: `eis` or `openvino`
+ # To do not deploy any demo app, refer to `edgeapps_deployment_enable` variable
+ deploy_demo_app: "openvino"
```
- If a GMC is connected to the setup, then node server synchronization can be enabled inside ido-converged-edge-experience-kits/openness/flavors/cera_5g_on_premise/edgenode_group.yml file.
+ If OpenVINO was chosen as Edge Application, set the options:
```yaml
- ptp_sync_enable: true
+ model: "pedestrian-detection-adas-0002" # model name used by demo application
+ display_host_ip: "" # update ip for visualizer HOST GUI.
+ save_video: "enable" # enable saving the output video to file
+ target_device: "CPU" # device which will be used for video processing, currently only CPU is supported
```
-7. Edit file `ido-converged-edge-experience-kits/openness/flavors/cera_5g_on_premise/controller_group.yml` and provide names of `network interfaces` that are connected to second server and number of VF's to be created.
-
+ Set interface name for fronthaul connection:
```yaml
- sriov:
- network_interfaces: {eno1: 5, eno2: 10}
+ ## Interface logical name (PF) used for fronthaul
+ fronthaul_if_name: "enp184s0f0"
```
-8. Edit file `ido-converged-edge-experience-kits/openness/x-oek/oek/host_vars/node01.yml` if a GMC is connected and the node server should be synchronized.
-
- For single node setup (this is the default mode for CERA), `ptp_port` keeps the host's interface connected to Grand Master, e.g.:
+ Set interface name for connection to UPF and AMF-SMF:
```yaml
- ptp_port: "eno3"
+ # PF interface name of N3, N4, N6, N9 created VFs
+ host_if_name_cn: "eno1"
```
- Variable `ptp_network_transport` keeps network transport for ptp. Choose `"-4"` for default CERA setup. The `gm_ip` variable should contain the GMC's IP address. The Ansible scripts set the IP on the interface connected to the GMC, according to the values in the variables `ptp_port_ip` and `ptp_port_cidr`.
- ```yaml
- # Valid options:
- # -2 Select the IEEE 802.3 network transport.
- # -4 Select the UDP IPv4 network transport.
- ptp_network_transport: "-4"
-
- # Grand Master IP, e.g.:
- # gm_ip: "169.254.99.9"
- gm_ip: "169.254.99.9"
-
- # - ptp_port_ip contains a static IP for the server port connected to GMC, e.g.:
- # ptp_port_ip: "169.254.99.175"
- # - ptp_port_cidr - CIDR for IP from, e.g.:
- # ptp_port_cidr: "24"
- ptp_port_ip: "169.254.99.175"
- ptp_port_cidr: "24"
- ```
+7. Edit file `ido-converged-edge-experience-kits/flavors/cera_5g_on_premise/controller_group.yml`
-9. Execute the `deploy_openness_for_cera.sh` script in `ido-converged-edge-experience-kits` to start OpenNESS platform deployment process by running the following command:
- ```shell
- ./deploy_openness_for_cera.sh cera_5g_on_premise
- ```
- Note: This might take few hours.
-
-10. After a successful OpenNESS deployment, edit again `ido-converged-edge-experience-kits/openness_inventory.ini`, change IP address to `CERA 5G CN` server.
- ```ini
- [all]
- controller ansible_ssh_user=root ansible_host=192.168.1.109 # Second server CN
- node01 ansible_ssh_user=root ansible_host=192.168.1.109 # Second server CN
- ; node02 ansible_ssh_user=root ansible_host=192.168.1.12
- ```
- Then run `deploy_openness_for_cera.sh` again.
- ```shell
- ./deploy_openness_for_cera.sh
- ```
- All settings in `ido-converged-edge-experience-kits/openness/group_vars/all/10-open.yml` are the same for both servers.
-
- For `CERA 5G CN` server disable synchronization with GMC inside `ido-converged-edge-experience-kits/openness/flavors/cera_5g_on_premise/edgenode_group.yml` file.
+   Provide the names of the `network interfaces` that are connected to the Central Office cluster and the number of VFs to be created.
```yaml
- ptp_sync_enable: false
- ```
-
-11. When both servers have deployed OpenNess, login to `CERA 5G CN` server and generate `RSA ssh key`. It's required for AMF/SMF VM deployment.
- ```shell
- ssh-keygen -t rsa
- # Press enter key to apply default values
+ sriov:
+ network_interfaces: {eno1: 5, enp184s0f0: 10}
```
-12. The full setup is now ready for CERA deployment.
-
-### CERA 5G On Premise Experience Kit Deployment
-The following prerequisites should be met for CERA deployment.
-1. CentOS should use the following kernel and have no newer kernels installed:
- * `3.10.0-1127.19.1.rt56.1116.el7.x86_64` on Near Edge server.
- * `3.10.0-1127.el7.x86_64` on Core Network server.
+8. Edit file `ido-converged-edge-experience-kits/flavors/cera_5g_on_premise/edgenode_group.yml`
-2. Edit file `ido-converged-edge-experience-kits/cera_config.yaml` and provide correct settings:
-
- Git token
- ```yaml
- git_repo_token: "your git token"
- ```
- Decide which demo application should be launched
- ```yaml
- # choose which demo will be launched: `eis` or `openvino`
- deploy_app: "eis"
- ```
- EIS release package location
+   Set up CPU isolation according to your hardware:
```yaml
- # provide EIS release package archive absolute path
- eis_release_package_path: ""
+ # Variables applied with the profile
+ tuned_vars: |
+ isolated_cores=1-16,25-40
+ nohz=on
+ nohz_full=1-16,25-40
+
+ # CPUs to be isolated (for RT procesess)
+ cpu_isol: "1-16,25-40"
+ # CPUs not to be isolate (for non-RT processes) - minimum of two OS cores necessary for controller
+ cpu_os: "0,17-23,24,41-47"
```
- [OpenVino](#OpenVINO) settings, if OpenVino app was set as active demo application
+
+ Set up hugepages settings
```yaml
- display_host_ip: "" # update ip for visualizer HOST GUI.
- save_video: "enable"
+ # Size of a single hugepage (2M or 1G)
+ hugepage_size: "1G"
+ # Amount of hugepages
+ hugepage_amount: "40"
```
- Proxy settings
- ```yaml
- # Setup proxy on the machine - required if the Internet is accessible via proxy
- proxy_os_enable: true
- # Clear previous proxy settings
- proxy_os_remove_old: true
- # Proxy URLs to be used for HTTP, HTTPS and FTP
- proxy_os_http: "http://proxy.example.org:3129"
- proxy_os_https: "http://proxy.example.org:3128"
- proxy_os_ftp: "http://proxy.example.org:3128"
- proxy_os_noproxy: "127.0.0.1,localhost,192.168.1.0/24"
- # Proxy to be used by YUM (/etc/yum.conf)
- proxy_yum_url: "{{ proxy_os_http }}"
- ```
- See [more details](#dUPF) for dUPF configuration
+
+9. Set all necessary settings for `CERA 5G CO` in `ido-converged-edge-experience-kits/flavors/cera_5g_central_office/all.yml`.
+
```yaml
- # Define PCI addresses (xxxx:xx:xx.x format) for i-upf
- n3_pci_bus_address: "0000:19:0a.0"
- n4_n9_pci_bus_address: "0000:19:0a.1"
- n6_pci_bus_address: "0000:19:0a.2"
-
- # Define VPP VF interface names for i-upf
- n3_vf_interface_name: "VirtualFunctionEthernet19/a/0"
- n4_n9_vf_interface_name: "VirtualFunctionEthernet19/a/1"
- n6_vf_interface_name: "VirtualFunctionEthernet19/a/2"
-
- # PF interface name of N3 created VF
- host_if_name_N3: "eno2"
# PF interface name of N4, N6, N9 created VFs
- host_if_name_N4_N6_n9: "eno2"
+ host_if_name_cn: "eno1"
```
- [gNodeB](#gNodeB) configuration
- ```yaml
- ## gNodeB related config
- gnodeb_fronthaul_vf1: "0000:65:02.0"
- gnodeb_fronthaul_vf2: "0000:65:02.1"
- gnodeb_fronthaul_vf1_mac: "ac:1f:6b:c2:48:ad"
- gnodeb_fronthaul_vf2_mac: "ac:1f:6b:c2:48:ab"
+10. Edit file `ido-converged-edge-experience-kits/flavors/cera_5g_central_office/controller_group.yml`
- n2_gnodeb_pci_bus_address: "0000:19:0a.3"
- n3_gnodeb_pci_bus_address: "0000:19:0a.4"
-
- fec_vf_pci_addr: "0000:b8:00.1"
-
- # DPDK driver used (vfio-pci/igb_uio) to VFs bindings
- dpdk_driver_gnodeb: "igb_uio"
-
- ## ConfigMap vars
-
- fronthaul_if_name: "enp101s0f0"
- ```
- Settings for `CERA 5G CN`
+   Provide the names of the `network interfaces` that are connected to the Near Edge cluster and the number of VFs to be created.
```yaml
- ## PSA-UPF vars
-
- # Define N4/N9 and N6 interface device PCI bus address
- PCI_bus_address_N4_N9: '0000:19:0a.0'
- PCI_bus_address_N6: '0000:19:0a.1'
+ sriov:
+ network_interfaces: {eno1: 5}
+ ```
- # 5gc binaries directory name
- package_5gc_path: "/opt/amf-smf/"
+11. Edit file `ido-converged-edge-experience-kits/flavors/cera_5g_central_office/edgenode_group.yml`
- # vpp interface name as per setup connection
- vpp_interface_N4_N9_name: 'VirtualFunctionEthernet19/a/0'
- vpp_interface_N6_name: 'VirtualFunctionEthernet19/a/1'
- ```
-3. If needed change additional settings for `CERA 5G NE` in `ido-converged-edge-experience-kits/host_vars/cera_5g_ne.yml`.
+   Set up CPU isolation according to your hardware capabilities:
```yaml
- # DPDK driver used (vfio-pci/igb_uio) to VFs bindings
- dpdk_driver_upf: "igb_uio"
+ # Variables applied with the profile
+ tuned_vars: |
+ isolated_cores=1-16,25-40
+ nohz=on
+ nohz_full=1-16,25-40
- # Define path where i-upf is located on remote host
- upf_binaries_path: "/opt/flexcore-5g-rel/i-upf/"
+ # CPUs to be isolated (for RT procesess)
+ cpu_isol: "1-16,25-40"
+ # CPUs not to be isolate (for non-RT processes) - minimum of two OS cores necessary for controller
+ cpu_os: "0,17-23,24,41-47"
```
- OpenVino model
+
+ Set up hugepages settings
```yaml
- model: "pedestrian-detection-adas-0002"
- ```
-4. Build the following docker images required and provide necessary binaries.
- - [dUPF](#dUPF)
- - [UPF](#UPF)
- - [AMF-SMF](#AMF-SMF)
- - [gNB](#gNodeB)
-5. Provide correct IP for target servers in file `ido-converged-edge-experience-kits/cera_inventory.ini`
- ```ini
- [all]
- cera_5g_ne ansible_ssh_user=root ansible_host=192.168.1.109
- cera_5g_cn ansible_ssh_user=root ansible_host=192.168.1.43
- ```
-6. Deploy CERA Experience Kit
+ # Size of a single hugepage (2M or 1G)
+ hugepage_size: "1G"
+ # Amount of hugepages
+ hugepage_amount: "8"
+   ```
+
+12. Deploy Converged Edge Experience Kit (Near Edge and Central Office clusters simultaneously)
+
+ Silent deployment:
```shell
- ./deploy_cera.sh
+ python ./deploy.py
```
+   > NOTE: In a multicluster deployment, logs are hidden by default. To check the logs, the `tail` tool can be used on the deployment log files.
+
## 5G Core Components
This section describes in details how to build particular images and configure ansible for deployment.
@@ -502,18 +436,20 @@ The `CERA dUPF` component is deployed on `CERA 5G Near Edge (cera_5g_ne)` node.
##### Prerequisites
-To deploy dUPF correctly, one needs to provide Docker image to Docker repository on the target node. There is a script on the `open-ness/eddgeapps/network-functions/core-network/5G/UPF` repo provided by CERA, which builds the image automatically.
+To deploy dUPF correctly, one needs to provide a Docker image to the Docker repository on the target node. There is a script in the `https://github.com/open-ness/edgeapps/tree/master/network-functions/core-network/5G/UPF` repo provided by CERA, which builds the image automatically.
+
+```sh
+./build_image.sh -b i-upf -i i-upf
+```
##### Settings
-The following variables need to be defined in `cera_config.yaml`
+The following variables need to be defined in `ido-converged-edge-experience-kits/flavors/cera_5g_on_premise/all.yml`
```yaml
-n3_pci_bus_address: "" - PCI bus address of VF, which is used for N3 interface by dUPF
-n4_n9_pci_bus_address: "" - PCI bus address of VF, which is used for N4 and N9 interface by dUPF
-n6_pci_bus_address: "" - PCI bus address of VF, which is used for N6 interface by dUPF
+## Interface logical name (PF) used for fronthaul
+fronthaul_if_name: "enp184s0f0"
-n3_vf_interface_name: "" - name of VF, which is used for N3 interface by dUPF
-n4_n9_vf_interface_name: "" - name of VF, which is used for N4 and N9 interface by dUPF
-n6_vf_interface_name: "" - name of VF, which is used for N6 interface by dUPF
+# PF interface name of N3, N4, N6, N9 created VFs
+host_if_name_cn: "eno1"
```
##### Configuration
@@ -524,22 +460,23 @@ The dUPF is configured automatically during the deployment.
The `User Plane Function (UPF)` is a part of 5G Core Network, it is responsible for packets routing. It has 2 separate interfaces for `N4/N9` and `N6` data lines. `N4/N9` interface is used for connection with `dUPF` and `AMF/SMF` (locally). `N6` interface is used for connection with `EDGE-APP`, `dUPF` and `Remote-DN` (locally).
-The CERA UPF component is deployed on `CERA 5G Core Network (cera_5g_cn)` node. It is deployed as a POD - during deployment of CERA 5G On Prem automatically.
+The CERA UPF component is deployed on the `CERA 5G Central Office` node. It is deployed automatically as a pod during the deployment of the CERA 5G Central Office flavor.
#### Deployment
##### Prerequisites
-To deploy `UPF` correctly one needs to provide a Docker image to Docker Repository on target nodes. There is a script on the `open-ness/eddgeapps/network-functions/core-network/5G/UPF` repo provided by CERA, which builds the image automatically.
+To deploy `UPF` correctly, one needs to provide a Docker image to the Docker repository on the target nodes. There is a script in the `https://github.com/open-ness/edgeapps/tree/master/network-functions/core-network/5G/UPF` repo provided by CERA, which builds the image automatically.
+
+```sh
+./build_image.sh -b psa-upf -i psa-upf
+```
##### Settings
-The following variables need to be defined in the `cera_config.yaml`
-```yaml
-PCI_bus_address_N4_N9: "" - PCI bus address of VF, which is used for N4 and N9 interface by UPF
-PCI_bus_address_N6: "" - PCI bus address of VF, which is used for N6 interface by UPF
+Update the interface name used for the connection to the Near Edge cluster (dUPF) in the file `ido-converged-edge-experience-kits/flavors/cera_5g_central_office/all.yml`.
-vpp_interface_N4_N9_name: "" - name of VF, which is used for N4 and N9 interface by UPF
-vpp_interface_N6_name: "" - name of VF, which is used for N6 interface by UPF
+```yaml
+host_if_name_cn: "eno1"
```
##### Configuration
@@ -556,21 +493,21 @@ The CERA `AMF-SMF` component is deployed on `CERA 5G Core Network (cera_5g_cn)`
#### Deployment
##### Prerequisites
-To deploy `AMF-SMF` correctly, one needs to provide a Docker image to Docker Repository on target machine(cera_5g_cn). There is a script on the `open-ness/eddgeapps/network-functions/core-network/5G/AMF-SMF` repository provided by CERA, which builds the image automatically.
-
-##### Settings
+To deploy `AMF-SMF` correctly, one needs to provide a Docker image to the Docker repository on the target machine (cera_5g_co). There is a script in the `https://github.com/open-ness/edgeapps/tree/master/network-functions/core-network/5G/AMF_SMF` repository provided by CERA, which builds the image automatically.
-Following variables need to be defined in `cera_config.yaml`
-```yaml
-# 5gc binaries directory name
-package_5gc_path: "/opt/amf-smf/"
+```sh
+./build_image.sh -b amf-smf
```
+##### Settings
+No special settings are required for AMF-SMF deployment.
+
##### Configuration
The `AMF-SMF` is configured automatically during the deployment.
### Remote-DN
+
#### Overview
Remote Data Network is component, which represents `“internet”` in networking. CERA Core Network manages which data should apply to `Near Edge Application(EIS/OpenVINO)` or go further to the network.
@@ -590,12 +527,12 @@ Deployment of Local-DN is completely automated, so there is no need to set or co
### OpenVINO
#### Settings
-In the `cera_config.yaml` file can be chosen for which application should be built and deployed. Set a proper value for the deploy_app variable.
+The application to be built and deployed can be chosen in the `ido-converged-edge-experience-kits/flavors/cera_5g_on_premise/all.yml` file. Set a proper value for the `deploy_demo_app` variable.
```yaml
-deploy_app: "" - Type openvino if OpenVINO demo should be launched.
+deploy_demo_app: "" - Type openvino if OpenVINO demo should be launched.
```
-Several variables must be set in the file `host_vars/cera_5g_ne.yml`:
+Several variables must be set in the file `ido-converged-edge-experience-kits/flavors/cera_5g_on_premise/all.yml`:
```yaml
model: "pedestrian-detection-adas-0002" - Model for which the OpenVINO demo will be run. Models which can be selected: pedestrian-detection-adas-0002, pedestrian-detection-adas-binary-0001, pedestrian-and-vehicle-detector-adas-0001, vehicle-detection-adas-0002, vehicle-detection-adas-binary-0001, person-vehicle-bike-detection-crossroad-0078, person-vehicle-bike-detection-crossroad-1016, person-reidentification-retail-0031, person-reidentification-retail-0248, person-reidentification-retail-0249, person-reidentification-retail-0300, road-segmentation-adas-0001
@@ -603,7 +540,7 @@ save_video: "enable" - For value "enable" the output will be written to /root/sa
```
#### Deployment
-After running the `deploy_cera.sh` script, pod ov-openvino should be available on `cera_5g_ne` machine. The status of the ov-openvino pod can be checked by use:
+After running the `deploy.py` script, the ov-openvino pod should be available on the Near Edge cluster. The status of the ov-openvino pod can be checked with:
```shell
kubectl -n openvino get pods -o wide
```
@@ -612,7 +549,7 @@ Immediately after creating, the ov-openvino pod will wait for input streaming. I
#### Streaming
Video to OpenVINO™ pod should be streamed to IP `192.168.1.101` and port `5000`. Make sure that the pod with OpenVINO™ is visible from your streaming machine. In the simplest case, the video can be streamed from the same machine where pod with OpenVINO™ is available.
-Output will be saved to the `saved_video/ov-output.mjpeg` file (`save_video` variable in the `host_vars/cera_5g_ne.yml` should be set to `"enable"` and should be not changed).
+The output will be saved to the `saved_video/ov-output.mjpeg` file (the `save_video` variable in the `ido-converged-edge-experience-kits/flavors/cera_5g_on_premise/all.yml` file should be set to `"enable"` and should not be changed).
Streaming is possible from a file or from a camera. For continuous and uninterrupted streaming of a video file, the video file can be streamed in a loop. An example of a Bash file for streaming is shown below.
```shell
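# Hypothetical sketch (assumption, not taken from the original document):
# stream a local video file in a loop to the OpenVINO pod as MJPEG over RTP.
# The file name and encoder options are placeholders; adjust them to match
# what the ov-openvino pod expects in a given setup.
while true; do
  ffmpeg -re -i ./test_video.mp4 -an -vcodec mjpeg -f rtp rtp://192.168.1.101:5000
done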
@@ -647,35 +584,19 @@ For more details about `eis-experience-kit` check [README.md](https://github.com
### gNodeB
#### Overview
-`gNodeB` is a part of 5G Core Architecture and is deployed on `CERA 5G Nere Edge (cera_5g_ne)` node.
+`gNodeB` is a part of the 5G Core Architecture and is deployed on the `CERA 5G On Premise (cera_5g_ne)` node.
#### Deployment
#### Prerequisites
-To deploy `gNodeB` correctly it is required to provide a Docker image to Docker Repository on target machine(cera_5g_ne). There is a script on the `open-ness/eddgeapps/network-functions/ran/5G/gnb` repository provided by CERA, which builds the image automatically. For `gNodeB` deployment FPGA card is required PAC N3000 and also QAT card.
+To deploy `gNodeB` correctly, it is required to provide a Docker image to the Docker repository on the target machine (cera_5g_ne). There is a script in the `open-ness/edgeapps/network-functions/ran/5G/gnb` repository provided by CERA, which builds the image automatically. For `gNodeB` deployment, an FPGA PAC N3000 card and a QAT card are required.
#### Settings
-The following variables need to be defined in `cera_config.yaml`
+The following variables need to be defined in `ido-converged-edge-experience-kits/flavors/cera_5g_on_premise/all.yml`
```yaml
-## gNodeB related config
-# Fronthaul require two VFs
-gnodeb_fronthaul_vf1: "0000:65:02.0" - PCI bus address of VF, which is used as fronthaul
-gnodeb_fronthaul_vf2: "0000:65:02.1" - PCI bus address of VF, which is used as fronthaul
-
-gnodeb_fronthaul_vf1_mac: "ac:1f:6b:c2:48:ad" - MAC address which will be set on the first VF during deployment
-gnodeb_fronthaul_vf2_mac: "ac:1f:6b:c2:48:ab" - MAC address which will be set on the second VF during deployment
-
-n2_gnodeb_pci_bus_address: "0000:19:0a.3" - PCI bus address of VF, which is used for N2 interface
-n3_gnodeb_pci_bus_address: "0000:19:0a.4" - PCI bus address of VF, which is used for N3 interface
-
-fec_vf_pci_addr: "0000:b8:00.1" - PCI bus address of VF, which is assigned to FEC PAC N3000 accelerator
-
-# DPDK driver used (vfio-pci/igb_uio) to VFs bindings
-dpdk_driver_gnodeb: "igb_uio" - driver for binding interfaces
-
-## ConfigMap vars
-fronthaul_if_name: "enp101s0f0" - name of fronthaul interface
+## Interface logical name (PF) used for fronthaul
+fronthaul_if_name: "enp101s0f0"
```
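A quick way to confirm that the fronthaul setting took effect is to inspect the SR-IOV VFs on the PF named above; this is a hedged sketch using the example interface `enp101s0f0`, not a required step.
```shell
# Hypothetical check (assumption): confirm VFs exist on the fronthaul PF
# configured in fronthaul_if_name.
cat /sys/class/net/enp101s0f0/device/sriov_numvfs
ip link show enp101s0f0
```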
#### Configuration
@@ -726,19 +647,20 @@ GMC must be properly configured and connected to the server's ETH port.
#### Settings
If the GMC has been properly configured and connected to the server then the node server can be synchronized.
-In the `ido-converged-edge-experience-kits/openness_inventory.ini` file, `node01` should be added to `ptp_slave_group` and the content inside the `ptp_master` should be empty or commented.
-```ini
-[ptp_master]
-#controller
+In the `ido-converged-edge-experience-kits/inventory.yml` file, `node01` should be added to `ptp_slave_group`, and the content of `ptp_master` should be empty or commented out.
+```yaml
+ptp_master:
+ hosts:
-[ptp_slave_group]
-node01
+ptp_slave_group:
+ hosts:
+ node01
```
-Server synchronization can be enabled inside `ido-converged-edge-experience-kits/openness/flavors/cera_5g_on_premise/edgenode_group.yml` file.
+Server synchronization can be enabled inside `ido-converged-edge-experience-kits/flavors/cera_5g_on_premise/edgenode_group.yml` file.
```yaml
ptp_sync_enable: true
```
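After the deployment completes, synchronization can be verified on the node; the command below is a hedged sketch assuming a standard linuxptp setup, not a step from the original procedure.
```shell
# Hypothetical verification (assumption): check that the PTP daemons are
# running on the synchronized node.
ps aux | grep -E 'ptp4l|phc2sys' | grep -v grep
```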
-Edit file `ido-converged-edge-experience-kits/openness/x-oek/oek/host_vars/node01.yml` if a GMC is connected and the node server should be synchronized.
+Edit file `ido-converged-edge-experience-kits/flavors/cera_5g_on_premise/edgenode_group.yml` if a GMC is connected and the node server should be synchronized.
For a single node setup (the default mode for CERA), `ptp_port` holds the name of the host's interface connected to the Grand Master, e.g.:
```yaml
@@ -806,6 +728,7 @@ CERA 5G On Premises deployment provides a reference implementation of how to use
| CERA | Converged Edge Reference Architecture |
| CN | Core Network |
| CNF | Container Network Function |
+| CO | Central Office |
| CommSPs | Communications Service Providers |
| DPDK | Data Plane Developer Kit |
| eNB | e-NodeB |
diff --git a/doc/reference-architectures/CERA-Near-Edge.md b/doc/reference-architectures/CERA-Near-Edge.md
index b422da83..f96097b8 100644
--- a/doc/reference-architectures/CERA-Near-Edge.md
+++ b/doc/reference-architectures/CERA-Near-Edge.md
@@ -23,10 +23,8 @@ Reference architecture combines wireless and high performance compute for IoT, A
- [Setting up target platform before deployment](#setting-up-target-platform-before-deployment)
- [BIOS Setup](#bios-setup)
- [Manual setup](#manual-setup)
- - [Setup through the CERA deployment](#setup-through-the-cera-deployment)
- [Setting up machine with Ansible](#setting-up-machine-with-ansible)
- [Steps to be performed on the machine, where the Ansible playbook is going to be run](#steps-to-be-performed-on-the-machine-where-the-ansible-playbook-is-going-to-be-run)
- - [CERA Near Edge Experience Kit Deployment](#cera-near-edge-experience-kit-deployment)
- [5G Core Components](#5g-core-components)
- [dUPF](#dupf)
- [Overview](#overview)
@@ -46,7 +44,6 @@ Reference architecture combines wireless and high performance compute for IoT, A
- [Prerequisites](#prerequisites-2)
- [Settings](#settings-2)
- [Configuration](#configuration-2)
- - [How to prepare image](#how-to-prepare-image)
- [Remote-DN](#remote-dn)
- [Overview](#overview-3)
- [Prerequisites](#prerequisites-3)
@@ -241,43 +238,11 @@ There are two possibilities to change BIOS settings. The most important paramete
#### Manual setup
Reboot the platform, go to the BIOS setup during the server boot process, and set the correct options.
-#### Setup through the CERA deployment
-Bios will be set automatically during CERA deployment according to the provided settings.
-* Provide correct `bios_settings.ini` file for `Intel SYSCFG utility` and store it in `ido-converged-edge-experience-kits/roles/bios_setup/files/`
-* Set correct name of variable `biosconfig_local_path` in file: `ido-converged-edge-experience-kits/cera_5g_near_edge_deployment.yml` for both hosts.
- ```yaml
- # NE Server
- - role: bios_setup
- vars:
- biosconfig_local_path: "bios_config_cera_5g_ne.ini"
- when: update_bios_ne | default(False)
- ```
- ```yaml
- # CN Server
- - role: bios_setup
- vars:
- biosconfig_local_path: "bios_config_cera_5g_cn.ini"
- when: update_bios_cn | default(False)
- ```
-* Change variable to `True` in `ido-converged-edge-experience-kits/host_vars/cera_5g_cn.yml` and in `ido-converged-edge-experience-kits/host_vars/cera_5g_ne.yml`
-
- ```yaml
- # Set True for bios update
- update_bios_cn: True
- ```
- ```yaml
- # Set True for bios update
- update_bios_ne: True
- ```
-> NOTE: It's important to have correct bios.ini file with settings generated on the particular server. There are some unique serial numbers assigned to the server.
-
-More information: [BIOS and Firmware Configuration on OpenNESS Platform](https://www.openness.org/docs/doc/enhanced-platform-awareness/openness-bios)
-
### Setting up machine with Ansible
#### Steps to be performed on the machine, where the Ansible playbook is going to be run
-1. Copy SSH key from machine, where the Ansible playbook is going to be run, to the target machine. Example commands:
+1. Copy the SSH key from the machine where the Ansible playbook is going to be run to the target machines. Example commands:
> NOTE: Generate an SSH key if one is not present on the machine: `ssh-keygen -t rsa` (press the Enter key to apply the default values)
Do it for each target machine (a hypothetical example follows).
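A minimal sketch, assuming the default key location and the example address used later in the inventory (adjust the user and IP to your environment):
```shell
# Hypothetical example (assumption): copy the public SSH key to a target machine.
ssh-copy-id root@172.16.0.1
```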
@@ -299,17 +264,60 @@ More information: [BIOS and Firmware Configuration on OpenNESS Platform](https:/
git submodule update --init --recursive
```
-4. Provide target machines IP addresses for OpenNESS deployment in `ido-converged-edge-experience-kits/openness_inventory.ini`. For Singlenode setup, set the same IP address for both `controller` and `node01`, the line with `node02` should be commented by adding # at the beginning.
-Example:
- ```ini
- [all]
- controller ansible_ssh_user=root ansible_host=192.168.1.43 # First server NE
- node01 ansible_ssh_user=root ansible_host=192.168.1.43 # First server NE
- ; node02 ansible_ssh_user=root ansible_host=192.168.1.12
- ```
- At that stage provide IP address only for `CERA 5G NE` server.
+4. Provide the target machines' IP addresses for the CEEK deployment in `ido-converged-edge-experience-kits/inventory.yml`. For a single-node setup, set the same IP address for both `controller` and `node01`. In the same file, define the details for the Central Office cluster deployment. A hypothetical inventory validation check is shown after the example.
-5. Edit `ido-converged-edge-experience-kits/openness/group_vars/all/10-open.yml` and provide some correct settings for deployment.
+ Example:
+ ```yaml
+ all:
+ vars:
+ cluster_name: near_edge_cluster # NOTE: Use `_` instead of spaces.
+ flavor: cera_5g_near_edge # NOTE: Flavors can be found in `flavors` directory.
+ single_node_deployment: true # Request single node deployment (true/false).
+ limit: # Limit ansible deployment to certain inventory group or hosts
+ controller_group:
+ hosts:
+ controller:
+ ansible_host: 172.16.0.1
+ ansible_user: root
+ edgenode_group:
+ hosts:
+ node01:
+ ansible_host: 172.16.0.1
+ ansible_user: root
+ edgenode_vca_group:
+ hosts:
+ ptp_master:
+ hosts:
+ ptp_slave_group:
+ hosts:
+
+ ---
+ all:
+ vars:
+ cluster_name: central_office_cluster # NOTE: Use `_` instead of spaces.
+ flavor: cera_5g_central_office # NOTE: Flavors can be found in `flavors` directory.
+ single_node_deployment: true # Request single node deployment (true/false).
+ limit: # Limit ansible deployment to certain inventory group or hosts
+ controller_group:
+ hosts:
+ co_controller:
+ ansible_host: 172.16.1.1
+ ansible_user: root
+ edgenode_group:
+ hosts:
+ co_node1:
+ ansible_host: 172.16.1.1
+ ansible_user: root
+ edgenode_vca_group:
+ hosts:
+ ptp_master:
+ hosts:
+ ptp_slave_group:
+ hosts:
+
+ ```
+
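+ A hypothetical sanity check (an assumption, not part of the original procedure) that the multi-document inventory file parses as valid YAML:
+ ```shell
+ # Hypothetical check (assumption): parse all YAML documents in inventory.yml.
+ # Requires the PyYAML package.
+ python3 -c "import yaml; list(yaml.safe_load_all(open('inventory.yml'))); print('inventory.yml OK')"
+ ```
+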
+5. Edit `ido-converged-edge-experience-kits/inventory/default/group_vars/all/10-open.yml` and provide the correct settings for the deployment.
Git token.
```yaml
@@ -339,182 +347,86 @@ Example:
ntp_servers: ['ger.corp.intel.com']
```
-6. Edit file `ido-converged-edge-experience-kits/openness/flavors/cera_5g_near_edge/edgenode_group.yml` and provide correct CPU settings.
+6. Edit file `ido-converged-edge-experience-kits/flavors/cera_5g_near_edge/all.yml` and provide the Near Edge deployment configuration.
+ Choose the Edge Application that will be deployed:
```yaml
- tuned_vars: |
- isolated_cores=1-16,25-40
- nohz=on
- nohz_full=1-16,25-40
- # CPUs to be isolated (for RT procesess)
- cpu_isol: "1-16,25-40"
- # CPUs not to be isolate (for non-RT processes) - minimum of two OS cores necessary for controller
- cpu_os: "0,17-23,24,41-47"
+ # Choose which demo will be launched: `eis` or `openvino`
+ # To skip deploying any demo app, refer to the `edgeapps_deployment_enable` variable
+ deploy_demo_app: "openvino"
```
-7. Edit file `ido-converged-edge-experience-kits/openness/flavors/cera_5g_near_edge/controller_group.yml` and provide names of `network interfaces` that are connected to second server and number of VF's to be created.
-
+ If OpenVINO was chosen as the Edge Application, set the options:
```yaml
- sriov:
- network_interfaces: {eno1: 5, eno2: 2}
- ```
- > NOTE: On various platform interfaces can have different name. For e.g `eth1` instead of `eno1`. Please verify interface name before deployment and do right changes.
-
-8. Execute the `deploy_openness_for_cera.sh` script in `ido-converged-edge-experience-kits` to start OpenNESS platform deployment process by running following command:
- ```shell
- ./deploy_openness_for_cera.sh cera_5g_near_edge
- ```
- It might take few hours.
-
-9. After successful OpenNESS deployment, edit again `ido-converged-edge-experience-kits/openness_inventory.ini`, change IP address to `CERA 5G CN` server.
- ```ini
- [all]
- controller ansible_ssh_user=root ansible_host=192.168.1.109 # Second server CN
- node01 ansible_ssh_user=root ansible_host=192.168.1.109 # Second server CN
- ; node02 ansible_ssh_user=root ansible_host=192.168.1.12
- ```
- Then run `deploy_openness_for_cera.sh` again.
- ```shell
- ./deploy_openness_for_cera.sh
- ```
- All settings in `ido-converged-edge-experience-kits/openness/group_vars/all/10-open.yml` are the same for both servers.
-
-10. When both servers have deployed OpenNess, login to `CERA 5G CN` server and generate `RSA ssh key`. It's required for AMF/SMF VM deployment.
- ```shell
- ssh-keygen -t rsa
- # Press enter key to apply default values
+ model: "pedestrian-detection-adas-0002"
+ display_host_ip: "" # update ip for visualizer HOST GUI.
+ save_video: "enable"
+ target_device: "CPU"
```
-11. Now full setup is ready for CERA deployment.
-
-### CERA Near Edge Experience Kit Deployment
-For CERA deployment some prerequisites have to be fulfilled.
-
-1. CentOS should use kernel `kernel-3.10.0-957.el7.x86_64` and have no newer kernels installed.
-2. Edit file `ido-converged-edge-experience-kits/group_vars/all.yml` and provide correct settings:
-
- Git token
+ Set the interface name used for the connection to the Central Office cluster:
```yaml
- git_repo_token: "your git token"
- ```
- Decide which demo application should be launched
- ```yaml
- # choose which demo will be launched: `eis` or `openvino`
- deploy_app: "eis"
- ```
- EIS release package location
- ```yaml
- # provide EIS release package archive absolute path
- eis_release_package_path: ""
- ```
- AMF/SMF VM image location
- ```yaml
- # VM image path
- vm_image_path: "/opt/flexcore-5g-rel/ubuntu_18.04.qcow2"
+ # PF interface name of N3, N4, N6, N9 created VFs
+ host_if_name_cn: "eno1"
```
-3. Edit file `ido-converged-edge-experience-kits/host_vars/localhost.yml` and provide correct proxy if is required.
+7. Edit file `ido-converged-edge-experience-kits/flavors/cera_5g_near_edge/controller_group.yml`
+ Provide the names of the `network interfaces` that are connected to the second server and the number of VFs to be created.
```yaml
- ### Proxy settings
- # Setup proxy on the machine - required if the Internet is accessible via proxy
- proxy_os_enable: true
- # Clear previous proxy settings
- proxy_os_remove_old: true
- # Proxy URLs to be used for HTTP, HTTPS and FTP
- proxy_os_http: "http://proxy.example.org:3129"
- proxy_os_https: "http://proxy.example.org:3128"
- proxy_os_ftp: "http://proxy.example.org:3128"
- proxy_os_noproxy: "127.0.0.1,localhost,192.168.1.0/24"
- # Proxy to be used by YUM (/etc/yum.conf)
- proxy_yum_url: "{{ proxy_os_http }}"
+ sriov:
+ network_interfaces: {eno1: 5}
```
+ > NOTE: On various platforms, interfaces can have different names, for example `eth1` instead of `eno1`. Please verify the interface names before deployment and make the appropriate changes (a hypothetical check is sketched below).
+
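+ A hedged helper for the note above (an assumption, not part of the original steps):
+ ```shell
+ # Hypothetical check (assumption): list interfaces and confirm the PF names
+ # used in controller_group.yml (e.g. eno1) exist on this server.
+ ip -br link show
+ ```
+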
+8. Set all necessary settings for `CERA 5G Central Office` in `ido-converged-edge-experience-kits/flavors/cera_5g_central_office/all.yml`.
-4. Build all docker images required and provide all necessary binaries.
- - [dUPF](#dUPF)
- - [UPF](#UPF)
- - [AMF-SMF](#AMF-SMF)
-5. Set all necessary settings for `CERA 5G NE` in `ido-converged-edge-experience-kits/host_vars/cera_5g_ne.yml`.
- See [more details](#dUPF) for dUPF configuration
- ```yaml
- # Define PCI addresses (xxxx:xx:xx.x format) for i-upf
- n3_pci_bus_address: "0000:3d:06.0"
- n4_n9_pci_bus_address: "0000:3d:02.0"
- n6_pci_bus_address: "0000:3d:02.1"
-
- # Define VPP VF interface names for i-upf
- n3_vf_interface_name: "VirtualFunctionEthernet3d/6/0"
- n4_n9_vf_interface_name: "VirtualFunctionEthernet3d/2/0"
- n6_vf_interface_name: "VirtualFunctionEthernet3d/2/1"
- ```
- ```yaml
- # Define path where i-upf is located on remote host
- upf_binaries_path: "/opt/flexcore-5g-rel/i-upf/"
- ```
```yaml
- # PF interface name of N3 created VF
- host_if_name_N3: "eno1"
# PF interface name of N4, N6, N9 created VFs
- host_if_name_N4_N6_n9: "eno2"
- ```
- [OpenVino](#OpenVINO) settings if was set as active demo application
- ```yaml
- model: "pedestrian-detection-adas-0002"
- display_host_ip: "" # update ip for visualizer HOST GUI.
- save_video: "enable"
- target_device: "CPU"
+ host_if_name_cn: "eno1"
```
-7. Set all necessary settings for `CERA 5G CN` in `ido-converged-edge-experience-kits/host_vars/cera_5g_cn.yml`.
- For more details check:
- - [UPF](#UPF)
- - [AMF-SMF](#AMF-SMF)
- ```yaml
- # Define N4/N9 and N6 interface device PCI bus address
- PCI_bus_address_N4_N9: '0000:3d:02.0'
- PCI_bus_address_N6: '0000:3d:02.1'
- # vpp interface name as per setup connection
- vpp_interface_N4_N9_name: 'VirtualFunctionEthernet3d/2/0'
- vpp_interface_N6_name: 'VirtualFunctionEthernet3d/2/1'
- ```
- ```yaml
- # 5gc binaries directory name
- package_name_5gc: "5gc"
- ```
+9. Edit file `ido-converged-edge-experience-kits/flavors/cera_5g_central_office/controller_group.yml`
+
+ Provide the names of the `network interfaces` that are connected to the second server and the number of VFs to be created.
```yaml
- # psa-upf directory path
- upf_binaries_path: '/opt/flexcore-5g-rel/psa-upf/'
+ sriov:
+ network_interfaces: {eno1: 5}
```
+
+10. Edit file `ido-converged-edge-experience-kits/flavors/cera_5g_central_office/edgenode_group.yml`
+
+ Set up CPU isolation according to your hardware capabilities (a hypothetical topology check is sketched after the snippet):
```yaml
- ## AMF-SMF vars
+ # Variables applied with the profile
+ tuned_vars: |
+ isolated_cores=1-16,25-40
+ nohz=on
+ nohz_full=1-16,25-40
- # Define N2/N4
- PCI_bus_address_N2_N4: "0000:3d:02.3"
+ # CPUs to be isolated (for RT processes)
+ cpu_isol: "1-16,25-40"
+ # CPUs not to be isolated (for non-RT processes) - minimum of two OS cores necessary for controller
+ cpu_os: "0,17-23,24,41-47"
```
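+
+ A hedged sketch for the topology check mentioned above (an assumption): inspect the CPU layout so the isolated and OS core lists match the server.
+ ```shell
+ # Hypothetical helper (assumption): show socket/core/NUMA layout of the node.
+ lscpu | grep -E 'Socket|Core|NUMA|^CPU\(s\)'
+ ```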
- `CERA 5G CN` public ssh key
+
+ Set up the hugepage settings (a hypothetical verification is sketched after the snippet):
```yaml
- # Host public ssh key
- host_ssh_key: ""
+ # Size of a single hugepage (2M or 1G)
+ hugepage_size: "1G"
+ # Amount of hugepages
+ hugepage_amount: "8"
```
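+
+ A hypothetical post-deployment verification (an assumption) that the requested 1G hugepages were allocated on the node:
+ ```shell
+ # Hypothetical check (assumption): show hugepage allocation on the node.
+ grep Huge /proc/meminfo
+ ```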
- ```yaml
- ## ConfigMap vars
- # PF interface name of N3 created VF
- host_if_name_N3: "eno2"
- # PF interface name of N4, N6, N9 created VFs
- host_if_name_N4_N6_n9: "eno1"
- ```
-8. Provide correct IP for target servers in file `ido-converged-edge-experience-kits/cera_inventory.ini`
- ```ini
- [all]
- cera_5g_ne ansible_ssh_user=root ansible_host=192.168.1.109
- cera_5g_cn ansible_ssh_user=root ansible_host=192.168.1.43
- ```
-9. Deploy CERA Experience Kit
+11. Deploy Converged Edge Experience Kit (Near Edge and Central Office clusters simultaneously)
+
+ Silent deployment:
```shell
- ./deploy_cera.sh
+ python ./deploy.py
```
+ > NOTE: In a multicluster deployment, logs are hidden by default. To check the logs, the `tail` tool can be used on the deployment log files (see the hypothetical example below).
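+
+ A hedged example for the note above (an assumption; the log location may differ between releases):
+ ```shell
+ # Hypothetical example (assumption): follow the per-cluster deployment logs
+ # produced by deploy.py; adjust the path to where the logs are written.
+ tail -f ./logs/*.log
+ ```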
+
## 5G Core Components
This section describes in detail how to build the particular images and configure Ansible for the deployment.
@@ -530,22 +442,17 @@ The `CERA dUPF` component is deployed on `CERA 5G Near Edge (cera_5g_ne)` node.
#### Prerequisites
-To deploy dUPF correctly it is needed to provide Docker image to Docker repository on target machine(cera_5g_ne). There is a script on the `open-ness/eddgeapps/network-functions/core-network/5G/UPF` repo provided by CERA, which builds the image automatically.
-
-#### Settings
-Following variables need to be defined in `/host_vars/cera_5g_ne.yml`
-```yaml
-n3_pci_bus_address: "" - PCI bus address of VF, which is used for N3 interface by dUPF
-n4_n9_pci_bus_address: "" - PCI bus address of VF, which is used for N4 and N9 interface by dUPF
-n6_pci_bus_address: "" - PCI bus address of VF, which is used for N6 interface by dUPF
+To deploy the dUPF correctly, a Docker image needs to be provided to the Docker repository on the target machine (cera_5g_ne). There is a script in the `https://github.com/open-ness/edgeapps/tree/master/network-functions/core-network/5G/UPF` repo provided by CERA, which builds the image automatically.
-n3_vf_interface_name: "" - name of VF, which is used for N3 interface by dUPF
-n4_n9_vf_interface_name: "" - name of VF, which is used for N4 and N9 interface by dUPF
-n6_vf_interface_name: "" - name of VF, which is used for N6 interface by dUPF
+```sh
+./build_image.sh -b i-upf -i i-upf
+```
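+
+As a hedged follow-up (an assumption, not a required step), the freshly built image can be confirmed in the local image store on cera_5g_ne:
+```shell
+# Hypothetical check (assumption): confirm the i-upf image is present.
+docker images | grep i-upf
+```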
-dpdk_driver_upf: "" - DPDK driver used (vfio-pci/igb_uio) to VFs bindings
+#### Settings
+In the file `ido-converged-edge-experience-kits/flavors/cera_5g_near_edge/all.yml`, update the name of the interface that is used for the connection to the Central Office cluster (AMF/SMF and UPF).
-upf_binaries_path: "" - path where the dUPF binaries are located on the remote host
+```yaml
+host_if_name_cn: "eno1"
```
#### Configuration
@@ -556,161 +463,52 @@ The dUPF is configured automatically during the deployment.
The `User Plane Function (UPF)` is a part of the 5G Core Network and is responsible for packet routing. It has two separate interfaces for the `N4/N9` and `N6` data lines. The `N4/N9` interface is used for the connection with `dUPF` and `AMF/SMF` (locally). The `N6` interface is used for the connection with `EDGE-APP`, `dUPF`, and `Remote-DN` (locally).
-The CERA UPF component is deployed on `CERA 5G Core Network (cera_5g_cn)` node. It is deployed as a POD - during deployment of OpenNESS with CERA 5G Near Edge flavor automatically.
+The CERA UPF component is deployed on the `CERA 5G Central Office (cera_5g_co)` node. It is deployed automatically as a POD during the deployment of OpenNESS with the CERA 5G Central Office flavor.
#### Deployment
#### Prerequisites
-To deploy UPF correctly it is needed to provide a Docker image to Docker Repository on target machine(cera_5g_ne and cera_5g_cn). There is a script on the `open-ness/eddgeapps/network-functions/core-network/5G/UPF` repo provided by CERA, which builds the image automatically.
+To deploy the UPF correctly, a Docker image needs to be provided to the Docker repository on the target machines (cera_5g_ne and cera_5g_co). There is a script in the `https://github.com/open-ness/edgeapps/tree/master/network-functions/core-network/5G/UPF` repo provided by CERA, which builds the image automatically.
-#### Settings
-
-Following variables need to be defined in the `/host_vars/cera_5g_ne.yml`
-```yaml
-PCI_bus_address_N4_N9: "" - PCI bus address of VF, which is used for N4 and N9 interface by UPF
-PCI_bus_address_N6: "" - PCI bus address of VF, which is used for N6 interface by UPF
+```sh
+./build_image.sh -b psa-upf -i psa-upf
+```
-vpp_interface_N4_N9_name: "" - name of VF, which is used for N4 and N9 interface by UPF
-vpp_interface_N6_name: "" - name of VF, which is used for N6 interface by UPF
+#### Settings
-dpdk_driver_upf: "" - DPDK driver used (vfio-pci/igb_uio) to VFs bindings
+In the file `ido-converged-edge-experience-kits/flavors/cera_5g_central_office/all.yml`, update the name of the interface that is used for the connection to the Near Edge cluster (dUPF).
-upf_binaries_path: "" - path where the UPF binaries are located on the remote host
+```yaml
+host_if_name_cn: "eno1"
```
#### Configuration
-The UPF is configured automatically during the deployment.
-
+The `UPF` is configured automatically during the deployment.
### AMF-SMF
#### Overview
AMF-SMF is a part of the 5G Core Architecture responsible for the `Session Management (SMF)` and `Access and Mobility Management (AMF)` functions - it establishes sessions and manages data plane packets.
-The CERA `AMF-SMF` component is deployed on `CERA 5G Core Network (cera_5g_cn)` node and communicates with UPF and dUPF, so they must be deployed and configured before `AMF-SMF`.
+The CERA `AMF-SMF` component is deployed on the `CERA 5G Central Office (cera_5g_co)` node and communicates with the UPF and dUPF, so they must be deployed and configured before `AMF-SMF`.
-It is deployed in Virtual Machine with `Ubuntu 18.04 Cloud OS`, using `Kube-virt` on OpenNess platform - deploying OpenNess with CERA 5G Near Edge flavor automatically, configures and enables Kube-Virt plugin in OpenNess platform.
+It is deployed automatically as a POD during the deployment of OpenNESS with the CERA 5G Central Office flavor.
#### Deployment
#### Prerequisites
-To deploy `AMF-SMF` correctly it is needed to provide image with `Ubuntu 18.04.1 Desktop (.img, .qcow2 format)` with required packages installed and directory with `AMF-SMF` binaries.
+To deploy AMF-SMF correctly, a Docker image needs to be provided to the Docker repository on the target machine (cera_5g_co). There is a script in the `https://github.com/open-ness/edgeapps/tree/master/network-functions/core-network/5G/AMF_SMF` repo provided by CERA, which builds the image automatically.
-#### Settings
-
-Following variables need to be defined in `/host_vars/cera_5g_cn.yml`
-```yaml
-PCI_bus_address_N2_N4: "" - PCI Bus address for VF (e.g. 0000:3a:01), which will be used for N2 and N4 interface by AMF-SMF (VF created from the same interface like Remote-DN and dUPF).
-
-host_ssh_key: "" - public ssh key of node - to generate public ssh key, please use on node command: ssh-keygen -t rsa and copy content of file located in $HOME/.ssh/id_rsa.pub (without ssh-rsa on beginning and user@hostname at the end) to variable.
-
-host_user_name: "" - username (e.g. root) of the node.
-
-And one variable in /group_vars/all.yml
-
-vm_image_path: "" - path where image of Virtual Machine (provided from script described above) is stored on host machine.
+```sh
+./build_image.sh -b amf-smf
```
+#### Settings
+No special settings are required for AMF-SMF deployment.
+
#### Configuration
-During the deployment, there is a Python script, which automatically configure `SMF` config files according to CERA setup. It changes IP subnet for `Local-DN` component in `AMF-SMF` configuration files. These settings can be changed manually if it is needed by User Setup.
-
-#### How to prepare image
-Steps to do on host machine with CentOS
-
-1. Download Ubuntu 18.04.1 Desktop `.iso` image.
- ```shell
- wget http://old-releases.ubuntu.com/releases/18.04.1/ubuntu-18.04.1-desktop-amd64.iso
- ```
-2. Check that `kvm_intel` is enabled in BIOS settings.
- ```shell
- dmesg | grep kvm -> should not display any disabled msg
- lsmod | egrep 'kvm'
- kvm_intel 183818 0
- kvm 624312 1 kvm_intel ->if BIOS VM enabled then kvm_intel should appear
- irqbypass 13503 1 kvm
- ```
-3. Enable VNC Server and install GNOME Desktop.
- ```shell
- yum groupinstall "GNOME Desktop"
- yum install tigervnc-server xorg-x11-fonts-Type1
- vncserver -depth 24 -geometry 1920x1080
- ```
-4. Install Hypervisor packages and libraries.
- ```shell
- yum install qemu-kvm libvirt libvirt-python libguestfs-tools virt-install virt-manager
- systemctl start libvirtd
- ```
-5. RUN `virt-manager` GUI application, select previous downloaded Ubuntu `.iso` image and the disk size (20GiB recommended) and follow the install process. After successful install take the next steps.
-6. Change the grub file on Guest OS to allow console connection.
- ```shell
- vi /etc/default/grub
- Add `console=ttyS0` to end of `GRUB_CMD_LINELINUX=`
- ```
- Execute.
- ```shell
- grub-mkconfig -o /boot/grub/grub.cfg
- Reboot board..
- ```
-7. Login to Guest OS using `virsh console`.
- ```shell
- virsh console
- ```
- > NOTE: Replace with the Virtual Machine name with Ubuntu OS
-
-Steps to do on logged Guest OS
-
-1. Enable Ping utility.
- ```shell
- sudo ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf
- ```
- Verify that content of `etc/reslov.conf` is the same like in `/run/systemd/resolve/resolv.conf`
- Test utility by pinging other server in the same network
- ```shell
- ping -c 5
- ```
- > NOTE: Replace with any other server on the same network
-2. Add proxy to environment variables.
- ```shell
- vi /etc/environment
- http_proxy="http://proxy.example.org:3128"
- https_proxy="http://proxy.example.org:3129"
- no_proxy="127.0.0.1,localhost,192.168.1.0/24"
- ```
-3. Reboot Guest.
- ```shell
- reboot
- ```
-4. After reboot log in again using `virsh console` from host machine.
-5. Update package repositories.
- ```shell
- apt-get update
- ```
-6. Install SSH Server and check status.
- ```shell
- apt-get install openssh-server
- systemctl status ssh
- ```
-7. Permit SSH connection as a root user.
- Change settings in SSH config file:
- ```shell
- vi /etc/ssh/sshd_config
- PermitRootLogin yes
- ```
- And restart SSH Server Daemon.
- ```shell
- service sshd restart
- ```
-8. Install required packages for AMF-SMF deployment.
- ```shell
- apt-get install -y screen iproute2 net-tools cloud-init
- ```
-9. Copy AMF-SMF binaries to root HOME folder.
-10. Shutdown the Guest Machine.
-
-After these steps there will be available `.qcow2` image generated by installed Virtual Machine in `/var/lib/libvirt/images` directory.
-
-If AMF-SMF is not working correctly installing these packages should fix it: `qemu-guest-agent,iputils-ping,iproute2,screen,libpcap-dev,tcpdump,libsctp-dev,apache2,python-pip,sudo,ssh`.
+The `AMF-SMF` is configured automatically during the deployment.
### Remote-DN
@@ -733,9 +531,9 @@ Deployment of Local-DN is completely automated, so there is no need to set or co
### OpenVINO
#### Settings
-In the `group_vars/all.yml` file can be chosen which application should be built and deploy. Set a proper value for the deploy_app variable.
+In the `ido-converged-edge-experience-kits/flavors/cera_5g_near_edge/all.yml` file, choose which application should be built and deployed. Set a proper value for the `deploy_demo_app` variable.
```yaml
-deploy_app: "" - Type openvino if OpenVINO demo should be launched.
+deploy_demo_app: "" - Type openvino if OpenVINO demo should be launched.
```
Several variables must be set in the file `host_vars/cera_5g_ne.yml`:
@@ -746,7 +544,7 @@ save_video: "enable" - For value "enable" the output will be written to /root/sa
```
#### Deployment
-After running the `deploy_cera.sh` script, pod ov-openvino should be available on `cera_5g_ne` machine. The status of the ov-openvino pod can be checked by use:
+After running the `deploy.py` script, the ov-openvino pod should be available on the `cera_5g_ne` machine. The status of the ov-openvino pod can be checked with:
```shell
kubectl get nodes,pods,svc -A -o wide|grep ov-openvino
```
@@ -755,7 +553,7 @@ Immediately after creating, the ov-openvino pod will wait for input streaming. I
#### Streaming
Video to the OpenVINO pod should be streamed to IP `192.168.1.101` and port `5000`. Make sure that the pod with OpenVINO is visible from your streaming machine. In the simplest case, the video can be streamed from the same machine where the pod with OpenVINO is available.
-Output will be saved to the `saved_video/ov-output.mjpeg` file (`save_video` variable in the `host_vars/cera_5g_ne.yml` should be set to `"enable"` and should be not changed).
+Output will be saved to the `saved_video/ov-output.mjpeg` file (the `save_video` variable in `ido-converged-edge-experience-kits/flavors/cera_5g_near_edge/all.yml` should be set to `"enable"` and should not be changed).
Streaming is possible from a file or from a camera. For continuous and uninterrupted streaming of a video file, the video file can be streamed in a loop. An example of a Bash file for streaming is shown below.
```shell
@@ -804,6 +602,7 @@ CERA Near Edge deployment provide a reference implementation on how to use OpenN
| CERA | Converged Edge Reference Architecture |
| CN | Core Network |
| CNF | Container Network Function |
+| CO | Central Office |
| CommSPs | Communications service providers |
| DPDK | Data Plane Developer Kit |
| eNB | e-NodeB |
diff --git a/doc/reference-architectures/openness_sdwan.md b/doc/reference-architectures/cera_sdwan.md
similarity index 78%
rename from doc/reference-architectures/openness_sdwan.md
rename to doc/reference-architectures/cera_sdwan.md
index 1e1577cb..47ab5f32 100644
--- a/doc/reference-architectures/openness_sdwan.md
+++ b/doc/reference-architectures/cera_sdwan.md
@@ -28,11 +28,18 @@ Copyright (c) 2020 Intel Corporation
- [Scenario 1](#scenario-1)
- [Scenario 2](#scenario-2)
- [Scenario 3](#scenario-3)
+ - [EWO Configuration](#ewo-configuration)
+ - [NodeSelector For CNF](#nodeselector-for-cnf)
+ - [Network and CNF Interface](#network-and-cnf-interface)
+ - [Tunnel](#tunnel)
+ - [SNAT](#snat)
+ - [DNAT](#dnat)
- [Resource Consumption](#resource-consumption)
- [Methodology](#methodology)
- [Results](#results)
- [References](#references)
- [Acronyms](#acronyms)
+- [Terminology](#terminology)
## Introduction
With the growth of global organizations, there is an increased need to connect branch offices distributed across the world. As enterprise applications move from corporate data centers to the cloud or the on-premise edge, their branches require secure and reliable, low latency, and affordable connectivity. One way to achieve this is to deploy a wide area network (WAN) over the public Internet, and create secure links to the branches where applications are running.
@@ -125,7 +132,7 @@ the CRD Controller includes several functions:
- FirewallConf Controller, to monitor the FirewallConf CR;
- - IPSec Controller, to monitor the IpSec CRs.
+ - IPSec Controller, to monitor the IPsec CRs.
### Custom Resources (CRs)
@@ -356,6 +363,273 @@ A more detailed description of this scenario is available in OpenNESS [documenta
![OpenNESS SD-WAN Scenario 3 ](sdwan-images/e2e-scenario3.png)
+### EWO Configuration
+There are five types of configuration for EWO. With these configurations, it is easy to deploy the above scenarios automatically.
+
+#### NodeSelector For CNF
+
+![EWO NodeSelector](sdwan-images/ewo-node-select.png)
+This configuration is used to choose the node(s) on which to install CNFs.
+For this example, we want to set up one CNF on node1 and another CNF on node3; the configuration snippets are shown below:
+
+`inventory/default/host_vars/node1/30-ewo.yml`
+```yaml
+sdwan_labels: '{"sdwanPurpose": "infra", "sdwanProvider": "operator_A"}'
+
+```
+
+and
+`inventory/default/host_vars/node3/30-ewo.yml`
+```yaml
+sdwan_labels: '{"sdwanProvider": "operator_B"}'
+
+```
+
+**NOTE**: An alternative configuration: you can also define `sdwan_labels` in `inventory.yml`. If CNFs are deployed only on node3, the configuration snippet is as follows:
+
+```yaml
+edgenode_group:
+ hosts:
+ node01:
+ ansible_host: 172.16.0.1
+ ansible_user: openness
+ node02:
+ ansible_host: 172.16.0.2
+ ansible_user: openness
+ node03:
+ ansible_host: 172.16.0.3
+ ansible_user: openness
+ sdwan_labels: {"sdwanProvider": "operator_A"}
+```
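+
+A hedged verification (an assumption, not part of the original flow) that the labels were applied to the intended nodes after deployment:
+```shell
+# Hypothetical check (assumption): list node labels and filter for the
+# sdwan labels defined above.
+kubectl get nodes --show-labels | grep -i sdwan
+```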
+
+#### Network and CNF Interface
+![EWO Network and CNF Map](sdwan-images/ewo-network-cnf-interface.png)
+This configuration is used to set up OVN WAN or cluster networks and attach the CNFs to the networks.
+For this example, we want to set up 4 networks indicated with different colored lines (black/yellow/orange/purple). The black and yellow ones are 2 different WAN networks. The configuration snippet is shown below:
+
+
+In `inventory/default/host_vars/node1/30-ewo.yml`, or in `flavors/sdewan-edge/all.yml` (taking the edge cluster as an example) if the CNFs are set up on only one node.
+```yaml
+pnet1_name: pnetwork1
+pnet2_name: pnetwork2
+onet1_name: onetwork1
+
+# A list of network definitions. It can be provider network or ovn4nfv network.
+# ovn4nfv should be configured as the secondary CNI for cluster network in this
+# situation. And the user can use the configuration to customize networks
+# according to needs.
+networks:
+ - networkname: "{{ pnet1_name }}"
+ subnname: "pnet1_subnet"
+ subnet: 10.10.1.0/24
+ gateway: 10.10.1.1
+ excludeIps: 10.10.1.2..10.10.1.9
+ providerNetType: "DIRECT"
+ providerInterfaceName: "p1"
+ - networkname: "{{ pnet2_name }}"
+ subnname: "pnet2_subnet"
+ subnet: 10.10.2.0/24
+ gateway: 10.10.2.1
+ excludeIps: 10.10.2.2..10.10.2.9
+ providerNetType: "DIRECT"
+ providerInterfaceName: "p2"
+ - networkname: "{{ onet1_name }}"
+ subnname: "onet1_subnet"
+ subnet: 10.10.3.0/24
+ gateway: 10.10.3.1
+ excludeIps: 10.10.3.2..10.10.3.9
+ providerNetType: "NONE"
+
+# CNF pod info
+cnf_config:
+ - name: "cnf1"
+ interfaces:
+ - ipAddress: "10.10.1.5"
+ name: "net2"
+ belongto: "{{ pnet1_name }}"
+ - ipAddress: "10.10.1.6"
+ name: "net3"
+ belongto: "{{ pnet2_name }}"
+ - ipAddress: "10.10.3.5"
+ name: "net4"
+ belongto: "{{ onet1_name }}"
+```
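+
+A hedged check (an assumption; the pod name `cnf1` and its namespace are placeholders) that the CNF pod is running with the interfaces defined above:
+```shell
+# Hypothetical verification (assumption): confirm the CNF pod is up and list
+# its network interfaces; replace <namespace> and <cnf1-pod> with real values.
+kubectl get pods -A -o wide | grep cnf1
+kubectl exec -n <namespace> -it <cnf1-pod> -- ip address
+```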
+
+#### Tunnel
+![EWO Tunnel](sdwan-images/ewo-tunnel-setup.png)
+This configuration is used to set up an IPsec tunnel between 2 clusters.
+The configuration snippet for the edge cluster (left) is shown below:
+
+In `inventory/default/host_vars/node1/30-ewo.yml`, or in `flavors/sdewan-edge/all.yml` (taking the edge cluster as an example) if the CNFs are set up on only one node.
+```yaml
+
+pnet1_name: pnetwork1
+# A list of network definitions. It can be provider network or ovn4nfv network.
+# ovn4nfv should be configured as the secondary CNI for cluster network in this
+# situation. And the user can use the configuration to customize networks
+# according to needs.
+networks:
+ - networkname: "{{ pnet1_name }}"
+ subnname: "pnet1_subnet"
+ subnet: 10.10.1.0/24
+ gateway: 10.10.1.1
+ excludeIps: 10.10.1.2..10.10.1.9
+ providerNetType: "DIRECT"
+ providerInterfaceName: "p1"
+
+# overlay network
+O_TUNNEL_NET: 172.16.30.0/24
+
+# CNF pod info
+cnf_config:
+ - name: "cnf1"
+ interfaces:
+ - ipAddress: "10.10.1.5"
+ name: "net2"
+ belongto: "{{ pnet1_name }}"
+ rules:
+ - name: tunnel1
+ type: tunnelhost
+ local_identifier: 10.10.1.5
+ remote: 10.10.2.5
+ remote_subnet: "{{ O_TUNNEL_NET }},10.10.2.5/32"
+ remote_sourceip:
+ local_subnet:
+```
+
+The configuration snippet for the hub cluster (right) is shown below:
+In `inventory/default/host_vars/node1/30-ewo.yml`, or in `flavors/sdewan-edge/all.yml` (taking the edge cluster as an example) if the CNFs are set up on only one node.
+```yaml
+pnet1_name: pnetwork1
+
+# A list of network definitions. It can be provider network or ovn4nfv network.
+# ovn4nfv should be configured as the secondary CNI for cluster network in this
+# situation. And the user can use the configuration to customize networks
+# according to needs.
+networks:
+ - networkname: "{{ pnet1_name }}"
+ subnname: "pnet2_subnet"
+ subnet: 10.10.2.0/24
+ gateway: 10.10.2.1
+ excludeIps: 10.10.2.2..10.10.2.9
+ providerNetType: "DIRECT"
+ providerInterfaceName: "p1"
+
+# overlay network
+O_TUNNEL_NET: 172.16.30.0/24
+
+# CNF pod info
+cnf_config:
+ - name: "cnf1"
+ interfaces:
+ - ipAddress: "10.10.2.5"
+ name: "net2"
+ belongto: "{{ pnet1_name }}"
+ rules:
+ - name: tunnel1
+ type: tunnelsite
+ local_identifier:
+ local_sourceip:
+ remote_sourceip: "{{ O_TUNNEL_NET }}"
+ local_subnet: "{{ O_TUNNEL_NET }},10.10.2.5/32"
+```
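+
+A hedged end-to-end check (an assumption; the pod name and namespace are placeholders, and the address 10.10.2.5 comes from the example above) once both clusters are deployed:
+```shell
+# Hypothetical check (assumption): ping the hub CNF address across the tunnel
+# from the edge CNF pod.
+kubectl exec -n <namespace> -it <edge-cnf-pod> -- ping -c 3 10.10.2.5
+```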
+
+#### SNAT
+![EWO SNAT](sdwan-images/ewo-snat-setup.png)
+This configuration is used to set up SNAT when an app pod in the cluster wants to access the external network, for example, a service on the Internet.
+The configuration snippet is shown below:
+
+In `inventory/default/host_vars/node1/30-ewo.yml`, or in `flavors/sdewan-edge/all.yml` (taking the edge cluster as an example) if the CNFs are set up on only one node.
+```yaml
+pnet1_name: pnetwork1
+pnet2_name: pnetwork2
+
+# A list of network definitions. It can be provider network or ovn4nfv network.
+# ovn4nfv should be configured as the secondary CNI for cluster network in this
+# situation. And the user can use the configuration to customize networks
+# according to needs.
+networks:
+ - networkname: "{{ pnet1_name }}"
+ subnname: "pnet1_subnet"
+ subnet: 10.10.1.0/24
+ gateway: 10.10.1.1
+ excludeIps: 10.10.1.2..10.10.1.9
+ providerNetType: "DIRECT"
+ providerInterfaceName: "p1"
+ - networkname: "{{ pnet2_name }}"
+ subnname: "pnet2_subnet"
+ subnet: 10.10.2.0/24
+ gateway: 10.10.2.1
+ excludeIps: 10.10.2.2..10.10.2.9
+ providerNetType: "DIRECT"
+ providerInterfaceName: "p2"
+
+# CNF pod info
+cnf_config:
+ - name: "cnf1"
+ interfaces:
+ - ipAddress: "10.10.1.5"
+ name: "net2"
+ belongto: "{{ pnet1_name }}"
+ - ipAddress: "10.10.1.6"
+ name: "net3"
+ belongto: "{{ pnet2_name }}"
+ - name: snat1
+ type: snat
+ network: 10.10.1.0/24
+ private: 10.10.2.6
+ via: 10.10.1.5
+ provider: "{{ pnet1_name }}"
+```
+
+#### DNAT
+![EWO DNAT](sdwan-images/ewo-snat-setup.png)
+This configuration is used to set up DNAT when external traffic comes into the cluster, for example, when an app pod exposes a service to the Internet.
+The configuration snippet is shown below:
+
+In `inventory/default/host_vars/node1/30-ewo.yml`, or in `flavors/sdewan-edge/all.yml` (taking the edge cluster as an example) if the CNFs are set up on only one node.
+```yaml
+pnet1_name: pnetwork1
+pnet2_name: pnetwork2
+
+# A list of network definitions. It can be provider network or ovn4nfv network.
+# ovn4nfv should be configured as the secondary CNI for cluster network in this
+# situation. And the user can use the configuration to customize networks
+# according to needs.
+networks:
+ - networkname: "{{ pnet1_name }}"
+ subnname: "pnet1_subnet"
+ subnet: 10.10.1.0/24
+ gateway: 10.10.1.1
+ excludeIps: 10.10.1.2..10.10.1.9
+ providerNetType: "DIRECT"
+ providerInterfaceName: "p1"
+ - networkname: "{{ pnet2_name }}"
+ subnname: "pnet2_subnet"
+ subnet: 10.10.2.0/24
+ gateway: 10.10.2.1
+ excludeIps: 10.10.2.2..10.10.2.9
+ providerNetType: "DIRECT"
+ providerInterfaceName: "p2"
+
+# CNF pod info
+cnf_config:
+ - name: "cnf1"
+ interfaces:
+ - ipAddress: "10.10.1.5"
+ name: "net2"
+ belongto: "{{ pnet1_name }}"
+ - ipAddress: "10.10.1.6"
+ name: "net3"
+ belongto: "{{ pnet2_name }}"
+ - name: dnat1
+ type: dnat
+ from: 10.10.1.6
+ ingress: 10.10.2.5
+ network: 10.10.2.0/24
+ provider: "{{ pnet1_name }}"
+```
+
## Resource Consumption
### Methodology
@@ -411,3 +685,14 @@ To measure total memory usage, the command “free -h” was used.
| TCP | Transmission Control Protocol |
| uCPE | Universal Customer Premise Equipment |
+## Terminology
+
+| Term | Description |
+|:-----: | ----- |
+| EWO | Edge WAN Overlay |
+| Overlay controller | A central controller that provides central control of SDEWAN overlay networks by automatically configuring the SDEWAN CNFs through the SDEWAN CRD controller located in edge location clusters and hub clusters |
+| EWO Controller | Represents the central overlay controller |
+| EWO Operator | Represents the CRD controller |
+| EWO CNF | Represents the OpenWRT based CNF |
+| SDEWAN CRD Controller | Implemented as a k8s CRD controller; it manages CRDs (e.g., firewall related CRDs, Mwan3 related CRDs, and IPsec related CRDs) and internally calls the SDEWAN RESTful API to do CNF configuration. A remote client (e.g., the SDEWAN Central Controller) can manage the SDEWAN CNF configuration by creating/updating/deleting SDEWAN CRs. |
+| OpenWRT based CNF | The CNF is implemented based on OpenWRT; it enhances the OpenWRT Luci web interface with SDEWAN controllers to provide a RESTful API for network function configuration and control. |
diff --git a/doc/reference-architectures/core-network/openness_upf.md b/doc/reference-architectures/core-network/openness_upf.md
index b8ea9c61..98a73b2d 100644
--- a/doc/reference-architectures/core-network/openness_upf.md
+++ b/doc/reference-architectures/core-network/openness_upf.md
@@ -45,7 +45,7 @@ As part of the end-to-end integration of the Edge cloud deployment using OpenNES
# Purpose
-This document provides the required steps to deploy UPF on the OpenNESS platform. 4G/(Long Term Evolution network)LTE or 5G UPF can run as network functions on the Edge node in a virtualized environment. The reference [Dockerfile](https://github.com/open-ness/edgeapps/blob/master/network-functions/core-network/5G/UPF/Dockerfile) and [5g-upf.yaml](https://github.com/open-ness/edgeapps/blob/master/network-functions/core-network/5G/UPF/5g-upf.yaml) provide details on how to deploy UPF as a Container Networking function (CNF) in a K8s pod on OpenNESS edge node using OpenNESS Enhanced Platform Awareness (EPA) features.
+This document provides the required steps to deploy UPF on the OpenNESS platform. 4G/(Long Term Evolution network)LTE or 5G UPF can run as network functions on the Edge node in a virtualized environment. The reference [Dockerfile](https://github.com/open-ness/edgeapps/blob/master/network-functions/core-network/5G/UPF/Dockerfile) and [5g-upf.yaml](https://github.com/open-ness/edgeapps/blob/master/network-functions/core-network/5G/UPF/5g-upf.yaml) provide details on how to deploy UPF as a Cloud-native Network Function (CNF) in a K8s pod on OpenNESS edge node using OpenNESS Enhanced Platform Awareness (EPA) features.
These scripts are validated through a reference UPF solution (implementation is based on Vector Packet Processing (VPP)) that is not part of the OpenNESS release.
@@ -59,14 +59,17 @@ These scripts are validated through a reference UPF solution (implementation is
1. To keep the build and deploy process straightforward, the Docker\* build and image are stored on the Edge node.
+2. Copy the UPF binary package to the Docker build folder. The reference Docker files and the Helm chart for deploying the UPF are available at [edgeapps_upf_docker](https://github.com/open-ness/edgeapps/tree/master/network-functions/core-network/5G/UPF) and [edgeapps_upf_helmchart](https://github.com/open-ness/edgeapps/tree/master/network-functions/core-network/charts/upf), respectively.
+
```bash
- ne-node# cd <5g-upf-binary-package>
+ ne-node# cp -rf <5g-upf-binary-package> edgeapps/network-functions/core-network/5G/UPF/upf
```
-2. Copy the Docker files to the node and build the Docker image. Reference Docker files and the Helm chart for deploying the UPF is available at [edgeapps_upf_docker](https://github.com/open-ness/edgeapps/tree/master/network-functions/core-network/5G/UPF) and [edgeapps_upf_helmchart](https://github.com/open-ness/edgeapps/tree/master/network-functions/core-network/charts/upf) respectively
+3. Build the Docker image.
```bash
- ne-node# ./build_image.sh
+ ne-node# cd edgeapps/network-functions/core-network/5G/UPF
+ ne-node# ./build_image.sh -b ./upf/ -i upf-cnf
ne-node# docker image ls | grep upf
upf-cnf 1.0 e0ce467c13d0 15 hours ago 490MB
@@ -139,9 +142,9 @@ Below is a list of minimal configuration parameters for VPP-based applications s
3. Enable the vfio-pci/igb-uio driver on the node. The below example shows the enabling of the `igb_uio` driver:
```bash
- ne-node# /opt/openness/dpdk-18.11.6/usertools/dpdk-devbind.py -b igb_uio 0000:af:0a.0
+ ne-node# /opt/openness/dpdk-19.11.1/usertools/dpdk-devbind.py -b igb_uio 0000:af:0a.0
- ne-node# /opt/openness/dpdk-18.11.6/usertools/dpdk-devbind.py --status
+ ne-node# /opt/openness/dpdk-19.11.1/usertools/dpdk-devbind.py --status
Network devices using DPDK-compatible driver
============================================
0000:af:0a.0 'Ethernet Virtual Function 700 Series 154c' drv=igb_uio unused=i40evf,vfio-pci
@@ -284,7 +287,7 @@ helm install \ \ \**NOTE**: The command `groupadd vpp` needs to be given only for the first execution.
-
```bash
ne-controller# kubectl exec -it upf-cnf -- /bin/bash
- upf-cnf# groupadd vpp
- upf-cnf# ./run_upf.sh
+ upf-cnf# sudo -E ./run_upf.sh
```
## Uninstall UPF pod from OpenNESS controller
diff --git a/doc/reference-architectures/ran/openness_ran.md b/doc/reference-architectures/ran/openness_ran.md
index bd3bd9f5..0cac933d 100644
--- a/doc/reference-architectures/ran/openness_ran.md
+++ b/doc/reference-architectures/ran/openness_ran.md
@@ -8,39 +8,44 @@ Copyright (c) 2020 Intel Corporation
- [Building the FlexRAN image](#building-the-flexran-image)
- [FlexRAN hardware platform configuration](#flexran-hardware-platform-configuration)
- [BIOS](#bios)
+ - [Setting up CPU Uncore frequency](#setting-up-cpu-uncore-frequency)
- [Host kernel command line](#host-kernel-command-line)
+- [Deploying Access Edge CERA for FlexRAN](#deploying-access-edge-cera-for-flexran)
- [Deploying and Running the FlexRAN pod](#deploying-and-running-the-flexran-pod)
- [Setting up 1588 - PTP based Time synchronization](#setting-up-1588---ptp-based-time-synchronization)
- [Setting up PTP](#setting-up-ptp)
- [Primary clock](#primary-clock)
- [Secondary clock](#secondary-clock)
- [BIOS configuration](#bios-configuration)
+- [CPU frequency configuration](#cpu-frequency-configuration)
- [References](#references)
# Introduction
-Radio Access Network (RAN) is the edge of wireless network. 4G and 5G base stations form the key network function for the edge deployment. In OpenNESS, FlexRAN is used as a reference for 4G and 5G base stations as well as 4G and 5G end-to-end testing.
-FlexRAN offers high-density baseband pooling that could run on a distributed Telco\* cloud to provide a smart indoor coverage solution and next-generation fronthaul architecture. This 4G and 5G platform provides the open platform ‘smarts’ for both connectivity and new applications at the edge of the network, along with the developer tools to create these new services. FlexRAN running on the Telco Cloud provides low latency compute, storage, and network offload from the edge. Thus, saving network bandwidth.
+Radio Access Network (RAN) is the edge of wireless network. 4G and 5G base stations form the key network function for the edge deployment. In OpenNESS, FlexRAN is used as a reference for 4G and 5G base stations as well as 4G and 5G end-to-end testing.
-FlexRAN 5GNR Reference PHY is a baseband PHY Reference Design for a 4G and 5G base station, using Intel® Xeon® processor family with Intel® architecture. This 5GNR Reference PHY consists of a library of c-callable functions that are validated on several technologies from Intel (Intel® microarchitecture code name Broadwell, Intel® microarchitecture code name Skylake, Cascade Lake, and Ice Lake) and demonstrates the capabilities of the software running different 5GNR L1 features. The functionality of these library functions is defined by the relevant sections in [3GPP TS 38.211, 212, 213, 214, and 215]. Performance of the Intel 5GNR Reference PHY meets the requirements defined by the base station conformance tests in [3GPP TS 38.141]. This library of functions will be used by Intel partners and end customers as a foundation for their product development. Reference PHY is integrated with third-party L2 and L3 to complete the base station pipeline.
+FlexRAN offers high-density baseband pooling that could run on a distributed Telco\* cloud to provide a smart indoor coverage solution and next-generation fronthaul architecture. This 4G and 5G platform provides the open platform ‘smarts’ for both connectivity and new applications at the edge of the network, along with the developer tools to create these new services. FlexRAN running on the Telco Cloud provides low latency compute, storage, and network offload from the edge. Thus, saving network bandwidth.
-The diagram below shows FlexRAN DU (Real-time L1 and L2) deployed on the OpenNESS platform with the necessary microservices and Kubernetes\* enhancements required for real-time workload deployment.
+FlexRAN 5GNR Reference PHY is a baseband PHY Reference Design for a 4G and 5G base station, using Intel® Xeon® processor family with Intel® architecture. This 5GNR Reference PHY consists of a library of c-callable functions that are validated on several technologies from Intel (Intel® microarchitecture code name Broadwell, Intel® microarchitectures code name Skylake, Cascade Lake, and Intel® microarchitecture Ice Lake) and demonstrates the capabilities of the software running different 5GNR L1 features. The functionality of these library functions is defined by the relevant sections in [3GPP TS 38.211, 212, 213, 214, and 215]. Performance of the Intel 5GNR Reference PHY meets the requirements defined by the base station conformance tests in [3GPP TS 38.141]. This library of functions will be used by Intel partners and end customers as a foundation for their product development. Reference PHY is integrated with third-party L2 and L3 to complete the base station pipeline.
+
+The diagram below shows FlexRAN DU (Real-time L1 and L2) deployed on the OpenNESS platform with the necessary microservices and Kubernetes\* enhancements required for real-time workload deployment.
![FlexRAN DU deployed on OpenNESS](openness-ran.png)
-This document aims to provide the steps involved in deploying FlexRAN 5G (gNb) on the OpenNESS platform.
+This document aims to provide the steps involved in deploying FlexRAN 5G (gNb) on the OpenNESS platform.
->**NOTE**: This document covers both FlexRAN 4G and 5G. All the steps mentioned in this document use 5G for reference. Refer to the [FlexRAN 4G Reference Solution L1 User Guide #570228](https://cdrdv2.intel.com/v1/dl/getContent/570228) for minor updates needed to build, deploy, and test FlexRAN 4G.
+>**NOTE**: This document covers both FlexRAN 4G and 5G. All the steps mentioned in this document use 5G for reference. Refer to the [FlexRAN 4G Reference Solution L1 User Guide #570228](https://cdrdv2.intel.com/v1/dl/getContent/570228) for minor updates needed to build, deploy, and test FlexRAN 4G.
-# Building the FlexRAN image
+# Building the FlexRAN image
This section explains the steps involved in building the FlexRAN image. Only L1 and L2-stub will be part of these steps. Real-time L2 (MAC and RLC) and non-real-time L2 and L3 are out of scope as it is a part of the third-party component.
1. Contact your Intel representative to obtain the package
2. Untar the FlexRAN package.
3. Set the required environmental variables:
- ```
- export RTE_SDK=$localPath/dpdk-19.11
+
+ ```shell
+ export RTE_SDK=$localPath/dpdk-20.11
export RTE_TARGET=x86_64-native-linuxapp-icc
export WIRELESS_SDK_TARGET_ISA=avx512
export RPE_DIR=${flexranPath}/libs/ferrybridge
@@ -56,97 +61,204 @@ This section explains the steps involved in building the FlexRAN image. Only L1
export FLEXRAN_SDK=${DIR_WIRELESS_SDK}/install
export DIR_WIRELESS_TABLE_5G=${flexranPath}/bin/nr5g/gnb/l1/table
```
- >**NOTE**: The environmental variables path must be updated according to your installation and file/directory names.
-4. Build L1, WLS interface between L1, L2, and L2-Stub (testmac):
- `./flexran_build.sh -r 5gnr_sub6 -m testmac -m wls -m l1app -b -c`
+
+ >**NOTE**: The environmental variables path must be updated according to your installation and file/directory names.
+
+4. Build L1, WLS interface between L1, L2, and L2-Stub (testmac):
+
+ ```shell
+ # ./flexran_build.sh -r 5gnr_sub6 -m testmac -m wls -m l1app -b -c
+ ```
+
5. Once the build has completed, copy the required binary files to the folder where the Docker\* image is built. This can be done by using the provided example [build-du-dev-image.sh](https://github.com/open-ness/edgeapps/blob/master/network-functions/ran/5G/du-dev/build-du-dev-image.sh) script from the Edge Apps OpenNESS repository; it will copy the files from the paths provided as environmental variables in the previous step. The script copies the files into the directory containing the Dockerfile and commences the Docker build.
+
```shell
- git clone https://github.com/open-ness/edgeapps.git
- cd edgeapps/network-functions/ran/5G/du-dev
- ./build-du-dev-image.sh
+ # git clone https://github.com/open-ness/edgeapps.git
+ # cd edgeapps/network-functions/ran/5G/du-dev
+ # ./build-du-dev-image.sh
```
- The list of binary files that are used is documented in [dockerfile](https://github.com/open-ness/edgeapps/blob/master/network-functions/ran/5G/flexRAN-gnb/Dockerfile)
- - ICC, IPP mpi and mkl Runtime
- - DPDK build target directory
- - FlexRAN test vectors (optional)
- - FlexRAN L1 and testmac (L2-stub) binary
- - FlexRAN SDK modules
- - FlexRAN WLS share library
- - FlexRAN CPA libraries
+ The list of binary files that are used is documented in [dockerfile](https://github.com/open-ness/edgeapps/blob/master/network-functions/ran/5G/du-dev/Dockerfile)
+ - ICC, IPP mpi and mkl Runtime
+ - DPDK build target directory
+ - FlexRAN test vectors (optional)
+ - FlexRAN L1 and testmac (L2-stub) binary
+ - FlexRAN SDK modules
+ - FlexRAN WLS share library
+ - FlexRAN CPA libraries
6. The following example reflects the Docker image [expected by Helm chart](https://github.com/open-ness/edgeapps/blob/master/network-functions/ran/charts/du-dev/values.yaml), user needs to adjust the IP address and port of the Harbor registry where Docker image will be pushed:
-
+
```shell
image:
repository: :/intel/flexran5g # Change Me! - please provide IP address and port
# of Harbor registry where FlexRAN docker image is uploaded
- tag: 3.10.0-1127.19.1.rt56 # The tag identifying the FlexRAN docker image,
+ tag: 3.10.0-1160.11.1.rt56 # The tag identifying the FlexRAN docker image,
# the kernel version used to build FlexRAN can be used as tag
```
-7. Tag the image and push to a local Harbor registry (Harbor registry deployed as part of OpenNESS Experience Kit)
-
+
+7. Tag the image and push to a local Harbor registry (Harbor registry deployed as part of Converged Edge Experience Kits)
+
```shell
- docker tag flexran5g :/intel/flexran5g:3.10.0-1127.19.1.rt56
+ # docker tag flexran5g :/intel/flexran5g:3.10.0-1160.11.1.rt56
- docker push :/intel/flexran5g:3.10.0-1127.19.1.rt56
+ # docker push :/intel/flexran5g:3.10.0-1160.11.1.rt56
```
-By the end of step 7, the FlexRAN Docker image is created and available in the Harbor registry. This image is copied to the edge node where FlexRAN will be deployed and that is installed with OpenNESS Network edge with all the required EPA features including Intel® FPGA Programmable Acceleration Card (Intel® FPGA PAC) N3000. Please refer to the document [Using FPGA in OpenNESS: Programming, Resource Allocation, and Configuration](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-fpga.md) for details on setting up Intel® FPGA PAC N3000 with vRAN FPGA image.
+By the end of step 7, the FlexRAN Docker image is created and available in the Harbor registry. This image is copied to the edge node where FlexRAN will be deployed; that node is installed with OpenNESS Network Edge and all the required EPA features, including the Intel® FPGA Programmable Acceleration Card (Intel® FPGA PAC) N3000. Refer to [Using FPGA in OpenNESS: Programming, Resource Allocation, and Configuration](../../building-blocks/enhanced-platform-awareness/openness-fpga.md) for details on setting up the Intel® FPGA PAC N3000 with the vRAN FPGA image, or alternatively to [Using the Intel vRAN Dedicated Accelerator ACC100 on OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-acc100.md#using-the-intel-vran-dedicated-accelerator-acc100-on-openness) for details on setting up the Intel vRAN Dedicated Accelerator ACC100 for FEC acceleration.
+
+# FlexRAN hardware platform configuration
-# FlexRAN hardware platform configuration
-## BIOS
-FlexRAN on Intel® microarchitecture code name Skylake and Cascade Lake technology from Intel requires a BIOS configuration that disables C-state and enables Config TDP level-2. Refer to the [BIOS configuration](#bios-configuration) section in this document.
+## BIOS
+
+FlexRAN on Intel® microarchitecture code names Skylake, Cascade Lake, and Ice Lake requires a BIOS configuration that disables C-state and enables Config TDP level-2. Refer to the [BIOS configuration](#bios-configuration) section in this document.
+
+## Setting up CPU Uncore frequency
+
+FlexRAN on Intel® microarchitecture code names Skylake, Cascade Lake, and Ice Lake requires that the CPU frequency and uncore frequency are set up for optimal performance. Refer to the [CPU frequency configuration](#cpu-frequency-configuration) section in this document.
## Host kernel command line
+```shell
+usbcore.autosuspend=-1 selinux=0 enforcing=0 nmi_watchdog=0 softlockup_panic=0 audit=0 intel_pstate=disable cgroup_memory=1 cgroup_enable=memory mce=off idle=poll isolcpus=0-23,25-47,49-71,73-95 rcu_nocbs=0-23,25-47,49-71,73-95 kthread_cpus=0,24,48,72 irqaffinity=0,24,48,72 nohz_full=0-23,25-47,49-71,73-95 hugepagesz=1G hugepages=30 default_hugepagesz=1G intel_iommu=on iommu=pt pci=realloc pci=assign-busses rdt=l3cat
```
-usbcore.autosuspend=-1 selinux=0 enforcing=0 nmi_watchdog=0 softlockup_panic=0 audit=0 intel_pstate=disable cgroup_memory=1 cgroup_enable=memory mce=off idle=poll isolcpus=1-23,25-47 rcu_nocbs=1-23,25-47 kthread_cpus=0,24 irqaffinity=0,24 nohz_full=1-23,25-47 hugepagesz=1G hugepages=50 default_hugepagesz=1G intel_iommu=on iommu=pt pci=realloc pci=assign-busses
-```
-Host kernel version - 3.10.0-1062.12.1.rt56.1042.el7.x86_64
+> **NOTE**: CPU ID-related parameters may vary according to the CPU SKU.
+
+Host kernel version - 3.10.0-1160.11.1.rt56.1145.el7.x86_64
+
+Instructions on how to configure the kernel command line in OpenNESS can be found in the [OpenNESS getting started documentation](../../getting-started/converged-edge-experience-kits.md#customizing-kernel-grub-parameters-and-tuned-profile--variables-per-host).
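+
+A minimal sketch of the relevant per-node variables is shown below; the variable names (`hugepage_size`, `hugepage_amount`, `additional_grub_params`) are illustrative assumptions and should be checked against the getting started documentation linked above for the release in use:
+
+```yaml
+# Illustrative only - confirm variable names and placement (group_vars/host_vars) in the CEEK documentation
+hugepage_size: "1G"
+hugepage_amount: "30"
+additional_grub_params: "idle=poll isolcpus=0-23,25-47,49-71,73-95 rcu_nocbs=0-23,25-47,49-71,73-95 nohz_full=0-23,25-47,49-71,73-95 kthread_cpus=0,24,48,72 irqaffinity=0,24,48,72"
+```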
+
+# Deploying Access Edge CERA for FlexRAN
+
+Information about the Access Edge CERA and other CERAs can be found in the [flavors.md documentation](https://github.com/open-ness/ido-specs/blob/master/doc/flavors.md#cera-access-edge-flavor). Additionally, users are encouraged to familiarize themselves with the [converged-edge-experience-kits documentation](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/converged-edge-experience-kits.md).
+
+1. Fulfill the [pre-conditions for deploying OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/network-edge/controller-edge-node-setup.md#preconditions)
+
+2. Configure the specification for the Access Edge CERA, located under the `flavors/flexran` directory. The following settings may need to be adjusted (an illustrative excerpt of these settings is shown after this list).
+
+3. Edit `flavors/flexran/all.yml` as necessary.
+
+  - `fpga_sriov_userspace_enable` can be set to `true` (default) or `false` depending on the type of accelerator used by FlexRAN for FEC hardware offload. See [Intel® FPGA PAC N3000](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-fpga.md#intelr-fpga-pac-n3000-flexran-host-interface-overview) support in OpenNESS.
+  - `fpga_userspace_vf` can be set to `enabled: true` (default) or `enabled: false` depending on the type of accelerator used by FlexRAN for FEC hardware offload. See [Intel® vRAN Dedicated Accelerator ACC100](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-acc100.md) and [Intel® FPGA PAC N3000](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-fpga.md#intelr-fpga-pac-n3000-flexran-host-interface-overview) support in OpenNESS.
+  - `acc100_sriov_userspace_enable` can be set to `true` or `false` (default) depending on the type of accelerator used by FlexRAN for FEC hardware offload. See [Intel® vRAN Dedicated Accelerator ACC100](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-acc100.md) support in OpenNESS.
+  - `acc100_userspace_vf` can be set to `enabled: true` or `enabled: false` (default) depending on the type of accelerator used by FlexRAN for FEC hardware offload. See [Intel® vRAN Dedicated Accelerator ACC100](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-acc100.md) and [Intel® FPGA PAC N3000](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-fpga.md#intelr-fpga-pac-n3000-flexran-host-interface-overview) support in OpenNESS.
+  - `ne_opae_fpga_enable` can be set to `true` (default) or `false` depending on whether [Intel® FPGA PAC N3000](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-fpga.md#intelr-fpga-pac-n3000-flexran-host-interface-overview) programming with OPAE is to be supported within OpenNESS.
+  - `reserved_cpus` needs to be set according to the CPU SKU, the number of available CPUs, and the user's desire to [limit the OS and K8s processes to non-RT CPUs](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-topology-manager.md#details---topology-manager-support-in-openness). It is critical that the CPUs selected for `reserved_cpus` exist on the Edge Node, as forcing K8s processes onto a CPU that does not exist will cause a K8s deployment failure. The usual (default) choice of CPUs for the K8s and OS threads in a FlexRAN deployment is the first CPU ID on each NUMA node (i.e., on a 24-core platform with two NUMA nodes, `reserved_cpus: "0,24"`). If Hyper-Threading is enabled, the CPU IDs of both siblings are expected, i.e., `reserved_cpus: "0,24,48,72"`.
+  - `e810_driver_enable` (default `true`) provides support for installing the recommended versions of the `ice` and `iavf` kernel drivers for E810 series Intel NICs. This can be disabled if the user does not require this functionality.
+  - `rmd_operator_enable` (default `true`) provides support for deploying the [RMD operator](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-rmd.md), enabling configuration of LLC (Last Level Cache) and MBC (Memory Bandwidth Configuration) through RDT.
+    > Note: At the time of writing, the RMD operator version supporting the 3rd Generation Intel® Xeon® Scalable Processors (code named Ice Lake) is not yet available. This may cause a crash of the RMD operator DaemonSet when deployed on Ice Lake.
+
+4. Depending on the enabled features, provide the required files under the correct directories (the directories are to be created by the user).
+
+   - When `ne_biosfw_enable` is enabled, create an `ido-converged-edge-experience-kits/ceek/biosfw` directory and copy the [syscfg_package.zip](https://downloadcenter.intel.com/download/29693?v=t) file into it.
+     > Note: At the time of writing, the version of the SYSCFG utility supporting the 3rd Generation Intel® Xeon® Scalable Processors platform is not yet generally available.
+   - When `ne_opae_fpga_enable` is enabled, create an `ido-converged-edge-experience-kits/ceek/opae_fpga` directory and copy the [OPAE_SDK_1.3.7-5_el7.zip](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-fpga.md#converged-edge-experience-kits) file into it.
+   - When `e810_driver_enable` is enabled, create an `ido-converged-edge-experience-kits/ceek/nic_drivers` directory and copy the [ice-1.3.2.tar.gz](https://downloadcenter.intel.com/download/30303/Intel-Network-Adapter-Driver-for-E810-Series-Devices-under-Linux-) and [iavf-4.0.2.tar.gz](https://downloadcenter.intel.com/download/30305/Intel-Network-Adapter-Linux-Virtual-Function-Driver-for-Intel-Ethernet-Controller-700-and-E810-Series) files into it.
+
+5. Edit the [inventory.yml](https://github.com/open-ness/ido-converged-edge-experience-kits/blob/master/inventory.yml) file as necessary. For more information, see the [sample deployment definitions](https://github.com/open-ness/specs/blob/master/doc/getting-started/converged-edge-experience-kits.md#sample-deployment-definitions). Below is an example that deploys OpenNESS on one Edge Controller and one Edge Node, as the `openness` user.
+
+ ```yaml
+ all:
+ vars:
+ cluster_name: flexran_cluster # NOTE: Use `_` instead of spaces.
+ flavor: flexran # NOTE: Flavors can be found in `flavors` directory.
+ single_node_deployment: false # Request single node deployment (true/false).
+ limit: # Limit ansible deployment to certain inventory group or hosts
+ controller_group:
+ hosts:
+ controller:
+ ansible_host:
+ ansible_user: openness
+ edgenode_group:
+ hosts:
+ node01:
+ ansible_host:
+ ansible_user: openness
+ edgenode_vca_group:
+ hosts:
+ ptp_master:
+ hosts:
+ ptp_slave_group:
+ hosts:
+ ```
+
+6. Run the deployment helper script:
-Instructions on how to configure the kernel command line in OpenNESS can be found in [OpenNESS getting started documentation](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/openness-experience-kits.md#customizing-kernel-grub-parameters-and-tuned-profile--variables-per-host)
+ ```shell
+ # sudo scripts/ansible-precheck.sh
+ ```
+
+7. Deploy OpenNESS
+
+ ```shell
+ # python3 deploy.py
+ ```
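+
+For reference, below is a hypothetical excerpt of `flavors/flexran/all.yml` combining the settings discussed in step 3 for an Intel® FPGA PAC N3000 based deployment; the exact keys and nesting (in particular the nested `enabled` fields) are assumptions and must be verified against the flavor file shipped with the release:
+
+```yaml
+# Illustrative excerpt only - verify key names and structure against flavors/flexran/all.yml
+fpga_sriov_userspace_enable: true     # FPGA PAC N3000 used for FEC offload
+fpga_userspace_vf:
+  enabled: true
+acc100_sriov_userspace_enable: false  # set to true (and disable the FPGA options) when using ACC100
+acc100_userspace_vf:
+  enabled: false
+ne_opae_fpga_enable: true             # OPAE-based FPGA programming support
+reserved_cpus: "0,24,48,72"           # first CPU ID of each NUMA node plus HT siblings
+e810_driver_enable: true
+rmd_operator_enable: true
+```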
# Deploying and Running the FlexRAN pod
-1. Deploy the OpenNESS cluster with [SRIOV for FPGA enabled](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-fpga.md#fpga-fec-ansible-installation-for-openness-network-edge).
+1. Deploy the OpenNESS cluster with [Access Edge CERA](https://github.com/open-ness/ido-specs/blob/master/doc/flavors.md#cera-access-edge-flavor) enabled.
+
2. Using `kubectl get pods`, confirm that no FlexRAN pods or FPGA configuration pods are currently deployed.
-3. Confirm that all the EPA microservice and enhancements (part of OpenNESS playbook) are deployed `kubectl get po --all-namespaces`.
- ```yaml
- NAMESPACE NAME READY STATUS RESTARTS AGE
- kube-ovn kube-ovn-cni-8x5hc 1/1 Running 17 7d19h
- kube-ovn kube-ovn-cni-p6v6s 1/1 Running 1 7d19h
- kube-ovn kube-ovn-controller-578786b499-28lvh 1/1 Running 1 7d19h
- kube-ovn kube-ovn-controller-578786b499-d8d2t 1/1 Running 3 5d19h
- kube-ovn ovn-central-5f456db89f-l2gps 1/1 Running 0 7d19h
- kube-ovn ovs-ovn-56c4c 1/1 Running 17 7d19h
- kube-ovn ovs-ovn-fm279 1/1 Running 5 7d19h
- kube-system coredns-6955765f44-2lqm7 1/1 Running 0 7d19h
- kube-system coredns-6955765f44-bpk8q 1/1 Running 0 7d19h
- kube-system etcd-silpixa00394960 1/1 Running 0 7d19h
- kube-system kube-apiserver-silpixa00394960 1/1 Running 0 7d19h
- kube-system kube-controller-manager-silpixa00394960 1/1 Running 0 7d19h
- kube-system kube-multus-ds-amd64-bpq6s 1/1 Running 17 7d18h
- kube-system kube-multus-ds-amd64-jf8ft 1/1 Running 0 7d19h
- kube-system kube-proxy-2rh9c 1/1 Running 0 7d19h
- kube-system kube-proxy-7jvqg 1/1 Running 17 7d19h
- kube-system kube-scheduler-silpixa00394960 1/1 Running 0 7d19h
- kube-system kube-sriov-cni-ds-amd64-crn2h 1/1 Running 17 7d19h
- kube-system kube-sriov-cni-ds-amd64-j4jnt 1/1 Running 0 7d19h
- kube-system kube-sriov-device-plugin-amd64-vtghv 1/1 Running 0 7d19h
- kube-system kube-sriov-device-plugin-amd64-w4px7 1/1 Running 0 4d21h
- openness eaa-78b89b4757-7phb8 1/1 Running 3 5d19h
- openness edgedns-mdvds 1/1 Running 16 7d18h
- openness interfaceservice-tkn6s 1/1 Running 16 7d18h
- openness nfd-master-82dhc 1/1 Running 0 7d19h
- openness nfd-worker-h4jlt 1/1 Running 37 7d19h
- openness syslog-master-894hs 1/1 Running 0 7d19h
- openness syslog-ng-n7zfm 1/1 Running 16 7d19h
- ```
-4. Deploy the Kubernetes job to program the [FPGA](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-fpga.md#fpga-programming-and-telemetry-on-openness-network-edge)
-5. Deploy the Kubernetes job to configure the [BIOS](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-bios.md) (note: only works on select Intel development platforms)
-6. Deploy the Kubernetes job to configure the [Intel PAC N3000 FPGA](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-fpga.md#fec-vf-configuration-for-openness-network-edge)
+
+3. Confirm that all the EPA microservices and enhancements (part of the OpenNESS playbook) are deployed, using `kubectl get pods --all-namespaces`:
+
+ ```shell
+ NAMESPACE NAME READY STATUS RESTARTS AGE
+ default intel-rmd-operator-78c8d6b47c-h6hrv 1/1 Running 0 2h
+ default rmd-node-agent-silpixa00400827 1/1 Running 0 2h
+ default rmd-silpixa00400827 1/1 Running 0 2h
+ harbor harbor-app-harbor-chartmuseum-74fb748c4d-zg96l 1/1 Running 0 2h
+ harbor harbor-app-harbor-clair-779df4555b-z8nmj 2/2 Running 0 2h
+ harbor harbor-app-harbor-core-69477b9f7c-rkq7m 1/1 Running 0 2h
+ harbor harbor-app-harbor-database-0 1/1 Running 0 2h
+ harbor harbor-app-harbor-jobservice-75bf777dc9-rk2ww 1/1 Running 0 2h
+ harbor harbor-app-harbor-nginx-98b8cc48-5tx4n 1/1 Running 0 2h
+ harbor harbor-app-harbor-notary-server-7dbbfd5775-rx5zc 1/1 Running 0 2h
+ harbor harbor-app-harbor-notary-signer-64f4879947-q6bgh 1/1 Running 0 2h
+ harbor harbor-app-harbor-portal-fd5ff4bc9-bh2wc 1/1 Running 0 2h
+ harbor harbor-app-harbor-redis-0 1/1 Running 0 2h
+ harbor harbor-app-harbor-registry-68cd7c59c7-fhddp 2/2 Running 0 2h
+ harbor harbor-app-harbor-trivy-0 1/1 Running 0 2h
+ kafka cluster-entity-operator-55894648cb-682ln 3/3 Running 0 2h
+ kafka cluster-kafka-0 2/2 Running 0 2h
+ kafka cluster-zookeeper-0 1/1 Running 0 2h
+ kafka strimzi-cluster-operator-68b6d59f74-jj7vf 1/1 Running 0 2h
+ kube-system calico-kube-controllers-646546699f-wl6rn 1/1 Running 0 2h
+ kube-system calico-node-hrtn4 1/1 Running 0 2h
+ kube-system coredns-74ff55c5b-shpw2 1/1 Running 0 2h
+ kube-system coredns-74ff55c5b-w4s7s 1/1 Running 0 2h
+ kube-system descheduler-cronjob-1615305120-xrj48 0/1 Completed 0 2h
+ kube-system etcd-silpixa00400827 1/1 Running 0 2h
+ kube-system kube-apiserver-silpixa00400827 1/1 Running 0 2h
+ kube-system kube-controller-manager-silpixa00400827 1/1 Running 0 2h
+ kube-system kube-multus-ds-amd64-v2dhr 1/1 Running 0 2h
+ kube-system kube-proxy-vg57p 1/1 Running 0 2h
+ kube-system kube-scheduler-silpixa00400827 1/1 Running 0 2h
+ kube-system sriov-release-kube-sriov-cni-ds-amd64-mqfh6 1/1 Running 0 2h
+ kube-system sriov-release-kube-sriov-device-plugin-amd64-cxx6g 1/1 Running 0 2h
+ openness certsigner-6cb79468b5-q2zhr 1/1 Running 0 2h
+ openness eaa-69c7bb7b5d-nqghg 1/1 Running 0 2h
+ openness edgedns-xjwpk 1/1 Running 0 2h
+ openness nfd-release-node-feature-discovery-master-748fff4b6f-89w2j 1/1 Running 0 2h
+ openness nfd-release-node-feature-discovery-worker-5bnvb 1/1 Running 0 2h
+ telemetry collectd-wgcvw 2/2 Running 0 2h
+ telemetry custom-metrics-apiserver-55bdf684ff-tqwwv 1/1 Running 0 2h
+ telemetry grafana-9db5b9cdb-j652q 2/2 Running 0 2h
+ telemetry otel-collector-f9b9d494-h622t 2/2 Running 0 2h
+ telemetry prometheus-node-exporter-jt2cf 1/1 Running 0 2h
+ telemetry prometheus-server-8656f6bf98-r2d9q 3/3 Running 0 2h
+ telemetry telemetry-aware-scheduling-69dbb979f6-n5cz6 2/2 Running 0 2h
+ telemetry telemetry-collector-certs-5glnn 0/1 Completed 0 2h
+ telemetry telemetry-node-certs-vw4fh 1/1 Running 0 2h
+ ```
+
+4. Deploy the Kubernetes job to program the [FPGA](../../building-blocks/enhanced-platform-awareness/openness-fpga.md#fpga-programming-and-telemetry-on-openness-network-edge)
+
+5. Deploy the Kubernetes job to configure the [BIOS](../../building-blocks/enhanced-platform-awareness/openness-bios.md) (note: only works on select Intel development platforms)
+
+6. Deploy the Kubernetes job to configure the [Intel PAC N3000 FPGA](../../building-blocks/enhanced-platform-awareness/openness-fpga.md#fec-vf-configuration-for-openness-network-edge)
+
7. Deploy the FlexRAN Kubernetes pod using a helm chart provided in Edge Apps repository at `edgeapps/network-functions/ran/charts`:
```shell
@@ -154,13 +266,14 @@ Instructions on how to configure the kernel command line in OpenNESS can be foun
```
8. `exec` into the FlexRAN pod: `kubectl exec -it flexran -- /bin/bash`
+
9. Find the PCI Bus function device ID of the FPGA VF assigned to the pod:
```shell
printenv | grep FEC
```
-11. Edit `phycfg_timer.xml` used for configuration of L1 application with the PCI Bus function device ID from the previous step to offload FEC to this device:
+10. Edit `phycfg_timer.xml`, which is used to configure the L1 application, with the PCI Bus function device ID from the previous step to offload FEC to this device:
```xml
@@ -168,16 +281,20 @@ Instructions on how to configure the kernel command line in OpenNESS can be foun
0000:1d:00.1
```
-12. Once in the FlexRAN pod L1 and test-L2 (testmac) can be started.
-# Setting up 1588 - PTP based Time synchronization
+11. Once in the FlexRAN pod, the L1 and test-L2 (testmac) applications can be started; a hypothetical example is shown below.
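+
+    The commands below are only a sketch; the install prefix, script names, and test configuration files depend on the FlexRAN release packaged into the image and must be adjusted accordingly:
+
+    ```shell
+    # Hypothetical paths - adjust to the FlexRAN release inside the container
+    cd /opt/flexran/bin/nr5g/gnb/l1
+    ./l1.sh -e        # start the L1 application (flags depend on the desired test mode)
+    # in a second shell session into the same pod:
+    cd /opt/flexran/bin/nr5g/gnb/testmac
+    ./l2.sh           # start the test L2 (testmac), pointing it at the desired test configuration
+    ```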
+
+# Setting up 1588 - PTP based Time synchronization
+
This section provides an overview of setting up PTP-based time synchronization in a cloud-native Kubernetes/Docker environment. For FlexRAN-specific xRAN fronthaul tests and configurations, please refer to the xRAN-specific document in the reference section.
>**NOTE**: The PTP-based time synchronization method described here is applicable only for containers. For VMs, methods based on Virtual PTP need to be applied and this is not covered in this document.
## Setting up PTP
+
In the environment that needs to be synchronized, install the linuxptp package, which provides the ptp4l and phc2sys applications. The PTP setup needs a primary clock and a secondary clock; the secondary clock will be synchronized to the primary clock. The primary clock is configured first. A supported NIC is required to use hardware time stamps. To check whether the NIC supports hardware time stamps, run ethtool; output similar to the following should appear:
-```shell
+
+```shell
# ethtool -T eno4
Time stamping parameters for eno4:
Capabilities:
@@ -201,30 +318,34 @@ Hardware Receive Filter Modes:
The time in containers is the same as on the host machine, and so it is enough to synchronize the host to the primary clock.
PTP requires a few kernel configuration options to be enabled:
+
- CONFIG_PPS
- CONFIG_NETWORK_PHY_TIMESTAMPING
- CONFIG_PTP_1588_CLOCK
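+
+A quick way to confirm that these options are enabled on a running CentOS system is to inspect the kernel configuration file, for example:
+
+```shell
+# each option should be reported as =y or =m
+grep -E "CONFIG_PPS=|CONFIG_NETWORK_PHY_TIMESTAMPING=|CONFIG_PTP_1588_CLOCK=" /boot/config-$(uname -r)
+```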
## Primary clock
-This is an optional step if you already have a primary clock. The below steps explain how to set up a Linux system to behave like ptp GM.
+
+This is an optional step if you already have a primary clock. The steps below explain how to set up a Linux system to behave like a PTP grandmaster (GM).
On the primary clock side, take a look at the `/etc/sysconfig/ptp4l` file. It is the `ptp4l` daemon configuration file where starting options will be provided. Its content should look like this:
-```shell
+
+```shell
OPTIONS="-f /etc/ptp4l.conf -i "
```
+
`` is the interface name used for time stamping and `/etc/ptp4l.conf` is a configuration file for the `ptp4l` instance.
To determine the primary clock, the PTP protocol uses the BMC (Best Master Clock) algorithm, so it is not always obvious in advance which clock will be chosen as the primary. However, users can indicate which timer is preferred as the primary clock by editing `/etc/ptp4l.conf` and setting the `priority1` property to `127`, as in the excerpt below.
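+
+A minimal excerpt of `/etc/ptp4l.conf` with this change, assuming the default file layout shipped with linuxptp, looks like:
+
+```shell
+[global]
+# lower value = more preferred; marks this host as the preferred primary clock
+priority1               127
+```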
After that, start the `ptp4l` service:
-```shell
+```shell
service ptp4l start
```
Output from the service can be checked in `/var/log/messages`; for the primary clock, it should look like this:
-```shell
+```shell
Mar 16 17:08:57 localhost ptp4l: ptp4l[23627.304]: selected /dev/ptp2 as PTP clock
Mar 16 17:08:57 localhost ptp4l: [23627.304] selected /dev/ptp2 as PTP clock
Mar 16 17:08:57 localhost ptp4l: [23627.306] port 1: INITIALIZING to LISTENING on INITIALIZE
@@ -248,26 +369,30 @@ OPTIONS="-c -s CLOCK_REALTIME -w"
```
Replace `` with the interface name. Start the phc2sys service.
+
```shell
service phc2sys start
```
+
Logs can be viewed in `/var/log/messages` and look like this:
-```shell
+```shell
phc2sys[3656456.969]: Waiting for ptp4l...
phc2sys[3656457.970]: sys offset -6875996252 s0 freq -22725 delay 1555
phc2sys[3656458.970]: sys offset -6875996391 s1 freq -22864 delay 1542
phc2sys[3656459.970]: sys offset -52 s2 freq -22916 delay 1536
phc2sys[3656460.970]: sys offset -29 s2 freq -22909 delay 1548
phc2sys[3656461.971]: sys offset -25 s2 freq -22913 delay 1549
-```
+```
## Secondary clock
+
The secondary clock configuration will be the same as the primary clock except for the `phc2sys` options and the `priority1` property for `ptp4l`. For the secondary clock, the `priority1` property in `/etc/ptp4l.conf` should keep its default value (128). Run the `ptp4l` service. To keep the system time synchronized to the PHC time, change the `phc2sys` options in `/etc/sysconfig/phc2sys` as follows:
-```shell
+```shell
OPTIONS="phc2sys -s -w"
-```
+```
+
Replace `` with the interface name. Logs will be available at `/var/log/messages`.
```shell
@@ -278,12 +403,14 @@ phc2sys[28920.407]: phc offset 308 s2 freq +5470 delay 947
phc2sys[28921.407]: phc offset 408 s2 freq +5662 delay 947
phc2sys[28922.407]: phc offset 394 s2 freq +5771 delay 947
```
+
From this moment on, both clocks should be synchronized. Any Docker container running in a pod uses the same clock as the host, so its clock will be synchronized as well.
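+
+Optionally, the synchronization state can be inspected with the PTP management client shipped with linuxptp; once the clocks are locked, `offsetFromMaster` should be small and stable:
+
+```shell
+pmc -u -b 0 'GET CURRENT_DATA_SET'
+```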
+# BIOS configuration
-# BIOS configuration
+Below is a subset of the BIOS configuration. It contains the list of BIOS features that are recommended for a FlexRAN DU deployment.
-Below is the subset of the BIOS configuration. It contains the list of BIOS features that are recommended to be configured for FlexRAN DU deployment.
+BIOS configuration for 2nd Generation Intel® Xeon® Scalable Processor platforms:
```shell
[BIOS::Advanced]
@@ -330,12 +457,37 @@ Memory Mapped I/O above 4 GB=Enabled
SR-IOV Support=Enabled
```
+# CPU frequency configuration
+
+Below is a script that configures the CPU frequency and uncore frequency for optimal performance; it needs to be adjusted according to the specific CPU SKU.
+
+To run the script, install the msr-tools package:
+
+```shell
+yum install -y msr-tools
+```
+
+Example for a 2nd Generation Intel® Xeon® Scalable Processor (Intel® Xeon® Gold 6252N):
+
+```shell
+#!/bin/bash
+
+cpupower frequency-set -g performance
+
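+# MSR 0x199 (IA32_PERF_CTL): 0x1900 requests a core ratio of 0x19 (25 x 100 MHz = 2.5 GHz) - adjust per SKU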
+wrmsr -a 0x199 0x1900
+
+#Set Uncore max frequency
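+# MSR 0x620 (uncore ratio limit): 0x1e1e pins the min and max uncore ratio to 0x1e (30 x 100 MHz = 3.0 GHz);
+# the -p argument selects the logical CPU used to reach each package (presumably one per socket here)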
+wrmsr -p 0 0x620 0x1e1e
+wrmsr -p 35 0x620 0x1e1e
+```
+
# References
+
- FlexRAN Reference Solution Software Release Notes - Document ID:575822
- FlexRAN Reference Solution LTE eNB L2-L1 API Specification - Document ID:571742
- FlexRAN 5G New Radio Reference Solution L2-L1 API Specification - Document ID:603575
- FlexRAN 4G Reference Solution L1 User Guide - Document ID:570228
- FlexRAN 5G NR Reference Solution L1 User Guide - Document ID:603576
- FlexRAN Reference Solution L1 XML Configuration User Guide - Document ID:571741
-- FlexRAN 5G New Radio FPGA User Guide - Document ID:603578
+- FlexRAN 5G New Radio FPGA User Guide - Document ID:603578
- FlexRAN Reference Solution xRAN FrontHaul SAS - Document ID:611268
diff --git a/doc/reference-architectures/ran/openness_xran.md b/doc/reference-architectures/ran/openness_xran.md
index ececca9c..37816588 100644
--- a/doc/reference-architectures/ran/openness_xran.md
+++ b/doc/reference-architectures/ran/openness_xran.md
@@ -1,6 +1,6 @@
```text
SPDX-License-Identifier: Apache-2.0
-Copyright (c) 2020 Intel Corporation
+Copyright (c) 2020-2021 Intel Corporation
```
# O-RAN Front Haul Sample Application in OpenNESS
@@ -20,7 +20,7 @@ Copyright (c) 2020 Intel Corporation
- [xRAN Library](#xran-library)
- [Packet Classification](#packet-classification)
- [xRAN Library Sample Application](#xran-library-sample-application)
- - [Precision Time Protocol Synchronization](precision-time-protocol-synchronization)
+ - [Precision Time Protocol Synchronization](#precision-time-protocol-synchronization)
- [eCPRI DDP Profile](#ecpri-ddp-profile)
- [xRAN Sample App Deployment in OpenNESS](#xran-sample-app-deployment-in-openness)
- [Hardware Configuration and Checks](#hardware-configuration-and-checks)
@@ -406,11 +406,11 @@ Verify the i40e driver version of the NIC to be used and the firmware version on
## Deploy xRAN sample app O-DU and O-RU in OpenNESS Network Edge
-Before starting the deployment script, OpenNESS should be configured according to the instructions available [here](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/network-edge/controller-edge-node-setup.md)
+Before starting the deployment script, OpenNESS should be configured according to the instructions available [here](../../getting-started/openness-cluster-setup.md)
Additional configuration steps are provided below.
### Setting up SRIOV
-1. Modify the `group_vars/all/10-default.yml` file as follows:
+1. Modify the `inventory/default/group_vars/all/10-open.yml` file as follows:
```yaml
kubernetes_cnis:
@@ -426,7 +426,7 @@ Additional configuration steps are provided below.
kubeovn_dpdk: false
```
-2. Modify `host_vars/.yml`. Provide the physical addresses of the connected interface to be used by the xRAN sample application and the number of VFs to be created on each of the connected physical ports. Each port needs to have 2 VFs. The SRIOV setting should look similar to:
+2. Modify `inventory/default/host_vars//10-open.yml`. Provide the physical addresses of the connected interface to be used by the xRAN sample application and the number of VFs to be created on each of the connected physical ports. Each port needs to have 2 VFs. The SRIOV setting should look similar to:
```yaml
sriov:
@@ -437,7 +437,7 @@ Additional configuration steps are provided below.
vm_vf_ports: 0
```
-Detailed instructions on configuring SRIOV for OpenNESS can be found [here](https://github.com/open-ness/ido-specs/blob/master/doc/enhanced-platform-awareness/openness-sriov-multiple-interfaces.md)
+Detailed instructions on configuring SRIOV for OpenNESS can be found [here](../../building-blocks/enhanced-platform-awareness/openness-sriov-multiple-interfaces.md)
3. Modify SRIOV ConfigMap
@@ -456,7 +456,7 @@ Modify SRIOV ConfigMap. In the file `roles/kubernetes/cni/sriov/controlplane/fil
### Amend GRUB and tuned configuration
-In file `./group_vars/edgenode_group.yml`, change the following settings:
+In file `./inventory/default/group_vars/edgenode_group.yml`, change the following settings:
>**NOTE**: These configuration settings are for real-time kernels. The expected kernel version is - 3.10.0-1062.12.1.rt56.1042.el7.x86_64
@@ -481,21 +481,21 @@ In file `./group_vars/edgenode_group.yml`, change the following settings:
Host kernel version should be - 3.10.0-1062.12.1.rt56.1042.el7.x86_64
-Instructions on how to configure the kernel command line in OpenNESS can be found in [OpenNESS getting started documentation](https://github.com/open-ness/specs/blob/master/doc/getting-started/openness-experience-kits.md#customizing-kernel-grub-parameters-and-tuned-profile--variables-per-host)
+Instructions on how to configure the kernel command line in OpenNESS can be found in [OpenNESS getting started documentation](../../getting-started/converged-edge-experience-kits.md#customizing-kernel-grub-parameters-and-tuned-profile--variables-per-host)
### PTP Synchronization
-To enable PTP synchronization, modify one setting in `./group_vars/all.sh`:
+To enable PTP synchronization, modify one setting in `./inventory/default/group_vars/all/10-open.yml`:
```yaml
ptp_sync_enable: true
```
-For the two nodes that are to be synchronized with PTP, modify files `host_vars/nodeXX.yml`
+For the two nodes that are to be synchronized with PTP, modify files `inventory/default/host_vars/nodeXX/10-open.yml`
Example:
-For node "node01", modify file `host_vars/node01.yml`
+For node "node01", modify file `inventory/default/host_vars/node01/10-open.yml`
1. For PTP Configuration 1 [see](#xran-sample-app-deployment-in-openness)
@@ -541,16 +541,18 @@ Example:
```
### Deploy Openness NE
-Run the deployment script:
+Define the `inventory.yml` and then run the deployment script:
```shell
- ./deploy_ne.sh
+ python3 deploy.py
```
+> **NOTE**: For more details about deployment and defining the inventory, please refer to the [CEEK](../../getting-started/converged-edge-experience-kits.md#converged-edge-experience-kit-explained) getting started page.
+
Check the `/proc/cmdline` output. It should look similar to:
```shell
#cat /proc/cmdline
- BOOT_IMAGE=/vmlinuz-3.10.0-1127.19.1.rt56.1116.el7.x86_64 root=/dev/mapper/centosroot ro crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap intel_iommu=on iommu=pt usbcore.autosuspend=-1 selinux=0 enforcing=0 nmi_watchdog=0 softlockup_panic=0 audit=0 intel_pstate=disable cgroup_memory=1 cgroup_enable=memory mce=off idle=poll hugepagesz=1G hugepages=16 hugepagesz=2M hugepages=0 default_hugepagesz=1G isolcpus=1-19,21-39 rcu_nocbs=1-19,21-39 kthread_cpus=0,20 irqaffinity=0,20 nohz_full=1-19,21-39
+ BOOT_IMAGE=/vmlinuz-3.10.0-1160.11.1.rt56.1145.el7.x86_64 root=/dev/mapper/centosroot ro crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap intel_iommu=on iommu=pt usbcore.autosuspend=-1 selinux=0 enforcing=0 nmi_watchdog=0 softlockup_panic=0 audit=0 intel_pstate=disable cgroup_memory=1 cgroup_enable=memory mce=off idle=poll hugepagesz=1G hugepages=16 hugepagesz=2M hugepages=0 default_hugepagesz=1G isolcpus=1-19,21-39 rcu_nocbs=1-19,21-39 kthread_cpus=0,20 irqaffinity=0,20 nohz_full=1-19,21-39
```
### Configure Interfaces
diff --git a/doc/reference-architectures/sdwan-images/ewo-dnat-setup.png b/doc/reference-architectures/sdwan-images/ewo-dnat-setup.png
new file mode 100755
index 00000000..c71a0572
Binary files /dev/null and b/doc/reference-architectures/sdwan-images/ewo-dnat-setup.png differ
diff --git a/doc/reference-architectures/sdwan-images/ewo-network-cnf-interface.png b/doc/reference-architectures/sdwan-images/ewo-network-cnf-interface.png
new file mode 100755
index 00000000..7a599a3c
Binary files /dev/null and b/doc/reference-architectures/sdwan-images/ewo-network-cnf-interface.png differ
diff --git a/doc/reference-architectures/sdwan-images/ewo-node-select.png b/doc/reference-architectures/sdwan-images/ewo-node-select.png
new file mode 100755
index 00000000..7bcb13a6
Binary files /dev/null and b/doc/reference-architectures/sdwan-images/ewo-node-select.png differ
diff --git a/doc/reference-architectures/sdwan-images/ewo-snat-setup.png b/doc/reference-architectures/sdwan-images/ewo-snat-setup.png
new file mode 100755
index 00000000..3c9be5e4
Binary files /dev/null and b/doc/reference-architectures/sdwan-images/ewo-snat-setup.png differ
diff --git a/doc/reference-architectures/sdwan-images/ewo-tunnel-setup.png b/doc/reference-architectures/sdwan-images/ewo-tunnel-setup.png
new file mode 100755
index 00000000..211aa19c
Binary files /dev/null and b/doc/reference-architectures/sdwan-images/ewo-tunnel-setup.png differ
diff --git a/doc/reference-architectures/sdwan-images/packet-flow-tx.png b/doc/reference-architectures/sdwan-images/packet-flow-tx.png
index f7e5b017..274a62be 100644
Binary files a/doc/reference-architectures/sdwan-images/packet-flow-tx.png and b/doc/reference-architectures/sdwan-images/packet-flow-tx.png differ
diff --git a/openness_releasenotes.md b/openness_releasenotes.md
index dfaf322b..81c84148 100644
--- a/openness_releasenotes.md
+++ b/openness_releasenotes.md
@@ -1,6 +1,6 @@
```text
SPDX-License-Identifier: Apache-2.0
-Copyright (c) 2019-2020 Intel Corporation
+Copyright (c) 2019-2021 Intel Corporation
```
# Release Notes
@@ -13,6 +13,7 @@ This document provides high-level system features, issues, and limitations infor
- [OpenNESS - 20.06](#openness---2006)
- [OpenNESS - 20.09](#openness---2009)
- [OpenNESS - 20.12](#openness---2012)
+ - [OpenNESS - 21.03](#openness---2103)
- [Changes to Existing Features](#changes-to-existing-features)
- [OpenNESS - 19.06](#openness---1906-1)
- [OpenNESS - 19.06.01](#openness---190601)
@@ -22,6 +23,7 @@ This document provides high-level system features, issues, and limitations infor
- [OpenNESS - 20.06](#openness---2006-1)
- [OpenNESS - 20.09](#openness---2009-1)
- [OpenNESS - 20.12](#openness---2012-1)
+ - [OpenNESS - 21.03](#openness---2103-1)
- [Fixed Issues](#fixed-issues)
- [OpenNESS - 19.06](#openness---1906-2)
- [OpenNESS - 19.06.01](#openness---190601-1)
@@ -32,6 +34,7 @@ This document provides high-level system features, issues, and limitations infor
- [OpenNESS - 20.09](#openness---2009-2)
- [OpenNESS - 20.12](#openness---2012-2)
- [OpenNESS - 20.12.02](#openness---201202)
+ - [OpenNESS - 21.03](#openness---2103-2)
- [Known Issues and Limitations](#known-issues-and-limitations)
- [OpenNESS - 19.06](#openness---1906-3)
- [OpenNESS - 19.06.01](#openness---190601-3)
@@ -42,6 +45,7 @@ This document provides high-level system features, issues, and limitations infor
- [OpenNESS - 20.09](#openness---2009-3)
- [OpenNESS - 20.12](#openness---2012-3)
- [OpenNESS - 20.12.02](#openness---201202-1)
+ - [OpenNESS - 21.03](#openness---2103-3)
- [Release Content](#release-content)
- [OpenNESS - 19.06](#openness---1906-4)
- [OpenNESS - 19.06.01](#openness---190601-4)
@@ -52,8 +56,10 @@ This document provides high-level system features, issues, and limitations infor
- [OpenNESS - 20.09](#openness---2009-4)
- [OpenNESS - 20.12](#openness---2012-4)
- [OpenNESS - 20.12.02](#openness---201202-2)
+ - [OpenNESS - 21.03](#openness---2103-4)
- [Hardware and Software Compatibility](#hardware-and-software-compatibility)
- [Intel® Xeon® D Processor](#intel-xeon-d-processor)
+ - [3rd Generation Intel® Xeon® Scalable Processors - Early Access](#3rd-generation-intel-xeon-scalable-processors---early-access)
- [2nd Generation Intel® Xeon® Scalable Processors](#2nd-generation-intel-xeon-scalable-processors)
- [Intel® Xeon® Scalable Processors](#intel-xeon-scalable-processors)
- [Supported Operating Systems](#supported-operating-systems)
@@ -142,7 +148,7 @@ This document provides high-level system features, issues, and limitations infor
- Non-Privileged Container: Support deployment of non-privileged pods (CNFs and Applications as reference)
- Edge Compute EPA features support for On-Premises
- Using Intel® Movidius™ Myriad™ X High Density Deep Learning (HDDL) solution in OpenNESS
-- OpenNESS Experience Kit for Network and OnPremises edge
+- Converged Edge Experience Kits for Network and OnPremises edge
- Offline Release Package: Customers should be able to create an installer package that can be used to install OnPremises version of OpenNESS without the need for Internet access.
- 5G NR Edge Cloud deployment support
- 5G NR edge cloud deployment support with SA mode
@@ -287,6 +293,20 @@ This document provides high-level system features, issues, and limitations infor
- Support Intel® vRAN Dedicated Accelerator ACC100, Kubernetes Cloud-native deployment supporting higher capacity 4G/LTE and 5G vRANs cells/carriers for FEC offload.
- Major system Upgrades: Kubernetes 1.19.3, CentOS 7.8, Calico 3.16, and Kube-OVN 1.5.2.
+## OpenNESS - 21.03
+- EMCO: Hardware Platform Awareness (HPA) based Placement Intent support. Demonstrated using Smart City Pipeline with CPU and VCAC-A mode.
+- Edge Insights for Industrial updated to 2.4
+- SD-WAN Flavor deployment automation improvement
+- Support for Intel® Ethernet Controller E810
+- Improvements to Converged Edge Reference Architecture framework including support for deploying one or more OpenNESS Kubernetes clusters
+- OpenVINO upgraded to 2021.1.110
+- Early Access support for 3rd Generation Intel® Xeon® Scalable Processors
+- Major system upgrades: CentOS 7.9, Kubernetes 1.20.0, Docker 20.10.2, QEMU 5.2 and Golang 1.16.
+- Kubernetes CNI upgrades: Calico 3.17, SR-IOV CNI 2.6, Flannel 0.13.0.
+- Telemetry upgrades: CAdvisor 0.37.5, Grafana 7.4.2, Prometheus 2.24.0, Prometheus Node Exporter 1.0.1.
+- Set Calico as the default CNI for the cdn-transcode, central_orchestrator, core-cplane, core-uplane, media-analytics and minimal flavors.
+- Intel CMK is replaced with Kubernetes native CPU manager for core resource allocation
+
# Changes to Existing Features
## OpenNESS - 19.06
@@ -314,7 +334,13 @@ There are no unsupported or discontinued features relevant to this release.
## OpenNESS - 20.12
There are no unsupported or discontinued features relevant to this release.
-
+## OpenNESS - 21.03
+- FlexRAN/Access Edge CERA Flavor is only available in the Intel Distribution of OpenNESS
+- OpenNESS repositories have been consolidated into the following:
+ - https://github.com/open-ness/ido-converged-edge-experience-kits
+ - https://github.com/open-ness/ido-specs
+ - https://github.com/open-ness/ido-edgeservices
+ - https://github.com/open-ness/ido-epcforedge
# Fixed Issues
## OpenNESS - 19.06
@@ -360,6 +386,10 @@ There are no non-Intel issues relevant to this release.
- Fixed TAS deployment
- Updated SR-IOV CNI and device plugin to fix issues with image build in offline package creator
+## OpenNESS - 21.03
+- Offline deployment issues related to zlib-devel version 1.2.7-19
+- CAdvisor resource utilization has been optimized using `--docker_only=true`, which decreased CPU usage from 15-25% to 5-6% (confirmed with the `docker stats` and `top` commands). Memory usage also decreased by around 15-20%.
+
# Known Issues and Limitations
## OpenNESS - 19.06
There are no issues relevant to this release.
@@ -411,6 +441,15 @@ There is one issue relevant to this release: it is not possible to remove the ap
## OpenNESS - 20.12.02
- Offline deployment issues related to zlib-devel version 1.2.7-19
+## OpenNESS - 21.03
+- An issue appears when the KubeVirt Containerized Data Importer (CDI) upload pod is deployed with the Kube-OVN CNI: the deployed pod's readiness probe fails and the pod never reaches the ready state. Calico CNI is used by default in OpenNESS when using CDI.
+- Telemetry deployment with PCM enabled will cause a deployment failure in single-node cluster deployments due to a conflict with the CollectD deployment; it is advised not to use PCM and CollectD at the same time in OpenNESS at this time.
+- Kafka and Zookeeper resource consumption is on the higher side. Users need to consider this when deploying in the context of uCPE and SD-WAN.
+- When the flannel CNI is used and a worker node is manually joined or re-joined to the cluster, the
+`kubectl patch node NODE_NAME -p '{ "spec":{ "podCIDR":"10.244.0.0/24" }}'`
+command should be issued on the controller to enable the flannel CNI on that node.
+- Cloud native enablement for the Access Edge CERA is functional; however, FlexRAN tests in timer mode show instability in this release. This issue is being investigated and will be addressed with a hotfix post-release.
+
# Release Content
## OpenNESS - 19.06
@@ -446,6 +485,11 @@ OpenNESS Edge node, OpenNESS Controller, Common, Spec, OpenNESS Applications, an
- Open Source: Edge node, Controller, Epcforedge, Common, Spec, Applications and Experience kit.
- IDO: IDO Edge node, IDO Controller, IDO Epcforedge, IDO Spec and IDO Experience kit.
+## OpenNESS - 21.03
+ - https://github.com/open-ness/ido-converged-edge-experience-kits
+ - https://github.com/open-ness/ido-specs
+ - https://github.com/open-ness/ido-edgeservices
+ - https://github.com/open-ness/ido-epcforedge
# Hardware and Software Compatibility
OpenNESS Edge Node has been tested using the following hardware specification:
@@ -454,6 +498,14 @@ OpenNESS Edge Node has been tested using the following hardware specification:
- Motherboard type: [X11SDV-16C-TP8F](https://www.supermicro.com/products/motherboard/Xeon/D/X11SDV-16C-TP8F.cfm)
- Intel® Xeon® Processor D-2183IT
+## 3rd Generation Intel® Xeon® Scalable Processors - Early Access
+
+| | |
+| ------------ | ---------------------------------------------------------- |
+| ICX-SP | Compute Node based on 3rd Generation Intel® Xeon® Scalable Processors |
+| NIC | Intel® Ethernet Controller E810 |
+
+
## 2nd Generation Intel® Xeon® Scalable Processors
| | |
@@ -494,9 +546,11 @@ OpenNESS Edge Node has been tested using the following hardware specification:
# Supported Operating Systems
-OpenNESS was tested on CentOS Linux release 7.8.2003 (Core)
-> **NOTE**: OpenNESS is tested with CentOS 7.8 Pre-empt RT kernel to ensure VNFs and Applications can co-exist. There is no requirement from OpenNESS software to run on a Pre-empt RT kernel.
+OpenNESS was tested on CentOS Linux release 7.9.2009 (Core)
+> **NOTE**: OpenNESS is tested with CentOS 7.9 Pre-empt RT kernel to ensure VNFs and Applications can co-exist. There is no requirement from OpenNESS software to run on a Pre-empt RT kernel.
# Packages Version
-Package: telemetry, cadvisor 0.36.0, grafana 7.0.3, prometheus 2.16.0, prometheus: node exporter 1.0.0-rc.0, golang 1.15, docker 19.03.12, kubernetes 1.19.3, dpdk 19.11, ovs 2.14.0, ovn 2.14.0, helm 3.0, kubeovn 1.5.2, flannel 0.12.0, calico 3.16.0, multus 3.6, sriov cni 2.3, nfd 0.6.0, cmk v1.4.1, TAS (from specific commit "a13708825e854da919c6fdf05d50753113d04831")
+Package: telemetry, cadvisor 0.37.5, grafana 7.4.2, prometheus 2.24.0, prometheus: node exporter 1.0.1, golang 1.16, docker 20.10.2, kubernetes 1.20.0, dpdk 19.11.1, ovs 2.14.0, ovn 2.14.0, helm 3.1.2, kubeovn 1.5.2, flannel 0.13.0, calico 3.17.0, multus 3.6, sriov cni 2.6, nfd 0.6.0, cmk v1.4.1, TAS (from specific commit "a13708825e854da919c6fdf05d50753113d04831"), openssl 1.1.1i, QEMU 5.2
+
+> OpenNESS uses openwrt-18.06.4-x86-64 for the SD-WAN reference solution and it does not include the latest functional and security updates. openwrt-19.07.5-x86-64 or the latest at the time of development will be targeted to be released in 2nd Half of 2021 and will include additional functional and security updates. Customers should update to the latest version as it becomes available.
\ No newline at end of file