+ diff --git a/doc/cloud-adapters/openness_baiducloud.md b/doc/cloud-adapters/openness_baiducloud.md index d59a856b..57619722 100644 --- a/doc/cloud-adapters/openness_baiducloud.md +++ b/doc/cloud-adapters/openness_baiducloud.md @@ -322,7 +322,7 @@ The scripts can be found in the release package with the subfolder name `setup_b └── measure_rtt_openedge.py ``` -Before running the scripts, install python3.6 and paho mqtt on a CentOS\* Linux\* machine, where the recommended version is CentOS Linux release 7.6.1810 (Core). +Before running the scripts, install python3.6 and paho mqtt on a CentOS\* Linux\* machine, where the recommended version is CentOS Linux release 7.8.2003 (Core). The following are recommended install commands: ```docker diff --git a/doc/devkits/index.html b/doc/devkits/index.html new file mode 100644 index 00000000..ca350b29 --- /dev/null +++ b/doc/devkits/index.html @@ -0,0 +1,14 @@ + + +--- +title: OpenNESS Documentation +description: Home +layout: openness +--- +You are being redirected to the OpenNESS Docs.
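Referring back to the `openness_baiducloud.md` hunk above, which mentions "recommended install commands" for python3.6 and paho mqtt on CentOS 7.8: a minimal sketch of what such commands could look like is shown below. The package names are assumptions, not taken from the guide; the instructions shipped with the release package take precedence.

```shell
# Sketch only - package names are assumptions for CentOS 7.8.2003
sudo yum install -y epel-release          # optional; python3 is available in base repos on 7.8
sudo yum install -y python3 python3-pip   # provides Python 3.6.x and pip3
sudo pip3 install paho-mqtt               # MQTT client library used by the measurement scripts
```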
+ diff --git a/doc/devkits/openness-azure-devkit.md b/doc/devkits/openness-azure-devkit.md new file mode 100644 index 00000000..889f9af5 --- /dev/null +++ b/doc/devkits/openness-azure-devkit.md @@ -0,0 +1,17 @@ +```text +SPDX-License-Identifier: Apache-2.0 +Copyright (c) 2020 Intel Corporation +``` + +# OpenNESS Development Kit for Microsoft Azure + +## Introduction + +This devkit supports the use of OpenNESS in cloud solutions. It leverages the Azure Stack for OpenNESS deployment. +The devkit offers a quick and easy way to deploy OpenNESS on cloud for developers and businesses. It contains templates +for automated depoyment, and supports deployment using Porter. It enables cloud solutions supported by Intel's processors. + +## Getting Started + +Following document contains steps for quick deployment on Azure: +* [openness-experience-kits/cloud/README.md: Deployment and setup guide](https://github.com/open-ness/openness-experience-kits/blob/master/cloud/README.md) diff --git a/doc/enhanced-platform-awareness/hddl-images/hddlservice.png b/doc/enhanced-platform-awareness/hddl-images/hddlservice.png deleted file mode 100644 index 415e1c6e..00000000 Binary files a/doc/enhanced-platform-awareness/hddl-images/hddlservice.png and /dev/null differ diff --git a/doc/flavors.md b/doc/flavors.md index 1124c636..cbb84c1b 100644 --- a/doc/flavors.md +++ b/doc/flavors.md @@ -3,19 +3,27 @@ SPDX-License-Identifier: Apache-2.0 Copyright (c) 2020 Intel Corporation ``` +- [OpenNESS Deployment Flavors](#openness-deployment-flavors) + - [CERA Minimal Flavor](#cera-minimal-flavor) + - [CERA Access Edge Flavor](#cera-access-edge-flavor) + - [CERA Media Analytics Flavor](#cera-media-analytics-flavor) + - [CERA Media Analytics Flavor with VCAC-A](#cera-media-analytics-flavor-with-vcac-a) + - [CERA CDN Transcode Flavor](#cera-cdn-transcode-flavor) + - [CERA CDN Caching Flavor](#cera-cdn-caching-flavor) + - [CERA Core Control Plane Flavor](#cera-core-control-plane-flavor) + - [CERA Core User Plane Flavor](#cera-core-user-plane-flavor) + - [CERA Untrusted Non3gpp Access Flavor](#cera-untrusted-non3gpp-access-flavor) + - [CERA Near Edge Flavor](#cera-near-edge-flavor) + - [CERA 5G On-Prem Flavor](#cera-5g-on-prem-flavor) + - [Reference Service Mesh](#reference-service-mesh) + - [Central Orchestrator Flavor](#central-orchestrator-flavor) + # OpenNESS Deployment Flavors + This document introduces the supported deployment flavors that are deployable through OpenNESS Experience Kits (OEKs. -- [Minimal Flavor](#minimal-flavor) -- [FlexRAN Flavor](#flexran-flavor) -- [Service Mesh Flavor](#service-mesh-flavor) -- [Media Analytics Flavor](#media-analytics-flavor) -- [Media Analytics Flavor with VCAC-A](#media-analytics-flavor-with-vcac-a) -- [CDN Transcode Flavor](#cdn-transcode-flavor) -- [CDN Caching Flavor](#cdn-caching-flavor) -- [Core Control Plane Flavor](#core-control-plane-flavor) -- [Core User Plane Flavor](#core-user-plane-flavor) - -## Minimal Flavor + +## CERA Minimal Flavor + The pre-defined *minimal* deployment flavor provisions the minimal set of configurations for bringing up the OpenNESS network edge deployment. The following are steps to install this flavor: @@ -30,60 +38,36 @@ This deployment flavor enables the following ingredients: * The default Kubernetes CNI: `kube-ovn` * Telemetry -## FlexRAN Flavor + +## CERA Access Edge Flavor + The pre-defined *flexran* deployment flavor provisions an optimized system configuration for vRAN workloads on Intel® Xeon® platforms. 
It also provisions for deployment of Intel® FPGA Programmable Acceleration Card (Intel® FPGA PAC) N3000 tools and components to enable offloading for the acceleration of FEC (Forward Error Correction) to the FPGA. The following are steps to install this flavor: 1. Configure the OEK as described in the [OpenNESS Getting Started Guide for Network Edge](getting-started/network-edge/controller-edge-node-setup.md). -2. Run the OEK deployment script: +2. Configure the flavor file to reflect desired deployment. + - Configure the CPUs selected for isolation and OS/K8s processes from command line in files [controller_group.yml](https://github.com/open-ness/openness-experience-kits/blob/master/flavors/flexran/controller_group.yml) and [edgenode_group.yml](https://github.com/open-ness/openness-experience-kits/blob/master/flavors/flexran/edgenode_group.yml) - please note that in single node mode the edgenode_group.yml is used to configure the CPU isolation. + - Configure the amount of CPUs reserved for K8s and OS from K8s level with `reserved_cpu` flag in [all.yml](https://github.com/open-ness/openness-experience-kits/blob/master/flavors/flexran/all.yml) file. + - Configure whether the FPGA or eASIC support for FEC is desired or both in [all.yml](https://github.com/open-ness/openness-experience-kits/blob/master/flavors/flexran/all.yml) file. + +3. Run OEK deployment script: ```shell $ deploy_ne.sh -f flexran ``` This deployment flavor enables the following ingredients: -* Node feature discovery +* Node Feature Discovery * SRIOV device plugin with FPGA configuration * Calico CNI * Telemetry * FPGA remote system update through OPAE * FPGA configuration +* eASIC ACC100 configuration * RT Kernel * Topology Manager * RMD operator -## Service Mesh Flavor -The pre-defined *service-mesh* deployment flavor installs the OpenNESS service mesh that is based on [Istio](https://istio.io/). - -Steps to install this flavor are as follows: -1. Configure OEK as described in the [OpenNESS Getting Started Guide for Network Edge](getting-started/network-edge/controller-edge-node-setup.md). -2. Run OEK deployment script: - ```shell - $ deploy_ne.sh -f service-mesh - ``` - -This deployment flavor enables the following ingredients: -* Node Feature Discovery -* The default Kubernetes CNI: `kube-ovn` -* Istio service mesh -* Kiali management console -* Telemetry - -> **NOTE:** Kiali management console username & password can be changed by editing the variables `istio_kiali_username` & `istio_kiali_password`. - -Following parameters in the flavor/all.yaml can be customize for Istio deployment: - -``` -# Istio deployment profile possible values: default, demo, minimal, remote -istio_deployment_profile: "default" +## CERA Media Analytics Flavor -# Kiali -istio_kiali_username: "admin" -istio_kiali_password: "admin" -istio_kiali_nodeport: 30001 -``` - -> **NOTE:** If creating a customized flavor, the Istio service mesh installation can be included in the Ansible playbook by setting the flag `ne_istio_enable: true` in the flavor file. - -## Media Analytics Flavor The pre-defined *media-analytics* deployment flavor provisions an optimized system configuration for media analytics workloads on Intel® Xeon® platforms. It also provisions a set of video analytics services based on the [Video Analytics Serving](https://github.com/intel/video-analytics-serving) for analytics pipeline management and execution. 
The following are steps to install this flavor: @@ -94,19 +78,18 @@ The following are steps to install this flavor: ``` > **NOTE:** The video analytics services integrates with the OpenNESS service mesh when the flag `ne_istio_enable: true` is set. -> **NOTE:** Kiali management console username & password can be changed by editing the variables `istio_kiali_username` & `istio_kiali_password`. +> **NOTE:** Kiali management console username can be changed by editing the variable `istio_kiali_username`. By default `istio_kiali_password` is randomly generated and can be retirieved by running `kubectl get secrets/kiali -n istio-system -o json | jq -r '.data.passphrase' | base64 -d` on the Kubernetes controller. This deployment flavor enables the following ingredients: * Node feature discovery -* VPU and GPU device plugins -* HDDL daemonset * The default Kubernetes CNI: `kube-ovn` * Video analytics services * Telemetry * Istio service mesh - conditional * Kiali management console - conditional -## Media Analytics Flavor with VCAC-A +## CERA Media Analytics Flavor with VCAC-A + The pre-defined *media-analytics-vca* deployment flavor provisions an optimized system configuration for media analytics workloads leveraging Visual Cloud Accelerator Card – Analytics (VCAC-A) acceleration. It also provisions a set of video analytics services based on the [Video Analytics Serving](https://github.com/intel/video-analytics-serving) for analytics pipeline management and execution. The following are steps to install this flavor: @@ -117,7 +100,7 @@ The following are steps to install this flavor: silpixa00400194 ``` - > **NOTE:** The VCA host name should *only* be placed once in the `inventory.ini` file and under the `[edgenode_vca_group]` group. + > **NOTE:** The VCA host name should *only* be placed once in the `inventory.ini` file and under the `[edgenode_vca_group]` group. 3. Run the OEK deployment script: ```shell @@ -125,6 +108,7 @@ The following are steps to install this flavor: ``` > **NOTE:** At the time of writing this document, *Weave Net*\* is the only supported CNI for network edge deployments involving VCAC-A acceleration. The `weavenet` CNI is automatically selected by the *media-analytics-vca*. +> **NOTE:** The flag `force_build_enable` (default true) supports force build VCAC-A system image (VCAD) by default, it is defined in flavors/media-analytics-vca/all.yml. By setting the flag as false, OEK will not rebuild the image and re-use the last system image built during deployment. If the flag is true, OEK will force build VCA host kernel and node system image which will take several hours. This deployment flavor enables the following ingredients: * Node feature discovery @@ -134,8 +118,9 @@ This deployment flavor enables the following ingredients: * Video analytics services * Telemetry -## CDN Transcode Flavor -The pre-defined *cdn-transcode* deployment flavor provisions an optimized system configuration for Content Delivery Network (CDN) transcode sample workloads on Intel® Xeon® platforms. +## CERA CDN Transcode Flavor + +The pre-defined *cdn-transcode* deployment flavor provisions an optimized system configuration for Content Delivery Network (CDN) transcode sample workloads on Intel® Xeon® platforms. The following are steps to install this flavor: 1. Configure the OEK as described in the [OpenNESS Getting Started Guide for Network Edge](getting-started/network-edge/controller-edge-node-setup.md). 
@@ -149,8 +134,9 @@ This deployment flavor enables the following ingredients: * The default Kubernetes CNI: `kube-ovn` * Telemetry -## CDN Caching Flavor -The pre-defined *cdn-caching* deployment flavor provisions an optimized system configuration for CDN content delivery workloads on Intel® Xeon® platforms. +## CERA CDN Caching Flavor + +The pre-defined *cdn-caching* deployment flavor provisions an optimized system configuration for CDN content delivery workloads on Intel® Xeon® platforms. The following are steps to install this flavor: 1. Configure the OEK as described in the [OpenNESS Getting Started Guide for Network Edge](getting-started/network-edge/controller-edge-node-setup.md). @@ -165,9 +151,9 @@ This deployment flavor enables the following ingredients: * Telemetry * Kubernetes Topology Manager policy: `single-numa-node` -## Core Control Plane Flavor +## CERA Core Control Plane Flavor -The pre-defined Core Control Plane flavor provisions the minimal set of configurations for 5G Control Plane Network Functions on Intel® Xeon® platforms. +The pre-defined Core Control Plane flavor provisions the minimal set of configurations for 5G Control Plane Network Functions on Intel® Xeon® platforms. The following are steps to install this flavor: @@ -195,7 +181,7 @@ This deployment flavor enables the following ingredients: > **NOTE:** Istio service mesh is enabled by default in the `core-cplane` deployment flavor. To deploy 5G CNFs without Istio, the flag `ne_istio_enable` in `flavors/core-cplane/all.yml` must be set to `false`. -## Core User Plane Flavor +## CERA Core User Plane Flavor The pre-defined Core Control Plane flavor provisions the minimal set of configurations for a 5G User Plane Function on Intel® Xeon® platforms. @@ -217,3 +203,132 @@ This deployment flavor enables the following ingredients: - HugePages of size 1Gi and the amount of HugePages as 8G for the nodes > **NOTE**: For a reference UPF deployment, refer to [5G UPF Edge App](https://github.com/open-ness/edgeapps/tree/master/network-functions/core-network/5G/UPF) + +## CERA Untrusted Non3gpp Access Flavor + +The pre-defined Untrusted Non3pp Access flavor provisions the minimal set of configurations for a 5G Untrusted Non3gpp Access Network Functions like Non3GPP Interworking Function(N3IWF) on Intel® Xeon® platforms. + +The following are steps to install this flavor: + +1. Configure the OEK as described in the [OpenNESS Getting Started Guide for Network Edge](getting-started/network-edge/controller-edge-node-setup.md). + +2. Run the x-OEK deployment script: + + ```bash + $ ido-openness-experience-kits# deploy_ne.sh -f untrusted-non3pp-access + ``` + +This deployment flavor enables the following ingredients: + +- Node feature discovery +- Kubernetes CNI: calico and SR-IOV. +- Kubernetes Device Plugin +- Telemetry +- HugePages of size 1Gi and the amount of HugePages as 10G for the nodes + +## CERA Near Edge Flavor + +The pre-defined CERA Near Edge flavor provisions the required set of configurations for a 5G Converged Edge Reference Architecture for Near Edge deployments on Intel® Xeon® platforms. + +The following are steps to install this flavor: +1. Configure the OEK under CERA repository as described in the [Converged Edge Reference Architecture Near Edge](https://github.com/open-ness/ido-specs/blob/master/doc/reference-architectures/CERA-Near-Edge.md). + +2. 
Run the x-OEK for CERA deployment script: + ```shell + $ ido-converged-edge-experience-kits# deploy_openness_for_cera.sh + ``` + +This deployment flavor enables the following ingredients: + +- Kubernetes CNI: kube-ovn and SRIOV. +- SR-IOV support for kube-virt +- Virtual Functions +- CPU Manager for Kubernetes (CMK) with 16 exclusive cores and 1 core in shared pool. +- Kubernetes Device Plugin +- BIOSFW feature +- Telemetry +- HugePages of size 1Gi and the amount of HugePages as 8G for the nodes +- RMD operator + +## CERA 5G On-Prem Flavor + +The pre-defined CERA Near Edge flavor provisions the required set of configurations for a 5G Converged Edge Reference Architecture for On Premises deployments on Intel® Xeon® platforms. It also provisions for deployment of Intel® FPGA Programmable Acceleration Card (Intel® FPGA PAC) N3000 tools and components to enable offloading for the acceleration of FEC (Forward Error Correction) to the FPGA. + +The following are steps to install this flavor: +1. Configure the OEK under CERA repository as described in the [Converged Edge Reference Architecture On Premises Edge](https://github.com/open-ness/ido-specs/blob/master/doc/reference-architectures/CERA-5G-On-Prem.md). + +2. Run the x-OEK for CERA deployment script: + ```shell + $ ido-converged-edge-experience-kits# deploy_openness_for_cera.sh + ``` + +This deployment flavor enables the following ingredients: + +- Kubernetes CNI: Calico and SRIOV. +- SRIOV device plugin with FPGA configuration +- Virtual Functions +- FPGA remote system update through OPAE +- FPGA configuration +- RT Kernel +- Topology Manager +- Kubernetes Device Plugin +- BIOSFW feature +- Telemetry +- HugePages of size 1Gi and the amount of HugePages as 40G for the nodes +- RMD operator + +## Reference Service Mesh + +Service Mesh technology enables services discovery and sharing of data between application services. This technology can be useful in any CERA. Customers will find Service Mesh under flavors directory as a reference to quickly try out the technology and understand the implications. In future OpenNESS releases this Service Mesh will not be a dedicated flavor. + +The pre-defined *service-mesh* deployment flavor installs the OpenNESS service mesh that is based on [Istio](https://istio.io/). + +> **NOTE**: When deploying Istio Service Mesh in VMs, a minimum of 8 CPU core and 16GB RAM must be allocated to each worker VM so that Istio operates smoothly + +Steps to install this flavor are as follows: +1. Configure OEK as described in the [OpenNESS Getting Started Guide for Network Edge](getting-started/network-edge/controller-edge-node-setup.md). +2. Run OEK deployment script: + ```shell + $ deploy_ne.sh -f service-mesh + ``` + +This deployment flavor enables the following ingredients: +* Node Feature Discovery +* The default Kubernetes CNI: `kube-ovn` +* Istio service mesh +* Kiali management console +* Telemetry + +> **NOTE:** Kiali management console username can be changed by editing the variable `istio_kiali_username`. By default `istio_kiali_password` is randomly generated and can be retirieved by running `kubectl get secrets/kiali -n istio-system -o json | jq -r '.data.passphrase' | base64 -d` on the Kubernetes controller. 
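As a quick usage sketch for the note above, the following commands (run on the Kubernetes controller) check that the service mesh came up and retrieve the generated Kiali credentials; the namespace `istio-system` and the NodePort value come from the defaults shown in this section.

```shell
# Verify the Istio control plane and Kiali pods are running
kubectl get pods -n istio-system

# Retrieve the randomly generated Kiali password (same command as in the note above)
kubectl get secrets/kiali -n istio-system -o json | jq -r '.data.passphrase' | base64 -d

# The Kiali UI is then reachable on the controller at the configured NodePort
# (30001 by default, per istio_kiali_nodeport), e.g. http://<controller-ip>:30001
```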
+ +Following parameters in the flavor/all.yaml can be customize for Istio deployment: + +```code +# Istio deployment profile possible values: default, demo, minimal, remote +istio_deployment_profile: "default" + +# Kiali +istio_kiali_username: "admin" +istio_kiali_password: "{{ lookup('password', '/dev/null length=16') }}" +istio_kiali_nodeport: 30001 +``` + +> **NOTE:** If creating a customized flavor, the Istio service mesh installation can be included in the Ansible playbook by setting the flag `ne_istio_enable: true` in the flavor file. + +## Central Orchestrator Flavor + +Central Orchestrator Flavor is used to deploy EMCO. + +The pre-defined *orchestration* deployment flavor provisions an optimized system configuration for emco (central orchestrator) workloads on Intel Xeon servers. It also provisions a set of central orchestrator services for [edge, multiple clusters orchestration](building-blocks/emco/openness-emco.md). + +Steps to install this flavor are as follows: +1. Configure OEK as described in the [OpenNESS Getting Started Guide for Network Edge](getting-started/network-edge/controller-edge-node-setup.md). +2. Run OEK deployment script: + ```shell + $ deploy_ne.sh -f central_orchestrator + ``` + +This deployment flavor enables the following ingredients: +* Harbor Registry +* The default Kubernetes CNI: `kube-ovn` +* EMCO services \ No newline at end of file diff --git a/doc/getting-started/network-edge/controller-edge-node-setup-images/harbor_ui.png b/doc/getting-started/network-edge/controller-edge-node-setup-images/harbor_ui.png new file mode 100644 index 00000000..448feee6 Binary files /dev/null and b/doc/getting-started/network-edge/controller-edge-node-setup-images/harbor_ui.png differ diff --git a/doc/getting-started/network-edge/controller-edge-node-setup.md b/doc/getting-started/network-edge/controller-edge-node-setup.md index 2a8bf357..8836fff0 100644 --- a/doc/getting-started/network-edge/controller-edge-node-setup.md +++ b/doc/getting-started/network-edge/controller-edge-node-setup.md @@ -14,10 +14,13 @@ Copyright (c) 2019-2020 Intel Corporation - [VM support for Network Edge](#vm-support-for-network-edge) - [Application on-boarding](#application-on-boarding) - [Single-node Network Edge cluster](#single-node-network-edge-cluster) - - [Docker registry](#docker-registry) - - [Deploy Docker registry](#deploy-docker-registry) - - [Docker registry image push](#docker-registry-image-push) - - [Docker registry image pull](#docker-registry-image-pull) + - [Harbor registry](#harbor-registry) + - [Deploy Harbor registry](#deploy-harbor-registry) + - [Harbor login](#harbor-login) + - [Harbor registry image push](#harbor-registry-image-push) + - [Harbor registry image pull](#harbor-registry-image-pull) + - [Harbor UI](#harbor-ui) + - [Harbor CLI](#harbor-registry-CLI) - [Kubernetes cluster networking plugins (Network Edge)](#kubernetes-cluster-networking-plugins-network-edge) - [Selecting cluster networking plugins (CNI)](#selecting-cluster-networking-plugins-cni) - [Adding additional interfaces to pods](#adding-additional-interfaces-to-pods) @@ -31,7 +34,6 @@ Copyright (c) 2019-2020 Intel Corporation - [Setting Git](#setting-git) - [GitHub token](#github-token) - [Customize tag/branch/sha to checkout](#customize-tagbranchsha-to-checkout) - - [Installing Kubernetes dashboard](#installing-kubernetes-dashboard) - [Customization of kernel, grub parameters, and tuned profile](#customization-of-kernel-grub-parameters-and-tuned-profile) # Quickstart @@ -49,7 +51,7 @@ The following 
set of actions must be completed to set up the Open Network Edge S To use the playbooks, several preconditions must be fulfilled. These preconditions are described in the [Q&A](#qa) section below. The preconditions are: -- CentOS\* 7.6.1810 must be installed on hosts where the product is deployed. It is highly recommended to install the operating system using a minimal ISO image on nodes that will take part in deployment (obtained from inventory file). Also, do not make customizations after a fresh manual install because it might interfere with Ansible scripts and give unpredictable results during deployment. +- CentOS\* 7.8.2003 must be installed on hosts where the product is deployed. It is highly recommended to install the operating system using a minimal ISO image on nodes that will take part in deployment (obtained from inventory file). Also, do not make customizations after a fresh manual install because it might interfere with Ansible scripts and give unpredictable results during deployment. - Hosts for the Edge Controller (Kubernetes control plane) and Edge Nodes (Kubernetes nodes) must have proper and unique hostnames (i.e., not `localhost`). This hostname must be specified in `/etc/hosts` (refer to [Setup static hostname](#setup-static-hostname)). @@ -137,47 +139,195 @@ To deploy Network Edge in a single-node cluster scenario, follow the steps below > Default settings in the single-node cluster mode are those of the Edge Node (i.e., kernel and tuned customization enabled). 4. Single-node cluster can be deployed by running command: `./deploy_ne.sh single` -## Docker registry +## Harbor registry -Docker registry is a storage and distribution system for Docker Images. On the OpenNESS environment, Docker registry service is deployed as a pod on Control plane Node. Docker registry authentication enabled with self-signed certificates as well as all node and control plane nodes will have access to the Docker registry. +Harbor registry is an open source cloud native registry which can support images and relevant artifacts with extended functionalities as described in [Harbor](https://goharbor.io/). On the OpenNESS environment, Harbor registry service is installed on Control plane Node by Harbor Helm Chart [github](https://github.com/goharbor/harbor-helm/releases/tag/v1.5.1). Harbor registry authentication enabled with self-signed certificates as well as all nodes and control plane will have access to the Harbor registry. -### Deploy Docker registry +### Deploy Harbor registry -Ansible "docker_registry" roles created on openness-experience-kits. For deploying a Docker registry on Kubernetes, control plane node roles are enabled on the openness-experience-kits "network_edge.yml" file. +#### System Prerequisite +* The available system disk should be reserved at least 20G for Harbor PV/PVC usage. The defaut disk PV/PVC total size is 20G. The values can be configurable in the ```roles/harbor_registry/controlplane/defaults/main.yaml```. +* If huge pages enabled, need 1G(hugepage size 1G) or 300M(hugepage size 2M) to be reserved for Harbor usage. + +#### Ansible Playbooks +Ansible "harbor_registry" roles created on openness-experience-kits. For deploying a Harbor registry on Kubernetes, control plane roles are enabled on the openness-experience-kits "network_edge.yml" file. ```ini - role: docker_registry/controlplane - role: docker_registry/node - ``` -The following steps are processed during the Docker registry deployment on the OpenNESS setup. 
+ role: harbor_registry/controlplane + role: harbor_registry/node + ``` + +The following steps are processed by openness-experience-kits during the Harbor registry installation on the OpenNESS control plane node. + +* Download Harbor Helm Charts on the Kubernetes Control plane Node. +* Check whether huge pages is enabled and templates values.yaml file accordingly. +* Create namespace and disk PV for Harbor Services (The defaut disk PV/PVC total size is 20G. The values can be configurable in the ```roles/harbor_registry/controlplane/defaults/main.yaml```). +* Install Harbor on the control plane node using the Helm Charts (The CA crt will be generated by Harbor itself). +* Create the new project - ```intel``` for OpenNESS microservices, Kurbernetes enhanced add-on images storage. +* Docker login the Harbor Registry, thus enable pulling, pushing and tag images with the Harbor Registry + + +On the OpenNESS edge nodes, openness-experience-kits will conduct the following steps: +* Get harbor.crt from the OpenNESS control plane node and save into the host location + /etc/docker/certs.d/You are being redirected to the OpenNESS Docs.
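The Harbor registry steps above enable logging in, tagging, pushing, and pulling images against the `intel` project created during deployment. A minimal usage sketch follows; the registry address and port are placeholders (assumptions), so substitute the values reported by your deployment, and the node must already trust the Harbor CA certificate under `/etc/docker/certs.d/`.

```shell
# Placeholder - substitute the Harbor address/port and credentials used by your deployment
HARBOR="<controller-ip>:<harbor-port>"

# Log in to the Harbor registry
docker login ${HARBOR}

# Tag a locally available image into the 'intel' project and push it
docker tag nginx:latest ${HARBOR}/intel/nginx:latest
docker push ${HARBOR}/intel/nginx:latest

# Pull it back on any node that has logged in to the registry
docker pull ${HARBOR}/intel/nginx:latest
```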
+ diff --git a/doc/orchestration/openness-helm.md b/doc/orchestration/openness-helm.md index 94565588..393cf68d 100644 --- a/doc/orchestration/openness-helm.md +++ b/doc/orchestration/openness-helm.md @@ -2,6 +2,7 @@ SPDX-License-Identifier: Apache-2.0 Copyright (c) 2020 Intel Corporation ``` + # Helm support in OpenNESS - [Introduction](#introduction) @@ -54,12 +55,12 @@ OpenNESS provides the following helm charts: The EPA, Telemetry, and k8s plugins helm chart files will be saved in a specific directory on the OpenNESS controller. To modify the directory, change the following variable `ne_helm_charts_default_dir` in the `group_vars/all/10-default.yml` file: ```yaml - ne_helm_charts_default_dir: /opt/openness-helm-charts/ + ne_helm_charts_default_dir: /opt/openness/helm-charts/ ``` To check helm charts files, run the following command on the OpenNESS controller: ```bash - $ ls /opt/openness-helm-charts/ + $ ls /opt/openness/helm-charts/ vpu-plugin gpu-plugin node-feature-discovery prometheus ``` diff --git a/doc/overview.md b/doc/overview.md new file mode 100644 index 00000000..19c16748 --- /dev/null +++ b/doc/overview.md @@ -0,0 +1,107 @@ +```text +SPDX-License-Identifier: Apache-2.0 +Copyright (c) 2019-2020 Intel Corporation +``` + + +# OpenNESS Overview + +- [Introduction to OpenNESS](#introduction-to-openness) +- [Why consider OpenNESS](#why-consider-openness) +- [Building Blocks](#building-blocks) +- [Distributions](#distributions) +- [Consumption Models](#consumption-models) + +## Introduction to OpenNESS + +OpenNESS is an edge computing software toolkit that enables highly optimized and performant edge platforms to on-board and manage applications and network functions with cloud-like agility across any type of network. + +The toolkit includes a variety of Building Blocks that enable you to build different types of Converged Edge platforms that combine IT (Information Technology), OT (Operational Technology) and CT (Communications Technology). + +OpenNESS can help speed up the development of Edges such as: + +- Cloud Native RAN with Apps +- 5G distributed UPF with Apps +- uCPE/SD-WAN with Apps +- AI/vision inferencing apps with MEC +- Media apps with MEC + +OpenNESS is a Certified Kubernetes* offering. See the [CNCF Software Conformance program](https://www.cncf.io/certification/software-conformance/) for details. + +## Why consider OpenNESS + +In the era of 5G, as the cloud architectures start to disaggregate, various locations on the Telco edge start to become prime candidates for compute workloads that are capable of delivering a new set of KPIs for a new breed of apps and services. These locations include the On-prem edge (located typically in an enterprise), the Access Edge (located at or close to a 5G basestation), the Near Edge (the next aggregation point hosting a distributed UPF) and the Regional Data Center (hosting a Next Gen Central Office with wireless/wireline convergence). + +![](arch-images/multi-location-edge.png) + +As the industry seeks to settle on a consistent cloud native platform approach capable of extending across these edge locations, lowering the Total Cost of Ownership (TCO) becomes paramount. 
However a number challenges need to be overcome to achieve this vision: + +- Deliver platform consistency and scalability across diverse edge location requirements +- Optimize cloud native frameworks to meet stringent edge KPIs and simplify network complexity +- Leverage a broad ecosystem and evolving standards for edge computing + +OpenNESS brings together the best of breed cloud native frameworks to build a horizontal edge computing platform to address these challenges. + +**Benefits of OpenNESS** + +Edge Performant & Optimized: + +- Data plane acceleration, throughput, real-time optimizations for low latency, accelerators for crypto, AI, & Media, telemetry & resource management, Edge native power, security, performance/footprint optimizations, Cloud Native containers & microservices based, seamless and frictionless connectivity + +Multi-Access Edge Networking: + +- 3GPP & ETSI MEC based 5G/4G/WiFi capabilities +- Complies with Industry Standards (3GPP, CNCF, ORAN, ETSI) + +Ease of Use, Consumability & Time to Market (TTM) + +- Multi-location, Multi-Access, Multi-Cloud +- Delivered via use case specific Reference Architectures for ease of consumption and to accelerate TTM +- Easy to consume Intel silicon features, integrated set of components (networking, AI, media, vertical use cases), significantly reduce development time, ability to fill gaps in partner/customer IP portfolio + +## Building Blocks + +OpenNESS is composed of a set of Building Blocks, each intended to offer a set of capabilities for edge solutions. + +| Building Block | Summary | +| -------------------------------- | ------------------------------------------------------------ | +| Multi-Access Networking | 3GPP Network function microservices enabling deployment of an edge cloud in a 5G network | +| Edge Multi-Cluster Orchestration | Manage CNFs and applications across massively distributed edge Kubernetes* clusters, placement algorithms based on platform awareness/SLA/cost, multi-cluster service mesh automation | +| Edge Aware Service Mesh | Enhancements for high performance, reduced resource utilization, security and automation | +| Edge WAN Overlay | Highly optimized and secure WAN overlay implementation, providing abstraction of multiple edge & cloud provider networks as a uniform network, traffic sanitization, and edge aware SD-WAN | +| Confidential Computing | Protecting Data In Use at the edge, IP protection in multi tenant hosted environments | +| Resource Management | Kubernetes* extensions for Node Feature Discovery, NUMA awareness, Core Pinning, Resource Management Daemon, Topology Management | +| Data plane CNI | Optimized dataplanes and CNIs for various edge use cases: OVN, eBPF, SRIOV | +| Accelerators | Kubernetes* operators and device plugins for VPU, GPU, FPGA | +| Telemetry and Monitoring | Platform and application level telemetry leveraging industry standard frameworks | +| Green Edge | Modular microservices and Kubernetes* enhancements to manage different power profiles, events and scheduling, and detecting hotspots when deploying services | + + +## Distributions +OpenNESS is released as two distributions: +1. OpenNESS : A full open-source distribution of OpenNESS +2. Intel® Distribution of OpenNESS : A licensed distribution from Intel that includes all the features in OpenNESS along with additional microservices, Kubernetes\* extensions, enhancements, and optimizations for Intel® architecture. + +The Intel Distribution of OpenNESS requires a secure login to the OpenNESS GitHub repository. 
For access to the Intel Distribution of OpenNESS, contact your Intel support representative. + +## Consumption Models + +OpenNESS can be consumed as a whole or as individual building blocks. Whether you are an infrastructure developer or an app developer, if you are moving your business to the Edge, you may benefit from utilizing OpenNESS in your next project. + +**Building Blocks** + +You can explore the various building blocks packaged as Helm Charts and Kubernetes* Operators via the [OpenNESS github project](https://github.com/open-ness). + +**Converged Edge Reference Architectures (CERA)** + +CERA is a set of pre-integrated and readily deployable HW/SW Reference Architectures powered by OpenNESS to significantly accelerate Edge Platform Development, available via the [OpenNESS github project](https://github.com/open-ness). + +**Cloud Devkits** + +Software toolkits to easily deploy an OpenNESS cluster in a cloud environment such as Azure Cloud, available via the [OpenNESS github project](https://github.com/open-ness). + +**Converged Edge Insights** + +Ready to deploy software packages available via the [Intel® Edge Software Hub](https://www.intel.com/content/www/us/en/edge-computing/edge-software-hub.html), comes with use case specific reference implementations to kick start your next pathfinding effort for the Edge. + +Next explore the [OpenNESS Architecture](architecture.md). diff --git a/doc/ran/openness-ran.png b/doc/ran/openness-ran.png deleted file mode 100644 index 1f46c47e..00000000 Binary files a/doc/ran/openness-ran.png and /dev/null differ diff --git a/doc/reference-architectures/CERA-5G-On-Prem.md b/doc/reference-architectures/CERA-5G-On-Prem.md new file mode 100644 index 00000000..399b52a8 --- /dev/null +++ b/doc/reference-architectures/CERA-5G-On-Prem.md @@ -0,0 +1,829 @@ +```text +SPDX-License-Identifier: Apache-2.0 +Copyright (c) 2020 Intel Corporation +``` + +# Converged Edge Reference Architecture 5G On Premises Edge +The Converged Edge Reference Architectures (CERA) are a set of pre-integrated HW/SW reference architectures based on OpenNESS to accelerate the development of edge platforms and architectures. This document describes the CERA 5G On Premises Edge, which combines wireless networking and high performance compute for IoT, AI, video and other services. 
+ +- [CERA 5G On Prem](#cera-5g-on-prem) + - [CERA 5G On Prem Experience Kit](#cera-5g-on-prem-experience-kit) + - [CERA 5G On Prem OpenNESS Configuration](#cera-5g-on-prem-openness-configuration) + - [CERA 5G On Prem Deployment Architecture](#cera-5g-on-prem-deployment-architecture) + - [CERA 5G On Prem Experience Kit Deployments](#cera-5g-on-prem-experience-kit-deployments) + - [Edge Service Applications Supported on CERA 5G On Prem](#edge-service-applications-supported-on-cera-5g-on-prem) + - [OpenVINO™](#openvino) + - [Edge Insights Software](#edge-insights-software) + - [CERA 5G On Prem Hardware Platform](#cera-5g-on-prem-hardware-platform) + - [Hardware Acceleration](#hardware-acceleration) + - [CERA 5G On Prem OpenNESS Deployment](#cera-5g-on-prem-openness-deployment) + - [Setting up Target Platform Before Deployment](#setting-up-target-platform-before-deployment) + - [BIOS Setup](#bios-setup) + - [Setting up Machine with Ansible](#setting-up-machine-with-ansible) + - [Steps to be performed on the machine, where the Ansible playbook is going to be run](#steps-to-be-performed-on-the-machine-where-the-ansible-playbook-is-going-to-be-run) + - [CERA 5G On Premise Experience Kit Deployment](#cera-5g-on-premise-experience-kit-deployment) +- [5G Core Components](#5g-core-components) + - [dUPF](#dupf) + - [Overview](#overview) + - [Deployment](#deployment) + - [Prerequisites](#prerequisites) + - [Settings](#settings) + - [Configuration](#configuration) + - [UPF](#upf) + - [Overview](#overview-1) + - [Deployment](#deployment-1) + - [Prerequisites](#prerequisites-1) + - [Settings](#settings-1) + - [Configuration](#configuration-1) + - [AMF-SMF](#amf-smf) + - [Overview](#overview-2) + - [Deployment](#deployment-2) + - [Prerequisites](#prerequisites-2) + - [Settings](#settings-2) + - [Configuration](#configuration-2) + - [Remote-DN](#remote-dn) + - [Overview](#overview-3) + - [Prerequisites](#prerequisites-3) + - [Local-DN](#local-dn) + - [Overview](#overview-4) + - [Prerequisites](#prerequisites-4) + - [OpenVINO](#openvino-1) + - [Settings](#settings-3) + - [Deployment](#deployment-3) + - [Streaming](#streaming) + - [EIS](#eis) + - [gNodeB](#gnodeb) + - [Overview](#overview-5) + - [Deployment](#deployment-4) + - [Prerequisites](#prerequisites-5) + - [Settings](#settings-4) + - [Configuration](#configuration-3) + - [Time synchronization over PTP for node server](#time-synchronization-over-ptp-for-node-server) + - [Overview](#overview-6) + - [Prerequisites](#prerequisites-6) + - [Settings](#settings-5) + - [GMC configuration](#gmc-configuration) +- [Conclusion](#conclusion) +- [Learn more](#learn-more) +- [Acronyms](#acronyms) + +## CERA 5G On Prem +CERA 5G On Prem deployment focuses on On Premises, Private Wireless and Ruggedized Outdoor deployments, presenting a scalable solution across the On Premises Edge. The assumed 3GPP deployment architecture is based on the figure below from 3GPP 23.501 Rel15 which shows the reference point representation for concurrent access to two (e.g. local and central) data networks (single PDU Session option). The highlighted yellow blocks - RAN, UPF and Data Network (edge apps) are deployed on the CERA 5G On Prem. + +![3GPP Network](cera-on-prem-images/3gpp_on_prem.png) + +> Figure 1 - 3GPP Network + +### CERA 5G On Prem Experience Kit +The CERA 5G On Prem implementation in OpenNESS supports a single Orchestration domain, optimizing the edge node to support Network Functions (gNB, UPF) and Applications at the same time. 
This allows the deployment on small uCPE and pole mounted form factors. + +#### CERA 5G On Prem OpenNESS Configuration +CERA 5G On Prem is a combination of the existing OpenNESS Building Blocks required to run 5G gNB, UPF, Applications and their associated HW Accelerators. CERA 5G On Prem also adds CMK and RMD to better support workload isolation and mitigate any interference from applications affecting the performance of the network functions. The below diagram shows the logical deployment with the OpenNESS Building Blocks. + +![CERA 5G On Prem Architecture](cera-on-prem-images/cera-on-prem-arch.png) + +> Figure 2 - CERA 5G On Prem Architecture + +#### CERA 5G On Prem Deployment Architecture + +![CERA 5G On Prem Deployment](cera-on-prem-images/cera_deployment.png) + +> Figure 3 - CERA 5G On Prem Deployment + +CERA 5G On Prem architecture supports a single platform (Xeon® SP and Xeon D) that hosts both the Edge Node and the Kubernetes* Control Plane. The UPF is deployed using SRIOV-Device plugin and SRIOV-CNI allowing direct access to the network interfaces used for connection to the gNB and back haul. For high throughput workloads such as UPF network function, it is recommended to use single root input/output (SR-IOV) pass-through the physical function (PF) or the virtual function (VF), as required. Also, in some cases, the simple switching capability in the NIC can be used to send traffic from one application to another, as there is a direct path of communication required between the UPF and the Data plane, this becomes an option. It should be noted that the VF-to-VF option is only suitable when there is a direct connection between PODs on the same PF with no support for advanced switching. In this scenario, it is advantageous to configure the UPF with three separate interfaces for the different types of traffic flowing in the system. This eliminates the need for additional traffic switching at the host. In this case, there is a separate interface for N3 traffic to the Access Network, N9 and N4 traffic can share an interface to the backhaul network. While local data network traffic on the N6 can be switched directly to the local applications, similarly gNB DU and CU interfaces N2 and N4 are separated. Depending on performance requirements, a mix of data planes can be used on the platform to meet the varying requirements of the workloads. + +The applications are deployed on the same edge node as the UPF and gNB. + +The use of Intel® Resource Director Technology (Intel® RDT) ensures that the cache allocation and memory bandwidth are optimized for the workloads on running on the platform. + +Intel® Speed Select Technology (Intel® SST) can be used to further enhance the performance of the platform. + +The following Building Blocks are supported in OpenNESS + +- High-Density Deep Learning (HDDL): Software that enables OpenVINO™-based AI apps to run on Intel® Movidius Vision Processing Units (VPUs). It consists of the following components: + - HDDL device plugin for K8s + - HDDL service for scheduling jobs on VPUs +- FPGA/eASIC/NIC: Software that enables AI inferencing for applications, high-performance and low-latency packet pre-processing on network cards, and offloading for network functions such as eNB/gNB offloading Forward Error Correction (FEC). 
It consists of: + - FPGA device plugin for inferencing + - SR-IOV device plugin for FPGA/eASIC + - Dynamic Device Profile for Network Interface Cards (NIC) +- Resource Management Daemon (RMD): RMD uses Intel® Resource Director Technology (Intel® RDT) to implement cache allocation and memory bandwidth allocation to the application pods. This is a key technology for achieving resource isolation and determinism on a cloud-native platform. +- Node Feature Discovery (NFD): Software that enables node feature discovery for Kubernetes*. It detects hardware features available on each node in a Kubernetes* cluster and advertises those features using node labels. +- Topology Manager: This component allows users to align their CPU and peripheral device allocations by NUMA node. +- Kubevirt: Provides support for running legacy applications in VM mode and the allocation of SR-IOV ethernet interfaces to VMs. +- Precision Time Protocol (PTP): Uses primary-secondary architecture for time synchronization between machines connected through ETH. The primary clock is a reference clock for the secondary nodes that adapt their clocks to the primary node's clock. Grand Master Clock (GMC) can be used to precisely set primary clock. + +#### CERA 5G On Prem Experience Kit Deployments +The CERA 5G On Prem experience kit deploys both the 5G On Premises cluster and also a second cluster to host the 5GC control plane functions and provide an additional Data Network POD to act as public network for testing purposes. Note that the Access network and UE are not configured as part of the CERA 5G On Prem Experience Kit. Also required but not provided is a binary iUPF, UPF and 5GC components. Please contact your local Intel® representative for more information. + +![CERA Experience Kit](cera-on-prem-images/cera-full-setup.png) + +> Figure 4 - CERA Experience Kit + +### Edge Service Applications Supported by CERA 5G On Prem +The CERA architectural paradigm enables convergence of edge services and applications across different market segments. This is demonstrated by taking diverse workloads native to different segments and successfully integrating within a common platform. The reference considers workloads segments across the following applications: + +Smart city: Capture of live camera streams to monitor and measure pedestrian and vehicle movement within a zone. + +Industrial: Monitoring of the manufacturing quality of an industrial line, the capture of video streams focuses on manufactured devices on an assembly line and the real-time removal of identified defect parts. + +While these use cases are addressing different market segments, they all have similar requirements: + +- Capture video either from a live stream from a camera, or streamed from a recorded file. + +- Process that video using inference with a trained machine learning model, computer vision filters, etc. + +- Trigger business control logic based on the results of the video processing. + +Video processing is inherently compute intensive and, in most cases, especially in edge processing, video processing becomes the bottleneck in user applications. This, ultimately, impacts service KPIs such as frames-per-second, number of parallel streams, latency, etc. + +Therefore, pre-trained models, performing numerical precision conversions, offloading to video accelerators, heterogeneous processing and asynchronous execution across multiple types of processors all of which increase video throughput are extremely vital in edge video processing. 
However these requirements can significantly complicate software development, requiring expertise that is rare in engineering teams and increasing the time-to-market. + +#### OpenVINO™ +The Intel® Distribution of OpenVINO™ toolkit helps developers and data scientists speed up computer vision workloads, streamline deep learning inference and deployments, and enable easy, heterogeneous execution across Intel® architecture platforms from edge to cloud. It helps to unleash deep learning inference using a common API, streamlining deep learning inference and deployment using standard or custom layers without the overhead of frameworks. + +#### Edge Insights Software +Intel® Edge Insights for Industrial offers a validated solution to easily integrate customers' data, devices, and processes in manufacturing applications, which helps enable near-real-time intelligence at the edge, greater operational efficiency, and security in factories. +Intel® Edge Insights for Industrial takes advantage of modern microservices architecture. This approach helps OEMs, device manufacturers, and solution providers integrate data from sensor networks, operational sources, external providers, and industrial systems more rapidly. The modular, product-validated software enables the extraction of machine data at the edge. It also allows that data to be communicated securely across protocols and operating systems managed cohesively, and analyzed quickly. +Allowing machines to communicate interchangeably across different protocols and operating systems eases the process of data ingestion, analysis, storage, and management. Doing so, also helps industrial companies build powerful analytics and machine learning models easily and generate actionable predictive insights at the edge. +Edge computing software deployments occupy a middle layer between the operating system and applications built upon it. Intel® Edge Insights for Industrial is created and optimized for Intel® architecture-based platforms and validated for underlying operating systems. It's capability supports multiple edge-critical Intel® hardware components like CPUs, FPGAs, accelerators, and Intel® Movidius Vision Processing Unit (VPU). Also, its modular architecture offers OEMs, solution providers, and ISVs the flexibility to pick and choose the features and capabilities that they wish to include or expand upon for customized solutions. As a result, they can bring solutions to market fast and accelerate customer deployments. + +For more information on the supported EIS demos support, see [EIS whitepaper](https://github.com/open-ness/edgeapps/blob/master/applications/eis-experience-kit/docs/whitepaper.md) + +### CERA 5G On Prem Hardware Platform +CERA 5G On Prem is designed to run on standard, off-the-shelf servers with Intel® Xeon CPUs. Dedicated platform is [Single socket SP SYS-E403-9P-FN2T](https://www.supermicro.com/en/products/system/Box_PC/SYS-E403-9P-FN2T.cfm) + + +#### Hardware Acceleration +Based on deployment scenario and capacity requirements, there is option to utilize hardware accelerators on the platform to increase performance of certain workloads. Hardware accelerators can be assigned to the relevant container on the platform through the OpenNESS Controller, enabling modular deployments to meet the desired use case. + +AI Acceleration +Video inference is done using the OpenVINO™ toolkit to accelerate the inference processing to identify people, vehicles or other items, as required. 
This is already optimized for software implementation and can be easily changed to utilize hardware acceleration if it is available on the platform. + +Intel® Movidius Myriad X Vision +Intel® Movidius Myriad X Vision Processing Unit (VPU) can be added to a server to provide a dedicated neural compute engine for accelerating deep learning inferencing at the edge. To take advantage of the performance of the neural compute engine, Intel® has developed the high-density deep learning (HDDL) inference engine plugin for inference of neural networks. + +In the current example when the HDDL is enabled on the platform, the OpenVINO™ toolkit sample application reduces its CPU requirements from two cores to a single core. + +In future releases additional media analytics services may be enabled e.g VCAC-A card, for more information refer to [OpenNESS VA Services](../applications/openness_va_services.md) + +Intel® FPGA PAC N3000 +The Intel® FPGA Programmable Acceleration Card (Intel® FPGA PAC) plays a key role in accelerating certain types of workloads, which in turn increases the overall compute capacity of a commercial, off-the-shelf platform. FPGA benefits include: +- Flexibility - FPGA functionality can change upon every power up of the device. +- Acceleration - Get products to market faster and increase your system performance. +- Integration - Modern FPGAs include on-die processors, transceiver I/Os at 28 Gbps (or faster), RAM blocks, DSP engines, and more. +- Total Cost of Ownership (TCO) - While ASICs may cost less per unit than an equivalent FPGA, building them requires a non-recurring expense (NRE), expensive software tools, specialized design teams, and long manufacturing cycles. + +The Intel® FPGA PAC N3000 is a full-duplex, 100 Gbps in-system, re-programmable acceleration card for multi-workload networking application acceleration. It has an optimal memory mixture designed for network functions, with an integrated network interface card (NIC) in a small form factor that enables high throughput, low latency, and low power per bit for a custom networking pipeline. + +For more references, see [openness-fpga.md: Dedicated FPGA IP resource allocation support for Edge Applications and Network Functions](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-fpga.md) + +Intel® QAT +The Intel® QuickAssist Adapter provides customers with a scalable, flexible, and extendable way to offer Intel® QuickAssist Technology (Intel® QAT) crypto acceleration and compression capabilities to their existing product lines. Intel® QuickAssist Technology (Intel® QAT) provides hardware acceleration to assist with the performance demands of securing and routing Internet traffic and other workloads, such as compression and wireless 4G LTE and 5G gnb algorithm offload, thereby reserving processor cycles for application and control processing. + + +### CERA 5G On Prem OpenNESS Deployment + +#### Setting up Target Platform Before Deployment + +Perform the following steps on the target machine before deployment: + +1. Ensure that, the target machine gets IP address automatically on boot every time. +Example command: +`hostname -I` + +2. Change target machine's hostname. + * Edit file `vi /etc/hostname`. Press `Insert` key to enter Insert mode. Delete the old hostname and replace it with the new one. Exit the vi editor by pressing `Esc` key and then type `:wq` and press `Enter` key. + * Edit file `vi /etc/hosts`. Press `Insert` key to enter Insert mode. 
Add a space at the end of both lines and write hostname after it. Exit the vi editor by pressing `Esc` key and then type `:wq` and press `Enter` key. + +3. Reboot the target machine. + +### BIOS Setup +The BIOS settings on the edge node must be properly set in order for the OpenNESS building blocks to function correctly. They may be set either during deployment of the reference architecture, or manually. The settings that must be set are: +* Enable Intel® Hyper-Threading Technology +* Enable Intel® Virtualization Technology +* Enable Intel® Virtualization Technology for Directed I/O +* Enable SR-IOV Support + +### Setting up Machine with Ansible + +#### Steps to be performed on the machine, where the Ansible playbook is going to be run + +1. Copy SSH key from machine, where the Ansible playbook is going to be run, to the target machine. Example commands: + > NOTE: Generate ssh key if is not present on the machine: `ssh-keygen -t rsa` (Press enter key to apply default values) + + Do it for each target machine. + ```shell + ssh-copy-id root@TARGET_IP + ``` + > NOTE: Replace TARGET_IP with the actual IP address of the target machine. + +2. Clone `ido-converged-edge-experience-kits` repo from `github.com/open-ness` using git token. + ```shell + git clone --recursive GIT_TOKEN@github.com:open-ness/ido-converged-edge-experience-kits.git + ``` + > NOTE: Replace GIT_TOKEN with your git token. + +3. Update repositories by running following commands. + ```shell + cd ido-converged-edge-experience-kits + git submodule foreach --recursive git checkout master + git submodule update --init --recursive + ``` + +4. Provide target machines IP addresses for OpenNESS deployment in `ido-converged-edge-experience-kits/openness_inventory.ini`. For Singlenode setup, set the same IP address for both `controller` and `node01`, the line with `node02` should be commented by adding # at the beginning. +Example: + ```ini + [all] + controller ansible_ssh_user=root ansible_host=192.168.1.43 # First server NE + node01 ansible_ssh_user=root ansible_host=192.168.1.43 # First server NE + ; node02 ansible_ssh_user=root ansible_host=192.168.1.12 + ``` + At that stage provide IP address only for `CERA 5G NE` server. + + If the GMC device is available, the node server can be synchronized. In the `ido-converged-edge-experience-kits/openness_inventory.ini`, `node01` should be added to `ptp_slave_group`. The default value `controller` for `[ptp_master]` should be removed or commented. + ```ini + [ptp_master] + #controller + + [ptp_slave_group] + node01 + ``` + +5. Edit `ido-converged-edge-experience-kits/openness/group_vars/all/10-open.yml` and provide some correct settings for deployment. + + Git token. + ```yaml + git_repo_token: "your git token" + ``` + Proxy if it is required. + ```yaml + # Setup proxy on the machine - required if the Internet is accessible via proxy + proxy_enable: true + # Clear previous proxy settings + proxy_remove_old: true + # Proxy URLs to be used for HTTP, HTTPS and FTP + proxy_http: "http://proxy.example.org:3128" + proxy_https: "http://proxy.example.org:3129" + proxy_ftp: "http://proxy.example.org:3129" + # Proxy to be used by YUM (/etc/yum.conf) + proxy_yum: "{{ proxy_http }}" + # No proxy setting contains addresses and networks that should not be accessed using proxy (e.g. 
local network, Kubernetes CNI networks) + proxy_noproxy: "127.0.0.1,localhost,192.168.1.0/24" + ``` + NTP server + ```yaml + ### Network Time Protocol (NTP) + # Enable machine's time synchronization with NTP server + ntp_enable: true + # Servers to be used by NTP instead of the default ones (e.g. 0.centos.pool.ntp.org) + ntp_servers: ['ntp.server.com'] + ``` + +6. Edit file `ido-converged-edge-experience-kits/openness/flavors/cera_5g_on_premise/edgenode_group.yml` and provide correct CPU settings. + + ```yaml + tuned_vars: | + isolated_cores=2-23,26-47 + nohz=on + nohz_full=2-23,26-47 + + # CPUs to be isolated (for RT procesess) + cpu_isol: "2-23,26-47" + # CPUs not to be isolate (for non-RT processes) - minimum of two OS cores necessary for controller + cpu_os: "0-1,24-25" + ``` + + If a GMC is connected to the setup, then node server synchronization can be enabled inside ido-converged-edge-experience-kits/openness/flavors/cera_5g_on_premise/edgenode_group.yml file. + ```yaml + ptp_sync_enable: true + ``` + +7. Edit file `ido-converged-edge-experience-kits/openness/flavors/cera_5g_on_premise/controller_group.yml` and provide names of `network interfaces` that are connected to second server and number of VF's to be created. + + ```yaml + sriov: + network_interfaces: {eno1: 5, eno2: 10} + ``` + +8. Edit file `ido-converged-edge-experience-kits/openness/x-oek/oek/host_vars/node01.yml` if a GMC is connected and the node server should be synchronized. + + For single node setup (this is the default mode for CERA), `ptp_port` keeps the host's interface connected to Grand Master, e.g.: + ```yaml + ptp_port: "eno3" + ``` + + Variable `ptp_network_transport` keeps network transport for ptp. Choose `"-4"` for default CERA setup. The `gm_ip` variable should contain the GMC's IP address. The Ansible scripts set the IP on the interface connected to the GMC, according to the values in the variables `ptp_port_ip` and `ptp_port_cidr`. + ```yaml + # Valid options: + # -2 Select the IEEE 802.3 network transport. + # -4 Select the UDP IPv4 network transport. + ptp_network_transport: "-4" + + + # Grand Master IP, e.g.: + # gm_ip: "169.254.99.9" + gm_ip: "169.254.99.9" + + # - ptp_port_ip contains a static IP for the server port connected to GMC, e.g.: + # ptp_port_ip: "169.254.99.175" + # - ptp_port_cidr - CIDR for IP from, e.g.: + # ptp_port_cidr: "24" + ptp_port_ip: "169.254.99.175" + ptp_port_cidr: "24" + ``` + +9. Execute the `deploy_openness_for_cera.sh` script in `ido-converged-edge-experience-kits` to start OpenNESS platform deployment process by running the following command: + ```shell + ./deploy_openness_for_cera.sh cera_5g_on_premise + ``` + Note: This might take few hours. + +10. After a successful OpenNESS deployment, edit again `ido-converged-edge-experience-kits/openness_inventory.ini`, change IP address to `CERA 5G CN` server. + ```ini + [all] + controller ansible_ssh_user=root ansible_host=192.168.1.109 # Second server CN + node01 ansible_ssh_user=root ansible_host=192.168.1.109 # Second server CN + ; node02 ansible_ssh_user=root ansible_host=192.168.1.12 + ``` + Then run `deploy_openness_for_cera.sh` again. + ```shell + ./deploy_openness_for_cera.sh + ``` + All settings in `ido-converged-edge-experience-kits/openness/group_vars/all/10-open.yml` are the same for both servers. + + For `CERA 5G CN` server disable synchronization with GMC inside `ido-converged-edge-experience-kits/openness/flavors/cera_5g_on_premise/edgenode_group.yml` file. + ```yaml + ptp_sync_enable: false + ``` + +11. 
When OpenNESS has been deployed on both servers, log in to the `CERA 5G CN` server and generate an `RSA ssh key`. It is required for the AMF/SMF VM deployment.
+ ```shell
+ ssh-keygen -t rsa
+ # Press enter key to apply default values
+ ```
+12. The full setup is now ready for CERA deployment.
+
+### CERA 5G On Premise Experience Kit Deployment
+The following prerequisites should be met for CERA deployment.
+
+1. CentOS should use the following kernel and have no newer kernels installed:
+ * `3.10.0-1127.19.1.rt56.1116.el7.x86_64` on the Near Edge server.
+ * `3.10.0-1127.el7.x86_64` on the Core Network server.
+
+2. Edit the file `ido-converged-edge-experience-kits/cera_config.yaml` and provide the correct settings:
+
+ Git token
+ ```yaml
+ git_repo_token: "your git token"
+ ```
+ Decide which demo application should be launched
+ ```yaml
+ # choose which demo will be launched: `eis` or `openvino`
+ deploy_app: "eis"
+ ```
+ EIS release package location
+ ```yaml
+ # provide EIS release package archive absolute path
+ eis_release_package_path: ""
+ ```
+ [OpenVINO](#OpenVINO) settings, if the OpenVINO app was set as the active demo application
+ ```yaml
+ display_host_ip: "" # update ip for visualizer HOST GUI.
+ save_video: "enable"
+ ```
+ Proxy settings
+ ```yaml
+ # Setup proxy on the machine - required if the Internet is accessible via proxy
+ proxy_os_enable: true
+ # Clear previous proxy settings
+ proxy_os_remove_old: true
+ # Proxy URLs to be used for HTTP, HTTPS and FTP
+ proxy_os_http: "http://proxy.example.org:3129"
+ proxy_os_https: "http://proxy.example.org:3128"
+ proxy_os_ftp: "http://proxy.example.org:3128"
+ proxy_os_noproxy: "127.0.0.1,localhost,192.168.1.0/24"
+ # Proxy to be used by YUM (/etc/yum.conf)
+ proxy_yum_url: "{{ proxy_os_http }}"
+ ```
+ See [more details](#dUPF) for dUPF configuration
+ ```yaml
+ # Define PCI addresses (xxxx:xx:xx.x format) for i-upf
+ n3_pci_bus_address: "0000:19:0a.0"
+ n4_n9_pci_bus_address: "0000:19:0a.1"
+ n6_pci_bus_address: "0000:19:0a.2"
+
+ # Define VPP VF interface names for i-upf
+ n3_vf_interface_name: "VirtualFunctionEthernet19/a/0"
+ n4_n9_vf_interface_name: "VirtualFunctionEthernet19/a/1"
+ n6_vf_interface_name: "VirtualFunctionEthernet19/a/2"
+
+ # PF interface name of N3 created VF
+ host_if_name_N3: "eno2"
+ # PF interface name of N4, N6, N9 created VFs
+ host_if_name_N4_N6_n9: "eno2"
+ ```
+ [gNodeB](#gNodeB) configuration
+ ```yaml
+ ## gNodeB related config
+ gnodeb_fronthaul_vf1: "0000:65:02.0"
+ gnodeb_fronthaul_vf2: "0000:65:02.1"
+
+ gnodeb_fronthaul_vf1_mac: "ac:1f:6b:c2:48:ad"
+ gnodeb_fronthaul_vf2_mac: "ac:1f:6b:c2:48:ab"
+
+ n2_gnodeb_pci_bus_address: "0000:19:0a.3"
+ n3_gnodeb_pci_bus_address: "0000:19:0a.4"
+
+ fec_vf_pci_addr: "0000:b8:00.1"
+
+ # DPDK driver used (vfio-pci/igb_uio) to VFs bindings
+ dpdk_driver_gnodeb: "igb_uio"
+
+ ## ConfigMap vars
+
+ fronthaul_if_name: "enp101s0f0"
+ ```
+ Settings for `CERA 5G CN`
+ ```yaml
+ ## PSA-UPF vars
+
+ # Define N4/N9 and N6 interface device PCI bus address
+ PCI_bus_address_N4_N9: '0000:19:0a.0'
+ PCI_bus_address_N6: '0000:19:0a.1'
+
+ # 5gc binaries directory name
+ package_5gc_path: "/opt/amf-smf/"
+
+ # vpp interface name as per setup connection
+ vpp_interface_N4_N9_name: 'VirtualFunctionEthernet19/a/0'
+ vpp_interface_N6_name: 'VirtualFunctionEthernet19/a/1'
+ ```
+3. If needed, change additional settings for `CERA 5G NE` in `ido-converged-edge-experience-kits/host_vars/cera_5g_ne.yml`. 
+ ```yaml
+ # DPDK driver used (vfio-pci/igb_uio) to VFs bindings
+ dpdk_driver_upf: "igb_uio"
+
+ # Define path where i-upf is located on remote host
+ upf_binaries_path: "/opt/flexcore-5g-rel/i-upf/"
+ ```
+ OpenVINO model
+ ```yaml
+ model: "pedestrian-detection-adas-0002"
+ ```
+4. Build the following required Docker images and provide the necessary binaries.
+ - [dUPF](#dUPF)
+ - [UPF](#UPF)
+ - [AMF-SMF](#AMF-SMF)
+ - [gNB](#gNodeB)
+5. Provide the correct IPs for the target servers in the file `ido-converged-edge-experience-kits/cera_inventory.ini`
+ ```ini
+ [all]
+ cera_5g_ne ansible_ssh_user=root ansible_host=192.168.1.109
+ cera_5g_cn ansible_ssh_user=root ansible_host=192.168.1.43
+ ```
+6. Deploy the CERA Experience Kit
+ ```shell
+ ./deploy_cera.sh
+ ```
+
+## 5G Core Components
+This section describes in detail how to build the particular images and configure Ansible for the deployment.
+
+### dUPF
+
+#### Overview
+
+The Distributed User Plane Function (dUPF) is a part of the 5G Access Network; it is responsible for packet routing. It has 3 separate interfaces for the `N3, N4/N9` and `N6` data lines. The `N3` interface is used for the connection with the video stream source. The `N4/N9` interface is used for the connection with `UPF` and `AMF/SMF`. The `N6` interface is used for the connection with `EDGE-APP` (locally), `UPF` and `Remote-DN`.
+
+The `CERA dUPF` component is deployed on the `CERA 5G Near Edge (cera_5g_ne)` node. It is deployed as a POD, automatically during the deployment of CERA 5G On Prem.
+
+#### Deployment
+
+##### Prerequisites
+
+To deploy the dUPF correctly, one needs to provide a Docker image to the Docker repository on the target node. There is a script in the `open-ness/edgeapps/network-functions/core-network/5G/UPF` repo provided by CERA, which builds the image automatically.
+
+##### Settings
+The following variables need to be defined in `cera_config.yaml`:
+```yaml
+n3_pci_bus_address: "" - PCI bus address of VF, which is used for N3 interface by dUPF
+n4_n9_pci_bus_address: "" - PCI bus address of VF, which is used for N4 and N9 interface by dUPF
+n6_pci_bus_address: "" - PCI bus address of VF, which is used for N6 interface by dUPF
+
+n3_vf_interface_name: "" - name of VF, which is used for N3 interface by dUPF
+n4_n9_vf_interface_name: "" - name of VF, which is used for N4 and N9 interface by dUPF
+n6_vf_interface_name: "" - name of VF, which is used for N6 interface by dUPF
+```
+
+##### Configuration
+The dUPF is configured automatically during the deployment.
+
+### UPF
+#### Overview
+
+The `User Plane Function (UPF)` is a part of the 5G Core Network; it is responsible for packet routing. It has 2 separate interfaces for the `N4/N9` and `N6` data lines. The `N4/N9` interface is used for the connection with `dUPF` and `AMF/SMF` (locally). The `N6` interface is used for the connection with `EDGE-APP`, `dUPF` and `Remote-DN` (locally).
+
+The CERA UPF component is deployed on the `CERA 5G Core Network (cera_5g_cn)` node. It is deployed as a POD, automatically during the deployment of CERA 5G On Prem.
+
+#### Deployment
+##### Prerequisites
+
+To deploy the `UPF` correctly, one needs to provide a Docker image to the Docker repository on the target nodes. There is a script in the `open-ness/edgeapps/network-functions/core-network/5G/UPF` repo provided by CERA, which builds the image automatically. 
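+
+For illustration, the check below is a minimal sketch of how to confirm this prerequisite on a target node once the build script has been run; the `upf` filter used to match the image name is an assumption, not a value defined by this guide.
+```shell
+# List locally available Docker images and look for the freshly built UPF image.
+# The actual image name/tag depends on the build script, so adjust the filter as needed.
+docker images | grep -i upf
+```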
+
+##### Settings
+
+The following variables need to be defined in the `cera_config.yaml`:
+```yaml
+PCI_bus_address_N4_N9: "" - PCI bus address of VF, which is used for N4 and N9 interface by UPF
+PCI_bus_address_N6: "" - PCI bus address of VF, which is used for N6 interface by UPF
+
+vpp_interface_N4_N9_name: "" - name of VF, which is used for N4 and N9 interface by UPF
+vpp_interface_N6_name: "" - name of VF, which is used for N6 interface by UPF
+```
+
+##### Configuration
+The `UPF` is configured automatically during the deployment.
+
+
+### AMF-SMF
+#### Overview
+
+AMF-SMF is a part of the 5G Core Architecture responsible for the `Session Management (SMF)` and `Access and Mobility Management (AMF)` functions - it establishes sessions and manages data plane packets.
+
+The CERA `AMF-SMF` component is deployed on the `CERA 5G Core Network (cera_5g_cn)` node and communicates with UPF and dUPF, so they must be deployed and configured before `AMF-SMF`.
+
+#### Deployment
+##### Prerequisites
+
+To deploy `AMF-SMF` correctly, one needs to provide a Docker image to the Docker repository on the target machine (cera_5g_cn). There is a script in the `open-ness/edgeapps/network-functions/core-network/5G/AMF-SMF` repository provided by CERA, which builds the image automatically.
+
+##### Settings
+
+The following variables need to be defined in `cera_config.yaml`:
+```yaml
+# 5gc binaries directory name
+package_5gc_path: "/opt/amf-smf/"
+```
+
+##### Configuration
+The `AMF-SMF` is configured automatically during the deployment.
+
+
+### Remote-DN
+#### Overview
+The Remote Data Network is a component that represents the `“internet”` in the network. The CERA Core Network manages whether data should go to the `Near Edge Application (EIS/OpenVINO)` or further out into the network.
+
+
+#### Prerequisites
+Deployment of Remote-DN is completely automated, so there is no need to set or configure anything.
+
+
+### Local-DN
+#### Overview
+The Local Data Network is a component that is responsible for connecting the Core part with the Edge applications. It can convert the incoming video streaming protocol into a format acceptable to EIS/OpenVINO.
+
+
+#### Prerequisites
+Deployment of Local-DN is completely automated, so there is no need to set or configure anything.
+
+### OpenVINO
+
+#### Settings
+In the `cera_config.yaml` file, choose which application should be built and deployed by setting a proper value for the `deploy_app` variable.
+```yaml
+deploy_app: "" - Type openvino if OpenVINO demo should be launched.
+```
+
+Several variables must be set in the file `host_vars/cera_5g_ne.yml`:
+```yaml
+model: "pedestrian-detection-adas-0002" - Model for which the OpenVINO demo will be run. Models which can be selected: pedestrian-detection-adas-0002, pedestrian-detection-adas-binary-0001, pedestrian-and-vehicle-detector-adas-0001, vehicle-detection-adas-0002, vehicle-detection-adas-binary-0001, person-vehicle-bike-detection-crossroad-0078, person-vehicle-bike-detection-crossroad-1016, person-reidentification-retail-0031, person-reidentification-retail-0248, person-reidentification-retail-0249, person-reidentification-retail-0300, road-segmentation-adas-0001
+
+save_video: "enable" - For value "enable" the output will be written to the /root/saved_video/ov-output.mjpeg file on the cera_5g_ne machine. This variable should not be changed.
+```
+
+#### Deployment
+After running the `deploy_cera.sh` script, the ov-openvino pod should be available on the `cera_5g_ne` machine. 
The status of the ov-openvino pod can be checked with:
+```shell
+kubectl -n openvino get pods -o wide
+```
+Immediately after creation, the ov-openvino pod will wait for the input stream. If no stream is available, the ov-openvino pod will restart after some time. After this restart, the pod will wait for streaming again.
+
+#### Streaming
+Video for the OpenVINO™ pod should be streamed to IP `192.168.1.101` and port `5000`. Make sure that the pod with OpenVINO™ is visible from your streaming machine. In the simplest case, the video can be streamed from the same machine where the pod with OpenVINO™ is available.
+
+The output will be saved to the `saved_video/ov-output.mjpeg` file (the `save_video` variable in `host_vars/cera_5g_ne.yml` should be set to `"enable"` and should not be changed).
+
+Streaming is possible from a file or from a camera. For continuous and uninterrupted streaming of a video file, the video file can be streamed in a loop. An example of a Bash file for streaming is shown below.
+```shell
+#!/usr/bin/env bash
+while :
+do
+  ffmpeg -re -i Rainy_Street.mp4 -pix_fmt yuvj420p -vcodec mjpeg \
+  -huffman 0 -map 0:0 -pkt_size 1200 -f rtp rtp://192.168.1.101:5000
+done
+```
+Where:
+* `ffmpeg` - The streaming software; it must be installed on the streaming machine.
+* `Rainy_Street.mp4` - The file that will be streamed. This file can be downloaded with:
+  ```shell
+  wget https://storage.googleapis.com/coverr-main/zip/Rainy_Street.zip
+  ```
+
+The OpenVINO™ demo saves its output to the saved_video/ov-output.mjpeg file on the cera_5g_cn machine.
+
+- To stop the OpenVINO™ demo and interrupt creation of the output video file, run on cera_5g_cn: `kubectl delete -f /opt/openvino/yamls/openvino.yaml`
+- To start the OpenVINO™ demo and start creating the output video file (use this command if the ov-openvino pod does not exist), run on cera_5g_cn: `kubectl apply -f /opt/openvino/yamls/openvino.yaml`
+
+### EIS
+Deployment of EIS is completely automated, so there is no need to set or configure anything except providing the release package archive.
+```yaml
+# provide EIS release package archive absolute path
+eis_release_package_path: ""
+```
+
+For more details about `eis-experience-kit`, check [README.md](https://github.com/open-ness/edgeapps/blob/master/applications/eis-experience-kit/README.md)
+
+### gNodeB
+#### Overview
+
+`gNodeB` is a part of the 5G Core Architecture and is deployed on the `CERA 5G Near Edge (cera_5g_ne)` node.
+
+#### Deployment
+#### Prerequisites
+
+To deploy `gNodeB` correctly, it is required to provide a Docker image to the Docker repository on the target machine (cera_5g_ne). There is a script in the `open-ness/edgeapps/network-functions/ran/5G/gnb` repository provided by CERA, which builds the image automatically. For `gNodeB` deployment, a PAC N3000 FPGA card and a QAT card are required. 
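+
+As an illustration, the checks below are a minimal sketch of how this accelerator prerequisite can be verified on the node before deployment; the `lspci` filter strings are assumptions that may need adjusting to the exact device names reported on a given system, and the PCI address used is the example `fec_vf_pci_addr` value from this document.
+```shell
+# Look for the PAC N3000 FEC device (typically reported under the "Processing accelerators" class).
+lspci | grep -i "processing accelerators"
+# Look for the QAT device.
+lspci | grep -i "quickassist"
+# Confirm the VF intended for FEC exists (example fec_vf_pci_addr value used earlier).
+ls /sys/bus/pci/devices/0000:b8:00.1
+```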
+ +#### Settings + +The following variables need to be defined in `cera_config.yaml` +```yaml +## gNodeB related config +# Fronthaul require two VFs +gnodeb_fronthaul_vf1: "0000:65:02.0" - PCI bus address of VF, which is used as fronthaul +gnodeb_fronthaul_vf2: "0000:65:02.1" - PCI bus address of VF, which is used as fronthaul + +gnodeb_fronthaul_vf1_mac: "ac:1f:6b:c2:48:ad" - MAC address which will be set on the first VF during deployment +gnodeb_fronthaul_vf2_mac: "ac:1f:6b:c2:48:ab" - MAC address which will be set on the second VF during deployment + +n2_gnodeb_pci_bus_address: "0000:19:0a.3" - PCI bus address of VF, which is used for N2 interface +n3_gnodeb_pci_bus_address: "0000:19:0a.4" - PCI bus address of VF, which is used for N3 interface + +fec_vf_pci_addr: "0000:b8:00.1" - PCI bus address of VF, which is assigned to FEC PAC N3000 accelerator + +# DPDK driver used (vfio-pci/igb_uio) to VFs bindings +dpdk_driver_gnodeb: "igb_uio" - driver for binding interfaces + +## ConfigMap vars +fronthaul_if_name: "enp101s0f0" - name of fronthaul interface +``` + +#### Configuration +The `gNodeB` is configured automatically during the deployment. + +### Time synchronization over PTP for node server +#### Overview +The CERA 5G on Premises node must be synchronized with PTP to allow connection with the RRH. + +#### Prerequisites +Not every NIC supports hardware timestamping. To verify if the NIC supports hardware timestamping, run the ethtool command for the interface in use. + +Example: +```shell +ethtool -T eno3 +``` + +Sample output: +```shell +Time stamping parameters for eno3: +Capabilities: + hardware-transmit (SOF_TIMESTAMPING_TX_HARDWARE) + software-transmit (SOF_TIMESTAMPING_TX_SOFTWARE) + hardware-receive (SOF_TIMESTAMPING_RX_HARDWARE) + software-receive (SOF_TIMESTAMPING_RX_SOFTWARE) + software-system-clock (SOF_TIMESTAMPING_SOFTWARE) + hardware-raw-clock (SOF_TIMESTAMPING_RAW_HARDWARE) +PTP Hardware Clock: 0 +Hardware Transmit Timestamp Modes: + off (HWTSTAMP_TX_OFF) + on (HWTSTAMP_TX_ON) +Hardware Receive Filter Modes: + none (HWTSTAMP_FILTER_NONE) + all (HWTSTAMP_FILTER_ALL) +``` + +For software time stamping support, the parameters list should include: +- SOF_TIMESTAMPING_SOFTWARE +- SOF_TIMESTAMPING_TX_SOFTWARE +- SOF_TIMESTAMPING_RX_SOFTWARE + +For hardware time stamping support, the parameters list should include: +- SOF_TIMESTAMPING_RAW_HARDWARE +- SOF_TIMESTAMPING_TX_HARDWARE +- SOF_TIMESTAMPING_RX_HARDWARE + +GMC must be properly configured and connected to the server's ETH port. + +#### Settings +If the GMC has been properly configured and connected to the server then the node server can be synchronized. +In the `ido-converged-edge-experience-kits/openness_inventory.ini` file, `node01` should be added to `ptp_slave_group` and the content inside the `ptp_master` should be empty or commented. +```ini +[ptp_master] +#controller + +[ptp_slave_group] +node01 +``` +Server synchronization can be enabled inside `ido-converged-edge-experience-kits/openness/flavors/cera_5g_on_premise/edgenode_group.yml` file. +```yaml +ptp_sync_enable: true +``` +Edit file `ido-converged-edge-experience-kits/openness/x-oek/oek/host_vars/node01.yml` if a GMC is connected and the node server should be synchronized. + +For single node setup (this is the default mode for CERA), `ptp_port` keeps the host's interface connected to Grand Master, e.g.: +```yaml +ptp_port: "eno3" +``` + +Variable `ptp_network_transport` keeps network transport for ptp. Choose `"-4"` for default CERA setup. 
The `gm_ip` variable should contain the GMC's IP address. The Ansible scripts set the IP on the interface connected to the GMC, according to the values in the variables `ptp_port_ip` and `ptp_port_cidr`. +```yaml +# Valid options: +# -2 Select the IEEE 802.3 network transport. +# -4 Select the UDP IPv4 network transport. +ptp_network_transport: "-4" + +# Grand Master IP, e.g.: +# gm_ip: "169.254.99.9" +gm_ip: "169.254.99.9" + +# - ptp_port_ip contains a static IP for the server port connected to GMC, e.g.: +# ptp_port_ip: "169.254.99.175" +# - ptp_port_cidr - CIDR for IP from, e.g.: +# ptp_port_cidr: "24" +ptp_port_ip: "169.254.99.175" +ptp_port_cidr: "24" +``` + +#### GMC configuration + +Important settings: +- Port State: Master +- Delay Mechanism: E2E +- Network Protocol: IPv4 +- Sync Interval: 0 +- Delay Request Interval: 0 +- Pdelay Request Interval: 0 +- Announce Interval: 3 +- Announce Receipt Timeout: 3 +- Multicast/Unicast Operation: Unicast +- Negotiation: ON +- DHCP: Enable +- VLAN: Off +- Profile: Default (1588 v2) +- Two step clock: FALSE +- Clock class: 248 +- Clock accuracy: 254 +- Offset scaled log: 65535 +- Priority 1: 128 +- Priority 2: 128 +- Domain number: 0 +- Slave only: FALSE + +## Conclusion +CERA 5G On Premises deployment provides a reference implementation of how to use OpenNESS software to efficiently deploy, manage and optimize the performance of network functions and applications suited to running at the On Premises Network. With the power of Intel® architecture CPUs and the flexibility to add hardware accelerators, CERA systems can be customized for a wide range of applications. + +## Learn more +* [Building on NFVI foundation from Core to Cloud to Edge with Intel® Architecture](https://networkbuilders.intel.com/social-hub/video/building-on-nfvi-foundation-from-core-to-cloud-to-edge-with-intel-architecture) +* [Edge Software Hub](https://software.intel.com/content/www/us/en/develop/topics/iot/edge-solutions.html) +* [Solution Brief: Converged Edge Reference Architecture (CERA) for On-Premise/Outdoor](https://networkbuilders.intel.com/solutionslibrary/converged-edge-reference-architecture-cera-for-on-premise-outdoor#.XffY5ut7kfI) + +## Acronyms + +| | | +|-------------|---------------------------------------------------------------| +| AI | Artificial intelligence | +| AN | Access Network | +| CERA | Converged Edge Reference Architecture | +| CN | Core Network | +| CNF | Container Network Function | +| CommSPs | Communications Service Providers | +| DPDK | Data Plane Developer Kit | +| eNB | e-NodeB | +| EPA | Enhance Platform Awareness | +| EPC | Extended Packet Core | +| FPGA | Field Programmable Gate Array | +| GMC | Grand Master Clock | +| IPSEC | Internet Protocol Security | +| MEC | Multi-Access Edge Computing | +| OpenNESS | Open Network Edge Services Software | +| OpenVINO | Open Visual Inference and Neural Network Optimization | +| OpenVX | Open Vision Acceleration | +| OVS | Open Virtual Switch | +| PF | Physical Function | +| RAN | Radio Access Network | +| PTP | Precision Time Protocol | +| SD-WAN | Software Defined Wide Area Network | +| uCPE | Universal Customer Premises Equipment | +| UE | User Equipment | +| VF | Virtual function | +| VM | Virtual Machine | diff --git a/doc/reference-architectures/CERA-Near-Edge.md b/doc/reference-architectures/CERA-Near-Edge.md index c9a16b83..b422da83 100644 --- a/doc/reference-architectures/CERA-Near-Edge.md +++ b/doc/reference-architectures/CERA-Near-Edge.md @@ -338,13 +338,35 @@ Example: # Servers to be 
used by NTP instead of the default ones (e.g. 0.centos.pool.ntp.org) ntp_servers: ['ger.corp.intel.com'] ``` -6. Execute the `deploy_openness_for_cera.sh` script in `ido-converged-edge-experience-kits` to start OpenNESS platform deployment process by running following command: + +6. Edit file `ido-converged-edge-experience-kits/openness/flavors/cera_5g_near_edge/edgenode_group.yml` and provide correct CPU settings. + + ```yaml + tuned_vars: | + isolated_cores=1-16,25-40 + nohz=on + nohz_full=1-16,25-40 + # CPUs to be isolated (for RT procesess) + cpu_isol: "1-16,25-40" + # CPUs not to be isolate (for non-RT processes) - minimum of two OS cores necessary for controller + cpu_os: "0,17-23,24,41-47" + ``` + +7. Edit file `ido-converged-edge-experience-kits/openness/flavors/cera_5g_near_edge/controller_group.yml` and provide names of `network interfaces` that are connected to second server and number of VF's to be created. + + ```yaml + sriov: + network_interfaces: {eno1: 5, eno2: 2} + ``` + > NOTE: On various platform interfaces can have different name. For e.g `eth1` instead of `eno1`. Please verify interface name before deployment and do right changes. + +8. Execute the `deploy_openness_for_cera.sh` script in `ido-converged-edge-experience-kits` to start OpenNESS platform deployment process by running following command: ```shell - ./deploy_openness_for_cera.sh + ./deploy_openness_for_cera.sh cera_5g_near_edge ``` It might take few hours. -7. After successful OpenNESS deployment, edit again `ido-converged-edge-experience-kits/openness_inventory.ini`, change IP address to `CERA 5G CN` server. +9. After successful OpenNESS deployment, edit again `ido-converged-edge-experience-kits/openness_inventory.ini`, change IP address to `CERA 5G CN` server. ```ini [all] controller ansible_ssh_user=root ansible_host=192.168.1.109 # Second server CN @@ -357,17 +379,19 @@ Example: ``` All settings in `ido-converged-edge-experience-kits/openness/group_vars/all/10-open.yml` are the same for both servers. -8. When both servers have deployed OpenNess, login to `CERA 5G CN` server and generate `RSA ssh key`. It's required for AMF/SMF VM deployment. +10. When both servers have deployed OpenNess, login to `CERA 5G CN` server and generate `RSA ssh key`. It's required for AMF/SMF VM deployment. ```shell ssh-keygen -t rsa # Press enter key to apply default values ``` -9. Now full setup is ready for CERA deployment. +11. Now full setup is ready for CERA deployment. ### CERA Near Edge Experience Kit Deployment For CERA deployment some prerequisites have to be fulfilled. -1. Edit file `ido-converged-edge-experience-kits/group_vars/all.yml` and provide correct settings: +1. CentOS should use kernel `kernel-3.10.0-957.el7.x86_64` and have no newer kernels installed. + +2. Edit file `ido-converged-edge-experience-kits/group_vars/all.yml` and provide correct settings: Git token ```yaml @@ -389,7 +413,8 @@ For CERA deployment some prerequisites have to be fulfilled. vm_image_path: "/opt/flexcore-5g-rel/ubuntu_18.04.qcow2" ``` -2. Edit file `ido-converged-edge-experience-kits/host_vars/localhost.yml` and provide correct proxy if is required. +3. Edit file `ido-converged-edge-experience-kits/host_vars/localhost.yml` and provide correct proxy if is required. + ```yaml ### Proxy settings # Setup proxy on the machine - required if the Internet is accessible via proxy @@ -405,11 +430,11 @@ For CERA deployment some prerequisites have to be fulfilled. proxy_yum_url: "{{ proxy_os_http }}" ``` -3. 
Build all docker images required and provide all necessary binaries. +4. Build all docker images required and provide all necessary binaries. - [dUPF](#dUPF) - [UPF](#UPF) - [AMF-SMF](#AMF-SMF) -4. Set all necessary settings for `CERA 5G NE` in `ido-converged-edge-experience-kits/host_vars/cera_5g_ne.yml`. +5. Set all necessary settings for `CERA 5G NE` in `ido-converged-edge-experience-kits/host_vars/cera_5g_ne.yml`. See [more details](#dUPF) for dUPF configuration ```yaml # Define PCI addresses (xxxx:xx:xx.x format) for i-upf @@ -439,7 +464,7 @@ For CERA deployment some prerequisites have to be fulfilled. save_video: "enable" target_device: "CPU" ``` -5. Set all necessary settings for `CERA 5G CN` in `ido-converged-edge-experience-kits/host_vars/cera_5g_cn.yml`. +7. Set all necessary settings for `CERA 5G CN` in `ido-converged-edge-experience-kits/host_vars/cera_5g_cn.yml`. For more details check: - [UPF](#UPF) - [AMF-SMF](#AMF-SMF) @@ -457,6 +482,10 @@ For CERA deployment some prerequisites have to be fulfilled. package_name_5gc: "5gc" ``` ```yaml + # psa-upf directory path + upf_binaries_path: '/opt/flexcore-5g-rel/psa-upf/' + ``` + ```yaml ## AMF-SMF vars # Define N2/N4 @@ -475,13 +504,13 @@ For CERA deployment some prerequisites have to be fulfilled. # PF interface name of N4, N6, N9 created VFs host_if_name_N4_N6_n9: "eno1" ``` -6. Provide correct IP for target servers in file `ido-converged-edge-experience-kits/cera_inventory.ini` +8. Provide correct IP for target servers in file `ido-converged-edge-experience-kits/cera_inventory.ini` ```ini [all] cera_5g_ne ansible_ssh_user=root ansible_host=192.168.1.109 cera_5g_cn ansible_ssh_user=root ansible_host=192.168.1.43 ``` -6. Deploy CERA Experience Kit +9. Deploy CERA Experience Kit ```shell ./deploy_cera.sh ``` @@ -501,7 +530,7 @@ The `CERA dUPF` component is deployed on `CERA 5G Near Edge (cera_5g_ne)` node. #### Prerequisites -To deploy dUPF correctly it is needed to provide Docker image to Docker repository on target machine. There is a script on the `open-ness/eddgeapps/network-functions/core-network/5G/UPF` repo provided by CERA , which builds the image automatically. +To deploy dUPF correctly it is needed to provide Docker image to Docker repository on target machine(cera_5g_ne). There is a script on the `open-ness/eddgeapps/network-functions/core-network/5G/UPF` repo provided by CERA, which builds the image automatically. #### Settings Following variables need to be defined in `/host_vars/cera_5g_ne.yml` @@ -533,7 +562,7 @@ The CERA UPF component is deployed on `CERA 5G Core Network (cera_5g_cn)` node. #### Prerequisites -To deploy UPF correctly it is needed to provide a Docker image to Docker Repository on target machine. There is a script on the `open-ness/eddgeapps/network-functions/core-network/5G/UPF` repo provided by CERA , which builds the image automatically. +To deploy UPF correctly it is needed to provide a Docker image to Docker Repository on target machine(cera_5g_ne and cera_5g_cn). There is a script on the `open-ness/eddgeapps/network-functions/core-network/5G/UPF` repo provided by CERA, which builds the image automatically. #### Settings @@ -681,6 +710,8 @@ Steps to do on logged Guest OS After these steps there will be available `.qcow2` image generated by installed Virtual Machine in `/var/lib/libvirt/images` directory. +If AMF-SMF is not working correctly installing these packages should fix it: `qemu-guest-agent,iputils-ping,iproute2,screen,libpcap-dev,tcpdump,libsctp-dev,apache2,python-pip,sudo,ssh`. 
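+
+A minimal sketch of installing those packages inside the guest OS is shown below; it assumes the AMF-SMF VM runs the Ubuntu 18.04 image referenced earlier in this guide, so the `apt-get` package manager is used.
+```shell
+# Run inside the AMF-SMF guest VM (assumes an Ubuntu 18.04 guest).
+sudo apt-get update
+sudo apt-get install -y qemu-guest-agent iputils-ping iproute2 screen \
+  libpcap-dev tcpdump libsctp-dev apache2 python-pip sudo ssh
+```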
+ ### Remote-DN #### Overview @@ -742,6 +773,11 @@ Where: wget https://storage.googleapis.com/coverr-main/zip/Rainy_Street.zip ``` +The OpenVINO demo, saves its output to saved_video/ov-output.mjpeg file on the cera_5g_cn machine. + +- To stop the OpenVINO demo and interrupt creating the output video file - run on the cera_5g_cn: kubectl delete -f /opt/openvino/yamls/openvino.yaml +- To start the OpenVINO demo and start creating the output video file (use this command if ov-openvino pod does not exist) - run on the cera_5g_cn: kubectl apply -f /opt/openvino/yamls/openvino.yaml + ### EIS Deployment of EIS is completely automated, so there is no need to set or configure anything except providing release package archive. ```yaml diff --git a/doc/reference-architectures/cera-on-prem-images/3gpp_on_prem.png b/doc/reference-architectures/cera-on-prem-images/3gpp_on_prem.png new file mode 100644 index 00000000..f42ca993 Binary files /dev/null and b/doc/reference-architectures/cera-on-prem-images/3gpp_on_prem.png differ diff --git a/doc/reference-architectures/cera-on-prem-images/cera-full-setup.png b/doc/reference-architectures/cera-on-prem-images/cera-full-setup.png new file mode 100755 index 00000000..49750f55 Binary files /dev/null and b/doc/reference-architectures/cera-on-prem-images/cera-full-setup.png differ diff --git a/doc/reference-architectures/cera-on-prem-images/cera-near-edge-orchestration-domains.png b/doc/reference-architectures/cera-on-prem-images/cera-near-edge-orchestration-domains.png new file mode 100644 index 00000000..0f9f93f5 Binary files /dev/null and b/doc/reference-architectures/cera-on-prem-images/cera-near-edge-orchestration-domains.png differ diff --git a/doc/reference-architectures/cera-on-prem-images/cera-on-prem-arch.png b/doc/reference-architectures/cera-on-prem-images/cera-on-prem-arch.png new file mode 100644 index 00000000..5d3f4385 Binary files /dev/null and b/doc/reference-architectures/cera-on-prem-images/cera-on-prem-arch.png differ diff --git a/doc/reference-architectures/cera-on-prem-images/cera_deployment.png b/doc/reference-architectures/cera-on-prem-images/cera_deployment.png new file mode 100644 index 00000000..0a544163 Binary files /dev/null and b/doc/reference-architectures/cera-on-prem-images/cera_deployment.png differ diff --git a/doc/reference-architectures/cera-on-prem-images/image-20200826-122458.png b/doc/reference-architectures/cera-on-prem-images/image-20200826-122458.png new file mode 100644 index 00000000..1b25e9e0 Binary files /dev/null and b/doc/reference-architectures/cera-on-prem-images/image-20200826-122458.png differ diff --git a/doc/reference-architectures/cera-on-prem-images/network_locations.png b/doc/reference-architectures/cera-on-prem-images/network_locations.png new file mode 100644 index 00000000..9391c97f Binary files /dev/null and b/doc/reference-architectures/cera-on-prem-images/network_locations.png differ diff --git a/doc/core-network/5g-nsa-images/5g-nsa.png b/doc/reference-architectures/core-network/5g-nsa-images/5g-nsa.png similarity index 100% rename from doc/core-network/5g-nsa-images/5g-nsa.png rename to doc/reference-architectures/core-network/5g-nsa-images/5g-nsa.png diff --git a/doc/core-network/5g-nsa-images/distributed-epc.png b/doc/reference-architectures/core-network/5g-nsa-images/distributed-epc.png similarity index 100% rename from doc/core-network/5g-nsa-images/distributed-epc.png rename to doc/reference-architectures/core-network/5g-nsa-images/distributed-epc.png diff --git 
a/doc/core-network/5g-nsa-images/distributed-spgw.png b/doc/reference-architectures/core-network/5g-nsa-images/distributed-spgw.png similarity index 100% rename from doc/core-network/5g-nsa-images/distributed-spgw.png rename to doc/reference-architectures/core-network/5g-nsa-images/distributed-spgw.png diff --git a/doc/core-network/5g-nsa-images/openness-nsa-depc.png b/doc/reference-architectures/core-network/5g-nsa-images/openness-nsa-depc.png similarity index 100% rename from doc/core-network/5g-nsa-images/openness-nsa-depc.png rename to doc/reference-architectures/core-network/5g-nsa-images/openness-nsa-depc.png diff --git a/doc/core-network/5g-nsa-images/option-3.png b/doc/reference-architectures/core-network/5g-nsa-images/option-3.png similarity index 100% rename from doc/core-network/5g-nsa-images/option-3.png rename to doc/reference-architectures/core-network/5g-nsa-images/option-3.png diff --git a/doc/core-network/5g-nsa-images/option-3a.png b/doc/reference-architectures/core-network/5g-nsa-images/option-3a.png similarity index 100% rename from doc/core-network/5g-nsa-images/option-3a.png rename to doc/reference-architectures/core-network/5g-nsa-images/option-3a.png diff --git a/doc/core-network/5g-nsa-images/option-3x-4g-coverage-1.png b/doc/reference-architectures/core-network/5g-nsa-images/option-3x-4g-coverage-1.png similarity index 100% rename from doc/core-network/5g-nsa-images/option-3x-4g-coverage-1.png rename to doc/reference-architectures/core-network/5g-nsa-images/option-3x-4g-coverage-1.png diff --git a/doc/core-network/5g-nsa-images/option-3x-4g-coverage-2.png b/doc/reference-architectures/core-network/5g-nsa-images/option-3x-4g-coverage-2.png similarity index 100% rename from doc/core-network/5g-nsa-images/option-3x-4g-coverage-2.png rename to doc/reference-architectures/core-network/5g-nsa-images/option-3x-4g-coverage-2.png diff --git a/doc/core-network/5g-nsa-images/option-3x-5g-coverage.png b/doc/reference-architectures/core-network/5g-nsa-images/option-3x-5g-coverage.png similarity index 100% rename from doc/core-network/5g-nsa-images/option-3x-5g-coverage.png rename to doc/reference-architectures/core-network/5g-nsa-images/option-3x-5g-coverage.png diff --git a/doc/core-network/5g-nsa-images/option-3x.png b/doc/reference-architectures/core-network/5g-nsa-images/option-3x.png similarity index 100% rename from doc/core-network/5g-nsa-images/option-3x.png rename to doc/reference-architectures/core-network/5g-nsa-images/option-3x.png diff --git a/doc/core-network/5g-nsa-images/sgw-lbo.png b/doc/reference-architectures/core-network/5g-nsa-images/sgw-lbo.png similarity index 100% rename from doc/core-network/5g-nsa-images/sgw-lbo.png rename to doc/reference-architectures/core-network/5g-nsa-images/sgw-lbo.png diff --git a/doc/core-network/epc-images/Openness_highlevel.png b/doc/reference-architectures/core-network/epc-images/Openness_highlevel.png similarity index 100% rename from doc/core-network/epc-images/Openness_highlevel.png rename to doc/reference-architectures/core-network/epc-images/Openness_highlevel.png diff --git a/doc/core-network/epc-images/openness_epc1.png b/doc/reference-architectures/core-network/epc-images/openness_epc1.png similarity index 100% rename from doc/core-network/epc-images/openness_epc1.png rename to doc/reference-architectures/core-network/epc-images/openness_epc1.png diff --git a/doc/core-network/epc-images/openness_epc2.png b/doc/reference-architectures/core-network/epc-images/openness_epc2.png similarity index 100% rename from 
doc/core-network/epc-images/openness_epc2.png rename to doc/reference-architectures/core-network/epc-images/openness_epc2.png diff --git a/doc/core-network/epc-images/openness_epc3.png b/doc/reference-architectures/core-network/epc-images/openness_epc3.png similarity index 100% rename from doc/core-network/epc-images/openness_epc3.png rename to doc/reference-architectures/core-network/epc-images/openness_epc3.png diff --git a/doc/core-network/epc-images/openness_epc_cnca_1.png b/doc/reference-architectures/core-network/epc-images/openness_epc_cnca_1.png similarity index 100% rename from doc/core-network/epc-images/openness_epc_cnca_1.png rename to doc/reference-architectures/core-network/epc-images/openness_epc_cnca_1.png diff --git a/doc/core-network/epc-images/openness_epcconfig.png b/doc/reference-architectures/core-network/epc-images/openness_epcconfig.png similarity index 100% rename from doc/core-network/epc-images/openness_epcconfig.png rename to doc/reference-architectures/core-network/epc-images/openness_epcconfig.png diff --git a/doc/core-network/epc-images/openness_epctest1.png b/doc/reference-architectures/core-network/epc-images/openness_epctest1.png similarity index 100% rename from doc/core-network/epc-images/openness_epctest1.png rename to doc/reference-architectures/core-network/epc-images/openness_epctest1.png diff --git a/doc/core-network/epc-images/openness_epctest2.png b/doc/reference-architectures/core-network/epc-images/openness_epctest2.png similarity index 100% rename from doc/core-network/epc-images/openness_epctest2.png rename to doc/reference-architectures/core-network/epc-images/openness_epctest2.png diff --git a/doc/core-network/epc-images/openness_epctest3.png b/doc/reference-architectures/core-network/epc-images/openness_epctest3.png similarity index 100% rename from doc/core-network/epc-images/openness_epctest3.png rename to doc/reference-architectures/core-network/epc-images/openness_epctest3.png diff --git a/doc/core-network/epc-images/openness_epctest4.png b/doc/reference-architectures/core-network/epc-images/openness_epctest4.png similarity index 100% rename from doc/core-network/epc-images/openness_epctest4.png rename to doc/reference-architectures/core-network/epc-images/openness_epctest4.png diff --git a/doc/core-network/epc-images/openness_epcupf_add.png b/doc/reference-architectures/core-network/epc-images/openness_epcupf_add.png similarity index 100% rename from doc/core-network/epc-images/openness_epcupf_add.png rename to doc/reference-architectures/core-network/epc-images/openness_epcupf_add.png diff --git a/doc/core-network/epc-images/openness_epcupf_del.png b/doc/reference-architectures/core-network/epc-images/openness_epcupf_del.png similarity index 100% rename from doc/core-network/epc-images/openness_epcupf_del.png rename to doc/reference-architectures/core-network/epc-images/openness_epcupf_del.png diff --git a/doc/core-network/epc-images/openness_epcupf_get.png b/doc/reference-architectures/core-network/epc-images/openness_epcupf_get.png similarity index 100% rename from doc/core-network/epc-images/openness_epcupf_get.png rename to doc/reference-architectures/core-network/epc-images/openness_epcupf_get.png diff --git a/doc/core-network/index.html b/doc/reference-architectures/core-network/index.html similarity index 100% rename from doc/core-network/index.html rename to doc/reference-architectures/core-network/index.html diff --git a/doc/core-network/ngc-images/5g_edge_data_paths.png 
b/doc/reference-architectures/core-network/ngc-images/5g_edge_data_paths.png similarity index 100% rename from doc/core-network/ngc-images/5g_edge_data_paths.png rename to doc/reference-architectures/core-network/ngc-images/5g_edge_data_paths.png diff --git a/doc/core-network/ngc-images/5g_edge_deployment_scenario1.png b/doc/reference-architectures/core-network/ngc-images/5g_edge_deployment_scenario1.png similarity index 100% rename from doc/core-network/ngc-images/5g_edge_deployment_scenario1.png rename to doc/reference-architectures/core-network/ngc-images/5g_edge_deployment_scenario1.png diff --git a/doc/core-network/ngc-images/5g_edge_deployment_scenario2.png b/doc/reference-architectures/core-network/ngc-images/5g_edge_deployment_scenario2.png similarity index 100% rename from doc/core-network/ngc-images/5g_edge_deployment_scenario2.png rename to doc/reference-architectures/core-network/ngc-images/5g_edge_deployment_scenario2.png diff --git a/doc/core-network/ngc-images/5g_edge_deployment_scenario3.png b/doc/reference-architectures/core-network/ngc-images/5g_edge_deployment_scenario3.png similarity index 100% rename from doc/core-network/ngc-images/5g_edge_deployment_scenario3.png rename to doc/reference-architectures/core-network/ngc-images/5g_edge_deployment_scenario3.png diff --git a/doc/core-network/ngc-images/5g_openess_components.png b/doc/reference-architectures/core-network/ngc-images/5g_openess_components.png similarity index 100% rename from doc/core-network/ngc-images/5g_openess_components.png rename to doc/reference-architectures/core-network/ngc-images/5g_openess_components.png diff --git a/doc/core-network/ngc-images/5g_openess_microservices.png b/doc/reference-architectures/core-network/ngc-images/5g_openess_microservices.png similarity index 100% rename from doc/core-network/ngc-images/5g_openess_microservices.png rename to doc/reference-architectures/core-network/ngc-images/5g_openess_microservices.png diff --git a/doc/core-network/ngc-images/5g_system_architecture.png b/doc/reference-architectures/core-network/ngc-images/5g_system_architecture.png similarity index 100% rename from doc/core-network/ngc-images/5g_system_architecture.png rename to doc/reference-architectures/core-network/ngc-images/5g_system_architecture.png diff --git a/doc/core-network/ngc-images/AF_Policy_Authorization_Notif.png b/doc/reference-architectures/core-network/ngc-images/AF_Policy_Authorization_Notif.png similarity index 100% rename from doc/core-network/ngc-images/AF_Policy_Authorization_Notif.png rename to doc/reference-architectures/core-network/ngc-images/AF_Policy_Authorization_Notif.png diff --git a/doc/core-network/ngc-images/AF_Policy_Authorization_Notif_Terminate.png b/doc/reference-architectures/core-network/ngc-images/AF_Policy_Authorization_Notif_Terminate.png similarity index 100% rename from doc/core-network/ngc-images/AF_Policy_Authorization_Notif_Terminate.png rename to doc/reference-architectures/core-network/ngc-images/AF_Policy_Authorization_Notif_Terminate.png diff --git a/doc/core-network/ngc-images/AF_Policy_Authorization_Notification.png b/doc/reference-architectures/core-network/ngc-images/AF_Policy_Authorization_Notification.png similarity index 100% rename from doc/core-network/ngc-images/AF_Policy_Authorization_Notification.png rename to doc/reference-architectures/core-network/ngc-images/AF_Policy_Authorization_Notification.png diff --git a/doc/core-network/ngc-images/AF_Policy_Authorization_create.png 
b/doc/reference-architectures/core-network/ngc-images/AF_Policy_Authorization_create.png similarity index 100% rename from doc/core-network/ngc-images/AF_Policy_Authorization_create.png rename to doc/reference-architectures/core-network/ngc-images/AF_Policy_Authorization_create.png diff --git a/doc/core-network/ngc-images/AF_Policy_Authorization_delete.png b/doc/reference-architectures/core-network/ngc-images/AF_Policy_Authorization_delete.png similarity index 100% rename from doc/core-network/ngc-images/AF_Policy_Authorization_delete.png rename to doc/reference-architectures/core-network/ngc-images/AF_Policy_Authorization_delete.png diff --git a/doc/core-network/ngc-images/AF_Policy_Authorization_event_subscription_delete.png b/doc/reference-architectures/core-network/ngc-images/AF_Policy_Authorization_event_subscription_delete.png similarity index 100% rename from doc/core-network/ngc-images/AF_Policy_Authorization_event_subscription_delete.png rename to doc/reference-architectures/core-network/ngc-images/AF_Policy_Authorization_event_subscription_delete.png diff --git a/doc/core-network/ngc-images/AF_Policy_Authorization_event_subscription_put.png b/doc/reference-architectures/core-network/ngc-images/AF_Policy_Authorization_event_subscription_put.png similarity index 100% rename from doc/core-network/ngc-images/AF_Policy_Authorization_event_subscription_put.png rename to doc/reference-architectures/core-network/ngc-images/AF_Policy_Authorization_event_subscription_put.png diff --git a/doc/core-network/ngc-images/AF_Policy_Authorization_get.png b/doc/reference-architectures/core-network/ngc-images/AF_Policy_Authorization_get.png similarity index 100% rename from doc/core-network/ngc-images/AF_Policy_Authorization_get.png rename to doc/reference-architectures/core-network/ngc-images/AF_Policy_Authorization_get.png diff --git a/doc/core-network/ngc-images/AF_Policy_Authorization_patch.png b/doc/reference-architectures/core-network/ngc-images/AF_Policy_Authorization_patch.png similarity index 100% rename from doc/core-network/ngc-images/AF_Policy_Authorization_patch.png rename to doc/reference-architectures/core-network/ngc-images/AF_Policy_Authorization_patch.png diff --git a/doc/core-network/ngc-images/AF_Traffic_Influence_Notification.png b/doc/reference-architectures/core-network/ngc-images/AF_Traffic_Influence_Notification.png similarity index 100% rename from doc/core-network/ngc-images/AF_Traffic_Influence_Notification.png rename to doc/reference-architectures/core-network/ngc-images/AF_Traffic_Influence_Notification.png diff --git a/doc/core-network/ngc-images/AF_traffic_influence_add.png b/doc/reference-architectures/core-network/ngc-images/AF_traffic_influence_add.png similarity index 100% rename from doc/core-network/ngc-images/AF_traffic_influence_add.png rename to doc/reference-architectures/core-network/ngc-images/AF_traffic_influence_add.png diff --git a/doc/core-network/ngc-images/AF_traffic_influence_delete.png b/doc/reference-architectures/core-network/ngc-images/AF_traffic_influence_delete.png similarity index 100% rename from doc/core-network/ngc-images/AF_traffic_influence_delete.png rename to doc/reference-architectures/core-network/ngc-images/AF_traffic_influence_delete.png diff --git a/doc/core-network/ngc-images/AF_traffic_influence_get.png b/doc/reference-architectures/core-network/ngc-images/AF_traffic_influence_get.png similarity index 100% rename from doc/core-network/ngc-images/AF_traffic_influence_get.png rename to 
doc/reference-architectures/core-network/ngc-images/AF_traffic_influence_get.png diff --git a/doc/core-network/ngc-images/AF_traffic_influence_update.png b/doc/reference-architectures/core-network/ngc-images/AF_traffic_influence_update.png similarity index 100% rename from doc/core-network/ngc-images/AF_traffic_influence_update.png rename to doc/reference-architectures/core-network/ngc-images/AF_traffic_influence_update.png diff --git a/doc/core-network/ngc-images/OAuth2Flow.png b/doc/reference-architectures/core-network/ngc-images/OAuth2Flow.png similarity index 100% rename from doc/core-network/ngc-images/OAuth2Flow.png rename to doc/reference-architectures/core-network/ngc-images/OAuth2Flow.png diff --git a/doc/core-network/ngc-images/PFD_Management_transaction_delete.png b/doc/reference-architectures/core-network/ngc-images/PFD_Management_transaction_delete.png similarity index 100% rename from doc/core-network/ngc-images/PFD_Management_transaction_delete.png rename to doc/reference-architectures/core-network/ngc-images/PFD_Management_transaction_delete.png diff --git a/doc/core-network/ngc-images/PFD_Management_transaction_get.png b/doc/reference-architectures/core-network/ngc-images/PFD_Management_transaction_get.png similarity index 100% rename from doc/core-network/ngc-images/PFD_Management_transaction_get.png rename to doc/reference-architectures/core-network/ngc-images/PFD_Management_transaction_get.png diff --git a/doc/core-network/ngc-images/PFD_Managment_transaction_add.png b/doc/reference-architectures/core-network/ngc-images/PFD_Managment_transaction_add.png similarity index 100% rename from doc/core-network/ngc-images/PFD_Managment_transaction_add.png rename to doc/reference-architectures/core-network/ngc-images/PFD_Managment_transaction_add.png diff --git a/doc/core-network/ngc-images/PFD_management_transaction_update.png b/doc/reference-architectures/core-network/ngc-images/PFD_management_transaction_update.png similarity index 100% rename from doc/core-network/ngc-images/PFD_management_transaction_update.png rename to doc/reference-architectures/core-network/ngc-images/PFD_management_transaction_update.png diff --git a/doc/core-network/ngc-images/cntf_in_5G_ref_architecture.png b/doc/reference-architectures/core-network/ngc-images/cntf_in_5G_ref_architecture.png similarity index 100% rename from doc/core-network/ngc-images/cntf_in_5G_ref_architecture.png rename to doc/reference-architectures/core-network/ngc-images/cntf_in_5G_ref_architecture.png diff --git a/doc/core-network/ngc-images/e2e_pfd_pa.png b/doc/reference-architectures/core-network/ngc-images/e2e_pfd_pa.png similarity index 100% rename from doc/core-network/ngc-images/e2e_pfd_pa.png rename to doc/reference-architectures/core-network/ngc-images/e2e_pfd_pa.png diff --git a/doc/core-network/ngc-images/e2e_pfd_tif.png b/doc/reference-architectures/core-network/ngc-images/e2e_pfd_tif.png similarity index 100% rename from doc/core-network/ngc-images/e2e_pfd_tif.png rename to doc/reference-architectures/core-network/ngc-images/e2e_pfd_tif.png diff --git a/doc/core-network/ngc-images/e2e_tif.png b/doc/reference-architectures/core-network/ngc-images/e2e_tif.png similarity index 100% rename from doc/core-network/ngc-images/e2e_tif.png rename to doc/reference-architectures/core-network/ngc-images/e2e_tif.png diff --git a/doc/core-network/ngc-images/ngcoam_af_service_add.png b/doc/reference-architectures/core-network/ngc-images/ngcoam_af_service_add.png similarity index 100% rename from 
doc/core-network/ngc-images/ngcoam_af_service_add.png rename to doc/reference-architectures/core-network/ngc-images/ngcoam_af_service_add.png diff --git a/doc/core-network/ngc-images/ngcoam_af_service_delete.png b/doc/reference-architectures/core-network/ngc-images/ngcoam_af_service_delete.png similarity index 100% rename from doc/core-network/ngc-images/ngcoam_af_service_delete.png rename to doc/reference-architectures/core-network/ngc-images/ngcoam_af_service_delete.png diff --git a/doc/core-network/ngc-images/ngcoam_af_service_get.png b/doc/reference-architectures/core-network/ngc-images/ngcoam_af_service_get.png similarity index 100% rename from doc/core-network/ngc-images/ngcoam_af_service_get.png rename to doc/reference-architectures/core-network/ngc-images/ngcoam_af_service_get.png diff --git a/doc/core-network/ngc-images/ngcoam_af_service_update.png b/doc/reference-architectures/core-network/ngc-images/ngcoam_af_service_update.png similarity index 100% rename from doc/core-network/ngc-images/ngcoam_af_service_update.png rename to doc/reference-architectures/core-network/ngc-images/ngcoam_af_service_update.png diff --git a/doc/core-network/ngc_flows/AF_Policy_Authorization_Notif.uml b/doc/reference-architectures/core-network/ngc_flows/AF_Policy_Authorization_Notif.uml similarity index 100% rename from doc/core-network/ngc_flows/AF_Policy_Authorization_Notif.uml rename to doc/reference-architectures/core-network/ngc_flows/AF_Policy_Authorization_Notif.uml diff --git a/doc/core-network/ngc_flows/AF_Policy_Authorization_Notif_Terminate.uml b/doc/reference-architectures/core-network/ngc_flows/AF_Policy_Authorization_Notif_Terminate.uml similarity index 100% rename from doc/core-network/ngc_flows/AF_Policy_Authorization_Notif_Terminate.uml rename to doc/reference-architectures/core-network/ngc_flows/AF_Policy_Authorization_Notif_Terminate.uml diff --git a/doc/core-network/ngc_flows/AF_Policy_Authorization_Notification.uml b/doc/reference-architectures/core-network/ngc_flows/AF_Policy_Authorization_Notification.uml similarity index 100% rename from doc/core-network/ngc_flows/AF_Policy_Authorization_Notification.uml rename to doc/reference-architectures/core-network/ngc_flows/AF_Policy_Authorization_Notification.uml diff --git a/doc/core-network/ngc_flows/AF_Policy_Authorization_create.uml b/doc/reference-architectures/core-network/ngc_flows/AF_Policy_Authorization_create.uml similarity index 100% rename from doc/core-network/ngc_flows/AF_Policy_Authorization_create.uml rename to doc/reference-architectures/core-network/ngc_flows/AF_Policy_Authorization_create.uml diff --git a/doc/core-network/ngc_flows/AF_Policy_Authorization_delete.uml b/doc/reference-architectures/core-network/ngc_flows/AF_Policy_Authorization_delete.uml similarity index 100% rename from doc/core-network/ngc_flows/AF_Policy_Authorization_delete.uml rename to doc/reference-architectures/core-network/ngc_flows/AF_Policy_Authorization_delete.uml diff --git a/doc/core-network/ngc_flows/AF_Policy_Authorization_event_subscription_delete.uml b/doc/reference-architectures/core-network/ngc_flows/AF_Policy_Authorization_event_subscription_delete.uml similarity index 100% rename from doc/core-network/ngc_flows/AF_Policy_Authorization_event_subscription_delete.uml rename to doc/reference-architectures/core-network/ngc_flows/AF_Policy_Authorization_event_subscription_delete.uml diff --git a/doc/core-network/ngc_flows/AF_Policy_Authorization_event_subscription_put.uml 
b/doc/reference-architectures/core-network/ngc_flows/AF_Policy_Authorization_event_subscription_put.uml similarity index 100% rename from doc/core-network/ngc_flows/AF_Policy_Authorization_event_subscription_put.uml rename to doc/reference-architectures/core-network/ngc_flows/AF_Policy_Authorization_event_subscription_put.uml diff --git a/doc/core-network/ngc_flows/AF_Policy_Authorization_get.uml b/doc/reference-architectures/core-network/ngc_flows/AF_Policy_Authorization_get.uml similarity index 100% rename from doc/core-network/ngc_flows/AF_Policy_Authorization_get.uml rename to doc/reference-architectures/core-network/ngc_flows/AF_Policy_Authorization_get.uml diff --git a/doc/core-network/ngc_flows/AF_Policy_Authorization_patch.uml b/doc/reference-architectures/core-network/ngc_flows/AF_Policy_Authorization_patch.uml similarity index 100% rename from doc/core-network/ngc_flows/AF_Policy_Authorization_patch.uml rename to doc/reference-architectures/core-network/ngc_flows/AF_Policy_Authorization_patch.uml diff --git a/doc/core-network/ngc_flows/AF_Traffic_Influence_Notification.uml b/doc/reference-architectures/core-network/ngc_flows/AF_Traffic_Influence_Notification.uml similarity index 100% rename from doc/core-network/ngc_flows/AF_Traffic_Influence_Notification.uml rename to doc/reference-architectures/core-network/ngc_flows/AF_Traffic_Influence_Notification.uml diff --git a/doc/core-network/ngc_flows/AF_traffic_influence_add.uml b/doc/reference-architectures/core-network/ngc_flows/AF_traffic_influence_add.uml similarity index 96% rename from doc/core-network/ngc_flows/AF_traffic_influence_add.uml rename to doc/reference-architectures/core-network/ngc_flows/AF_traffic_influence_add.uml index 0d0d8eba..148a6a46 100644 --- a/doc/core-network/ngc_flows/AF_traffic_influence_add.uml +++ b/doc/reference-architectures/core-network/ngc_flows/AF_traffic_influence_add.uml @@ -1,50 +1,50 @@ -@startuml -/' SPDX-License-Identifier: Apache-2.0 - Copyright (c) 2020 Intel Corporation -'/ -skinparam monochrome false -skinparam roundcorner 20 -skinparam defaultFontName "Intel Clear" -skinparam defaultFontSize 20 -skinparam maxmessagesize 300 -skinparam sequenceArrowThickness 3 - -header Intel Corporation -footer Proprietary and Confidential -title Traffic influencing flows between OpenNESS controller and 5G Core - -actor "User/Admin" as user -box "OpenNESS Controller components" #LightBlue - participant "UI/CLI" as cnca - participant "AF Microservice" as af -end box -box "5G Core components" #LightGreen - participant "NEF" as nef - note over nef - OpenNESS provided - Core component with - limited functionality - end note - participant "NGC\nCP Functions" as ngccp -end box - -group Traffic influence submission flow - user -> cnca : Traffic influencing request - activate cnca - cnca -> af : /af/v1/subscriptions: POST \n {3GPP TS 29.522v15.3 \n Sec. 5.4}* - activate af - af -> nef : /3gpp-traffic-Influence/v1/{afId}/subscriptions : POST \n {3GPP TS 29.522v15.3 \n Sec. 
5.4} - activate nef - - nef -> ngccp : {Open: 3rd party NGC integration with OpenNESS(NEF)} - ngccp --> nef : - nef --> af : OK: {subscriptionId} \n ERROR: {400/500} - deactivate nef - af --> cnca : OK: {subscriptionId} \n ERROR: {400/500} - deactivate af - cnca --> user : Success: {subscriptionId} - deactivate cnca -end group - -@enduml - +@startuml +/' SPDX-License-Identifier: Apache-2.0 + Copyright (c) 2020 Intel Corporation +'/ +skinparam monochrome false +skinparam roundcorner 20 +skinparam defaultFontName "Intel Clear" +skinparam defaultFontSize 20 +skinparam maxmessagesize 300 +skinparam sequenceArrowThickness 3 + +header Intel Corporation +footer Proprietary and Confidential +title Traffic influencing flows between OpenNESS controller and 5G Core + +actor "User/Admin" as user +box "OpenNESS Controller components" #LightBlue + participant "UI/CLI" as cnca + participant "AF Microservice" as af +end box +box "5G Core components" #LightGreen + participant "NEF" as nef + note over nef + OpenNESS provided + Core component with + limited functionality + end note + participant "NGC\nCP Functions" as ngccp +end box + +group Traffic influence submission flow + user -> cnca : Traffic influencing request + activate cnca + cnca -> af : /af/v1/subscriptions: POST \n {3GPP TS 29.522v15.3 \n Sec. 5.4}* + activate af + af -> nef : /3gpp-traffic-Influence/v1/{afId}/subscriptions : POST \n {3GPP TS 29.522v15.3 \n Sec. 5.4} + activate nef + + nef -> ngccp : {Open: 3rd party NGC integration with OpenNESS(NEF)} + ngccp --> nef : + nef --> af : OK: {subscriptionId} \n ERROR: {400/500} + deactivate nef + af --> cnca : OK: {subscriptionId} \n ERROR: {400/500} + deactivate af + cnca --> user : Success: {subscriptionId} + deactivate cnca +end group + +@enduml + diff --git a/doc/core-network/ngc_flows/AF_traffic_influence_delete.uml b/doc/reference-architectures/core-network/ngc_flows/AF_traffic_influence_delete.uml similarity index 96% rename from doc/core-network/ngc_flows/AF_traffic_influence_delete.uml rename to doc/reference-architectures/core-network/ngc_flows/AF_traffic_influence_delete.uml index 6fb3a66d..22172586 100644 --- a/doc/core-network/ngc_flows/AF_traffic_influence_delete.uml +++ b/doc/reference-architectures/core-network/ngc_flows/AF_traffic_influence_delete.uml @@ -1,51 +1,51 @@ -@startuml -/' SPDX-License-Identifier: Apache-2.0 - Copyright (c) 2020 Intel Corporation -'/ -skinparam monochrome false -skinparam roundcorner 20 -skinparam defaultFontName "Intel Clear" -skinparam defaultFontSize 20 -skinparam maxmessagesize 300 -skinparam sequenceArrowThickness 3 - -header Intel Corporation -footer Proprietary and Confidential -title Traffic influencing flows between OpenNESS controller and 5G Core - -actor "User/Admin" as user -box "OpenNESS Controller components" #LightBlue - participant "UI/CLI" as cnca - participant "AF Microservice" as af -end box -box "5G Core components" #LightGreen - participant "NEF" as nef - note over nef - OpenNESS provided - Core component with - limited functionality - end note - participant "NGC\nCP Functions" as ngccp -end box - - -group Delete a subscribed traffic influence by subscriptionId - user -> cnca : Delete request by subscriptionId - activate cnca - cnca -> af : /af/v1/subscriptions/{subscriptionId} : DELETE - activate af - af -> nef : /3gpp-traffic-Influence/v1/{afId}/subscriptions/{subscriptionId} : DELETE - activate nef - - nef -> ngccp : {Open: 3rd party NGC integration with OpenNESS(NEF)} - ngccp --> nef : - nef --> af : OK : Delete success \n 
ERROR: {400/500} - deactivate nef - af --> cnca : OK : Delete success \n ERROR: {400/500} - deactivate af - cnca --> user : Success/Error - deactivate cnca -end group - -@enduml - +@startuml +/' SPDX-License-Identifier: Apache-2.0 + Copyright (c) 2020 Intel Corporation +'/ +skinparam monochrome false +skinparam roundcorner 20 +skinparam defaultFontName "Intel Clear" +skinparam defaultFontSize 20 +skinparam maxmessagesize 300 +skinparam sequenceArrowThickness 3 + +header Intel Corporation +footer Proprietary and Confidential +title Traffic influencing flows between OpenNESS controller and 5G Core + +actor "User/Admin" as user +box "OpenNESS Controller components" #LightBlue + participant "UI/CLI" as cnca + participant "AF Microservice" as af +end box +box "5G Core components" #LightGreen + participant "NEF" as nef + note over nef + OpenNESS provided + Core component with + limited functionality + end note + participant "NGC\nCP Functions" as ngccp +end box + + +group Delete a subscribed traffic influence by subscriptionId + user -> cnca : Delete request by subscriptionId + activate cnca + cnca -> af : /af/v1/subscriptions/{subscriptionId} : DELETE + activate af + af -> nef : /3gpp-traffic-Influence/v1/{afId}/subscriptions/{subscriptionId} : DELETE + activate nef + + nef -> ngccp : {Open: 3rd party NGC integration with OpenNESS(NEF)} + ngccp --> nef : + nef --> af : OK : Delete success \n ERROR: {400/500} + deactivate nef + af --> cnca : OK : Delete success \n ERROR: {400/500} + deactivate af + cnca --> user : Success/Error + deactivate cnca +end group + +@enduml + diff --git a/doc/core-network/ngc_flows/AF_traffic_influence_get.uml b/doc/reference-architectures/core-network/ngc_flows/AF_traffic_influence_get.uml similarity index 96% rename from doc/core-network/ngc_flows/AF_traffic_influence_get.uml rename to doc/reference-architectures/core-network/ngc_flows/AF_traffic_influence_get.uml index 135db770..b767fcfc 100644 --- a/doc/core-network/ngc_flows/AF_traffic_influence_get.uml +++ b/doc/reference-architectures/core-network/ngc_flows/AF_traffic_influence_get.uml @@ -1,68 +1,68 @@ -@startuml -/' SPDX-License-Identifier: Apache-2.0 - Copyright (c) 2020 Intel Corporation -'/ -skinparam monochrome false -skinparam roundcorner 20 -skinparam defaultFontName "Intel Clear" -skinparam defaultFontSize 20 -skinparam maxmessagesize 300 -skinparam sequenceArrowThickness 3 - -header Intel Corporation -footer Proprietary and Confidential -title Traffic influencing flows between OpenNESS controller and 5G Core - -actor "User/Admin" as user -box "OpenNESS Controller components" #LightBlue - participant "UI/CLI" as cnca - participant "AF Microservice" as af -end box -box "5G Core components" #LightGreen - participant "NEF" as nef - note over nef - OpenNESS provided - Core component with - limited functionality - end note - participant "NGC\nCP Functions" as ngccp -end box - -group Get all subscribed traffic influence info - user -> cnca : Request all traffic influence subscribed - activate cnca - cnca -> af : /af/v1/subscriptions : GET - activate af - af -> nef : /3gpp-traffic-Influence/v1/{afId}/subscriptions : GET - activate nef - - nef -> ngccp : {Open: 3rd party NGC integration with OpenNESS(NEF)} - ngccp --> nef : - nef --> af : OK: traffic influence info \n ERROR: {400/500} - deactivate nef - af --> cnca : OK: traffic influence info \n ERROR: {400/500} - deactivate af - cnca --> user : Traffic influence details - deactivate cnca -end group - -group Get subscribed traffic influence info by 
subscriptionId - user -> cnca : Request traffic influence using subscriptionId - activate cnca - cnca -> af : /af/v1/subscriptions/{subscriptionId} : GET - activate af - af -> nef : /3gpp-traffic-Influence/v1/{afId}/subscriptions/{subscriptionId} : GET - activate nef - - nef -> ngccp : {Open: 3rd party NGC integration with OpenNESS(NEF)} - ngccp --> nef : - nef --> af : OK: traffic influence info \n ERROR: {400/500} - deactivate nef - af --> cnca : OK: traffic influence info \n ERROR: {400/500} - deactivate af - cnca --> user : Traffic influence details - deactivate cnca -end group - -@enduml - +@startuml +/' SPDX-License-Identifier: Apache-2.0 + Copyright (c) 2020 Intel Corporation +'/ +skinparam monochrome false +skinparam roundcorner 20 +skinparam defaultFontName "Intel Clear" +skinparam defaultFontSize 20 +skinparam maxmessagesize 300 +skinparam sequenceArrowThickness 3 + +header Intel Corporation +footer Proprietary and Confidential +title Traffic influencing flows between OpenNESS controller and 5G Core + +actor "User/Admin" as user +box "OpenNESS Controller components" #LightBlue + participant "UI/CLI" as cnca + participant "AF Microservice" as af +end box +box "5G Core components" #LightGreen + participant "NEF" as nef + note over nef + OpenNESS provided + Core component with + limited functionality + end note + participant "NGC\nCP Functions" as ngccp +end box + +group Get all subscribed traffic influence info + user -> cnca : Request all traffic influence subscribed + activate cnca + cnca -> af : /af/v1/subscriptions : GET + activate af + af -> nef : /3gpp-traffic-Influence/v1/{afId}/subscriptions : GET + activate nef + + nef -> ngccp : {Open: 3rd party NGC integration with OpenNESS(NEF)} + ngccp --> nef : + nef --> af : OK: traffic influence info \n ERROR: {400/500} + deactivate nef + af --> cnca : OK: traffic influence info \n ERROR: {400/500} + deactivate af + cnca --> user : Traffic influence details + deactivate cnca +end group + +group Get subscribed traffic influence info by subscriptionId + user -> cnca : Request traffic influence using subscriptionId + activate cnca + cnca -> af : /af/v1/subscriptions/{subscriptionId} : GET + activate af + af -> nef : /3gpp-traffic-Influence/v1/{afId}/subscriptions/{subscriptionId} : GET + activate nef + + nef -> ngccp : {Open: 3rd party NGC integration with OpenNESS(NEF)} + ngccp --> nef : + nef --> af : OK: traffic influence info \n ERROR: {400/500} + deactivate nef + af --> cnca : OK: traffic influence info \n ERROR: {400/500} + deactivate af + cnca --> user : Traffic influence details + deactivate cnca +end group + +@enduml + diff --git a/doc/core-network/ngc_flows/AF_traffic_influence_update.uml b/doc/reference-architectures/core-network/ngc_flows/AF_traffic_influence_update.uml similarity index 96% rename from doc/core-network/ngc_flows/AF_traffic_influence_update.uml rename to doc/reference-architectures/core-network/ngc_flows/AF_traffic_influence_update.uml index eaf712e0..4fb6732b 100644 --- a/doc/core-network/ngc_flows/AF_traffic_influence_update.uml +++ b/doc/reference-architectures/core-network/ngc_flows/AF_traffic_influence_update.uml @@ -1,50 +1,50 @@ -@startuml -/' SPDX-License-Identifier: Apache-2.0 - Copyright (c) 2020 Intel Corporation -'/ -skinparam monochrome false -skinparam roundcorner 20 -skinparam defaultFontName "Intel Clear" -skinparam defaultFontSize 20 -skinparam maxmessagesize 300 -skinparam sequenceArrowThickness 3 - -header Intel Corporation -footer Proprietary and Confidential -title Traffic influencing 
flows between OpenNESS controller and 5G Core - -actor "User/Admin" as user -box "OpenNESS Controller components" #LightBlue - participant "UI/CLI" as cnca - participant "AF Microservice" as af -end box -box "5G Core components" #LightGreen - participant "NEF" as nef - note over nef - OpenNESS provided - Core component with - limited functionality - end note - participant "NGC\nCP Functions" as ngccp -end box - -group Update a subscribed traffic influence by subscriptionId - user -> cnca : Update request by subscriptionId - activate cnca - cnca -> af : /af/v1/subscriptions/{subscriptionId} : PUT - activate af - af -> nef : /3gpp-traffic-Influence/v1/{afId}/subscriptions/{subscriptionId} : PUT - activate nef - - nef -> ngccp : {Open: 3rd party NGC integration with OpenNESS(NEF)} - ngccp --> nef : - nef --> af : OK : Update success, traffic influence info \n ERROR: {400/500} - deactivate nef - af --> cnca : OK : Update success, traffic influence info \n ERROR: {400/500} - deactivate af - cnca --> user : Success/Error - deactivate cnca -end group - -@enduml - +@startuml +/' SPDX-License-Identifier: Apache-2.0 + Copyright (c) 2020 Intel Corporation +'/ +skinparam monochrome false +skinparam roundcorner 20 +skinparam defaultFontName "Intel Clear" +skinparam defaultFontSize 20 +skinparam maxmessagesize 300 +skinparam sequenceArrowThickness 3 + +header Intel Corporation +footer Proprietary and Confidential +title Traffic influencing flows between OpenNESS controller and 5G Core + +actor "User/Admin" as user +box "OpenNESS Controller components" #LightBlue + participant "UI/CLI" as cnca + participant "AF Microservice" as af +end box +box "5G Core components" #LightGreen + participant "NEF" as nef + note over nef + OpenNESS provided + Core component with + limited functionality + end note + participant "NGC\nCP Functions" as ngccp +end box + +group Update a subscribed traffic influence by subscriptionId + user -> cnca : Update request by subscriptionId + activate cnca + cnca -> af : /af/v1/subscriptions/{subscriptionId} : PUT + activate af + af -> nef : /3gpp-traffic-Influence/v1/{afId}/subscriptions/{subscriptionId} : PUT + activate nef + + nef -> ngccp : {Open: 3rd party NGC integration with OpenNESS(NEF)} + ngccp --> nef : + nef --> af : OK : Update success, traffic influence info \n ERROR: {400/500} + deactivate nef + af --> cnca : OK : Update success, traffic influence info \n ERROR: {400/500} + deactivate af + cnca --> user : Success/Error + deactivate cnca +end group + +@enduml + diff --git a/doc/core-network/ngc_flows/OAuth2Flow.uml b/doc/reference-architectures/core-network/ngc_flows/OAuth2Flow.uml similarity index 100% rename from doc/core-network/ngc_flows/OAuth2Flow.uml rename to doc/reference-architectures/core-network/ngc_flows/OAuth2Flow.uml diff --git a/doc/core-network/ngc_flows/PFD_Management_transaction_delete.uml b/doc/reference-architectures/core-network/ngc_flows/PFD_Management_transaction_delete.uml similarity index 100% rename from doc/core-network/ngc_flows/PFD_Management_transaction_delete.uml rename to doc/reference-architectures/core-network/ngc_flows/PFD_Management_transaction_delete.uml diff --git a/doc/core-network/ngc_flows/PFD_Management_transaction_get.uml b/doc/reference-architectures/core-network/ngc_flows/PFD_Management_transaction_get.uml similarity index 100% rename from doc/core-network/ngc_flows/PFD_Management_transaction_get.uml rename to doc/reference-architectures/core-network/ngc_flows/PFD_Management_transaction_get.uml diff --git 
a/doc/core-network/ngc_flows/PFD_Managment_transaction_add.uml b/doc/reference-architectures/core-network/ngc_flows/PFD_Managment_transaction_add.uml similarity index 100% rename from doc/core-network/ngc_flows/PFD_Managment_transaction_add.uml rename to doc/reference-architectures/core-network/ngc_flows/PFD_Managment_transaction_add.uml diff --git a/doc/core-network/ngc_flows/PFD_management_transaction_update.uml b/doc/reference-architectures/core-network/ngc_flows/PFD_management_transaction_update.uml similarity index 100% rename from doc/core-network/ngc_flows/PFD_management_transaction_update.uml rename to doc/reference-architectures/core-network/ngc_flows/PFD_management_transaction_update.uml diff --git a/doc/core-network/ngc_flows/e2e_config_flow_for_5g_edge.uml b/doc/reference-architectures/core-network/ngc_flows/e2e_config_flow_for_5g_edge.uml similarity index 100% rename from doc/core-network/ngc_flows/e2e_config_flow_for_5g_edge.uml rename to doc/reference-architectures/core-network/ngc_flows/e2e_config_flow_for_5g_edge.uml diff --git a/doc/core-network/ngc_flows/e2e_flow_pfd_pa.uml b/doc/reference-architectures/core-network/ngc_flows/e2e_flow_pfd_pa.uml similarity index 100% rename from doc/core-network/ngc_flows/e2e_flow_pfd_pa.uml rename to doc/reference-architectures/core-network/ngc_flows/e2e_flow_pfd_pa.uml diff --git a/doc/core-network/ngc_flows/e2e_flow_pfd_tif.uml b/doc/reference-architectures/core-network/ngc_flows/e2e_flow_pfd_tif.uml similarity index 100% rename from doc/core-network/ngc_flows/e2e_flow_pfd_tif.uml rename to doc/reference-architectures/core-network/ngc_flows/e2e_flow_pfd_tif.uml diff --git a/doc/core-network/ngc_flows/e2e_flow_tif.uml b/doc/reference-architectures/core-network/ngc_flows/e2e_flow_tif.uml similarity index 100% rename from doc/core-network/ngc_flows/e2e_flow_tif.uml rename to doc/reference-architectures/core-network/ngc_flows/e2e_flow_tif.uml diff --git a/doc/core-network/ngc_flows/ngcoam_af_service_add.uml b/doc/reference-architectures/core-network/ngc_flows/ngcoam_af_service_add.uml similarity index 96% rename from doc/core-network/ngc_flows/ngcoam_af_service_add.uml rename to doc/reference-architectures/core-network/ngc_flows/ngcoam_af_service_add.uml index 287f8f16..c657c4bc 100644 --- a/doc/core-network/ngc_flows/ngcoam_af_service_add.uml +++ b/doc/reference-architectures/core-network/ngc_flows/ngcoam_af_service_add.uml @@ -1,46 +1,46 @@ -@startuml -/' SPDX-License-Identifier: Apache-2.0 - Copyright (c) 2020 Intel Corporation -'/ - -skinparam monochrome false -skinparam roundcorner 20 -skinparam defaultFontName "Intel Clear" -skinparam defaultFontSize 20 -skinparam maxmessagesize 400 -skinparam sequenceArrowThickness 3 - -header "Intel Corporation" -footer "Proprietary and Confidential" -title "NGC OAM flows between OpenNESS Controller and NGC Core OAM Component" - -actor "Admin" as user -box "OpenNESS Controller" #LightBlue -participant "UI/CLI" as cnca -end box -box "NGC component" #LightGreen -participant "OAM" as oam -note over oam - OpenNESS provided component - with REST based HTTP interface - (for reference) -end note -participant "NGC \n CP Functions" as ngccp -end box - -== AF services operations with NGC Core through OAM Component == -group AF services registration with 5G Core - user -> cnca : Register AF services (UI): \n {dnn, dnai, snssai, tac, dns-ip, upf-ip} - activate cnca - cnca -> oam : /ngcoam/v1/af/services : POST \n {dnn, dnai, snssai, tac, dns-ip, upf-ip} - activate oam - - oam -> ngccp : {Open: 3rd Party NGC 
integration with OpenNESS(oam)} - ngccp --> oam : - oam --> cnca : OK : {afServiceId} \n ERROR: {400/500} - deactivate oam - cnca --> user : Success/Failure : {afServiceId} - deactivate cnca -end - +@startuml +/' SPDX-License-Identifier: Apache-2.0 + Copyright (c) 2020 Intel Corporation +'/ + +skinparam monochrome false +skinparam roundcorner 20 +skinparam defaultFontName "Intel Clear" +skinparam defaultFontSize 20 +skinparam maxmessagesize 400 +skinparam sequenceArrowThickness 3 + +header "Intel Corporation" +footer "Proprietary and Confidential" +title "NGC OAM flows between OpenNESS Controller and NGC Core OAM Component" + +actor "Admin" as user +box "OpenNESS Controller" #LightBlue +participant "UI/CLI" as cnca +end box +box "NGC component" #LightGreen +participant "OAM" as oam +note over oam + OpenNESS provided component + with REST based HTTP interface + (for reference) +end note +participant "NGC \n CP Functions" as ngccp +end box + +== AF services operations with NGC Core through OAM Component == +group AF services registration with 5G Core + user -> cnca : Register AF services (UI): \n {dnn, dnai, snssai, tac, dns-ip, upf-ip} + activate cnca + cnca -> oam : /ngcoam/v1/af/services : POST \n {dnn, dnai, snssai, tac, dns-ip, upf-ip} + activate oam + + oam -> ngccp : {Open: 3rd Party NGC integration with OpenNESS(oam)} + ngccp --> oam : + oam --> cnca : OK : {afServiceId} \n ERROR: {400/500} + deactivate oam + cnca --> user : Success/Failure : {afServiceId} + deactivate cnca +end + @enduml \ No newline at end of file diff --git a/doc/core-network/ngc_flows/ngcoam_af_service_delete.uml b/doc/reference-architectures/core-network/ngc_flows/ngcoam_af_service_delete.uml similarity index 96% rename from doc/core-network/ngc_flows/ngcoam_af_service_delete.uml rename to doc/reference-architectures/core-network/ngc_flows/ngcoam_af_service_delete.uml index c9e1349c..ebe01659 100644 --- a/doc/core-network/ngc_flows/ngcoam_af_service_delete.uml +++ b/doc/reference-architectures/core-network/ngc_flows/ngcoam_af_service_delete.uml @@ -1,47 +1,47 @@ -@startuml -/' SPDX-License-Identifier: Apache-2.0 - Copyright (c) 2020 Intel Corporation -'/ - -skinparam monochrome false -skinparam roundcorner 20 -skinparam defaultFontName "Intel Clear" -skinparam defaultFontSize 20 -skinparam maxmessagesize 400 -skinparam sequenceArrowThickness 3 - -header "Intel Corporation" -footer "Proprietary and Confidential" -title "NGC OAM flows between OpenNESS Controller and NGC Core OAM Component" - -actor "Admin" as user -box "OpenNESS Controller" #LightBlue -participant "UI/CLI" as cnca -end box -box "NGC component" #LightGreen -participant "OAM" as oam -note over oam - OpenNESS provided component - with REST based HTTP interface - (for reference) -end note -participant "NGC \n CP Functions" as ngccp -end box - -== AF services operations with NGC Core through OAM Component == - -group AF services deregistration with 5G Core - user -> cnca : Deregister AF services from 5G Core (UI): \n {afServiceId} - activate cnca - cnca -> oam : /ngcoam/v1/af/services/{afServiceId}: DELETE - activate oam - - oam -> ngccp : {Open: 3rd Party NGC integration with OpenNESS(oam)} - ngccp --> oam : - oam --> cnca : OK \n ERROR: {400/500} - deactivate oam - cnca --> user : Success/Failure - deactivate cnca -end - +@startuml +/' SPDX-License-Identifier: Apache-2.0 + Copyright (c) 2020 Intel Corporation +'/ + +skinparam monochrome false +skinparam roundcorner 20 +skinparam defaultFontName "Intel Clear" +skinparam defaultFontSize 20 +skinparam 
maxmessagesize 400 +skinparam sequenceArrowThickness 3 + +header "Intel Corporation" +footer "Proprietary and Confidential" +title "NGC OAM flows between OpenNESS Controller and NGC Core OAM Component" + +actor "Admin" as user +box "OpenNESS Controller" #LightBlue +participant "UI/CLI" as cnca +end box +box "NGC component" #LightGreen +participant "OAM" as oam +note over oam + OpenNESS provided component + with REST based HTTP interface + (for reference) +end note +participant "NGC \n CP Functions" as ngccp +end box + +== AF services operations with NGC Core through OAM Component == + +group AF services deregistration with 5G Core + user -> cnca : Deregister AF services from 5G Core (UI): \n {afServiceId} + activate cnca + cnca -> oam : /ngcoam/v1/af/services/{afServiceId}: DELETE + activate oam + + oam -> ngccp : {Open: 3rd Party NGC integration with OpenNESS(oam)} + ngccp --> oam : + oam --> cnca : OK \n ERROR: {400/500} + deactivate oam + cnca --> user : Success/Failure + deactivate cnca +end + @enduml \ No newline at end of file diff --git a/doc/core-network/ngc_flows/ngcoam_af_service_get.uml b/doc/reference-architectures/core-network/ngc_flows/ngcoam_af_service_get.uml similarity index 96% rename from doc/core-network/ngc_flows/ngcoam_af_service_get.uml rename to doc/reference-architectures/core-network/ngc_flows/ngcoam_af_service_get.uml index e23a4112..753819ef 100644 --- a/doc/core-network/ngc_flows/ngcoam_af_service_get.uml +++ b/doc/reference-architectures/core-network/ngc_flows/ngcoam_af_service_get.uml @@ -1,46 +1,46 @@ -@startuml -/' SPDX-License-Identifier: Apache-2.0 - Copyright (c) 2020 Intel Corporation -'/ - -skinparam monochrome false -skinparam roundcorner 20 -skinparam defaultFontName "Intel Clear" -skinparam defaultFontSize 20 -skinparam maxmessagesize 400 -skinparam sequenceArrowThickness 3 - -header "Intel Corporation" -footer "Proprietary and Confidential" -title "NGC OAM flows between OpenNESS Controller and NGC Core OAM Component" - -actor "Admin" as user -box "OpenNESS Controller" #LightBlue -participant "UI/CLI" as cnca -end box -box "NGC component" #LightGreen -participant "OAM" as oam -note over oam - OpenNESS provided component - with REST based HTTP interface - (for reference) -end note -participant "NGC \n CP Functions" as ngccp -end box - - -group Get AF registered DNN services from NGC Core - user -> cnca : Get AF registered DNN services info : {afServiceId} - activate cnca - cnca -> oam : /ngcoam/v1/af/services/{afServiceId}: GET - activate oam - - oam -> ngccp : {Open: 3rd Party NGC integration with OpenNESS(oam)} - ngccp --> oam : - oam --> cnca : OK : {dnn, dnai, snssai, tac, dnsIp, upfIp} \n ERROR: {400/500} - deactivate oam - cnca --> user : DNN services info associated with afServiceId - deactivate cnca -end - +@startuml +/' SPDX-License-Identifier: Apache-2.0 + Copyright (c) 2020 Intel Corporation +'/ + +skinparam monochrome false +skinparam roundcorner 20 +skinparam defaultFontName "Intel Clear" +skinparam defaultFontSize 20 +skinparam maxmessagesize 400 +skinparam sequenceArrowThickness 3 + +header "Intel Corporation" +footer "Proprietary and Confidential" +title "NGC OAM flows between OpenNESS Controller and NGC Core OAM Component" + +actor "Admin" as user +box "OpenNESS Controller" #LightBlue +participant "UI/CLI" as cnca +end box +box "NGC component" #LightGreen +participant "OAM" as oam +note over oam + OpenNESS provided component + with REST based HTTP interface + (for reference) +end note +participant "NGC \n CP Functions" as ngccp +end 
box + + +group Get AF registered DNN services from NGC Core + user -> cnca : Get AF registered DNN services info : {afServiceId} + activate cnca + cnca -> oam : /ngcoam/v1/af/services/{afServiceId}: GET + activate oam + + oam -> ngccp : {Open: 3rd Party NGC integration with OpenNESS(oam)} + ngccp --> oam : + oam --> cnca : OK : {dnn, dnai, snssai, tac, dnsIp, upfIp} \n ERROR: {400/500} + deactivate oam + cnca --> user : DNN services info associated with afServiceId + deactivate cnca +end + @enduml \ No newline at end of file diff --git a/doc/core-network/ngc_flows/ngcoam_af_service_update.uml b/doc/reference-architectures/core-network/ngc_flows/ngcoam_af_service_update.uml similarity index 96% rename from doc/core-network/ngc_flows/ngcoam_af_service_update.uml rename to doc/reference-architectures/core-network/ngc_flows/ngcoam_af_service_update.uml index 09194625..36b94e87 100644 --- a/doc/core-network/ngc_flows/ngcoam_af_service_update.uml +++ b/doc/reference-architectures/core-network/ngc_flows/ngcoam_af_service_update.uml @@ -1,47 +1,47 @@ -@startuml -/' SPDX-License-Identifier: Apache-2.0 - Copyright (c) 2020 Intel Corporation -'/ - -skinparam monochrome false -skinparam roundcorner 20 -skinparam defaultFontName "Intel Clear" -skinparam defaultFontSize 20 -skinparam maxmessagesize 400 -skinparam sequenceArrowThickness 3 - -header "Intel Corporation" -footer "Proprietary and Confidential" -title "NGC OAM flows between OpenNESS Controller and NGC Core OAM Component" - -actor "Admin" as user -box "OpenNESS Controller" #LightBlue -participant "UI/CLI" as cnca -end box -box "NGC component" #LightGreen -participant "OAM" as oam -note over oam - OpenNESS provided component - with REST based HTTP interface - (for reference) -end note -participant "NGC \n CP Functions" as ngccp -end box - -== AF services operations with NGC Core through OAM Component == - -group Update DNS config values for DNN served by Edge DNN - user -> cnca : Update DNS configuration of DNN (UI): \n {afServiceId, dnn, dnai, snssai, tac, dns-ip, upf-ip} - activate cnca - cnca -> oam : /ngcoam/v1/af/services/{afServiceId} : PATCH \n {dnn, dnai, snssai, tac, dns-ip, upf-ip} - activate oam - - oam -> ngccp : {Open: 3rd Party NGC integration with OpenNESS(oam)} - ngccp --> oam : - oam --> cnca : OK \n ERROR: {400/500} - deactivate oam - cnca --> user : Success/Failure - deactivate cnca -end - +@startuml +/' SPDX-License-Identifier: Apache-2.0 + Copyright (c) 2020 Intel Corporation +'/ + +skinparam monochrome false +skinparam roundcorner 20 +skinparam defaultFontName "Intel Clear" +skinparam defaultFontSize 20 +skinparam maxmessagesize 400 +skinparam sequenceArrowThickness 3 + +header "Intel Corporation" +footer "Proprietary and Confidential" +title "NGC OAM flows between OpenNESS Controller and NGC Core OAM Component" + +actor "Admin" as user +box "OpenNESS Controller" #LightBlue +participant "UI/CLI" as cnca +end box +box "NGC component" #LightGreen +participant "OAM" as oam +note over oam + OpenNESS provided component + with REST based HTTP interface + (for reference) +end note +participant "NGC \n CP Functions" as ngccp +end box + +== AF services operations with NGC Core through OAM Component == + +group Update DNS config values for DNN served by Edge DNN + user -> cnca : Update DNS configuration of DNN (UI): \n {afServiceId, dnn, dnai, snssai, tac, dns-ip, upf-ip} + activate cnca + cnca -> oam : /ngcoam/v1/af/services/{afServiceId} : PATCH \n {dnn, dnai, snssai, tac, dns-ip, upf-ip} + activate oam + + oam -> ngccp : {Open: 
3rd Party NGC integration with OpenNESS(oam)} + ngccp --> oam : + oam --> cnca : OK \n ERROR: {400/500} + deactivate oam + cnca --> user : Success/Failure + deactivate cnca +end + @enduml \ No newline at end of file diff --git a/doc/core-network/openness-core.png b/doc/reference-architectures/core-network/openness-core.png similarity index 100% rename from doc/core-network/openness-core.png rename to doc/reference-architectures/core-network/openness-core.png diff --git a/doc/core-network/openness_5g_nsa.md b/doc/reference-architectures/core-network/openness_5g_nsa.md similarity index 100% rename from doc/core-network/openness_5g_nsa.md rename to doc/reference-architectures/core-network/openness_5g_nsa.md diff --git a/doc/core-network/openness_epc.md b/doc/reference-architectures/core-network/openness_epc.md similarity index 100% rename from doc/core-network/openness_epc.md rename to doc/reference-architectures/core-network/openness_epc.md diff --git a/doc/core-network/openness_ngc.md b/doc/reference-architectures/core-network/openness_ngc.md similarity index 100% rename from doc/core-network/openness_ngc.md rename to doc/reference-architectures/core-network/openness_ngc.md diff --git a/doc/core-network/openness_upf.md b/doc/reference-architectures/core-network/openness_upf.md similarity index 99% rename from doc/core-network/openness_upf.md rename to doc/reference-architectures/core-network/openness_upf.md index 70285b8a..b8ea9c61 100644 --- a/doc/core-network/openness_upf.md +++ b/doc/reference-architectures/core-network/openness_upf.md @@ -139,9 +139,9 @@ Below is a list of minimal configuration parameters for VPP-based applications s 3. Enable the vfio-pci/igb-uio driver on the node. The below example shows the enabling of the `igb_uio` driver: ```bash - ne-node# /opt/dpdk-18.11.6/usertools/dpdk-devbind.py -b igb_uio 0000:af:0a.0 + ne-node# /opt/openness/dpdk-18.11.6/usertools/dpdk-devbind.py -b igb_uio 0000:af:0a.0 - ne-node# /opt/dpdk-18.11.6/usertools/dpdk-devbind.py --status + ne-node# /opt/openness/dpdk-18.11.6/usertools/dpdk-devbind.py --status Network devices using DPDK-compatible driver ============================================ 0000:af:0a.0 'Ethernet Virtual Function 700 Series 154c' drv=igb_uio unused=i40evf,vfio-pci diff --git a/doc/reference-architectures/index.html b/doc/reference-architectures/index.html new file mode 100644 index 00000000..4dad3f78 --- /dev/null +++ b/doc/reference-architectures/index.html @@ -0,0 +1,14 @@ + + +--- +title: OpenNESS Documentation +description: Home +layout: openness +--- +You are being redirected to the OpenNESS Docs.
+ diff --git a/doc/reference-architectures/openness_sdwan.md b/doc/reference-architectures/openness_sdwan.md new file mode 100644 index 00000000..1e1577cb --- /dev/null +++ b/doc/reference-architectures/openness_sdwan.md @@ -0,0 +1,413 @@ +```text +SPDX-License-Identifier: Apache-2.0 +Copyright (c) 2020 Intel Corporation +``` + +# Converged Edge Reference Architecture for SD-WAN +- [Introduction](#introduction) +- [Universal Customer Premises Equipment (u-CPE)](#universal-customer-premises-equipment-u-cpe) +- [Software-Defined Wide Area Network (SD-WAN)](#software-defined-wide-area-network-sd-wan) +- [SD-WAN Implementation](#sd-wan-implementation) + - [SD-WAN CNF](#sd-wan-cnf) + - [SD-WAN CRD Controller](#sd-wan-crd-controller) + - [Custom Resources (CRs)](#custom-resources-crs) +- [CNF Configuration via OpenWRT Packages](#cnf-configuration-via-openwrt-packages) + - [Multi WAN (Mwan3)](#multi-wan-mwan3) + - [Firewall (fw3)](#firewall-fw3) + - [IPSec](#ipsec) +- [SD-WAN CNF Packet Flow](#sd-wan-cnf-packet-flow) +- [OpenNESS Integration](#openness-integration) + - [Goals](#goals) + - [Networking Implementation](#networking-implementation) + - [Converged Edge Reference Architectures (CERA)](#converged-edge-reference-architectures-cera) + - [SD-WAN Edge Reference Architecture](#sd-wan-edge-reference-architecture) + - [SD-WAN Hub Reference Architecture](#sd-wan-hub-reference-architecture) +- [Deployment](#deployment) + - [E2E Scenarios](#e2e-scenarios) + - [Hardware Specification](#hardware-specification) + - [Scenario 1](#scenario-1) + - [Scenario 2](#scenario-2) + - [Scenario 3](#scenario-3) +- [Resource Consumption](#resource-consumption) + - [Methodology](#methodology) + - [Results](#results) +- [References](#references) +- [Acronyms](#acronyms) + +## Introduction +With the growth of global organizations, there is an increased need to connect branch offices distributed across the world. As enterprise applications move from corporate data centers to the cloud or the on-premise edge, their branches require secure and reliable, low latency, and affordable connectivity. One way to achieve this is to deploy a wide area network (WAN) over the public Internet, and create secure links to the branches where applications are running. +The primary role of a traditional WAN is to connect clients to applications hosted anywhere on the Internet. The applications are accessed via public TCP/IP addresses, supported by routing tables on enterprise routers. Branches were also connected to their headquarter data centers via a combination of configurable routers and leased connections. This made WAN connectivty complex and expensive to manage. Additionally, with the move of applications to the cloud and edge, where applications are hosted in private networks without public addresses, accessing these applications requires even more complex rules and policies. + + +Software-defined WAN (SD-WAN) introduces a new way to operate a WAN. First of all, because it is defined by software, its management can be decoupled from the underlying networking hardware (e.g., routers) and managed in a centralized manner, making it more scalable. Secondly, SD-WAN network functions can now be hosted on Universal Customer Premises Equipment (uCPE), which also host software versions of traditional customer premises equipment. 
Finally, an SD-WAN can be complemented by edge computing solutions, allowing, for example, latency-sensitive traffic to be steered to edge nodes for local processing, and to allow uCPE functions to be hosted in edge nodes. + + +This paper describes how the Open Network Edge Services Software (OpenNESS) integrates uCPE features and SD-WAN capabilities to create an optimized edge platform, and how it leverages SD-WAN functionality to allow edge-to-edge communication via a WAN. + +## Universal Customer Premises Equipment (u-CPE) +Universal Customer Premise Equipment (uCPE) is a general-purpose platform that can host network functions, implemented in software, that are traditionally run in hardware-based Customer Premises Equipment (CPE). These network services are implemented as virtual functions or cloud-native network functions. Because they are implemented in software, they are well-suited to be hosted on edge nodes: the nodes are located close to their end users, and the functions can also be orchestrated by the Controller of an edge computing system. + +## Software-Defined Wide Area Network (SD-WAN) +An SD-WAN is a set of network functions that enable application-aware, intelligent, and secure routing of traffic across the WAN. An SD-WAN typically uses the public internet to interconnect its branch offices, securing the traffic via encrypted tunnels, basically treating the tunnels as "dumb pipes". Traffic at the endpoints can be highly optimized, because the network functions at a branch are virtualized and centrally managed. The SD-WAN manager can also make use of information about the applications running at a branch to optimize traffic. + + +OpenNESS provides an edge computing-based reference architecture for SD-WAN, consisting of building blocks for SD-WAN network functions and reference implementations of branch office functions and services, all running on an OpenNESS edge node and managed by an OpenNESS Controller. + +The figure below shows an example of an OpenNESS-based SD-WAN. In this figure, there are two edge nodes, "Manufacturing Plant" and "Branch Office". In each node are multiple OpenNESS-based clusters, each running the OpenNESS edge platform, but supporting different collections of network functions, such as Private 5G (e.g., the AF, NEF, gNB, UPF functions), SD-WAN network functions, or user applications. + +In this figure, the SD-WAN implementation is depicted in "SD-WAN NFs" boxes appearing in a number of OpenNESS clusters, and an "SD-WAN Controller" appearing in the Orchestration and Management function. Other functions seen in the figure are OpenNESS building blocks that the SD-WAN implementation uses to carry out its function. + + +The next section describes the SD-WAN implementation. + +![OpenNESS reference solution for SD-WAN ](sdwan-images/openness-sdwan-ref.png) + +## SD-WAN Implementation +The CERA SD-WAN is based on OpenWrt, an embedded version of Linux designed for use in routers and other communication devices. OpenWrt is highly customizable, allowing it to be deployed with a small footprint, and has a fully-writable filesystem. More details about OpenWRT can be found [here](https://openwrt.org/). + +The OpenWrt project provides a number of kernel images. The “x86-generic rootfs” image is used in the SD-WAN implementation. + +The OpenWrt project contains a number of packages of use in implementing SD-WAN functional elements, which are written as OpenWrt applications.
These include: + + - mwan3 (for Multiple WAN link support) [mwan](https://openwrt.org/docs/guide-user/network/wan/multiwan/mwan3/) + + - firewall3 (for firewall, SNAT, DNAT) [fw3](https://openwrt.org/docs/guide-user/firewall/overview) + + - strongswan (for IPsec) [strongswan](https://openwrt.org/docs/guide-user/services/vpn/strongswan/start) + + +These packages support the following functionality: + + - IPsec tunnels across K8s clusters; + + - Support of multiple types of K8s clusters: + + - K8s clusters having static public IP address, + + - K8s clusters having dynamic public IP address with static FQDN, and + + - K8s clusters with no public IP; + + - Stateful inspection firewall (for inbound and outbound connections); + + - Source NAT and Destination NAT for K8s clusters whose POD and ClusterIP subnets are overlapping; + + - Multiple WAN links. + + +The SD-WAN implementation uses the following three primary components: + + - SD-WAN Cloud-Native Network Function (CNF) based on OpenWrt packages; + + - Custom Resource Definition (CRD) Controller; + + - Custom Resource Definitions (CRD). + +The CNF contains the OpenWrt services that perform SD-WAN operations. The CRD Controller and CRDs allow Custom Resources (i.e., extensions to Kubernetes APIs) to be created. Together these components allow information to be sent and received, and commands performed, from the Kubernetes Controller to the SD-WAN. + +This behavior is described in the following subsections. + +### SD-WAN CNF +The SD-WAN CNF is deployed as a pod with external network connections. The CNF runs the mwan3, firewall3, and strongswan applications, as described in the previous section. The configuration parameters for the CNF include: + + - LAN interface configuration – to create and connect virtual, local networks within the edge cluster (local branch) to the CNF. + + - WAN interface configuration – to initialize interfaces that connect the CNF and connected LANs to the external Internet (WAN), and to initialize the traffic rules (e.g., policy, rules) for the interfaces. The external WAN is also referred to in this document as a provider network. + +SD-WAN traffic rules and WAN interfaces are configured at runtime via a RESTful API. The CNF implements the Luci CGI plugin to provide this API. The API calls are initiated and passed to the CNF by a CRD Controller described in the next paragraph. The API provides the capability to list available SD-WAN services (e.g., mwan3, firewall, and ipsec), get service status, and execute service operations for adding, viewing, and deleting settings for these services. + +### SD-WAN CRD Controller +The CRD Controller (also referred to in the implementation as a Config Agent) interacts with the SD-WAN CNF via RESTful API calls. It monitors CRs applied through K8s APIs and translates them into API calls that carry the CNF configuration to the CNF instance. + +The CRD Controller includes several functions: + + - Mwan3conf Controller, to monitor the Mwan3Conf CR; + + - FirewallConf Controller, to monitor the FirewallConf CR; + + - IPSec Controller, to monitor the IpSec CRs. + + +### Custom Resources (CRs) + +As explained above, the behavior of the SD-WAN is governed by rules established in the CNF services. +In order to set these rules externally, CRs are defined to allow rules to be transmitted from the Kubernetes API. The CRs are created from the CRDs that are part of the SD-WAN implementation.
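To make the CR flow concrete, the following is a minimal, illustrative sketch of how a rule might be expressed and applied. The API group, kind, and field names are hypothetical placeholders (the real ones are defined by the CRDs shipped with the SD-WAN implementation); the point is only that a rule is a namespaced custom resource, labelled so the CRD Controller can associate it with a particular CNF, and applied through the standard Kubernetes API:

```shell
# Illustrative sketch only: apiVersion, kind, and spec fields are hypothetical
# placeholders for whatever the installed SD-WAN CRDs actually define.
cat <<'EOF' | kubectl apply -f -
apiVersion: sdewan.example.org/v1alpha1   # hypothetical API group/version
kind: Mwan3Rule                           # hypothetical kind for an mwan3 rule CR
metadata:
  name: default-route-rule
  namespace: sdewan
  labels:
    sdewanPurpose: cnf1                   # label used to correlate this CR with a CNF
spec:
  destIp: 0.0.0.0/0                       # fields mirroring OpenWrt mwan3 rule options
  usePolicy: balanced
EOF

# The CRD Controller watches resources of this kind and translates them into
# RESTful API calls toward the matching CNF instance.
kubectl get mwan3rules -n sdewan
```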
+ +The types of rules supported by the CRs are: + + - Mwan3 class, with 2 subclasses, mwan3_policy and mwan3_rule. + + - The firewall class has 5 kinds of rules: firewall_zone, firewall_snat, firewall_dnat, firewall_forwarding, firewall_rule. + + - IPsec class. + + The rules are defined by the OpenWrt services, and can be found in the OpenWrt documentation, e.g., [here](https://openwrt.org/docs/guide-user/network/wan/multiwan/mwan3). + + Each kind of SD-WAN rule corresponds to a CRD, which are used to instantiate the CRs. + +In a Kubernetes namespace, with more than one CNF deployment and many SD-WAN rule CRDs, labels are used to correlate a CNF with SD-WAN rule CRDs. + +## CNF Configuration via OpenWRT Packages + +As explained earlier, the SD-WAN CNF contains a collection of services, implemented by OpenWRT packages. In this section, the services are described in greater detail. + +### Multi WAN (Mwan3) +The OpenWRT mwan3 service provides capabilities for multiple WAN management: WAN interfaces management, outbound traffic rules, traffic load balancing etc. The service allows an edge to connect to WANs of different providers and and to specify different rules for the links. + +According to the OpenWRT [website](https://openwrt.org), mwan3 provides the following functionality and capabilities: + + - Provides outbound WAN traffic load balancing or fail-over with multiple WAN interfaces based on a numeric weight assignment. + + - Monitors each WAN connection using repeated ping tests and can automatically route outbound traffic to another WAN interface if a current WAN interface loses connectivity. + + - Creates outbound traffic rules to customize which outbound connections should use which WAN interface (i.e., policy-based routing). This can be customized based on source IP, destination IP, source port(s), destination port(s), type of IP protocol, and other parameters. + + - Supports physical and/or logical WAN interfaces. + + - Uses the firewall mask (default 0x3F00) to mark outgoing traffic, which can be configured in the /etc/config/mwan3 globals section, and can be mixed with other packages that use the firewall masking feature. This value is also used to set the number of supported interfaces. + +Mwan3 is useful for routers with multiple internet connections, where users have control over the traffic that flows to a specific WAN interface. It can handle multiple levels of primary and backup interfaces, where different sources can have different primary or backup WANs. Mwan3 uses Netfilter mark mask, in order to be compatible with other packages (e.g., OpenVPN, PPTP VPN, QoS-script, Tunnels), so that traffic can also be routed based on the default routing table. + +Mwan3 is triggered by a hotplug event when an interface comes up, causing it to create a new custom routing table and iptables rules for the interface. It then sets up iptables rules and uses iptables MARK to mark certain traffic. Based on these rules, the kernel determines which routing table to use. Once all the routes and rules are initially set up, mwan3 exits. Thereafter, the kernel takes care of all the routing decisions. A monitoring script, mwan3track, runs in the background, running ping to verify that each WAN interface is up. If an interface goes down, mwan3track issues a hotplug event to cause mwan3 to adjust routing tables in response to the interface failure, and to delete all the rules and routes to that interface. 
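As a concrete illustration of how the member, policy, and rule pieces fit together, the sketch below configures a simple weighted policy over two WAN links with OpenWrt's `uci` tool. It is an assumption-laden example (the interface names `wan`/`wanb`, the weights, and the policy name are made up), and in the SD-WAN CNF these settings would normally be pushed through the CNF's RESTful API rather than typed by hand:

```shell
# Sketch only: 'wan' and 'wanb' are example interface names; adjust them to the
# interfaces actually defined in the mwan3 Interface sections.
uci set mwan3.wan_m1=member
uci set mwan3.wan_m1.interface='wan'
uci set mwan3.wan_m1.metric='1'
uci set mwan3.wan_m1.weight='3'

uci set mwan3.wanb_m1=member
uci set mwan3.wanb_m1.interface='wanb'
uci set mwan3.wanb_m1.metric='1'
uci set mwan3.wanb_m1.weight='1'

# Policy: balance traffic over both members according to their weights
uci set mwan3.balanced=policy
uci add_list mwan3.balanced.use_member='wan_m1'
uci add_list mwan3.balanced.use_member='wanb_m1'

# Rule: route all remaining traffic through the 'balanced' policy
uci set mwan3.default_rule=rule
uci set mwan3.default_rule.dest_ip='0.0.0.0/0'
uci set mwan3.default_rule.use_policy='balanced'

uci commit mwan3
/etc/init.d/mwan3 restart
```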
+ +Another component, mwan3rtmon, keeps the main routing table in sync with the interface routing tables by monitoring routing table changes. + +Mwan3 is configured when it is started, according to a configuration with the following paragraphs: + + - Global: common configuration spec, used to configure routable loopback address (for OpenWRT 18.06). + + - Interface: defines how each WAN interface is tested for up/down status. + + - Member: represents an interface with a metric and a weight value. + + - Policy: defines how traffic is routed through the different WAN interface(s). + + - Rule: describes what traffic to match and what policy to assign for that traffic. + +A SD-WAN CNF will be created with Global and Interface sections initialized based on the interfaces allocated to it. Once the CNF starts, the SD-WAN MWAN3 CNF API can be used to get/create/update/delete an mwan3 rule and policy, on a per member basis. + +### Firewall (fw3) +OpenWrt uses the firewall3 (fw3) netfilter/iptable rule builder application. It runs in user space to parse a configuration file into a set of iptables rules, sending each of the rules to the kernel netfilter modules. The fw3 application is used by OpenWRT to “safely” construct a rule set, while hiding much of the details. The fw3 configuration automatically provides the router with a base set of rules and an understandable configuration file for additional rules. + +Similarly to the iptables application, fw3 is based on libiptc library that is used to communicate with the netfilter kernel modules. Both fw3 and iptables applications follow the same steps to apply rules on Netfilter: + + - Establish a socket and read the netfilter table into the application. + + - Modify the chains, rules, etc. in the table (all parsing and error checking is done in user-space by libiptc). + + - Replace the netfilter table in the kernel + +fw3 is typically managed by invoking the shell script /etc/init.d/firewall, which accepts the following set of arguments (start, stop, restart, reload, flush). Behind the scenes, /etc/init.d/firewall then calls fw3, passing the supplied argument to the binary. + +OpenWRT firewall is configured when it is started, via a configuration file with the following paragraphs: + + - Default: declares global firewall settings that do not belong to specific zones. + + - Include: used to enable customized firewall scripts. + + - Zone: groups one or more interfaces and serves as a source or destination for forwardings, rules, and redirects. + + - Forwarding: control the traffic between zones. + + - Redirect: defines port forwarding (NAT) rules + + - Rule: defines basic accept, drop, or reject rules to allow or restrict access to specific ports or hosts. + +The SD-WAN firewall API provides support to get/create/update/delete Firewall Zone, Redirect, Rule, and Forwardings. + +### IPSec +The SD-WAN leverages IPSec functionality to setup secure tunnels for Edge-to-WAN and Edge-WAN-Edge (i.e., to interconnect two edges) communication. The SD-WAN uses the OpenWrt StrongSwan implementation of IPSec. IPsec rules are integrated with the OpenWRT firewall, which enables custom firewall rules. StrongSwan uses the default firewall mechanism to update the firewall rules and injects all the additionally required settings, according to the IPsec configuration stored in /etc/config/ipsec . + +The SD-WAN configures the IPSec site-to-site tunnels to connect edge networks through a hub located in the external network. 
The hub is a server that acts as a proxy between pairs of edges. The hub also runs an SD-WAN CRD Controller and a CNF, configured such that it knows how to access the SD-WAN CNFs deployed on both edges. In that case, to create the IPsec tunnel, the WAN interface on the edge is treated as one side of the tunnel, and the connected WAN interface on the hub is configured as the "responder". Both edges are configured as "initiator". + +## SD-WAN CNF Packet Flow + +Packets that arrive at the edge come through a WAN link that connects the edge to an external provider network. This WAN interface should already be configured with traffic rules. If there is an IPSec tunnel created on the WAN interface, the packet enters the IPSec tunnel and is forwarded according to IPSec and Firewall/NAT rules. The packet eventually leaves the CNF via a LAN link connecting the OVN network on the edge. + +The following figure shows the typical packet flow through the SD-WAN CNF for Rx (WAN to LAN), when a packet sent from the external network enters the edge cluster: + +![SD-WAN Rx packet flow ](sdwan-images/packet-flow-rx.png) + +Packets that attempt to leave the edge come into the CNF through a LAN link attached to the OVN network on the edge cluster. The packet is then marked by the mwan3 application. This mark is used by the firewall to apply rules on the packet, and steer it to the proper WAN link used by the IPSec tunnel connecting the CNF to the WAN. The packet enters the IPSec tunnel and leaves the edge through the WAN interface. + +The following figure shows the typical packet flow through the SD-WAN CNF for Tx (LAN to WAN), when a packet leaves from the edge cluster to the external network: + +![SD-WAN Tx packet flow ](sdwan-images/packet-flow-tx.png) + +## OpenNESS Integration +The previous sections of this document describe the operation of an SD-WAN implementation built from OpenWrt and its various packages. We now turn to the subject of how the SD-WAN is integrated with OpenNESS. + +### Goals +OpenNESS leverages the SD-WAN project to offer SD-WAN service within an on-premise edge, to enable secure and optimized inter-edge data transfer. This functionality is sought by global corporations with branch offices distributed across many geographical locations, as it creates an optimized WAN between edge locations implemented on top of a public network. + +At least one SD-WAN CNF is expected to run on each OpenNESS cluster (as shown in a previous figure), and act as a proxy for edge application traffic entering and exiting the cluster. The primary task for the CNF is to provide software-defined routes connecting the edge LANs with the (public network) WAN. + +Currently, the OpenNESS SD-WAN is intended only for single-node clusters, accommodating only one instance of a CNF and a CRD Controller. + + + +### Networking Implementation +An OpenNESS deployment featuring SD-WAN implements networking within the cluster with three CNIs: + + - Calico CNI, which acts as the primary CNI. + - ovn4nfv k8s plugin CNI, which acts as the secondary CNI. + - Multus CNI, which allows for attaching multiple network interfaces to pods, required by the CNF pod. Without Multus, Kubernetes pods could support only one network interface. + +The [Calico](https://docs.projectcalico.org/about/about-calico) CNI is used to configure the default network overlay for the OpenNESS cluster. It provides communication between the pods of the cluster and acts as the management interface.
Calico is considered a lighter solution than Kube-OVN, which currently is the preferable CNI plugin for the primary network in OpenNESS clusters. + +The [ovn4nfv-k8s-plugin](https://github.com/opnfv/ovn4nfv-k8s-plugin) is a CNI plugin based on OVN and OpenVSwitch (OVS). It works with the Multus CNI to add multiple interfaces to the pod. If Multus is used, the net1 interface is by convention the OVN default interface that connects to Multus. The other interfaces are added by ovn4nfv-k8s-plugin according to the pod annotation. With ovn4nfv-k8s-plugin, virtual networks can be created at runtime. The CNI plugin also utilises physical interfaces to connect a pod to an external network (provider network). This is particularly important for the SD-WAN CNF. ovn4nfv also enables Service Function Chaining ([SFC](https://github.com/opnfv/ovn4nfv-k8s-plugin/blob/master/demo/sfc-setup/README.md)). + +In order for the SD-WAN CNF to act as a proxy between the virtual LANs in the cluster and the WAN, it needs to have two types of network interfaces configured: + + - A virtual LAN network on one of the CNF's virtual interfaces. This connects application pods belonging to the same OVN network in the cluster. The ovn4nfv plugin allows for simplified creation of a virtual OVN network based on the provided configuration. The network is then attached on one of the CNF's interfaces. + - A provider network, to connect the CNF pod to an external network (WAN). The provider network is attached to the physical network infrastructure via layer-2 (i.e., via bridging/switching). + +### Converged Edge Reference Architectures (CERA) +CERA is a business program that creates and maintains validated reference architectures of edge networks, including both hardware and software elements. The reference architectures are used by ISVs, system integrators, and others to accelerate the development of production edge computing systems. + +The OpenNESS project has created a CERA reference architecture for SD-WAN edge and SD-WAN hub. They are used, with OpenNESS, to create a uCPE platform for an SD-WAN CNF on edge and hub accordingly. Even though there is only one implementation of CNF, it can be used for two different purposes, as described below. + +#### SD-WAN Edge Reference Architecture +The SD-WAN Edge CERA reference implementation is used to deploy SD-WAN CNF on a single-node edge cluster that will also accomodate enterprize edge applications. The major goal of SD-WAN Edge is to support the creation of a Kubernetes-based platform that boosts the performance of deployed edge applications and reduces resource usage by the Kubernetes system. To accomplish this, the underlying platform must be optimized and made ready to use IA accelerators. OpenNESS provides support for the deployment of OpenVINO™ applications and workloads acceleration with the Intel® Movidius™ VPU HDDL-R add-in card. SD-WAN Edge also enables the Node Feature Discovery (NFD) building block on the cluster to provide awareness of the nodes’ features to edge applications. Finally, SD-WAN Edge implements Istio Service Mesh (SM) in the default namespace to connect the edge applications. SM acts as a middleware between edge applications/services and the OpenNESS platform, and provides abstractions for traffic management, observability, and security of the building blocks in the platform. Istio is a cloud-native service mesh that provides capabilities such as Traffic Management, Security, and Observability uniformly across a network of services. 
OpenNESS integrates with Istio to reduce the complexity of large scale edge applications, services, and network functions. More information on SM in OpenNESS can be found on the OpenNESS [website](https://openness.org/developers/). + + +To minimalize resource consumption by the cluster, SD-WAN Edge disables services such as EAA, Edge DNS, and Kafka. Telemetry service stays active for all the Kubernetes deployments. + +The following figure shows the system architecture of the SD-WAN Edge Reference Architecture. + +![OpenNESS SD-WAN Edge Architecture ](sdwan-images/sdwan-edge-arch.png) + + +#### SD-WAN Hub Reference Architecture +The SD-WAN Hub reference architecture prepares an OpenNESS platform for a single-node cluster that functions primarily as an SD-WAN hub. That cluster will also deploy a SD-WAN CRD Controller and a CNF, but no other corporate applications are expected to run on it. That is why the node does not enable support for an HDDL card or for Network Feature Discovery and Service Mesh. + +The Hub is another OpenNESS single-node cluster that acts as a proxy between different edge clusters. The Hub is essential to connect edges through a WAN when applications within the edge clusters have no public IP addresses, which requires additional routing rules to provide access. These rules can be configured globally on a device acting as a hub for the edge locations. + +The Hub node has two expected use-cases: + +- If the edge application wants to access the internet, or an external application wants to access service running in the edge node, the Hub node can act as a gateway with a security policy in force. + +- For communication between a pair of edge nodes located at different locations (and in different clusters), if both edge nodes have public IP addresses, then an IP Tunnel can be configured directly between the edge clusters, otherwise the Hub node is required to act as a proxy to enable the communication. + +The following figure shows the system architecture of the SD-WAN Hub Reference Architecture. + +![OpenNESS SD-WAN Hub Architecture ](sdwan-images/sdwan-hub-arch.png) + +## Deployment +### E2E Scenarios +Three end-to-end scenarios have been validated to verify deployment of an SD-WAN on OpenNESS. The three scenarios are described in the following sections of this document. + +#### Hardware Specification + +The following table describes the hardware requirements of the scenarios. + +| Hardware | | UE | Edge & Hub | +| ---------|----------------------- | ---------------------------------- | ------------------------------------ | +| CPU | Model name: | Intel(R) Xeon(R) | Intel(R) Xeon(R) D-2145NT | +| | | CPU E5-2658 v3 @ 2.20GHz | CPU @ 1.90GHz | +| | CPU MHz: | 1478.527 | CPU MHz: 1900.000 | +| | L1d cache: | 32K | 32K | +| | L1i cache: | 32K | 32K | +| | L2 cache: | 256K | 1024K | +| | L3 cache: | 30720K | 1126K | +| | NUMA node0 CPU(s): | 0-11 | 0-15 | +| | NUMA node1 CPU(s): | 12-23 | | +| NIC | Ethernet controller: | Intel Corporation | Intel Corporation | +| | | 82599ES 10-Gigabit | Ethernet Connection | +| | | SFI/SFP+ Network Connection | X722 for 10GbE SFP+ | +| | | (rev 01) | Subsystem: Advantech Co. Ltd | +| | | Subsystem: Intel Corporation | Device 301d | +| | | Ethernet Server Adapter X520-2 | | +| HDDL | | | | + +#### Scenario 1 + +In this scenario, two UEs are connected to two separate edge nodes, which are connected to one common hub. The scenario demonstrates basic connectivity accross the edge clusters via the SD-WAN. 
The traffic flow is initiated on one UE and received on the other UE. + +For this scenario, OpenNESS is deployed on both edges and on the hub. On each edge and hub, an SD-WAN CRD Controller and a CNF are set up. Then CRs are used to configure the CNFs, to set up IPsec tunnels between each edge and the hub, and to configure rules on the WAN interfaces connecting edges with the hub. Each CNF is connected to two provider networks. The CNFs on Edge 1 and Edge 2 use provider network n2 to connect to UEs outside the Edge, and the provider network n3 to connect to the hub in another edge location. Currently, the UE connects to the CNF directly without a switch. In the following figure, UE1 is in the same network (NET1) as the Edge1 port. It is considered a private network. + +This scenario verifies that sample traffic can be sent from the UE connected to Edge2 to another UE connected to Edge1 over secure WAN links connecting the edges to a hub. To demonstrate this connectivity, traffic from the Iperf-client application running on the Edge2 UE is sent toward the Edge1 UE running the Iperf server application. + +The Edge1 node also deploys an OpenVINO app, and, in this way, this scenario also demonstrates Scenario 3 described below. + +![OpenNESS SD-WAN Scenario 1 ](sdwan-images/e2e-scenario1.png) + +A more detailed description of this E2E test is provided under the link in the OpenNESS documentation for this SD-WAN [scenario](https://github.com/open-ness/edgeapps/blob/master/network-functions/sdewan_cnf/e2e-scenarios/three-single-node-clusters/E2E-Overview.md). + +#### Scenario 2 +This scenario demonstrates a simple OpenNESS SD-WAN with a single-node cluster that deploys an SD-WAN CNF and an application pod running an Iperf client. The scenario is depicted in the following figure. + +The CNF pod and Iperf-client pod are attached to one virtual OVN network, using the n3 and n0 interfaces respectively. The CNF has a provider network configured on interface n2, which is attached to a physical interface on the Edge node to work as a bridge, to connect the external network. This scenario demonstrates that, after configuration of the CNF, the traffic sent from the application pod uses the SD-WAN CNF as a proxy, and arrives at the User Equipment (UE) in the external network. The E2E traffic from the Iperf3 client application on the application pod (which is deployed on the Edge node) travels to the external UE via a 10G NIC port. The UE runs the Iperf3 server application. The OpenNESS cluster, consisting of the Edge Node server, is deployed on the SD-WAN Edge. The Iperf client traffic is expected to pass through the SD-WAN CNF and the attached provider network interface to reach the Iperf server that is listening on the UE. + +A more detailed description of this scenario can be found in the SD-WAN scenario [documentation](https://github.com/open-ness/edgeapps/blob/master/network-functions/sdewan_cnf/e2e-scenarios/one-single-node-cluster/README.md). + +![OpenNESS SD-WAN Scenario 2 ](sdwan-images/e2e-scenario2.png) + + +#### Scenario 3 +This scenario demonstrates a sample OpenVINO benchmark application deployed on an OpenNESS edge platform equipped with an HDDL accelerator card. It reflects the use case in which a high-performance OpenVINO application is executed on an OpenNESS single-node cluster, deployed with an SD-WAN Edge. The SD-WAN Edge enables an HDDL plugin to provide the OpenNESS platform with support for workload acceleration via the HDDL card.
More information on the OpenVINO sample application is provided under the following links: + + - [OpenVINO Sample Application White Paper](https://github.com/open-ness/specs/blob/master/doc/applications/openness_openvino.md) + + - [OpenVINO Sample Application Onboarding](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/network-edge-applications-onboarding.md#onboarding-openvino-application) + + +A more detailed description of this scenario is available in the OpenNESS [documentation](https://github.com/open-ness/edgeapps/blob/master/network-functions/sdewan_cnf/e2e-scenarios/openvino-hddl-cluster/README.md). + +![OpenNESS SD-WAN Scenario 3 ](sdwan-images/e2e-scenario3.png) + +## Resource Consumption +### Methodology + +The resource consumption of CPU and memory was measured. + +To measure the CPU and memory resource consumption of the Kubernetes cluster, the “kubectl top pod -A” command was invoked both on the Edge node and the Edge Hub. + +The resource consumption was measured twice: + + - With no Iperf traffic; + + - With Iperf traffic from Edge2-UE to Edge1-UE. + +To measure total memory usage, the command “free -h” was used. + +### Results + +| Option | Resource | Edge | Hub | +| ---------------------- | ------------- | ------------------ | ------------------------------------ | +| Without traffic | CPU | 339m (0.339 CPU) | 327m (0.327 CPU) | +| | RAM | 2050Mi (2.05G) | 2162Mi (2.162G) | +| | Total mem used| 3.1G | 3.1G | +| With Iperf traffic | CPU | 382m (0.382 CPU) | 404m (0.404 CPU) | +| | RAM | 2071Mi (2.071G) | 2186Mi (2.186G) | +| | Total mem used| 3.1G | 3.1G | + +## References +- [ICN SDEWAN documentation](https://wiki.akraino.org/display/AK/ICN+-+SDEWAN) +- [ovn4nfv k8s plugin documentation](https://github.com/opnfv/ovn4nfv-k8s-plugin) +- [Service Function Chaining (SFC) Setup](https://github.com/opnfv/ovn4nfv-k8s-plugin/blob/master/demo/sfc-setup/README.md) +- [Utilizing a Service Mesh for Edge Services in OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/applications/openness_service_mesh.md) +- [Using Intel® Movidius™ Myriad™ X High Density Deep Learning (HDDL) solution in OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness_hddl.md) +- [Node Feature Discovery support in OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-node-feature-discovery.md) +- [OpenVINO™ Sample Application in OpenNESS](https://github.com/open-ness/ido-specs/blob/78d7797cbe0a21ade2fdc61625c2416d8430df23/doc/applications/openness_openvino.md) + +## Acronyms + +| | | +|-------------|---------------------------------------------------------------| +| API | Application Programming Interface | +| CERA | Converged Edge Reference Architectures | +| CR | Custom Resource | +| CRD | Custom Resource Definition | +| CNF | Cloud-native Network Function | +| DNAT | Destination Network Address Translation | +| HDDL | High Density Deep Learning | +| IP | Internet Protocol | +| NAT | Network Address Translation | +| NFD | Node Feature Discovery | +| SM | Service Mesh | +| SD-WAN | Software-Defined Wide Area Network | +| SNAT | Source Network Address Translation | +| TCP | Transmission Control Protocol | +| uCPE | Universal Customer Premise Equipment | + diff --git a/doc/ran/index.html b/doc/reference-architectures/ran/index.html similarity index 100% rename from doc/ran/index.html rename to doc/reference-architectures/ran/index.html diff --git
a/doc/reference-architectures/ran/openness-ran.png b/doc/reference-architectures/ran/openness-ran.png new file mode 100644 index 00000000..68706f8b Binary files /dev/null and b/doc/reference-architectures/ran/openness-ran.png differ diff --git a/doc/ran/openness_ran.md b/doc/reference-architectures/ran/openness_ran.md similarity index 88% rename from doc/ran/openness_ran.md rename to doc/reference-architectures/ran/openness_ran.md index e235a325..bd3bd9f5 100644 --- a/doc/ran/openness_ran.md +++ b/doc/reference-architectures/ran/openness_ran.md @@ -59,7 +59,13 @@ This section explains the steps involved in building the FlexRAN image. Only L1 >**NOTE**: The environmental variables path must be updated according to your installation and file/directory names. 4. Build L1, WLS interface between L1, L2, and L2-Stub (testmac): `./flexran_build.sh -r 5gnr_sub6 -m testmac -m wls -m l1app -b -c` -5. Once the build has completed, copy the required binary files to the folder where the Docker\* image is built. The list of binary files that are used is documented in [dockerfile](https://github.com/open-ness/edgeapps/blob/master/network-functions/ran/5G/flexRAN-gnb/Dockerfile) +5. Once the build has completed, copy the required binary files to the folder where the Docker\* image is built. This can be done using the provided example [build-du-dev-image.sh](https://github.com/open-ness/edgeapps/blob/master/network-functions/ran/5G/du-dev/build-du-dev-image.sh) script from the Edge Apps OpenNESS repository; it copies the files from the paths provided as environment variables in the previous step into the directory containing the Dockerfile and then starts the Docker build. + ```shell + git clone https://github.com/open-ness/edgeapps.git + cd edgeapps/network-functions/ran/5G/du-dev + ./build-du-dev-image.sh + ``` + The list of binary files that are used is documented in [dockerfile](https://github.com/open-ness/edgeapps/blob/master/network-functions/ran/5G/flexRAN-gnb/Dockerfile) - ICC, IPP mpi and mkl Runtime - DPDK build target directory - FlexRAN test vectors (optional) @@ -67,21 +73,26 @@ This section explains the steps involved in building the FlexRAN image. Only L1 - FlexRAN SDK modules - FlexRAN WLS share library - FlexRAN CPA libraries -6. `cd` to the folder where the Docker image is built and start the docker build `docker build -t`
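A minimal sketch of the final build step, assuming a hypothetical image name and tag (the exact `docker build` arguments are truncated in the source above; check the referenced Dockerfile and your own naming conventions before relying on it):

```shell
# Run from the directory that contains the Dockerfile and the copied FlexRAN binaries
cd edgeapps/network-functions/ran/5G/du-dev
# "flexran-du-dev:latest" is an assumed tag used for illustration only
docker build -t flexran-du-dev:latest .
```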
diff --git a/openness_releasenotes.md b/openness_releasenotes.md index cfdcdf94..386f16ea 100644 --- a/openness_releasenotes.md +++ b/openness_releasenotes.md @@ -6,11 +6,49 @@ Copyright (c) 2019-2020 Intel Corporation # Release Notes This document provides high-level system features, issues, and limitations information for Open Network Edge Services Software (OpenNESS). - [Release history](#release-history) -- [Features for Release](#features-for-release) + - [OpenNESS - 19.06](#openness---1906) + - [OpenNESS - 19.09](#openness---1909) + - [OpenNESS - 19.12](#openness---1912) + - [OpenNESS - 20.03](#openness---2003) + - [OpenNESS - 20.06](#openness---2006) + - [OpenNESS - 20.09](#openness---2009) + - [OpenNESS - 20.12](#openness---2012) - [Changes to Existing Features](#changes-to-existing-features) + - [OpenNESS - 19.06](#openness---1906-1) + - [OpenNESS - 19.06.01](#openness---190601) + - [OpenNESS - 19.09](#openness---1909-1) + - [OpenNESS - 19.12](#openness---1912-1) + - [OpenNESS - 20.03](#openness---2003-1) + - [OpenNESS - 20.06](#openness---2006-1) + - [OpenNESS - 20.09](#openness---2009-1) + - [OpenNESS - 20.12](#openness---2012-1) - [Fixed Issues](#fixed-issues) + - [OpenNESS - 19.06](#openness---1906-2) + - [OpenNESS - 19.06.01](#openness---190601-1) + - [OpenNESS - 19.06.01](#openness---190601-2) + - [OpenNESS - 19.12](#openness---1912-2) + - [OpenNESS - 20.03](#openness---2003-2) + - [OpenNESS - 20.06](#openness---2006-2) + - [OpenNESS - 20.09](#openness---2009-2) + - [OpenNESS - 20.12](#openness---2012-2) - [Known Issues and Limitations](#known-issues-and-limitations) + - [OpenNESS - 19.06](#openness---1906-3) + - [OpenNESS - 19.06.01](#openness---190601-3) + - [OpenNESS - 19.09](#openness---1909-2) + - [OpenNESS - 19.12](#openness---1912-3) + - [OpenNESS - 20.03](#openness---2003-3) + - [OpenNESS - 20.06](#openness---2006-3) + - [OpenNESS - 20.09](#openness---2009-3) + - [OpenNESS - 20.12](#openness---2012-3) - [Release Content](#release-content) + - [OpenNESS - 19.06](#openness---1906-4) + - [OpenNESS - 19.06.01](#openness---190601-4) + - [OpenNESS - 19.09](#openness---1909-3) + - [OpenNESS - 19.12](#openness---1912-4) + - [OpenNESS - 20.03](#openness---2003-4) + - [OpenNESS - 20.06](#openness---2006-4) + - [OpenNESS - 20.09](#openness---2009-4) + - [OpenNESS - 20.12](#openness---2012-4) - [Hardware and Software Compatibility](#hardware-and-software-compatibility) - [Intel® Xeon® D Processor](#intel-xeon-d-processor) - [2nd Generation Intel® Xeon® Scalable Processors](#2nd-generation-intel-xeon-scalable-processors) @@ -18,358 +56,430 @@ This document provides high-level system features, issues, and limitations infor - [Supported Operating Systems](#supported-operating-systems) - [Packages Version](#packages-version) -# Release history -1. OpenNESS - 19.06 -2. OpenNESS - 19.06.01 -3. OpenNESS - 19.09 -4. OpenNESS - 19.12 -5. OpenNESS - 20.03 -6. OpenNESS - 20.06 -7. OpenNESS - 20.09 - -# Features for Release -1. 
OpenNESS - 19.06 - - Edge Cloud Deployment options - - Controller-based deployment of Applications in Docker Containers/VM–using-Libvirt - - Controller + Kubernetes\* based deployment of Applications in Docker\* Containers - - OpenNESS Controller - - Support for Edge Node Orchestration - - Support for Web UI front end - - OpenNESS APIs - - Edge Application APIs - - Edge Virtualization Infrastructure APIs - - Edge Application life cycle APIs - - Core Network Configuration APIs - - Edge Application authentication APIs - - OpenNESS Controller APIs - - Platform Features - - Microservices based Appliance and Controller agent deployment - - Support for DNS for the edge - - CentOSc\* 7.6 / CentOS 7.6 + RT kernel - - Basic telemetry support - - Sample Reference Applications - - OpenVINO™ based Consumer Application - - Producer Application supporting OpenVINO™ - - Dataplane - - DPDK/KNI based Dataplane – NTS - - Support for deployment on IP, LTE (S1, SGi and LTE CUPS) - - Cloud Adapters - - Support for running Amazon\* Greengrass\* cores as an OpenNESS application - - Support for running Baidu\* Cloud as an OpenNESS application - - Documentation - - User Guide Enterprise and Operator Edge - - OpenNESS Architecture - - Swagger/Proto buff External API Guide - - 4G/CUPS API whitepaper - - Cloud Connector App note - - OpenVINO™ on OpenNESS App note -2. OpenNESS - 19.09 - - Edge Cloud Deployment options - - Asyn method for image download to avoid timeout - - Dataplane - - Support for OVN/OVS based Dataplane and network overlay for Network Edge (based on Kubernetes) - - Cloud Adapters - - Support for running Amazon Greengrass cores as an OpenNESS application with OVN/OVS as Dataplane and network overlay - - Support for Inter-App comms - - Support for OVS-DPDK or Linux\* bridge or Default interface for inter-Apps communication for OnPrem deployment - - Accelerator support - - Support for HDDL-R accelerator for interference in a container environment for OnPrem deployment - - Edge Applications - - Early Access Support for Open Visual Cloud (OVC) based Smart City App on OpenNESS OnPrem - - Support for Dynamic use of VPU or CPU for Inferences - - Gateway - - Support for Edge node and OpenNESS Controller gate way to support route-ability - - Documentation - - OpenNESS Architecture (update) - - OpenNESS Support for OVS as dataplane with OVN - - Open Visual Cloud Smart City Application on OpenNESS - Solution Overview - - Using Intel® Movidius™ Myriad™ X High Density Deep Learning (HDDL) solution in OpenNESS - - OpenNESS How-to Guide (update) -3. OpenNESS – 19.12 - - Hardware - - Support for Cascade lake 6252N - - Support for Intel® FPGA Programmable Acceleration Card (Intel® FPGA PAC) N3000 - - Edge Application - - Fully Cloud-native Open Visual Cloud Smart City Application pipeline on OpenNESS Network edge. - - Edge cloud - - EAA and CNCA microservice as native Kubernetes-managed services - - Support for Kubernetes version 1.16.2 - - Edge Compute EPA features support for Network Edge - - CPU Manager: Support deployment of POD with dedicated pinning - - SRIOV NIC: Support deployment of POD with dedicated SRIOV VF from NIC - - SRIOV FPGA: Support deployment of POD with dedicated SRIOV VF from FPGA - - Topology Manager: Support k8s to manage the resources allocated to workloads in a NUMA topology-aware manner - - BIOS/FW Configuration service - Intel SysCfg based BIOS/FW management service - - Hugepages: Support for allocation of 1G/2M huge pages to the Pod. 
- - Multus: Support for Multiple network interface in the PODs deployed by Kubernetes - - Node Feature discovery: Support detection of Silicon and Software features and automation of deployment of CNF and Applications - - FPGA Remote System Update service: Support the Open Programmable Acceleration Engine (OPAE) (fpgautil) based image update service for FPGA. - - Non-Privileged Container: Support deployment of non-privileged pods (CNFs and Applications as reference) - - Edge Compute EPA features support for OnPremises - - Using Intel® Movidius™ Myriad™ X High Density Deep Learning (HDDL) solution in OpenNESS - - OpenNESS Experience Kit for Network and OnPremises edge - - Offline Release Package: Customers should be able to create an installer package that can be used to install OnPremises version of OpenNESS without the need for Internet access. - - 5G NR Edge Cloud deployment support - - 5G NR edge cloud deployment support with SA mode - - AF: Support for 5G NGC Application function as a microservice - - NEF: Support for 5G NGC Network Exposure function as a microservice - - Support for 5G NR UDM, UPF, AMF, PCF and SCF (not part of the release) - - DNS support - - DNS support for UE - - DNS Support for Edge applications - - Documentation - - Completely reorganized documentation structure for ease of navigation - - 5G NR Edge Cloud deployment Whitepaper - - EPA application note for each of the features -4. OpenNESS – 20.03 - - OVN/OVS-DPDK support for dataplane - - Network Edge: Support for kube-ovn CNI with OVS or OVS-DPDK as dataplane. Support for Calico as CNI. - - OnPremises Edge: Support for OVS-DPDK CNI with OVS-DPDK as dataplane supporting application deployed in containers or VMs - - Support for VM deployments on Kubernetes mode - - Kubevirt based VM deployment support - - EPA Support for SRIOV Virtual function allocation to the VMs deployed using K8s - - EPA support - OnPremises - - Support for dedicated core allocation to applications running as VMs or Containers - - Support for dedicated SRIOV VF allocation to applications running in VM or containers - - Support for system resource allocation into the application running as a container - - Mount point for shared storage - - Pass environment variables - - Configure the port rules - - Core Network Feature (5G) - - PFD Management API support (3GPP 23.502 Sec. 52.6.3 PFD Management service) - - AF: Added support for PFD Northbound API - - NEF: Added support for PFD southbound API, and Stubs to loopback the PCF calls. - - kubectl: Enhanced CNCA kubectl plugin to configure PFD parameters - - WEB UI: Enhanced CNCA WEB UI to configure PFD params in OnPerm mode - - Auth2 based authentication between 5G Network functions: (as per 3GPP Standard) - - Implemented oAuth2 based authentication and validation - - AF and NEF communication channel is updated to authenticated based on oAuth2 JWT token in addition to HTTP2. - - HTTPS support - - Enhanced the 5G OAM, CNCA (web-ui and kube-ctl) to HTTPS interface - - Modular Playbook - - Support for customers to choose real-time or non-real-time kernel for an edge node - - Support for the customer to choose CNIs - Validated with Kube-OVN and Calico - - Edge Apps - - FlexRAN: Dockerfile and pod specification for the deployment of 4G or 5G FlexRAN - - AF: Dockerfile and pod specification - - NEF: Dockerfile and pod specification - - UPF: Dockerfile and pod specification -5. 
OpenNESS – 20.06 - - OpenNESS is now available in two distributions - - Open source (Apache 2.0 license) - - Intel Distribution of OpenNESS (Intel Proprietary License) - - Includes all the code from the open source distribution plus additional features and enhancements to improve the user experience - - Access requires a signed license. A request for access can be made at openness.org by navigating to the “Products” section and selecting “Intel Distribution of OpenNESS” - - Both distributions are hosted at github.com/open-ness - - On premises configuration now optionally supports Kubernetes - - Core Network Feature (5G) - - Policy Authorization Service support in AF and CNCA over the N5 Interface(3GPP 29.514 - Chapter 5 Npcf_PolicyAuthorization Service API). - - Core Network Notifications for User Plane Path Change event received through Policy Authorization support in AF. - - NEF South Bound Interfaces support to communicate with the Core Network Functions for Traffic Influence and PFD. - - Core Network Test Function (CNTF) microservice added for validating the AF & NEF South Bound Interface communication. - - Flavors added for Core Network control-plane and user-plane. - - OpenNESS assisted Edge cloud deployment in 5G Non Standalone mode whitepaper. - - OpenNESS 20.06 5G features enablement through the enhanced-OpenNESS release (IDO). - - Dataplane - - Support for Calico eBPF as CNI - - Performance baselining of the CNIs - - Visual Compute and Media Analytics - - Intel Visual Cloud Accelerator Card - Analytics (VCAC-A) Kubernetes deployment support (CPU, GPU, and VPU) - - Node feature discovery of VCAC-A - - Telemetry support for VCAC-A - - Provide ansible and Helm -playbook support for OVC codecs Intel® Xeon® CPU mode - video analytics service (REST API) for developers - - Edge Applications - - Smart City Application Pipeline supporting CPU or VCAC-A mode with Helm chart - - CDN Content Delivery using NGINX with SR-IOV capability for higher performance with Helm chart - - CDN transcode sample application using Intel® Xeon® CPU optimized media SDK with Helm Chart - - Support for Transcoding Service using Intel® Xeon® CPU optimized media SDK with Helm chart - - Intel Edge Insights application support with Helm chart - - Edge Network Functions - - FlexRAN DU with Helm Chart (FlexRAN not part of the release) - - xRAN Fronthaul with Helm CHart (xRAN app not part of the release) - - Core Network Function - Application Function with Helm Chart - - Core Network Function - Network Exposure Function With Helm Chart - - Core Network Function - UPF (UPF app not part of the release) - - Core network Support functions - OAM and CNTF - - Helm Chart for Kubernetes enhancements - - NFD, CMK, SRIOV-Device plugin and Multus\* - - Support for local Docker registry setup - - Support for deployment-specific Flavors - - Minimal - - RAN - 4G and 5G - - Core - User plane and Control Plane - - Media Analytics with VCAC-A and with CPU only mode - - CDN - Transcode - - CDN - Content Delivery - - Azure - Deployment of OpenNESS cluster on Microsoft\* Azure\* cloud - - Support for OpenNESS on CSP Cloud - - Azure - Deployment of OpenNESS cluster on Microsoft Azure cloud - - Telemetry Support - - Support for Collectd backend with hardware from Intel and custom metrics - - Cpu, cpufreq, load, hugepages, intel_pmu, intel_rdt, ipmi, ovs_stats, ovs_pmd_stats - - FPGA – PACN3000 (collectd) - Temp, Power draw - - VPU Device memory, VPU device thermal, VPU Device utilization - - Open Telemetry - Support for collector and 
exporter for metrics (e.g., heartbeat from app) - - Support for PCM counter for Prometheus\* and Grafana\* - - Telemetry Aware Scheduler - - Early Access support for Resource Management Daemon (RMD) - - RMD for cache allocation to the application Pods - - Ability to deploy OpenNESS Master and Node on the same platform -6. OpenNESS – 20.09 - - Native On-premises mode - - Following from the previous release decision of pausing Native on-premises Development the code has been move to a dedicated repository “native-on-prem” - - Kubernetes based solution will now support both Network and on-premises Edge - - Service Mesh support - - Basic support for Service Mesh using istio within an OpenNESS cluster - - Application of Service Mesh openness 5G and Media analytics - A dedicated network for service to service communications - - EAA Update - - EAA microservices has been updated to be more cloud-native friendly - - 5G Core AF and NEF - - User-Plane Path Change event notifications from AF received over N33 I/f [Traffic Influence updates from SMF received through NEF] - - AF/NEF/OAM Configuration and Certificate updates through Configmaps. - - AF and OAM API’s access authorization through Istio Gateway. - - Envoy Sidecar Proxy for all the 5G microservices(AF/NEF/OAM/CNTF) which enables support for telemetry(Request/Response Statistics), certificates management, http 2.0 protocol configuration(with/without TLS) - - Core-cplane flavor is enabled with Istio - - Edge Insights Application (update) - - Industrial Edge Insights Software update to version 2.3. - - Experience Kit now supports multiple detection video’s – Safety equipment detection, PCB default detection and also supports external video streams. - - CERA Near Edge - - Core network and Application reference architecture - - CERA provides reference integration of OpenNESS, Network function 5G UPF (Not part of the release), OpenVINO with EIS application. +# Release history + +## OpenNESS - 19.06 +- Edge Cloud Deployment options + - Controller-based deployment of Applications in Docker Containers/VM–using-Libvirt + - Controller + Kubernetes\* based deployment of Applications in Docker\* Containers +- OpenNESS Controller + - Support for Edge Node Orchestration + - Support for Web UI front end +- OpenNESS APIs + - Edge Application APIs + - Edge Virtualization Infrastructure APIs + - Edge Application life cycle APIs + - Core Network Configuration APIs + - Edge Application authentication APIs + - OpenNESS Controller APIs +- Platform Features + - Microservices based Appliance and Controller agent deployment + - Support for DNS for the edge + - CentOS\* 7.6 / CentOS 7.6 + RT kernel + - Basic telemetry support +- Sample Reference Applications + - OpenVINO™ based Consumer Application + - Producer Application supporting OpenVINO™ +- Dataplane + - DPDK/KNI based Dataplane – NTS + - Support for deployment on IP, LTE (S1, SGi and LTE CUPS) +- Cloud Adapters + - Support for running Amazon\* Greengrass\* cores as an OpenNESS application + - Support for running Baidu\* Cloud as an OpenNESS application +- Documentation + - User Guide Enterprise and Operator Edge + - OpenNESS Architecture + - Swagger/Proto buff External API Guide + - 4G/CUPS API whitepaper + - Cloud Connector App note + - OpenVINO™ on OpenNESS App note + +## OpenNESS - 19.09 +- Edge Cloud Deployment options + - Async method for image download to avoid timeout. 
+- Dataplane + - Support for OVN/OVS based Dataplane and network overlay for Network Edge (based on Kubernetes) +- Cloud Adapters + - Support for running Amazon Greengrass cores as an OpenNESS application with OVN/OVS as Dataplane and network overlay +- Support for Inter-App comms + - Support for OVS-DPDK or Linux\* bridge or Default interface for inter-Apps communication for OnPrem deployment +- Accelerator support + - Support for HDDL-R accelerator for inference in a container environment for OnPrem deployment +- Edge Applications + - Early Access Support for Open Visual Cloud (OVC) based Smart City App on OpenNESS OnPrem + - Support for Dynamic use of VPU or CPU for Inferences +- Gateway + - Support for Edge node and OpenNESS Controller gateway to support route-ability +- Documentation + - OpenNESS Architecture (update) + - OpenNESS Support for OVS as dataplane with OVN + - Open Visual Cloud Smart City Application on OpenNESS - Solution Overview + - Using Intel® Movidius™ Myriad™ X High Density Deep Learning (HDDL) solution in OpenNESS + - OpenNESS How-to Guide (update) + +## OpenNESS - 19.12 +- Hardware + - Support for Cascade Lake 6252N + - Support for Intel® FPGA Programmable Acceleration Card (Intel® FPGA PAC) N3000 +- Edge Application + - Fully cloud-native Open Visual Cloud Smart City Application pipeline on OpenNESS Network edge. +- Edge cloud + - EAA and CNCA microservice as native Kubernetes-managed services + - Support for Kubernetes version 1.16.2 +- Edge Compute EPA features support for Network Edge + - CPU Manager: Support deployment of POD with dedicated pinning + - SRIOV NIC: Support deployment of POD with dedicated SRIOV VF from NIC + - SRIOV FPGA: Support deployment of POD with dedicated SRIOV VF from FPGA + - Topology Manager: Support k8s to manage the resources allocated to workloads in a NUMA topology-aware manner + - BIOS/FW Configuration service - Intel SysCfg based BIOS/FW management service + - Hugepages: Support for allocation of 1G/2M huge pages to the Pod + - Multus: Support for Multiple network interface in the PODs deployed by Kubernetes + - Node Feature Discovery: Support detection of Silicon and Software features and automation of deployment of CNF and Applications + - FPGA Remote System Update service: Support the Open Programmable Acceleration Engine (OPAE) (fpgautil) based image update service for FPGA + - Non-Privileged Container: Support deployment of non-privileged pods (CNFs and Applications as reference) +- Edge Compute EPA features support for On-Premises + - Using Intel® Movidius™ Myriad™ X High Density Deep Learning (HDDL) solution in OpenNESS +- OpenNESS Experience Kit for Network and OnPremises edge + - Offline Release Package: Customers should be able to create an installer package that can be used to install the OnPremises version of OpenNESS without the need for Internet access.
+- 5G NR Edge Cloud deployment support + - 5G NR edge cloud deployment support with SA mode + - AF: Support for 5G NGC Application function as a microservice + - NEF: Support for 5G NGC Network Exposure function as a microservice + - Support for 5G NR UDM, UPF, AMF, PCF and SCF (not part of the release) +- DNS support + - DNS support for UE + - DNS Support for Edge applications +- Documentation + - Completely reorganized documentation structure for ease of navigation + - 5G NR Edge Cloud deployment Whitepaper + - EPA application note for each of the features + +## OpenNESS - 20.03 +- OVN/OVS-DPDK support for dataplane + - Network Edge: Support for kube-ovn CNI with OVS or OVS-DPDK as dataplane. Support for Calico as CNI. + - OnPremises Edge: Support for OVS-DPDK CNI with OVS-DPDK as dataplane supporting application deployed in containers or VMs +- Support for VM deployments on Kubernetes mode + - Kubevirt based VM deployment support + - EPA Support for SRIOV Virtual function allocation to the VMs deployed using Kubernetes +- EPA support - OnPremises + - Support for dedicated core allocation to applications running as VMs or Containers + - Support for dedicated SRIOV VF allocation to applications running in VM or containers + - Support for system resource allocation into the application running as a container + - Mount point for shared storage + - Pass environment variables + - Configure the port rules +- Core Network Feature (5G) + - PFD Management API support (3GPP 23.502 Sec. 52.6.3 PFD Management service) + - AF: Added support for PFD Northbound API + - NEF: Added support for PFD southbound API, and Stubs to loopback the PCF calls. + - kubectl: Enhanced CNCA kubectl plugin to configure PFD parameters + - WEB UI: Enhanced CNCA WEB UI to configure PFD params in OnPrem mode + - OAuth2-based authentication between 5G Network functions: (as per 3GPP Standard) + - Implemented oAuth2 based authentication and validation + - AF and NEF communication channel is updated to authenticate based on an oAuth2 JWT token in addition to HTTP2. + - HTTPS support + - Enhanced the 5G OAM, CNCA (web-ui and kube-ctl) to HTTPS interface +- Modular Playbook + - Support for customers to choose real-time or non-realtime kernel for an edge node + - Support for customers to choose CNIs - Validated with Kube-OVN and Calico +- Edge Apps + - FlexRAN: Dockerfile and pod specification for the deployment of 4G or 5G FlexRAN + - AF: Dockerfile and pod specification + - NEF: Dockerfile and pod specification + - UPF: Dockerfile and pod specification + +## OpenNESS - 20.06 +- OpenNESS is now available in two distributions + - Open source (Apache 2.0 license) + - Intel Distribution of OpenNESS (Intel Proprietary License) + - Includes all the code from the open source distribution plus additional features and enhancements to improve the user experience + - Access requires a signed license. A request for access can be made at openness.org by navigating to the "Products" section and selecting "Intel Distribution of OpenNESS" + - Both distributions are hosted at github.com/open-ness +- On premises configuration now optionally supports Kubernetes +- Core Network Feature (5G) + - Policy Authorization Service support in AF and CNCA over the N5 Interface (3GPP 29.514 - Chapter 5 Npcf_PolicyAuthorization Service API). + - Core Network Notifications for User Plane Path Change event received through Policy Authorization support in AF.
+ - NEF South Bound Interfaces support to communicate with the Core Network Functions for Traffic Influence and PFD. + - Core Network Test Function (CNTF) microservice added for validating the AF & NEF South Bound Interface communication. + - Flavors added for Core Network control-plane and user-plane. + - OpenNESS assisted Edge cloud deployment in 5G Non Standalone mode whitepaper. + - OpenNESS 20.06 5G features enablement through the enhanced-OpenNESS release (IDO). +- Dataplane + - Support for Calico eBPF as CNI + - Performance baselining of the CNIs +- Visual Compute and Media Analytics + - Intel Visual Cloud Accelerator Card - Analytics (VCAC-A) Kubernetes deployment support (CPU, GPU, and VPU) + - Node feature discovery of VCAC-A + - Telemetry support for VCAC-A + - Provide Ansible and Helm playbook support for OVC codecs Intel® Xeon® CPU mode - video analytics service (REST API) for developers +- Edge Applications + - Smart City Application Pipeline supporting CPU or VCAC-A mode with Helm chart + - CDN Content Delivery using NGINX with SR-IOV capability for higher performance with Helm chart + - CDN transcode sample application using Intel® Xeon® CPU optimized media SDK with Helm chart + - Support for Transcoding Service using Intel® Xeon® CPU optimized media SDK with Helm chart + - Intel Edge Insights application support with Helm chart +- Edge Network Functions + - FlexRAN DU with Helm Chart (FlexRAN not part of the release) + - xRAN Fronthaul with Helm Chart (xRAN app not part of the release) + - Core Network Function - Application Function with Helm chart + - Core Network Function - Network Exposure Function with Helm chart + - Core Network Function - UPF (UPF app not part of the release) + - Core network Support functions - OAM and CNTF +- Helm Chart for Kubernetes enhancements + - NFD, CMK, SRIOV-Device plugin and Multus\* + - Support for local Docker registry setup +- Support for deployment-specific Flavors + - Minimal + - RAN - 4G and 5G + - Core - User plane and Control Plane + - Media Analytics with VCAC-A and with CPU only mode + - CDN - Transcode + - CDN - Content Delivery + - Azure - Deployment of OpenNESS cluster on Microsoft\* Azure\* cloud +- Support for OpenNESS on CSP Cloud + - Azure - Deployment of OpenNESS cluster on Microsoft\* Azure\* cloud +- Telemetry Support + - Support for Collectd backend with hardware from Intel and custom metrics + - cpu, cpufreq, load, hugepages, intel_pmu, intel_rdt, ipmi, ovs_stats, ovs_pmd_stats + - FPGA – PACN3000 (collectd) - Temp, Power draw + - VPU Device memory, VPU device thermal, VPU Device utilization + - Open Telemetry - Support for collector and exporter for metrics (e.g., heartbeat from app) + - Support for PCM counter for Prometheus\* and Grafana\* + - Telemetry Aware Scheduler +- Early Access support for Resource Management Daemon (RMD) + - RMD for cache allocation to the application Pods +- Ability to deploy OpenNESS Master and Node on the same platform + +## OpenNESS - 20.09 +- Native On-premises mode + - Following the previous release's decision to pause Native on-premises development, the code has been moved to a dedicated repository “native-on-prem” + - The Kubernetes-based solution will now support both Network and on-premises Edge +- Service Mesh support + - Basic support for Service Mesh using Istio within an OpenNESS cluster.
+ > **NOTE**: When deploying Istio Service Mesh in VMs, a minimum of 8 CPU cores and 16GB RAM must be allocated to each worker VM so that Istio operates smoothly + - Application of Service Mesh to OpenNESS 5G and Media analytics - A dedicated network for service-to-service communications +- EAA Update + - The EAA microservice has been updated to be more cloud-native friendly +- 5G Core AF and NEF + - User-Plane Path Change event notifications from AF received over N33 I/f [Traffic Influence updates from SMF received through NEF] + - AF/NEF/OAM Configuration and Certificate updates through Configmaps. + - AF and OAM APIs’ access authorization through Istio Gateway. + - Envoy Sidecar Proxy for all the 5G microservices (AF/NEF/OAM/CNTF), which enables support for telemetry (Request/Response Statistics), certificate management, and HTTP 2.0 protocol configuration (with/without TLS) + - Core-cplane flavor is enabled with Istio +- Edge Insights Application (update) + - Industrial Edge Insights Software update to version 2.3. + - Experience Kit now supports multiple detection videos – Safety equipment detection, PCB defect detection – and also supports external video streams. +- CERA Near Edge + - Core network and Application reference architecture + - CERA provides reference integration of OpenNESS, Network function 5G UPF (Not part of the release), OpenVINO with EIS application. + +## OpenNESS - 20.12 +- Reference Converged Edge Reference Architecture (CERA) On-Premises Edge and Private Wireless deployment focusing on On-Premises, Private Wireless and Ruggedized Outdoor deployments, presenting a scalable solution across the On-Premises edge. +- Reference deployment with Kubernetes enhancements for high-performance compute and networking for an SD-WAN node (Edge) that runs Applications, Services, and SD-WAN CNF. AI/ML applications and services are targeted in this flavor with support for hardware offload for inferencing. +- Reference deployment with Kubernetes enhancements for high-performance compute and networking for an SD-WAN node (Hub) that runs SD-WAN CNF. +- Reference deployment for high-performance Computing and Networking using SR-IOV for reference Untrusted Non-3GPP Access as defined by 3GPP Release 15. +- Reference implementation of the offline installation package for the CERA Access Edge flavor enabling installation of Kubernetes and related enhancements for Access edge deployments. +- Early access release of Edge Multi-Cluster Orchestration (EMCO), a Geo-distributed application orchestrator for Kubernetes. This release supports EMCO deploying and managing the life cycle of the Smart City Application pipeline on the edge cluster. More details in the [EMCO Release Notes](https://github.com/open-ness/EMCO/blob/main/ReleaseNotes.md). +- Azure Development kit (Devkit) supporting the installation of an OpenNESS Kubernetes cluster on the Microsoft* Azure* cloud. This is typically used by a customer who wants to develop applications and services for the edge using OpenNESS building blocks. +- Support for Intel® vRAN Dedicated Accelerator ACC100: Kubernetes cloud-native deployment supporting higher-capacity 4G/LTE and 5G vRAN cells/carriers for FEC offload. +- Major system Upgrades: Kubernetes 1.19.3, CentOS 7.8, Calico 3.16, and Kube-OVN 1.5.2. # Changes to Existing Features - - **OpenNESS 19.06** There are no unsupported or discontinued features relevant to this release. - - **OpenNESS 19.06.01** There are no unsupported or discontinued features relevant to this release.
- - **OpenNESS 19.09** There are no unsupported or discontinued features relevant to this release. - - **OpenNESS 19.12** - - NTS Dataplane support for Network edge is discontinued. - - Controller UI for Network edge has been discontinued except for the CNCA configuration. Customers can optionally leverage the Kubernetes dashboard to onboard applications. - - Edge node only supports non-realtime kernel. - - **OpenNESS 20.03** - - Support for HDDL-R only restricted to non-real-time or non-customized CentOS 7.6 default kernel. - - **OpenNESS 20.06** - - Offline install for Native mode OnPremises has be deprecated - - **OpenNESS 20.09** - - Native on-premises is now located in a dedicated repository with no further feature updates from previous release. + +## OpenNESS - 19.06 +There are no unsupported or discontinued features relevant to this release. + +## OpenNESS - 19.06.01 +There are no unsupported or discontinued features relevant to this release. + +## OpenNESS - 19.09 +There are no unsupported or discontinued features relevant to this release. + +## OpenNESS - 19.12 +- NTS Dataplane support for Network edge is discontinued. +- Controller UI for Network edge has been discontinued except for the CNCA configuration. Customers can optionally leverage the Kubernetes dashboard to onboard applications. +- Edge node only supports non-realtime kernel. + +## OpenNESS - 20.03 +- Support for HDDL-R only restricted to non-real-time or non-customized CentOS 7.6 default kernel. + +## OpenNESS - 20.06 +- Offline install for Native mode OnPremises has been deprecated + +## OpenNESS - 20.09 +- Native on-premises is now located in a dedicated repository with no further feature updates from the previous release. + +## OpenNESS - 20.12 +There are no unsupported or discontinued features relevant to this release. # Fixed Issues -- **OpenNESS 19.06** There are no non-Intel issues relevant to this release. -- **OpenNESS 19.06.01** There are no non-Intel issues relevant to this release. -- **OpenNESS 19.06.01** - - VHOST HugePages dependency - - Bug in getting appId by IP address for the container - - Wrong value of appliance verification key printed by ansible script - - NTS is hanging when trying to add same traffic policy to multiple interfaces - - Application in VM cannot be started - - Bug in libvirt deployment - - Invalid status after app un-deployment - - Application memory field is in MB -- **OpenNESS 19.12** - - Improved usability/automation in Ansible scripts -- **OpenNESS 20.03** - - Realtime Kernel support for network edge with K8s. - - Modular playbooks -- **OpenNESS 20.06** - - Optimized the Kubernetes-based deployment by supporting multiple Flavors -- **OpenNESS 20.09** - - Further optimized the Kubernetes based deployment by supporting multiple Flavors - - cAdvisor occasional failure issue is resolved + +## OpenNESS - 19.06 +There are no non-Intel issues relevant to this release. + +## OpenNESS - 19.06.01 +There are no non-Intel issues relevant to this release.
+ +## OpenNESS - 19.06.01 +- VHOST HugePages dependency +- Bug in getting appId by IP address for the container +- Wrong value of appliance verification key printed by ansible script +- NTS is hanging when trying to add same traffic policy to multiple interfaces +- Application in VM cannot be started +- Bug in libvirt deployment +- Invalid status after app un-deployment +- Application memory field is in MB + +## OpenNESS - 19.12 +- Improved usability/automation in Ansible scripts + +## OpenNESS - 20.03 +- Realtime Kernel support for network edge with K8s. +- Modular playbooks + +## OpenNESS - 20.06 +- Optimized the Kubernetes-based deployment by supporting multiple Flavors + +## OpenNESS - 20.09 +- Further optimized the Kubernetes-based deployment by supporting multiple Flavors +- cAdvisor occasional failure issue is resolved +- "Traffic rule creation: cannot parse filled and cleared fields" in Legacy OnPremises is fixed +- Issue fixed when removing Edge Node from Controller when it is offline and a traffic policy is configured or an app is deployed + +## OpenNESS - 20.12 +- The known issue where a Pod that uses hugepages gets stuck in the terminating state on deletion has been fixed after upgrading to Kubernetes 1.19.3 +- Upgraded to Kube-OVN v1.5.2 for further Kube-OVN CNI enhancements # Known Issues and Limitations -- **OpenNESS 19.06** There are no issues relevant to this release. -- **OpenNESS 19.06.01** There is one issue relevant to this release: it is not possible to remove the application from Edge Node in case of error during application deployment. The issue concerns applications in a Virtual Machine. -- **OpenNESS 19.09** - - Gateway in multi-node - will not work when few nodes will have the same public IP (they will be behind one common NAT) - - Ansible in K8s can cause problems when rerun on a machine: - - If after running all 3 scripts - - Script 02 will be run again (it will not remove all necessary K8s related artifacts) - - We would recommend cleaning up the installation on the node -- **OpenNESS 19.12** - - Gateway in multi-node - will not work when few nodes will have the same public IP (they will be behind one common NAT) - - OpenNESS OnPremises: Cannot remove a failed/disconnected the edge node information/state from the controller - - The CNCA API (4G & 5G) supported in this release is an early access reference implementation and does not support authentication - - Real-time kernel support has been temporarily disabled to address the Kubernetes 1.16.2 and Realtime kernel instability. -- **OpenNESS 20.03** - - On-Premises edge installation takes more than 1.5 hours because of the Docker image build for OVS-DPDK - - Network edge installation takes more than 1.5 hours because of the Docker image build for OVS-DPDK - - OpenNESS controller allows management NICs to be in the pool of configuration, which might allow configuration by mistake. Thus, disconnecting the node from master - - When using the SRIOV EPA feature added in 20.03 with OVNCNI, the container cannot access the CNI port. This is due to the SRIOV port being set by changing the network used by the container from default to a custom network. This overwrites the OVNCNI network setting configured before this to enable the container to work with OVNCNI. Another issue with the SRIOV, is that this also overwrites the network configuration with the EAA and edgedns, agents, which prevents the SRIOV enabled container from communicating with the agents.
- - Cannot remove Edge Node from Controller when its offline and traffic policy is configured or the app is deployed. -- **OpenNESS 20.06** - - On-Premises edge installation takes 1.5hrs because of the Docker image build for OVS-DPDK - - Network edge installation takes 1.5hrs because of docker image build for OVS-DPDK - - OpenNESS controller allows management NICs to be in the pool of configuration, which might allow configuration by mistake and thereby disconnect the node from master - - When using the SRIOV EPA feature added in 20.03 with OVNCNI, the container cannot access the CNI port. This is due to the SRIOV port being set by changing the network used by the container from default to a custom network, This overwrites the OVNCNI network setting configured prior to this to enable the container to work with OVNCNI. Another issue with the SRIOV, is that this also overwrites the network configuration with the EAA and edgedns, agents, which prevents the SRIOV enabled container from communicating with the agents. - - Cannot remove Edge Node from Controller when its offline and traffic policy is configured or app is deployed. - - Legacy OnPremises - Traffic rule creation: cannot parse filled and cleared fields - - There is an issue with using CDI when uploading VM images when CMK is enabled due to missing CMK taint toleration. The CDI upload pod does not get deployed and the `virtctl` plugin command times out waiting for the action to complete. A workaround for the issue is to invoke the CDI upload command, edit the taint toleration for the CDI upload to tolerate CMK, update the pod, create the PV, and let the pod run to completion. - - There is a known issue with cAdvisor which in certain scenarios occasionally fails to expose the metrics for the Prometheus endpoint. See the following GitHub\* link: https://github.com/google/cadvisor/issues/2537 -- **OpenNESS 20.09** - - Pod which uses hugepage get stuck in terminating state on deletion. This is a known issue on Kubernetes 1.18.x and is planned to be fixed in 1.19.x - - Calico cannot be used as secondary CNI with Multus in OpenNESS. It will work only as primary CNI. Calico must be the only network provider in each cluster. We do not currently support migrating a cluster with another network provider to use Calico networking. https://docs.projectcalico.org/getting-started/kubernetes/requirements - - collectd Cache telemetry using RDT does not work when RMD is enabled because of resource conflict. Workaround is to disable collectd RDT plugin when using RMD - this by default is implemented globally. With this workaround customers will be able to allocate the Cache but not use Cache related telemetry. In case where RMD is not being enabled customers who desire RDT telemetry can re-enable collectd RDT. - +## OpenNESS - 19.06 +There are no issues relevant to this release. + +## OpenNESS - 19.06.01 +There is one issue relevant to this release: it is not possible to remove the application from Edge Node in case of error during application deployment. The issue concerns applications in a Virtual Machine. 
+ +## OpenNESS - 19.09 +- Gateway in multi-node - will not work when a few nodes have the same public IP (they will be behind one common NAT) +- Ansible in K8s can cause problems when rerun on a machine: + - If after running all 3 scripts + - Script 02 will be run again (it will not remove all necessary K8s related artifacts) + - We would recommend cleaning up the installation on the node + +## OpenNESS - 19.12 +- Gateway in multi-node - will not work when a few nodes have the same public IP (they will be behind one common NAT) +- OpenNESS On-Premises: Cannot remove a failed/disconnected edge node's information/state from the controller +- The CNCA API (4G & 5G) supported in this release is an early access reference implementation and does not support authentication +- Real-time kernel support has been temporarily disabled to address the Kubernetes 1.16.2 and Realtime kernel instability. + +## OpenNESS - 20.03 +- On-Premises edge installation takes more than 1.5 hours because of the Docker image build for OVS-DPDK +- Network edge installation takes more than 1.5 hours because of the Docker image build for OVS-DPDK +- OpenNESS controller allows management NICs to be in the pool of configuration, which might allow configuration by mistake, thus disconnecting the node from the control plane +- When using the SRIOV EPA feature added in 20.03 with OVNCNI, the container cannot access the CNI port. This is due to the SRIOV port being set by changing the network used by the container from default to a custom network. This overwrites the OVNCNI network setting configured before this to enable the container to work with OVNCNI. Another issue with SRIOV is that this also overwrites the network configuration with the EAA and edgedns agents, which prevents the SRIOV enabled container from communicating with the agents. +- Cannot remove Edge Node from Controller when it is offline and a traffic policy is configured or the app is deployed. + +## OpenNESS - 20.06 +- On-Premises edge installation takes 1.5hrs because of the Docker image build for OVS-DPDK +- Network edge installation takes 1.5hrs because of the Docker image build for OVS-DPDK +- OpenNESS controller allows management NICs to be in the pool of configuration, which might allow configuration by mistake and thereby disconnect the node from the control plane +- When using the SRIOV EPA feature added in 20.03 with OVNCNI, the container cannot access the CNI port. This is due to the SRIOV port being set by changing the network used by the container from default to a custom network. This overwrites the OVNCNI network setting configured prior to this to enable the container to work with OVNCNI. Another issue with SRIOV is that this also overwrites the network configuration with the EAA and edgedns agents, which prevents the SRIOV enabled container from communicating with the agents. +- Cannot remove Edge Node from Controller when it is offline and a traffic policy is configured or app is deployed. +- Legacy OnPremises - Traffic rule creation: cannot parse filled and cleared fields +- There is an issue with using CDI when uploading VM images when CMK is enabled due to missing CMK taint toleration. The CDI upload pod does not get deployed and the `virtctl` plugin command times out waiting for the action to complete. A workaround for the issue is to invoke the CDI upload command, edit the taint toleration for the CDI upload to tolerate CMK, update the pod, create the PV, and let the pod run to completion.
+- There is a known issue with cAdvisor which in certain scenarios occasionally fails to expose the metrics for the Prometheus endpoint. See the following GitHub\* link: https://github.com/google/cadvisor/issues/2537 + +## OpenNESS - 20.09 +- Pods which use hugepages get stuck in the terminating state on deletion. This is a known issue on Kubernetes 1.18.x and is planned to be fixed in 1.19.x +- Calico cannot be used as secondary CNI with Multus in OpenNESS. It will work only as primary CNI. Calico must be the only network provider in each cluster. We do not currently support migrating a cluster with another network provider to use Calico networking. https://docs.projectcalico.org/getting-started/kubernetes/requirements +- Collectd Cache telemetry using RDT does not work when RMD is enabled because of a resource conflict. The workaround is to disable the collectd RDT plugin when using RMD; this is implemented globally by default. With this workaround, customers will be able to allocate the Cache but not use Cache-related telemetry. Where RMD is not enabled, customers who desire RDT telemetry can re-enable collectd RDT. + +## OpenNESS - 20.12 +- cAdvisor CPU utilization on the Edge Node is high and could cause a delay in getting an interactive SSH session. A workaround is to remove cAdvisor if it is not needed, using `helm uninstall cadvisor -n telemetry` (see the snippet after this list) +- An issue appears when the KubeVirt Containerized Data Importer (CDI) upload pod is deployed with the Kube-OVN CNI: the deployed pod's readiness probe fails and the pod never reaches the ready state. It is advised to use another CNI, such as Calico, when using CDI with OpenNESS +- Limitation of AF/NEF APIs usage: AF and NEF support only queued requests, hence API calls should be made in sequence, one after another, using CNCA for deterministic responses. If the API calls are made directly from multiple threads concurrently, the behavior is nondeterministic +- Telemetry deployment with PCM enabled will cause a deployment failure in single-node cluster deployments because the PCM dashboards for Grafana are not found
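The cAdvisor workaround referenced in the 20.12 known issues above, shown as a standalone command; it assumes cAdvisor was installed as the Helm release named `cadvisor` in the `telemetry` namespace, as stated in the issue description:

```shell
# Remove cAdvisor if its telemetry is not needed (reduces Edge Node CPU load)
helm uninstall cadvisor -n telemetry
```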
+ +## OpenNESS - 19.12 +OpenNESS Edge node, OpenNESS Controller, Common, Spec, OpenNESS Applications, and Experience kit. + +## OpenNESS - 20.03 +OpenNESS Edge node, OpenNESS Controller, Common, Spec, OpenNESS Applications, and Experience kit. + +## OpenNESS - 20.06 +- Open Source: Edge node, Controller, Epcforedge, Common, Spec, Applications, and Experience kit. +- IDO: IDO Edge node, IDO Controller, IDO Epcforedge, IDO Spec, and IDO Experience kit. + +## OpenNESS - 20.09 +- Open Source: Edge node, Controller, Epcforedge, Common, Spec, Applications and Experience kit. +- IDO: IDO Edge node, IDO Controller, IDO Epcforedge, IDO Spec and IDO Experience kit. + +## OpenNESS - 20.12 +- Open Source: Edge node, Controller, Epcforedge, Common, Spec, Applications and Experience kit. +- IDO: IDO Edge node, IDO Controller, IDO Epcforedge, IDO Spec and IDO Experience kit. + +> **NOTE**: Edge applications repo is common to Open Source and IDO + # Hardware and Software Compatibility OpenNESS Edge Node has been tested using the following hardware specification: ## Intel® Xeon® D Processor - - Supermicro\* 3U form factor chassis server, product SKU code: 835TQ-R920B - - Motherboard type: [X11SDV-16C-TP8F](https://www.supermicro.com/products/motherboard/Xeon/D/X11SDV-16C-TP8F.cfm) - - Intel® Xeon® Processor D-2183IT +- Supermicro\* 3U form factor chassis server, product SKU code: 835TQ-R920B +- Motherboard type: [X11SDV-16C-TP8F](https://www.supermicro.com/products/motherboard/Xeon/D/X11SDV-16C-TP8F.cfm) +- Intel® Xeon® Processor D-2183IT + ## 2nd Generation Intel® Xeon® Scalable Processors - - -| | | -|------------------|---------------------------------------------------------------| -| CLX-SP | Compute Node based on CLX-SP(6252N) | -| Board | S2600WFT server board | -| | 2 x Intel(R) Xeon(R) Gold 6252N CPU @ 2.30GHz | -| | 2 x associated Heatsink | -| Memory | 12x Micron 16GB DDR4 2400MHz DIMMS * [2666 for PnP] | -| Chassis | 2U Rackmount Server Enclosure | -| Storage | Intel M.2 SSDSCKJW360H6 360G | -| NIC | 1x Intel Fortville NIC X710DA4 SFP+ ( PCIe card to CPU-0) | -| QAT | Intel Quick Assist Adapter Device 37c8 | -| | (Symmetrical design) LBG integrated | -| NIC on board | Intel-Ethernet-Controller-I210 (for management) | -| Other card | 2x PCIe Riser cards | + +| | | +| ------------ | ---------------------------------------------------------- | +| CLX-SP | Compute Node based on CLX-SP(6252N) | +| Board | S2600WFT server board | +| | 2 x Intel® Xeon® Gold 6252N CPU @ 2.30GHz | +| | 2 x associated Heatsink | +| Memory | 12x Micron 16GB DDR4 2400MHz DIMMS* [2666 for PnP] | +| Chassis | 2U Rackmount Server Enclosure | +| Storage | Intel M.2 SSDSCKJW360H6 360G | +| NIC | 1x Intel® Fortville NIC X710DA4 SFP+ ( PCIe card to CPU-0) | +| QAT | Intel® Quick Assist Adapter Device 37c8 | +| | (Symmetrical design) LBG integrated | +| NIC on board | Intel-Ethernet-Controller-I210 (for management) | +| Other card | 2x PCIe Riser cards | ## Intel® Xeon® Scalable Processors -| | | -|------------------|---------------------------------------------------------------| -| SKX-SP | Compute Node based on SKX-SP(6148) | -| Board | WolfPass S2600WFQ server board(symmetrical QAT)CPU | -| | 2 x Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz | -| | 2 x associated Heatsink | -| Memory | 12x Micron 16GB DDR4 2400MHz DIMMS * [2666 for PnP] | -| Chassis | 2U Rackmount Server Enclosure | -| Storage | Intel M.2 SSDSCKJW360H6 360G | -| NIC | 1x Intel Fortville NIC X710DA4 SFP+ ( PCIe card to CPU-0) | -| QAT | Intel Quick Assist Adapter 
Device 37c8 | -| | (Symmetrical design) LBG integrated | -| NIC on board | Intel-Ethernet-Controller-I210 (for management) | -| Other card | 2x PCIe Riser cards | -| HDDL-R | [Mouser Mustang-V100](https://www.mouser.ie/datasheet/2/763/Mustang-V100_brochure-1526472.pdf) | -| VCAC-A | [VCAC-A Accelerator for Media Analytics](https://www.intel.com/content/dam/www/public/us/en/documents/datasheets/media-analytics-vcac-a-accelerator-card-by-celestica-datasheet.pdf) | -| PAC-N3000 | [Intel® FPGA Programmable Acceleration Card (Intel® FPGA PAC) N3000 ](https://www.intel.com/content/www/us/en/programmable/products/boards_and_kits/dev-kits/altera/intel-fpga-pac-n3000/overview.html) | +| | | +| ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| SKX-SP | Compute Node based on SKX-SP(6148) | +| Board | WolfPass S2600WFQ server board(symmetrical QAT)CPU | +| | 2 x Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz | +| | 2 x associated Heatsink | +| Memory | 12x Micron 16GB DDR4 2400MHz DIMMS* [2666 for PnP] | +| Chassis | 2U Rackmount Server Enclosure | +| Storage | Intel® M.2 SSDSCKJW360H6 360G | +| NIC | 1x Intel® Fortville NIC X710DA4 SFP+ ( PCIe card to CPU-0) | +| QAT | Intel® Quick Assist Adapter Device 37c8 | +| | (Symmetrical design) LBG integrated | +| NIC on board | Intel-Ethernet-Controller-I210 (for management) | +| Other card | 2x PCIe Riser cards | +| HDDL-R | [Mouser Mustang-V100](https://www.mouser.ie/datasheet/2/763/Mustang-V100_brochure-1526472.pdf) | +| VCAC-A | [VCAC-A Accelerator for Media Analytics](https://www.intel.com/content/dam/www/public/us/en/documents/datasheets/media-analytics-vcac-a-accelerator-card-by-celestica-datasheet.pdf) | +| PAC-N3000 | [Intel® FPGA Programmable Acceleration Card (Intel® FPGA PAC) N3000 ](https://www.intel.com/content/www/us/en/programmable/products/boards_and_kits/dev-kits/altera/intel-fpga-pac-n3000/overview.html) | +| ACC100 | [Intel® vRAN Dedicated Accelerator ACC100](https://networkbuilders.intel.com/solutionslibrary/intel-vran-dedicated-accelerator-acc100-product-brief) | # Supported Operating Systems -> OpenNESS was tested on CentOS Linux release 7.6.1810 (Core) : Note: OpenNESS is tested with CentOS 7.6 Pre-empt RT kernel to ensure VNFs and Applications can co-exist. There is no requirement from OpenNESS software to run on a Pre-empt RT kernel. + +OpenNESS was tested on CentOS Linux release 7.8.2003 (Core) +> **NOTE**: OpenNESS is tested with CentOS 7.8 Pre-empt RT kernel to ensure VNFs and Applications can co-exist. There is no requirement from OpenNESS software to run on a Pre-empt RT kernel. 
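A quick, hedged way to confirm that a node matches the tested baseline above (standard Linux commands; the expected values are only indicative of the validated configuration):

```shell
# Check the CentOS release (7.8.2003 was used for validation)
cat /etc/centos-release
# Check the running kernel; a preempt-RT kernel is optional and not required by OpenNESS
uname -r
```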
# Packages Version -Package: telemetry, cadvisor 0.36.0, grafana 7.0.3, prometheus 2.16.0, prometheus: node exporter 1.0.0-rc.0, tas 0., golang 1.14.9, docker 19.03.12, kubernetes 1.18.4, dpdk 18.11.6, ovs 2.12.0, ovn 2.12.0, helm 3.0, kubeovn 1.0.1, flannel 0.12.0, calico 3.14.0 , multus 3.6, sriov cni 2.3, nfd 0.6.0, cmk v1.4.1 TAS we build from specific commit “a13708825e854da919c6fdf05d50753113d04831” \ No newline at end of file + +Package: telemetry, cadvisor 0.36.0, grafana 7.0.3, prometheus 2.16.0, prometheus: node exporter 1.0.0-rc.0, golang 1.15, docker 19.03.12, kubernetes 1.19.3, dpdk 19.11, ovs 2.14.0, ovn 2.14.0, helm 3.0, kubeovn 1.5.2, flannel 0.12.0, calico 3.16.0, multus 3.6, sriov cni 2.3, nfd 0.6.0, cmk v1.4.1, TAS (from specific commit "a13708825e854da919c6fdf05d50753113d04831")
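A hedged sketch for cross-checking a running cluster against the package versions listed above (standard CLI calls; output formats vary between versions):

```shell
kubectl version --short   # expect a v1.19.x server version
docker --version          # expect 19.03.x
helm version --short      # expect v3.x
```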