diff --git a/.gitignore b/.gitignore index 9e4a2100..a0918f83 100644 --- a/.gitignore +++ b/.gitignore @@ -1,2 +1,6 @@ - +.jekyll-metadata +Gemfile +Gemfile.lock +_config.yml +_site/ *.pdf diff --git a/README.md b/README.md index b3d701f0..edfb1949 100644 --- a/README.md +++ b/README.md @@ -5,19 +5,19 @@ Copyright (c) 2019-2020 Intel Corporation # OpenNESS Quick Start - ## Network Edge - ### Step 1. Get Hardware ► Step 2. [Getting started](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/network-edge/controller-edge-node-setup.md) ► Step 3. [Applications Onboarding](https://github.com/open-ness/ido-specs/blob/master/doc/applications-onboard/network-edge-applications-onboarding.md) + ## Network Edge + ### Step 1. Get Hardware ► Step 2. [Getting started](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/network-edge/controller-edge-node-setup.md) ► Step 3. [Applications Onboarding](https://github.com/open-ness/ido-specs/blob/master/doc/applications-onboard/network-edge-applications-onboarding.md) # OpenNESS solution documentation index -Below is the complete list of OpenNESS solution documentation +Below is the complete list of OpenNESS solution documentation -## Architecture +## Architecture * [architecture.md: OpenNESS Architecture overview](https://github.com/open-ness/ido-specs/blob/master/doc/architecture.md) * [flavors.md: OpenNESS Deployment Flavors](https://github.com/open-ness/ido-specs/blob/master/doc/flavors.md) -## Getting Started - Setup +## Getting Started - Setup * [getting-started: Folder containing how to get started with installing and trying OpenNESS Network Edge solutions](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started) * [openness-experience-kits.md: Overview of the OpenNESS Experience kits that are used to install the Network Edge solutions](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/openness-experience-kits.md) @@ -25,7 +25,7 @@ Below is the complete list of OpenNESS solution documentation * [controller-edge-node-setup.md: Started here for installing and trying OpenNESS Network Edge](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/network-edge/controller-edge-node-setup.md) * [supported-epa.md: List of Silicon and Software EPA that are features that are supported in OpenNESS Network Edge](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/network-edge/supported-epa.md) -## Application onboarding - Deployment +## Application onboarding - Deployment * [applications-onboard: Now that you have installed OpenNESS platform start in this folder to onboard sample application on OpenNESS Network Edge](https://github.com/open-ness/ido-specs/blob/master/doc/applications-onboard) * [network-edge-applications-onboarding.md: Steps for onboarding sample application on OpenNESS Network Edge](https://github.com/open-ness/ido-specs/blob/master/doc/applications-onboard/network-edge-applications-onboarding.md) @@ -33,48 +33,53 @@ Below is the complete list of OpenNESS solution documentation * [openness-interface-service.md: Using network interfaces management service](https://github.com/open-ness/ido-specs/blob/master/doc/applications-onboard/openness-interface-service.md) * [using-openness-cnca.md: Steps for configuring 4G CUPS or 5G Application Function for Edge deployment for Network Edge](https://github.com/open-ness/ido-specs/blob/master/doc/applications-onboard/using-openness-cnca.md) * [openness-eaa.md: Edge Application Agent: Description of Edge 
Application APIs and Edge Application Authentication APIs](https://github.com/open-ness/ido-specs/blob/master/doc/applications-onboard/openness-eaa.md) + * [openness-certsigner.md: Steps for issuing platform certificates](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/openness-certsigner.md) ## Radio Access Network (RAN) -* [ran: Folder containing details of 4G and 5G RAN deployment support](https://github.com/open-ness/ido-specs/tree/master/doc/ran) - * [openness_ran.md: Whitepaper detailing the 4G and 5G RAN deployment support on OpenNESS for Network Edge](https://github.com/open-ness/ido-specs/blob/master/doc/ran/openness_ran.md) - - -## Core Network - 4G and 5G - -* [core-network: Folder containing details of 4G CUPS and 5G edge cloud deployment support](https://github.com/open-ness/ido-specs/tree/master/doc/core-network) - * [openness_epc.md: Whitepaper detailing the 4G CUPS support for Edge cloud deployment in OpenNESS for Network Edge](https://github.com/open-ness/ido-specs/blob/master/doc/core-network/openness_epc.md) - * [openness_5g_nsa.md: Whitepaper detailing the 5G NSA Edge Cloud deployment support in OpenNESS for Network Edge](./doc/core-network/openness_5g_nsa.md) - * [openness_ngc.md: Whitepaper detailing the 5G SA Edge Cloud deployment support in OpenNESS for Network Edge](https://github.com/open-ness/ido-specs/blob/master/doc/core-network/openness_ngc.md) - * [openness_upf.md: Whitepaper detailing the UPF, AF, NEF deployment support on OpenNESS for Network Edge](https://github.com/open-ness/ido-specs/blob/master/doc/core-network/openness_upf.md) - -## Enhanced Platform Awareness - -* [enhanced-platform-awareness: Folder containing individual Silicon and Software EPA that are features that are supported in OpenNESS Network Edge](https://github.com/open-ness/ido-specs/blob/master/doc/enhanced-platform-awareness) - * [openness-hugepage.md: Hugepages support for Edge Applications and Network Functions](https://github.com/open-ness/ido-specs/blob/master/doc/enhanced-platform-awareness/openness-hugepage.md) - * [openness-node-feature-discovery.md: Edge Node hardware and software feature discovery support in OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/enhanced-platform-awareness/openness-node-feature-discovery.md) - * [openness-sriov-multiple-interfaces.md: Dedicated Physical Network interface allocation support for Edge Applications and Network Functions](https://github.com/open-ness/ido-specs/blob/master/doc/enhanced-platform-awareness/openness-sriov-multiple-interfaces.md) - * [openness-dedicated-core.md: Dedicated CPU core allocation support for Edge Applications and Network Functions](https://github.com/open-ness/ido-specs/blob/master/doc/enhanced-platform-awareness/openness-dedicated-core.md) - * [openness-bios.md: Edge platform BIOS and Firmware and configuration support in OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/enhanced-platform-awareness/openness-bios.md) - * [openness-fpga.md: Dedicated FPGA IP resource allocation support for Edge Applications and Network Functions](https://github.com/open-ness/ido-specs/blob/master/doc/enhanced-platform-awareness/openness-fpga.md) - * [openness_hddl.md: Using Intel® Movidius™ Myriad™ X High Density Deep Learning (HDDL) solution in OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/enhanced-platform-awareness/openness_hddl.md) - * [openness-topology-manager.md: Resource Locality awareness support through Topology manager in 
OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/enhanced-platform-awareness/openness-topology-manager.md) +* [ran: Folder containing details of 4G and 5G RAN deployment support](https://github.com/open-ness/ido-specs/tree/master/doc/reference-architectures/ran) + * [openness_ran.md: Whitepaper detailing the 4G and 5G RAN deployment support on OpenNESS for Network Edge](https://github.com/open-ness/ido-specs/blob/master/doc/reference-architectures/ran/openness_ran.md) + * [openness_xran.md: Whitepaper detailing O-RAN Sample Application deployment support on OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/reference-architectures/ran/openness_xran.md) + + +## Core Network - 4G and 5G + +* [core-network: Folder containing details of 4G CUPS and 5G edge cloud deployment support](https://github.com/open-ness/ido-specs/tree/master/doc/reference-architectures/core-network) + * [openness_epc.md: Whitepaper detailing the 4G CUPS support for Edge cloud deployment in OpenNESS for Network Edge](https://github.com/open-ness/ido-specs/blob/master/doc/reference-architectures/core-network/openness_epc.md) + * [openness_5g_nsa.md: Whitepaper detailing the 5G NSA Edge Cloud deployment support in OpenNESS for Network Edge](https://github.com/open-ness/ido-specs/blob/master/doc/reference-architectures/core-network/openness_5g_nsa.md) + * [openness_ngc.md: Whitepaper detailing the 5G SA Edge Cloud deployment support in OpenNESS for Network Edge](https://github.com/open-ness/ido-specs/blob/master/doc/reference-architectures/core-network/openness_ngc.md) + * [openness_upf.md: Whitepaper detailing the UPF, AF, NEF deployment support on OpenNESS for Network Edge](https://github.com/open-ness/ido-specs/blob/master/doc/reference-architectures/core-network/openness_upf.md) + +## Enhanced Platform Awareness + +* [enhanced-platform-awareness: Folder containing individual Silicon and Software EPA that are features that are supported in OpenNESS Network Edge](https://github.com/open-ness/ido-specs/tree/master/doc/building-blocks/enhanced-platform-awareness) + * [openness-hugepage.md: Hugepages support for Edge Applications and Network Functions](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-hugepage.md) + * [openness-node-feature-discovery.md: Edge Node hardware and software feature discovery support in OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-node-feature-discovery.md) + * [openness-sriov-multiple-interfaces.md: Dedicated Physical Network interface allocation support for Edge Applications and Network Functions](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-sriov-multiple-interfaces.md) + * [openness-dedicated-core.md: Dedicated CPU core allocation support for Edge Applications and Network Functions](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-dedicated-core.md) + * [openness-bios.md: Edge platform BIOS and Firmware and configuration support in OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-bios.md) + * [openness-fpga.md: Dedicated FPGA IP resource allocation support for Edge Applications and Network Functions](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-fpga.md) + * [openness_hddl.md: Using 
Intel® Movidius™ Myriad™ X High Density Deep Learning (HDDL) solution in OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness_hddl.md) + * [openness-topology-manager.md: Resource Locality awareness support through Topology manager in OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-topology-manager.md) + * [openness-vca.md: Visual Compute Accelerator Card - Analytics (VCAC-A)](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-vcac-a.md) + * [openness-kubernetes-dashboard.md: Kubernetes Dashboard in OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-kubernetes-dashboard.md) + * [openness-rmd.md: Cache Allocation using Resource Management Daemon(RMD) in OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-rmd.md) + * [openness-telemetry: Telemetry Support in OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-telemetry.md) ## Dataplane -* [dataplane: Folder containing Dataplane and inter-app infrastructure support in OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/dataplane) - * [openness-interapp.md: InterApp Communication support in OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/dataplane/openness-interapp.md) - * [openness-ovn.md: OpenNESS Support for OVS as dataplane with OVN](https://github.com/open-ness/ido-specs/blob/master/doc/dataplane/openness-ovn.md) - * [openness-nts.md: Dataplane support for Edge Cloud between ENB and EPC (S1-U) Deployment](https://github.com/open-ness/ido-specs/blob/master/doc/dataplane/openness-nts.md) - * [openness-userspace-cni.md: Userspace CNI - Container Network Interface Kubernetes plugin](https://github.com/open-ness/ido-specs/blob/master/doc/dataplane/openness-userspace-cni.md) +* [dataplane: Folder containing Dataplane and inter-app infrastructure support in OpenNESS](https://github.com/open-ness/ido-specs/tree/master/doc/building-blocks/dataplane) + * [openness-interapp.md: InterApp Communication support in OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/dataplane/openness-interapp.md) + * [openness-ovn.md: OpenNESS Support for OVS as dataplane with OVN](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/dataplane/openness-ovn.md) + * [openness-userspace-cni.md: Userspace CNI - Container Network Interface Kubernetes plugin](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/dataplane/openness-userspace-cni.md) -## Edge Applications +## Edge Applications * [applications: Folder Containing resource material for Edge Application developers](https://github.com/open-ness/ido-specs/blob/master/doc/applications) * [openness_appguide.md: How to develop or Port existing cloud application to the Edge cloud based on OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/applications/openness_appguide.md) * [openness_ovc.md: Open Visual Cloud Smart City reference Application for OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/applications/openness_ovc.md) * [openness_openvino.md: AI inference reference Edge application for OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/applications/openness_openvino.md) -## Cloud Adapters +## 
Cloud Adapters * [cloud-adapters: How to deploy public cloud IoT gateways on OpenNESS Edge Cloud](https://github.com/open-ness/ido-specs/blob/master/doc/cloud-adapters) * [openness_awsgreengrass.md: Deploying single or multiple instance of Amazon Greengrass IoT gateway on OpenNESS edge cloud as an edge application](https://github.com/open-ness/ido-specs/blob/master/doc/cloud-adapters/openness_awsgreengrass.md) @@ -83,8 +88,8 @@ Below is the complete list of OpenNESS solution documentation ## Reference Architectures * [CERA-Near-Edge.md: Converged Edge Reference Architecture Near Edge](https://github.com/open-ness/ido-specs/blob/master/doc/reference-architectures/CERA-Near-Edge.md) - -## API and Schema +* [CERA-5G-On-Prem.md: Converged Edge Reference Architecture On Premises Edge](https://github.com/open-ness/ido-specs/blob/master/doc/reference-architectures/CERA-5G-On-Prem.md) +## API and Schema * [Edge Application API: EAA](https://www.openness.org/api-documentation/?api=eaa) * [Edge Application Authentication API](https://www.openness.org/api-documentation/?api=auth) @@ -94,15 +99,15 @@ Below is the complete list of OpenNESS solution documentation ## Orchestration * [openness-helm.md: Helm support in OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/orchestration/openness-helm.md) -## Release history +## Release history * [openness_releasenotes.md: This document provides high level system features, issues and limitations information for OpenNESS](https://github.com/open-ness/ido-specs/blob/master/openness_releasenotes.md) -## Related resources +## Related resources -* [OpenNESS Website - Developers : Website containing developer resources](https://www.openness.org/developers) +* [OpenNESS Website - Developers : Website containing developer resources](https://www.openness.org/developers) * [Intel Network Builders OpenNESS training ](https://builders.intel.com/university/networkbuilders/coursescategory/open-network-edge-services-software-openness) - + ## List of Abbreviations - 3GPP: Third Generation Partnership Project - CUPS: Control and User Plane Separation of EPC Nodes @@ -132,8 +137,8 @@ Below is the complete list of OpenNESS solution documentation - SGW-U: Serving Gateway - User Plane Function - TAC: Tracking Area Code - UE: User Equipment (in the context of LTE) -- VIM: Virtual Infrastructure Manager -- UUID: Universally Unique IDentifier +- VIM: Virtual Infrastructure Manager +- UUID: Universally Unique IDentifier - AMF: Access and Mobility Mgmt Function - SMF: Session Management Function - AUSF: Authentication Server Function @@ -146,9 +151,9 @@ Below is the complete list of OpenNESS solution documentation - AF: Application Function - SR-IOV: Single Root I/O Virtualization - NUMA: Non-Uniform Memory Access -- COTS: Commercial Off-The-Shelf +- COTS: Commercial Off-The-Shelf - DU: Distributed Unit of RAN - CU: Centralized Unit of RAN - SBI: Service Based Interfaces -- OEK: OpenNESS Experience Kit -- IDO: Intel Distribution of OpenNESS +- OEK: OpenNESS Experience Kit +- IDO: Intel Distribution of OpenNESS diff --git a/_data/navbars/applications-onboarding.yml b/_data/navbars/applications-onboarding.yml index 08bf0a05..d99fe102 100644 --- a/_data/navbars/applications-onboarding.yml +++ b/_data/navbars/applications-onboarding.yml @@ -3,7 +3,7 @@ title: "Application Onboarding" path: /applications-onboard/ -order: 2 +order: 3 section: - title: Network Edge Applications Onboarding path: /doc/applications-onboard/network-edge-applications-onboarding @@ -20,10 +20,15 @@ 
section: meta_title: Edge Application Agent (EAA) meta_description: OpenNESS enables Edge Applications to produce, discover and consume services that are available on the OpenNESS cluster through the Edge Application Agent (EAA) APIs. + - title: Certificate Signing + path: /doc/applications-onboard/openness-certsigner + meta_title: Certificate Signing + meta_description: Each application that needs the TLS authentication certificate should generate it using the Certificate Signer by sending a CSR via Certificate Requester + - title: Interface Service path: /doc/applications-onboard/openness-interface-service meta_title: OpenNESS Applications Onboard - OpenNESS Interface Service - meta_description: OpenNESS Interface service is an application running in Kubernetes pod on every worker node of OpenNESS Kubernetes cluster and provides OVS bridge, enabling external traffic scenarios. + meta_description: OpenNESS Interface service is an application running in Kubernetes pod on every node of OpenNESS Kubernetes cluster and provides OVS bridge, enabling external traffic scenarios. - title: VM support in OpenNESS for Network Edge path: /doc/applications-onboard/openness-network-edge-vm-support diff --git a/_data/navbars/building-blocks.yml b/_data/navbars/building-blocks.yml new file mode 100644 index 00000000..2bb56cfd --- /dev/null +++ b/_data/navbars/building-blocks.yml @@ -0,0 +1,100 @@ +# SPDX-License-Identifier: Apache-2.0 +# Copyright (c) 2020 Intel Corporation + +title: "Building Blocks" +path: /building-blocks/ +order: 2 +section: + - title: Data Plane + path: + section: + - title: Inter-App Communication + path: /doc/building-blocks/dataplane/openness-interapp + meta_title: Inter-App Communication Support in OpenNESS + meta_description: OpenNESS provides Inter-App Communication support for Network edge modes of OpenNESS. + + - title: OVN/OVS + path: /doc/building-blocks/dataplane/openness-ovn + meta_title: OpenNESS Support for OVS as Dataplane with OVN + meta_description: The primary objective of supporting OVN/OVS in OpenNESS is to demonstrate the capability of using a standard dataplane like OVS for an Edge Compute platform. + + - title: Userspace CNI + path: /doc/building-blocks/dataplane/openness-userspace-cni + meta_title: OpenNESS Userspace CNI, Setup Userspace CNI + meta_description: Userspace CNI is a Container Network Interface Kubernetes plugin that was designed to simplify the process of deployment of DPDK based applications in Kubernetes pods. + + - title: "Enhanced Platform Awareness" + path: + section: + - title: Hugepage Support + path: /doc/building-blocks/enhanced-platform-awareness/openness-hugepage + meta_title: OpenNESS Enhanced Platform Awareness - Hugepage Support on OpenNESS + meta_description: Huge page support openness, added to Kubernetes v1.8, enables the discovery, scheduling, and allocation of huge pages as a native first-class resource. + + - title: Node Feature Discovery Support + path: /doc/building-blocks/enhanced-platform-awareness/openness-node-feature-discovery + meta_title: Enhanced Platform Awareness - Node Feature Discovery Support in OpenNESS + meta_description: OpenNESS Node Feature Discovery is one of the Intel technologies that supports targeting of intelligent configuration and capacity consumption of platform capabilities. 
+ + - title: Multiple Interface And PCIe SRIOV Support + path: /doc/building-blocks/enhanced-platform-awareness/openness-sriov-multiple-interfaces + meta_title: Enhanced Platform Awareness - Multiple Interface And PCIe SRIOV Support in OpenNESS + meta_description: Multiple Interface and PCIe SRIOV support in OpenNESS, OpenNESS Network Edge uses the Multus container network interface is a container network interface plugin for Kubernetes. + + - title: Dedicated CPU Core + path: /doc/building-blocks/enhanced-platform-awareness/openness-dedicated-core + meta_title: Enhanced Platform Awareness - Dedicated CPU Core for Workload Support in OpenNESS + meta_description: Multi-core COTS platforms are typical in any cloud or Cloudnative deployment. Parallel processing on multiple cores helps achieve better density. + + - title: BIOS and Firmware Configuration + path: /doc/building-blocks/enhanced-platform-awareness/openness-bios + meta_title: BIOS and Firmware Configuration on OpenNESS Platform + meta_description: BIOS and Firmware are the fundamental platform configurations of a typical Commercial off-the-shelf (COTS) platform. + + - title: FPGA Support + path: /doc/building-blocks/enhanced-platform-awareness/openness-fpga + meta_title: FPGA in OpenNESS - Programming, Resource Allocation and Configuration + meta_description: The FPGA Programmable acceleration card plays a key role in accelerating certain types of workloads which in-turn increases the overall compute capacity of a COTS platform. + + - title: Intel® vRAN Dedicated Accelerator ACC100 + path: /doc/building-blocks/enhanced-platform-awareness/openness-acc100 + meta_title: Intel® vRAN Dedicated Accelerator ACC100 + meta_description: Using ACC100 eASIC in OpenNESS, Resource Allocation, and Configuration + + - title: Intel® Movidius™ Myriad™ X HDDL Support + path: /doc/building-blocks/enhanced-platform-awareness/openness_hddl + meta_title: Intel® Movidius™ Myriad™ X HDDL Solution in OpenNESS + meta_description: Intel® Movidius™ Myriad™ X HDDL solution integrates multiple Myriad™ X SoCs in a PCIe add-in card form factor or a module form factor to build a scalable, high capacity deep learning solution. + + - title: Visual Compute Accelerator Card - Analytics (VCAC-A) + path: /doc/building-blocks/enhanced-platform-awareness/openness-vcac-a + meta_title: Visual Compute Accelerator Card - Analytics (VCAC-A) + meta_description: The Visual Cloud Accelerator Card - Analytics (VCAC-A) equips Intel® Xeon® Scalable Processor based platforms and Intel Movidius™ VPU to enhance the video codec, computer vision, and inference capabilities. + + - title: Topology Manager Support + path: /doc/building-blocks/enhanced-platform-awareness/openness-topology-manager + meta_title: Topology Manager Support in OpenNESS, Resource Locality Awareness + meta_description: Topology Manager is a solution permitting k8s components like CPU Manager and Device Manager, to coordinate the resources allocated to a workload. + + - title: Resource Management Daemon + path: /doc/building-blocks/enhanced-platform-awareness/openness-rmd + meta_title: Cache Allocation for Containers with Resource Management Daemon (RMD) + meta_description: Intel® Resource Director Technology (Intel® RDT) provides visibility and control over how shared resources such as last-level cache (LLC) and memory bandwidth are used by applications, virtual machines (VMs), and containers. 
+ + - title: Telemetry support in OpenNESS + path: /doc/building-blocks/enhanced-platform-awareness/openness-telemetry + meta_title: Telemetry support in OpenNESS + meta_description: OpenNESS supports platform and application telemetry allowing users to retrieve information about the platform, the underlying hardware, cluster and applications deployed. + + - title: Kubernetes Dashboard in OpenNESS + path: /doc/building-blocks/enhanced-platform-awareness/openness-kubernetes-dashboard + meta_title: Kubernetes Dashboard in OpenNESS + meta_description: OpenNESS supports Kubernetes Dashboard that can be used to inspect and manage Kubernetes cluster. + + - title: Multi-Cluster Orchestration + path: + section: + - title: Edge Multi-Cluster Orchestrator (EMCO) + path: /doc/building-blocks/emco/openness-emco + meta_title: Edge Multi-Cluster Orchestrator + meta_description: Geo-Distributed multiple clusters application orchestration. diff --git a/_data/navbars/cloud-adapters.yml b/_data/navbars/cloud-adapters.yml index 12b7f045..0a5d465b 100644 --- a/_data/navbars/cloud-adapters.yml +++ b/_data/navbars/cloud-adapters.yml @@ -3,7 +3,7 @@ title: "Cloud Adapters" path: /cloud-adapters/ -order: 8 +order: 6 section: - title: AWS Greengrass path: /doc/cloud-adapters/openness_awsgreengrass diff --git a/_data/navbars/core-network-4G-5G.yml b/_data/navbars/core-network-4G-5G.yml deleted file mode 100644 index c0e298d3..00000000 --- a/_data/navbars/core-network-4G-5G.yml +++ /dev/null @@ -1,26 +0,0 @@ -# SPDX-License-Identifier: Apache-2.0 -# Copyright (c) 2020 Intel Corporation - -title: "Core Network - 4G and 5G" -path: /core-network/ -order: 4 -section: - - title: Evolved Packet Core (EPC) - path: /doc/core-network/openness_epc - meta_title: Edge Cloud Deployment with 3GPP 4G LTE CUPS of EPC - meta_description: OpenNESS is an open source edge computing platform that enables Service Providers and Enterprises to deploy applications and services on a network edge. - - - title: 5G Non-Stand Alone (NSA) - path: /doc/core-network/openness_5g_nsa - meta_title: Edge Cloud Deployment with 3GPP 5G Non Stand Alone - meta_description: OpenNESS is an open source edge computing platform that enables Service Providers and Enterprises to deploy applications and services on a network edge. - - - title: Next-Gen Core (NGC) - path: /doc/core-network/openness_ngc - meta_title: Edge Cloud Deployment with 3GPP 5G Stand Alone - meta_description: OpenNESS NGC provides reference REST-based APIs along with 3GPP standard traffic influencing APIs to address some of these major challenges in 5G edge deployments. - - - title: User Plane Function (UPF) - path: /doc/core-network/openness_upf - meta_title: User Plane Function (UPF) - meta_description: User Plane Function is the evolution of Control and User Plane Separation which part of the Rel.14 in Evolved Packet core. CUPS enabled PGW to be split into PGW-C and PGW-U. diff --git a/_data/navbars/dataplane.yml b/_data/navbars/dataplane.yml deleted file mode 100644 index faa9540e..00000000 --- a/_data/navbars/dataplane.yml +++ /dev/null @@ -1,21 +0,0 @@ -# SPDX-License-Identifier: Apache-2.0 -# Copyright (c) 2020 Intel Corporation - -title: "Dataplane" -path: /dataplane/ -order: 6 -section: - - title: Inter-App Communication - path: /doc/dataplane/openness-interapp - meta_title: Inter-App Communication Support in OpenNESS - meta_description: OpenNESS provides Inter-App Communication support for Network edge modes of OpenNESS. 
- - - title: OVN/OVS - path: /doc/dataplane/openness-ovn - meta_title: OpenNESS Support for OVS as Dataplane with OVN - meta_description: The primary objective of supporting OVN/OVS in OpenNESS is to demonstrate the capability of using a standard dataplane like OVS for an Edge Compute platform. - - - title: Userspace CNI - path: /doc/dataplane/openness-userspace-cni - meta_title: OpenNESS Userspace CNI, Setup Userspace CNI - meta_description: Userspace CNI is a Container Network Interface Kubernetes plugin that was designed to simplify the process of deployment of DPDK based applications in Kubernetes pods. diff --git a/_data/navbars/devkits.yml b/_data/navbars/devkits.yml new file mode 100644 index 00000000..18c8cedf --- /dev/null +++ b/_data/navbars/devkits.yml @@ -0,0 +1,11 @@ +# SPDX-License-Identifier: Apache-2.0 +# Copyright (c) 2020 Intel Corporation + +title: "Development Kits" +path: /devkits/ +order: 7 +section: + - title: OpenNESS Development Kit for Microsoft Azure + path: /doc/devkits/openness-azure-devkit + meta_title: OpenNESS Development Kit for Microsoft Azure + meta_description: This devkit supports the use of OpenNESS in cloud solutions. It leverages the Microsoft Azure Stack for OpenNESS deployment. diff --git a/_data/navbars/edge-applications.yml b/_data/navbars/edge-applications.yml index 362124b1..9c00dae3 100644 --- a/_data/navbars/edge-applications.yml +++ b/_data/navbars/edge-applications.yml @@ -3,7 +3,7 @@ title: "Edge Applications" path: /applications/ -order: 7 +order: 5 section: - title: Application Development & Porting Guide path: /doc/applications/openness_appguide diff --git a/_data/navbars/enhanced-platform-awareness.yml b/_data/navbars/enhanced-platform-awareness.yml deleted file mode 100644 index f4170f59..00000000 --- a/_data/navbars/enhanced-platform-awareness.yml +++ /dev/null @@ -1,66 +0,0 @@ -# SPDX-License-Identifier: Apache-2.0 -# Copyright (c) 2020 Intel Corporation - -title: "Enhanced Platform Awareness" -path: /enhanced-platform-awareness/ -order: 5 -section: - - title: Hugepage Support - path: /doc/enhanced-platform-awareness/openness-hugepage - meta_title: OpenNESS Enhanced Platform Awareness - Hugepage Support on OpenNESS - meta_description: Huge page support openness, added to Kubernetes v1.8, enables the discovery, scheduling, and allocation of huge pages as a native first-class resource. - - - title: Node Feature Discovery Support - path: /doc/enhanced-platform-awareness/openness-node-feature-discovery - meta_title: Enhanced Platform Awareness - Node Feature Discovery Support in OpenNESS - meta_description: OpenNESS Node Feature Discovery is one of the Intel technologies that supports targeting of intelligent configuration and capacity consumption of platform capabilities. - - - title: Multiple Interface And PCIe SRIOV Support - path: /doc/enhanced-platform-awareness/openness-sriov-multiple-interfaces - meta_title: Enhanced Platform Awareness - Multiple Interface And PCIe SRIOV Support in OpenNESS - meta_description: Multiple Interface and PCIe SRIOV support in OpenNESS, OpenNESS Network Edge uses the Multus container network interface is a container network interface plugin for Kubernetes. - - - title: Dedicated CPU Core - path: /doc/enhanced-platform-awareness/openness-dedicated-core - meta_title: Enhanced Platform Awareness - Dedicated CPU Core for Workload Support in OpenNESS - meta_description: Multi-core COTS platforms are typical in any cloud or Cloudnative deployment. 
Parallel processing on multiple cores helps achieve better density. - - - title: BIOS and Firmware Configuration - path: /doc/enhanced-platform-awareness/openness-bios - meta_title: BIOS and Firmware Configuration on OpenNESS Platform - meta_description: BIOS and Firmware are the fundamental platform configurations of a typical Commercial off-the-shelf (COTS) platform. - - - title: FPGA Support - path: /doc/enhanced-platform-awareness/openness-fpga - meta_title: FPGA in OpenNESS - Programming, Resource Allocation and Configuration - meta_description: The FPGA Programmable acceleration card plays a key role in accelerating certain types of workloads which in-turn increases the overall compute capacity of a COTS platform. - - - title: Intel® Movidius™ Myriad™ X HDDL Support - path: /doc/enhanced-platform-awareness/openness_hddl - meta_title: Intel® Movidius™ Myriad™ X HDDL Solution in OpenNESS - meta_description: Intel® Movidius™ Myriad™ X HDDL solution integrates multiple Myriad™ X SoCs in a PCIe add-in card form factor or a module form factor to build a scalable, high capacity deep learning solution. - - - title: Visual Compute Accelerator Card - Analytics (VCAC-A) - path: /doc/enhanced-platform-awareness/openness-vcac-a - meta_title: Visual Compute Accelerator Card - Analytics (VCAC-A) - meta_description: The Visual Cloud Accelerator Card - Analytics (VCAC-A) equips Intel® Xeon® Scalable Processor based platforms and Intel Movidius™ VPU to enhance the video codec, computer vision, and inference capabilities. - - - title: Topology Manager Support - path: /doc/enhanced-platform-awareness/openness-topology-manager - meta_title: Topology Manager Support in OpenNESS, Resource Locality Awareness - meta_description: Topology Manager is a solution permitting k8s components like CPU Manager and Device Manager, to coordinate the resources allocated to a workload. - - - title: Resource Management Daemon - path: /doc/enhanced-platform-awareness/openness-rmd - meta_title: Cache Allocation for Containers with Resource Management Daemon (RMD) - meta_description: Intel® Resource Director Technology (Intel® RDT) provides visibility and control over how shared resources such as last-level cache (LLC) and memory bandwidth are used by applications, virtual machines (VMs), and containers. - - - title: Telemetry support in OpenNESS - path: /doc/enhanced-platform-awareness/openness-telemetry - meta_title: Telemetry support in OpenNESS - meta_description: OpenNESS supports platform and application telemetry allowing users to retrieve information about the platform, the underlying hardware, cluster and applications deployed. - - - title: Kubernetes Dashboard in OpenNESS - path: /doc/enhanced-platform-awareness/openness-kubernetes-dashboard - meta_title: Kubernetes Dashboard in OpenNESS - meta_description: OpenNESS supports Kubernetes Dashboard that can be used to inspect and manage Kubernetes cluster. 
diff --git a/_data/navbars/introduction.yml b/_data/navbars/introduction.yml index 38cbf161..baabe3b1 100644 --- a/_data/navbars/introduction.yml +++ b/_data/navbars/introduction.yml @@ -2,9 +2,14 @@ # Copyright (c) 2020 Intel Corporation title: "Introduction" -path: /architecture/ +path: /introduction/ order: 0 section: + - title: OpenNESS Overview + path: /doc/overview + meta_title: OpenNESS Overview + meta_description: OpenNESS is an edge computing software toolkit that enables highly optimized and performant edge platforms to on-board and manage applications and network functions with cloud-like agility across any type of network. + - title: OpenNESS Architecture & Solution Overview path: /doc/architecture meta_title: OpenNESS Architecture And Solution Overview diff --git a/_data/navbars/radio-access-network.yml b/_data/navbars/radio-access-network.yml deleted file mode 100644 index e878eccd..00000000 --- a/_data/navbars/radio-access-network.yml +++ /dev/null @@ -1,15 +0,0 @@ -# SPDX-License-Identifier: Apache-2.0 -# Copyright (c) 2020 Intel Corporation - -title: "Radio Access Network (RAN)" -path: /ran/ -order: 3 -section: - - title: OpenNESS Radio Access Network - path: /doc/ran/openness_ran - meta_title: OpenNESS Radio Access Network is the Edge of Wireless Network - meta_description: OpenNESS Radio Access Network is the edge of the wireless network. OpenNESS Intel FlexRAN uses as a reference 4G and 5G base station for 4G and 5G end-to-end testing. - - title: O-RAN Front Haul Sample Application in OpenNESS - path: /doc/ran/openness_xran - meta_title: 5GNR FlexRAN Front Haul functional units deployment with OpenNESS based on O-RAN specifications at the Network Edge - meta_description: 5GNR FlexRAN Front Haul functional units deployment with OpenNESS based on O-RAN specifications at the Network Edge. diff --git a/_data/navbars/reference-architectures.yml b/_data/navbars/reference-architectures.yml index 737e2951..b98555e3 100644 --- a/_data/navbars/reference-architectures.yml +++ b/_data/navbars/reference-architectures.yml @@ -3,9 +3,55 @@ title: "Reference Architectures" path: /reference-architectures/ -order: 9 +order: 4 section: + - title: Core Network + path: + section: + - title: Evolved Packet Core (EPC) + path: /doc/reference-architectures/core-network/openness_epc + meta_title: Edge Cloud Deployment with 3GPP 4G LTE CUPS of EPC + meta_description: OpenNESS is an open source edge computing platform that enables Service Providers and Enterprises to deploy applications and services on a network edge. + + - title: 5G Non-Stand Alone (NSA) + path: /doc/reference-architectures/core-network/openness_5g_nsa + meta_title: Edge Cloud Deployment with 3GPP 5G Non Stand Alone + meta_description: OpenNESS is an open source edge computing platform that enables Service Providers and Enterprises to deploy applications and services on a network edge. + + - title: Next-Gen Core (NGC) + path: /doc/reference-architectures/core-network/openness_ngc + meta_title: Edge Cloud Deployment with 3GPP 5G Stand Alone + meta_description: OpenNESS NGC provides reference REST-based APIs along with 3GPP standard traffic influencing APIs to address some of these major challenges in 5G edge deployments. + + - title: User Plane Function (UPF) + path: /doc/reference-architectures/core-network/openness_upf + meta_title: User Plane Function (UPF) + meta_description: User Plane Function is the evolution of Control and User Plane Separation which part of the Rel.14 in Evolved Packet core. 
CUPS enabled PGW to be split into PGW-C and PGW-U. + + - title: Radio Access Network + path: + section: + - title: OpenNESS Radio Access Network + path: /doc/reference-architectures/ran/openness_ran + meta_title: OpenNESS Radio Access Network is the Edge of Wireless Network + meta_description: OpenNESS Radio Access Network is the edge of the wireless network. OpenNESS Intel FlexRAN uses as a reference 4G and 5G base station for 4G and 5G end-to-end testing. + + - title: O-RAN Front Haul Sample Application in OpenNESS + path: /doc/reference-architectures/ran/openness_xran + meta_title: 5GNR FlexRAN Front Haul functional units deployment with OpenNESS based on O-RAN specifications at the Network Edge + meta_description: 5GNR FlexRAN Front Haul functional units deployment with OpenNESS based on O-RAN specifications at the Network Edge. + - title: Converged Edge Reference Architecture Near Edge path: /doc/reference-architectures/CERA-Near-Edge meta_title: Converged Edge Reference Architecture Near Edge meta_description: Reference architecture combines wireless and high performance compute for IoT, AI, video and other services. + + - title: Converged Edge Reference Architecture On Premises Edge + path: /doc/reference-architectures/CERA-5G-On-Prem + meta_title: Converged Edge Reference Architecture On Premises Edge + meta_description: Reference architecture combines wireless and high performance compute for IoT, AI, video and other services. + + - title: Converged Edge Reference Architecture for SD-WAN + path: /doc/reference-architectures/openness_sdwan + meta_title: Converged Edge Reference Architecture for SD-WAN + meta_description: OpenNESS provides a reference solution for SD-WAN consisting of building blocks for cloud-native deployments. diff --git a/_data/navbars/release-history.yml b/_data/navbars/release-history.yml index 6f5db0db..5e1ccabd 100644 --- a/_data/navbars/release-history.yml +++ b/_data/navbars/release-history.yml @@ -2,8 +2,8 @@ # Copyright (c) 2020 Intel Corporation title: "Release history" -path: /openness_releasenotes/ -order: 10 +path: /release-notes/ +order: 8 section: - title: Release Notes path: /openness_releasenotes diff --git a/doc/applications-onboard/network-edge-applications-onboarding.md b/doc/applications-onboard/network-edge-applications-onboarding.md index bce7152f..b3bbd3fd 100644 --- a/doc/applications-onboard/network-edge-applications-onboarding.md +++ b/doc/applications-onboard/network-edge-applications-onboarding.md @@ -1,5 +1,5 @@ ```text -SPDX-License-Identifier: Apache-2.0 +SPDX-License-Identifier: Apache-2.0 Copyright (c) 2019-2020 Intel Corporation ``` @@ -17,10 +17,10 @@ Copyright (c) 2019-2020 Intel Corporation - [Onboarding OpenVINO application](#onboarding-openvino-application) - [Prerequisites](#prerequisites-1) - [Setting up networking interfaces](#setting-up-networking-interfaces) - - [Deploying the application](#deploying-the-application) + - [Deploying the Application](#deploying-the-application) - [Applying Kubernetes network policies](#applying-kubernetes-network-policies-1) - [Setting up Edge DNS](#setting-up-edge-dns) - - [Starting traffic from client simulator](#starting-traffic-from-client-simulator) + - [Starting traffic from Client Simulator](#starting-traffic-from-client-simulator) - [Onboarding Smart City sample application](#onboarding-smart-city-sample-application) - [Setting up networking interfaces](#setting-up-networking-interfaces-1) - [Building Smart City ingredients](#building-smart-city-ingredients) @@ -29,26 +29,26 @@ 
Copyright (c) 2019-2020 Intel Corporation - [Enhanced Platform Awareness](#enhanced-platform-awareness) - [VM support for Network Edge](#vm-support-for-network-edge) - [Troubleshooting](#troubleshooting) - - [Useful commands:](#useful-commands) + - [Useful Commands:](#useful-commands) - + # Introduction This document aims to familiarize users with the Open Network Edge Services Software (OpenNESS) application on-boarding process for the Network Edge. This document provides instructions on how to deploy an application from the Edge Controller to Edge Nodes in the cluster; it also provides sample deployment scenarios and traffic configuration for the application. The applications will be deployed from the Edge Controller via the Kubernetes `kubectl` command-line utility. Sample specification files for application onboarding are also provided. # Installing OpenNESS -The following application onboarding steps assume that OpenNESS was installed through [OpenNESS playbooks](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/network-edge/controller-edge-node-setup.md). +The following application onboarding steps assume that OpenNESS was installed through [OpenNESS playbooks](https://github.com/open-ness/specs/blob/master/doc/getting-started/network-edge/controller-edge-node-setup.md). # Building applications Users must provide the application to be deployed on the OpenNESS platform for Network Edge. The application must be provided in a Docker\* image format that is available either from an external Docker repository (Docker Hub) or a locally built Docker image. The image must be available on the Edge Node, which the application will be deployed on. -> **Note**: The Docker registry setup is out of scope for this document. If users already have a docker container image file and would like to copy it to the node manually, they can use the `docker load` command to add the image. The success of using a pre-built Docker image depends on the application dependencies that users must know. +> **Note**: The Harbor registry setup is out of scope for this document. If users already have a docker container image file and would like to copy it to the node manually, they can use the `docker load` command to add the image. The success of using a pre-built Docker image depends on the application dependencies that users must know. -The OpenNESS [edgeapps](https://github.com/open-ness/edgeapps) repository provides images for OpenNESS supported applications. Pull the repository to your Edge Node to build the images. +The OpenNESS [edgeapps](https://github.com/open-ness/edgeapps) repository provides images for OpenNESS supported applications. Pull the repository to your Edge Node to build the images. -This document explains the build and deployment of two applications: -1. Sample application: a simple “Hello, World!” reference application for OpenNESS -2. OpenVINO™ application: A close to real-world inference application +This document explains the build and deployment of two applications: +1. Sample application: a simple “Hello, World!” reference application for OpenNESS +2. OpenVINO™ application: A close to real-world inference application ## Building sample application images The sample application is available in [the edgeapps repository](https://github.com/open-ness/edgeapps/tree/master/sample-app); further information about the application is contained within the `Readme.md` file. 
@@ -88,7 +88,7 @@ The following steps are required to build the sample application Docker images f Additionally, an application to generate sample traffic is provided. The application should be built on a separate host, which generates the traffic. -1. To build the client simulator application image from the application directory, navigate to the `./clientsim` directory and run: +1. To build the client simulator application image from the application directory, navigate to the `./clientsim` directory and run: ``` ./build-image.sh ``` @@ -120,17 +120,17 @@ Kubernetes NetworkPolicy is a mechanism that enables control over how pods are a ```yml apiVersion: networking.k8s.io/v1 kind: NetworkPolicy - metadata: + metadata: name: eaa-prod-cons-policy namespace: default spec: podSelector: {} - policyTypes: + policyTypes: - Ingress ingress: - - from: + - from: - ipBlock: - cidr: 10.16.0.0/16 + cidr: 10.16.0.0/16 ports: - protocol: TCP port: 80 @@ -148,40 +148,137 @@ Kubernetes NetworkPolicy is a mechanism that enables control over how pods are a 1. To deploy a sample producer application, create the following `sample_producer.yml` pod specification file. ```yml + --- + apiVersion: v1 + kind: ServiceAccount + metadata: + name: producer + + --- + kind: ClusterRoleBinding + apiVersion: rbac.authorization.k8s.io/v1 + metadata: + name: producer-csr-requester + roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: csr-requester + subjects: + - kind: ServiceAccount + name: producer + namespace: default + + --- + apiVersion: v1 + kind: ConfigMap + metadata: + name: producer-csr-config + data: + certrequest.json: | + { + "CSR": { + "Name": "producer", + "Subject": { + "CommonName": "ExampleNamespace:ExampleProducerAppID" + }, + "DNSSANs": [], + "IPSANs": [], + "KeyUsages": [ + "digital signature", "key encipherment", "client auth" + ] + }, + "Signer": "openness.org/certsigner", + "WaitTimeout": "5m" + } + + --- apiVersion: apps/v1 kind: Deployment metadata: name: producer - spec: + labels: + app: producer + spec: replicas: 1 - selector: - matchLabels: - app: producer - template: + selector: + matchLabels: + app: producer + template: metadata: labels: app: producer spec: - tolerations: - - key: node-role.kube-ovn/master - effect: NoSchedule + securityContext: + runAsUser: 1000 + runAsGroup: 3000 + serviceAccountName: producer + initContainers: + - name: alpine + image: alpine:latest + command: ["/bin/sh"] + args: ["-c", "cp /root/ca-certrequester/cert.pem /root/certs/root.pem"] + imagePullPolicy: IfNotPresent + securityContext: + runAsUser: 0 + runAsGroup: 0 + resources: + requests: + cpu: "0.1" + limits: + cpu: "0.1" + memory: "128Mi" + volumeMounts: + - name: ca-certrequester + mountPath: /root/ca-certrequester + - name: certs + mountPath: /root/certs + - name: certrequester + image: certrequester:1.0 + args: ["--cfg", "/home/certrequester/config/certrequest.json"] + imagePullPolicy: Never + resources: + requests: + cpu: "0.1" + limits: + cpu: "0.1" + memory: "128Mi" + volumeMounts: + - name: config + mountPath: /home/certrequester/config/ + - name: certs + mountPath: /home/certrequester/certs/ containers: - name: producer image: producer:1.0 imagePullPolicy: Never + volumeMounts: + - name: certs + mountPath: /home/sample/certs/ ports: - - containerPort: 80 - containerPort: 443 + volumes: + - name: config + configMap: + name: producer-csr-config + - name: ca-certrequester + secret: + secretName: ca-certrequester + - name: certs + emptyDir: {} ``` 2. 
Deploy the pod: ``` kubectl create -f sample_producer.yml ``` -3. Check that the pod is running: +3. Accept the producer's CSR: + ``` + kubectl certificate approve producer + ``` +4. Check that the pod is running: ``` kubectl get pods | grep producer ``` -4. Verify logs of the sample application producer: +5. Verify logs of the sample application producer: ``` kubectl logs -f @@ -189,19 +286,65 @@ Kubernetes NetworkPolicy is a mechanism that enables control over how pods are a The Example Producer eaa.openness [{ExampleNotification 1.0.0 Description for Event #1 by Example Producer}]}]} Sending notification ``` -5. Verify logs of EAA +6. Verify logs of EAA ``` - kubectl logs -f -n openness + kubectl logs -f -n openness Expected output: RequestCredentials request from CN: ExampleNamespace:ExampleProducerAppID, from IP: properly handled ``` -6. To deploy a sample consumer application, create the following `sample_consumer.yml` pod specification file. +7. To deploy a sample consumer application, create the following `sample_consumer.yml` pod specification file. ```yml + --- + apiVersion: v1 + kind: ServiceAccount + metadata: + name: consumer + + --- + kind: ClusterRoleBinding + apiVersion: rbac.authorization.k8s.io/v1 + metadata: + name: consumer-csr-requester + roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: csr-requester + subjects: + - kind: ServiceAccount + name: consumer + namespace: default + + --- + apiVersion: v1 + kind: ConfigMap + metadata: + name: consumer-csr-config + data: + certrequest.json: | + { + "CSR": { + "Name": "consumer", + "Subject": { + "CommonName": "ExampleNamespace:ExampleConsumerAppID" + }, + "DNSSANs": [], + "IPSANs": [], + "KeyUsages": [ + "digital signature", "key encipherment", "client auth" + ] + }, + "Signer": "openness.org/certsigner", + "WaitTimeout": "5m" + } + + --- apiVersion: apps/v1 kind: Deployment metadata: name: consumer + labels: + app: consumer spec: replicas: 1 selector: @@ -212,33 +355,84 @@ Kubernetes NetworkPolicy is a mechanism that enables control over how pods are a labels: app: consumer spec: - tolerations: - - key: node-role.kube-ovn/master - effect: NoSchedule + securityContext: + runAsUser: 1000 + runAsGroup: 3000 + serviceAccountName: consumer + initContainers: + - name: alpine + image: alpine:latest + command: ["/bin/sh"] + args: ["-c", "cp /root/ca-certrequester/cert.pem /root/certs/root.pem"] + imagePullPolicy: IfNotPresent + securityContext: + runAsUser: 0 + runAsGroup: 0 + resources: + requests: + cpu: "0.1" + limits: + cpu: "0.1" + memory: "128Mi" + volumeMounts: + - name: ca-certrequester + mountPath: /root/ca-certrequester + - name: certs + mountPath: /root/certs + - name: certrequester + image: certrequester:1.0 + args: ["--cfg", "/home/certrequester/config/certrequest.json"] + imagePullPolicy: Never + resources: + requests: + cpu: "0.1" + limits: + cpu: "0.1" + memory: "128Mi" + volumeMounts: + - name: config + mountPath: /home/certrequester/config/ + - name: certs + mountPath: /home/certrequester/certs/ containers: - name: consumer image: consumer:1.0 imagePullPolicy: Never + volumeMounts: + - name: certs + mountPath: /home/sample/certs/ ports: - - containerPort: 80 - containerPort: 443 + volumes: + - name: config + configMap: + name: consumer-csr-config + - name: ca-certrequester + secret: + secretName: ca-certrequester + - name: certs + emptyDir: {} + ``` +8. Accept the consumer's CSR: + ``` + kubectl certificate approve consumer ``` -7. Deploy the pod: +9. 
Deploy the pod: ``` kubectl create -f sample_consumer.yml ``` -8. Check that the pod is running: +10. Check that the pod is running: ``` kubectl get pods | grep consumer ``` -9. Verify logs of the sample application consumer: +11. Verify logs of the sample application consumer: ``` kubectl logs -f Expected output: Received notification ``` -10. Verify logs of EAA +12. Verify logs of EAA ``` kubectl logs -f @@ -285,7 +479,7 @@ This section guides users through the complete process of onboarding the OpenVIN ... 0000:86:00.0 | 3c:fd:fe:b2:42:d0 | detached ... - + kubectl interfaceservice attach 0000:86:00.0 ... Interface: 0000:86:00.0 successfully attached @@ -302,10 +496,12 @@ This section guides users through the complete process of onboarding the OpenVIN 1. An application `yaml` specification file for the OpenVINO producer that is used to deploy the K8s pod can be found in the Edge Apps repository at [./applications/openvino/producer/openvino-prod-app.yaml](https://github.com/open-ness/edgeapps/blob/master/applications/openvino/producer/openvino-prod-app.yaml). The pod will use the Docker image, which must be [built](#building-openvino-application-images) and available on the platform. Deploy the producer application by running: ``` kubectl apply -f openvino-prod-app.yaml + kubectl certificate approve openvino-prod-app ``` 2. An application `yaml` specification file for the OpenVINO consumer that is used to deploy K8s pod can be found in the Edge Apps repository at [./applications/openvino/consumer/openvino-cons-app.yaml](https://github.com/open-ness/edgeapps/blob/master/applications/openvino/consumer/openvino-cons-app.yaml). The pod will use the Docker image, which must be [built](#building-openvino-application-images) and available on the platform. Deploy the consumer application by running: ``` kubectl apply -f openvino-cons-app.yaml + kubectl certificate approve openvino-cons-app ``` 3. Verify that no errors show up in the logs of the OpenVINO consumer application: ``` @@ -471,14 +667,14 @@ kubectl interfaceservice get ## Building Smart City ingredients - 1. Clone the Smart City Reference Pipeline source code from [GitHub](https://github.com/OpenVisualCloud/Smart-City-Sample.git) to the following: 1) Camera simulator machines, 2) OpenNESS Controller machine, and 3) Smart City cloud master machine. + 1. Clone the Smart City Reference Pipeline source code from [GitHub](https://github.com/OpenVisualCloud/Smart-City-Sample.git) to the following: 1) Camera simulator machines, 2) OpenNESS Controller machine, and 3) Smart City cloud control plane machine. 2. Build the Smart City application on all of the machines as explained in [Smart City deployment on OpenNESS](https://github.com/OpenVisualCloud/Smart-City-Sample/tree/openness-k8s/deployment/openness). At least 2 offices (edge nodes) must be installed on OpenNESS. ## Running Smart City 1. On the Camera simulator machines, assign an IP address to the ethernet interface which the dataplane traffic will be transmitted through to the edge office1 and office2 nodes: - + On camera-sim1: ```shell ip a a 192.168.1.10/24 dev @@ -504,7 +700,7 @@ kubectl interfaceservice get make start_openness_camera ``` - 3. On the Smart City cloud master machine, run the Smart City cloud containers: + 3. On the Smart City cloud control plane machine, run the Smart City cloud containers: ```shell make start_openness_cloud ``` @@ -519,18 +715,18 @@ kubectl interfaceservice get 4. 
On the OpenNESS Controller machine, build and run the Smart City cloud containers: ```shell export CAMERA_HOSTS=192.168.1.10,192.168.2.10 - export CLOUD_HOST= + export CLOUD_HOST= make make update make start_openness_office ``` - > **NOTE**: `` is where the Smart City cloud master machine can be reached on the management/cloud network. + > **NOTE**: `` is where the Smart City cloud control plane machine can be reached on the management/cloud network. - 5. From the web browser, launch the Smart City web UI at the URL `https:///` + 5. From the web browser, launch the Smart City web UI at the URL `https:///` -## Inter application communication +## Inter application communication The IAC is available via the default overlay network used by Kubernetes - Kube-OVN. For more information on Kube-OVN, refer to the Kube-OVN support in OpenNESS [documentation](https://github.com/open-ness/ido-specs/blob/master/doc/dataplane/openness-interapp.md#interapp-communication-support-in-openness-network-edge) diff --git a/doc/applications-onboard/openness-certsigner.md b/doc/applications-onboard/openness-certsigner.md new file mode 100644 index 00000000..014fb98e --- /dev/null +++ b/doc/applications-onboard/openness-certsigner.md @@ -0,0 +1,189 @@ +```text +SPDX-License-Identifier: Apache-2.0 +Copyright (c) 2020 Intel Corporation +``` + +# Certificate Signer +- [Overview](#overview) +- [Usage](#usage) + - [Deployment](#deployment) + +## Overview +Each application that needs the TLS authentication certificate should generate it using the Certificate Signer by sending a CSR via the Certificate Requester. + +## Usage +Generally, the CSR will be sent from a Pod's Certificate Requester InitContainer, which then needs to be approved by an administrator using `kubectl certificate approve `. + +After that the Certificate Requester saves the certificate and key under `/home/certrequester/certs/cert.pem` and `/home/certrequester/certs/key.pem`. + +The application can use the certificate by having a shared volume mounted under `/home/certrequester/certs` in the Certificate Requester container and under a required path in its service container. + +Each application needs to perform mutual TLS authentication with the [EAA](openness-eaa.md). To achieve that, the application should trust the CA certificate `root.pem` that is stored in the `ca-certrequester` Kubernetes Secret. + +### Deployment +In order to use the Certificate Requester, the following Kubernetes entities need to be created: + +1. RBAC Service Account and Cluster Role Binding to a `csr-requester` Cluster Role. + + ```yml + --- + apiVersion: v1 + kind: ServiceAccount + metadata: + name: service-acc + + --- + kind: ClusterRoleBinding + apiVersion: rbac.authorization.k8s.io/v1 + metadata: + name: service-csr-requester + roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: csr-requester + subjects: + - kind: ServiceAccount + name: service-acc + namespace: default + ``` + +2. JSON CSR config: + - *CSR.Name*: Kubernetes CSR name + - *CSR.Subject*: https://golang.org/pkg/crypto/x509/pkix/#Name + - *CSR.DNSSANs*: A list of DNS SANs + - *CSR.IPSANs*: A list of IP SANs + - *CSR.KeyUsages*: Specifies valid usage contexts for keys, list of elements of type https://godoc.org/k8s.io/api/certificates/v1#KeyUsage + - *Signer*: Specifies the Kubernetes Signer that will handle the CSR. The OpenNESS Certificate Signer name should be used: "*openness.org/certsigner*" + - *WaitTimeout*: Specifies how much time the CSR can wait until it's approved before deletion. 
+ + Sample config: + ```yml + --- + apiVersion: v1 + kind: ConfigMap + metadata: + name: service-csr-config + data: + certrequest.json: | + { + "CSR": { + "Name": "service", + "Subject": { + "CommonName": "ExampleNamespace:ExampleProducerAppID" + }, + "DNSSANs": [], + "IPSANs": [], + "KeyUsages": [ + "digital signature", "key encipherment", "client auth" + ] + }, + "Signer": "openness.org/certsigner", + "WaitTimeout": "5m" + } + ``` + +3. Sample deployment: + + ```yml + --- + apiVersion: apps/v1 + kind: Deployment + metadata: + name: service + labels: + app: service + spec: + replicas: 1 + selector: + matchLabels: + app: service + template: + metadata: + labels: + app: service + spec: + securityContext: + runAsUser: 1000 + runAsGroup: 3000 + serviceAccountName: service-acc + initContainers: + - name: alpine + image: alpine:3.12.0 + command: ["/bin/sh"] + args: ["-c", "cp /root/ca-certrequester/cert.pem /root/certs/root.pem"] + imagePullPolicy: IfNotPresent + securityContext: + runAsUser: 0 + runAsGroup: 0 + resources: + requests: + cpu: "0.1" + limits: + cpu: "0.1" + memory: "128Mi" + volumeMounts: + - name: ca-certrequester + mountPath: /root/ca-certrequester + - name: certs + mountPath: /root/certs + - name: certrequester + image: certrequester:1.0 + args: ["--cfg", "/home/certrequester/config/certrequest.json"] + imagePullPolicy: Never + resources: + requests: + cpu: "0.1" + limits: + cpu: "0.1" + memory: "128Mi" + volumeMounts: + - name: config + mountPath: /home/certrequester/config/ + - name: certs + mountPath: /home/certrequester/certs/ + containers: + - name: service + image: service:1.0 + imagePullPolicy: Never + volumeMounts: + - name: certs + mountPath: /home/sample/certs/ + ports: + - containerPort: 443 + volumes: + - name: config + configMap: + name: service-csr-config + - name: ca-certrequester + secret: + secretName: ca-certrequester + - name: certs + emptyDir: {} + ``` + After applying such deployment we can check the Pod and CSR status: + + ```bash + $ kubectl get pods + NAME READY STATUS RESTARTS AGE + service-7b6b4c7bdf-4xv6g 0/1 Init:1/2 0 9s + $ kubectl get csr + NAME AGE SIGNERNAME REQUESTOR CONDITION + service 11s openness.org/certsigner system:serviceaccount:default:service Pending + ``` + + To approve the CSR and start the Service: + + ``` + $ kubectl certificate approve service + certificatesigningrequest.certificates.k8s.io/service approved + $ kubectl get csr + NAME AGE SIGNERNAME REQUESTOR CONDITION + service 82m openness.org/certsigner system:serviceaccount:default:service Approved,Issued + $ kubectl get pods + NAME READY STATUS RESTARTS AGE + service-7b6b4c7bdf-4xv6g 1/1 Running 0 54s + ``` + +4. Cleanup can be perfomed by: + - deleting the entities defined in a YAML file in the previous step: `kubectl delete -f ` + - deleting the CSR: `kubectl delete csr service` diff --git a/doc/applications-onboard/openness-eaa.md b/doc/applications-onboard/openness-eaa.md index 93042f82..a97761f6 100644 --- a/doc/applications-onboard/openness-eaa.md +++ b/doc/applications-onboard/openness-eaa.md @@ -5,15 +5,15 @@ Copyright (c) 2020 Intel Corporation # Edge Application Agent (EAA) - [Edge Application APIs](#edge-application-apis) -- [Edge Application Authentication APIs](#edge-application-authentication-apis) +- [Edge Application Authentication](#edge-application-authentication) #### Edge Application API support -Before looking at the APIs that are exposed to the Edge Applications, let's have a look at two types of applications that can be deployed on the Edge Node. 
-- **Producer Application**: OpenNESS Producer application is an edge compute application that provides services to other applications running on the edge compute platform. E.g. Location Services, Mapping Services, Transcoding Services, etc. -- **Consumer Application**: OpenNESS Consumer application is an edge compute application that serves end users traffic directly. E.g. CDN App, Augmented Reality App, VR Application, Infotainment Application, etc. Pre-existing cloud applications that do not intend to call the EAA APIs but would like to serve the users (without any changes to the implementation) on the edge also fall into this category. +Before looking at the APIs that are exposed to the Edge Applications, let's have a look at two types of applications that can be deployed on the Edge Node. +- **Producer Application**: OpenNESS Producer application is an edge compute application that provides services to other applications running on the edge compute platform. E.g. Location Services, Mapping Services, Transcoding Services, etc. +- **Consumer Application**: OpenNESS Consumer application is an edge compute application that serves end users traffic directly. E.g. CDN App, Augmented Reality App, VR Application, Infotainment Application, etc. Pre-existing cloud applications that do not intend to call the EAA APIs but would like to serve the users (without any changes to the implementation) on the edge also fall into this category. -#### EAA component design +#### EAA component design ![Edge Application Agent component design](eaa-images/eaa-comp.png) @@ -29,13 +29,13 @@ API endpoint for edge applications is implemented in the EAA (Edge Application A | **Edge Service list subscription** | This API endpoint allows a Consumer Application to get the list of Producer Application services it has availed of\. | CDN Application can call this API to check if it has subscribed to Location and Transcoding services\. | ### Edge Application APIs -Edge Application APIs are implemented by the EAA. Edge Application APIs are important APIs for Edge application developers. EAA APIs are implemented as HTTPS REST. There are two types of use cases here. -1. **Porting of existing Public/Private Cloud application to the edge compute based on OpenNESS**: This is the scenario where a customer wants to run existing apps in public cloud on OpenNESS edge without calling any APIs or changing code. In this case, the only requirement is for an Application image (VM/Container) should be available to be deployed using OpenNESS Kubernetes Control plane. -3. **Native Edge compute Application calling EAA APIs**: This is the scenario where a customer wants to develop Edge compute applications that take advantage of the Edge compute services resulting in more tactile application that responds to the changing user, network or resource scenarios. +Edge Application APIs are implemented by the EAA. Edge Application APIs are important APIs for Edge application developers. EAA APIs are implemented as HTTPS REST. There are two types of use cases here. +1. **Porting of existing Public/Private Cloud application to the edge compute based on OpenNESS**: This is the scenario where a customer wants to run existing apps in public cloud on OpenNESS edge without calling any APIs or changing code. In this case, the only requirement is for an Application image (VM/Container) should be available to be deployed using OpenNESS Kubernetes Control plane. +3. 
**Native Edge compute Application calling EAA APIs**: This is the scenario where a customer wants to develop Edge compute applications that take advantage of the Edge compute services resulting in more tactile application that responds to the changing user, network or resource scenarios. -OpenNESS supports deployment of both types of applications mentioned above. The Edge Application Agent is a service that runs on the Edge Node and operates as a discovery service and basic message bus between applications via pubsub. The connectivity and discoverability of applications by one another is governed by an entitlement system and is controlled by policies set with the OpenNESS Controller. The entitlement system is still in its infancy, however, and currently allows all applications on the executing Edge Node to discover one another as well as publish and subscribe to all notifications. The Figure below provides the sequence diagram of the supported APIs for the application +OpenNESS supports deployment of both types of applications mentioned above. The Edge Application Agent is a service that runs on the Edge Node and operates as a discovery service and basic message bus between applications via pubsub. The connectivity and discoverability of applications by one another is governed by an entitlement system and is controlled by policies set with the OpenNESS Controller. The entitlement system is still in its infancy, however, and currently allows all applications on the executing Edge Node to discover one another as well as publish and subscribe to all notifications. The Figure below provides the sequence diagram of the supported APIs for the application -More details about the APIs can be found here [Edge Application APIs](https://www.openness.org/api-documentation/?api=eaa) +More details about the APIs can be found here [Edge Application APIs](https://www.openness.org/api-documentation/?api=eaa) ![Edge Application services APIs](eaa-images/eaa_services.svg) @@ -45,13 +45,8 @@ More details about the APIs can be found here [Edge Application APIs](https://ww _Figure - Edge Application API Sequence Diagram: Service, Subscription and Notification_ -### Edge Application Authentication APIs -OpenNESS supports authentication of Edge compute apps that intend to call EAA APIs. Applications are authenticated by the Edge Node microservice issuing the requesting application a valid TLS certificate after validating the identity of the application. It should be noted that in the OpenNESS solution, the Application can only be provisioned by the OpenNESS controller. There are two categories of Applications as discussed above and here is the implication for the authentication. -1. **Existing pubic cloud application ported to OpenNESS**: In this scenario, a customer may want to run existing apps in the public cloud on OpenNESS edge without calling any APIs or changing code. In this case the Application cannot call any EAA APIs and consume services on the edge compute. It just services the end-user traffic. So the Application will not call authentication API to acquire a TLS certificate. -2. **Native Edge compute Application calling EAA APIs**: In this scenario, a customer may want to develop Edge compute applications that take advantage of the Edge compute services resulting in more tactile application that responds to the changing user, network or resource scenarios. Such applications should first call authentication APIs and acquire TLS certificate. 
Authentication of Applications that provide services to other Applications on the edge compute (Producer Apps) is mandatory. +### Edge Application Authentication -For applications executing on the Local breakout the Authentication is not applicable since its not provisioned by the OpenNESS controller. +Connection to the EAA can be established after performing mutual TLS authentication. To achieve that the application needs to generate its certificate using Certificate Signer and should trust the CA certificate `root.pem` that is stored in `ca-certrequester` Kubernetes Secret. -Authentication APIs are implemented as HTTP REST APIs. - -More details about the APIs can be found here [Application Authentication APIs](https://www.openness.org/api-documentation/?api=auth) \ No newline at end of file +The details can be found on the [Certificate Signer page](openness-certsigner.md). diff --git a/doc/applications-onboard/openness-edgedns.md b/doc/applications-onboard/openness-edgedns.md index 9fb44ff0..e14581d4 100644 --- a/doc/applications-onboard/openness-edgedns.md +++ b/doc/applications-onboard/openness-edgedns.md @@ -8,8 +8,6 @@ Copyright (c) 2019 Intel Corporation - [Usage](#usage) - [Network edge usage](#network-edge-usage) - - ## Overview The edge platform must provide access to DNS. The edge platform receives the application DNS rules from the controller. This is specified in the ETSI Multi-access Edge Computing (MEC). From a 5G edge deployment perspective, the Primary DNS (priDns) and Secondary DNS (secDns) needs to be configured which is going to be consumed by the SMF. @@ -22,7 +20,7 @@ _Figure - DNS support on OpenNESS overview_ >**NOTE**: Secondary DNS service is out of the scope of OpenNESS and is only used for DNS forwarding. -EdgeDNS is a functionality to provide the Domain Name System (DNS) Server with a possibility to be controlled by its CLI. EdgeDNS Server listens for requests from a client's CLI. After receiving a CLI request, a function handling the request adds or removes the RULE inside of the EdgeDNS database. EdgeDNS supports only type A records for Set/Delete Fully Qualified Domain Names (FQDN) and the current forwarder is set to 8.8.8.8 (set in docker-compose.yml and openness.yaml). Network Edge mode provides EdgeDNS as a service, which is an application running in a K8s pod on each worker node of the OpenNESS K8s cluster. It allows users to add and remove DNS entries of the worker host directly from K8s control plane node using kubectl plugin. +EdgeDNS is a functionality to provide the Domain Name System (DNS) Server with a possibility to be controlled by its CLI. EdgeDNS Server listens for requests from a client's CLI. After receiving a CLI request, a function handling the request adds or removes the RULE inside of the EdgeDNS database. EdgeDNS supports only type A records for Set/Delete Fully Qualified Domain Names (FQDN) and the current forwarder is set to 8.8.8.8 (set in docker-compose.yml and openness.yaml). Network Edge mode provides EdgeDNS as a service, which is an application running in a K8s pod on each node of the OpenNESS K8s cluster. It allows users to add and remove DNS entries of the node host directly from K8s control plane using kubectl plugin. ## Usage @@ -55,17 +53,15 @@ In Network Edge, the EdgeDNS CLI is used as a Kubernetes\* plugin. 
The following `kubectl edgedns del ` to delete DNS entry of node ``` ->**NOTE**: `node_hostname` must be a valid worker node name; it can be found using `kubectl get nodes` +>**NOTE**: `node_hostname` must be a valid node name; it can be found using `kubectl get nodes` >**NOTE**: `JSON filename` is a path to the file containing record_type, fqdn, and addresses in case of setting operation. JSON file without record_type also is valid, and as default value "A" is set. - - -To set the DNS entry on the worker1 host from the `set.json` file, users must provide the following command: +To set the DNS entry on the node1 host from the `set.json` file, users must provide the following command: -`kubectl edgedns set worker set.json` +`kubectl edgedns set node1 set.json` The following command removes this entry: -`kubectl edgedns del worker del.json` +`kubectl edgedns del node1 del.json` diff --git a/doc/applications-onboard/openness-interface-service.md b/doc/applications-onboard/openness-interface-service.md index 31613089..df049f2f 100644 --- a/doc/applications-onboard/openness-interface-service.md +++ b/doc/applications-onboard/openness-interface-service.md @@ -12,15 +12,14 @@ Copyright (c) 2019-2020 Intel Corporation - [Userspace (DPDK) bridge](#userspace-dpdk-bridge) - [HugePages (DPDK)](#hugepages-dpdk) - [Examples](#examples) - - [Getting information about node interfaces](#getting-information-about-node-interfaces) - - [Attaching kernel interfaces](#attaching-kernel-interfaces) - - [Attaching DPDK interfaces](#attaching-dpdk-interfaces) - - [Detaching interfaces](#detaching-interfaces) + - [Getting information about node interfaces](#getting-information-about-node-interfaces) + - [Attaching kernel interfaces](#attaching-kernel-interfaces) + - [Attaching DPDK interfaces](#attaching-dpdk-interfaces) + - [Detaching interfaces](#detaching-interfaces) - ## Overview -Interface service is an application running in the Kubernetes\* pod on each worker node of the OpenNESS Kubernetes cluster. It allows users to attach additional network interfaces of the worker host to the provided OVS bridge, enabling external traffic scenarios for applications deployed in the Kubernetes\* pods. Services on each worker can be controlled from the control plane node using kubectl plugin. +Interface service is an application running in the Kubernetes\* pod on each node of the OpenNESS Kubernetes cluster. It allows users to attach additional network interfaces of the node to the provided OVS bridge, enabling external traffic scenarios for applications deployed in the Kubernetes\* pods. Services on each node can be controlled from the control plane using kubectl plugin. Interface service can attach both kernel and user space (DPDK) network interfaces to the appropriate OVS bridges. @@ -60,7 +59,7 @@ Update the physical Ethernet interface with an IP from the `192.168.1.0/24` subn * Use `kubectl interfaceservice attach ` to attach interfaces to the OVS bridge `` using a specified `driver`. * Use `kubectl interfaceservice detach ` to detach interfaces from `OVS br_local` bridge. ->**NOTE**: `node_hostname` must be a valid worker node name and can be found using `kubectl get nodes`. +>**NOTE**: `node_hostname` must be a valid node name and can be found using `kubectl get nodes`. 
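Before issuing an attach request, it can help to confirm the PCI address of the target interface directly on the node. A minimal sketch using standard Linux tools is shown below; the interface name `eth1` is only an example:

```shell
# Run on the node itself: list NICs with their PCI addresses, or query a known
# interface name (eth1 is illustrative).
lspci | grep -i ethernet
ethtool -i eth1 | grep bus-info      # prints e.g. "bus-info: 0000:07:00.2"
```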
>**NOTE**: Invalid/non-existent PCI addresses passed to attach/detach requests will be ignored @@ -129,7 +128,7 @@ kubeovn_dpdk_hugepages: "2Gi" # This is overall amount of hugepags available to ### Getting information about node interfaces ```shell -[root@master1 ~] kubectl interfaceservice get worker1 +[root@controlplane1 ~] kubectl interfaceservice get node1 Kernel interfaces: 0000:02:00.0 | 00:1e:67:d2:f2:06 | detached @@ -148,7 +147,7 @@ DPDK interfaces: ### Attaching kernel interfaces ```shell -[root@master1 ~] kubectl interfaceservice attach worker1 0000:07:00.2,0000:99:00.9,0000:07:00.3,00:123:123 br-local kernel +[root@controlplane1 ~] kubectl interfaceservice attach node1 0000:07:00.2,0000:99:00.9,0000:07:00.3,00:123:123 br-local kernel Invalid PCI address: 00:123:123. Skipping... Interface: 0000:99:00.9 not found. Skipping... Interface: 0000:07:00.2 successfully attached @@ -158,26 +157,26 @@ Interface: 0000:07:00.3 successfully attached Attaching to kernel-spaced bridges can be shortened to: ```shell -kubectl interfaceservice attach worker1 0000:07:00.2 +kubectl interfaceservice attach node1 0000:07:00.2 ``` or: ```shell -kubectl interfaceservice attach worker1 0000:07:00.2 bridge-name +kubectl interfaceservice attach node1 0000:07:00.2 bridge-name ``` ### Attaching DPDK interfaces >**NOTE**: The device to be attached to DPDK bridge should initially use kernel-space driver and should be not be attached to any bridges. ```shell -[root@master1 ~] kubectl interfaceservice attach worker1 0000:07:00.2,0000:07:00.3 br-userspace dpdk +[root@controlplane1 ~] kubectl interfaceservice attach node1 0000:07:00.2,0000:07:00.3 br-userspace dpdk Interface: 0000:07:00.2 successfully attached Interface: 0000:07:00.3 successfully attached ``` ### Detaching interfaces ```shell -[root@master1 ~] kubectl interfaceservice detach worker1 0000:07:00.2,0000:07:00.3 +[root@controlplane1 ~] kubectl interfaceservice detach node1 0000:07:00.2,0000:07:00.3 Interface: 0000:07:00.2 successfully detached Interface: 0000:07:00.3 successfully detached ``` diff --git a/doc/applications-onboard/openness-network-edge-vm-support.md b/doc/applications-onboard/openness-network-edge-vm-support.md index 2f53e494..852b8b55 100644 --- a/doc/applications-onboard/openness-network-edge-vm-support.md +++ b/doc/applications-onboard/openness-network-edge-vm-support.md @@ -132,14 +132,14 @@ The KubeVirt role responsible for bringing up KubeVirt components is enabled by ## VM deployment Provided below are sample deployment instructions for different types of VMs. -Please use sample `.yaml` specification files provided in the OpenNESS Edge Controller directory, [edgenode/edgecontroller/kubevirt/examples/](https://github.com/open-ness/edgenode/edgecontroller/tree/master/kubevirt/examples), to deploy the workloads. Some of the files require modification to suit the environment they will be deployed in. Specific instructions on modifications are provided in the following steps: +Please use sample `.yaml` specification files provided in the OpenNESS Edge Controller directory, [edgenode/edgecontroller/kubevirt/examples/](https://github.com/open-ness/edgenode/tree/master/edgecontroller/kubevirt/examples), to deploy the workloads. Some of the files require modification to suit the environment they will be deployed in. Specific instructions on modifications are provided in the following steps: ### Stateless VM deployment To deploy a sample stateless VM with containerDisk storage: 1. 
Deploy the VM: ```shell - [root@controller ~]# kubectl create -f /opt/edgenode/edgecontroller/kubevirt/examples/statelessVM.yaml + [root@controller ~]# kubectl create -f /opt/openness/edgenode/edgecontroller/kubevirt/examples/statelessVM.yaml ``` 2. Start the VM: ```shell @@ -150,7 +150,7 @@ To deploy a sample stateless VM with containerDisk storage: [root@controller ~]# kubectl get pods | grep launcher [root@controller ~]# kubectl get vms ``` - 4. Execute into the VM (pass/login cirros/gocubsgo): + 4. Execute into the VM (login/pass cirros/gocubsgo): ```shell [root@controller ~]# kubectl virt console cirros-stateless-vm ``` @@ -164,11 +164,13 @@ To deploy a sample stateful VM with persistent storage and additionally use a Ge >**NOTE**: Each stateful VM with a new Persistent Volume Claim (PVC) requires a new Persistent Volume (PV) to be created. See more in the [limitations section](#limitations). Also, CDI needs two PVs when creating a PVC and loading a VM image from the qcow2 file: one PV for the actual PVC to be created and one PV to translate the qcow2 image to raw input. +>**NOTE**: An issue appears when the CDI upload pod is deployed with Kube-OVN CNI, the deployed pods readiness probe fails and pod is never in ready state. It is advised that the user uses other CNI such as Calico CNI when using CDI with OpenNESS. + 1. Create a persistent volume for the VM: - - Edit the sample yaml with the hostname of the worker node: + - Edit the sample yaml with the hostname of the node: ```yaml - # /opt/edgenode/edgecontroller/kubevirt/examples/persistentLocalVolume.yaml + # /opt/openness/edgenode/edgecontroller/kubevirt/examples/persistentLocalVolume.yaml # For both kv-pv0 and kv-pv1, enter the correct hostname: - key: kubernetes.io/hostname operator: In @@ -177,7 +179,7 @@ To deploy a sample stateful VM with persistent storage and additionally use a Ge ``` - Create the PV: ```shell - [root@controller ~]# kubectl create -f /opt/edgenode/edgecontroller/kubevirt/examples/persistentLocalVolume.yaml + [root@controller ~]# kubectl create -f /opt/openness/edgenode/edgecontroller/kubevirt/examples/persistentLocalVolume.yaml ``` - Check that PV is created: ```shell @@ -188,7 +190,7 @@ To deploy a sample stateful VM with persistent storage and additionally use a Ge ``` 2. Download the Generic Cloud qcow image for CentOS 7: ```shell - [root@controller ~]# wget https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-1907.qcow2 + [root@controller ~]# wget https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-2003.qcow2 ``` 3. Get the address of the CDI upload proxy: ```shell @@ -197,7 +199,7 @@ To deploy a sample stateful VM with persistent storage and additionally use a Ge 4. Create and upload the image to PVC via CDI: >**NOTE**: There is currently a limitation when using the CDI together with CMK (Intel's CPU Manager for Kubernetes). The CDI upload pod will fail to deploy on the node due to K8s node taint provided by CMK. For a workaround, see the [limitations section](#cdi-image-upload-fails-when-cmk-is-enabled). 
```shell - [root@controller ~]# kubectl virt image-upload dv centos-dv --image-path=/root/kubevirt/CentOS-7-x86_64-GenericCloud-1907.qcow2 --insecure --size=15Gi --storage-class=local-storage --uploadproxy-url=https://:443 + [root@controller ~]# kubectl virt image-upload dv centos-dv --image-path=/root/kubevirt/CentOS-7-x86_64-GenericCloud-2003.qcow2 --insecure --size=15Gi --storage-class=local-storage --uploadproxy-url=https://:443 DataVolume default/centos-dv created Waiting for PVC centos-dv upload pod to be ready... @@ -208,7 +210,7 @@ To deploy a sample stateful VM with persistent storage and additionally use a Ge Uploading data completed successfully, waiting for processing to complete, you can hit ctrl-c without interrupting the progress Processing completed successfully - Uploading /root/kubevirt/CentOS-7-x86_64-GenericCloud-1907.qcow2 completed successfully + Uploading /root/kubevirt/CentOS-7-x86_64-GenericCloud-2003.qcow2 completed successfully ``` 5. Check that PV, DV, and PVC are correctly created: ```shell @@ -230,7 +232,7 @@ To deploy a sample stateful VM with persistent storage and additionally use a Ge ``` 8. Edit the .yaml file for the VM with the updated public key: ```yaml - # /opt/edgenode/edgecontroller/kubevirt/examples/cloudGenericVM.yaml + # /opt/openness/edgenode/edgecontroller/kubevirt/examples/cloudGenericVM.yaml users: - name: root password: root @@ -240,7 +242,7 @@ To deploy a sample stateful VM with persistent storage and additionally use a Ge ``` 9. Deploy the VM: ```shell - [root@controller ~]# kubectl create -f /opt/edgenode/edgecontroller/kubevirt/examples/cloudGenericVM.yaml + [root@controller ~]# kubectl create -f /opt/openness/edgenode/edgecontroller/kubevirt/examples/cloudGenericVM.yaml ``` 10. Start the VM: ```shell @@ -254,6 +256,8 @@ To deploy a sample stateful VM with persistent storage and additionally use a Ge ```shell [root@controller ~]# kubectl get vmi ``` +>**NOTE**: The user should verify that there is no K8s network policy that would block the traffic to the VM (ie. `block-all-ingress policy`). If such policy exists it should be either removed or a new policy should be created to allow traffic. To check current network policies run: `kubectl get networkpolicy -A`. See K8s [documentation for more information on network policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/). + 13. SSH into the VM: ```shell [root@controller ~]# ssh @@ -264,7 +268,7 @@ To deploy a sample stateful VM with persistent storage and additionally use a Ge To deploy a VM requesting SRIOV VF of NIC: 1. Bind the SRIOV interface to the VFIO driver on Edge Node: ```shell - [root@worker ~]# /opt/dpdk-18.11.6/usertools/dpdk-devbind.py --bind=vfio-pci + [root@node ~]# /opt/openness/dpdk-18.11.6/usertools/dpdk-devbind.py --bind=vfio-pci ``` 2. Delete/Restart SRIOV device plugin on the node: ```shell @@ -272,7 +276,7 @@ To deploy a VM requesting SRIOV VF of NIC: ``` 3. Check that the SRIOV VF for VM is available as an allocatable resource for DP (wait a few seconds after restart): ``` - [root@controller ~]# kubectl get node -o json | jq '.status.allocatable' + [root@controller ~]# kubectl get node -o json | jq '.status.allocatable' { "cpu": "79", "devices.kubevirt.io/kvm": "110", @@ -290,7 +294,7 @@ To deploy a VM requesting SRIOV VF of NIC: ``` 4. 
Deploy the VM requesting the SRIOV device (if a smaller amount is available on the platform, adjust the number of HugePages required in the .yaml file): ```shell - [root@controller ~]# kubectl create -f /opt/edgenode/edgecontroller/kubevirt/examples/sriovVM.yaml + [root@controller ~]# kubectl create -f /opt/openness/edgenode/edgecontroller/kubevirt/examples/sriovVM.yaml ``` 5. Start the VM: ```shell @@ -348,7 +352,7 @@ Complete the following steps to create a snapshot: 1. Log into the Edge Node 2. Go to the virtual disk directory for the previously created VM: ```shell - [root@worker ~]# cd /var/vd/vol0/ && ls + [root@node ~]# cd /var/vd/vol0/ && ls ``` 3. Create a qcow2 snapshot image out of the virtual disk present in the directory (`disk.img`): ```shell @@ -382,7 +386,7 @@ The following script is an example of how to perform the above steps: ```shell #!/bin/bash -kubectl virt image-upload dv centos-dv --image-path=/root/CentOS-7-x86_64-GenericCloud-1907.qcow2 --insecure --size=15Gi --storage-class=local-storage --uploadproxy-url=https://:443 & +kubectl virt image-upload dv centos-dv --image-path=/root/CentOS-7-x86_64-GenericCloud-2003.qcow2 --insecure --size=15Gi --storage-class=local-storage --uploadproxy-url=https://:443 & sleep 5 @@ -396,7 +400,7 @@ kubectl apply -f cdiUploadCentosDvToleration.yaml sleep 5 -kubectl create -f /opt/edgenode/edgecontroller/kubevirt/examples/persistentLocalVolume.yaml +kubectl create -f /opt/openness/edgenode/edgecontroller/kubevirt/examples/persistentLocalVolume.yaml ``` ## Useful Commands and Troubleshooting @@ -427,9 +431,9 @@ Check that the IP address of the `cdi-upload-proxy` is correct and that the Netw ``` 2. Cannot SSH to stateful VM with Cloud Generic Image due to the public key being denied. -Confirm that the public key provided in `/opt/edgenode/edgecontroller/kubevirt/examples/cloudGenericVM.yaml` is valid and in a correct format. Example of a correct format: +Confirm that the public key provided in `/opt/openness/edgenode/edgecontroller/kubevirt/examples/cloudGenericVM.yaml` is valid and in a correct format. Example of a correct format: ```yaml - # /opt/edgenode/edgecontroller/kubevirt/examples/cloudGenericVM.yaml + # /opt/openness/edgenode/edgecontroller/kubevirt/examples/cloudGenericVM.yaml users: - name: root password: root diff --git a/doc/applications-onboard/using-openness-cnca.md b/doc/applications-onboard/using-openness-cnca.md index 1b9b7efb..68b31d5b 100644 --- a/doc/applications-onboard/using-openness-cnca.md +++ b/doc/applications-onboard/using-openness-cnca.md @@ -4,6 +4,7 @@ Copyright (c) 2019-2020 Intel Corporation ``` # Core Network Configuration Agent (CNCA) + - [4G/LTE Core Configuration using CNCA](#4glte-core-configuration-using-cnca) - [Configuring in Network Edge mode](#configuring-in-network-edge-mode) - [Sample YAML LTE CUPS userplane configuration](#sample-yaml-lte-cups-userplane-configuration) @@ -40,14 +41,16 @@ Copyright (c) 2019-2020 Intel Corporation For Network Edge mode, CNCA provides a kubectl plugin to configure the 4G/LTE Core network. Kubernetes\* adopts plugins concepts to extend its functionality. The `kube-cnca` plugin executes CNCA related functions within the Kubernetes eco-system. The plugin performs remote callouts against LTE Control and User Plane Separation (LTE CUPS) Operation Administration and Maintenance (OAM) agent. Available management with `kube-cnca` against LTE CUPS OAM agent are: + 1. Creation of LTE CUPS userplanes 2. Deletion of LTE CUPS userplanes 3. 
Updating (patching) LTE CUPS userplanes -The `kube-cnca` plugin is installed automatically on the control plane node during the installation phase of the [OpenNESS Experience Kit](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/openness-experience-kits.md). +The `kube-cnca` plugin is installed automatically on the control plane during the installation phase of the [OpenNESS Experience Kit](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/openness-experience-kits.md). In the following sections, a detailed explanation with examples is provided about the CNCA management. Creation of the LTE CUPS userplane is performed based on the configuration provided by the given YAML file. The YAML configuration should follow the provided sample YAML in [Sample YAML LTE CUPS userplane configuration](#sample-yaml-lte-cups-userplane-configuration) section. Use the `apply` command to post a userplane creation request onto Application Function (AF): + ```shell kubectl cnca apply -f ``` @@ -57,21 +60,25 @@ When the userplane is created successfully, the `apply` command returns the user >**NOTE**: All active userplanes can be retrieved from AF through the command `kubectl cnca get userplanes`. To retrieve an existing userplane with a known userplane ID, use the following command: + ```shell kubectl cnca get userplane ``` To retrieve all active userplanes at LTE CUPS OAM agent, use the following command: + ```shell kubectl cnca get userplanes ``` To modify an active userplane, use the `patch` command and provide a YAML file with the subset of the configuration to be modified: + ```shell kubectl cnca patch -f ``` To delete an active userplane, use the `delete` command: + ```shell kubectl cnca delete userplane ``` @@ -128,19 +135,18 @@ This role brings up the 5g OpenNESS setup in the loopback mode for testing and d - A clone of the ido-epcforedge repo from GitHub\* - Builds AF, NEF, OAM, and CNTF microservices - - Generates certificate files at the location **/etc/openness/certs/ngc** on the controller. + - Generates certificate files at the location **/opt/openness/certs/ngc** on the controller. - Creates ConfigMap **certs-cm** from the above directory. - Updates the configuration files of AF and NEF with the service names of NEF and CNTF respectively. - - Copies the OAM, NEF, CNTF and AF configuration to the location **/etc/openness/configs/ngc** on the controller. - - Creates ConfigMap **oauth2-cm** from the **/etc/openness/configs/ngc/oauth2.json** configuration file. - - Creates template of ConfigMaps **af-cm**,**nef-cm**,**cntf-cm**,**oam-cm** from the respective configuration json files present in the **/etc/openness/configs/ngc** directory. + - Copies the OAM, NEF, CNTF and AF configuration to the location **/opt/openness/configs/ngc** on the controller. + - Creates ConfigMap **oauth2-cm** from the **/opt/openness/configs/ngc/oauth2.json** configuration file. + - Creates template of ConfigMaps **af-cm**,**nef-cm**,**cntf-cm**,**oam-cm** from the respective configuration json files present in the **/opt/openness/configs/ngc** directory. - Copies these templates to the respective template folders of the helm charts for AF, NEF, OAM, and CNTF. - Creates docker images for AF, NEF, OAM, and CNTF microservices and adds them into the Docker\* registry at **\**. 
- Installs the helm charts for AF, NEF, OAM, and CNTF using the images from the Docker registry - - Copies the helm charts for AF, NEF, OAM, and CNTF into the location **/opt/openness-helm-charts/** - + - Copies the helm charts for AF, NEF, OAM, and CNTF into the location **/opt/openness/helm-charts/** -- On successful start of AF, NEF, OAM, and CNTF PODs. Status of PODs, Deployments, ConfigMaps, Services, images, and helm charts can be verified using the following commands: +- On successful AF, NEF, OAM, and CNTF PODs should start. Status of PODs, Deployments, ConfigMaps, Services, images, and helm charts can be verified using the following commands: ```shell - kubectl get pods -n ngc @@ -210,12 +216,12 @@ If the AF configuration needs to be updated (as per your deployment configuratio 2. Update the AF POD using helm: - - Open the AF configmap template file `/opt/openness-helm-charts/af/templates/configmapAF.yaml` and modify the parameters. + - Open the AF configmap template file `/opt/openness/helm-charts/af/templates/configmapAF.yaml` and modify the parameters. - Save and exit. - Now update the AF POD using the following command: ```shell - helm upgrade af /opt/openness-helm-charts/af --set image.repository=:5000/af-image + helm upgrade af /opt/openness/helm-charts/af --set image.repository=:5000/af-image Release "af" has been upgraded. Happy Helming! NAME: af LAST DEPLOYED: Fri Jul 24 12:29:44 2020 @@ -260,12 +266,12 @@ If the NEF configuration needs to be updated (as per your deployment configurati 2. Update the NEF POD using helm: - - Open the NEF configmap template file `/opt/openness-helm-charts/nef/templates/configmapNEF.yaml` and modify the parameters. + - Open the NEF configmap template file `/opt/openness/helm-charts/nef/templates/configmapNEF.yaml` and modify the parameters. - Save and exit. - Now update the NEF POD using the following command: ```shell - helm upgrade nef /opt/openness-helm-charts/nef --set image.repository=:5000/nef-image + helm upgrade nef /opt/openness/helm-charts/nef --set image.repository=:5000/nef-image Release "nef" has been upgraded. Happy Helming! NAME: nef LAST DEPLOYED: Fri Jul 24 12:37:20 2020 @@ -297,19 +303,22 @@ If the NEF configuration needs to be updated (as per your deployment configurati Modifying the OAM configuration. Follow the same steps as above (as done for AF) with the following differences: -- Open the file `/opt/openness-helm-charts/oam/templates/configmapOAM.yaml` and modify the parameters. +- Open the file `/opt/openness/helm-charts/oam/templates/configmapOAM.yaml` and modify the parameters. - Save and exit. - Now restart the OAM POD using the command: + +```shell +helm upgrade oam /opt/openness/helm-charts/oam --set image.repository=:5000/oam-image ``` -helm upgrade oam /opt/openness-helm-charts/oam --set image.repository=:5000/oam-image -``` + - A successful restart of the OAM with the updated config can be observed through OAM container logs. Run the following command to get OAM logs: `kubectl logs -f oam-659b5db5b5-l26q8 --namespace=ngc` -Modifying the oauth2 configuration. Complete the following steps: +Modifying the oauth2 configuration. Complete the following steps: -- Open the file `/etc/openness/configs/ngc/oauth2.json` and modify the parameters. +- Open the file `/opt/openness/configs/ngc/oauth2.json` and modify the parameters. - Save and exit. + 1. Delete the CNTF, AF, and NEF PODs using helm: ```shell @@ -320,21 +329,24 @@ Modifying the oauth2 configuration. 
Complete the following steps: helm uninstall cntf release "af" uninstalled ``` + 2. Update the ConfigMap associated with oauth2.json: ```shell - kubectl create configmap oauth2-cm --from-file /etc/openness/configs/ngc/oauth2.json -n ngc -o yaml --dry-run=client | kubectl replace -f - + kubectl create configmap oauth2-cm --from-file /opt/openness/configs/ngc/oauth2.json -n ngc -o yaml --dry-run=client | kubectl replace -f - ``` + 3. Restart NEF, CNTF, and AF PODs using the following commands: ```shell - helm install nef /opt/openness-helm-charts/nef --set image.repository=:5000/nef-image - helm install af /opt/openness-helm-charts/af --set image.repository=:5000/af-image - helm install cntf /opt/openness-helm-charts/cntf --set image.repository=:5000/cntf-image + helm install nef /opt/openness/helm-charts/nef --set image.repository=:5000/nef-image + helm install af /opt/openness/helm-charts/af --set image.repository=:5000/af-image + helm install cntf /opt/openness/helm-charts/cntf --set image.repository=:5000/cntf-image ``` -Modifying the certificates. Complete the following steps: -- Update the certificates present in the directory `/etc/openness/certs/ngc/`. +Modifying the certificates. Complete the following steps: + +- Update the certificates present in the directory `/opt/openness/certs/ngc/`. 1. Delete the CNTF, AF, NEF, and OAM PODs using helm: @@ -348,24 +360,28 @@ Modifying the certificates. Complete the following steps: helm uninstall oam release "oam" uninstalled ``` + 2. Update the ConfigMap associated with the certificates directory: ```shell - kubectl create configmap certs-cm --from-file /etc/openness/certs/ngc/ -n ngc -o yaml --dry-run=client | kubectl replace -f - + kubectl create configmap certs-cm --from-file /opt/openness/certs/ngc/ -n ngc -o yaml --dry-run=client | kubectl replace -f - ``` + 3. Check certs-cm present in available ConfigMaps list: ```shell kubectl get cm -n ngc ``` -3. Restart NEF, CNTF, AF, and OAM PODs using the following commands: + +4. Restart NEF, CNTF, AF, and OAM PODs using the following commands: ```shell - helm install nef /opt/openness-helm-charts/nef --set image.repository=:5000/nef-image - helm install af /opt/openness-helm-charts/af --set image.repository=:5000/af-image - helm install cntf /opt/openness-helm-charts/cntf --set image.repository=:5000/cntf-image - helm install oam /opt/openness-helm-charts/oam --set image.repository=:5000/oam-image + helm install nef /opt/openness/helm-charts/nef --set image.repository=:5000/nef-image + helm install af /opt/openness/helm-charts/af --set image.repository=:5000/af-image + helm install cntf /opt/openness/helm-charts/cntf --set image.repository=:5000/cntf-image + helm install oam /opt/openness/helm-charts/oam --set image.repository=:5000/oam-image ``` + ### Configuring in Network Edge mode For Network Edge mode, the CNCA provides a kubectl plugin to configure the 5G Core network. Kubernetes adopted plugin concepts to extend its functionality. The `kube-cnca` plugin executes CNCA related functions within the Kubernetes ecosystem. The plugin performs remote callouts against NGC OAM and AF microservice on the controller itself. 
@@ -380,15 +396,17 @@ The `kube-cnca` plugin is installed automatically on the control plane node duri Supported operations through `kube-cnca` plugin: - * Registration of edge service info for UPF with a 5G Core through OAM interface (co-located with Edge-Node) - * Un-registration of edge service info +- Registration of edge service info for UPF with a 5G Core through OAM interface (co-located with Edge-Node) +- Un-registration of edge service info To register the AF service through the NGC OAM function, run: + ```shell kubectl cnca register --dnai= --dnn= --tac= --priDns= --secDns= --upfIp= --snssai= ``` The following parameters MUST be provided to the command: + 1. Data Network Access Identifier (DNAI) 2. Data Network Name (DNN) 3. Primary DNS (priDns) @@ -399,6 +417,7 @@ The following parameters MUST be provided to the command: Upon successful registration, subscriptions can be instantiated over the NGC AF. The `af-service-id` is returned by the `register` command to be used in further correspondence with NGC OAM and AF functions. Un-registration of the AF service can be performed with the following command: + ```shell kubectl cnca unregister ``` @@ -407,34 +426,39 @@ kubectl cnca unregister Supported operations through `kube-cnca` plugin: - * Creation of traffic influence subscriptions through the AF microservice to steer application traffic towards edge-node - * Deletion of subscriptions - * Updating (patching) subscriptions - * get or get-all subscriptions +- Creation of traffic influence subscriptions through the AF microservice to steer application traffic towards edge-node +- Deletion of subscriptions +- Updating (patching) subscriptions +- get or get-all subscriptions Creation of the AF subscription is performed based on the configuration provided by the given YAML file. The YAML configuration should follow the provided sample YAML in the [Sample YAML NGC AF subscription configuration](#sample-yaml-ngc-af-subscription-configuration) section. Use the `apply` command to post a subscription creation request onto AF: + ```shell kubectl cnca apply -f ``` -When the subscription is successfully created, the `apply` command will return the subscription URL that includes a subscription identifier at the end of the string. Only this subscription identifier `` should be used in further correspondence with AF concerning this particular subscription. For example, https://localhost:8060/3gpp-traffic-influence/v1/1/subscriptions/11111 and subscription-id is 11111. It is the responsibility of the user to retain the `` as `kube-cnca` is a stateless function. +When the subscription is successfully created, the `apply` command will return the subscription URL that includes a subscription identifier at the end of the string. Only this subscription identifier `` should be used in further correspondence with AF concerning this particular subscription. For example, and subscription-id is 11111. It is the responsibility of the user to retain the `` as `kube-cnca` is a stateless function. 
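Because `kube-cnca` is stateless, it is easiest to capture the identifier at creation time rather than re-deriving it later. The sketch below assumes the subscription URL is printed on success (as in the `.../subscriptions/11111` example above); the YAML filename is illustrative:

```shell
# Assumes `kubectl cnca apply` prints the subscription URL on success;
# subscription.yml is an illustrative filename.
SUB_URL=$(kubectl cnca apply -f subscription.yml)
SUB_ID=${SUB_URL##*/}                            # keep the trailing path segment, e.g. 11111
echo "${SUB_ID}" >> retained-subscriptions.txt   # retain for the get/patch/delete calls below
```

The retained `subscription-id` is then passed to the commands described next.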
To retrieve an existing subscription with a known subscription ID, use the following command: + ```shell kubectl cnca get subscription ``` To retrieve all active subscriptions at AF, execute this command: + ```shell kubectl cnca get subscriptions ``` To modify an active subscription, use the `patch` command and provide a YAML file with the subset of the configuration to be modified: + ```shell kubectl cnca patch -f ``` To delete an active subscription, use the `delete` command: + ```shell kubectl cnca delete subscription ``` @@ -448,83 +472,86 @@ apiVersion: v1 kind: ngc policy: afServiceId: 'afService001' - afAppId: app001 - afTransId: '' + afAppId: 'afApp01' + afTransId: 'afTrans01' appReloInd: false - dnn: edgeLocation001 + dnn: 'edgeLocation001' snssai: sst: 0 - sd: default - anyUeInd: false + sd: 'default' gpsi: '' - ipv4Addr: 127.0.0.1 + ipv4Addr: '127.0.0.1' ipv6Addr: '' macAddr: '' requestTestNotification: true - websockNotifConfig: - websocketUri: '' - requestWebsocketUri: true trafficRoutes: - - dnai: edgeLocation001 + - dnai: 'edgeLocation001' routeInfo: ipv4Addr: '' ipv6Addr: '' - routeProfId: default + routeProfId: 'default' ``` #### Packet Flow Description operations with 5G Core (through AF interface) Supported operations through the `kube-cnca` plugin: - * Creation of packet flow description (PFD) transactions through the AF microservice to perform accurate detection of application traffic for UPF in 5G Core - * Deletion of transactions and applications within a transaction - * Updating (patching) transactions and applications within a transaction - * Get or get all transactions. - * Get a specific application within a transaction +- Creation of packet flow description (PFD) transactions through the AF microservice to perform accurate detection of application traffic for UPF in 5G Core +- Deletion of transactions and applications within a transaction +- Updating (patching) transactions and applications within a transaction +- Get or get all transactions. +- Get a specific application within a transaction Creation of the AF PFD transaction is performed based on the configuration provided by the given YAML file. The YAML configuration should follow the provided sample YAML in the [Sample YAML NGC AF transaction configuration](#sample-yaml-ngc-af-transaction-configuration) section. Use the `apply` command as below to post a PFD transaction creation request onto AF: + ```shell kubectl cnca pfd apply -f ``` -When the PFD transaction is successfully created, the `apply` command will return the transaction URL, which includes a transaction identifier at the end of the string. Only this transaction identifier `` should be used in further correspondence with AF concerning this particular transaction. For example, https://localhost:8050/af/v1/pfd/transactions/10000 and transaction-id is 10000. It is the responsibility of the user to retain the `` as `kube-cnca` is a stateless function. +When the PFD transaction is successfully created, the `apply` command will return the transaction URL, which includes a transaction identifier at the end of the string. Only this transaction identifier `` should be used in further correspondence with AF concerning this particular transaction. For example, and transaction-id is 10000. It is the responsibility of the user to retain the `` as `kube-cnca` is a stateless function. 
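The same pattern applies to PFD transactions; in addition, individual applications inside a transaction are addressed by the application identifier defined in the transaction YAML. A minimal sketch, assuming the transaction URL is printed on success (as in the `.../transactions/10000` example above) and using an illustrative application ID `app01`:

```shell
# Assumes `kubectl cnca pfd apply` prints the transaction URL on success;
# pfd-transaction.yml and app01 are illustrative names.
TRANS_URL=$(kubectl cnca pfd apply -f pfd-transaction.yml)
TRANS_ID=${TRANS_URL##*/}                        # e.g. 10000
kubectl cnca pfd get transaction "${TRANS_ID}" application app01
```

The individual transaction and application commands are listed below.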
To retrieve an existing PFD transaction with a known transaction ID, use the following command: + ```shell kubectl cnca pfd get transaction ``` To retrieve all active PFD transactions at AF, run: + ```shell kubectl cnca pfd get transactions ``` To modify an active PFD transaction, use the `patch` command and provide a YAML file with the subset of the configuration to be modified: + ```shell kubectl cnca pfd patch transaction -f ``` To delete an active PFD transaction, use the `delete` command: + ```shell kubectl cnca pfd delete transaction ``` To retrieve an existing application within a PFD transaction with a known application ID and transaction ID, use: + ```shell kubectl cnca pfd get transaction application ``` To modify an application within an active PFD transaction, use the `patch` command and provide a YAML file with the subset of the configuration to be modified: + ```shell kubectl cnca pfd patch transaction application -f ``` To delete an application within an active PFD transaction, use the `delete` command: + ```shell kubectl cnca pfd delete transaction application ``` - ##### Sample YAML NGC AF PFD transaction configuration The `kube-cnca pfd apply` expects the YAML configuration as in the format below. The file must contain the topmost configurations: `apiVersion`, `kind`, and `policy`. The configuration `policy` retains the NGC AF-specific transaction information. @@ -589,13 +616,14 @@ policy: Supported operations through `kube-cnca` plugin: - * Creation of Policy Authorization - Application session context through the AF microservice. - * Deletion of application session context. - * Updating (patching) application session context. - * Get application session context. - * Update or delete Event Notification within an application session context. +- Creation of Policy Authorization - Application session context through the AF microservice. +- Deletion of application session context. +- Updating (patching) application session context. +- Get application session context. +- Update or delete Event Notification within an application session context. Creation of the Policy Authorization Application session context is performed based on the configuration provided by the given YAML file. The YAML configuration should follow the provided sample YAML in the [Sample YAML NGC AF transaction configuration](#sample-yaml-ngc-af-policy-authorization-configuration) section. Use the `apply` command as shown below to post an application session context creation request onto AF: + ```shell kubectl cnca policy-authorization apply -f ``` @@ -603,26 +631,31 @@ kubectl cnca policy-authorization apply -f When the application session context is successfully created, the `apply` command will return the application session context ID (appSessionId). Only `` should be used in further correspondence with AF concerning this particular application session context. It is the responsibility of the user to retain the `` as `kube-cnca` is a stateless function. 
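Here, too, the returned `appSessionId` is the only handle for follow-up calls, so it is worth recording it immediately after `apply`. A minimal sketch, assuming the `apply` command prints the identifier on success; the file names are illustrative:

```shell
# Assumes `kubectl cnca policy-authorization apply` prints the appSessionId on success;
# app-session.yml and event-subscription.yml are illustrative filenames.
APP_SESSION_ID=$(kubectl cnca policy-authorization apply -f app-session.yml)
kubectl cnca policy-authorization subscribe "${APP_SESSION_ID}" -f event-subscription.yml
```

The full set of per-session commands is listed below.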
To retrieve an existing AppSession Session Context with a known appSessionId, use: + ```shell kubectl cnca policy-authorization get ``` To modify an active Application Session Context, use the `patch` command and provide a YAML file with the subset of the configuration to be modified: + ```shell kubectl cnca policy-authorization patch -f ``` To delete an active Application Session Context, use the `delete` command as below: + ```shell kubectl cnca policy-authorization delete ``` To add/modify Event Notification of active Application Session Context, use the `subscribe` command and provide a YAML file with the subset of the configuration to be modified: + ```shell kubectl cnca policy-authorization subscribe -f ``` To unsubscribe from Event Notification of active Application Session Context, use the `unsubscribe` command: + ```shell kubectl cnca policy-authorization unsubscribe ``` @@ -851,6 +884,7 @@ This sections describes the parameters that are used in the Packet flow descript >**NOTE**: One of the attributes of flowDescriptions, URls, and domainName is mandatory. ## Policy Authorization Application Session Context description + This section describes the parameters that are used in the AppSessionContextReqData part of Policy Authorization POST request. Groups mentioned as mandatory must be provided; in the absence of the Mandatory parameters, a 400 response is returned. | Attribute name | Mandatory | Description | diff --git a/doc/applications/app-guide/openness_openvinoexecflow.png b/doc/applications/app-guide/openness_openvinoexecflow.png index e62a78d7..2871fb36 100644 Binary files a/doc/applications/app-guide/openness_openvinoexecflow.png and b/doc/applications/app-guide/openness_openvinoexecflow.png differ diff --git a/doc/applications/openness_appguide.md b/doc/applications/openness_appguide.md index f8505502..66b14e04 100644 --- a/doc/applications/openness_appguide.md +++ b/doc/applications/openness_appguide.md @@ -14,8 +14,8 @@ Copyright (c) 2019 Intel Corporation - [Execution Flow Between EAA, Producer, and Consumer](#execution-flow-between-eaa-producer-and-consumer) - [Cloud Adapter Edge compute Application](#cloud-adapter-edge-compute-application) - [Application On-boarding](#application-on-boarding) + - [Authentication](#authentication) - [OpenNESS-aware Applications](#openness-aware-applications) - - [Authentication](#authentication) - [Service Activation](#service-activation) - [Service Discovery and Subscription](#service-discovery-and-subscription) - [Service Notifications](#service-notifications) @@ -56,13 +56,13 @@ OpenNESS application can be categorized in different ways depending on the scena ### Producer Application OpenNESS Producer applications are edge compute applications that provide services to other applications running on the edge compute platform. Producer applications do not serve end users traffic directly. They are sometimes referred to as Edge services. The following are some characteristics of a producer app: -- All producer apps must authenticate and acquire TLS. +- All producer apps must be TLS-capable and communicate through HTTPS. - All producer apps need to activate if the service provided by them needs to be discoverable by other edge applications. - A producer apps can have one or more fields for which it will provide notification updates. ### Consumer Application OpenNESS Consumer applications are edge compute applications that serve end users traffic directly. 
Consumer applications may or may not subscribe to the services from other producer applications on the edge node. The following are some characteristics of a consumer app: -- It is not mandatory for consumer apps to authenticate if they don't wish to call EAA APIs. +- It is not mandatory for consumer apps to be TLS-capable if they don't wish to call EAA APIs. - A consumer application can subscribe to any number of services from producer apps. Future extensions can implement entitlements to consumer apps to create access control lists. - Producer to Consumer update will use a web socket for notification. If there is further data to be shared between producer and consumer, other NFVi components such as OVS/VPP/NIC-VF can be used for data transfer. @@ -84,7 +84,7 @@ The consumer application is based on OpenVINO™ [OpenVINO] (https://software.in The OpenVINO producer application is responsible for activating a service in OpenNESS Edge Node. This service is simply a publication of the inference model name that can be used by the OpenVINO consumer application(s). This service involves sending periodic `openvino-model` notifications (its interval is defined by `NotificationInterval`), which in turn is absorbed by the consumer application(s). -The producer application commences publishing notifications after it handshakes with the Edge Application Agent (EAA) over HTTPS REST API. This handshake involves authentication and service activation. +The producer application commences publishing notifications after registration with the Edge Application Agent (EAA) over HTTPS REST API. This sample OpenVINO producer application represents a real-world application where city traffic behavior can is monitored by detecting humans and automobiles at different times of the day. @@ -92,12 +92,11 @@ This sample OpenVINO producer application represents a real-world application wh The OpenVINO consumer application executes object detection on the received video stream (from the client simulator) using an OpenVINO pre-trained model. The model of use is designated by the model name received in the `openvino-model` notification. The corresponding model file is provided to the integrated OpenVINO C++ application. -When the consumer application commences execution, it handshakes with EAA in a process that involves: -- Authentication -- Websocket connection establishment +When the consumer application commences execution, it communicates with EAA and perform operations involving: +- Websocket connection establishment - Service discovery -- Service subscription - +- Service subscription + Websocket connection retains a channel for EAA to forward notifications to the consumer application whenever a notification is received from the producer application over HTTPS REST API. Only subscribed-to notifications are forwarded on to the websocket. This sample OpenVINO consumer application represents a real-world application and depending on the input object model, it can detect objects in the input video stream and annotate (count if needed). @@ -133,67 +132,101 @@ Applications to be onboarded on the OpenNESS framework must be self-contained in > **NOTE:** Code snippets given in this guide are written in Go language; this is purely for the ease of demonstration. All other programming languages should suffice for the same purposes. -### OpenNESS-aware Applications -Edge applications must introduce themselves to the OpenNESS framework and identify if they would like to activate new edge services or consume an existing service. 
The Edge Application Agent (EAA) component is the handler of all the edge applications hosted by the OpenNESS edge node and acts as their point of contact. All interactions with EAA are through REST APIs, which are defined in [Edge Application Authentication API](https://www.openness.org/api-documentation/?api=auth) and [Edge Application API](https://www.openness.org/api-documentation/?api=eaa). +### Authentication -OpenNESS-awareness involves (a) authentication, (b) service activation/deactivation, (c) service discovery, (d) service subscription, and (e) Websocket connection establishment. The Websocket connection retains a channel for EAA for notification forwarding to pre-subscribed consumer applications. Notifications are generated by "producer" edge applications and absorbed by "consumer" edge applications. +All communications over EAA REST APIs are secured with HTTPS and TLS (Transport Layer Security). Therefore, all applications that are onboarded on OpenNESS must obtain X.509 certificates from a Certificate Authority (CA). This is performed by [signing a certificate through the Kubernetes* API](https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/). Each application that requires a certificate should generate it using the Certificate Signer by sending a CSR via the Certificate Requester, as detailed in the [Certificate Signer guide](../applications-onboard/openness-certsigner.md). In brief, the application YAML specs must be extended to: -The sequence of operations for the producer application are: -1. Authenticate with an OpenNESS edge node -2. Activate new service and include the list of notifications involved -3. Send notifications to the OpenNESS edge node, according to business logic +1. Include RBAC Service Account and Cluster Role Binding +2. Include 2 Init Containers +3. Create a ConfigMap with a JSON CSR config -The sequence of operations for the consumer application: -1. Authenticate with an OpenNESS edge node -2. Discover the available services on OpenNESS edge platform -3. Subscribe to services of interest and listen for notifications +The above changes will enable the application to exercise CSR with the Kubernetes* API through the OpenNESS CertSigner service. + +The cluster admin should manually approve the certificate in order to complete the application onboarding: -#### Authentication +```shell +kubectl certificate approve +``` -All communications over EAA REST APIs are secured with HTTPS and TLS (Transport Layer Security). Therefore, the wrapper program must authenticate itself by sending a Certificate Signing Request (CSR) to EAA to receive a digital identity certificate that is used in signing all the forthcoming HTTPS and Websocket communications. CSR is performed through the [Edge Application Authentication API](https://www.openness.org/api-documentation/?api=auth). +> **NOTE:** The authentication step is required when onboarding OpenNESS-aware and OpenNESS-agnostic applications. -Example of the authentication procedure with EAA is given below: +The Golang sample below represents the logic that the application can use to load the signed certificate and use it for subsequent HTTPS communications. The function `CreateEncryptedClient` loads the certificates from the container local file system and instantiates the Golang [TLS client](https://golang.org/pkg/net/http/#Client) instance that is subsequently used in further HTTP messaging.
```golang -certTemplate := x509.CertificateRequest{ - Subject: pkix.Name{ - CommonName: "namespace:app-id", - Organization: []string{"OpenNESS Organization"}, - }, - SignatureAlgorithm: x509.ECDSAWithSHA256, - EmailAddresses: []string{"hello@openness.org"}, +func CreateEncryptedClient() (*http.Client, error) { + + cert, err := tls.LoadX509KeyPair(Cfg.CertPath, Cfg.KeyPath) + if err != nil { + return nil, errors.Wrap(err, "Failed to load client certificate") + } + + certPool := x509.NewCertPool() + caCert, err := ioutil.ReadFile(Cfg.RootCAPath) + if err != nil { + return nil, errors.Wrap(err, "Failed to load CA Cert") + } + certPool.AppendCertsFromPEM(caCert) + + client := &http.Client{ + Transport: &http.Transport{ + TLSClientConfig: &tls.Config{RootCAs: certPool, + Certificates: []tls.Certificate{cert}, + ServerName: Cfg.EaaCommonName, + }, + }} + + log.Printf("%#v", client) + + tlsResp, err := client.Get("https://" + "https://eaa.openness:443") + if err != nil { + return nil, errors.Wrap(err, "Encrypted connection failure") + } + defer func() { + if e := tlsResp.Body.Close(); e != nil { + log.Println("Failed to close response body " + e.Error()) + } + }() + return client, nil } -conCsrBytes, _ := x509.CreateCertificateRequest(rand.Reader, - &certTemplate, prvKey) +func main() { + ... -csrMem := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE REQUEST", - Bytes: conCsrBytes}) + client, err := CreateEncryptedClient() + if err != nil { + log.Fatal(err) + return + } -conID := AuthIdentity{ - Csr: string(csrMem), -} + req, err := http.NewRequest("POST", "https://eaa.openness:443/services", + bytes.NewReader(payload)) + if err != nil { + log.Fatal(err) + return + } -conIdBytes, _ := json.Marshal(conID) + resp, err := client.Do(req) + if err != nil { + log.Fatal(err) + return + } -resp, _ := http.Post("http://eaa.openness:80/auth", - bytes.NewBuffer(conIdBytes)) + ... +} +``` -var conCreds AuthCredentials -json.NewDecoder(resp.Body).Decode(&conCreds) +### OpenNESS-aware Applications +Edge applications must introduce themselves to the OpenNESS framework and identify if they would like to activate new edge services or consume an existing service. The Edge Application Agent (EAA) component is the handler of all the edge applications hosted by the OpenNESS edge node and acts as their point of contact. All interactions with EAA are through REST APIs, which are defined in [Edge Application API](https://www.openness.org/api-documentation/?api=eaa). -x509Encoded, _ := x509.MarshalECPrivateKey(prvKey) +OpenNESS-awareness involves (a) service activation/deactivation, (b) service discovery, (c) service subscription, and (c) Websocket connection establishment. The Websocket connection retains a channel for EAA for notification forwarding to pre-subscribed consumer applications. Notifications are generated by "producer" edge applications and absorbed by "consumer" edge applications. -pemEncoded := pem.EncodeToMemory(&pem.Block{Type: "PRIVATE KEY", - Bytes: x509Encoded}) -conCert, _ := tls.X509KeyPair([]byte(conCreds.Certificate), - pemEncoded) +The sequence of operations for the producer application are: +1. Activate new service and include the list of notifications involved +2. Send notifications to the OpenNESS edge node, according to business logic -conCertPool := x509.NewCertPool() -for _, cert := range conCreds.CaPool { - ok := conCertPool.AppendCertsFromPEM([]byte(cert)) -} -``` +The sequence of operations for the consumer application: +1. Discover the available services on OpenNESS edge platform +2. 
Subscribe to services of interest and listen for notifications #### Service Activation @@ -344,7 +377,7 @@ In a situation where the developer has a legacy, pre-compiled or binary applicat Legacy, pre-compiled, or binary applications can be made OpenNESS-aware by following few steps without editing their code. This can be done by wrapping these applications with a separate program that is written purposefully to (a) communicate with OpenNESS Edge Node and (b) execute the legacy application. -The wrapper program interacts with EAA for (a) authentication, (b) Websocket connection establishment, (c) service discovery, and (d) service subscription. And call the legacy application with the proper arguments based on the received notifications. Or, if the legacy application is intended to work as a producer application, then the wrapper programmer should activate the edge service with EAA and send the notifications based on the outcomes of the legacy application. +The wrapper program interacts with EAA for (a) Websocket connection establishment, (b) service discovery, and (c) service subscription. It then calls the legacy application with the proper arguments based on the received notifications. Alternatively, if the legacy application is intended to work as a producer application, the wrapper program should activate the edge service with EAA and send the notifications based on the outcomes of the legacy application. The code below gives an example of an executable application being called at the operating system level when a notification is received from EAA. The executable application is separately compiled and exists on the file system. A similar approach has been followed with the OpenVINO sample application, which was originally written in C++ but is called from a wrapper Go-lang program. diff --git a/doc/applications/openness_openvino.md b/doc/applications/openness_openvino.md index 578a9ff1..9688283a 100644 --- a/doc/applications/openness_openvino.md +++ b/doc/applications/openness_openvino.md @@ -10,7 +10,7 @@ Copyright © 2019 Intel Corporation - [Client Simulator](#client-simulator) - [OpenVINO Producer Application](#openvino-producer-application) - [OpenVINO Consumer Application](#openvino-consumer-application) -- [Execution Flow Between EAA, Producer & Consumer](#execution-flow-between-eaa-producer--consumer) +- [Execution Flow Between CertSigner, EAA, Producer & Consumer](#execution-flow-between-certsigner-eaa-producer--consumer) - [Build & Deployment of OpenVINO Applications](#build--deployment-of-openvino-applications) - [Docker Images Creation](#docker-images-creation) - [Streaming & Displaying the Augmented Video](#streaming--displaying-the-augmented-video) @@ -49,7 +49,7 @@ The client simulator is responsible for continuously transmitting a video stream The OpenVINO producer application is responsible for activating a service in OpenNESS Edge Node. This service is simply a publication of the inference model name, which can be used by the OpenVINO consumer application(s). This service involves sending periodic `openvino-inference` notifications, which in turn are absorbed by the consumer application(s). -The producer application commences publishing notifications after it handshakes with the Edge Application Agent (EAA) over HTTPS REST API. This handshaking involves authentication and service activation. +The producer application commences publishing notifications after it handshakes with the Edge Application Agent (EAA) over HTTPS REST API.
This handshaking involves authentication and service activation. The HTTPS communication requires a certificate, which should be generated using the Certificate Signer by sending a CSR via the Certificate Requester. The `openvino-inference` provides information about the model name used in video inferencing and the acceleration type. Contents of the notification are defined by the below struct: @@ -79,9 +79,12 @@ By default, the producer Docker image builds with `CPU` only inferencing. OpenVINO consumer application executes object detection on the received video stream (from the client simulator) using an OpenVINO pre-trained model. The model of use is designated by the model name received in the `openvino-inference` notification. The corresponding model file is provided to the integrated OpenVINO C++ application. -When the consumer application commences execution, it handshakes with EAA in a process that involves (a) authentication, (b) WebSocket connection establishment, (c) service discovery, and (d) service subscription. The WebSocket connection retains a channel for EAA to forward notifications to the consumer application whenever a notification is received from the producer application over the HTTPS REST API. Only subscribed-to notifications are forwarded to the WebSocket. +When the consumer application commences execution, it handshakes with EAA in a process that involves (a) WebSocket connection establishment, (b) service discovery, and (c) service subscription. The WebSocket connection retains a channel for EAA to forward notifications to the consumer application whenever a notification is received from the producer application over the HTTPS REST API. Only subscribed-to notifications are forwarded to the WebSocket. -## Execution Flow Between EAA, Producer & Consumer +The HTTPS communication requires a certificate, which should be generated using the Certificate Signer by sending a CSR via the Certificate Requester. + + +## Execution Flow Between CertSigner, EAA, Producer & Consumer The simplified execution flow of the consumer and producer applications with EAA is depicted in the sequence diagram below. @@ -89,6 +92,8 @@ The simplified execution flow of the consumer and producer applications with EAA _Figure - OpenVINO Application Execution Flow_ +For more information about CSR, refer to [OpenNESS CertSigner](../applications-onboard/openness-certsigner.md). + ## Build & Deployment of OpenVINO Applications ### Docker Images Creation diff --git a/doc/applications/openness_service_mesh.md b/doc/applications/openness_service_mesh.md index 3f64288e..59f25e57 100644 --- a/doc/applications/openness_service_mesh.md +++ b/doc/applications/openness_service_mesh.md @@ -411,9 +411,9 @@ To access NGC function API’s (AF and OAM), the client request to the server us ```shell $ kubectl create secret generic ngc-credential -n istio-system \ - --from-file=tls.key=/etc/openness/certs/ngc/server-key.pem \ - --from-file=tls.crt=/etc/openness/certs/ngc/server-cert.pem \ - --from-file=ca.crt=/etc/openness/certs/ngc/root-ca-cert.pem + --from-file=tls.key=/opt/openness/certs/ngc/server-key.pem \ + --from-file=tls.crt=/opt/openness/certs/ngc/server-cert.pem \ + --from-file=ca.crt=/opt/openness/certs/ngc/root-ca-cert.pem ``` The `root-ca-cert.pem` is used to validate client certificates while the `server-cert.pem` and `server-key.pem` are used for providing server authentication and encryption. This below policy creates istio gateway with mutual TLS while using the `ngc-credential` secret created above.
@@ -509,9 +509,11 @@ Istio service mesh can be deployed with OpenNESS using the OEK through the pre-d The Istio management console, [Kiali](https://kiali.io/), is deployed alongside Istio with the default credentials: * Username: `admin` -* Password: `admin` * Nodeport set to `30001` +To get the randomly generated password run the following command on Kubernetes controller: +`kubectl get secrets/kiali -n istio-system -o json | jq -r '.data.passphrase' | base64 -d` + Prometheus and Grafana are deployed in the OpenNESS platform as part of the telemetry role and are integrated with the Istio service mesh. To verify if Istio resources are deployed and running, use the following command: diff --git a/doc/arch-images/multi-location-edge.png b/doc/arch-images/multi-location-edge.png new file mode 100644 index 00000000..d1826b74 Binary files /dev/null and b/doc/arch-images/multi-location-edge.png differ diff --git a/doc/arch-images/openness-emco.png b/doc/arch-images/openness-emco.png new file mode 100644 index 00000000..1c60d8db Binary files /dev/null and b/doc/arch-images/openness-emco.png differ diff --git a/doc/arch-images/openness_overview.png b/doc/arch-images/openness_overview.png index 8f2a68d9..883fbc1f 100644 Binary files a/doc/arch-images/openness_overview.png and b/doc/arch-images/openness_overview.png differ diff --git a/doc/arch-images/resource.png b/doc/arch-images/resource.png new file mode 100644 index 00000000..48dc00d0 Binary files /dev/null and b/doc/arch-images/resource.png differ diff --git a/doc/architecture.md b/doc/architecture.md index de7d0a7e..a6c506de 100644 --- a/doc/architecture.md +++ b/doc/architecture.md @@ -2,91 +2,75 @@ SPDX-License-Identifier: Apache-2.0 Copyright (c) 2019-2020 Intel Corporation ``` + # OpenNESS Architecture and Solution Overview -- [OpenNESS Overview](#openness-overview) -- [OpenNESS Distributions](#openness-distributions) -- [Deployment Based on Location](#deployment-based-on-location) + - [Architecture Overview](#architecture-overview) - [Logical](#logical) - [Architecture](#architecture) - - [OpenNESS Kubernetes Control Plane Node](#openness-kubernetes-control-plane-node) + - [OpenNESS Kubernetes Control Plane](#openness-kubernetes-control-plane) - [OpenNESS Edge Node](#openness-edge-node) -- [Microservices, Kubernetes Extensions, and Enhancements](#microservices-kubernetes-extensions-and-enhancements) - - [Platform Pods - Enhanced Platform Awareness](#platform-pods---enhanced-platform-awareness) - - [System Pods](#system-pods) - - [Container Networking](#container-networking) - - [Telemetry](#telemetry) +- [Building Blocks, Kubernetes Extensions, and Enhancements](#building-blocks-kubernetes-extensions-and-enhancements) + - [Multi-Access Networking](#multi-access-networking) + - [Edge Multi-Cluster Orchestration](#edge-multi-cluster-orchestration) + - [Resource Management](#resource-management) + - [Resource Identification](#resource-identification) + - [Resource Allocation](#resource-allocation) + - [Resource Monitoring](#resource-monitoring) + - [Accelerators](#accelerators) + - [Dataplane/Container Network Interfaces](#dataplanecontainer-network-interfaces) + - [Edge Aware Service Mesh](#edge-aware-service-mesh) + - [Telemetry and Monitoring](#telemetry-and-monitoring) + - [Edge Services](#edge-services) - [Software Development Kits](#software-development-kits) -- [Edge Services and Network Functions](#edge-services-and-network-functions) -- [OpenNESS Experience Kit](#openness-experience-kit) - - [Minimal flavor](#minimal-flavor) - - 
[RAN node flavor](#ran-node-flavor) - - [Core node flavor](#core-node-flavor) - - [Application node flavor](#application-node-flavor) - - [Microsoft Azure OpenNESS](#microsoft-azure-openness) - - [Converged Edge Reference Architecture (CERA) Flavor](#converged-edge-reference-architecture-cera-flavor) +- [Converged Edge Reference Architecture](#converged-edge-reference-architecture) + - [CERA Minimal Flavor](#cera-minimal-flavor) + - [CERA Access Edge Flavor](#cera-access-edge-flavor) + - [CERA Near Edge Flavor](#cera-near-edge-flavor) + - [CERA On Prem Edge and Private Wireless](#cera-on-prem-edge-and-private-wireless) + - [CERA SD-WAN Edge Flavor](#cera-sd-wan-edge-flavor) + - [CERA SD-WAN Hub Flavor](#cera-sd-wan-hub-flavor) + - [CERA Media Analytics Flavor with VCAC-A](#cera-media-analytics-flavor-with-vcac-a) + - [CERA Media Analytics Flavor](#cera-media-analytics-flavor) + - [CERA CDN Transcode Flavor](#cera-cdn-transcode-flavor) + - [CERA CDN Caching Flavor](#cera-cdn-caching-flavor) + - [CERA Core Control Plane Flavor](#cera-core-control-plane-flavor) + - [CERA Core User Plane Flavor](#cera-core-user-plane-flavor) + - [CERA for Untrusted Non-3GPP Access Flavor](#cera-for-untrusted-non-3gpp-access-flavor) +- [Reference Edge Apps and Network Functions](#reference-edge-apps-and-network-functions) +- [OpenNESS Optimized Commercial Applications](#openness-optimized-commercial-applications) + - [OpenNESS DevKit for Microsoft Azure](#openness-devkit-for-microsoft-azure) - [Other References](#other-references) - [List of Abbreviations](#list-of-abbreviations) -## OpenNESS Overview -![](arch-images/modular.png) - -Open Network Edge Services Software (OpenNESS) is a software toolkit that enables highly optimized and performant edge platforms to onboard and manage applications and network functions with cloud-like agility. OpenNESS is a modular, microservice oriented architecture that can be consumed by a customer as a whole solution or in parts. - -OpenNESS is intended for the following types of users: -![](arch-images/customers.png) - -OpenNESS simplifies edge platform development and deployment: -- Abstracts Network Complexity: Users can choose from many data planes, container network interfaces, and access technologies. -- Cloud Native Capabilities: User support of cloud-native ingredients for resource orchestration, telemetry, and service mesh. -- Hardware and Software Optimizations for Best Performance and ROI: Dynamic discovery and optimal placement of apps and services. Users can expose underlying edge hardware and enable the control and management of hardware accelerators. - -OpenNESS provides three easy steps to achieve deployment: -1. Acquire hardware that meets the requirements -2. Meet the prerequisites and use the [Getting Started Guide](https://github.com/open-ness/specs/blob/master/doc/getting-started/openness-experience-kits.md) for deployment -3. Use [Application Onboarding](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/network-edge-applications-onboarding.md) for applications and Cloud-native Network Functions (CNFs) - -![](arch-images/start.png) - - -## OpenNESS Distributions -OpenNESS is released as two distributions: -1. OpenNESS : A full open-source distribution of OpenNESS -2. Intel® Distribution of OpenNESS : A licensed distribution from Intel that includes all the features in OpenNESS along with additional microservices, Kubernetes\* extensions, enhancements, and optimizations for Intel® architecture. 
- -The Intel Distribution of OpenNESS requires a secure login to the OpenNESS GitHub\* repository. For access to the Intel Distribution of OpenNESS, contact your Intel support representative. - -## Deployment Based on Location -OpenNESS supports the deployment of edge nodes that host applications and network functions at the following locations: -- On-Premises: The Edge Computing resources are located in the customer premises (e.g., industrial, retail, healthcare) and managed by either the Communication Service Provider (CoSP) or the Enterprise customer as a Private network (Private 4G/5G, uCPE/SDWAN). These deployments retain the sensitive data generated on-premises. -- Network Edge: Edge compute resources are often spread across the CoSP network (e.g. Access Edge - Cell site, Near Edge - Aggregation Sites, and Central Office - Regional Data Center) and managed by the CoSP. Adoption of 5G has paved the way for cloud-native, commercial off-the-shelf (COTS) deployments that host network functions and applications. - -Most of these deployments are fully virtualized and moving towards cloud-native platforms for agility and elasticity. -![](arch-images/locations.png) - ## Architecture Overview -Before reviewing the detailed architecture overview of OpenNESS, take a look at the logical overview of how OpenNESS microservices are laid out. + +Before reviewing the detailed architecture overview of OpenNESS, take a look at the logical overview of how the OpenNESS Building Blocks are laid out. ### Logical -The OpenNESS solution is built on top of Kubernetes, which is a production-grade container orchestration environment. A typical OpenNESS-based deployment consists of an **OpenNESS Kubernetes Control Plane Node** and an **OpenNESS Edge Node**. + +The OpenNESS solution is built on top of Kubernetes, which is a production-grade container orchestration environment. A typical OpenNESS-based deployment consists of an **OpenNESS Kubernetes Control Plane** and an **OpenNESS Edge Node**. ![](arch-images/openness_overview.png) -**OpenNESS Kubernetes Control Plane Node**: This node consists of microservices and Kubernetes extensions, enhancements, and optimizations that provide the functionality to configure one or more OpenNESS Edge Nodes and the application services that run on those nodes (Application Pod Placement, Configuration of Core Network, etc). +**OpenNESS Kubernetes Control Plane**: This node consists of microservices and Kubernetes extensions, enhancements, and optimizations that provide the functionality to configure one or more OpenNESS Edge Nodes and the application services that run on those nodes (Application Pod Placement, Configuration of Core Network, etc). **OpenNESS Edge Node**: This node consists of microservices and Kubernetes extensions, enhancements, and optimizations that are needed for edge application and network function deployments. It also consists of APIs that are often used for the discovery of application services. Another key ingredient is the 4G/5G core network functions that enable a private or public edge. OpenNESS uses reference network functions to validate this end-to-end edge deployment. This is key to understanding and measuring edge Key Performance Indicators (KPIs). 
### Architecture + ![](arch-images/openness-arc.png) -#### OpenNESS Kubernetes Control Plane Node -The OpenNESS Kubernetes Control Plane Node consists of Vanilla Kubernetes Control Plane Node components along with OpenNESS microservices that interact with the Kubernetes Control Plane Node using Kubernetes defined APIs. +#### OpenNESS Kubernetes Control Plane + +The OpenNESS Kubernetes Control Plane consists of Vanilla Kubernetes Control Plane components along with OpenNESS microservices that interact with the Kubernetes Control Plane using Kubernetes defined APIs. + +The following are the high-level features of the OpenNESS Kubernetes Control Plane building blocks: -The following are the high-level features of the OpenNESS Kubernetes Control Plane microservice: - Configuration of the hardware platform that hosts applications and network functions - Configuration of network functions (4G, 5G, and WiFi\*) - Detection of various hardware and software capabilities of the edge cluster and use for scheduling applications and network functions @@ -94,157 +78,250 @@ The following are the high-level features of the OpenNESS Kubernetes Control Pla - Enable collection of hardware infrastructure, software, and application monitoring - Expose edge cluster capabilities northbound to a controller -#### OpenNESS Edge Node -The OpenNESS Edge Node consists of Vanilla Kubernetes Node components along with OpenNESS microservices that interact with Kubernetes node using Kubernetes defined APIs. +#### OpenNESS Edge Node + +The OpenNESS Edge Node consists of Vanilla Kubernetes Node components along with OpenNESS Building Blocks that interact with Kubernetes node using Kubernetes defined APIs. + +The following are the high-level features of the OpenNESS Kubernetes node building blocks: -The following are the high-level features of the OpenNESS Kubernetes node microservice: - Container runtime (Docker\*) and virtualization infrastructure (libvirt\*, Open vSwitch (OVS)\*, etc.) support -- Platform pods consisting of services that enable the configuration of a node for a particular deployment, device plugins enabling hardware resource allocation to an application pod, and detection of interfaces and reporting to the Control Plane node. +- Platform pods consisting of services that enable the configuration of a node for a particular deployment, device plugins enabling hardware resource allocation to an application pod, and detection of interfaces and reporting to the Control Plane. - System pods consisting of services that enable reporting the hardware and software features of each node to the Control Plane, resource isolation service for pods, and providing a DNS service to the cluster - Telemetry consisting of services that enable hardware, operating system, infrastructure, and application-level telemetry for the edge node - Support for real-time kernel for low latency applications and network functions like 4G and 5G base station and non-real-time kernel - + The OpenNESS Network functions are the key 4G and 5G functions that enable edge cloud deployment. OpenNESS provides these key reference network functions and the configuration agent in the Intel Distribution of OpenNESS. The OpenNESS solution validates the functionality and performance of key software development kits used for applications and network functions at the edge. This spans across edge applications that use Intel® Media SDK, OpenVINO™, Intel® Math Kernel Library (Intel® MKL), etc. 
and network functions that use Data Plane Development Kit (DPDK), Intel® Performance Primitives, Intel® MKL, OpenMP\*, OpenCL\*, etc. -## Microservices, Kubernetes Extensions, and Enhancements +## Building Blocks, Kubernetes Extensions, and Enhancements + +This section provides an overview of the various OpenNESS Building Blocks, extensions to Kubernetes, and enhancements to other open source frameworks required for the development of Edge platforms. These building blocks span across the system and platform pods discussed earlier. Many are provided as Helm charts. + +### Multi-Access Networking + +This building block represents a set of microservices that enables steering of traffic from various access networks to and from edge apps and services. -OpenNESS microservices and enhancements can be understood under the following sub-classification: All OpenNESS microservices are provided as Helm charts. +- **Application Function (AF):** is a microservice in the OpenNESS Kubernetes Control Plane that supports Traffic Influencing Subscription, Packet Flow Description Management functionality, and Policy Authorization to help steer the Edge-specific traffic in UPF towards the applications deployed on the OpenNESS edge node. AF is developed as per the Rel.15 3GPP specifications. +- **Network Exposure Function (NEF)**: is a microservice used for validation of AF functionality in OpenNESS before integrating with the 5G Core. The functionality is limited and in line with the AF functional scope. It includes a reference implementation for Traffic influence and PFD management. NEF is developed as per the Rel.15 3GPP specifications. +- **Core Network Configuration Agent (CNCA)**: is a microservice that provides an interface for orchestrators interacting with the OpenNESS Kubernetes Control Plane to communicate with the 5G Core network solution. CNCA provides a CLI (kubectl plugin) interface to interact with the AF and OAM services. +- **Edge Application Agent (EAA)**: implements the edge application APIs, which are important for edge application developers. EAA provides APIs for service discovery, subscription, and update notification, and is based on the ETSI MEC MEC011 MP1 API specifications (a minimal client sketch is given after this list). +- **Edge Interface Service**: This service is an application that runs in a Kubernetes pod on each node of the OpenNESS Kubernetes cluster. It allows attachment of additional network interfaces of the node host to provide an OVS bridge, enabling external traffic scenarios for applications deployed in Kubernetes pods. Services on each node can be controlled from the Control Plane using a kubectl plugin. This interface service can attach both kernel and userspace (DPDK) network interfaces to OVS bridges of a suitable type. +- **DNS Service**: Supports DNS resolution and forwarding services for the application deployed on edge computing. The DNS server is implemented based on the DNS library in Go. DNS service supports resolving DNS requests from user equipment (UE) and applications on the edge cloud. -### Platform Pods - Enhanced Platform Awareness -Enhanced Platform Awareness (EPA) represents a methodology and a related set of enhancements across multiple layers of the orchestration stack, targeting intelligent platform capabilities as well as configuration and capacity consumption.
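As a rough illustration of how an edge application consumes the EAA APIs mentioned above, the hedged Go sketch below queries the EAA service list over mutual TLS. The `https://eaa.openness:443/services` endpoint mirrors the one used in the application-guide sample earlier in this document; the certificate file paths are placeholders, and the exact request and response schemas should be taken from the Edge Application API reference rather than from this sketch.

```golang
package main

import (
	"crypto/tls"
	"crypto/x509"
	"io/ioutil"
	"log"
	"net/http"
)

func main() {
	// Load the application certificate and key obtained through the CertSigner flow.
	// The paths are illustrative placeholders.
	cert, err := tls.LoadX509KeyPair("./certs/cert.pem", "./certs/key.pem")
	if err != nil {
		log.Fatal(err)
	}

	// Trust the cluster root CA so the EAA server certificate can be verified.
	caCert, err := ioutil.ReadFile("./certs/root.pem")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caCert)

	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{
				RootCAs:      pool,
				Certificates: []tls.Certificate{cert},
				ServerName:   "eaa.openness",
			},
		},
	}

	// Ask EAA for the list of services activated by producer applications.
	resp, err := client.Get("https://eaa.openness:443/services")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("EAA service list: %s", body)
}
```

A consumer would follow the same pattern to subscribe to notifications of interest and then keep a Websocket open towards EAA for receiving them.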
+### Edge Multi-Cluster Orchestration -EPA features include: -* HugePages support -* Non-uniform memory access (NUMA) topology awareness -* CPU pinning -* Integration of OVS with DPDK -* Support for I/O Pass-through via SR-IOV -* HDDL support -* FPGA resource allocation support, and many others +Edge Multi-Cluster Orchestration (EMCO) is a geo-distributed application orchestrator for Kubernetes*. The main objective of EMCO is the automation of the deployment of applications and services across clusters. It acts as a central orchestrator that can manage edge services and network functions across geographically distributed edge clusters from different third parties. Finally, the resource orchestration within a cluster of nodes will leverage Kubernetes* and Helm charts. -Why should users consider using EPA? To achieve optimal performance and efficiency characteristics. EPA extensions facilitate the automation of an advanced selection of capabilities and tuning parameters during the deployment of cloud-native solutions. EPA also enables service providers to offer differentiating and/or revenue-generating services that require leveraging specific hardware features. +![](arch-images/openness-emco.png) -OpenNESS provides a complete solution for users to integrate key EPA features needed for applications (CDN, AI inference, transcoding, gaming, etc.) and CNFs (RAN DU, CU, and Core) to work optimally for edge deployments. +Link: [EMCO](https://github.com/open-ness/specs/blob/master/doc/building-blocks/emco/openness-emco.md) +### Resource Management -OpenNESS supports the following EPA microservices, which typically span across the system and platform pods discussed earlier in this document. -- High-Density Deep Learning (HDDL): Software that enables OpenVINO™-based AI apps to run on Intel® Movidius™ Vision Processing Units (VPUs). It consists of the following components: +Resource Management represents a methodology that involves identification of the hardware and software resources on the edge cluster, configuration and allocation of the resources, and continuous monitoring of the resources for any changes. + +![](arch-images/resource.png) + +OpenNESS provides a set of enhancements across multiple layers of the orchestration stack, targeting identification of platform capabilities as well as configuration and capacity consumption. + +Why should users consider using Resource Management? To achieve optimal performance and efficiency characteristics. Resource Management extensions facilitate the automation of an advanced selection of capabilities and tuning parameters during the deployment of cloud-native solutions. Resource Management also enables service providers to offer differentiating and/or revenue-generating services that require leveraging specific hardware features. + +OpenNESS provides a complete solution for users to integrate key resource management features needed for applications (CDN, AI inference, transcoding, gaming, etc.) and CNFs (RAN DU, CU, and Core) to work optimally for edge deployments. + +#### Resource Identification + +Resource identification involves detecting key hardware and software features on the platform that can be used for scheduling of workloads on the cluster. OpenNESS supports the Node Feature Discovery (NFD) microservice, which detects hardware and software features and labels the nodes with the relevant features, as sketched in the example below.
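To make the idea concrete, the sketch below (assuming a recent client-go release, v0.18 or later, and an in-cluster service account with node read permissions) lists the nodes that NFD has labeled with a given feature. The `feature.node.kubernetes.io/cpu-cpuid.AVX512F` key is just one example of the labels NFD publishes; workloads would normally consume such labels indirectly through a `nodeSelector` or node affinity rule rather than querying the API directly.

```golang
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Use the in-cluster configuration of the pod's service account.
	config, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// Select only the nodes that NFD labeled with AVX-512 support.
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{
		LabelSelector: "feature.node.kubernetes.io/cpu-cpuid.AVX512F=true",
	})
	if err != nil {
		log.Fatal(err)
	}

	for _, node := range nodes.Items {
		fmt.Println("AVX-512 capable node:", node.Name)
	}
}
```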
+ +#### Resource Allocation + +Resource Allocation involves configuration of certain hardware resources such as CPU, I/O devices, GPU, accelerator devices, memory (HugePages), etc., for the applications and services. OpenNESS provides many device plugins, Kubernetes jobs, CRD operators, and the Red Hat OpenShift Special Resource Operator for configuration and resource allocation of VPU, GPU, FPGA, CPU L3 cache, and memory bandwidth resources to applications and services. + +#### Resource Monitoring + +Resource monitoring involves tracking the usage of the resources allocated to the applications and services, as well as tracking the remaining allocatable resources. OpenNESS provides collectors and node exporters using collectd, Telegraf, and custom exporters as part of telemetry and monitoring of current resource usage. Resource monitoring support is provided for CPU, VPU, FPGA, and memory. + +Link: [Enhanced Platform Awareness: Documents covering Accelerators and Resource Management](https://github.com/open-ness/specs/tree/master/doc/building-blocks/enhanced-platform-awareness) + +### Accelerators + +OpenNESS supports the following accelerator microservices: + +- **High-Density Deep Learning (HDDL)**: Software that enables OpenVINO™-based AI apps to run on Intel® Movidius™ Vision Processing Units (VPUs). It consists of the following components: - HDDL device plugin for K8s - HDDL service for scheduling jobs on VPUs -- Visual Compute Acceleration - Analytics (VCAC-A): Software that enables OpenVINO™-based AI apps and media apps to run on Intel® Visual Compute Accelerator Cards (Intel® VCA Cards). It is composed of the following components: +- **Visual Compute Acceleration - Analytics (VCAC-A)**: Software that enables OpenVINO™-based AI apps and media apps to run on Intel® Visual Compute Accelerator Cards (Intel® VCA Cards). It is composed of the following components: - VPU device plugin for K8s - HDDL service for scheduling jobs on VPU - GPU device plugin for K8s -- FPGA/eASIC/NIC: Software that enables AI inferencing for applications, high-performance and low-latency packet pre-processing on network cards, and offloading for network functions such as eNB/gNB offloading Forward Error Correction (FEC). It consists of: +- **FPGA/eASIC/NIC**: Software that enables AI inferencing for applications, high-performance and low-latency packet pre-processing on network cards, and offloading for network functions such as eNB/gNB offloading Forward Error Correction (FEC). It consists of: - FPGA device plugin for inferencing - SR-IOV device plugin for FPGA/eASIC - - Dynamic Device Profile for Network Interface Cards (NIC) -- Resource Management Daemon (RMD): RMD uses Intel® Resource Director Technology (Intel® RDT) to implement cache allocation and memory bandwidth allocation to the application pods. This is a key technology for achieving resource isolation and determinism on a cloud-native platform. -- Node Feature Discovery (NFD): Software that enables node feature discovery for Kubernetes. It detects hardware features available on each node in a Kubernetes cluster and advertises those features using node labels. -- Topology Manager: This component allows users to align their CPU and peripheral device allocations by NUMA node. -- Kubevirt: Provides support for running legacy applications in VM mode and the allocation of SR-IOV ethernet interfaces to VMs. - -### System Pods -- Edge Interface Service: This service is an application that runs in a Kubernetes pod on each node of the OpenNESS Kubernetes cluster.
It allows attachment of additional network interfaces of the node host to provide an OVS bridge, enabling external traffic scenarios for applications deployed in Kubernetes pods. Services on each node can be controlled from the Control Plane node using a kubectl plugin. -This interface service can attach both kernel and userspace (DPDK) network interfaces to OVS bridges of a suitable type. -- BIOS/Firmware Configuration Service : Uses Intel's System Configuration Utility (syscfg) tool to build a pod that is scheduled by K8s as a job that configures both BIOS and FW with the given specification. -- DNS Service: Supports DNS resolution and forwarding services for the application deployed on edge computing. The DNS server is implemented based on the DNS library in Go. DNS service supports resolving DNS requests from user equipment (UE) and applications on the edge cloud. -- Video Transcode Service: An application microservice that exposes a REST API for transcoding on CPU or GPU. -- Edge Application Agent (EAA): Edge application APIs are implemented by the EAA. Edge application APIs are important APIs for edge application developers. EAA APIs provide APIs for service discovery, subscription, and update notification. - -### Container Networking + - Dynamic Device Profile for Network Interface Cards (NIC) + +### Dataplane/Container Network Interfaces + OpenNESS provides a flexible and high-performance set of container networking using Container Networking Interfaces (CNIs). Some of the high-performance, open-source CNIs are also supported. Container networking support in OpenNESS addresses the following: + - Highly-coupled, container-to-container communications - Pod-to-pod communications on the same node and across the nodes OpenNESS supports the following CNIs: -- SRIOV CNI: works with the SR-IOV device plugin for VF allocation for a container. -- User Space CNI: designed to implement userspace networking (as opposed to kernel space networking). -- Bond CNI: provides a method for aggregating multiple network interfaces into a single logical "bonded" interface. -- Multus CNI: enables attaching multiple network interfaces to pods in Kubernetes. -- Weave CNI: creates a virtual network that connects Docker containers across multiple hosts and enables their automatic discovery. -- Kube-OVN CNI: integrates the OVN-based network virtualization with Kubernetes. It offers an advanced container network fabric for enterprises with the most functions and the easiest operation. -- Calico CNI/eBPF: supports applications with higher performance using eBPF and IPv4/IPv6 dual-stack - -### Telemetry + +- **SRIOV CNI**: works with the SR-IOV device plugin for VF allocation for a container. +- **User Space CNI**: designed to implement userspace networking (as opposed to kernel space networking). +- **Bond CNI**: provides a method for aggregating multiple network interfaces into a single logical "bonded" interface. +- **Multus CNI**: enables attaching multiple network interfaces to pods in Kubernetes. +- **Weave CNI**: creates a virtual network that connects Docker containers across multiple hosts and enables their automatic discovery. +- **Kube-OVN CNI**: integrates the OVN-based network virtualization with Kubernetes. It offers an advanced container network fabric for enterprises with the most functions and the easiest operation. 
+- **Calico CNI/eBPF**: supports applications with higher performance using eBPF and IPv4/IPv6 dual-stack + +Link: [Dataplane and CNI](https://github.com/open-ness/specs/tree/master/doc/building-blocks/dataplane) + +### Edge Aware Service Mesh + +Istio is a feature-rich, cloud-native service mesh platform that provides a collection of key capabilities such as: Traffic Management, Security and Observability uniformly across a network of services. OpenNESS integrates natively with the Istio service mesh to help reduce the complexity of large scale edge applications, services, and network functions. + +Link: [Service Mesh](https://github.com/open-ness/specs/blob/master/doc/applications/openness_service_mesh.md) + +### Telemetry and Monitoring + Edge builders need a comprehensive telemetry framework that combines application telemetry, hardware telemetry, and events to create a heat-map across the edge cluster and enables the orchestrator to make scheduling decisions. Industry-leading, cloud-native telemetry and monitoring frameworks are supported on OpenNESS: -- Prometheus\* and Grafana\*: This is a cloud-native, industry-standard framework that provides a monitoring system and time series database. -- Telegraf This is a cloud-native, industry-standard agent for collecting, processing, aggregating, and writing metrics. -- Open Telemetry : Open Consensus, Open Tracing - CNCF project that provides the libraries, agents, and other components that you need to capture telemetry from your services so that you can better observe, manage, and debug them. + +- **Prometheus\* and Grafana\***: This is a cloud-native, industry-standard framework that provides a monitoring system and time series database. +- **Telegraf** This is a cloud-native, industry-standard agent for collecting, processing, aggregating, and writing metrics. +- **Open Telemetry**: Open Consensus, Open Tracing - CNCF project that provides the libraries, agents, and other components that you need to capture telemetry from your services so that you can better observe, manage, and debug them. Hardware Telemetry support: + - CPU: Supported metrics - cpu, cpufreq, load, HugePages, intel_pmu, intel_rdt, ipmi - Dataplane: Supported metrics - ovs_stats and ovs_pmd_stats - Accelerator: Supported Metrics from - FPGA–PAC-N3000, VCAC-A, HDDL-R, eASIC, GPU, and NIC OpenNESS also supports a reference application of using telemetry to take actions using Kubernetes APIs. This reference is provided to the Telemetry Aware Scheduler project. +Link: [Telemetry](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-telemetry.md) + +### Edge Services + +These building blocks are included as part of System pods. + +- **Video Analytics Service**: An application microservice that exposes a REST API for transcoding on CPU or GPU. + ### Software Development Kits + OpenNESS supports leading SDKs for edge services (applications) and network function development. As part of the development of OpenNESS, applications developed using these SDKs are optimized to provide optimal performance. This ensures that when customers develop applications using these SDKs, they can achieve optimal performance. -- OpenVINO™ SDK : The OpenVINO™ toolkit is composed of a variety of tools from Intel that work together to provide a complete computer vision pipeline solution that is optimized on Intel® architecture. This article will focus on the Intel® Media SDK component of the toolkit. 
The Intel Media SDK is a high-level API for specific video processing operations: decode, process, and encode. It supports H.265, H.264, MPEG-2, and more codecs. Video processing can be used to resize, scale, de-interlace, color conversion, de-noise, sharpen, and more. The Intel Media SDK works in the background to leverage hardware acceleration on Intel® architecture with optimized software fallback for each hardware platform. Thus, developers do not need to change the code from platform to platform and can focus more on the application itself rather than on hardware optimization. -- Intel Media SDK : SDK used for developing video applications with state-of-the-art libraries, tools, and samples. They are all accessible via a single API that enables hardware acceleration for fast video transcoding, image processing, and media workflows. The two main paths for application developers to access GPU media processing capabilities are Intel® Media SDK and Intel® SDK for OpenCL™ applications. -- DPDK: Data Plane Development Kit (DPDK) consists of libraries to accelerate packet-processing workloads that run on a wide variety of CPU architectures. -- Intel IPP: Intel® Integrated Performance Primitives (Intel® IPP) is an extensive library of ready-to-use, domain-specific functions that are highly optimized for diverse Intel® architectures. -- Intel® MKL: Intel® Math Kernel Library (Intel® MKL) optimizes code with minimal effort for future generations of Intel® processors. It is compatible with your choice of compilers, languages, operating systems, and linking and threading models. +- **OpenVINO™ SDK**: The OpenVINO™ toolkit is composed of a variety of tools from Intel that work together to provide a complete computer vision pipeline solution that is optimized on Intel® architecture. This article will focus on the Intel® Media SDK component of the toolkit. The Intel Media SDK is a high-level API for specific video processing operations: decode, process, and encode. It supports H.265, H.264, MPEG-2, and more codecs. Video processing can be used to resize, scale, de-interlace, color conversion, de-noise, sharpen, and more. The Intel Media SDK works in the background to leverage hardware acceleration on Intel® architecture with optimized software fallback for each hardware platform. Thus, developers do not need to change the code from platform to platform and can focus more on the application itself rather than on hardware optimization. +- **Intel Media SDK**: SDK used for developing video applications with state-of-the-art libraries, tools, and samples. They are all accessible via a single API that enables hardware acceleration for fast video transcoding, image processing, and media workflows. The two main paths for application developers to access GPU media processing capabilities are Intel® Media SDK and Intel® SDK for OpenCL™ applications. +- **DPDK**: Data Plane Development Kit (DPDK) consists of libraries to accelerate packet-processing workloads that run on a wide variety of CPU architectures. +- **Intel IPP**: Intel® Integrated Performance Primitives (Intel® IPP) is an extensive library of ready-to-use, domain-specific functions that are highly optimized for diverse Intel® architectures. +- **Intel® MKL**: Intel® Math Kernel Library (Intel® MKL) optimizes code with minimal effort for future generations of Intel® processors. It is compatible with your choice of compilers, languages, operating systems, and linking and threading models. 
-## Edge Services and Network Functions -OpenNESS supports a rich set of reference and commercial real-world edge services (applications) and network functions. These applications and network functions are a vehicle for validating functionality and performance KPIs for Edge. +## Converged Edge Reference Architecture -The following is a subset of supported edge applications: -- Smart city App: This end-to-end sample app implements aspects of smart city sensing, analytics, and management, utilizing CPU or VCA. -- CDN Transcode and Content Delivery App: The CDN Transcode sample app is an Open Visual Cloud software stack with all required open-source ingredients integrated to provide an out-of-the-box CDN media transcode service, including live streaming and video on demand. It provides a Docker-based software development environment for developers to easily build specific applications. -- Edge Insights: The Edge Insights application is designed to enable secure ingestion, processing, storage and management of data, and near real-time (~10ms), event-driven control, across a diverse set of industrial protocols. +Converged Edge Reference Architecture (CERA) is a set of pre-integrated & readily deployable HW/SW Reference Architectures powered by OpenNESS to significantly accelerate Edge Platform development. -The following is a subset of supported reference network functions: -- gNodeB or eNodeB: 5G or 4G base station implementation on Intel architecture based on Intel’s FlexRAN. -- Application Function (AF): AF interacts with the 5G control plane functions through 3GPP standard SBIs (Service-Based Interfaces) for enabling application influence on traffic routing, packet flow description, policy authorization, and core network notifications. -- Network Exposure Function (NEF): NEF securely exposes services and features of the 5G core. NEF interacts with the 5G control plane functions for providing the traffic influence and Packet Flow Description (PFD) services. -- User Plane Function (UPF): UPF is responsible for packet routing and forwarding, packet inspection, and QoS handling. The UPF may optionally integrate a Deep Packet Inspection (DPI) for packet inspection and classification. +OpenNESS includes an Ansible\* playbook that acts as a single interface for users to deploy various types of CERAs. The playbook organizes all of the above microservices, Kubernetes extensions, enhancements, and optimizations under easy to deploy node types called **flavors**, implemented as Ansible roles. -## OpenNESS Experience Kit -The OpenNESS Experience Kit is an Ansible\* playbook that acts as a single interface for users to deploy OpenNESS. The kit organizes all of the above microservices, Kubernetes extensions, enhancements, and optimizations under easy to deploy node types called flavors, implemented as Ansible roles. +For example, a user deploying a network edge at a cell site can choose the Access Edge flavor to deploy a node with all the microservices, Kubernetes extensions, enhancements, and optimizations required for a RAN node. -For example, a user deploying a network edge at a cell site can choose the Radio Access Network (RAN) flavor to deploy a node with all the microservices, Kubernetes extensions, enhancements, and optimizations required for a RAN node. +### CERA Minimal Flavor -### Minimal flavor This flavor supports the installation of the minimal set of components from OpenNESS and, it is typically used as a starting point for creating a custom node. 
-### RAN node flavor -RAN node typically refers to RAN Distributed Unit (DU) and Centralized Unit (CU) 4G/5G nodes deployed on the edge or far edge. In some cases, DU may be integrated into the radio. The example RAN deployment flavor uses FlexRAN as a reference DU. +### CERA Access Edge Flavor + +This flavor typically refers to RAN Distributed Unit (O-DU) and Centralized Unit (O-CU) 4G/5G nodes deployed on the access edge. In some cases, DU may be integrated into the radio. The example RAN deployment flavor uses FlexRAN as a reference DU. + +Link: [CERA Access Edge Overview](https://github.com/open-ness/ido-specs/blob/master/doc/reference-architectures/ran/openness_ran.md) + +Link: [ORAN Fronthaul](https://github.com/open-ness/ido-specs/blob/master/doc/reference-architectures/ran/openness_xran.md) + +### CERA Near Edge Flavor + +CERA Near Edge Flavor provides reference for edge deployments at aggregation points, mini central office and presents a scalable solution across the near edge network scaling from a single edge node to a multi cluster deployment services many edge nodes. The reference solution will used for deployments for example involving edge node with Core User plane function and Applications an services. + +Link: [CERA Near Edge Overview](https://github.com/open-ness/ido-specs/blob/master/doc/reference-architectures/CERA-Near-Edge.md) + +### CERA On Prem Edge and Private Wireless + +CERA 5G On Prem deployment focuses on On Premises, Private Wireless and Ruggedized Outdoor deployments, presenting a scalable solution across the On Premises edge. + +Link: [CERA On Prem Edge](https://github.com/open-ness/ido-specs/blob/master/doc/reference-architectures/CERA-5G-On-Prem.md) + +### CERA SD-WAN Edge Flavor + +CERA SD-WAN Edge flavor provides a reference deployment with Kubernetes enhancements for High performance compute and networking for a SD-WAN node that runs Applications, Services and SD-WAN CNF. AI/ML application and services are targeted in this flavor with support for Hardware offload for inferencing. + +### CERA SD-WAN Hub Flavor + +CERA SD-WAN Edge flavor provides a reference deployment with Kubernetes enhancements for High performance compute and networking for a SD-WAN node that runs SD-WAN CNF. -![RAN node flavor](arch-images/openness-flexran.png) +Link: [CERA SD-WAN](https://github.com/open-ness/ido-specs/blob/master/doc/reference-architectures/openness_sdwan.md) -### Core node flavor -Core nodes typically refer to user plane and control plane core workloads for 4G and 5G deployed on the edge and central location. In most edge deployments, UPF/SPGW-U plane is located on the edge along with the applications and services. The following diagram shows how OpenNESS can be used to deploy both user plane and control plane core nodes. Both user plane and control plane are supported as separate flavors. +### CERA Media Analytics Flavor with VCAC-A -![Core node flavor](arch-images/openness-core.png) +CERA Media Analytics Flavor provides Kubernetes enhancements for High performance compute, VPU and GPU offload device plugins for Intel VCAC-A card. This flavor can be tested using the Smart City reference app available in OpenNESS. -### Application node flavor -Application nodes typically refer to nodes running edge applications and services. The applications can be Smart City, CDN, AR/VR, Cloud Gaming, etc. In the example flavor below, the Smart City application pipeline is used. 
+### CERA Media Analytics Flavor -Under the application node, the following flavors are supported: -- Media Analytics Flavor -- Media Analytics Flavor with VCAC-C -- CDN Transcode -- CDN Content Delivery +CERA Media Analytics Flavor is similar to the CERA Media Analytics Flavor with VCAC-A except the Analytics pipeline runs on the Intel CPU rather than the Intel VCAC-A card. This flavor can be tested using the Smart City reference app available in OpenNESS. -![Application node flavor](arch-images/openness-ovc.png) +### CERA CDN Transcode Flavor -### Microsoft Azure OpenNESS -This flavor supports the installation of an OpenNESS Kubernetes cluster on a Microsoft\* Azure\* VM. This is typically used by a customer who requires the same Kubernetes cluster service on multiple clouds. +CERA for CDN transcode flavor provides key OpenNESS Kubernetes enhancements for high performance compute. The flavor can be tested using the CDN Transcode Sample which is an Open Visual Cloud software stack with all required open source ingredients well integrated to provide out-of-box CDN media transcode service, including live streaming and video on demand. It also provides Docker-based media delivery software development environment upon which developer can easily build their specific applications. -### Converged Edge Reference Architecture (CERA) Flavor -CERA from Intel provides foundational recipes that converge IT as well as OT and NT workloads on various on-premise and network edge platforms. +### CERA CDN Caching Flavor -In future OpenNESS releases, various CERA flavors will be available. Each of these recipes would include combinations of other OpenNESS flavors (e.g., RAN + UPF + Apps) +CERA for CDN transcode flavor provides key OpenNESS Kubernetes enhancements for high performance Networking using SR-IOV, NVMe and SSD device support. The flavor can be tested using the CDN Caching sample application in OpenNESS. + +### CERA Core Control Plane Flavor + +CERA for Core Control Plane Flavor provides key OpenNESS Kubernetes enhancements for core network control plane network functions. + +### CERA Core User Plane Flavor + +CERA for Core User Plane Flavor provides key OpenNESS Kubernetes enhancements for high performance Computing and Networking using SR-IOV for reference core network user plane network functions. + +Link: [CERA Core User Plane](https://github.com/open-ness/ido-specs/blob/master/doc/reference-architectures/core-network/openness_upf.md) + +Link: [5G Non Standalone deployment](https://github.com/open-ness/ido-specs/blob/master/doc/reference-architectures/core-network/openness_5g_nsa.md) + +Link: [5G Standalone deployment](https://github.com/open-ness/ido-specs/blob/master/doc/reference-architectures/core-network/openness_ngc.md) + +### CERA for Untrusted Non-3GPP Access Flavor + +CERA for Untrusted Non-3GPP Access Flavor provides key OpenNESS Kubernetes enhancements for high performance Computing and Networking using SR-IOV for reference Untrusted Non-3GPP Access as defined by 3GPP Release 15. + +## Reference Edge Apps and Network Functions + +OpenNESS supports a rich set of reference and commercial real-world edge services (applications) and network functions. These applications and network functions are a vehicle for validating functionality and performance KPIs for Edge. + +The following is a subset of supported edge applications: + +- **Smart city App**: This end-to-end sample app implements aspects of smart city sensing, analytics, and management, utilizing CPU or VCA. 
+- **CDN Transcode and Content Delivery App**: The CDN Transcode sample app is an Open Visual Cloud software stack with all required open-source ingredients integrated to provide an out-of-the-box CDN media transcode service, including live streaming and video on demand. It provides a Docker-based software development environment for developers to easily build specific applications. +- **Edge Insights**: The Edge Insights application is designed to enable secure ingestion, processing, storage, and management of data, as well as near real-time (~10ms) event-driven control, across a diverse set of industrial protocols. + +The following is a subset of supported reference network functions: + +- **gNodeB or eNodeB**: 5G or 4G base station implementation on Intel architecture based on Intel’s FlexRAN. + +Link: [Documents covering OpenNESS supported Reference Architectures](https://github.com/open-ness/specs/tree/master/doc/reference-architectures) +## OpenNESS Optimized Commercial Applications + +OpenNESS Optimized Commercial applications are available at [Intel® Network Builders](https://networkbuilders.intel.com/commercial-applications) + +### OpenNESS DevKit for Microsoft Azure + +This devkit supports the installation of an OpenNESS Kubernetes cluster on a Microsoft* Azure* VM. It is typically used by customers who want to develop applications and services for the edge using OpenNESS building blocks. ## Other References + - [3GPP_23401] 3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; General Packet Radio Service (GPRS) enhancements for Evolved Universal Terrestrial Radio Access Network (E-UTRAN) access. - [3GPP_23214] 3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; Architecture enhancements for control and user plane separation of EPC nodes; Stage 2. - [ETSI_MEC_003] ETSI GS MEC 003 V2.1.1 Multi-access Edge Computing (MEC): Framework and Reference Architecture @@ -259,7 +336,7 @@ In future OpenNESS releases, various CERA flavors will be available. 
Each of the ## List of Abbreviations | Acronym | Definition | -|----------|-------------------------------------------------| +| -------- | ----------------------------------------------- | | 3GPP | Third Generation Partnership Project | | AF | Application Function | | AMF | Access and Mobility Mgmt Function | diff --git a/doc/dataplane/iap-images/iap1.png b/doc/building-blocks/dataplane/iap-images/iap1.png similarity index 100% rename from doc/dataplane/iap-images/iap1.png rename to doc/building-blocks/dataplane/iap-images/iap1.png diff --git a/doc/dataplane/iap-images/iap2.png b/doc/building-blocks/dataplane/iap-images/iap2.png similarity index 100% rename from doc/dataplane/iap-images/iap2.png rename to doc/building-blocks/dataplane/iap-images/iap2.png diff --git a/doc/dataplane/iap-images/iap3.png b/doc/building-blocks/dataplane/iap-images/iap3.png similarity index 100% rename from doc/dataplane/iap-images/iap3.png rename to doc/building-blocks/dataplane/iap-images/iap3.png diff --git a/doc/dataplane/index.html b/doc/building-blocks/dataplane/index.html similarity index 100% rename from doc/dataplane/index.html rename to doc/building-blocks/dataplane/index.html diff --git a/doc/dataplane/nts-images/nts1.png b/doc/building-blocks/dataplane/nts-images/nts1.png similarity index 100% rename from doc/dataplane/nts-images/nts1.png rename to doc/building-blocks/dataplane/nts-images/nts1.png diff --git a/doc/dataplane/nts-images/nts2.png b/doc/building-blocks/dataplane/nts-images/nts2.png similarity index 100% rename from doc/dataplane/nts-images/nts2.png rename to doc/building-blocks/dataplane/nts-images/nts2.png diff --git a/doc/dataplane/openness-interapp.md b/doc/building-blocks/dataplane/openness-interapp.md similarity index 100% rename from doc/dataplane/openness-interapp.md rename to doc/building-blocks/dataplane/openness-interapp.md diff --git a/doc/dataplane/openness-ovn.md b/doc/building-blocks/dataplane/openness-ovn.md similarity index 100% rename from doc/dataplane/openness-ovn.md rename to doc/building-blocks/dataplane/openness-ovn.md diff --git a/doc/dataplane/openness-userspace-cni.md b/doc/building-blocks/dataplane/openness-userspace-cni.md similarity index 100% rename from doc/dataplane/openness-userspace-cni.md rename to doc/building-blocks/dataplane/openness-userspace-cni.md diff --git a/doc/dataplane/ovn_images/openness_ovn.png b/doc/building-blocks/dataplane/ovn_images/openness_ovn.png similarity index 100% rename from doc/dataplane/ovn_images/openness_ovn.png rename to doc/building-blocks/dataplane/ovn_images/openness_ovn.png diff --git a/doc/dataplane/ovn_images/openness_ovnovs.png b/doc/building-blocks/dataplane/ovn_images/openness_ovnovs.png similarity index 100% rename from doc/dataplane/ovn_images/openness_ovnovs.png rename to doc/building-blocks/dataplane/ovn_images/openness_ovnovs.png diff --git a/doc/dataplane/ovn_images/ovncni_cluster.png b/doc/building-blocks/dataplane/ovn_images/ovncni_cluster.png similarity index 100% rename from doc/dataplane/ovn_images/ovncni_cluster.png rename to doc/building-blocks/dataplane/ovn_images/ovncni_cluster.png diff --git a/doc/enhanced-platform-awareness/index.html b/doc/building-blocks/emco/index.html similarity index 88% rename from doc/enhanced-platform-awareness/index.html rename to doc/building-blocks/emco/index.html index d31f9d8c..0d4fe09d 100644 --- a/doc/enhanced-platform-awareness/index.html +++ b/doc/building-blocks/emco/index.html @@ -10,5 +10,5 @@ ---

You are being redirected to the OpenNESS Docs.

diff --git a/doc/building-blocks/emco/openness-emco-images/emco-dig-create.png b/doc/building-blocks/emco/openness-emco-images/emco-dig-create.png new file mode 100755 index 00000000..786baa9f Binary files /dev/null and b/doc/building-blocks/emco/openness-emco-images/emco-dig-create.png differ diff --git a/doc/building-blocks/emco/openness-emco-images/emco-dig-instantiate.png b/doc/building-blocks/emco/openness-emco-images/emco-dig-instantiate.png new file mode 100755 index 00000000..1f63e447 Binary files /dev/null and b/doc/building-blocks/emco/openness-emco-images/emco-dig-instantiate.png differ diff --git a/doc/building-blocks/emco/openness-emco-images/emco-geo-distributed.png b/doc/building-blocks/emco/openness-emco-images/emco-geo-distributed.png new file mode 100644 index 00000000..60406673 Binary files /dev/null and b/doc/building-blocks/emco/openness-emco-images/emco-geo-distributed.png differ diff --git a/doc/building-blocks/emco/openness-emco-images/emco-istio-arch.png b/doc/building-blocks/emco/openness-emco-images/emco-istio-arch.png new file mode 100644 index 00000000..ef399208 Binary files /dev/null and b/doc/building-blocks/emco/openness-emco-images/emco-istio-arch.png differ diff --git a/doc/building-blocks/emco/openness-emco-images/emco-istio-auth.png b/doc/building-blocks/emco/openness-emco-images/emco-istio-auth.png new file mode 100644 index 00000000..8c016ca0 Binary files /dev/null and b/doc/building-blocks/emco/openness-emco-images/emco-istio-auth.png differ diff --git a/doc/building-blocks/emco/openness-emco-images/emco-register-controllers.png b/doc/building-blocks/emco/openness-emco-images/emco-register-controllers.png new file mode 100755 index 00000000..0a5add57 Binary files /dev/null and b/doc/building-blocks/emco/openness-emco-images/emco-register-controllers.png differ diff --git a/doc/building-blocks/emco/openness-emco-images/emco-status-monitoring.png b/doc/building-blocks/emco/openness-emco-images/emco-status-monitoring.png new file mode 100755 index 00000000..a145f937 Binary files /dev/null and b/doc/building-blocks/emco/openness-emco-images/emco-status-monitoring.png differ diff --git a/doc/building-blocks/emco/openness-emco-images/openness-emco-arch.png b/doc/building-blocks/emco/openness-emco-images/openness-emco-arch.png new file mode 100644 index 00000000..b995316c Binary files /dev/null and b/doc/building-blocks/emco/openness-emco-images/openness-emco-arch.png differ diff --git a/doc/building-blocks/emco/openness-emco-images/openness-emco-lccl.png b/doc/building-blocks/emco/openness-emco-images/openness-emco-lccl.png new file mode 100644 index 00000000..833e0168 Binary files /dev/null and b/doc/building-blocks/emco/openness-emco-images/openness-emco-lccl.png differ diff --git a/doc/building-blocks/emco/openness-emco-images/openness-emco-smtc.png b/doc/building-blocks/emco/openness-emco-images/openness-emco-smtc.png new file mode 100644 index 00000000..090b2ab1 Binary files /dev/null and b/doc/building-blocks/emco/openness-emco-images/openness-emco-smtc.png differ diff --git a/doc/building-blocks/emco/openness-emco-images/openness-emco-smtcui.png b/doc/building-blocks/emco/openness-emco-images/openness-emco-smtcui.png new file mode 100644 index 00000000..8796aa48 Binary files /dev/null and b/doc/building-blocks/emco/openness-emco-images/openness-emco-smtcui.png differ diff --git a/doc/building-blocks/emco/openness-emco-images/openness-emco-topology.png b/doc/building-blocks/emco/openness-emco-images/openness-emco-topology.png new file mode 100644 
index 00000000..90b40f74 Binary files /dev/null and b/doc/building-blocks/emco/openness-emco-images/openness-emco-topology.png differ diff --git a/doc/building-blocks/emco/openness-emco.md b/doc/building-blocks/emco/openness-emco.md new file mode 100644 index 00000000..7ffd4ac0 --- /dev/null +++ b/doc/building-blocks/emco/openness-emco.md @@ -0,0 +1,468 @@ +```text +SPDX-License-Identifier: Apache-2.0 +Copyright (c) 2020 Intel Corporation +``` + +# Edge Multi-Cluster Orchestrator (EMCO) + +- [Background](#background) +- [EMCO Introduction](#emco-introduction) + - [EMCO Terminology](#emco-terminology) + - [EMCO Architecture](#emco-architecture) + - [Cluster Registration](#cluster-registration) + - [Distributed Application Scheduler](#distributed-application-scheduler) + - [Lifecycle Operations](#lifecycle-operations) + - [Network Configuration Management](#network-configuration-management) + - [Lifecycle Operations](#lifecycle-operations-1) + - [Distributed Cloud Manager](#distributed-cloud-manager) + - [Lifecycle Operations](#lifecycle-operations-2) + - [Level-1 Logical Clouds](#level-1-logical-clouds) + - [Level-0 Logical Clouds](#level-0-logical-clouds) + - [OVN Action Controller](#ovn-action-controller) + - [Traffic Controller](#traffic-controller) + - [Generic Action Controller](#generic-action-controller) + - [Resource Synchronizer](#resource-synchronizer) + - [Placment and Action Controllers in EMCO](#placment-and-action-controllers-in-emco) + - [Status Monitoring and Queries in EMCO](#status-monitoring-and-queries-in-emco) + - [EMCO Terminology](#emco-terminology-1) + - [EMCO API](#emco-api) + - [EMCO Authentication and Authorization](#emco-authentication-and-authorization) + - [EMCO Installation With OpenNESS Flavor](#emco-installation-with-openness-flavor) +- [EMCO Example: SmartCity Deployment](#emco-example-smartcity-deployment) + - [Cluster Setup](#cluster-setup) + - [Project Setup](#project-setup) + - [Logical Cloud Setup](#logical-cloud-setup) + - [Deploy SmartCity Application](#deploy-smartcity-application) + - [SmartCity Termination](#smartcity-termination) + +## Background +Edge Multi-Cluster Orchestration(EMCO), an OpenNESS Building Block, is a Geo-distributed application orchestrator for Kubernetes\*. EMCO operates at a higher level than Kubernetes\* and interacts with multiple of edges and clouds running Kubernetes. The main objective of EMCO is automation of the deployment of applications and services across multiple clusters. It acts as a central orchestrator that can manage edge services and network functions across geographically distributed edge clusters from different third parties. + +Increasingly we see a requirement of deploying 'composite applications' in multiple geographical locations. Some of the catalysts for this change are: + +- Latency - requirements for new low latency application use cases such as AR/VR. Need for ultra low latency response needed in IIOT and other cases. 
This requires running some parts of the applications on edges close to the user. +- Bandwidth - processing data on edges to avoid the costs associated with transporting the data to clouds for processing. +- Context/Proximity - running some parts of the applications that require local context on edges near the user. +- Privacy/Legal - some data can have legal requirements to not leave a geographic location. + +![OpenNESS EMCO](openness-emco-images/emco-geo-distributed.png) +_Figure 1 - Orchestrate Geo-Distributed Edge Applications_ + +> **NOTE**: A 'composite application' is a combination of multiple applications with each application packaged as a Helm chart. Based on the deployment intent, various applications of the composite application get deployed at various locations, and get replicated in multiple locations. + +Life cycle management of composite applications is complex. Instantiation and termination of the composite application across multiple K8s clusters (edges and clouds), monitoring the status of the composite application deployment, and Day 2 operations (modification of the deployment intent, upgrades, etc.) are a few of these complex operations. + +The number of K8s clusters (edges or clouds) could be in the tens of thousands, the number of composite applications that need to be managed could be in the hundreds, the number of applications in a composite application could be in the tens, and the number of microservices in each application of the composite application can be in the tens. Moreover, there can be multiple deployments of the same composite application for different purposes. To reduce this complexity, all of these operations must be automated: there should be one-click deployment of the composite applications and one simple dashboard showing the status of the composite application deployment at any time. Hence, there is a need for a multi-edge and multi-cloud distributed application orchestrator. + +Compared with other multi-cluster orchestrators, EMCO focuses on the following functionalities: +- Enrolling multiple geographically distributed OpenNESS clusters and third-party cloud clusters. +- Orchestrating composite applications (composed of multiple individual applications) across different clusters. +- Deploying edge services and network functions onto different nodes spread across different clusters. +- Monitoring the health of the deployed edge services/network functions across different clusters. +- Orchestrating edge services and network functions with deployment intents based on compute, acceleration, and storage requirements. +- Supporting multiple tenants from different enterprises while ensuring confidentiality and full isolation between the tenants. + + +The following figure shows the topology overview for OpenNESS EMCO orchestration with multiple edge and cloud clusters. It also shows an example of deploying SmartCity with EMCO. +![OpenNESS EMCO](openness-emco-images/openness-emco-topology.png) + +_Figure 2 - Topology Overview with OpenNESS EMCO_ + +All the managed edge clusters and cloud clusters are connected with the EMCO cluster through the WAN network. +- The central orchestration (EMCO) cluster can be installed and provisioned by using the [OpenNESS Central Orchestrator Flavor](https://github.com/open-ness/specs/blob/master/doc/flavors.md). +- The edge clusters and the cloud cluster can be installed and provisioned by using the [OpenNESS Flavor](https://github.com/open-ness/specs/blob/master/doc/flavors.md). 
+- The composite application - [SmartCity](https://github.com/OpenVisualCloud/Smart-City-Sample) is composed of two parts: edge application and cloud (web) application. + - The edge application executes media processing and analytics on multiple edge clusters to reduce latency. + - The cloud application is like a web application for additional post-processing, such as calculating statistics and display/visualization on the cloud cluster side. + - The EMCO user can deploy the SmartCity applications across the clusters. Besides that, EMCO allows the operator to override configurations and profiles to satisfy deployment needs. + +This document aims to familiarize the user with EMCO and [OpenNESS deployment flavor](https://github.com/open-ness/specs/blob/master/doc/flavors.md) for EMCO installation and provision, and provide instructions accordingly. + +## EMCO Introduction + +### EMCO Terminology + +| Term | Description | +|:-----: | ----- | +| AppContext |

The AppContext is a set of records maintained in the EMCO etcd data store that tracks the collection of resources and clusters associated with a deployable EMCO resource (e.g., a Deployment Intent Group).

| +| Cluster Provider |

The provider is someone who owns clusters and registers them.

| +| Projects |

The project resource provides a means for a collection of applications to be grouped. Several applications can exist under a specific project. Projects allow applications to be grouped under a common tenant.

| +| Composite application |

The composite application is a combination of multiple applications. Based on the deployment intent, various applications of the composite application get deployed at various locations. Also, some applications of the composite application get replicated in multiple locations.

| +| Deployment Intent |

EMCO does not expect DevOps admins to edit the Helm charts provided by application/network-function vendors. Any customization and additional K8s resources that need to be present with the application are specified as deployment intents.

| +| Deployment Intent Group |

The Deployment Intent Group represents an instance of a composite application that can be deployed with a specified composite profile and a specified set of deployment intents which will control the placement and other configuration of the application resources.

| +| Placement |

EMCO supports creating generic placement intents for a given composite application. Normally, the EMCO scheduler calls placement controllers first to figure out the edge/cloud locations for a given application. Finally, it works with the 'resource synchronizer & status collector' to deploy K8s resources on the various edge/cloud clusters.

| + +### EMCO Architecture +The following diagram depicts a high level overview of the EMCO architecture. +![OpenNESS EMCO](openness-emco-images/openness-emco-arch.png) + +_Figure 3 - EMCO Architecture_ + +- Cluster Registration Controller registers clusters by cluster owners. +- Distributed Application Scheduler provides a simplified and extensible placement. +- Network Configuration Management handles creation/management of virtual and provider networks. +- Hardware Platform Aware Controller enables scheduling with auto-discovery of platform features/ capabilities. +- Distributed Cloud Manager presents a single logical cloud from multiple edges. +- Secure Mesh Controller auto-configures both service mesh (ISTIO) and security policy (NAT, firewall). +- Secure WAN Controller automates secure overlays across edge groups. +- Resource Syncronizer manages instantiation of resources to clusters. +- Monitoring covers distributed application. + +#### Cluster Registration +A microservice exposes RESTful API. User can register cluster providers and clusters of those providers via these APIs. After preparing edge clusters and cloud clusters, which can be any Kubernetes\* cluster, user can onboard those clusters to EMCO by creating a cluster provider and then adding clusters to the cluster provider. After cluster providers are created, the KubeConfig files of edge and cloud clusters should be provided to EMCO as part of the multi-part POST call to the Cluster API. + +Additionally, after a cluster is created, labels and key value pairs can be added to the cluster via the EMCO API. Clusters can be specified by label when preparing placement intents. +> **NOTE**: The cluster provider is someone who owns clusters and registers them to EMCO. If an Enterprise has clusters, for example from AWS, then the cluster provider for those clusters from AWS is still considered as from that Enterprise. AWS is not the provider. Here, the provider is someone who owns clusters and registers them here. Since AWS does not register their clusters here, AWS is not considered cluster provider in this context. + +#### Distributed Application Scheduler +The distributed application scheduler microservice: +- Project Management provides multi-tenancy in the application from a user perspective. +- Composite App Management manages composite apps that are collections of Helm Charts, one per application. +- Composite Profile Management manages composite profiles that are collections of profile, one per application. +- Deployment Intent Group Management manages Intents for composite applications. +- Controller Registration manages placement and action controller registration, priorities etc. +- Status Notifier framework allows user to get on-demand status updates or notifications on status updates. +- Scheduler: + - Placement Controllers: Generic Placement Controller. + - Action Controllers. + +##### Lifecycle Operations +The Distributed Application Scheduler supports operations on a deployment intent group resource to instantiate the associated composite application with any placement, and action intents performed by the registered placement and action controllers. The basic flow of lifecycle operations on a deployment intent group after all the supporting resources have been created via the APIs are: +- approve: marks that the deployment intent group has been approved and is ready for instantiation. 
+- instantiate: the Distributed Application Scheduler prepares the application resources for deployment, and applies placement and action intents before invoking the Resource Synchronizer to deploy them to the intended remote clusters. +- status: (may be invoked at any step) provides information on the status of the deployment intent group. +- terminate: terminates the application resources of an instantiated application from all of the clusters to which it was deployed. In some cases, if a remote cluster is intermittently unreachable, the Resource Synchronizer may still retry the instantiate operation for that cluster. The terminate operation will cause the instantiate operation to complete (i.e., fail) before the termination operation is performed. +- stop: In some cases, if the remote cluster is intermittently unreachable, the Resource Synchronizer will continue retrying an instantiate or terminate operation. The stop operation can be used to force the retry operation to stop, and the instantiate or terminate operation will complete (with a failed status). In the case of terminate, this allows the deployment intent group resource to be deleted via the API, since deletion is prevented until a deployment intent group resource has reached a completed terminate operation status. +Refer to [EMCO Resource Lifecycle Operations](https://github.com/open-ness/EMCO/tree/main/docs/user/Resource_Lifecycle.md) for more details. + +#### Network Configuration Management +The network configuration management (NCM) microservice provides: +- Provider Network Management to create provider networks. +- Virtual Network Management to create dynamic virtual networks. +- Controller Registration, which manages network plugin controllers, priorities, etc. +- A Status Notifier framework, which allows users to get on-demand status updates or notifications on status updates. +- A Scheduler with a built-in controller - the OVN-for-K8s-NFV Plugin Controller. + +##### Lifecycle Operations +The Network Configuration Management microservice supports operations on the network intents of a cluster resource to instantiate the associated provider and virtual networks that have been defined via the API for the cluster. The basic flow of lifecycle operations on a cluster, after all the supporting network resources have been created via the APIs, is: +- apply: the Network Configuration Management microservice prepares the network resources and invokes the Resource Synchronizer to deploy them to the designated cluster. +- status: (may be invoked at any step) provides information on the status of the cluster networks. +- terminate: terminates the network resources from the cluster to which they were deployed. In some cases, if a remote cluster is intermittently unreachable, the Resource Synchronizer may still retry the instantiate operation for that cluster. The terminate operation will cause the instantiate operation to complete (i.e., fail) before the termination operation is performed. +- stop: In some cases, if the remote cluster is intermittently unreachable, the Resource Synchronizer will continue retrying an instantiate or terminate operation. The stop operation can be used to force the retry operation to stop, and the instantiate or terminate operation will be completed (with a failed status). In the case of terminate, this allows the deployment intent group resource to be deleted via the API, since deletion is prevented until a deployment intent group resource has reached a completed terminate operation status. 
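Both of the lifecycle flows above are driven through plain REST calls against the EMCO microservices. As an illustration, a minimal sketch of the deployment intent group flow is shown below; the host/port and resource names are placeholders borrowed from the SmartCity example later in this document, and the exact status query path should be confirmed against the EMCO API specification.

```shell
# Minimal sketch of the deployment intent group lifecycle via the orchestrator
# REST API. Host/port and resource names are placeholders taken from the
# SmartCity example later in this document.
ORCH=http://localhost:31298/v2
DIG=projects/project_smtc/composite-apps/composite_smtc/v1/deployment-intent-groups/smtc-deployment-intent-group

# Approve and then instantiate the deployment intent group
curl -X POST ${ORCH}/${DIG}/approve
curl -X POST ${ORCH}/${DIG}/instantiate

# Query status at any point (the exact status path and parameters are described
# in the EMCO Resource Lifecycle Operations document referenced above)
curl ${ORCH}/${DIG}/status
```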
+ +#### Distributed Cloud Manager +The Distributed Cloud Manager (DCM) provides the Logical Cloud abstraction and effectively completes the concept of "multi-cloud". One Logical Cloud is a grouping of one or many clusters, each with their own control plane, specific configurations and geo-location, which get partitioned for a particular EMCO project. This partitioning is made via the creation of distinct, isolated namespaces in each of the Kubernetes\* clusters that thus make up the Logical Cloud. + +A Logical Cloud is the overall target of a Deployment Intent Group and is a mandatory parameter (the specific applications under it further refine what gets run and in which location). A Logical Cloud must be explicitly created and instantiated before a Deployment Intent Group can be instantiated. + +Due to the close relationship with Clusters, which are provided by Cluster Registration (clm) above, it is important to understand the mapping between the two. A Logical Cloud groups many Clusters together but a Cluster may also be grouped by multiple Logical Clouds, effectively turning the cluster multi-tenant. The partitioning/multi-tenancy of a particular Cluster, via the different Logical Clouds, is done today at the namespace level (different Logical Clouds access different namespace names, and the name is consistent across the multiple clusters of the Logical Cloud). + +![Mapping between Logical Clouds and Clusters](openness-emco-images/openness-emco-lccl.png) + +_Figure 4 - Mapping between Logical Clouds and Clusters_ + +##### Lifecycle Operations +Prerequisites to using Logical Clouds: +* with the project-less Cluster Registration API, create the cluster providers, clusters and optionally cluster labels. +* with the Distributed Application Scheduler API, create a project which acts as a tenant in EMCO. + +The basic flow of lifecycle operations to get a Logical Cloud up and running via the Distributed Cloud Manager API is: +* Create a Logical Cloud specifying the following attributes: + - Level: either 1 or 0, depending on whether an admin or a custom/user cloud is sought - more on the differences below. + - (*for Level-1 only*) Namespace name - the namespace to use in all of the Clusters of the Logical Cloud. + - (*for Level-1 only*) User name - the name of the user that will be authenticating to the Kubernetes\* APIs to access the namespaces created. + - (*for Level-1 only*) User permissions - permissions that the user specified will have in the namespace specified, in all of the clusters. +* (*for Level-1 only*) Create resource quotas and assign them to the Logical Cloud created: this specifies what quotas/limits the user will face in the Logical Cloud, for each of the Clusters. +* Assign the Clusters previously created with the project-less Cluster Registration API to the newly-created Logical Cloud. +* Instantiate the Logical Cloud. All of the clusters assigned to the Logical Cloud are automatically set up to join the Logical Cloud. Once this operation is complete, the Distributed Application Scheduler's lifecycle operations can be followed to deploy applications on top of the Logical Cloud. + +Apart from the creation/instantiation of Logical Clouds, the following operations are also available: +* Terminate a Logical Cloud - this removes all of the Logical Cloud -related resources from all of the respective Clusters. +* Delete a Logical Cloud - this eliminates all traces of the Logical Cloud in EMCO. 
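As a rough sketch of this flow through the Distributed Cloud Manager API, the calls behind a Level-0 Logical Cloud setup might look like the following; the host/port, project, provider, and cluster names are placeholders taken from the SmartCity example later in this document, and the request bodies are abbreviated and illustrative only (the cli-scripts in that example wrap the equivalent calls).

```shell
# Sketch of the DCM calls behind a Level-0 Logical Cloud setup.
# Names and payload fields are illustrative placeholders.
DCM=http://localhost:31877/v2

# 1. Create a Level-0 logical cloud under the project
curl -X POST ${DCM}/projects/project_smtc/logical-clouds \
  -d '{"metadata": {"name": "default"}, "spec": {"level": "0"}}'

# 2. Add a reference to a previously registered cluster
curl -X POST ${DCM}/projects/project_smtc/logical-clouds/default/cluster-references \
  -d '{"metadata": {"name": "lc-edge01"}, "spec": {"cluster-provider": "smartcity-cluster-provider", "cluster-name": "edge01"}}'

# 3. Instantiate the logical cloud on all referenced clusters
curl -X POST ${DCM}/projects/project_smtc/logical-clouds/default/instantiate
```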
+ +##### Level-1 Logical Clouds +Logical Clouds were introduced to group and partition clusters in a multi-tenant way and across boundaries, improving flexibility and scalability. A Level-1 Logical Cloud is the default type of Logical Cloud providing just that much. When projects request a Logical Cloud to be created, they provide what permissions are available, resource quotas and clusters that compose it. The Distributed Cloud Manager, alongside the Resource Synchronizer, sets up all the clusters accordingly, with the necessary credentials, namespace/resources, and finally generating the kubeconfig files used to authenticate/reach each of those clusters in the context of the Logical Cloud. + +##### Level-0 Logical Clouds +In some use cases, and in the administrative domains where it makes sense, a project may want to access raw, unmodified, administrator-level clusters. For such cases, no namespaces need to be created and no new users need to be created or authenticated in the API. To solve this, the Distributed Cloud Manager introduces Level-0 Logical Clouds, which offer the same consistent interface as Level-1 Logical Clouds to the Distributed Application Scheduler. Being of type Level-0 means "the lowest-level", or the administrator level. As such, no changes will be made to the clusters themselves. Instead, the only operation that takes place is the reuse of credentials already provided via the Cluster Registration API for the clusters assigned to the Logical Cloud (instead of generating new credentials, namespace/resources and kubeconfig files). + +#### OVN Action Controller +The OVN Action Controller (ovnaction) microservice is an action controller which may be registered and added to a deployment intent group to apply specific network intents to resources in the composite application. It provides the following functionalities: +- Network intent APIs which allow specification of network connection intents for resources within applications. +- On instantiation of a deployment intent group configured to utilize the ovnaction controller, network interface annotations will be added to the pod template of the identified application resources. +- ovnaction supports specifying interfaces which attach to networks created by the Network Configuration Management microservice. + +#### Traffic Controller +The traffic controller microservice provides a way to create network policy resources across edge clusters. It provides inbound RESTful APIs to create intents to open the traffic from clients, and provides change and delete APIs for update and deletion of traffic intents. Using the information provided through intents, it also creates a network policy resource for each of the application servers on the corresponding edge cluster. +> **NOTE**:For network policy to work, edge cluster must have network policy support using CNI such as calico. + +#### Generic Action Controller +The generic action controller microservice is an action controller which may be registered with the central orchestrator. It can achieve the following usecases: + +- Create a new Kubernetes* object and deploy that along with a specific application which is part of the composite Application. There are two variations here: + + - Default : Apply the new object to every instance of the app in every cluster where the app is deployed. + - Cluster-Specific : Apply the new object only where the app is deployed to a specific cluster, denoted by a cluster-name or a list of clusters denoted by a cluster-label. 
+ +- Modify an existing Kubernetes* object which may have been deployed using the Helm chart for an app, or may have been newly created by the above mentioned usecase. Modification may correspond to specific fields in the YAML definition of the object. + +To achieve both the usecases, the controller exposes RESTful APIs to create, update and delete the following: + +- Resource - Specifies the newly defined object or an existing object. +- Customization - Specifies the modifications (using JSON Patching) to be applied on the objects. + + +#### Resource Synchronizer +This microservice is the one which deploys the resources in edge/cloud clusters. 'Resource contexts' created by various microservices are used by this microservice. It takes care of retrying, in case the remote clusters are not reachable temporarily. + +#### Placment and Action Controllers in EMCO +This section illustrates some key aspects of the EMCO controller architecture. Depending on the needs of a composite application, intents that handle specific operations for application resources (e.g. addition, modification, etc.) can be created via the APIs provided by the corresponding controller API. The following diagram shows the sequence of interactions to register controllers with EMCO. + +![OpenNESS EMCO](openness-emco-images/emco-register-controllers.png) + +_Figure 5 - Register placement and action controllers with EMCO_ + +This diagram illustrates the sequence of operations taken to prepare a Deployment Intent Group that utilizes some intents supported by controllers. The desired set of controllers and associated intents are included in the definition of a Deployment Intent Group to satisfy the requirements of a specific deployed instance of a composite application. + +![OpenNESS EMCO](openness-emco-images/emco-dig-create.png) + +_Figure 6 - Create a Deployment Intent Group_ + +When the Deployment Intent Group is instantiated, the identified set of controllers are invoked in order to perform their specific operations. + +![OpenNESS EMCO](openness-emco-images/emco-dig-instantiate.png) + +_Figure 7 - Instantiate a Deployment Intent Group_ + +In this initial release of EMCO, a built-in generic placement controller is provided in the `orchestrator`. As described above, the three provided action controllers are the OVN Action, Traffic and Generic Action controllers. + +#### Status Monitoring and Queries in EMCO +When a resource like a Deployment Intent Group is instantiated, status information about both the deployment and the deployed resources in the cluster are collected and made available for query by the API. The following diagram illustrates the key components involved. For more information about status queries see [EMCO Resource Lifecycle Operations](https://github.com/open-ness/EMCO/tree/main/docs/user/Resource_Lifecycle.md). + +![OpenNESS EMCO](openness-emco-images/emco-status-monitoring.png) + +_Figure 8 - Status Monitoring and Query Sequence_ + +### EMCO Terminology + +| | | +|------------------------|----------------------------------------------------------------------------------------------------------------------------------| +| Cluster Provider | The provider is someone who owns clusters and registers them. | +| Projects | The project resource provides means for a collection of applications to be grouped. | +| | Several applications can exist under a specific project. | +| | Projects allows for grouping of applications under a common tenant to be defined. 
| +| Composite application | The composite application is a combination of multiple applications. | +| | Based on the deployment intent, various applications of the composite application get deployed at various locations. | +| | Also, some applications of the composite application get replicated in multiple locations. | +| Deployment Intent | EMCO does not expect DevOps admins to edit the Helm charts provided by application/network-function vendors. | +| | Any customization and additional K8s resources that need to be present with the application are specified as deployment intents. | +| Placement Intent | EMCO supports creating generic placement intents for a given composite application. | +| | Normally, the EMCO scheduler calls placement controllers first to figure out the edge/cloud locations for a given application. | +| | Finally, it works with the 'resource synchronizer & status collector' to deploy K8s resources on various edge/cloud clusters. | + +### EMCO API +For user interaction, EMCO provides a [RESTful API](https://github.com/open-ness/EMCO/blob/main/docs/emco_apis.yaml). Apart from that, EMCO also provides a CLI. For detailed usage, refer to [EMCO CLI](https://github.com/open-ness/EMCO/tree/main/src/tools/emcoctl). +> **NOTE**: The EMCO RESTful API is the foundation for the other interaction facilities, such as the EMCO CLI, the EMCO GUI (available in the future), and other orchestrators. + +### EMCO Authentication and Authorization +EMCO uses Istio and other open-source solutions to provide a multi-tenancy solution, leveraging the Istio authorization and authentication frameworks. This is achieved without adding any logic to the EMCO microservices. +- Authentication and authorization for EMCO users is done at the Istio Ingress Gateway, where all the traffic enters the cluster. + +- Istio, along with authservice (an Istio ecosystem project), enables request-level authentication with JSON Web Token (JWT) validation. Authservice is an entity that works alongside the Envoy proxy. It is used to work with external IAM systems (OAUTH2). Many enterprises have their own OAUTH2 server for authenticating users and providing roles. + +- Authservice and Istio can be configured to talk to multiple OAUTH2 servers. Using this capability, EMCO can support multiple tenants, for example one tenant belonging to one project. + +- Using Istio AuthorizationPolicy, access to different EMCO resources can be controlled based on roles defined for the users. + +The following figure shows various EMCO services running in a cluster with Istio. + +![OpenNESS EMCO](openness-emco-images/emco-istio-arch.png) + +_Figure 9 - EMCO setup with Istio and Authservice_ + +The following figure shows the authentication flow with EMCO, Istio, and Authservice. + +![OpenNESS EMCO](openness-emco-images/emco-istio-auth.png) + +_Figure 10 - EMCO Authentication with an external OAUTH2 Server_ + +Detailed steps for configuring EMCO with Istio can be found in the [EMCO Integrity and Access Management](https://github.com/open-ness/EMCO/tree/main/docs/user/Emco_Integrity_Access_Management.md) document. + +Steps for EMCO Authentication and Authorization Setup: +- Install and configure the Keycloak Server to be used in the setup. 
This server runs outside the EMCO cluster + - Create a new realm, add users and roles to Keycloak +- Install Istio in the Kubernetes* cluster where EMCO is running +- Enable Sidecar Injection in EMCO namesapce +- Install EMCO in EMCO namespace (with Istio sidecars) +- Configure Istio Ingress gateway resources for EMCO Services +- Configure Istio Ingress gateway to enable running along with Authservice +- Apply EnvoyFilter for Authservice +- Apply Authentication and Authorization Policies + +### EMCO Installation With OpenNESS Flavor +EMCO supports [multiple deployment options](https://github.com/open-ness/EMCO/tree/main/deployments). [OpenNESS Experience Kit](https://github.com/open-ness/specs/blob/master/doc/getting-started/openness-experience-kits.md) offers the `central_orchestrator` flavor to automate EMCO build and deployment as mentioned below. +- The first step is to prepare one server environment which needs to fulfill the [preconditions](https://github.com/open-ness/specs/blob/master/doc/getting-started/network-edge/controller-edge-node-setup.md#preconditions). +- Then place the EMCO server hostname in `[controller_group]` group in `inventory.ini` file of openness-experience-kit. +> **NOTE**: `[edgenode_group]` and `[edgenode_vca_group]` are not required for configuration, since EMCO micro services just need to be deployed on the Kubernetes* control plane node. +- Run script `./deploy_ne.sh -f central_orchestrator`. Deployment should complete successfully. In the flavor, harbor registry is deployed to provide images services as well. + +```shell +# kubectl get pods -n emco +NAMESPACE NAME READY STATUS RESTARTS AGE +emco clm-6979f6c886-tjfrv 1/1 Running 0 14m +emco dcm-549974b6fc-42fbm 1/1 Running 0 14m +emco dtc-948874b6fc-p2fbx 1/1 Running 0 14m +emco etcd-5f646586cb-p7ctj 1/1 Running 0 14m +emco gac-788874b6fc-p1kjx 1/1 Running 0 14m +emco mongo-5f7d44fbc5-n74lm 1/1 Running 0 14m +emco ncm-58b85b4688-tshmc 1/1 Running 0 14m +emco orchestrator-78b76cb547-xrvz5 1/1 Running 0 14m +emco ovnaction-5d8d4447f9-nn7l6 1/1 Running 0 14m +emco rsync-99b85b4x88-ashmc 1/1 Running 0 14m +``` + +## EMCO Example: SmartCity Deployment +- The [SmartCity application](https://github.com/OpenVisualCloud/Smart-City-Sample) is a sample application that is built on top of the OpenVINO™ and Open Visual Cloud software stacks for media processing and analytics. The composite application is composed of two parts: EdgeApp + WebApp (cloud application for additional post-processing such as calculating statistics and display/visualization) +- The edge cluster (representing regional office), the cloud cluster and the EMCO are connected with each other. +- The whole deployment architecture diagram is shown as below: +![OpenNESS EMCO](openness-emco-images/openness-emco-smtc.png) + +_Figure 11 - SmartCity Deployment Architecture Overview_ + +The example steps are shown as follows: +- Prerequisites + - Make one edge cluster and one cloud cluster ready by using OpenNESS Flavor. + - Prepare one server with a vanilla CentOS\* 7.8.2003 for EMCO installation. +- EMCO installation +- Cluster setup +- Project setup +- Logical cloud Setup +- Deploy SmartCity application + +### Cluster Setup +In the step, cluster provider will be created. And both the edge cluster and the cloud cluster will be registered in the EMCO. + +1. After [EMCO Installation With OpenNESS Flavor](#emco-installation-with-openness-flavor), logon to the EMCO host server and maker sure that Harbor and EMCO microservices are in running status. + +2. 
On the edge and cloud cluster, run the following command to make Docker logon to the Harbor deployed on the EMCO server, thus the clusters can pull SmartCity images from the Harbor: + ```shell + HARBORRHOST= + + cd /etc/docker/certs.d/ + mkdir ${HARBORRHOST} + cd ${HARBORRHOST} + curl -sk -X GET https://${HARBORRHOST}/api/v2.0/systeminfo/getcert -H "accept: application/json" -o harbor.crt + HARBORRPW=Harbor12345 + docker login ${HARBORRHOST} -u admin -p ${HARBORRPW} + ``` + > **NOTE**: should be `:30003`. + +3. On the EMCO server, download the [scripts,profiles and configmap JSON files](https://github.com/open-ness/edgeapps/tree/master/applications/smart-city-app/emco). + +4. Run the command for the environment setup with success return as below: + ```shell + # cd cli-scripts/ + # ./setup_env.sh + ``` + > **NOTE**: [SmartCity application](https://github.com/OpenVisualCloud/Smart-City-Sample) secrets need the specific information only accessiable by the edge cluster and the cloud cluster. `setup_env.sh` will automate it. + +5. Run the command for the clusters setup with expected result as below: + ```shell + # cd cli-scripts/ + # ./01_apply.sh + .... + URL: cluster-providers/smartcity-cluster-provider/clusters/edge01/labels Response Code: 201 Response: {"label-name":"LabelSmartCityEdge"} + URL: cluster-providers/smartcity-cluster-provider/clusters/cloud01/labels Response Code: 201 Response: {"label-name":"LabelSmartCityCloud"} + ``` + +### Project Setup + +Run the command for the project setup with expected result as below: + +```shell +# cd cli-scripts/ +# ./02_apply.sh + +Using config file: emco_cfg.yaml +http://localhost:31298/v2 +URL: projects Response Code: 201 Response: {"metadata":{"name":"project_smtc","description":"","UserData1":"","UserData2":""}} +``` + +### Logical Cloud Setup + +Run the command for the logical cloud setup with expected result as below: + +```shell +# cd cli-scripts/ +# ./03_apply.sh + +Using config file: emco_cfg.yaml +http://localhost:31877/v2 +URL: projects/project_smtc/logical-clouds Response Code: 201 Response: {"metadata":{"name":"default","description":"","userData1":"","userData2":""},"spec":{"namespace":"","level":"0","user":{"user-name":"","type":"","user-permissions":null}}} +http://localhost:31877/v2 +URL: projects/project_smtc/logical-clouds/default/cluster-references Response Code: 201 Response: {"metadata":{"name":"lc-edge01","description":"","userData1":"","userData2":""},"spec":{"cluster-provider":"smartcity-cluster-provider","cluster-name":"edge01","loadbalancer-ip":"0.0.0.0","certificate":""}} +http://localhost:31877/v2 +URL: projects/project_smtc/logical-clouds/default/instantiate Response Code: 200 Response: +``` + +### Deploy SmartCity Application + +1. Run the command for the SmartCity application deployment with expected result as below: + ```shell + # cd cli-scripts/ + # ./04_apply.sh + + http://localhost:31298/v2 + URL: projects/project_smtc/composite-apps/composite_smtc/v1/deployment-intent-groups/smtc-deployment-intent-group/approve Response Code: 202 Response: + http://localhost:31298/v2 + URL: projects/project_smtc/composite-apps/composite_smtc/v1/deployment-intent-groups/smtc-deployment-intent-group/instantiate Response Code: 202 Response: + ``` + > **NOTE**: EMCO supports generic K8S resource configuration including configmap, secret,etc. 
The example offers the usage about [configmap configuration](https://github.com/open-ness/edgeapps/blob/master/applications/smart-city-app/emco/cli-scripts/04_apps_template.yaml) to the clusters. + +2. Verify SmartCity Application Deployment Information. +The pods on the edge cluster are in the running status as shown as below: + + ```shell + # kubectl get pods + NAME READY STATUS RESTARTS AGE + traffic-office1-alert-5b56f5464c-ldwrf 1/1 Running 0 20h + traffic-office1-analytics-traffic-6b995d4d6-nhf2p 1/1 Running 0 20h + traffic-office1-camera-discovery-78bccbdb44-k2ffx 1/1 Running 0 20h + traffic-office1-cameras-6cb67ccc84-8zkjg 1/1 Running 0 20h + traffic-office1-db-84bcfd54cd-ht52s 1/1 Running 1 20h + traffic-office1-db-init-64fb9db988-jwjv9 1/1 Running 0 20h + traffic-office1-mqtt-f9449d49c-dwv6l 1/1 Running 0 20h + traffic-office1-mqtt2db-5649c4778f-vpxhq 1/1 Running 0 20h + traffic-office1-smart-upload-588d95f78d-8x6dt 1/1 Running 1 19h + traffic-office1-storage-7889c67c57-kbkjd 1/1 Running 1 19h + ``` + The pods on the cloud cluster are in the running status as shown as below: + ```shell + # kubectl get pods + NAME READY STATUS RESTARTS AGE + cloud-db-5d6b57f947-qhjz6 1/1 Running 0 20h + cloud-storage-5658847d79-66bxz 1/1 Running 0 96m + cloud-web-64fb95884f-m9fns 1/1 Running 0 20h + ``` + +3. Verify Smart City GUI +From a web browser, launch the Smart City web UI at URL `https://`. The GUI shows like: +![OpenNESS EMCO](openness-emco-images/openness-emco-smtcui.png) + +_Figure 12 - SmartCity UI_ + + +### SmartCity Termination + +Run the command for the SmartCity termination with expected result as below: +```shell +# cd cli-scripts/ +# ./88_terminate.sh + +Using config file: emco_cfg.yaml +http://localhost:31298/v2 +URL: projects/project_smtc/composite-apps/composite_smtc/v1/deployment-intent-groups/smtc-deployment-intent-group/terminate Response Code: 202 Response: +``` + +After termination, the SmartCity application will be deleted from the clusters. 
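If needed, the deployment intent group can be queried after termination to confirm that the SmartCity resources were removed from the clusters. A minimal sketch, assuming the status query endpoint follows the lifecycle paths used above (check the EMCO Resource Lifecycle Operations document for the exact form):

```shell
# Sketch: query the SmartCity deployment intent group after termination.
# The /status path is an assumption; consult the EMCO API reference.
curl http://localhost:31298/v2/projects/project_smtc/composite-apps/composite_smtc/v1/deployment-intent-groups/smtc-deployment-intent-group/status
```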
diff --git a/doc/building-blocks/enhanced-platform-awareness/acc100-images/acc100-diagram.png b/doc/building-blocks/enhanced-platform-awareness/acc100-images/acc100-diagram.png new file mode 100644 index 00000000..44f66094 Binary files /dev/null and b/doc/building-blocks/enhanced-platform-awareness/acc100-images/acc100-diagram.png differ diff --git a/doc/building-blocks/enhanced-platform-awareness/acc100-images/acc100-k8s.png b/doc/building-blocks/enhanced-platform-awareness/acc100-images/acc100-k8s.png new file mode 100644 index 00000000..64927539 Binary files /dev/null and b/doc/building-blocks/enhanced-platform-awareness/acc100-images/acc100-k8s.png differ diff --git a/doc/enhanced-platform-awareness/biosfw-images/openness_biosfw.png b/doc/building-blocks/enhanced-platform-awareness/biosfw-images/openness_biosfw.png similarity index 100% rename from doc/enhanced-platform-awareness/biosfw-images/openness_biosfw.png rename to doc/building-blocks/enhanced-platform-awareness/biosfw-images/openness_biosfw.png diff --git a/doc/enhanced-platform-awareness/cmk-images/cmk1.png b/doc/building-blocks/enhanced-platform-awareness/cmk-images/cmk1.png similarity index 100% rename from doc/enhanced-platform-awareness/cmk-images/cmk1.png rename to doc/building-blocks/enhanced-platform-awareness/cmk-images/cmk1.png diff --git a/doc/enhanced-platform-awareness/cmk-images/cmk2.png b/doc/building-blocks/enhanced-platform-awareness/cmk-images/cmk2.png similarity index 100% rename from doc/enhanced-platform-awareness/cmk-images/cmk2.png rename to doc/building-blocks/enhanced-platform-awareness/cmk-images/cmk2.png diff --git a/doc/enhanced-platform-awareness/fpga-images/openness-fpga1.png b/doc/building-blocks/enhanced-platform-awareness/fpga-images/openness-fpga1.png similarity index 100% rename from doc/enhanced-platform-awareness/fpga-images/openness-fpga1.png rename to doc/building-blocks/enhanced-platform-awareness/fpga-images/openness-fpga1.png diff --git a/doc/enhanced-platform-awareness/fpga-images/openness-fpga2.png b/doc/building-blocks/enhanced-platform-awareness/fpga-images/openness-fpga2.png similarity index 100% rename from doc/enhanced-platform-awareness/fpga-images/openness-fpga2.png rename to doc/building-blocks/enhanced-platform-awareness/fpga-images/openness-fpga2.png diff --git a/doc/enhanced-platform-awareness/fpga-images/openness-fpga3.png b/doc/building-blocks/enhanced-platform-awareness/fpga-images/openness-fpga3.png similarity index 100% rename from doc/enhanced-platform-awareness/fpga-images/openness-fpga3.png rename to doc/building-blocks/enhanced-platform-awareness/fpga-images/openness-fpga3.png diff --git a/doc/enhanced-platform-awareness/fpga-images/openness-fpga4.png b/doc/building-blocks/enhanced-platform-awareness/fpga-images/openness-fpga4.png similarity index 100% rename from doc/enhanced-platform-awareness/fpga-images/openness-fpga4.png rename to doc/building-blocks/enhanced-platform-awareness/fpga-images/openness-fpga4.png diff --git a/doc/building-blocks/enhanced-platform-awareness/hddl-images/hddlservice.png b/doc/building-blocks/enhanced-platform-awareness/hddl-images/hddlservice.png new file mode 100644 index 00000000..118f5206 Binary files /dev/null and b/doc/building-blocks/enhanced-platform-awareness/hddl-images/hddlservice.png differ diff --git a/doc/enhanced-platform-awareness/hddl-images/openness_HDDL.png b/doc/building-blocks/enhanced-platform-awareness/hddl-images/openness_HDDL.png similarity index 100% rename from 
doc/enhanced-platform-awareness/hddl-images/openness_HDDL.png rename to doc/building-blocks/enhanced-platform-awareness/hddl-images/openness_HDDL.png diff --git a/doc/enhanced-platform-awareness/hddl-images/openness_dynamic.png b/doc/building-blocks/enhanced-platform-awareness/hddl-images/openness_dynamic.png similarity index 100% rename from doc/enhanced-platform-awareness/hddl-images/openness_dynamic.png rename to doc/building-blocks/enhanced-platform-awareness/hddl-images/openness_dynamic.png diff --git a/doc/building-blocks/enhanced-platform-awareness/index.html b/doc/building-blocks/enhanced-platform-awareness/index.html new file mode 100644 index 00000000..4289955e --- /dev/null +++ b/doc/building-blocks/enhanced-platform-awareness/index.html @@ -0,0 +1,14 @@ + + +--- +title: OpenNESS Documentation +description: Home +layout: openness +--- +

You are being redirected to the OpenNESS Docs.

+ diff --git a/doc/enhanced-platform-awareness/multussriov-images/multus-pod-image.svg b/doc/building-blocks/enhanced-platform-awareness/multussriov-images/multus-pod-image.svg similarity index 100% rename from doc/enhanced-platform-awareness/multussriov-images/multus-pod-image.svg rename to doc/building-blocks/enhanced-platform-awareness/multussriov-images/multus-pod-image.svg diff --git a/doc/enhanced-platform-awareness/multussriov-images/sriov-cni.png b/doc/building-blocks/enhanced-platform-awareness/multussriov-images/sriov-cni.png similarity index 100% rename from doc/enhanced-platform-awareness/multussriov-images/sriov-cni.png rename to doc/building-blocks/enhanced-platform-awareness/multussriov-images/sriov-cni.png diff --git a/doc/enhanced-platform-awareness/multussriov-images/sriov-dp.png b/doc/building-blocks/enhanced-platform-awareness/multussriov-images/sriov-dp.png similarity index 100% rename from doc/enhanced-platform-awareness/multussriov-images/sriov-dp.png rename to doc/building-blocks/enhanced-platform-awareness/multussriov-images/sriov-dp.png diff --git a/doc/enhanced-platform-awareness/nfd-images/nfd0.png b/doc/building-blocks/enhanced-platform-awareness/nfd-images/nfd0.png similarity index 100% rename from doc/enhanced-platform-awareness/nfd-images/nfd0.png rename to doc/building-blocks/enhanced-platform-awareness/nfd-images/nfd0.png diff --git a/doc/enhanced-platform-awareness/nfd-images/nfd1.png b/doc/building-blocks/enhanced-platform-awareness/nfd-images/nfd1.png similarity index 100% rename from doc/enhanced-platform-awareness/nfd-images/nfd1.png rename to doc/building-blocks/enhanced-platform-awareness/nfd-images/nfd1.png diff --git a/doc/enhanced-platform-awareness/nfd-images/nfd2.png b/doc/building-blocks/enhanced-platform-awareness/nfd-images/nfd2.png similarity index 100% rename from doc/enhanced-platform-awareness/nfd-images/nfd2.png rename to doc/building-blocks/enhanced-platform-awareness/nfd-images/nfd2.png diff --git a/doc/building-blocks/enhanced-platform-awareness/openness-acc100.md b/doc/building-blocks/enhanced-platform-awareness/openness-acc100.md new file mode 100644 index 00000000..4985c27a --- /dev/null +++ b/doc/building-blocks/enhanced-platform-awareness/openness-acc100.md @@ -0,0 +1,300 @@ +```text +SPDX-License-Identifier: Apache-2.0 +Copyright (c) 2020 Intel Corporation +``` + +# Using ACC100 eASIC in OpenNESS: Resource Allocation, and Configuration +- [Overview](#overview) +- [Intel® vRAN Dedicated Accelerator ACC100 FlexRAN Host Interface Overview](#intel-vran-dedicated-accelerator-acc100-flexran-host-interface-overview) +- [Intel® vRAN Dedicated Accelerator ACC100 Orchestration and Deployment with Kubernetes\* for FlexRAN](#intel-vran-dedicated-accelerator-acc100-orchestration-and-deployment-with-kubernetes-for-flexran) +- [Using the Intel® vRAN Dedicated Accelerator ACC100 on OpenNESS](#using-the-intel-vran-dedicated-accelerator-acc100-on-openness) + - [ACC100 (FEC) Ansible Installation for OpenNESS Network Edge](#acc100-fec-ansible-installation-for-openness-network-edge) + - [OpenNESS Experience Kit](#openness-experience-kit) + - [FEC VF configuration for OpenNESS Network Edge](#fec-vf-configuration-for-openness-network-edge) + - [Requesting Resources and Running Pods for OpenNESS Network Edge](#requesting-resources-and-running-pods-for-openness-network-edge) + - [Verifying Application POD Access and Usage of FPGA on OpenNESS Network Edge](#verifying-application-pod-access-and-usage-of-fpga-on-openness-network-edge) +- 
[Reference](#reference) + +## Overview + +Intel® vRAN Dedicated Accelerator ACC100 plays a key role in accelerating 4G and 5G Virtualized Radio Access Networks (vRAN) workloads, which in turn increases the overall compute capacity of a commercial, off-the-shelf platform. + +Intel® vRAN Dedicated Accelerator ACC100 provides the following features: + +- LDPC FEC processing for 3GPP 5G: + - LDPC encoder/decoder + - Code block CRC generation/checking + - Rate matching/de-matching + - HARQ buffer management +- Turbo FEC processing for 3GPP 4G: + - Turbo encoder/decoder + - Code block CRC generation/checking + - Rate matching/de-matching +- Scalable to required system configuration +- Hardware DMA support +- Performance monitoring +- Load balancing supported by the hardware queue manager (QMGR) +- Interface through the DPDK BBDev library and APIs + +Intel® vRAN Dedicated Accelerator ACC100 benefits include: +- Reduced platform power, E2E latency and Intel® CPU core count requirements as well as increase in cell capacity than existing programmable accelerator +- Accelerates both 4G and 5G data concurrently +- Lowers development cost using commercial off the shelf (COTS) servers +- Accommodates space-constrained implementations via a low-profile PCIe* card form factor. +- Enables a variety of flexible FlexRAN deployments from small cell to macro to Massive +MIMO networks +- Supports extended temperature for the most challenging of RAN deployment scenarios + +For more information, see product brief in [Intel® vRAN Dedicated Accelerator ACC100](https://builders.intel.com/docs/networkbuilders/intel-vran-dedicated-accelerator-acc100-product-brief.pdf). + +This document explains how the ACC100 resource can be used on the Open Network Edge Services Software (OpenNESS) platform for accelerating network functions and edge application workloads. We use the Intel® vRAN Dedicated Accelerator ACC100 to accelerate the LTE/5G Forward Error Correction (FEC) in the 5G or 4G L1 base station network function such as FlexRAN. + +FlexRAN is a reference layer 1 pipeline of 4G eNb and 5G gNb on Intel® architecture. The FlexRAN reference pipeline consists of an L1 pipeline, optimized L1 processing modules, BBU pooling framework, cloud and cloud-native deployment support, and accelerator support for hardware offload. Intel® vRAN Dedicated Accelerator ACC100 card is used by FlexRAN to offload FEC (Forward Error Correction) for 4G and 5G. + +## Intel® vRAN Dedicated Accelerator ACC100 FlexRAN Host Interface Overview +Intel® vRAN Dedicated Accelerator ACC100 card used in the FlexRAN solution exposes the following physical functions to the CPU host: +- One FEC interface that can be used of 4G or 5G FEC acceleration + - The LTE FEC IP components have turbo encoder/turbo decoder and rate matching/de-matching + - The 5GNR FEC IP components have low-density parity-check (LDPC) Encoder / LDPC Decoder, rate matching/de-matching, and UL HARQ combining + +![Intel® vRAN Dedicated Accelerator ACC100 support](acc100-images/acc100-diagram.png) + +## Intel® vRAN Dedicated Accelerator ACC100 Orchestration and Deployment with Kubernetes\* for FlexRAN +FlexRAN is a low-latency network function that implements the FEC. FlexRAN uses the FEC resources from the ACC100 using POD resource allocation and the Kubernetes\* device plugin framework. Kubernetes* provides a device plugin framework that is used to advertise system hardware resources to the Kubelet. 
Instead of customizing the code for Kubernetes* (K8s) itself, vendors can implement a device plugin that can be deployed either manually or as a DaemonSet. The targeted devices include GPUs, high-performance NICs, FPGAs, InfiniBand\* adapters, and other similar computing resources that may require vendor-specific initialization and setup. + +![Intel® vRAN Dedicated Accelerator ACC100 Orchestration and deployment with OpenNESS Network Edge for FlexRAN](acc100-images/acc100-k8s.png) + +_Figure - Intel® vRAN Dedicated Accelerator ACC100 Orchestration and deployment with OpenNESS Network Edge for FlexRAN_ + +## Using the Intel® vRAN Dedicated Accelerator ACC100 on OpenNESS +The following sections provide instructions on how to use the ACC100 eASIC features on the OpenNESS Network Edge: configuring the device and accessing it from an application. + +When the Intel® vRAN Dedicated Accelerator ACC100 is available on the Edge Node platform, it exposes the Single Root I/O Virtualization (SRIOV) Virtual Function (VF) devices which can be used to accelerate the FEC in the vRAN workload. To take advantage of this functionality for a cloud-native deployment, the PF (Physical Function) of the device must be bound to the DPDK IGB_UIO userspace driver to create several VFs (Virtual Functions). Once the VFs are created, they must also be bound to a DPDK userspace driver to allocate them to specific K8s pods running the vRAN workload. + +The full pipeline of preparing the device for workload deployment and deploying the workload can be divided into the following stages: + +- Enabling SRIOV, binding devices to appropriate drivers, and the creation of VFs: delivered as part of the Edge Nodes Ansible automation. +- Queue configuration of the ACC100 PF/VFs with the aid of the [pf-bb-config](https://github.com/intel/pf-bb-config) utility: Docker\* image creation delivered as part of the Edge Nodes Ansible automation. The images are pushed to a local Harbor registry; a sample pod (job) is deployed via Helm charts. +- Enabling orchestration and allocation of the devices (VFs) to non-root pods requesting the devices: leveraging the support of "accelerator" SRIOV VFs from the K8s SRIOV Device Plugin. K8s plugin deployment is delivered as part of the Edge Controller's Ansible automation. +- A simple sample BBDEV application to validate the pipeline (i.e., SRIOV creation - Queue configuration - Device orchestration - Pod deployment): script delivery and instructions to build the Docker image for the sample application are delivered as part of the Edge Apps package. + +### ACC100 (FEC) Ansible Installation for OpenNESS Network Edge +To run the OpenNESS package with ACC100 (FEC) functionality, the feature needs to be enabled on both the Edge Controller and the Edge Node. It can be deployed via the ["flexran" flavor of OpenNESS](https://github.com/open-ness/ido-openness-experience-kits/tree/master/flavors/flexran). + +#### OpenNESS Experience Kit +To enable ACC100 support from OEK, SRIOV must be enabled in OpenNESS: +```yaml +# flavors/flexran/all.yml +kubernetes_cnis: +-
+- sriov +``` + +Also, enable the following options in `flavors/flexran/all.yml`: +The following device config is the default config for the Intel® vRAN Dedicated Accelerator ACC100. +```yaml +# flavors/flexran/all.yml + +acc100_sriov_userspace_enable: true + +acc100_userspace_vf: + enabled: true + vendor_id: "8086" + vf_device_id: "0d5d" + pf_device_id: "0d5c" + vf_number: "2" + vf_driver: "vfio-pci" +``` + +Run setup script `deploy_ne.sh -f flexran`. + +After a successful deployment, the following pods will be available in the cluster: +```shell +kubectl get pods -A + +NAMESPACE NAME READY STATUS RESTARTS AGE +kube-ovn kube-ovn-cni-hdgrl 1/1 Running 0 3d19h +kube-ovn kube-ovn-cni-px79b 1/1 Running 0 3d18h +kube-ovn kube-ovn-controller-578786b499-74vzm 1/1 Running 0 3d19h +kube-ovn kube-ovn-controller-578786b499-j22gl 1/1 Running 0 3d19h +kube-ovn ovn-central-5f456db89f-z7d6x 1/1 Running 0 3d19h +kube-ovn ovs-ovn-46k8f 1/1 Running 0 3d18h +kube-ovn ovs-ovn-5r2p6 1/1 Running 0 3d19h +kube-system coredns-6955765f44-mrc82 1/1 Running 0 3d19h +kube-system coredns-6955765f44-wlvhc 1/1 Running 0 3d19h +kube-system etcd-silpixa00394960 1/1 Running 0 3d19h +kube-system kube-apiserver-silpixa00394960 1/1 Running 0 3d19h +kube-system kube-controller-manager-silpixa00394960 1/1 Running 0 3d19h +kube-system kube-multus-ds-amd64-2zdqt 1/1 Running 0 3d18h +kube-system kube-multus-ds-amd64-db8fd 1/1 Running 0 3d19h +kube-system kube-proxy-dd259 1/1 Running 0 3d19h +kube-system kube-proxy-sgn9g 1/1 Running 0 3d18h +kube-system kube-scheduler-silpixa00394960 1/1 Running 0 3d19h +kube-system kube-sriov-cni-ds-amd64-k9wnd 1/1 Running 0 3d18h +kube-system kube-sriov-cni-ds-amd64-pclct 1/1 Running 0 3d19h +kube-system kube-sriov-device-plugin-amd64-fhbv8 1/1 Running 0 3d18h +kube-system kube-sriov-device-plugin-amd64-lmx9k 1/1 Running 0 3d19h +openness eaa-78b89b4757-xzh84 1/1 Running 0 3d18h +openness edgedns-dll9x 1/1 Running 0 3d18h +openness interfaceservice-grjlb 1/1 Running 0 3d18h +openness nfd-master-dd4ch 1/1 Running 0 3d19h +openness nfd-worker-c24wn 1/1 Running 0 3d18h +openness syslog-master-9x8hc 1/1 Running 0 3d19h +openness syslog-ng-br92z 1/1 Running 0 3d18h +``` + +### FEC VF configuration for OpenNESS Network Edge +To configure the VFs with the necessary number of queues for the vRAN workload, the BBDEV configuration utility is going to run as a job within a privileged container. The configuration utility is available to run as a Helm chart available from `/opt/openness/helm-charts/bb_config`. + +Sample configMap, which can be configured by changing values, if other than typical config is required, with a profile for the queue configuration is provided as part of Helm chart template `/opt/openness/helm-charts/bb_config/templates/acc100-config.yaml` populated with values from `/opt/openness/helm-charts/bb_config/values.yaml`. Helm chart installation requires a provision of hostname for the target node during job deployment. Additionally, the default values in Helm chart will deploy FPGA config, a flag needs to be provided to invoke ACC100 config. + +Install the Helm chart by providing configmap and BBDEV config utility job with the following command from `/opt/openness/helm-charts/` on Edge Controller: + +```shell +helm install --set nodeName= --set device=ACC100 intel-acc100-cfg bb_config +``` + +Verify if the job has completed and that the state of the pod created for this job is “Completed”. Check the logs of the pod to see a complete successful configuration. 
+``` +kubectl get pods +kubectl logs intel-acc100-cfg--xxxxx +``` +Expected: `ACC100 PF [0000:af:00.0] configuration complete!` + +To redeploy the job on another node, use the following command: + +``` +helm upgrade --set nodeName= intel-acc100-cfg bb_config +``` + +To uninstall the job, run: +``` +helm uninstall intel-acc100-cfg +``` + +### Requesting Resources and Running Pods for OpenNESS Network Edge +As part of the OpenNESS Ansible automation, a K8s SRIOV device plugin to orchestrate the ACC100 VFs (bound to the userspace driver) is running. This enables the scheduling of pods requesting this device. To check the number of devices available on the Edge Node from Edge Controller, run: + +```shell +kubectl get node -o json | jq '.status.allocatable' + +"intel.com/intel_fec_acc100": "2" +``` + +To request the device as a resource in the pod, add the request for the resource into the pod specification file by specifying its name and the amount of resources required. If the resource is not available or the amount of resources requested is greater than the number of resources available, the pod status will be “Pending” until the resource is available. +**NOTE**: The name of the resource must match the name specified in the configMap for the K8s devices plugin [configMap.yml](https://github.com/open-ness/openness-experience-kits/blob/master/roles/kubernetes/cni/sriov/controlplane/files/sriov/templates/configMap.yml). + +A sample pod requesting the ACC100 (FEC) VF may look like this: + +``` +apiVersion: v1 +kind: Pod +metadata: + name: test + labels: + env: test +spec: + containers: + - name: test + image: centos:latest + command: [ "/bin/bash", "-c", "--" ] + args: [ "while true; do sleep 300000; done;" ] + resources: + requests: + intel.com/intel_fec_acc100: '1' + limits: + intel.com/intel_fec_acc100: '1' +``` + +To test the resource allocation to the pod, save the above code snippet to the `sample.yaml` file and create the pod. +``` +kubectl create -f sample.yaml +``` +Once the pod is in the 'Running' state, check that the device was allocated to the pod (a uioX device and an environmental variable with a device PCI address should be available): +``` +kubectl exec -it test -- ls /dev +kubectl exec -it test -- printenv | grep FEC +``` +To check the number of devices currently allocated to pods, run (and search for 'Allocated Resources'): + +``` +kubectl describe node +``` + +### Verifying Application POD Access and Usage of FPGA on OpenNESS Network Edge +To verify the functionality of all the features are working together (SRIOV binding - K8s device plugin - BBDEV config) and functionality of the ACC100 (FEC) VF inside a non-root pod, build a Docker image and run a simple validation application for the device. + +The automation of the Docker image build is available from the Edge Apps package. The image must be built on the same node that it is meant to be deployed or a server with the same configuration as the node that will run the workload. This is due to the Kernel dependencies of DPDK during the application build. + +Navigate to: + +``` +edgeapps/fpga-sample-app +``` + +Build the image: + +`./build-image.sh` + +From the Edge Controller, deploy the application pod. 
The pod specification is located at `/fpga`: + +``` +kubectl create -f fpga-sample-app.yaml +``` + +Execute into the application pod and run the sample app: +```shell +# enter the pod +kubectl exec -it pod-bbdev-sample-app -- /bin/bash + +# run test application +./test-bbdev.py --testapp-path ./testbbdev -e="-w ${PCIDEVICE_INTEL_COM_INTEL_FEC_ACC100}" -i -n 1 -b 1 -l 1 -c validation -v ./test_vectors/ldpc_dec_v7813.data + +# sample output +Executing: ./dpdk-test-bbdev -w0000:b0:00.0 -- -n 1 -l 1 -c validation -i -v ldpc_dec_v7813.data -b 1 +EAL: Detected 96 lcore(s) +EAL: Detected 2 NUMA nodes +Option -w, --pci-whitelist is deprecated, use -a, --allow option instead +EAL: Multi-process socket /var/run/dpdk/rte/mp_socket +EAL: Selected IOVA mode 'VA' +EAL: No available hugepages reported in hugepages-1048576kB +EAL: Probing VFIO support... +EAL: VFIO support initialized +EAL: using IOMMU type 1 (Type 1) +EAL: Probe PCI driver: intel_acc100_vf (8086:d5d) device: 0000:b0:00.0 (socket 1) +EAL: No legacy callbacks, legacy socket not created + +=========================================================== +Starting Test Suite : BBdev Validation Tests +Test vector file = ldpc_dec_v7813.data +Device 0 queue 16 setup failed +Allocated all queues (id=16) at prio0 on dev0 +Device 0 queue 32 setup failed +Allocated all queues (id=32) at prio1 on dev0 +Device 0 queue 48 setup failed +Allocated all queues (id=48) at prio2 on dev0 +Device 0 queue 64 setup failed +Allocated all queues (id=64) at prio3 on dev0 +Device 0 queue 64 setup failed +All queues on dev 0 allocated: 64 ++ ------------------------------------------------------- + +== test: validation +dev:0000:b0:00.0, burst size: 1, num ops: 1, op type: RTE_BBDEV_OP_LDPC_DEC +Operation latency: + avg: 31202 cycles, 13.5661 us + min: 31202 cycles, 13.5661 us + max: 31202 cycles, 13.5661 us +TestCase [ 0] : validation_tc passed + + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + + + Test Suite Summary : BBdev Validation Tests + + Tests Total : 1 + + Tests Skipped : 0 + + Tests Passed : 1 + + Tests Failed : 0 + + Tests Lasted : 413.594 ms + + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``` +The output of the application should indicate a total of ‘1’ tests and ‘1’ test passing; this concludes the validation of the FEC VF working correctly inside a K8s pod. + +## Reference +- [Intel® vRAN Dedicated Accelerator ACC100](https://networkbuilders.intel.com/solutionslibrary/intel-vran-dedicated-accelerator-acc100-product-brief) diff --git a/doc/enhanced-platform-awareness/openness-bios.md b/doc/building-blocks/enhanced-platform-awareness/openness-bios.md similarity index 95% rename from doc/enhanced-platform-awareness/openness-bios.md rename to doc/building-blocks/enhanced-platform-awareness/openness-bios.md index 79ea4594..9ea4f6be 100644 --- a/doc/enhanced-platform-awareness/openness-bios.md +++ b/doc/building-blocks/enhanced-platform-awareness/openness-bios.md @@ -26,7 +26,7 @@ OpenNESS provides a reference implementation demonstrating how to configure low- >**NOTE**: The Intel® System Configuration Utility is not intended for and should not be used on any non-Intel server products. -The OpenNESS Network Edge implementation goes a step further and provides an automated process using Kubernetes\* to save and restore BIOS and firmware settings. To do this, the Intel® System Configuration Utility is packaged as a pod and deployed as a Kubernetes job that uses ConfigMap. 
This ConfigMap provides a mount point that has the BIOS and firmware profile that needs to be used for the worker node. A platform reboot is required for the BIOS and firmware configuration to be applied. To enable this, the BIOS and firmware job is deployed as a privileged pod. +The OpenNESS Network Edge implementation goes a step further and provides an automated process using Kubernetes\* to save and restore BIOS and firmware settings. To do this, the Intel® System Configuration Utility is packaged as a pod and deployed as a Kubernetes job that uses ConfigMap. This ConfigMap provides a mount point that has the BIOS and firmware profile that needs to be used for the node. A platform reboot is required for the BIOS and firmware configuration to be applied. To enable this, the BIOS and firmware job is deployed as a privileged pod. ![BIOS and Firmware configuration on OpenNESS](biosfw-images/openness_biosfw.png) @@ -61,4 +61,4 @@ To enable BIOSFW, perform the following steps: * Use `kubectl biosfw direct /d BIOSSETTINGS "Quiet Boot"` to run `syscfg /d BIOSSETTINGS "Quiet Boot"` on `` node. ## Reference -- [Intel Save and Restore System Configuration Utility (SYSCFG)](https://downloadcenter.intel.com/download/28713/Save-and-Restore-System-Configuration-Utility-SYSCFG-) \ No newline at end of file +- [Intel Save and Restore System Configuration Utility (SYSCFG)](https://downloadcenter.intel.com/download/28713/Save-and-Restore-System-Configuration-Utility-SYSCFG-) diff --git a/doc/enhanced-platform-awareness/openness-dedicated-core.md b/doc/building-blocks/enhanced-platform-awareness/openness-dedicated-core.md similarity index 100% rename from doc/enhanced-platform-awareness/openness-dedicated-core.md rename to doc/building-blocks/enhanced-platform-awareness/openness-dedicated-core.md diff --git a/doc/enhanced-platform-awareness/openness-fpga.md b/doc/building-blocks/enhanced-platform-awareness/openness-fpga.md similarity index 78% rename from doc/enhanced-platform-awareness/openness-fpga.md rename to doc/building-blocks/enhanced-platform-awareness/openness-fpga.md index ecfee506..ef302795 100644 --- a/doc/enhanced-platform-awareness/openness-fpga.md +++ b/doc/building-blocks/enhanced-platform-awareness/openness-fpga.md @@ -66,19 +66,26 @@ When the Intel® FPGA PAC N3000 is programmed with a vRAN 5G image, it exposes t The full pipeline of preparing the device for workload deployment and deploying the workload can be divided into the following stages (subfeatures): -- Programming the FPGA with RTL factory and user images: feature installation via Ansible\* automation and a K8s kubectl plugin are provided to use the feature. +- Programming the FPGA with RTL user images: feature installation via Ansible\* automation and a K8s kubectl plugin are provided to use the feature. - Enabling SRIOV, binding devices to appropriate drivers, and the creation of VFs: delivered as part of the Edge Nodes Ansible automation. -- Queue configuration of FPGAs PF/VFs with an aid of DPDK Baseband Device (BBDEV) config utility: Docker\* image creation delivered as part of the Edge Nodes Ansible automation (dependency on the config utility from the FlexRAN package). The images being pushed to a local Docker registry, sample pod (job) deployment via Helm charts. +- Queue configuration of FPGAs PF/VFs with an aid of DPDK Baseband Device (BBDEV) config utility: Docker\* image creation delivered as part of the Edge Nodes Ansible automation. 
The images being pushed to a local Harbor registry, sample pod (job) deployment via Helm charts. - Enabling orchestration and allocation of the devices (VFs) to non-root pods requesting the devices: leveraging the support of FPGA SRIOV VFs from K8s SRIOV Device Plugin. K8s plugin deployment is delivered as part of the Edge Controller's Ansible automation. -- Simple sample BBDEV application to validate the pipeline (i.e., SRIOV creation - Queue configuration - Device orchestration - Pod deployment): Script delivery and instructions to build Docker image for sample application delivered as part of Edge Apps package. +- Simple sample DPDK BBDEV application to validate the pipeline (i.e., SRIOV creation - Queue configuration - Device orchestration - Pod deployment): Script delivery and instructions to build Docker image for sample application delivered as part of Edge Apps package. It is assumed that the FPGA is always used with the OpenNESS Network Edge, paired with the Multus\* plugin to enable the workload pod with a default K8s network interface. The Multus CNI is a container network interface (CNI) plugin for Kubernetes that enables attaching multiple network interfaces to pods. +It is also assumed that the Intel® FPGA PAC N3000 MAX10 build version of the card used for OpenNESS setup is at least version 2.0.x and has RoT capability, ie: +```shell +Board Management Controller, MAX10 NIOS FW version D.2.0.19 +Board Management Controller, MAX10 Build version D.2.0.6 +``` +For information on how to update and flash the MAX10 to supported version see [Intel® FPGA PAC N3000 documentation](https://www.intel.com/content/www/us/en/programmable/documentation/xgz1560360700260.html#wzl1570122399760). + ### FPGA (FEC) Ansible installation for OpenNESS Network Edge To run the OpenNESS package with FPGA (FEC) functionality, the feature needs to be enabled on both Edge Controller and Edge Node. #### OpenNESS Experience Kit -To enable FPGA support from OEK, change the variable `ne_opae_fpga_enable` in `group_vars/all/10-default.yml` to `true`: +To enable FPGA support from OEK, change the variable `ne_opae_fpga_enable` in `group_vars/all/10-default.yml` (or flavour alternative file) to `true`: ```yaml # group_vars/all/10-default.yml ne_opae_fpga_enable: true @@ -88,7 +95,7 @@ Additionally, SRIOV must be enabled in OpenNESS: ```yaml # group_vars/all/10-default.yml kubernetes_cnis: -- kubeovn +-
- sriov ``` @@ -110,15 +117,11 @@ fpga_userspace_vf: The following packages need to be placed into specific directories for the feature to work: -1. A clean copy of `bbdev_config_service` needs to be placed in the `openness-experience-kits/fpga_config` directory. The package can be obtained as part of the 19.10 release of FlexRAN. To obtain the package, contact your Intel representative or visit the [Resource Design Center](https://cdrdv2.intel.com/v1/dl/getContent/615743 ). - -2. The OPAE package `n3000-1-3-5-beta-rte-setup.zip` needs to be placed inside the `openness-experience-kits/opae_fpga` directory. The package can be obtained as part of Intel® FPGA PAC N3000 OPAE beta release. To obtain the package, contact your Intel representative or visit the [Resource Design Center](https://cdrdv2.intel.com/v1/dl/getContent/616082). - -3. The factory image configuration package `n3000-1-3-5-beta-cfg-2x2x25g-setup.zip` needs to be placed inside the `openness-experience-kits/opae_fpga` directory. The package can be obtained as part of PAC N3000 OPAE beta release. To obtain the package, contact your Intel representative or visit the [Resource Design Center](https://cdrdv2.intel.com/v1/dl/getContent/616080). +1. The OPAE package `OPAE_SDK_1.3.7-5_el7.zip` needs to be placed inside the `ido-openness-experience-kits/opae_fpga` directory. The package can be obtained as part of Intel® FPGA PAC N3000 OPAE beta release. To obtain the package, contact your Intel representative. Run setup script `deploy_ne.sh`. -After a successful deployment, the following pods will be available in the cluster: +After a successful deployment, the following pods will be available in the cluster (CNI pods may vary depending on deployment): ```shell kubectl get pods -A @@ -154,33 +157,45 @@ openness syslog-ng-br92z 1/1 Running 0 ``` ### FPGA programming and telemetry on OpenNESS Network Edge -To program the FPGA factory image (one-time secure upgrade) or the user image (5GN FEC vRAN) of the Intel® FPGA PAC N3000 via OPAE a `kubectl` plugin for K8s is provided. The plugin also allows for obtaining basic FPGA telemetry. This plugin will deploy K8s jobs that run to completion on the desired host and display the logs/output of the command. +It is expected the the factory image of the Intel® FPGA PAC N3000 is of version 2.0.x. To program the user image (5GN FEC vRAN) of the Intel® FPGA PAC N3000 via OPAE a `kubectl` plugin for K8s is provided - it is expected that the provided user image is signed or un-signed (development purposes) by the user, see the [documentation](https://www.intel.com/content/www/us/en/programmable/documentation/pei1570494724826.html) for more information on how to sign/un-sign the image file. The plugin also allows for obtaining basic FPGA telemetry. This plugin will deploy K8s jobs that run to completion on the desired host and display the logs/output of the command. The following are the operations supported by the `kubectl rsu` K8s plugin. They are run from the Edge Controller: -1. To display currently supported capabilities and information on how to use them, run: +1. 
To check the version of the MAX10 image and FW run: +``` +kubectl rsu get fme -n +Board Management Controller, MAX10 NIOS FW version D.2.0.19 +Board Management Controller, MAX10 Build version D.2.0.6 +//****** FME ******// +Object Id : 0xEF00000 +PCIe s:b:d.f : 0000:1b:00.0 +Device Id : 0x0b30 +Numa Node : 0 +Ports Num : 01 +Bitstream Id : 0x2315842A010601 +Bitstream Version : 0.2.3 +Pr Interface Id : a5d72a3c-c8b0-4939-912c-f715e5dc10ca +Boot Page : user +``` +2. To display currently supported capabilities and information on how to use them, run: ``` kubectl rsu -h ``` -2. To run one time secure upgrade of the factory image, run: -``` -kubectl rsu flash -n -``` -3. To display information about RSU supported devices that can be used to program the FPGA, and to list FPGA user images available on the host, run: +2. To display information about RSU supported devices that can be used to program the FPGA, and to list FPGA user images available on the host, run: ``` kubectl rsu discover -n ``` -4. To copy and sign a user image to the desired platform, run the following command to copy an already signed image add `--no-sign` to the command: -To obtain a user FPGA image for 5GNR vRAN such as `ldpc5g_2x2x25g`, contact your Intel Representative. +3. To copy and the user image to the desired platform, run the following command. +To obtain a user FPGA image for 5GNR vRAN such as `2x2x25G-5GLDPC-v1.6.1-3.0.0`, contact your Intel Representative. ``` kubectl rsu load -f -n ``` -5. To program the FPGA with user image (vRAN for 5GNR), run: +4. To program the FPGA with user image (vRAN for 5GNR), run: ``` kubectl rsu program -f -n -d ``` -6. To obtain basic telemetry (temperature, power usage, and FPGA image information, etc.), run: +5. To obtain basic telemetry (temperature, power usage, and FPGA image information, etc.), run: ``` kubectl rsu get temp -n kubectl rsu get power -n @@ -200,62 +215,28 @@ Pr Interface Id : a5d72a3c-c8b0-4939-912c-f715e5dc10ca Boot Page : user ``` 7. For more information on the usage of each `kubectl rsu` plugin capability, run each command with the `-h` argument. +8. To monitor progress of deployed jobs run: +``` +kubectl logs -f +``` To run vRAN workloads on the Intel® FPGA PAC N3000, the FPGA must be programmed with the appropriate factory and user images per the instructions. -Additionally, in a scenario where the user wants to manually deploy a K8s job for OPAE without the use of the `kubectl rsu` plugin, the following sample `.yml` specification can be used as a template. 
The provided `args` needs to be changed accordingly; this job can be run with `kubectl create -f sample.yml`: - -```yaml -apiVersion: batch/v1 -kind: Job -metadata: - name: fpga-opae-job -spec: - template: - spec: - containers: - - securityContext: - privileged: true - name: fpga-opea - image: fpga-opae-pacn3000:1.0 - imagePullPolicy: Never - command: [ "/bin/bash", "-c", "--" ] - args: [ "./check_if_modules_loaded.sh && fpgasupdate /root/images/ && rsu bmcimg (RSU_PCI_bus_function_id)" ] - volumeMounts: - - name: class - mountPath: /sys/devices - readOnly: false - - name: image-dir - mountPath: /root/images - readOnly: false - volumes: - - hostPath: - path: "/sys/devices" - name: class - - hostPath: - path: "/temp/vran_images" - name: image-dir - restartPolicy: Never - nodeSelector: - kubernetes.io/hostname: samplenodename - - backoffLimit: 0 -``` #### Telemetry monitoring - Support for monitoring temperature and power telemetry of the Intel® FPGA PAC N3000 is also provided from OpenNESS with a CollectD collector that is configured for the `flexran` flavor. Intel® FPGA PAC N3000 telemetry monitoring is provided to CollectD as a plugin. It collects the temperature and power metrics from the card and exposes them to Prometheus\* from which the user can easily access the metrics. For more information on how to enable telemetry for FPGA in OpenNESS, see the [telemetry whitepaper](https://github.com/open-ness/ido-specs/blob/master/doc/enhanced-platform-awareness/openness-telemetry.md#collectd). + Support for monitoring temperature and power telemetry of the Intel® FPGA PAC N3000 is also provided from OpenNESS with a CollectD collector that is configured for the `flexran` flavor. Intel® FPGA PAC N3000 telemetry monitoring is provided to CollectD as a plugin. It collects the temperature and power metrics from the card and exposes them to Prometheus\* from which the user can easily access the metrics. For more information on how to enable telemetry for FPGA in OpenNESS, see the [telemetry whitepaper](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-telemetry.md#collectd). ![PACN3000 telemetry](fpga-images/openness-fpga4.png) ### FEC VF configuration for OpenNESS Network Edge -To configure the VFs with the necessary number of queues for the vRAN workload the BBDEV configuration utility is run as a job within a privileged container. The configuration utility is available to run as a Helm chart available from `/opt/openness-helm-charts/fpga_config`. +To configure the VFs with the necessary number of queues for the vRAN workload the BBDEV configuration utility is run as a job within a privileged container. The configuration utility is available to run as a Helm chart available from `/opt/openness/helm-charts/bb_config`. -Sample configMap, which can be configured by changing values if other than typical config is required, with a profile for the queue configuration is provided as part of Helm chart template `/opt/openness-helm-charts/fpga_config/templates/fpga-config.yaml` populated with values from `/opt/openness-helm-charts/fpga_config/values.yaml`. Helm chart installation requires a provision of hostname for the target node during job deployment. 
+Sample configMap, which can be configured by changing values if other than typical configuration is required, with a profile for the queue configuration, is provided as part of Helm chart template `/opt/openness/helm-charts/bb_config/templates/fpga-config.yaml` populated with values from `/opt/openness/helm-charts/bb_config/values.yaml`. Helm chart installation requires a provision of hostname for the target node during job deployment. -Install the Helm chart by providing configmap and BBDEV config utility job with the following command from `/opt/openness-helm-charts/` on Edge Controller: +Install the Helm chart by providing configmap and BBDEV config utility job with the following command from `/opt/openness/helm-charts/` on Edge Controller: ```shell -helm install --set nodeName= intel-fpga-cfg fpga_config +helm install --set nodeName= intel-fpga-cfg bb_config ``` Check if the job has completed and that the state of the pod created for this job is “Completed”. Check the logs of the pod to see a complete successful configuration. @@ -268,7 +249,7 @@ Expected: `Mode of operation = VF-mode FPGA_LTE PF [0000:xx:00.0] configuration To redeploy the job on another node, use the following command: ``` -helm upgrade --set nodeName= intel-fpga-cfg fpga_config +helm upgrade --set nodeName= intel-fpga-cfg bb_config ``` To uninstall the job, run: @@ -286,7 +267,7 @@ kubectl get node -o json | jq '.status.allocatable' ``` To request the device as a resource in the pod, add the request for the resource into the pod specification file by specifying its name and amount of resources required. If the resource is not available or the amount of resources requested is greater than the number of resources available, the pod status will be “Pending” until the resource is available. -**NOTE**: The name of the resource must match the name specified in the configMap for the K8s devices plugin (`./fpga/configMap.yml`). +**NOTE**: The name of the resource must match the name specified in the configMap for the K8s devices plugin [configMap.yml](https://github.com/open-ness/openness-experience-kits/blob/master/roles/kubernetes/cni/sriov/controlplane/files/sriov/templates/configMap.yml). A sample pod requesting the FPGA (FEC) VF may look like this: @@ -322,7 +303,7 @@ kubectl exec -it test -- printenv | grep FEC To check the number of devices currently allocated to pods, run (and search for 'Allocated Resources'): ``` -kubectl describe node +kubectl describe node ``` ### Verifying Application POD access and usage of FPGA on OpenNESS Network Edge @@ -336,13 +317,11 @@ Navigate to: edgeapps/fpga-sample-app ``` -Copy the necessary `dpdk_19.11_new.patch` file into the directory. This patch is available as part of FlexRAN 20.02 release package. To obtain the FlexRAN patch allowing 5G functionality for BBDEV in DPDK, contact your Intel representative or visit the [Resource Design Center](https://cdrdv2.intel.com/v1/dl/getContent/615743). - Build the image: `./build-image.sh` -From the Edge Controller, deploy the application pod. The pod specification is located at `/fpga`: +From the Edge Controlplane, deploy the application pod. 
The pod specification is located at `/opt/openness/edgenode/edgecontroller/fpga/fpga-sample-app.yaml`: ``` kubectl create -f fpga-sample-app.yaml @@ -354,7 +333,7 @@ Execute into the application pod and run the sample app: kubectl exec -it pod-bbdev-sample-app -- /bin/bash # run test application -./test-bbdev.py --testapp-path ./testbbdev -e="-w ${PCIDEVICE_INTEL_COM_INTEL_FEC_5G}" -i -n 1 -b 1 -l 1 -c validation -v ./test_vectors/ldpc_dec_v7813.data +./test-bbdev.py --testapp-path ./dpdk-test-bbdev -e="-w ${PCIDEVICE_INTEL_COM_INTEL_FEC_5G}" -i -n 1 -b 1 -l 1 -c validation -v ldpc_dec_v7813.data # sample output EAL: Detected 48 lcore(s) diff --git a/doc/enhanced-platform-awareness/openness-hugepage.md b/doc/building-blocks/enhanced-platform-awareness/openness-hugepage.md similarity index 100% rename from doc/enhanced-platform-awareness/openness-hugepage.md rename to doc/building-blocks/enhanced-platform-awareness/openness-hugepage.md diff --git a/doc/enhanced-platform-awareness/openness-kubernetes-dashboard.md b/doc/building-blocks/enhanced-platform-awareness/openness-kubernetes-dashboard.md similarity index 100% rename from doc/enhanced-platform-awareness/openness-kubernetes-dashboard.md rename to doc/building-blocks/enhanced-platform-awareness/openness-kubernetes-dashboard.md diff --git a/doc/enhanced-platform-awareness/openness-node-feature-discovery.md b/doc/building-blocks/enhanced-platform-awareness/openness-node-feature-discovery.md similarity index 100% rename from doc/enhanced-platform-awareness/openness-node-feature-discovery.md rename to doc/building-blocks/enhanced-platform-awareness/openness-node-feature-discovery.md diff --git a/doc/enhanced-platform-awareness/openness-rmd.md b/doc/building-blocks/enhanced-platform-awareness/openness-rmd.md similarity index 93% rename from doc/enhanced-platform-awareness/openness-rmd.md rename to doc/building-blocks/enhanced-platform-awareness/openness-rmd.md index c4e7e202..2a0242a8 100644 --- a/doc/enhanced-platform-awareness/openness-rmd.md +++ b/doc/building-blocks/enhanced-platform-awareness/openness-rmd.md @@ -133,9 +133,10 @@ metadata: spec: # Add fields here coreIds: ["INFERRED_CORE_ID"] - cache: - max: 6 - min: 6 + rdt: + cache: + max: 6 + min: 6 nodes: ["YOUR_WORKER_NODE_HERE"] ``` Apply and validate it: @@ -161,19 +162,10 @@ Events: ``` ### Start monitoring the cache usage with the PQOS tool ```bash -# Install - once off -git clone https://github.com/intel/intel-cmt-cat.git -make install # Run it -pqos +pqos -I ``` -If the PQOS tool fails to start, download the following tool: -```bash -git clone https://github.com/opcm/pcm.git -make install -pcm # run it for a second, then ctrl-c -``` -After you start and stop pcm, you should be able to run the pqos tool without a further problem. Look especially at the cores your pods got assigned. The LLC column (last level cache / L3 cache) should change after you run the `stress-ng` commands below. +Look especially at the cores your pods got assigned. The LLC column (last level cache / L3 cache) should change after you run the `stress-ng` commands below. 
### Starting the stress-ng command on the prepared pods Pod1 diff --git a/doc/enhanced-platform-awareness/openness-sriov-multiple-interfaces.md b/doc/building-blocks/enhanced-platform-awareness/openness-sriov-multiple-interfaces.md similarity index 100% rename from doc/enhanced-platform-awareness/openness-sriov-multiple-interfaces.md rename to doc/building-blocks/enhanced-platform-awareness/openness-sriov-multiple-interfaces.md diff --git a/doc/enhanced-platform-awareness/openness-telemetry.md b/doc/building-blocks/enhanced-platform-awareness/openness-telemetry.md similarity index 93% rename from doc/enhanced-platform-awareness/openness-telemetry.md rename to doc/building-blocks/enhanced-platform-awareness/openness-telemetry.md index 84f204c6..0ef6db58 100644 --- a/doc/enhanced-platform-awareness/openness-telemetry.md +++ b/doc/building-blocks/enhanced-platform-awareness/openness-telemetry.md @@ -89,7 +89,7 @@ In OpenNESS, Prometheus is deployed as a K8s Deployment with a single pod/replic ### Grafana -Grafana is an open-source visualization and analytics software. It takes the data provided from external sources and displays relevant data to the user via dashboards. It enables the user to create customized dashboards based on the information the user wants to monitor and allows for the provision of additional data sources. In OpenNESS, the Grafana pod is deployed on the control plane node as a K8s `Deployment` type and is by default provisioned with data from Prometheus. It is enabled by default in OEK and can be enabled/disabled by changing the `telemetry_grafana_enable` flag. The password to gain access to the dashboard can be altered with the `telemetry_grafana_pass` flag. +Grafana is an open-source visualization and analytics software. It takes the data provided from external sources and displays relevant data to the user via dashboards. It enables the user to create customized dashboards based on the information the user wants to monitor and allows for the provision of additional data sources. In OpenNESS, the Grafana pod is deployed on a control plane as a K8s `Deployment` type and is by default provisioned with data from Prometheus. It is enabled by default in OEK and can be enabled/disabled by changing the `telemetry_grafana_enable` flag. #### Usage @@ -100,8 +100,11 @@ Grafana is an open-source visualization and analytics software. It takes the dat http://:32000 ``` -2. Log in to the dashboard using the default credentials (login: admin, password: grafana) - ![Grafana login](telemetry-images/grafana_login.png) +2. Access the dashboard + 1. Extract grafana password by running the following command on Kubernetes controller: + ```kubectl get secrets/grafana -n telemetry -o json | jq -r '.data."admin-password"' | base64 -d``` + 2. Log in to the dashboard using the password from the previous step and `admin` login + ![Grafana login](telemetry-images/grafana_login.png) 3. To create a new dashboard, navigate to `http://:32000/dashboards`. ![Grafana dash](telemetry-images/grafana-new-dash.png) 4. Navigate to dashboard settings. @@ -146,7 +149,7 @@ Node Exporter is a Prometheus exporter that exposes hardware and OS metrics of * #### VCAC-A -Node Exporter also enables exposure of telemetry from Intel's VCAC-A card to Prometheus. The telemetry from the VCAC-A card is saved into a text file; this text file is used as an input to the Node Exporter. 
More information on VCAC-A usage in OpenNESS is available [here](https://github.com/open-ness/ido-specs/blob/master/doc/enhanced-platform-awareness/openness-vcac-a.md). +Node Exporter also enables exposure of telemetry from Intel's VCAC-A card to Prometheus. The telemetry from the VCAC-A card is saved into a text file; this text file is used as an input to the Node Exporter. More information on VCAC-A usage in OpenNESS is available [here](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-vcac-a.md). ### cAdvisor @@ -161,20 +164,7 @@ It collects and aggregates data about running containers such as resource usage, ### CollectD -CollectD is a daemon/collector enabling the collection of hardware metrics from computers and network equipment. It provides support for CollectD plugins, which extends its functionality for specific metrics collection such as Intel® RDT, Intel PMU, and ovs-dpdk. The metrics collected are easily exposed to the Prometheus monitoring tool via the usage of the `write_prometheus` plugin. In OpenNESS, CollectD is supported with the help of the [OPNFV Barometer project](https://wiki.opnfv.org/display/fastpath/Barometer+Home) - using its Docker image and available plugins. As part of the OpenNESS release, a CollectD plugin for - -Search Results - - -Enter your search term: - - -042 - -Your search came up with the following results: - -Intel® FPGA Accelerator Packaging Utility -Intel® FPGA Programmable Acceleration Card (Intel® FPGA PAC) N3000 telemetry is now available from OpenNESS (power and temperature telemetry). In OpenNESS, the CollectD pod is deployed as a K8s `Daemonset` on every available Edge Node, and it is deployed as a privileged container. +CollectD is a daemon/collector enabling the collection of hardware metrics from computers and network equipment. It provides support for CollectD plugins, which extends its functionality for specific metrics collection such as Intel® RDT, Intel PMU, and ovs-dpdk. The metrics collected are easily exposed to the Prometheus monitoring tool via the usage of the `write_prometheus` plugin. In OpenNESS, CollectD is supported with the help of the [OPNFV Barometer project](https://wiki.opnfv.org/display/fastpath/Barometer+Home) - using its Docker image and available plugins. As part of the OpenNESS release, a CollectD plugin for Intel® FPGA Programmable Acceleration Card (Intel® FPGA PAC) N3000 telemetry is now available from OpenNESS (power and temperature telemetry). In OpenNESS, the CollectD pod is deployed as a K8s `Daemonset` on every available Edge Node, and it is deployed as a privileged container. #### Plugins @@ -199,7 +189,7 @@ The various OEK flavors are enabled for CollectD deployment as follows: 1. Select the flavor for the deployment of CollectD from the OEK during OpenNESS deployment; the flavor is to be selected with `telemetry_flavor: `. 
- In the event of using the `flexran` profile, `n3000-1-3-5-beta-cfg-2x2x25g-setup.zip` and `n3000-1-3-5-beta-rte-setup.zip` need to be available in `./openness-experience-kits/opae_fpga` directory; for details about the packages, see [FPGA support in OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/enhanced-platform-awareness/openness-fpga.md#edge-controller) + In the event of using the `flexran` profile, `OPAE_SDK_1.3.7-5_el7.zip` needs to be available in `./ido-openness-experience-kits/opae_fpga` directory; for details about the packages, see [FPGA support in OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-fpga.md#edge-controller) 2. To access metrics available from CollectD, connect to the Prometheus [dashboard](#prometheus). 3. Look up an example the CollectD metric by specifying the metric name (ie. `collectd_cpufreq`) and pressing `execute` under the `graph` tab. ![CollectD Metric](telemetry-images/collectd_metric.png) @@ -218,11 +208,11 @@ OpenCensus exporter/receiver is used in the default OpenNESS configuration for a #### Usage 1. Pull the Edge Apps repository. -2. Build the sample telemetry application Docker image and push to the local Docker registry from the Edge Apps repo. +2. Build the sample telemetry application Docker image and push to the local Harbor registry from the Edge Apps repo. ```shell cd edgeapps/applications/telemetry-sample-app/image/ - ./build.sh push + ./build.sh push ``` 3. Create a secret using a root-ca created as part of OEK telemetry deployment (this will authorize against the Collector certificates). @@ -232,7 +222,7 @@ OpenCensus exporter/receiver is used in the default OpenNESS configuration for a ./create-secret.sh ``` -4. Configure and deploy the sample telemetry application with the side-car OpenTelemetry agent from the Edge Apps repo using Helm. Edit `edgeapps/applications/telemetry-sample-app/opentelemetry-agent/values.yaml`, and change `app:image:repository: 10.0.0.1:5000/intel/metricapp` to the IP address of the Docker registry. +4. Configure and deploy the sample telemetry application with the side-car OpenTelemetry agent from the Edge Apps repo using Helm. Edit `edgeapps/applications/telemetry-sample-app/opentelemetry-agent/values.yaml`, and change `app:image:repository: 10.0.0.1:30003/intel/metricapp` to the IP address of the Harbor registry. ```shell cd edgeapps/applications/telemetry-sample-app/ @@ -273,7 +263,7 @@ Processor Counter Monitor (PCM) is an application programming interface (API) an [Telemetry Aware Scheduler](https://github.com/intel/telemetry-aware-scheduling) enables the user to make K8s scheduling decisions based on the metrics available from telemetry. This is crucial for a variety of Edge use-cases and workloads where it is critical that the workloads are balanced and deployed on the best suitable node based on hardware ability and performance. The user can create a set of policies defining the rules to which pod placement must adhere. Functionality to de-schedule pods from given nodes if a rule is violated is also provided. TAS consists of a TAS Extender which is an extension to the K8s scheduler. It correlates the scheduling policies with deployment strategies and returns decisions to the K8s Scheduler. It also consists of a TAS Controller that consumes TAS policies and makes them locally available to TAS components. A metrics pipeline that exposes metrics to a K8s API must be established for TAS to be able to read in the metrics. 
In OpenNESS, the metrics pipeline consists of: - Prometheus: responsible for collecting and providing metrics. - Prometheus Adapter: exposes the metrics from Prometheus to a K8s API and is configured to provide metrics from Node Exporter and CollectD collectors. -TAS is enabled by default in OEK, a sample scheduling policy for TAS is provided for [VCAC-A node deployment](https://github.com/open-ness/ido-specs/blob/master/doc/enhanced-platform-awareness/openness-vcac-a.md#telemetry-support). +TAS is enabled by default in OEK, a sample scheduling policy for TAS is provided for [VCAC-A node deployment](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-vcac-a.md#telemetry-support). #### Usage diff --git a/doc/enhanced-platform-awareness/openness-topology-manager.md b/doc/building-blocks/enhanced-platform-awareness/openness-topology-manager.md similarity index 95% rename from doc/enhanced-platform-awareness/openness-topology-manager.md rename to doc/building-blocks/enhanced-platform-awareness/openness-topology-manager.md index f8e570ec..e5f60e13 100644 --- a/doc/enhanced-platform-awareness/openness-topology-manager.md +++ b/doc/building-blocks/enhanced-platform-awareness/openness-topology-manager.md @@ -35,13 +35,13 @@ Topology Manager is a Kubelet component that aims to co-ordinate the set of comp Topology Manager is enabled by default with a `best-effort` policy. You can change the settings before OpenNESS installation by editing the `group_vars/all/10-default.yml` file: ```yaml -### Kubernetes Topology Manager configuration (for worker) +### Kubernetes Topology Manager configuration (for a node) # CPU settings cpu: # CPU policy - possible values: none (disabled), static (default) policy: "static" # Reserved CPUs - reserved_cpus: 1 + reserved_cpus: "0,1" # Kubernetes Topology Manager policy - possible values: none (disabled), best-effort (default), restricted, single-numa-node topology_manager: @@ -50,7 +50,7 @@ topology_manager: Where `` can be `none`, `best-effort`, `restricted` or `single-numa-node`. Refer to the [Kubernetes Documentation](https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/) for details of these policies. -You can also set `kubernetes_reserved_cpus` to a number that suits you best. This parameter specifies the number of logical CPUs that will be reserved for a Kubernetes system Pods. +You can also set `reserved_cpus` to a number that suits you best. This parameter specifies the logical CPUs that will be reserved for a Kubernetes system Pods and OS daemons. ### Usage To use Topology Manager create a Pod with a `guaranteed` QoS class (requests equal to limits). For example: diff --git a/doc/enhanced-platform-awareness/openness-vcac-a.md b/doc/building-blocks/enhanced-platform-awareness/openness-vcac-a.md similarity index 89% rename from doc/enhanced-platform-awareness/openness-vcac-a.md rename to doc/building-blocks/enhanced-platform-awareness/openness-vcac-a.md index 915ab297..52a77028 100644 --- a/doc/enhanced-platform-awareness/openness-vcac-a.md +++ b/doc/building-blocks/enhanced-platform-awareness/openness-vcac-a.md @@ -27,12 +27,12 @@ Equipped with a CPU, the VCAC-A card is installed with a standalone operating sy > * The full acronym *VCAC-A* is loosely used when talking about the PCIe card. The VCAC-A installation involves a [two-stage build](https://github.com/OpenVisualCloud/VCAC-SW-Analytics/): -1. 
VCA host kernel build and configuration: this stage patches the CentOS\* 7.6 kernel and builds the necessary modules and dependencies. +1. VCA host kernel build and configuration: this stage patches the CentOS\* 7.8 kernel and builds the necessary modules and dependencies. 2. VCAC-A system image (VCAD) generation: this stage builds an Ubuntu\*-based (VCAD) image that is loaded on the VCAC-A card. -The OEK automates the overall build and installation process of the VCAC-A card by joining it as a standalone logical worker node to the OpenNESS cluster. When successful, the OpenNESS controller is capable of selectively scheduling workloads on the "VCA node" for proximity to the hardware acceleration. +The OEK automates the overall build and installation process of the VCAC-A card by joining it as a standalone logical node to the OpenNESS cluster. The OEK supports force build VCAC-A system image (VCAD) via flag (force\_build\_enable: true (default value)), it also allows the customer to disable the flag to re-use last system image built. When successful, the OpenNESS controller is capable of selectively scheduling workloads on the "VCA node" for proximity to the hardware acceleration. -When onboarding applications such as [Open Visual Cloud Smart City Sample](https://github.com/open-ness/edgeapps/tree/master/applications/smart-city-app) with the existence of VCAC-A, the OpenNESS controller schedules all the application pods onto the edge worker node except the *video analytics* processing that is scheduled on the VCA node as shown in the figure below. +When onboarding applications such as [Open Visual Cloud Smart City Sample](https://github.com/open-ness/edgeapps/tree/master/applications/smart-city-app) with the existence of VCAC-A, the OpenNESS controller schedules all the application pods onto the edge node except the *video analytics* processing that is scheduled on the VCA node as shown in the figure below. ![Smart City Setup](vcaca-images/smart-city-app-vcac-a.png) @@ -55,11 +55,11 @@ affinity: ``` ### VCA Pools -Another construct used when deploying OpenNESS is the `VCA pool`, which is a similar concept to the *Node pools* that are supported by [Azure\* Kubernetes\* Service](https://docs.microsoft.com/en-us/azure/aks/use-multiple-node-pools) and [Google\* Kubernetes Engine](https://cloud.google.com/kubernetes-engine/docs/concepts/node-pools). The VCA pool is a unique label assigned to the group of VCA nodes that are plugged into one edge worker node. This enables the scheduler to execute related workloads within the same VCA pool (i.e., within the same edge node physical space). The VCA pool is assigned the label `vcac-pool=`, which reflects the hostname of the VCA host that all the VCAC-A cards are connected to. +Another construct used when deploying OpenNESS is the `VCA pool`, which is a similar concept to the *Node pools* that are supported by [Azure\* Kubernetes\* Service](https://docs.microsoft.com/en-us/azure/aks/use-multiple-node-pools) and [Google\* Kubernetes Engine](https://cloud.google.com/kubernetes-engine/docs/concepts/node-pools). The VCA pool is a unique label assigned to the group of VCA nodes that are plugged into one node. This enables the scheduler to execute related workloads within the same VCA pool (i.e., within the same edge node physical space). The VCA pool is assigned the label `vcac-pool=`, which reflects the hostname of the VCA host that all the VCAC-A cards are connected to. Also, the VCA nodes follow a special naming convention. 
They are assigned the name of their host nodes appended with *vca* keyword and a number (`-vcaX`). The number is an incremental index to differentiate between multiple VCAC-A cards that are installed. -In the example below, this is a cluster composed of 1 master `silpixa00399671`, 1 VCA host `silpixa00400194`, and 3 VCAC-A cards: `silpixa00400194-vca1`, `silpixa00400194-vca2`, and `silpixa00400194-vca3`. The 3 VCAC-A cards are connected to the node `silpixa00400194`. +In the example below, this is a cluster composed of 1 control plane `silpixa00399671`, 1 VCA host `silpixa00400194`, and 3 VCAC-A cards: `silpixa00400194-vca1`, `silpixa00400194-vca2`, and `silpixa00400194-vca3`. The 3 VCAC-A cards are connected to the node `silpixa00400194`. ```shell $ kubectl get nodes NAME STATUS ROLES AGE VERSION @@ -166,11 +166,11 @@ _Figure - Using VCAC-A Telemetry with OpenNESS_ 4. Now that the VPU device usage became 60, when the `OpenVINO` application turns up, it gets scheduled on VCA pool B in fulfillment of the policy. ## Media-Analytics-VCA Flavor -The pre-defined OpenNESS flavor *media-analytics-vca* is provided to provision an optimized system configuration for media analytics workloads leveraging VCAC-A acceleration. This flavor is applied through the OEK playbook as described in the [OpenNESS Flavors](../flavors) document and encompasses the VCAC-A installation. +The pre-defined OpenNESS flavor *media-analytics-vca* is provided to provision an optimized system configuration for media analytics workloads leveraging VCAC-A acceleration. This flavor is applied through the OEK playbook as described in the [OpenNESS Flavors](../flavors.md#media-analytics-flavor-with-vcac-a) document and encompasses the VCAC-A installation. The VCAC-A installation in OEK performs the following tasks: - Pull the release package from [Open Visual Cloud VCAC-A card media analytics software](https://github.com/OpenVisualCloud/VCAC-SW-Analytics) and the required dependencies -- Apply CentOS 7.6 kernel patches and build kernel RPM +- Apply CentOS 7.8 kernel patches and build kernel RPM - Apply module patches and build driver RPM - Build daemon utilities RPM - Install docker-ce and kubernetes on the VCA host diff --git a/doc/enhanced-platform-awareness/openness_hddl.md b/doc/building-blocks/enhanced-platform-awareness/openness_hddl.md similarity index 69% rename from doc/enhanced-platform-awareness/openness_hddl.md rename to doc/building-blocks/enhanced-platform-awareness/openness_hddl.md index 04b47bbe..5dfca7b6 100644 --- a/doc/enhanced-platform-awareness/openness_hddl.md +++ b/doc/building-blocks/enhanced-platform-awareness/openness_hddl.md @@ -27,17 +27,23 @@ Each implementation for each hardware is an inference engine plugin. The plugin for the Intel® Movidius™ Myriad™ X HDDL solution, or IE HDDL plugin for short, supports the Intel® Movidius™ Myriad™ X HDDL Solution hardware PCIe card. It communicates with the Intel® Movidius™ Myriad™ X HDDL HAL API to manage multiple Intel® Movidius™ Myriad™ X devices in the card, and it schedules deep-learning neural networks and inference tasks to these devices. ## HDDL OpenNESS Integration +OpenNESS provides support for the deployment of OpenVINO™ applications and workloads accelerated through Intel® Vision Accelerator Design with the Intel® Movidius™ VPU HDDL-R add-in card. As a prerequisite for enabling the support, it is required for the HDDL add-in card to be inserted into the PCI slot of the Edge Node platform. 
The support is then enabled by setting the appropriate flag - 'ne_hddl_enable' in the '/group_vars/all/10-default.yml' before running the OEK playbooks. +> **NOTE** No pre-defined flavor is provided for HDDL. If the user wants to enable HDDL with a flavor, the flag 'ne_hddl_enable' can be set in the 'flavors//all.yml'. The node with the HDDL card inserted will be labelled as 'hddl-zone=true'. -OpenNESS provides support for the deployment of OpenVINO™ applications and workloads accelerated through Intel® Vision Accelerator Design with the Intel® Movidius™ VPU HDDL-R add-in card. As a prerequisite for enabling the support, it is required for the HDDL add-in card to be inserted into the PCI slot of the Edge Node platform. The support is then enabled by setting the appropriate flag in a configuration file before deployment of the Edge Node software toolkit. +The OEK automation script for HDDL involves the following steps: +- Download the HDDL DaemonSet yaml file from [Open Visual Cloud dockerfiles software](https://github.com/OpenVisualCloud/Dockerfiles) and template it with the specific configuration needed by OpenNESS, such as the OpenVINO™ version. +- Download OpenVINO™, install kernel-devel, and then install the HDDL dependencies. +- Build the HDDL Daemon image. +- Label the node with 'hddl-zone=true'. +- The HDDL Daemon is automatically brought up on the node with the label 'hddl-zone=true'. -With a correct configuration during the Edge Node bring up, an automated script will install all components necessary, such as kernel drivers required for the correct operation of the Vision Processing Units (VPUs) and 'udev rules' required for correct kernel driver assignment and booting of these devices on the Edge Node host platform. +The HDDL Daemon provides the backend service that manages VPUs and dispatches inference tasks to them. OpenVINO™-based applications that utilize the HDDL hardware need to access the device node '/dev/ion' and the domain socket under '/var/tmp' to communicate with the kernel and the HDDL service. +> **NOTE** With the default kernel used by the OpenNESS OEK, the ion driver will not be enabled by the OpenVINO™ toolkit, and shared memory ('/dev/shm') will be used as a fallback. For more details, refer to [installing_openvino_docker_linux](https://docs.openvinotoolkit.org/2020.2/_docs_install_guides_installing_openvino_docker_linux.html) -After the OpenNESS automated script installs all necessary tools and components for Edge Node bring up, another automated script responsible for deployment of all micro-services is run. As part of this particular script, a Docker\* container running a 'hddl-service' is started if the option for HDDL support is enabled. This container, which is part of OpenNESS system services, is a privileged container with 'SYS_ADMIN' capabilities and access to the host’s devices. - -The 'hddl-service' container is running the HDDL Daemon which is responsible for bringing up the HDDL Service within the container. The HDDL Service enables the communication between the OpenVino™ applications required to run inference on HDDL devices and VPUs needed to run the workload. This communication is done via a socket, which is created by the HDDL service. The default location of the socket is the `/var/tmp/`directory of the Edge Node host. The application container requiring HDDL acceleration needs to be exposed to this socket.
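For illustration only, the following is a minimal sketch (not taken from the OEK or Edge Apps packages) of how an application pod could be given access to the HDDL service socket; the pod name and image are hypothetical, and depending on the kernel the '/dev/ion' device node or the '/dev/shm' fallback may also need to be exposed:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: openvino-hddl-app              # hypothetical name
spec:
  nodeSelector:
    hddl-zone: "true"                  # run on a node labelled for HDDL
  containers:
  - name: openvino-app
    image: openvino-hddl-sample:1.0    # hypothetical image
    volumeMounts:
    - name: hddl-socket
      mountPath: /var/tmp              # domain socket created by the HDDL service
  volumes:
  - name: hddl-socket
    hostPath:
      path: /var/tmp
```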
![HDDL-Block-Diagram](hddl-images/hddlservice.png) + ## Summary The Intel® Movidius™ Myriad™ X HDDL solution integrates multiple Intel® Movidius™ Myriad™ X brand SoCs in a PCIe add-in card form factor or a module form factor to build a scalable, high-capacity, deep-learning solution. OpenNESS provides a toolkit for customers to put together deep-learning solutions at the edge. To take it further for efficient resource usage, OpenNESS provides a mechanism to use CPU or VPU depending on the load or any other criteria. diff --git a/doc/enhanced-platform-awareness/telemetry-images/architecture.svg b/doc/building-blocks/enhanced-platform-awareness/telemetry-images/architecture.svg similarity index 100% rename from doc/enhanced-platform-awareness/telemetry-images/architecture.svg rename to doc/building-blocks/enhanced-platform-awareness/telemetry-images/architecture.svg diff --git a/doc/enhanced-platform-awareness/telemetry-images/cadvisor_metric.png b/doc/building-blocks/enhanced-platform-awareness/telemetry-images/cadvisor_metric.png similarity index 100% rename from doc/enhanced-platform-awareness/telemetry-images/cadvisor_metric.png rename to doc/building-blocks/enhanced-platform-awareness/telemetry-images/cadvisor_metric.png diff --git a/doc/enhanced-platform-awareness/telemetry-images/collectd_metric.png b/doc/building-blocks/enhanced-platform-awareness/telemetry-images/collectd_metric.png similarity index 100% rename from doc/enhanced-platform-awareness/telemetry-images/collectd_metric.png rename to doc/building-blocks/enhanced-platform-awareness/telemetry-images/collectd_metric.png diff --git a/doc/enhanced-platform-awareness/telemetry-images/grafana-add-panel.png b/doc/building-blocks/enhanced-platform-awareness/telemetry-images/grafana-add-panel.png similarity index 100% rename from doc/enhanced-platform-awareness/telemetry-images/grafana-add-panel.png rename to doc/building-blocks/enhanced-platform-awareness/telemetry-images/grafana-add-panel.png diff --git a/doc/enhanced-platform-awareness/telemetry-images/grafana-dash-setting.png b/doc/building-blocks/enhanced-platform-awareness/telemetry-images/grafana-dash-setting.png similarity index 100% rename from doc/enhanced-platform-awareness/telemetry-images/grafana-dash-setting.png rename to doc/building-blocks/enhanced-platform-awareness/telemetry-images/grafana-dash-setting.png diff --git a/doc/enhanced-platform-awareness/telemetry-images/grafana-new-dash.png b/doc/building-blocks/enhanced-platform-awareness/telemetry-images/grafana-new-dash.png similarity index 100% rename from doc/enhanced-platform-awareness/telemetry-images/grafana-new-dash.png rename to doc/building-blocks/enhanced-platform-awareness/telemetry-images/grafana-new-dash.png diff --git a/doc/enhanced-platform-awareness/telemetry-images/grafana-panel-settings.png b/doc/building-blocks/enhanced-platform-awareness/telemetry-images/grafana-panel-settings.png similarity index 100% rename from doc/enhanced-platform-awareness/telemetry-images/grafana-panel-settings.png rename to doc/building-blocks/enhanced-platform-awareness/telemetry-images/grafana-panel-settings.png diff --git a/doc/enhanced-platform-awareness/telemetry-images/grafana-panel.png b/doc/building-blocks/enhanced-platform-awareness/telemetry-images/grafana-panel.png similarity index 100% rename from doc/enhanced-platform-awareness/telemetry-images/grafana-panel.png rename to doc/building-blocks/enhanced-platform-awareness/telemetry-images/grafana-panel.png diff --git 
a/doc/enhanced-platform-awareness/telemetry-images/grafana-pcm-dashboard.png b/doc/building-blocks/enhanced-platform-awareness/telemetry-images/grafana-pcm-dashboard.png similarity index 100% rename from doc/enhanced-platform-awareness/telemetry-images/grafana-pcm-dashboard.png rename to doc/building-blocks/enhanced-platform-awareness/telemetry-images/grafana-pcm-dashboard.png diff --git a/doc/enhanced-platform-awareness/telemetry-images/grafana-save-dash.png b/doc/building-blocks/enhanced-platform-awareness/telemetry-images/grafana-save-dash.png similarity index 100% rename from doc/enhanced-platform-awareness/telemetry-images/grafana-save-dash.png rename to doc/building-blocks/enhanced-platform-awareness/telemetry-images/grafana-save-dash.png diff --git a/doc/enhanced-platform-awareness/telemetry-images/grafana-save.png b/doc/building-blocks/enhanced-platform-awareness/telemetry-images/grafana-save.png similarity index 100% rename from doc/enhanced-platform-awareness/telemetry-images/grafana-save.png rename to doc/building-blocks/enhanced-platform-awareness/telemetry-images/grafana-save.png diff --git a/doc/enhanced-platform-awareness/telemetry-images/grafana-settings.png b/doc/building-blocks/enhanced-platform-awareness/telemetry-images/grafana-settings.png similarity index 100% rename from doc/enhanced-platform-awareness/telemetry-images/grafana-settings.png rename to doc/building-blocks/enhanced-platform-awareness/telemetry-images/grafana-settings.png diff --git a/doc/enhanced-platform-awareness/telemetry-images/grafana_login.png b/doc/building-blocks/enhanced-platform-awareness/telemetry-images/grafana_login.png similarity index 100% rename from doc/enhanced-platform-awareness/telemetry-images/grafana_login.png rename to doc/building-blocks/enhanced-platform-awareness/telemetry-images/grafana_login.png diff --git a/doc/enhanced-platform-awareness/telemetry-images/node_exporter_metric.png b/doc/building-blocks/enhanced-platform-awareness/telemetry-images/node_exporter_metric.png similarity index 100% rename from doc/enhanced-platform-awareness/telemetry-images/node_exporter_metric.png rename to doc/building-blocks/enhanced-platform-awareness/telemetry-images/node_exporter_metric.png diff --git a/doc/enhanced-platform-awareness/telemetry-images/pcm-metrics.png b/doc/building-blocks/enhanced-platform-awareness/telemetry-images/pcm-metrics.png similarity index 100% rename from doc/enhanced-platform-awareness/telemetry-images/pcm-metrics.png rename to doc/building-blocks/enhanced-platform-awareness/telemetry-images/pcm-metrics.png diff --git a/doc/enhanced-platform-awareness/telemetry-images/pcm-stats.png b/doc/building-blocks/enhanced-platform-awareness/telemetry-images/pcm-stats.png similarity index 100% rename from doc/enhanced-platform-awareness/telemetry-images/pcm-stats.png rename to doc/building-blocks/enhanced-platform-awareness/telemetry-images/pcm-stats.png diff --git a/doc/enhanced-platform-awareness/telemetry-images/prometheus_graph.png b/doc/building-blocks/enhanced-platform-awareness/telemetry-images/prometheus_graph.png similarity index 100% rename from doc/enhanced-platform-awareness/telemetry-images/prometheus_graph.png rename to doc/building-blocks/enhanced-platform-awareness/telemetry-images/prometheus_graph.png diff --git a/doc/enhanced-platform-awareness/telemetry-images/prometheus_metrics.png b/doc/building-blocks/enhanced-platform-awareness/telemetry-images/prometheus_metrics.png similarity index 100% rename from 
doc/enhanced-platform-awareness/telemetry-images/prometheus_metrics.png rename to doc/building-blocks/enhanced-platform-awareness/telemetry-images/prometheus_metrics.png diff --git a/doc/enhanced-platform-awareness/telemetry-images/prometheus_targets.png b/doc/building-blocks/enhanced-platform-awareness/telemetry-images/prometheus_targets.png similarity index 100% rename from doc/enhanced-platform-awareness/telemetry-images/prometheus_targets.png rename to doc/building-blocks/enhanced-platform-awareness/telemetry-images/prometheus_targets.png diff --git a/doc/enhanced-platform-awareness/tm-images/tm1.png b/doc/building-blocks/enhanced-platform-awareness/tm-images/tm1.png similarity index 100% rename from doc/enhanced-platform-awareness/tm-images/tm1.png rename to doc/building-blocks/enhanced-platform-awareness/tm-images/tm1.png diff --git a/doc/enhanced-platform-awareness/tm-images/tm2.png b/doc/building-blocks/enhanced-platform-awareness/tm-images/tm2.png similarity index 100% rename from doc/enhanced-platform-awareness/tm-images/tm2.png rename to doc/building-blocks/enhanced-platform-awareness/tm-images/tm2.png diff --git a/doc/enhanced-platform-awareness/vcaca-images/smart-city-app-vcac-a.png b/doc/building-blocks/enhanced-platform-awareness/vcaca-images/smart-city-app-vcac-a.png similarity index 100% rename from doc/enhanced-platform-awareness/vcaca-images/smart-city-app-vcac-a.png rename to doc/building-blocks/enhanced-platform-awareness/vcaca-images/smart-city-app-vcac-a.png diff --git a/doc/enhanced-platform-awareness/vcaca-images/using-vcac-a-telemetry.png b/doc/building-blocks/enhanced-platform-awareness/vcaca-images/using-vcac-a-telemetry.png similarity index 100% rename from doc/enhanced-platform-awareness/vcaca-images/using-vcac-a-telemetry.png rename to doc/building-blocks/enhanced-platform-awareness/vcaca-images/using-vcac-a-telemetry.png diff --git a/doc/enhanced-platform-awareness/vcaca-images/vcac-a-vpu-metrics.png b/doc/building-blocks/enhanced-platform-awareness/vcaca-images/vcac-a-vpu-metrics.png similarity index 100% rename from doc/enhanced-platform-awareness/vcaca-images/vcac-a-vpu-metrics.png rename to doc/building-blocks/enhanced-platform-awareness/vcaca-images/vcac-a-vpu-metrics.png diff --git a/doc/building-blocks/index.html b/doc/building-blocks/index.html new file mode 100644 index 00000000..f1499d27 --- /dev/null +++ b/doc/building-blocks/index.html @@ -0,0 +1,14 @@ + + +--- +title: OpenNESS Documentation +description: Home +layout: openness +--- +

You are being redirected to the OpenNESS Docs.

+ diff --git a/doc/cloud-adapters/openness_baiducloud.md b/doc/cloud-adapters/openness_baiducloud.md index d59a856b..57619722 100644 --- a/doc/cloud-adapters/openness_baiducloud.md +++ b/doc/cloud-adapters/openness_baiducloud.md @@ -322,7 +322,7 @@ The scripts can be found in the release package with the subfolder name `setup_b └── measure_rtt_openedge.py ``` -Before running the scripts, install python3.6 and paho mqtt on a CentOS\* Linux\* machine, where the recommended version is CentOS Linux release 7.6.1810 (Core). +Before running the scripts, install python3.6 and paho mqtt on a CentOS\* Linux\* machine, where the recommended version is CentOS Linux release 7.8.2003 (Core). The following are recommended install commands: ```docker diff --git a/doc/devkits/index.html b/doc/devkits/index.html new file mode 100644 index 00000000..ca350b29 --- /dev/null +++ b/doc/devkits/index.html @@ -0,0 +1,14 @@ + + +--- +title: OpenNESS Documentation +description: Home +layout: openness +--- +

You are being redirected to the OpenNESS Docs.

+ diff --git a/doc/devkits/openness-azure-devkit.md b/doc/devkits/openness-azure-devkit.md new file mode 100644 index 00000000..889f9af5 --- /dev/null +++ b/doc/devkits/openness-azure-devkit.md @@ -0,0 +1,17 @@ +```text +SPDX-License-Identifier: Apache-2.0 +Copyright (c) 2020 Intel Corporation +``` + +# OpenNESS Development Kit for Microsoft Azure + +## Introduction + +This devkit supports the use of OpenNESS in cloud solutions. It leverages the Azure Stack for OpenNESS deployment. +The devkit offers a quick and easy way to deploy OpenNESS on cloud for developers and businesses. It contains templates +for automated deployment, and supports deployment using Porter. It enables cloud solutions supported by Intel's processors. + +## Getting Started + +The following document contains steps for quick deployment on Azure: +* [openness-experience-kits/cloud/README.md: Deployment and setup guide](https://github.com/open-ness/openness-experience-kits/blob/master/cloud/README.md) diff --git a/doc/enhanced-platform-awareness/hddl-images/hddlservice.png b/doc/enhanced-platform-awareness/hddl-images/hddlservice.png deleted file mode 100644 index 415e1c6e..00000000 Binary files a/doc/enhanced-platform-awareness/hddl-images/hddlservice.png and /dev/null differ diff --git a/doc/flavors.md b/doc/flavors.md index 1124c636..cbb84c1b 100644 --- a/doc/flavors.md +++ b/doc/flavors.md @@ -3,19 +3,27 @@ SPDX-License-Identifier: Apache-2.0 Copyright (c) 2020 Intel Corporation ``` +- [OpenNESS Deployment Flavors](#openness-deployment-flavors) + - [CERA Minimal Flavor](#cera-minimal-flavor) + - [CERA Access Edge Flavor](#cera-access-edge-flavor) + - [CERA Media Analytics Flavor](#cera-media-analytics-flavor) + - [CERA Media Analytics Flavor with VCAC-A](#cera-media-analytics-flavor-with-vcac-a) + - [CERA CDN Transcode Flavor](#cera-cdn-transcode-flavor) + - [CERA CDN Caching Flavor](#cera-cdn-caching-flavor) + - [CERA Core Control Plane Flavor](#cera-core-control-plane-flavor) + - [CERA Core User Plane Flavor](#cera-core-user-plane-flavor) + - [CERA Untrusted Non3gpp Access Flavor](#cera-untrusted-non3gpp-access-flavor) + - [CERA Near Edge Flavor](#cera-near-edge-flavor) + - [CERA 5G On-Prem Flavor](#cera-5g-on-prem-flavor) + - [Reference Service Mesh](#reference-service-mesh) + - [Central Orchestrator Flavor](#central-orchestrator-flavor) + # OpenNESS Deployment Flavors + This document introduces the supported deployment flavors that are deployable through OpenNESS Experience Kits (OEKs). -- [Minimal Flavor](#minimal-flavor) -- [FlexRAN Flavor](#flexran-flavor) -- [Service Mesh Flavor](#service-mesh-flavor) -- [Media Analytics Flavor](#media-analytics-flavor) -- [Media Analytics Flavor with VCAC-A](#media-analytics-flavor-with-vcac-a) -- [CDN Transcode Flavor](#cdn-transcode-flavor) -- [CDN Caching Flavor](#cdn-caching-flavor) -- [Core Control Plane Flavor](#core-control-plane-flavor) -- [Core User Plane Flavor](#core-user-plane-flavor) - -## Minimal Flavor + +## CERA Minimal Flavor + The pre-defined *minimal* deployment flavor provisions the minimal set of configurations for bringing up the OpenNESS network edge deployment. The following are steps to install this flavor: @@ -30,60 +38,36 @@ This deployment flavor enables the following ingredients: * The default Kubernetes CNI: `kube-ovn` * Telemetry -## FlexRAN Flavor + +## CERA Access Edge Flavor + The pre-defined *flexran* deployment flavor provisions an optimized system configuration for vRAN workloads on Intel® Xeon® platforms.
It also provisions for deployment of Intel® FPGA Programmable Acceleration Card (Intel® FPGA PAC) N3000 tools and components to enable offloading for the acceleration of FEC (Forward Error Correction) to the FPGA. The following are steps to install this flavor: 1. Configure the OEK as described in the [OpenNESS Getting Started Guide for Network Edge](getting-started/network-edge/controller-edge-node-setup.md). -2. Run the OEK deployment script: +2. Configure the flavor file to reflect desired deployment. + - Configure the CPUs selected for isolation and OS/K8s processes from command line in files [controller_group.yml](https://github.com/open-ness/openness-experience-kits/blob/master/flavors/flexran/controller_group.yml) and [edgenode_group.yml](https://github.com/open-ness/openness-experience-kits/blob/master/flavors/flexran/edgenode_group.yml) - please note that in single node mode the edgenode_group.yml is used to configure the CPU isolation. + - Configure the amount of CPUs reserved for K8s and OS from K8s level with `reserved_cpu` flag in [all.yml](https://github.com/open-ness/openness-experience-kits/blob/master/flavors/flexran/all.yml) file. + - Configure whether the FPGA or eASIC support for FEC is desired or both in [all.yml](https://github.com/open-ness/openness-experience-kits/blob/master/flavors/flexran/all.yml) file. + +3. Run OEK deployment script: ```shell $ deploy_ne.sh -f flexran ``` This deployment flavor enables the following ingredients: -* Node feature discovery +* Node Feature Discovery * SRIOV device plugin with FPGA configuration * Calico CNI * Telemetry * FPGA remote system update through OPAE * FPGA configuration +* eASIC ACC100 configuration * RT Kernel * Topology Manager * RMD operator -## Service Mesh Flavor -The pre-defined *service-mesh* deployment flavor installs the OpenNESS service mesh that is based on [Istio](https://istio.io/). - -Steps to install this flavor are as follows: -1. Configure OEK as described in the [OpenNESS Getting Started Guide for Network Edge](getting-started/network-edge/controller-edge-node-setup.md). -2. Run OEK deployment script: - ```shell - $ deploy_ne.sh -f service-mesh - ``` - -This deployment flavor enables the following ingredients: -* Node Feature Discovery -* The default Kubernetes CNI: `kube-ovn` -* Istio service mesh -* Kiali management console -* Telemetry - -> **NOTE:** Kiali management console username & password can be changed by editing the variables `istio_kiali_username` & `istio_kiali_password`. - -Following parameters in the flavor/all.yaml can be customize for Istio deployment: - -``` -# Istio deployment profile possible values: default, demo, minimal, remote -istio_deployment_profile: "default" +## CERA Media Analytics Flavor -# Kiali -istio_kiali_username: "admin" -istio_kiali_password: "admin" -istio_kiali_nodeport: 30001 -``` - -> **NOTE:** If creating a customized flavor, the Istio service mesh installation can be included in the Ansible playbook by setting the flag `ne_istio_enable: true` in the flavor file. - -## Media Analytics Flavor The pre-defined *media-analytics* deployment flavor provisions an optimized system configuration for media analytics workloads on Intel® Xeon® platforms. It also provisions a set of video analytics services based on the [Video Analytics Serving](https://github.com/intel/video-analytics-serving) for analytics pipeline management and execution. 
The following are steps to install this flavor: @@ -94,19 +78,18 @@ The following are steps to install this flavor: ``` > **NOTE:** The video analytics services integrates with the OpenNESS service mesh when the flag `ne_istio_enable: true` is set. -> **NOTE:** Kiali management console username & password can be changed by editing the variables `istio_kiali_username` & `istio_kiali_password`. +> **NOTE:** Kiali management console username can be changed by editing the variable `istio_kiali_username`. By default `istio_kiali_password` is randomly generated and can be retrieved by running `kubectl get secrets/kiali -n istio-system -o json | jq -r '.data.passphrase' | base64 -d` on the Kubernetes controller. This deployment flavor enables the following ingredients: * Node feature discovery -* VPU and GPU device plugins -* HDDL daemonset * The default Kubernetes CNI: `kube-ovn` * Video analytics services * Telemetry * Istio service mesh - conditional * Kiali management console - conditional -## Media Analytics Flavor with VCAC-A +## CERA Media Analytics Flavor with VCAC-A + The pre-defined *media-analytics-vca* deployment flavor provisions an optimized system configuration for media analytics workloads leveraging Visual Cloud Accelerator Card – Analytics (VCAC-A) acceleration. It also provisions a set of video analytics services based on the [Video Analytics Serving](https://github.com/intel/video-analytics-serving) for analytics pipeline management and execution. The following are steps to install this flavor: @@ -117,7 +100,7 @@ The following are steps to install this flavor: silpixa00400194 ``` - > **NOTE:** The VCA host name should *only* be placed once in the `inventory.ini` file and under the `[edgenode_vca_group]` group. + > **NOTE:** The VCA host name should *only* be placed once in the `inventory.ini` file and under the `[edgenode_vca_group]` group. 3. Run the OEK deployment script: ```shell @@ -125,6 +108,7 @@ The following are steps to install this flavor: ``` > **NOTE:** At the time of writing this document, *Weave Net*\* is the only supported CNI for network edge deployments involving VCAC-A acceleration. The `weavenet` CNI is automatically selected by the *media-analytics-vca*. +> **NOTE:** The flag `force_build_enable` (default true) forces a build of the VCAC-A system image (VCAD); it is defined in flavors/media-analytics-vca/all.yml. By setting the flag to false, the OEK will not rebuild the image and will re-use the last system image built during deployment. If the flag is true, the OEK will force a build of the VCA host kernel and node system image, which will take several hours. This deployment flavor enables the following ingredients: * Node feature discovery @@ -134,8 +118,9 @@ This deployment flavor enables the following ingredients: * Video analytics services * Telemetry -## CDN Transcode Flavor -The pre-defined *cdn-transcode* deployment flavor provisions an optimized system configuration for Content Delivery Network (CDN) transcode sample workloads on Intel® Xeon® platforms. +## CERA CDN Transcode Flavor + +The pre-defined *cdn-transcode* deployment flavor provisions an optimized system configuration for Content Delivery Network (CDN) transcode sample workloads on Intel® Xeon® platforms. The following are steps to install this flavor: 1. Configure the OEK as described in the [OpenNESS Getting Started Guide for Network Edge](getting-started/network-edge/controller-edge-node-setup.md).
@@ -149,8 +134,9 @@ This deployment flavor enables the following ingredients: * The default Kubernetes CNI: `kube-ovn` * Telemetry -## CDN Caching Flavor -The pre-defined *cdn-caching* deployment flavor provisions an optimized system configuration for CDN content delivery workloads on Intel® Xeon® platforms. +## CERA CDN Caching Flavor + +The pre-defined *cdn-caching* deployment flavor provisions an optimized system configuration for CDN content delivery workloads on Intel® Xeon® platforms. The following are steps to install this flavor: 1. Configure the OEK as described in the [OpenNESS Getting Started Guide for Network Edge](getting-started/network-edge/controller-edge-node-setup.md). @@ -165,9 +151,9 @@ This deployment flavor enables the following ingredients: * Telemetry * Kubernetes Topology Manager policy: `single-numa-node` -## Core Control Plane Flavor +## CERA Core Control Plane Flavor -The pre-defined Core Control Plane flavor provisions the minimal set of configurations for 5G Control Plane Network Functions on Intel® Xeon® platforms. +The pre-defined Core Control Plane flavor provisions the minimal set of configurations for 5G Control Plane Network Functions on Intel® Xeon® platforms. The following are steps to install this flavor: @@ -195,7 +181,7 @@ This deployment flavor enables the following ingredients: > **NOTE:** Istio service mesh is enabled by default in the `core-cplane` deployment flavor. To deploy 5G CNFs without Istio, the flag `ne_istio_enable` in `flavors/core-cplane/all.yml` must be set to `false`. -## Core User Plane Flavor +## CERA Core User Plane Flavor The pre-defined Core Control Plane flavor provisions the minimal set of configurations for a 5G User Plane Function on Intel® Xeon® platforms. @@ -217,3 +203,132 @@ This deployment flavor enables the following ingredients: - HugePages of size 1Gi and the amount of HugePages as 8G for the nodes > **NOTE**: For a reference UPF deployment, refer to [5G UPF Edge App](https://github.com/open-ness/edgeapps/tree/master/network-functions/core-network/5G/UPF) + +## CERA Untrusted Non3gpp Access Flavor + +The pre-defined Untrusted Non3pp Access flavor provisions the minimal set of configurations for a 5G Untrusted Non3gpp Access Network Functions like Non3GPP Interworking Function(N3IWF) on Intel® Xeon® platforms. + +The following are steps to install this flavor: + +1. Configure the OEK as described in the [OpenNESS Getting Started Guide for Network Edge](getting-started/network-edge/controller-edge-node-setup.md). + +2. Run the x-OEK deployment script: + + ```bash + $ ido-openness-experience-kits# deploy_ne.sh -f untrusted-non3pp-access + ``` + +This deployment flavor enables the following ingredients: + +- Node feature discovery +- Kubernetes CNI: calico and SR-IOV. +- Kubernetes Device Plugin +- Telemetry +- HugePages of size 1Gi and the amount of HugePages as 10G for the nodes + +## CERA Near Edge Flavor + +The pre-defined CERA Near Edge flavor provisions the required set of configurations for a 5G Converged Edge Reference Architecture for Near Edge deployments on Intel® Xeon® platforms. + +The following are steps to install this flavor: +1. Configure the OEK under CERA repository as described in the [Converged Edge Reference Architecture Near Edge](https://github.com/open-ness/ido-specs/blob/master/doc/reference-architectures/CERA-Near-Edge.md). + +2. 
Run the x-OEK for CERA deployment script: + ```shell + $ ido-converged-edge-experience-kits# deploy_openness_for_cera.sh + ``` + +This deployment flavor enables the following ingredients: + +- Kubernetes CNI: kube-ovn and SRIOV. +- SR-IOV support for kube-virt +- Virtual Functions +- CPU Manager for Kubernetes (CMK) with 16 exclusive cores and 1 core in the shared pool. +- Kubernetes Device Plugin +- BIOSFW feature +- Telemetry +- HugePages of size 1Gi and the amount of HugePages as 8G for the nodes +- RMD operator + +## CERA 5G On-Prem Flavor + +The pre-defined CERA 5G On-Prem flavor provisions the required set of configurations for a 5G Converged Edge Reference Architecture for On Premises deployments on Intel® Xeon® platforms. It also provisions for deployment of Intel® FPGA Programmable Acceleration Card (Intel® FPGA PAC) N3000 tools and components to enable offloading for the acceleration of FEC (Forward Error Correction) to the FPGA. + +The following are steps to install this flavor: +1. Configure the OEK under CERA repository as described in the [Converged Edge Reference Architecture On Premises Edge](https://github.com/open-ness/ido-specs/blob/master/doc/reference-architectures/CERA-5G-On-Prem.md). + +2. Run the x-OEK for CERA deployment script: + ```shell + $ ido-converged-edge-experience-kits# deploy_openness_for_cera.sh + ``` + +This deployment flavor enables the following ingredients: + +- Kubernetes CNI: Calico and SRIOV. +- SRIOV device plugin with FPGA configuration +- Virtual Functions +- FPGA remote system update through OPAE +- FPGA configuration +- RT Kernel +- Topology Manager +- Kubernetes Device Plugin +- BIOSFW feature +- Telemetry +- HugePages of size 1Gi and the amount of HugePages as 40G for the nodes +- RMD operator + +## Reference Service Mesh + +Service Mesh technology enables service discovery and sharing of data between application services. This technology can be useful in any CERA. Customers will find Service Mesh under the flavors directory as a reference to quickly try out the technology and understand the implications. In future OpenNESS releases, this Service Mesh will not be a dedicated flavor. + +The pre-defined *service-mesh* deployment flavor installs the OpenNESS service mesh that is based on [Istio](https://istio.io/). + +> **NOTE**: When deploying Istio Service Mesh in VMs, a minimum of 8 CPU cores and 16GB RAM must be allocated to each worker VM so that Istio operates smoothly. + +Steps to install this flavor are as follows: +1. Configure OEK as described in the [OpenNESS Getting Started Guide for Network Edge](getting-started/network-edge/controller-edge-node-setup.md). +2. Run OEK deployment script: + ```shell + $ deploy_ne.sh -f service-mesh + ``` + +This deployment flavor enables the following ingredients: +* Node Feature Discovery +* The default Kubernetes CNI: `kube-ovn` +* Istio service mesh +* Kiali management console +* Telemetry + +> **NOTE:** Kiali management console username can be changed by editing the variable `istio_kiali_username`. By default `istio_kiali_password` is randomly generated and can be retrieved by running `kubectl get secrets/kiali -n istio-system -o json | jq -r '.data.passphrase' | base64 -d` on the Kubernetes controller.
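As a quick sanity check after deploying the *service-mesh* flavor, the Istio control plane and Kiali can be inspected with standard `kubectl` commands; a minimal sketch (the `istio-system` namespace follows the Istio default, and the password command repeats the NOTE above):

```shell
# list the Istio control-plane and Kiali pods
kubectl get pods -n istio-system

# retrieve the randomly generated Kiali password
kubectl get secrets/kiali -n istio-system -o json | jq -r '.data.passphrase' | base64 -d
```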
+ +The following parameters in the flavor/all.yaml can be customized for Istio deployment: + +```code +# Istio deployment profile possible values: default, demo, minimal, remote +istio_deployment_profile: "default" + +# Kiali +istio_kiali_username: "admin" +istio_kiali_password: "{{ lookup('password', '/dev/null length=16') }}" +istio_kiali_nodeport: 30001 +``` + +> **NOTE:** If creating a customized flavor, the Istio service mesh installation can be included in the Ansible playbook by setting the flag `ne_istio_enable: true` in the flavor file. + +## Central Orchestrator Flavor + +Central Orchestrator Flavor is used to deploy EMCO. + +The pre-defined *orchestration* deployment flavor provisions an optimized system configuration for EMCO (central orchestrator) workloads on Intel Xeon servers. It also provisions a set of central orchestrator services for [edge, multiple clusters orchestration](building-blocks/emco/openness-emco.md). + +Steps to install this flavor are as follows: +1. Configure OEK as described in the [OpenNESS Getting Started Guide for Network Edge](getting-started/network-edge/controller-edge-node-setup.md). +2. Run OEK deployment script: + ```shell + $ deploy_ne.sh -f central_orchestrator + ``` + +This deployment flavor enables the following ingredients: +* Harbor Registry +* The default Kubernetes CNI: `kube-ovn` +* EMCO services \ No newline at end of file diff --git a/doc/getting-started/network-edge/controller-edge-node-setup-images/harbor_ui.png b/doc/getting-started/network-edge/controller-edge-node-setup-images/harbor_ui.png new file mode 100644 index 00000000..448feee6 Binary files /dev/null and b/doc/getting-started/network-edge/controller-edge-node-setup-images/harbor_ui.png differ diff --git a/doc/getting-started/network-edge/controller-edge-node-setup.md b/doc/getting-started/network-edge/controller-edge-node-setup.md index 2a8bf357..8836fff0 100644 --- a/doc/getting-started/network-edge/controller-edge-node-setup.md +++ b/doc/getting-started/network-edge/controller-edge-node-setup.md @@ -14,10 +14,13 @@ Copyright (c) 2019-2020 Intel Corporation - [VM support for Network Edge](#vm-support-for-network-edge) - [Application on-boarding](#application-on-boarding) - [Single-node Network Edge cluster](#single-node-network-edge-cluster) - - [Docker registry](#docker-registry) - - [Deploy Docker registry](#deploy-docker-registry) - - [Docker registry image push](#docker-registry-image-push) - - [Docker registry image pull](#docker-registry-image-pull) + - [Harbor registry](#harbor-registry) + - [Deploy Harbor registry](#deploy-harbor-registry) + - [Harbor login](#harbor-login) + - [Harbor registry image push](#harbor-registry-image-push) + - [Harbor registry image pull](#harbor-registry-image-pull) + - [Harbor UI](#harbor-ui) + - [Harbor CLI](#harbor-cli) - [Kubernetes cluster networking plugins (Network Edge)](#kubernetes-cluster-networking-plugins-network-edge) - [Selecting cluster networking plugins (CNI)](#selecting-cluster-networking-plugins-cni) - [Adding additional interfaces to pods](#adding-additional-interfaces-to-pods) @@ -31,7 +34,6 @@ Copyright (c) 2019-2020 Intel Corporation - [Setting Git](#setting-git) - [GitHub token](#github-token) - [Customize tag/branch/sha to checkout](#customize-tagbranchsha-to-checkout) - - [Installing Kubernetes dashboard](#installing-kubernetes-dashboard) - [Customization of kernel, grub parameters, and tuned profile](#customization-of-kernel-grub-parameters-and-tuned-profile) # Quickstart @@ -49,7 +51,7 @@ The following
set of actions must be completed to set up the Open Network Edge S To use the playbooks, several preconditions must be fulfilled. These preconditions are described in the [Q&A](#qa) section below. The preconditions are: -- CentOS\* 7.6.1810 must be installed on hosts where the product is deployed. It is highly recommended to install the operating system using a minimal ISO image on nodes that will take part in deployment (obtained from inventory file). Also, do not make customizations after a fresh manual install because it might interfere with Ansible scripts and give unpredictable results during deployment. +- CentOS\* 7.8.2003 must be installed on hosts where the product is deployed. It is highly recommended to install the operating system using a minimal ISO image on nodes that will take part in deployment (obtained from inventory file). Also, do not make customizations after a fresh manual install because it might interfere with Ansible scripts and give unpredictable results during deployment. - Hosts for the Edge Controller (Kubernetes control plane) and Edge Nodes (Kubernetes nodes) must have proper and unique hostnames (i.e., not `localhost`). This hostname must be specified in `/etc/hosts` (refer to [Setup static hostname](#setup-static-hostname)). @@ -137,47 +139,195 @@ To deploy Network Edge in a single-node cluster scenario, follow the steps below > Default settings in the single-node cluster mode are those of the Edge Node (i.e., kernel and tuned customization enabled). 4. Single-node cluster can be deployed by running command: `./deploy_ne.sh single` -## Docker registry +## Harbor registry -Docker registry is a storage and distribution system for Docker Images. On the OpenNESS environment, Docker registry service is deployed as a pod on Control plane Node. Docker registry authentication enabled with self-signed certificates as well as all node and control plane nodes will have access to the Docker registry. +Harbor registry is an open source cloud native registry which supports images and relevant artifacts with extended functionalities, as described in [Harbor](https://goharbor.io/). In the OpenNESS environment, the Harbor registry service is installed on the Control plane Node using the Harbor Helm Chart [github](https://github.com/goharbor/harbor-helm/releases/tag/v1.5.1). Harbor registry authentication is enabled with self-signed certificates, and all nodes as well as the control plane have access to the Harbor registry. -### Deploy Docker registry +### Deploy Harbor registry -Ansible "docker_registry" roles created on openness-experience-kits. For deploying a Docker registry on Kubernetes, control plane node roles are enabled on the openness-experience-kits "network_edge.yml" file. +#### System Prerequisite +* At least 20G of system disk space should be reserved for Harbor PV/PVC usage. The default total PV/PVC disk size is 20G. The values are configurable in ```roles/harbor_registry/controlplane/defaults/main.yaml```. +* If huge pages are enabled, 1G (hugepage size 1G) or 300M (hugepage size 2M) needs to be reserved for Harbor usage. + +#### Ansible Playbooks +The Ansible "harbor_registry" roles are defined in openness-experience-kits. For deploying a Harbor registry on Kubernetes, the control plane roles are enabled in the openness-experience-kits "network_edge.yml" file. ```ini - role: docker_registry/controlplane - role: docker_registry/node - ``` -The following steps are processed during the Docker registry deployment on the OpenNESS setup.
+ role: harbor_registry/controlplane + role: harbor_registry/node + ``` + +The following steps are processed by openness-experience-kits during the Harbor registry installation on the OpenNESS control plane node. + +* Download the Harbor Helm Charts on the Kubernetes Control plane Node. +* Check whether huge pages are enabled and template the values.yaml file accordingly. +* Create the namespace and disk PV for Harbor Services (the default total PV/PVC disk size is 20G; the values are configurable in ```roles/harbor_registry/controlplane/defaults/main.yaml```). +* Install Harbor on the control plane node using the Helm Charts (the CA crt will be generated by Harbor itself). +* Create the new project - ```intel``` - for storage of the OpenNESS microservice and Kubernetes enhanced add-on images. +* Docker login to the Harbor Registry, enabling pulling, pushing, and tagging of images with the Harbor Registry. + + +On the OpenNESS edge nodes, openness-experience-kits will conduct the following steps: +* Get harbor.crt from the OpenNESS control plane node and save it into the host location + /etc/docker/certs.d/ +* Docker login to the Harbor Registry, enabling pulling, pushing, and tagging of images with the Harbor Registry. +* After the above steps, the Node and Ansible host can access the private Harbor registry. +* The IP address of the Harbor registry will be: "Kubernetes_Control_Plane_IP" +* The port number of the Harbor registry will be: 30003 + + +#### Projects +Two Harbor projects will be created by the OEK, as below: +- ```library``` This registry project can be used by edge application developers as the default image registry. +- ```intel``` This registry project contains the registries for the OpenNESS microservices and relevant Kubernetes add-on images. It can also be used for OpenNESS sample application images. -* Generate a self-signed certificate on the Kubernetes Control plane Node. -* Build and deploy a docker-registry pod on the Control plane Node. -* Generate client.key and client.csr on the node -* Authenticate client.csr for server access. -* Share public key and client.cert on trusted Node and Ansible build host location - /etc/docker/certs.d/ -* After the Docker registry deploys, the Node and Ansible host can access the private Docker registry. -* The IP address of the Docker registry will be: "Kubernetes_Control_Plane_IP" -* The port number of the Docker registry will be: 5000 +### Harbor login +For the nodes inside of the OpenNESS cluster, the openness-experience-kits Ansible playbooks automatically log in and prepare the Harbor CA certificates to access Harbor services. -### Docker registry image push -Use the Docker tag to create an alias of the image with the fully qualified path to your Docker registry after the tag successfully pushes the image to the Docker registry.
+For an external host outside of the OpenNESS cluster, use the following commands to access the Harbor Registry: +```shell +# create directory for harbor's CA crt +mkdir /etc/docker/certs.d/${Kubernetes_Control_Plane_IP}:${port}/ + +# get harbor CA.crt +set -o pipefail && echo -n | openssl s_client -showcerts -connect ${Kubernetes_Control_Plane_IP}:${port} 2>/dev/null | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > /etc/docker/certs.d/${Kubernetes_Control_Plane_IP}:${port}/harbor.crt + +# docker login harbor registry +docker login ${Kubernetes_Control_Plane_IP}:${port} -uadmin -p${harborAdminPassword} +``` +The default access configuration for the Harbor Registry is: ```ini +port: 30003 (default) +harborAdminPassword: Harbor12345 (default) + ``` + +### Harbor registry image push +Use `docker tag` to create an alias of the image with the fully qualified path to your Harbor registry, then push the image to the Harbor registry. + + ```shell + docker tag nginx:latest {Kubernetes_Control_Plane_IP}:30003/intel/nginx:latest + docker push {Kubernetes_Control_Plane_IP}:30003/intel/nginx:latest + ``` Now image the tag with the fully qualified path to your private registry. You can push the image to the registry using the Docker push command. -### Docker registry image pull -Use the `docker pull` command to pull the image from Docker registry: +### Harbor registry image pull +Use the `docker pull` command to pull the image from the Harbor registry: - ```ini - docker pull Kubernetes_Control_Plane_IP:5000/nginx:latest - ``` ->**NOTE**: should be replaced as per our docker registry IP address. + ```shell + docker pull {Kubernetes_Control_Plane_IP}:30003/intel/nginx:latest + ``` + +### Harbor UI +Open https://{Kubernetes_Control_Plane_IP}:30003 and log in with username ```admin``` and password ```Harbor12345```: +![](controller-edge-node-setup-images/harbor_ui.png) + +You will see two projects - ```intel``` and ```library``` - on the Web UI. For more details about Harbor usage, refer to [Harbor docs](https://goharbor.io/docs/2.1.0/working-with-projects/). + +### Harbor CLI +Apart from the Harbor UI, you can also use ```curl``` to check Harbor projects and images. Examples are shown below. +```text +In the examples, 10.240.224.172 is the IP address of {Kubernetes_Control_Plane_IP}. +If there is a proxy connection issue with the ```curl``` command, add ```--proxy``` to the command options.
+``` +#### CLI - List Project +Use following example commands to check projects list: + ```shell + # curl -X GET "https://10.240.224.172:30003/api/v2.0/projects" -H "accept: application/json" -k --cacert /etc/docker/certs.d/10.240.224.172:30003/harbor.crt -u "admin:Harbor12345 | jq" + [ + { + "creation_time": "2020-11-26T08:47:31.626Z", + "current_user_role_id": 1, + "current_user_role_ids": [ + 1 + ], + "cve_allowlist": { + "creation_time": "2020-11-26T08:47:31.628Z", + "id": 1, + "items": [], + "project_id": 2, + "update_time": "2020-11-26T08:47:31.628Z" + }, + "metadata": { + "public": "true" + }, + "name": "intel", + "owner_id": 1, + "owner_name": "admin", + "project_id": 2, + "repo_count": 3, + "update_time": "2020-11-26T08:47:31.626Z" + }, + { + "creation_time": "2020-11-26T08:39:13.707Z", + "current_user_role_id": 1, + "current_user_role_ids": [ + 1 + ], + "cve_allowlist": { + "creation_time": "0001-01-01T00:00:00.000Z", + "items": [], + "project_id": 1, + "update_time": "0001-01-01T00:00:00.000Z" + }, + "metadata": { + "public": "true" + }, + "name": "library", + "owner_id": 1, + "owner_name": "admin", + "project_id": 1, + "update_time": "2020-11-26T08:39:13.707Z" + } + ] + + ``` + +#### CLI - List Image Repositories +Use following example commands to check images repository list of project - ```intel```: + ```shell + # curl -X GET "https://10.240.224.172:30003/api/v2.0/projects/intel/repositories" -H "accept: application/json" -k --cacert /etc/docker/certs.d/10.240.224.172:30003/harbor.crt -u "admin:Harbor12345" | jq + [ + { + "artifact_count": 1, + "creation_time": "2020-11-26T08:57:43.690Z", + "id": 3, + "name": "intel/sriov-device-plugin", + "project_id": 2, + "pull_count": 1, + "update_time": "2020-11-26T08:57:55.240Z" + }, + { + "artifact_count": 1, + "creation_time": "2020-11-26T08:56:16.565Z", + "id": 2, + "name": "intel/sriov-cni", + "project_id": 2, + "update_time": "2020-11-26T08:56:16.565Z" + }, + { + "artifact_count": 1, + "creation_time": "2020-11-26T08:49:25.453Z", + "id": 1, + "name": "intel/multus", + "project_id": 2, + "update_time": "2020-11-26T08:49:25.453Z" + } + ] + + ``` + +#### CLI - Delete Image +Use following example commands to delete the image repository of project - ```intel```, for example: + ```shell + # curl -X DELETE "https://10.240.224.172:30003/api/v2.0/projects/intel/repositories/nginx" -H "accept: application/json" -k --cacert /etc/docker/certs.d/10.240.224.172:30003/harbor.crt -u "admin:Harbor12345" + ``` + +Use following example commands to delete a specific image version: + ```sh + # curl -X DELETE "https://10.240.224.172:30003/api/v2.0/projects/intel/repositories/nginx/artifacts/1.14.2" -H "accept: application/json" -k --cacert /etc/docker/certs.d/10.240.224.172:30003/harbor.crt -u "admin:Harbor12345" + ``` ## Kubernetes cluster networking plugins (Network Edge) @@ -457,113 +607,6 @@ controller_repository_branch: openness-20.03 edgenode_repository_branch: openness-20.03 ``` -## Installing Kubernetes dashboard - -Kubernetes does not ship with a graphical interface by default, but a web-based tool called [Kubernetes Dashboard](https://github.com/kubernetes/dashboard) can be installed with a few simple steps. The Kubernetes dashboard allows users to manage the cluster and edge applications. - -The Kubernetes dashboard can only be installed with Network Edge deployments. - -Follow the below steps to install the Kubernetes dashboard after OpenNESS is installed through [playbooks](#running-playbooks). - -1. 
Deploy the dashboard using `kubectl`: - - ```shell - kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.1/aio/deploy/recommended.yaml - ``` - -2. Get all the pods in all namespaces: - - ```shell - kubectl get pods -o wide --all-namespaces - ``` - -3. Create a new service account with the cluster admin: - - ```shell - cat > dashboard-admin-user.yaml << EOF - apiVersion: v1 - kind: ServiceAccount - metadata: - name: admin-user - namespace: kubernetes-dashboard - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRoleBinding - metadata: - name: admin-user - roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: cluster-admin - subjects: - - kind: ServiceAccount - name: admin-user - namespace: kubernetes-dashboard - EOF - ``` - -4. Apply the admin user: - - ```shell - kubectl apply -f dashboard-admin-user.yaml - ``` - -5. Edit kubernetes-dashboard service through the following command: - - ```shell - kubectl -n kubernetes-dashboard edit service kubernetes-dashboard - ``` - - Add `externalIPs` to the service spec, replace `` with the actual controller IP address: - - ```yaml - spec: - externalIPs: - - - ``` - - > **OPTIONAL**: By default the dashboard is accessible at port 443, and it can be changed by editing the port value `- port: ` in the service spec. - -6. Verify that the `kubernetes-dashboard` service has `EXTERNAL-IP` assigned: - - ```shell - kubectl -n kubernetes-dashboard get service kubernetes-dashboard - ``` - -7. Open the dashboard from the browser at `https:///`. If the port was changed according to the OPTIONAL note in step 5, then use `https://:/` instead. - - > **NOTE**: A Firefox\* browser can be an alternative to Chrome\* and Internet Explorer\* in case the dashboard web page is blocked due to certification a issue. - -8. Capture the bearer token using this command: - - ```shell - kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}') - ``` - -Paste the token in the browser to log in, as shown in the following diagram: - -![Dashboard Login](controller-edge-node-setup-images/dashboard-login.png) - -_Figure - Kubernetes Dashboard Login_ - -9. Go to the OpenNESS Controller installation directory and edit the `.env` file with the dashboard link `INFRASTRUCTURE_UI_URL=https://:/` to integrate it with the OpenNESS controller UI: - - ```shell - cd /opt/edgecontroller/ - vi .env - ``` - -10. Build the OpenNESS Controller UI: - - ```shell - cd /opt/edgecontroller/ - make ui-up - ``` - -11. The OpenNESS controller landing page is accessible at `http:///`. - > **NOTE**: `LANDING_UI_URL` can be retrieved from `.env` file. - - ## Customization of kernel, grub parameters, and tuned profile OpenNESS Experience Kits provide an easy way to customize the kernel version, grub parameters, and tuned profile. For more information, refer to [the OpenNESS Experience Kits guide](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/openness-experience-kits.md). 
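As an illustration of the per-host customization referred to above, a hypothetical `host_vars` override; the variable names mirror the OEK defaults shown later in this changeset, while the values themselves are only examples:

```yaml
# host_vars/node01.yml - illustrative overrides only
kernel_skip: true                                     # keep the stock kernel on this host
additional_grub_params: "hugepagesz=1G hugepages=8"   # example extra grub parameters
```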
diff --git a/doc/getting-started/network-edge/offline-edge-deployment.md b/doc/getting-started/network-edge/offline-edge-deployment.md new file mode 100644 index 00000000..0b1fb4b3 --- /dev/null +++ b/doc/getting-started/network-edge/offline-edge-deployment.md @@ -0,0 +1,157 @@ +```text +SPDX-License-Identifier: Apache-2.0 +Copyright (c) 2019-2020 Intel Corporation +``` + +- [OpenNESS Network Edge: Offline Deployment](#openness-network-edge-offline-deployment) + - [OpenNESS support in offline environment](#openness-support-in-offline-environment) + - [Setup prerequisites](#setup-prerequisites) + - [Creating the offline package from an online node](#creating-the-offline-package-from-an-online-node) + - [Placing the complete offline package in offline environment](#placing-the-complete-offline-package-in-offline-environment) + - [Deployment in offline environment](#deployment-in-offline-environment) +# OpenNESS Network Edge: Offline Deployment + +## OpenNESS support in offline environment + +The OpenNESS project supports deployment of the solution in an air-gapped, offline environment. The support is currently limited to the ["flexran" deployment flavor of the OpenNESS Experience Kit](https://github.com/open-ness/ido-openness-experience-kits/tree/master/flavors/flexran) only, and it allows for offline deployment of vRAN-specific components. An Internet connection is needed to create the offline package: a script downloads and builds all necessary components and creates an archive of all the necessary files. Once the offline package is created, the installation of the OpenNESS Experience Kits proceeds as usual, in the same way as the default online installation would. + +It can be deployed in two different scenarios. The first scenario is to deploy the OpenNESS Experience Kits from the online "jumper" node that is used to create the offline package; this internet-connected node must have a network connection to the air-gapped/offline nodes. The second scenario is to copy the whole OpenNESS Experience Kit directory with the already archived packages to the air-gapped/offline environment (for example via USB or other media or means) and run the OpenNESS Experience Kit from within the offline environment. All the nodes within the air-gapped/offline cluster need to be able to SSH into each other. + +Figure 1. Scenario one - online node connected to the air-gapped network +![Scenario one - online node connected to the air-gapped network](offline-images/offline-ssh.png) +Figure 2. Scenario two - OEK copied to the air-gapped network +![Scenario two - OEK copied to the air-gapped network](offline-images/offline-copy.png) + +## Setup prerequisites + +* A node with access to the internet to create the offline package. +* Cluster set up in an air-gapped environment. +* Clean setup, see [pre-requisites](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/network-edge/controller-edge-node-setup.md#preconditions) +* [Optional] If the OEK is run from an online jumper node, the node needs to be able to SSH into each machine in the air-gapped environment. +* [Optional] A medium such as a USB drive to copy the offline OEK package to the air-gapped environment if there is no connection from the online node. +* All the nodes in the air-gapped environment must be able to SSH to each other without requiring password input; see [getting-started.md](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/network-edge/controller-edge-node-setup.md#exchanging-ssh-keys-between-hosts).
+* The control plane node needs to be able to SSH itself. +* The time and date of the nodes in the offline environment is manually synchronized by the cluster's admin. +* User-provided files - OPAE_SDK_1.3.7-5_el7.zip and syscfg_package.zip + +## Creating the offline package from an online node + +To create the offline package, the user must have access to an online node from which the offline package creator can download all necessary files and build Docker images. The list of files to be downloaded/built is provided in the form of a package definition list (only the package definition list for the "flexran" flavor of OpenNESS is provided at the time of writing). Various categories of files to be downloaded are provided within this list, including: RPMs, PIP packages, Helm charts, Dockerfiles, Go modules, and miscellaneous downloads. According to the category of a file, the logic of the offline package creator script will handle the download/build accordingly. Some files, such as proprietary packages, need to be provided by the user in specified directories (see the following steps). Once the offline package creator collects all necessary components, it will pack them into an archive and then place them in the appropriate place within the OpenNESS Experience Kits directory. Once the packages are archived, the OpenNESS Experience Kits are ready to be deployed in the air-gapped environment. The following diagram illustrates the workflow of the offline package creator. Additional information regarding the offline package creator can be found in the [README.md file](https://github.com/open-ness/openness-experience-kits/blob/master/offline_package_creator/README.md). + +Figure 3. Offline package creator workflow +![OPC flow](offline-images/offline-flow.png) + +To run the offline package creator, follow the steps below (the user should not be "root" but does need "sudo" privileges to create the package; RT components will require installation of the RT kernel on the node by the OPC): + +Clone the OpenNESS Experience Kits repo to an online node: + +```shell +# git clone https://github.com/open-ness/ido-openness-experience-kits.git +``` + +Navigate to the offline package creator directory: + +```shell +# cd ido-openness-experience-kits/oek/offline_package_creator/ +``` + +Create a directory from which user-provided files can be accessed: + +```shell +# mkdir /// +``` + +Copy the 'OPAE_SDK_1.3.7-5_el7.zip' file (optional but necessary by default - to be done when OPAE is enabled in the "flexran" flavor of OEK) and syscfg_package.zip (optional but necessary by default - to be done when BIOS config is enabled in the "flexran" flavor of OEK) to the provided directory: + +```shell +# cp OPAE_SDK_1.3.7-5_el7.zip /// +# cp syscfg_package.zip /// +``` + +Edit the [ido-openness-experience-kits/oek/offline_package_creator/scripts/initrc](https://github.com/open-ness/openness-experience-kits/blob/master/offline_package_creator/scripts/initrc) file and update it with a GitHub username/token if necessary and an HTTP/GIT proxy if behind a firewall, and provide paths to the file dependencies. + +```shell +# open-ness token +GITHUB_USERNAME="" +GITHUB_TOKEN="" + +# User add ones +HTTP_PROXY="http://
:" #Add proxy first +GIT_PROXY="http://
:" + +# location of OPAE_SDK_1.3.7-5_el7.zip +BUILD_OPAE=disable +DIR_OF_OPAE_ZIP="///" + +# location of syscfg_package.zip +BUILD_BIOSFW=disable +DIR_OF_BIOSFW_ZIP="///" + +# location of the zip packages for collectd-fpga +BUILD_COLLECTD_FPGA=disable +DIR_OF_FPGA_ZIP="///" +``` + +Start the offline package creator script [ido-openness-experience-kits/oek/offline_package_creator/offline_package_creator.sh](https://github.com/open-ness/openness-experience-kits/blob/master/offline_package_creator/offline_package_creator.sh): + +```shell +# bash offline_package_creator.sh all +``` + +The script will download all the files defined in the [pdl_flexran.yml](https://github.com/open-ness/openness-experience-kits/blob/master/offline_package_creator/package_definition_list/pdl_flexran.yml) and build other necessary images, then copy them to a designated directory. Once the script has finished executing, the user should expect three files under the `ido-openness-experience-kits/roles/offline_roles/unpack_offline_package/files` directory: + +```shell +# ls ido-openness-experience-kits/roles/offline_roles/unpack_offline_package/files + +checksum.txt prepackages.tar.gz opcdownloads.tar.gz +``` + +Once the archive packages are created and placed in the OEK, the OEK is ready to be configured for offline/air-gapped installation. + +## Placing the complete offline package in offline environment + +The user has two options for deploying the OEK in an offline/air-gapped environment. Please refer to Figure 1 and Figure 2 of this document for diagrams. + +Scenario 1: The user deploys the OEK from an online node with a network connection to the offline/air-gapped environment. In this case, if the online node is the same one on which the offline package creator was run and created the archive files for the OEK, then the OEK directory does not need to be moved and will be used as is. The online node is expected to have a password-less SSH connection with all the offline nodes enabled - all the offline nodes are expected to have a password-less SSH connection between control plane and node and vice-versa, and the control plane node needs to be able to SSH itself. + +Scenario 2: The user deploys the OEK from a node within the offline/air-gapped environment. In this case, the user needs to copy the whole OEK directory containing the archived files from the [previous section](#creating-the-offline-package-from-an-online-node) from the online node to one of the nodes in the offline environment via a USB drive or alternative media. It is advisable that the offline node used to run the OEK is a separate node from the actual cluster; if the node is also used as part of the cluster, it will reboot during the script run due to the kernel upgrade and the OEK will need to be run again - this may have unforeseen consequences. All the offline nodes are expected to have a password-less SSH connection between control plane and node and vice-versa, and the control plane node needs to be able to SSH itself. + +Regardless of the scenario in which the OEK is deployed, the deployment method is the same.
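For Scenario 2, a minimal sketch of moving the prepared OEK directory onto removable media; the `/mnt/usb` mount point is an assumption used for illustration only:

```shell
# confirm the offline archives are present before copying
ls ido-openness-experience-kits/roles/offline_roles/unpack_offline_package/files

# copy the whole OEK directory (including the archives) onto the removable media
cp -r ido-openness-experience-kits /mnt/usb/
```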
+ +## Deployment in offline environment + +Once all the previous steps provided within this document are completed and the OEK with the offline archives is placed on the node which will run the OEK automation, the user should get familiar with the ["Running-playbooks"](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/network-edge/controller-edge-node-setup.md#running-playbooks) and ["Preconditions"](https://github.com/open-ness/ido-specs/blob/master/doc/getting-started/network-edge/controller-edge-node-setup.md#preconditions) sections of the getting started guide and deploy OpenNESS as per the usual deployment steps. Please note that only deployment of the "flexran" flavour is supported for the offline/air-gapped environment; other flavours/configurations and the default deployment may fail due to missing dependencies. Support for the ACC100 accelerator is not available for offline deployment of the "flexran" flavour at the time of writing. Both multi-node and single node modes are supported. + +During the deployment of the offline version of the OEK, the archived files created by the offline package creator will be extracted and placed in the appropriate directory. The OEK will set up a local file share server on the control plane node and move the files to the said server. The OEK will also create a local yum repo. All the files and packages will be pulled from this file share server by nodes across the air-gapped OpenNESS cluster. During the execution of the OEK, the Ansible scripts follow the same logic as the online mode, with the difference that all the components are pulled locally from the file share server instead of the internet. + +The following are the specific steps to enable offline/air-gapped deployment from the OEK: + +Enable the offline deployment in [ido-openness-experience-kits/group_vars/all/10-open.yml](https://github.com/open-ness/ido-openness-experience-kits/blob/master/group_vars/all/10-open.yml): + +```yaml +## Offline Mode support +offline_enable: True +``` + +Make sure the time on the offline nodes is synchronized. + +Make sure the nodes can access each other through SSH without a password. +Make sure the control-plane node can SSH itself, i.e.: + +```shell +# hostname -I + +# ssh-copy-id +``` + +Make sure the CPU allocation in the "flexran" flavor is configured as desired, [see configs in flavor directory](https://github.com/open-ness/ido-openness-experience-kits/tree/master/flavors/flexran).
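For that CPU check, a hypothetical sketch of the kind of values involved; `reserved_cpu` and `isolated_cores` appear elsewhere in this changeset, but the exact layout and defaults live in the flavor files linked above:

```yaml
# flavors/flexran/edgenode_group.yml - illustrative values only
tuned_vars: |
  isolated_cores=2-15
  nohz_full=2-15

# flavors/flexran/all.yml - illustrative value only
reserved_cpu: "0,1"
```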
+ +Deploy OpenNESS using FlexRAN flavor for multi or single node: + +```shell +# ./deploy_ne.sh -f flexran +``` +OR +```shell +# ./deploy_ne.sh -f flexran single +``` diff --git a/doc/getting-started/network-edge/offline-images/offline-copy.png b/doc/getting-started/network-edge/offline-images/offline-copy.png new file mode 100644 index 00000000..35df385b Binary files /dev/null and b/doc/getting-started/network-edge/offline-images/offline-copy.png differ diff --git a/doc/getting-started/network-edge/offline-images/offline-flow.png b/doc/getting-started/network-edge/offline-images/offline-flow.png new file mode 100644 index 00000000..0b01a301 Binary files /dev/null and b/doc/getting-started/network-edge/offline-images/offline-flow.png differ diff --git a/doc/getting-started/network-edge/offline-images/offline-ssh.png b/doc/getting-started/network-edge/offline-images/offline-ssh.png new file mode 100644 index 00000000..c6efe0a6 Binary files /dev/null and b/doc/getting-started/network-edge/offline-images/offline-ssh.png differ diff --git a/doc/getting-started/openness-experience-kits.md b/doc/getting-started/openness-experience-kits.md index 449bce4c..32559f7a 100644 --- a/doc/getting-started/openness-experience-kits.md +++ b/doc/getting-started/openness-experience-kits.md @@ -6,16 +6,16 @@ Copyright (c) 2019 Intel Corporation # OpenNESS Experience Kits - [Purpose](#purpose) - [OpenNESS setup playbooks](#openness-setup-playbooks) -- [Customizing kernel, grub parameters, and tuned profile & variables per host.](#customizing-kernel-grub-parameters-and-tuned-profile--variables-per-host) +- [Customizing kernel, grub parameters, and tuned profile & variables per host](#customizing-kernel-grub-parameters-and-tuned-profile--variables-per-host) - [IP address range allocation for various CNIs and interfaces](#ip-address-range-allocation-for-various-cnis-and-interfaces) - [Default values](#default-values) - [Use newer realtime kernel (3.10.0-1062)](#use-newer-realtime-kernel-3100-1062) - [Use newer non-rt kernel (3.10.0-1062)](#use-newer-non-rt-kernel-3100-1062) - [Use tuned 2.9](#use-tuned-29) - [Default kernel and configure tuned](#default-kernel-and-configure-tuned) - - [Change amount of hugepages](#change-amount-of-hugepages) - - [Change size of hugepages](#change-size-of-hugepages) - - [Change amount and size of hugepages](#change-amount-and-size-of-hugepages) + - [Change amount of HugePages](#change-amount-of-hugepages) + - [Change size of HugePages](#change-size-of-hugepages) + - [Change amount and size of HugePages](#change-amount-and-size-of-hugepages) - [Remove input output memory management unit (IOMMU) from grub params](#remove-input-output-memory-management-unit-iommu-from-grub-params) - [Add custom GRUB parameter](#add-custom-grub-parameter) - [Configure OVS-DPDK in kube-ovn](#configure-ovs-dpdk-in-kube-ovn) @@ -35,7 +35,7 @@ OEKs allow a user to customize kernel, grub parameters, and tuned profiles by le OEKs contain a `host_vars/` directory that can be used to place a YAML file (`nodes-inventory-name.yml`, e.g., `node01.yml`). The file would contain variables that would override roles' default values. -> **NOTE**: Despite the ability to customize parameters (kernel), it is required to have a clean CentOS\* 7.6.1810 operating system installed on hosts (from a minimal ISO image) that will be later deployed from Ansible scripts. This OS shall not have any user customizations. 
+> **NOTE**: Despite the ability to customize parameters (kernel), it is required to have a clean CentOS\* 7.8.2003 operating system installed on hosts (from a minimal ISO image) that will be later deployed from Ansible scripts. This OS shall not have any user customizations. To override the default value, place the variable's name and new value in the host's vars file. For example, the contents of `host_vars/node01.yml` that would result in skipping kernel customization on that node: @@ -50,7 +50,7 @@ The following are several common customization scenarios. The OpenNESS Experience kits deployment uses/allocates/reserves a set of IP address ranges for different CNIs and interfaces. The server or host IP address should not conflict with the default address allocation. In case if there is a critical need for the server IP address used by the OpenNESS default deployment, it would require to modify the default addresses used by the OpenNESS. -Following files sepcify the CIDR for CNIs and interfaces. These are the IP address ranges allocated and used by default just for reference. +Following files specify the CIDR for CNIs and interfaces. These are the IP address ranges allocated and used by default just for reference. ```yaml flavors/media-analytics-vca/all.yml:19:vca_cidr: "172.32.1.0/12" @@ -74,11 +74,11 @@ Here are several default values: # --- machine_setup/custom_kernel kernel_skip: false # use this variable to disable custom kernel installation for host -kernel_repo_url: http://linuxsoft.cern.ch/cern/centos/7/rt/CentOS-RT.repo -kernel_repo_key: http://linuxsoft.cern.ch/cern/centos/7/os/x86_64/RPM-GPG-KEY-cern +kernel_repo_url: http://linuxsoft.cern.ch/cern/centos/7.8.2003/rt/CentOS-RT.repo +kernel_repo_key: http://linuxsoft.cern.ch/cern/centos/7.8.2003/os/x86_64/RPM-GPG-KEY-cern kernel_package: kernel-rt-kvm kernel_devel_package: kernel-rt-devel -kernel_version: 3.10.0-957.21.3.rt56.935.el7.x86_64 +kernel_version: 3.10.0-1127.19.1.rt56.1116.el7.x86_64 kernel_dependencies_urls: [] kernel_dependencies_packages: [] @@ -95,8 +95,8 @@ additional_grub_params: "" # --- machine_setup/configure_tuned tuned_skip: false # use this variable to skip tuned profile configuration for host tuned_packages: -- http://linuxsoft.cern.ch/scientific/7x/x86_64/os/Packages/tuned-2.11.0-8.el7.noarch.rpm -- http://linuxsoft.cern.ch/scientific/7x/x86_64/os/Packages/tuned-profiles-realtime-2.11.0-8.el7.noarch.rpm +- tuned-2.11.0-8.el7 +- http://linuxsoft.cern.ch/scientific/7.8/x86_64/os/Packages/tuned-profiles-realtime-2.11.0-8.el7.noarch.rpm tuned_profile: realtime tuned_vars: | isolated_cores=2-3 @@ -104,16 +104,16 @@ tuned_vars: | nohz_full=2-3 ``` -### Use newer realtime kernel (3.10.0-1062) -By default, `kernel-rt-kvm-3.10.0-957.21.3.rt56.935.el7.x86_64` from `http://linuxsoft.cern.ch/cern/centos/$releasever/rt/$basearch/` repository is installed. +### Use different realtime kernel (3.10.0-1062) +By default, `kernel-rt-kvm-3.10.0-1127.19.1.rt56.1116.el7.x86_64` from buil-in repository is installed. To use another version (e.g., `kernel-rt-kvm-3.10.0-1062.9.1.rt56.1033.el7.x86_64`), create a `host_var` file for the host with content: ```yaml kernel_version: 3.10.0-1062.9.1.rt56.1033.el7.x86_64 ``` -### Use newer non-rt kernel (3.10.0-1062) -The OEK installs a real-time kernel by default from a specific repository. However, the non-rt kernel is present in the official CentOS repository. 
Therefore, to use a newer non-rt kernel, the following overrides must be applied: +### Use different non-rt kernel (3.10.0-1062) +The OEK installs a real-time kernel by default. However, the non-rt kernel is present in the official CentOS repository. Therefore, to use a different non-rt kernel, the following overrides must be applied: ```yaml kernel_repo_url: "" # package is in default repository, no need to add new repository kernel_package: kernel # instead of kernel-rt-kvm diff --git a/doc/orchestration/index.html b/doc/orchestration/index.html new file mode 100644 index 00000000..6bbee5e8 --- /dev/null +++ b/doc/orchestration/index.html @@ -0,0 +1,14 @@ + + +--- +title: OpenNESS Documentation +description: Home +layout: openness +--- +

You are being redirected to the OpenNESS Docs.

+ diff --git a/doc/orchestration/openness-helm.md b/doc/orchestration/openness-helm.md index 94565588..393cf68d 100644 --- a/doc/orchestration/openness-helm.md +++ b/doc/orchestration/openness-helm.md @@ -2,6 +2,7 @@ SPDX-License-Identifier: Apache-2.0 Copyright (c) 2020 Intel Corporation ``` + # Helm support in OpenNESS - [Introduction](#introduction) @@ -54,12 +55,12 @@ OpenNESS provides the following helm charts: The EPA, Telemetry, and k8s plugins helm chart files will be saved in a specific directory on the OpenNESS controller. To modify the directory, change the following variable `ne_helm_charts_default_dir` in the `group_vars/all/10-default.yml` file: ```yaml - ne_helm_charts_default_dir: /opt/openness-helm-charts/ + ne_helm_charts_default_dir: /opt/openness/helm-charts/ ``` To check helm charts files, run the following command on the OpenNESS controller: ```bash - $ ls /opt/openness-helm-charts/ + $ ls /opt/openness/helm-charts/ vpu-plugin gpu-plugin node-feature-discovery prometheus ``` diff --git a/doc/overview.md b/doc/overview.md new file mode 100644 index 00000000..19c16748 --- /dev/null +++ b/doc/overview.md @@ -0,0 +1,107 @@ +```text +SPDX-License-Identifier: Apache-2.0 +Copyright (c) 2019-2020 Intel Corporation +``` + + +# OpenNESS Overview + +- [Introduction to OpenNESS](#introduction-to-openness) +- [Why consider OpenNESS](#why-consider-openness) +- [Building Blocks](#building-blocks) +- [Distributions](#distributions) +- [Consumption Models](#consumption-models) + +## Introduction to OpenNESS + +OpenNESS is an edge computing software toolkit that enables highly optimized and performant edge platforms to on-board and manage applications and network functions with cloud-like agility across any type of network. + +The toolkit includes a variety of Building Blocks that enable you to build different types of Converged Edge platforms that combine IT (Information Technology), OT (Operational Technology) and CT (Communications Technology). + +OpenNESS can help speed up the development of Edges such as: + +- Cloud Native RAN with Apps +- 5G distributed UPF with Apps +- uCPE/SD-WAN with Apps +- AI/vision inferencing apps with MEC +- Media apps with MEC + +OpenNESS is a Certified Kubernetes* offering. See the [CNCF Software Conformance program](https://www.cncf.io/certification/software-conformance/) for details. + +## Why consider OpenNESS + +In the era of 5G, as the cloud architectures start to disaggregate, various locations on the Telco edge start to become prime candidates for compute workloads that are capable of delivering a new set of KPIs for a new breed of apps and services. These locations include the On-prem edge (located typically in an enterprise), the Access Edge (located at or close to a 5G basestation), the Near Edge (the next aggregation point hosting a distributed UPF) and the Regional Data Center (hosting a Next Gen Central Office with wireless/wireline convergence). + +![](arch-images/multi-location-edge.png) + +As the industry seeks to settle on a consistent cloud native platform approach capable of extending across these edge locations, lowering the Total Cost of Ownership (TCO) becomes paramount. 
However a number challenges need to be overcome to achieve this vision: + +- Deliver platform consistency and scalability across diverse edge location requirements +- Optimize cloud native frameworks to meet stringent edge KPIs and simplify network complexity +- Leverage a broad ecosystem and evolving standards for edge computing + +OpenNESS brings together the best of breed cloud native frameworks to build a horizontal edge computing platform to address these challenges. + +**Benefits of OpenNESS** + +Edge Performant & Optimized: + +- Data plane acceleration, throughput, real-time optimizations for low latency, accelerators for crypto, AI, & Media, telemetry & resource management, Edge native power, security, performance/footprint optimizations, Cloud Native containers & microservices based, seamless and frictionless connectivity + +Multi-Access Edge Networking: + +- 3GPP & ETSI MEC based 5G/4G/WiFi capabilities +- Complies with Industry Standards (3GPP, CNCF, ORAN, ETSI) + +Ease of Use, Consumability & Time to Market (TTM) + +- Multi-location, Multi-Access, Multi-Cloud +- Delivered via use case specific Reference Architectures for ease of consumption and to accelerate TTM +- Easy to consume Intel silicon features, integrated set of components (networking, AI, media, vertical use cases), significantly reduce development time, ability to fill gaps in partner/customer IP portfolio + +## Building Blocks + +OpenNESS is composed of a set of Building Blocks, each intended to offer a set of capabilities for edge solutions. + +| Building Block | Summary | +| -------------------------------- | ------------------------------------------------------------ | +| Multi-Access Networking | 3GPP Network function microservices enabling deployment of an edge cloud in a 5G network | +| Edge Multi-Cluster Orchestration | Manage CNFs and applications across massively distributed edge Kubernetes* clusters, placement algorithms based on platform awareness/SLA/cost, multi-cluster service mesh automation | +| Edge Aware Service Mesh | Enhancements for high performance, reduced resource utilization, security and automation | +| Edge WAN Overlay | Highly optimized and secure WAN overlay implementation, providing abstraction of multiple edge & cloud provider networks as a uniform network, traffic sanitization, and edge aware SD-WAN | +| Confidential Computing | Protecting Data In Use at the edge, IP protection in multi tenant hosted environments | +| Resource Management | Kubernetes* extensions for Node Feature Discovery, NUMA awareness, Core Pinning, Resource Management Daemon, Topology Management | +| Data plane CNI | Optimized dataplanes and CNIs for various edge use cases: OVN, eBPF, SRIOV | +| Accelerators | Kubernetes* operators and device plugins for VPU, GPU, FPGA | +| Telemetry and Monitoring | Platform and application level telemetry leveraging industry standard frameworks | +| Green Edge | Modular microservices and Kubernetes* enhancements to manage different power profiles, events and scheduling, and detecting hotspots when deploying services | + + +## Distributions +OpenNESS is released as two distributions: +1. OpenNESS : A full open-source distribution of OpenNESS +2. Intel® Distribution of OpenNESS : A licensed distribution from Intel that includes all the features in OpenNESS along with additional microservices, Kubernetes\* extensions, enhancements, and optimizations for Intel® architecture. + +The Intel Distribution of OpenNESS requires a secure login to the OpenNESS GitHub repository. 
For access to the Intel Distribution of OpenNESS, contact your Intel support representative. + +## Consumption Models + +OpenNESS can be consumed as a whole or as individual building blocks. Whether you are an infrastructure developer or an app developer, if you are moving your business to the Edge, you may benefit from utilizing OpenNESS in your next project. + +**Building Blocks** + +You can explore the various building blocks packaged as Helm Charts and Kubernetes* Operators via the [OpenNESS github project](https://github.com/open-ness). + +**Converged Edge Reference Architectures (CERA)** + +CERA is a set of pre-integrated and readily deployable HW/SW Reference Architectures powered by OpenNESS to significantly accelerate Edge Platform Development, available via the [OpenNESS github project](https://github.com/open-ness). + +**Cloud Devkits** + +Software toolkits to easily deploy an OpenNESS cluster in a cloud environment such as Azure Cloud, available via the [OpenNESS github project](https://github.com/open-ness). + +**Converged Edge Insights** + +Ready to deploy software packages available via the [Intel® Edge Software Hub](https://www.intel.com/content/www/us/en/edge-computing/edge-software-hub.html), comes with use case specific reference implementations to kick start your next pathfinding effort for the Edge. + +Next explore the [OpenNESS Architecture](architecture.md). diff --git a/doc/ran/openness-ran.png b/doc/ran/openness-ran.png deleted file mode 100644 index 1f46c47e..00000000 Binary files a/doc/ran/openness-ran.png and /dev/null differ diff --git a/doc/reference-architectures/CERA-5G-On-Prem.md b/doc/reference-architectures/CERA-5G-On-Prem.md new file mode 100644 index 00000000..399b52a8 --- /dev/null +++ b/doc/reference-architectures/CERA-5G-On-Prem.md @@ -0,0 +1,829 @@ +```text +SPDX-License-Identifier: Apache-2.0 +Copyright (c) 2020 Intel Corporation +``` + +# Converged Edge Reference Architecture 5G On Premises Edge +The Converged Edge Reference Architectures (CERA) are a set of pre-integrated HW/SW reference architectures based on OpenNESS to accelerate the development of edge platforms and architectures. This document describes the CERA 5G On Premises Edge, which combines wireless networking and high performance compute for IoT, AI, video and other services. 
+ +- [CERA 5G On Prem](#cera-5g-on-prem) + - [CERA 5G On Prem Experience Kit](#cera-5g-on-prem-experience-kit) + - [CERA 5G On Prem OpenNESS Configuration](#cera-5g-on-prem-openness-configuration) + - [CERA 5G On Prem Deployment Architecture](#cera-5g-on-prem-deployment-architecture) + - [CERA 5G On Prem Experience Kit Deployments](#cera-5g-on-prem-experience-kit-deployments) + - [Edge Service Applications Supported on CERA 5G On Prem](#edge-service-applications-supported-on-cera-5g-on-prem) + - [OpenVINO™](#openvino) + - [Edge Insights Software](#edge-insights-software) + - [CERA 5G On Prem Hardware Platform](#cera-5g-on-prem-hardware-platform) + - [Hardware Acceleration](#hardware-acceleration) + - [CERA 5G On Prem OpenNESS Deployment](#cera-5g-on-prem-openness-deployment) + - [Setting up Target Platform Before Deployment](#setting-up-target-platform-before-deployment) + - [BIOS Setup](#bios-setup) + - [Setting up Machine with Ansible](#setting-up-machine-with-ansible) + - [Steps to be performed on the machine, where the Ansible playbook is going to be run](#steps-to-be-performed-on-the-machine-where-the-ansible-playbook-is-going-to-be-run) + - [CERA 5G On Premise Experience Kit Deployment](#cera-5g-on-premise-experience-kit-deployment) +- [5G Core Components](#5g-core-components) + - [dUPF](#dupf) + - [Overview](#overview) + - [Deployment](#deployment) + - [Prerequisites](#prerequisites) + - [Settings](#settings) + - [Configuration](#configuration) + - [UPF](#upf) + - [Overview](#overview-1) + - [Deployment](#deployment-1) + - [Prerequisites](#prerequisites-1) + - [Settings](#settings-1) + - [Configuration](#configuration-1) + - [AMF-SMF](#amf-smf) + - [Overview](#overview-2) + - [Deployment](#deployment-2) + - [Prerequisites](#prerequisites-2) + - [Settings](#settings-2) + - [Configuration](#configuration-2) + - [Remote-DN](#remote-dn) + - [Overview](#overview-3) + - [Prerequisites](#prerequisites-3) + - [Local-DN](#local-dn) + - [Overview](#overview-4) + - [Prerequisites](#prerequisites-4) + - [OpenVINO](#openvino-1) + - [Settings](#settings-3) + - [Deployment](#deployment-3) + - [Streaming](#streaming) + - [EIS](#eis) + - [gNodeB](#gnodeb) + - [Overview](#overview-5) + - [Deployment](#deployment-4) + - [Prerequisites](#prerequisites-5) + - [Settings](#settings-4) + - [Configuration](#configuration-3) + - [Time synchronization over PTP for node server](#time-synchronization-over-ptp-for-node-server) + - [Overview](#overview-6) + - [Prerequisites](#prerequisites-6) + - [Settings](#settings-5) + - [GMC configuration](#gmc-configuration) +- [Conclusion](#conclusion) +- [Learn more](#learn-more) +- [Acronyms](#acronyms) + +## CERA 5G On Prem +CERA 5G On Prem deployment focuses on On Premises, Private Wireless and Ruggedized Outdoor deployments, presenting a scalable solution across the On Premises Edge. The assumed 3GPP deployment architecture is based on the figure below from 3GPP 23.501 Rel15 which shows the reference point representation for concurrent access to two (e.g. local and central) data networks (single PDU Session option). The highlighted yellow blocks - RAN, UPF and Data Network (edge apps) are deployed on the CERA 5G On Prem. + +![3GPP Network](cera-on-prem-images/3gpp_on_prem.png) + +> Figure 1 - 3GPP Network + +### CERA 5G On Prem Experience Kit +The CERA 5G On Prem implementation in OpenNESS supports a single Orchestration domain, optimizing the edge node to support Network Functions (gNB, UPF) and Applications at the same time. 
This allows the deployment on small uCPE and pole mounted form factors. + +#### CERA 5G On Prem OpenNESS Configuration +CERA 5G On Prem is a combination of the existing OpenNESS Building Blocks required to run 5G gNB, UPF, Applications and their associated HW Accelerators. CERA 5G On Prem also adds CMK and RMD to better support workload isolation and mitigate any interference from applications affecting the performance of the network functions. The below diagram shows the logical deployment with the OpenNESS Building Blocks. + +![CERA 5G On Prem Architecture](cera-on-prem-images/cera-on-prem-arch.png) + +> Figure 2 - CERA 5G On Prem Architecture + +#### CERA 5G On Prem Deployment Architecture + +![CERA 5G On Prem Deployment](cera-on-prem-images/cera_deployment.png) + +> Figure 3 - CERA 5G On Prem Deployment + +CERA 5G On Prem architecture supports a single platform (Xeon® SP and Xeon D) that hosts both the Edge Node and the Kubernetes* Control Plane. The UPF is deployed using SRIOV-Device plugin and SRIOV-CNI allowing direct access to the network interfaces used for connection to the gNB and back haul. For high throughput workloads such as UPF network function, it is recommended to use single root input/output (SR-IOV) pass-through the physical function (PF) or the virtual function (VF), as required. Also, in some cases, the simple switching capability in the NIC can be used to send traffic from one application to another, as there is a direct path of communication required between the UPF and the Data plane, this becomes an option. It should be noted that the VF-to-VF option is only suitable when there is a direct connection between PODs on the same PF with no support for advanced switching. In this scenario, it is advantageous to configure the UPF with three separate interfaces for the different types of traffic flowing in the system. This eliminates the need for additional traffic switching at the host. In this case, there is a separate interface for N3 traffic to the Access Network, N9 and N4 traffic can share an interface to the backhaul network. While local data network traffic on the N6 can be switched directly to the local applications, similarly gNB DU and CU interfaces N2 and N4 are separated. Depending on performance requirements, a mix of data planes can be used on the platform to meet the varying requirements of the workloads. + +The applications are deployed on the same edge node as the UPF and gNB. + +The use of Intel® Resource Director Technology (Intel® RDT) ensures that the cache allocation and memory bandwidth are optimized for the workloads on running on the platform. + +Intel® Speed Select Technology (Intel® SST) can be used to further enhance the performance of the platform. + +The following Building Blocks are supported in OpenNESS + +- High-Density Deep Learning (HDDL): Software that enables OpenVINO™-based AI apps to run on Intel® Movidius Vision Processing Units (VPUs). It consists of the following components: + - HDDL device plugin for K8s + - HDDL service for scheduling jobs on VPUs +- FPGA/eASIC/NIC: Software that enables AI inferencing for applications, high-performance and low-latency packet pre-processing on network cards, and offloading for network functions such as eNB/gNB offloading Forward Error Correction (FEC). 
It consists of: + - FPGA device plugin for inferencing + - SR-IOV device plugin for FPGA/eASIC + - Dynamic Device Profile for Network Interface Cards (NIC) +- Resource Management Daemon (RMD): RMD uses Intel® Resource Director Technology (Intel® RDT) to implement cache allocation and memory bandwidth allocation to the application pods. This is a key technology for achieving resource isolation and determinism on a cloud-native platform. +- Node Feature Discovery (NFD): Software that enables node feature discovery for Kubernetes*. It detects hardware features available on each node in a Kubernetes* cluster and advertises those features using node labels. +- Topology Manager: This component allows users to align their CPU and peripheral device allocations by NUMA node. +- Kubevirt: Provides support for running legacy applications in VM mode and the allocation of SR-IOV ethernet interfaces to VMs. +- Precision Time Protocol (PTP): Uses primary-secondary architecture for time synchronization between machines connected through ETH. The primary clock is a reference clock for the secondary nodes that adapt their clocks to the primary node's clock. Grand Master Clock (GMC) can be used to precisely set primary clock. + +#### CERA 5G On Prem Experience Kit Deployments +The CERA 5G On Prem experience kit deploys both the 5G On Premises cluster and also a second cluster to host the 5GC control plane functions and provide an additional Data Network POD to act as public network for testing purposes. Note that the Access network and UE are not configured as part of the CERA 5G On Prem Experience Kit. Also required but not provided is a binary iUPF, UPF and 5GC components. Please contact your local Intel® representative for more information. + +![CERA Experience Kit](cera-on-prem-images/cera-full-setup.png) + +> Figure 4 - CERA Experience Kit + +### Edge Service Applications Supported by CERA 5G On Prem +The CERA architectural paradigm enables convergence of edge services and applications across different market segments. This is demonstrated by taking diverse workloads native to different segments and successfully integrating within a common platform. The reference considers workloads segments across the following applications: + +Smart city: Capture of live camera streams to monitor and measure pedestrian and vehicle movement within a zone. + +Industrial: Monitoring of the manufacturing quality of an industrial line, the capture of video streams focuses on manufactured devices on an assembly line and the real-time removal of identified defect parts. + +While these use cases are addressing different market segments, they all have similar requirements: + +- Capture video either from a live stream from a camera, or streamed from a recorded file. + +- Process that video using inference with a trained machine learning model, computer vision filters, etc. + +- Trigger business control logic based on the results of the video processing. + +Video processing is inherently compute intensive and, in most cases, especially in edge processing, video processing becomes the bottleneck in user applications. This, ultimately, impacts service KPIs such as frames-per-second, number of parallel streams, latency, etc. + +Therefore, pre-trained models, performing numerical precision conversions, offloading to video accelerators, heterogeneous processing and asynchronous execution across multiple types of processors all of which increase video throughput are extremely vital in edge video processing. 
However these requirements can significantly complicate software development, requiring expertise that is rare in engineering teams and increasing the time-to-market. + +#### OpenVINO™ +The Intel® Distribution of OpenVINO™ toolkit helps developers and data scientists speed up computer vision workloads, streamline deep learning inference and deployments, and enable easy, heterogeneous execution across Intel® architecture platforms from edge to cloud. It helps to unleash deep learning inference using a common API, streamlining deep learning inference and deployment using standard or custom layers without the overhead of frameworks. + +#### Edge Insights Software +Intel® Edge Insights for Industrial offers a validated solution to easily integrate customers' data, devices, and processes in manufacturing applications, which helps enable near-real-time intelligence at the edge, greater operational efficiency, and security in factories. +Intel® Edge Insights for Industrial takes advantage of modern microservices architecture. This approach helps OEMs, device manufacturers, and solution providers integrate data from sensor networks, operational sources, external providers, and industrial systems more rapidly. The modular, product-validated software enables the extraction of machine data at the edge. It also allows that data to be communicated securely across protocols and operating systems managed cohesively, and analyzed quickly. +Allowing machines to communicate interchangeably across different protocols and operating systems eases the process of data ingestion, analysis, storage, and management. Doing so, also helps industrial companies build powerful analytics and machine learning models easily and generate actionable predictive insights at the edge. +Edge computing software deployments occupy a middle layer between the operating system and applications built upon it. Intel® Edge Insights for Industrial is created and optimized for Intel® architecture-based platforms and validated for underlying operating systems. It's capability supports multiple edge-critical Intel® hardware components like CPUs, FPGAs, accelerators, and Intel® Movidius Vision Processing Unit (VPU). Also, its modular architecture offers OEMs, solution providers, and ISVs the flexibility to pick and choose the features and capabilities that they wish to include or expand upon for customized solutions. As a result, they can bring solutions to market fast and accelerate customer deployments. + +For more information on the supported EIS demos support, see [EIS whitepaper](https://github.com/open-ness/edgeapps/blob/master/applications/eis-experience-kit/docs/whitepaper.md) + +### CERA 5G On Prem Hardware Platform +CERA 5G On Prem is designed to run on standard, off-the-shelf servers with Intel® Xeon CPUs. Dedicated platform is [Single socket SP SYS-E403-9P-FN2T](https://www.supermicro.com/en/products/system/Box_PC/SYS-E403-9P-FN2T.cfm) + + +#### Hardware Acceleration +Based on deployment scenario and capacity requirements, there is option to utilize hardware accelerators on the platform to increase performance of certain workloads. Hardware accelerators can be assigned to the relevant container on the platform through the OpenNESS Controller, enabling modular deployments to meet the desired use case. + +AI Acceleration +Video inference is done using the OpenVINO™ toolkit to accelerate the inference processing to identify people, vehicles or other items, as required. 
This is already optimized for software implementation and can be easily changed to utilize hardware acceleration if it is available on the platform. + +Intel® Movidius Myriad X Vision +Intel® Movidius Myriad X Vision Processing Unit (VPU) can be added to a server to provide a dedicated neural compute engine for accelerating deep learning inferencing at the edge. To take advantage of the performance of the neural compute engine, Intel® has developed the high-density deep learning (HDDL) inference engine plugin for inference of neural networks. + +In the current example when the HDDL is enabled on the platform, the OpenVINO™ toolkit sample application reduces its CPU requirements from two cores to a single core. + +In future releases additional media analytics services may be enabled e.g VCAC-A card, for more information refer to [OpenNESS VA Services](../applications/openness_va_services.md) + +Intel® FPGA PAC N3000 +The Intel® FPGA Programmable Acceleration Card (Intel® FPGA PAC) plays a key role in accelerating certain types of workloads, which in turn increases the overall compute capacity of a commercial, off-the-shelf platform. FPGA benefits include: +- Flexibility - FPGA functionality can change upon every power up of the device. +- Acceleration - Get products to market faster and increase your system performance. +- Integration - Modern FPGAs include on-die processors, transceiver I/Os at 28 Gbps (or faster), RAM blocks, DSP engines, and more. +- Total Cost of Ownership (TCO) - While ASICs may cost less per unit than an equivalent FPGA, building them requires a non-recurring expense (NRE), expensive software tools, specialized design teams, and long manufacturing cycles. + +The Intel® FPGA PAC N3000 is a full-duplex, 100 Gbps in-system, re-programmable acceleration card for multi-workload networking application acceleration. It has an optimal memory mixture designed for network functions, with an integrated network interface card (NIC) in a small form factor that enables high throughput, low latency, and low power per bit for a custom networking pipeline. + +For more references, see [openness-fpga.md: Dedicated FPGA IP resource allocation support for Edge Applications and Network Functions](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-fpga.md) + +Intel® QAT +The Intel® QuickAssist Adapter provides customers with a scalable, flexible, and extendable way to offer Intel® QuickAssist Technology (Intel® QAT) crypto acceleration and compression capabilities to their existing product lines. Intel® QuickAssist Technology (Intel® QAT) provides hardware acceleration to assist with the performance demands of securing and routing Internet traffic and other workloads, such as compression and wireless 4G LTE and 5G gnb algorithm offload, thereby reserving processor cycles for application and control processing. + + +### CERA 5G On Prem OpenNESS Deployment + +#### Setting up Target Platform Before Deployment + +Perform the following steps on the target machine before deployment: + +1. Ensure that, the target machine gets IP address automatically on boot every time. +Example command: +`hostname -I` + +2. Change target machine's hostname. + * Edit file `vi /etc/hostname`. Press `Insert` key to enter Insert mode. Delete the old hostname and replace it with the new one. Exit the vi editor by pressing `Esc` key and then type `:wq` and press `Enter` key. + * Edit file `vi /etc/hosts`. Press `Insert` key to enter Insert mode. 
Add a space at the end of both lines and write hostname after it. Exit the vi editor by pressing `Esc` key and then type `:wq` and press `Enter` key. + +3. Reboot the target machine. + +### BIOS Setup +The BIOS settings on the edge node must be properly set in order for the OpenNESS building blocks to function correctly. They may be set either during deployment of the reference architecture, or manually. The settings that must be set are: +* Enable Intel® Hyper-Threading Technology +* Enable Intel® Virtualization Technology +* Enable Intel® Virtualization Technology for Directed I/O +* Enable SR-IOV Support + +### Setting up Machine with Ansible + +#### Steps to be performed on the machine, where the Ansible playbook is going to be run + +1. Copy SSH key from machine, where the Ansible playbook is going to be run, to the target machine. Example commands: + > NOTE: Generate ssh key if is not present on the machine: `ssh-keygen -t rsa` (Press enter key to apply default values) + + Do it for each target machine. + ```shell + ssh-copy-id root@TARGET_IP + ``` + > NOTE: Replace TARGET_IP with the actual IP address of the target machine. + +2. Clone `ido-converged-edge-experience-kits` repo from `github.com/open-ness` using git token. + ```shell + git clone --recursive GIT_TOKEN@github.com:open-ness/ido-converged-edge-experience-kits.git + ``` + > NOTE: Replace GIT_TOKEN with your git token. + +3. Update repositories by running following commands. + ```shell + cd ido-converged-edge-experience-kits + git submodule foreach --recursive git checkout master + git submodule update --init --recursive + ``` + +4. Provide target machines IP addresses for OpenNESS deployment in `ido-converged-edge-experience-kits/openness_inventory.ini`. For Singlenode setup, set the same IP address for both `controller` and `node01`, the line with `node02` should be commented by adding # at the beginning. +Example: + ```ini + [all] + controller ansible_ssh_user=root ansible_host=192.168.1.43 # First server NE + node01 ansible_ssh_user=root ansible_host=192.168.1.43 # First server NE + ; node02 ansible_ssh_user=root ansible_host=192.168.1.12 + ``` + At that stage provide IP address only for `CERA 5G NE` server. + + If the GMC device is available, the node server can be synchronized. In the `ido-converged-edge-experience-kits/openness_inventory.ini`, `node01` should be added to `ptp_slave_group`. The default value `controller` for `[ptp_master]` should be removed or commented. + ```ini + [ptp_master] + #controller + + [ptp_slave_group] + node01 + ``` + +5. Edit `ido-converged-edge-experience-kits/openness/group_vars/all/10-open.yml` and provide some correct settings for deployment. + + Git token. + ```yaml + git_repo_token: "your git token" + ``` + Proxy if it is required. + ```yaml + # Setup proxy on the machine - required if the Internet is accessible via proxy + proxy_enable: true + # Clear previous proxy settings + proxy_remove_old: true + # Proxy URLs to be used for HTTP, HTTPS and FTP + proxy_http: "http://proxy.example.org:3128" + proxy_https: "http://proxy.example.org:3129" + proxy_ftp: "http://proxy.example.org:3129" + # Proxy to be used by YUM (/etc/yum.conf) + proxy_yum: "{{ proxy_http }}" + # No proxy setting contains addresses and networks that should not be accessed using proxy (e.g. 
local network, Kubernetes CNI networks) + proxy_noproxy: "127.0.0.1,localhost,192.168.1.0/24" + ``` + NTP server + ```yaml + ### Network Time Protocol (NTP) + # Enable machine's time synchronization with NTP server + ntp_enable: true + # Servers to be used by NTP instead of the default ones (e.g. 0.centos.pool.ntp.org) + ntp_servers: ['ntp.server.com'] + ``` + +6. Edit file `ido-converged-edge-experience-kits/openness/flavors/cera_5g_on_premise/edgenode_group.yml` and provide correct CPU settings. + + ```yaml + tuned_vars: | + isolated_cores=2-23,26-47 + nohz=on + nohz_full=2-23,26-47 + + # CPUs to be isolated (for RT procesess) + cpu_isol: "2-23,26-47" + # CPUs not to be isolate (for non-RT processes) - minimum of two OS cores necessary for controller + cpu_os: "0-1,24-25" + ``` + + If a GMC is connected to the setup, then node server synchronization can be enabled inside ido-converged-edge-experience-kits/openness/flavors/cera_5g_on_premise/edgenode_group.yml file. + ```yaml + ptp_sync_enable: true + ``` + +7. Edit file `ido-converged-edge-experience-kits/openness/flavors/cera_5g_on_premise/controller_group.yml` and provide names of `network interfaces` that are connected to second server and number of VF's to be created. + + ```yaml + sriov: + network_interfaces: {eno1: 5, eno2: 10} + ``` + +8. Edit file `ido-converged-edge-experience-kits/openness/x-oek/oek/host_vars/node01.yml` if a GMC is connected and the node server should be synchronized. + + For single node setup (this is the default mode for CERA), `ptp_port` keeps the host's interface connected to Grand Master, e.g.: + ```yaml + ptp_port: "eno3" + ``` + + Variable `ptp_network_transport` keeps network transport for ptp. Choose `"-4"` for default CERA setup. The `gm_ip` variable should contain the GMC's IP address. The Ansible scripts set the IP on the interface connected to the GMC, according to the values in the variables `ptp_port_ip` and `ptp_port_cidr`. + ```yaml + # Valid options: + # -2 Select the IEEE 802.3 network transport. + # -4 Select the UDP IPv4 network transport. + ptp_network_transport: "-4" + + + # Grand Master IP, e.g.: + # gm_ip: "169.254.99.9" + gm_ip: "169.254.99.9" + + # - ptp_port_ip contains a static IP for the server port connected to GMC, e.g.: + # ptp_port_ip: "169.254.99.175" + # - ptp_port_cidr - CIDR for IP from, e.g.: + # ptp_port_cidr: "24" + ptp_port_ip: "169.254.99.175" + ptp_port_cidr: "24" + ``` + +9. Execute the `deploy_openness_for_cera.sh` script in `ido-converged-edge-experience-kits` to start OpenNESS platform deployment process by running the following command: + ```shell + ./deploy_openness_for_cera.sh cera_5g_on_premise + ``` + Note: This might take few hours. + +10. After a successful OpenNESS deployment, edit again `ido-converged-edge-experience-kits/openness_inventory.ini`, change IP address to `CERA 5G CN` server. + ```ini + [all] + controller ansible_ssh_user=root ansible_host=192.168.1.109 # Second server CN + node01 ansible_ssh_user=root ansible_host=192.168.1.109 # Second server CN + ; node02 ansible_ssh_user=root ansible_host=192.168.1.12 + ``` + Then run `deploy_openness_for_cera.sh` again. + ```shell + ./deploy_openness_for_cera.sh + ``` + All settings in `ido-converged-edge-experience-kits/openness/group_vars/all/10-open.yml` are the same for both servers. + + For `CERA 5G CN` server disable synchronization with GMC inside `ido-converged-edge-experience-kits/openness/flavors/cera_5g_on_premise/edgenode_group.yml` file. + ```yaml + ptp_sync_enable: false + ``` + +11. 
When both servers have deployed OpenNess, login to `CERA 5G CN` server and generate `RSA ssh key`. It's required for AMF/SMF VM deployment. + ```shell + ssh-keygen -t rsa + # Press enter key to apply default values + ``` +12. The full setup is now ready for CERA deployment. + +### CERA 5G On Premise Experience Kit Deployment +The following prerequisites should be met for CERA deployment. + +1. CentOS should use the following kernel and have no newer kernels installed: + * `3.10.0-1127.19.1.rt56.1116.el7.x86_64` on Near Edge server. + * `3.10.0-1127.el7.x86_64` on Core Network server. + +2. Edit file `ido-converged-edge-experience-kits/cera_config.yaml` and provide correct settings: + + Git token + ```yaml + git_repo_token: "your git token" + ``` + Decide which demo application should be launched + ```yaml + # choose which demo will be launched: `eis` or `openvino` + deploy_app: "eis" + ``` + EIS release package location + ```yaml + # provide EIS release package archive absolute path + eis_release_package_path: "" + ``` + [OpenVino](#OpenVINO) settings, if OpenVino app was set as active demo application + ```yaml + display_host_ip: "" # update ip for visualizer HOST GUI. + save_video: "enable" + ``` + Proxy settings + ```yaml + # Setup proxy on the machine - required if the Internet is accessible via proxy + proxy_os_enable: true + # Clear previous proxy settings + proxy_os_remove_old: true + # Proxy URLs to be used for HTTP, HTTPS and FTP + proxy_os_http: "http://proxy.example.org:3129" + proxy_os_https: "http://proxy.example.org:3128" + proxy_os_ftp: "http://proxy.example.org:3128" + proxy_os_noproxy: "127.0.0.1,localhost,192.168.1.0/24" + # Proxy to be used by YUM (/etc/yum.conf) + proxy_yum_url: "{{ proxy_os_http }}" + ``` + See [more details](#dUPF) for dUPF configuration + ```yaml + # Define PCI addresses (xxxx:xx:xx.x format) for i-upf + n3_pci_bus_address: "0000:19:0a.0" + n4_n9_pci_bus_address: "0000:19:0a.1" + n6_pci_bus_address: "0000:19:0a.2" + + # Define VPP VF interface names for i-upf + n3_vf_interface_name: "VirtualFunctionEthernet19/a/0" + n4_n9_vf_interface_name: "VirtualFunctionEthernet19/a/1" + n6_vf_interface_name: "VirtualFunctionEthernet19/a/2" + + # PF interface name of N3 created VF + host_if_name_N3: "eno2" + # PF interface name of N4, N6, N9 created VFs + host_if_name_N4_N6_n9: "eno2" + ``` + [gNodeB](#gNodeB) configuration + ```yaml + ## gNodeB related config + gnodeb_fronthaul_vf1: "0000:65:02.0" + gnodeb_fronthaul_vf2: "0000:65:02.1" + + gnodeb_fronthaul_vf1_mac: "ac:1f:6b:c2:48:ad" + gnodeb_fronthaul_vf2_mac: "ac:1f:6b:c2:48:ab" + + n2_gnodeb_pci_bus_address: "0000:19:0a.3" + n3_gnodeb_pci_bus_address: "0000:19:0a.4" + + fec_vf_pci_addr: "0000:b8:00.1" + + # DPDK driver used (vfio-pci/igb_uio) to VFs bindings + dpdk_driver_gnodeb: "igb_uio" + + ## ConfigMap vars + + fronthaul_if_name: "enp101s0f0" + ``` + Settings for `CERA 5G CN` + ```yaml + ## PSA-UPF vars + + # Define N4/N9 and N6 interface device PCI bus address + PCI_bus_address_N4_N9: '0000:19:0a.0' + PCI_bus_address_N6: '0000:19:0a.1' + + # 5gc binaries directory name + package_5gc_path: "/opt/amf-smf/" + + # vpp interface name as per setup connection + vpp_interface_N4_N9_name: 'VirtualFunctionEthernet19/a/0' + vpp_interface_N6_name: 'VirtualFunctionEthernet19/a/1' + ``` +3. If needed change additional settings for `CERA 5G NE` in `ido-converged-edge-experience-kits/host_vars/cera_5g_ne.yml`. 
+ ```yaml + # DPDK driver used (vfio-pci/igb_uio) to VFs bindings + dpdk_driver_upf: "igb_uio" + + # Define path where i-upf is located on remote host + upf_binaries_path: "/opt/flexcore-5g-rel/i-upf/" + ``` + OpenVino model + ```yaml + model: "pedestrian-detection-adas-0002" + ``` +4. Build the following docker images required and provide necessary binaries. + - [dUPF](#dUPF) + - [UPF](#UPF) + - [AMF-SMF](#AMF-SMF) + - [gNB](#gNodeB) +5. Provide correct IP for target servers in file `ido-converged-edge-experience-kits/cera_inventory.ini` + ```ini + [all] + cera_5g_ne ansible_ssh_user=root ansible_host=192.168.1.109 + cera_5g_cn ansible_ssh_user=root ansible_host=192.168.1.43 + ``` +6. Deploy CERA Experience Kit + ```shell + ./deploy_cera.sh + ``` + +## 5G Core Components +This section describes in details how to build particular images and configure ansible for deployment. + +### dUPF + +#### Overview + +The Distributed User Plane Function (dUPF) is a part of 5G Access Network, it is responsible for packets routing. It has 3 separate interfaces for `N3, N4/N9` and `N6` data lines. `N3` interface is used for connection with video stream source. `N4/N9` interface is used for connection with `UPF` and `AMF/SMF`. `N6` interface is used for connection with `EDGE-APP` (locally), `UPF` and `Remote-DN` + +The `CERA dUPF` component is deployed on `CERA 5G Near Edge (cera_5g_ne)` node. It is deployed as a POD - during deployment of CERA 5G On Prem automatically. + +#### Deployment + +##### Prerequisites + +To deploy dUPF correctly, one needs to provide Docker image to Docker repository on the target node. There is a script on the `open-ness/eddgeapps/network-functions/core-network/5G/UPF` repo provided by CERA, which builds the image automatically. + +##### Settings +The following variables need to be defined in `cera_config.yaml` +```yaml +n3_pci_bus_address: "" - PCI bus address of VF, which is used for N3 interface by dUPF +n4_n9_pci_bus_address: "" - PCI bus address of VF, which is used for N4 and N9 interface by dUPF +n6_pci_bus_address: "" - PCI bus address of VF, which is used for N6 interface by dUPF + +n3_vf_interface_name: "" - name of VF, which is used for N3 interface by dUPF +n4_n9_vf_interface_name: "" - name of VF, which is used for N4 and N9 interface by dUPF +n6_vf_interface_name: "" - name of VF, which is used for N6 interface by dUPF +``` + +##### Configuration +The dUPF is configured automatically during the deployment. + +### UPF +#### Overview + +The `User Plane Function (UPF)` is a part of 5G Core Network, it is responsible for packets routing. It has 2 separate interfaces for `N4/N9` and `N6` data lines. `N4/N9` interface is used for connection with `dUPF` and `AMF/SMF` (locally). `N6` interface is used for connection with `EDGE-APP`, `dUPF` and `Remote-DN` (locally). + +The CERA UPF component is deployed on `CERA 5G Core Network (cera_5g_cn)` node. It is deployed as a POD - during deployment of CERA 5G On Prem automatically. + +#### Deployment +##### Prerequisites + +To deploy `UPF` correctly one needs to provide a Docker image to Docker Repository on target nodes. There is a script on the `open-ness/eddgeapps/network-functions/core-network/5G/UPF` repo provided by CERA, which builds the image automatically. 
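+The exact image name and registry layout depend on the build script, but a common way to make a locally built image available on a target node is to export it on the build machine and load it on the node. A minimal sketch, assuming a hypothetical image tag `upf-cnf:1.0`:
+
+```shell
+# On the build machine: export the image produced by the build script
+# (upf-cnf:1.0 is a placeholder - use the tag the script actually produces)
+docker save upf-cnf:1.0 -o upf-cnf.tar
+
+# Copy the archive to the target node (cera_5g_cn for UPF, cera_5g_ne for dUPF)
+scp upf-cnf.tar root@<target-node>:/tmp/
+
+# On the target node: load the image and confirm it is visible to Docker
+docker load -i /tmp/upf-cnf.tar
+docker images | grep upf-cnf
+```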
+ +##### Settings + +The following variables need to be defined in the `cera_config.yaml` +```yaml +PCI_bus_address_N4_N9: "" - PCI bus address of VF, which is used for N4 and N9 interface by UPF +PCI_bus_address_N6: "" - PCI bus address of VF, which is used for N6 interface by UPF + +vpp_interface_N4_N9_name: "" - name of VF, which is used for N4 and N9 interface by UPF +vpp_interface_N6_name: "" - name of VF, which is used for N6 interface by UPF +``` + +##### Configuration +The `UPF` is configured automatically during the deployment. + + +### AMF-SMF +#### Overview + +AMF-SMF is a part of 5G Core Architecture responsible for `Session Management(SMF)` and `Access and Mobility Management(AMF)` Functions - it establishes sessions and manages date plane packages. + +The CERA `AMF-SMF` component is deployed on `CERA 5G Core Network (cera_5g_cn)` node and communicates with UPF and dUPF, so they must be deployed and configured before `AMF-SMF`. + +#### Deployment +##### Prerequisites + +To deploy `AMF-SMF` correctly, one needs to provide a Docker image to Docker Repository on target machine(cera_5g_cn). There is a script on the `open-ness/eddgeapps/network-functions/core-network/5G/AMF-SMF` repository provided by CERA, which builds the image automatically. + +##### Settings + +Following variables need to be defined in `cera_config.yaml` +```yaml +# 5gc binaries directory name +package_5gc_path: "/opt/amf-smf/" +``` + +##### Configuration +The `AMF-SMF` is configured automatically during the deployment. + + +### Remote-DN +#### Overview +Remote Data Network is component, which represents `“internet”` in networking. CERA Core Network manages which data should apply to `Near Edge Application(EIS/OpenVINO)` or go further to the network. + + +##### Prerequisites +Deployment of Remote-DN is completely automated, so there is no need to set or configure anything. + + +### Local-DN +#### Overview +Local Data Network is component, which is responsible for combining Core part with Edge applications. It can convert incoming video streaming protocol for acceptable format by EIS/OpenVino + + +#### Prerequisites +Deployment of Local-DN is completely automated, so there is no need to set or configure anything. + +### OpenVINO + +#### Settings +In the `cera_config.yaml` file can be chosen for which application should be built and deployed. Set a proper value for the deploy_app variable. +```yaml +deploy_app: "" - Type openvino if OpenVINO demo should be launched. +``` + +Several variables must be set in the file `host_vars/cera_5g_ne.yml`: +```yaml +model: "pedestrian-detection-adas-0002" - Model for which the OpenVINO demo will be run. Models which can be selected: pedestrian-detection-adas-0002, pedestrian-detection-adas-binary-0001, pedestrian-and-vehicle-detector-adas-0001, vehicle-detection-adas-0002, vehicle-detection-adas-binary-0001, person-vehicle-bike-detection-crossroad-0078, person-vehicle-bike-detection-crossroad-1016, person-reidentification-retail-0031, person-reidentification-retail-0248, person-reidentification-retail-0249, person-reidentification-retail-0300, road-segmentation-adas-0001 + +save_video: "enable" - For value "enable" the output will be written to /root/saved_video/ov-output.mjpeg file on cera_5g_ne machine. This variable should not be changed. +``` + +#### Deployment +After running the `deploy_cera.sh` script, pod ov-openvino should be available on `cera_5g_ne` machine. 
The status of the ov-openvino pod can be checked by use: +```shell +kubectl -n openvino get pods -o wide +``` +Immediately after creating, the ov-openvino pod will wait for input streaming. If streaming is not available, ov-openvino pod will restart after some time. After this restart, this pod will wait for streaming again. + +#### Streaming +Video to OpenVINO™ pod should be streamed to IP `192.168.1.101` and port `5000`. Make sure that the pod with OpenVINO™ is visible from your streaming machine. In the simplest case, the video can be streamed from the same machine where pod with OpenVINO™ is available. + +Output will be saved to the `saved_video/ov-output.mjpeg` file (`save_video` variable in the `host_vars/cera_5g_ne.yml` should be set to `"enable"` and should be not changed). + +Streaming is possible from a file or from a camera. For continuous and uninterrupted streaming of a video file, the video file can be streamed in a loop. An example of a Bash file for streaming is shown below. +```shell +#!/usr/bin/env bash +while : +do + ffmpeg -re -i Rainy_Street.mp4 -pix_fmt yuvj420p -vcodec mjpeg \ + -huffman 0 -map 0:0 -pkt_size 1200 -f rtp rtp://192.168.1.101:5000 +done +``` +Where: +* `ffmpeg` - Streaming software must be installed on the streaming machine. +* `Rainy_Street.mp4` - The file that will be streamed. This file can be downloaded by: + ```shell + wget https://storage.googleapis.com/coverr-main/zip/Rainy_Street.zip + ``` + +The OpenVINO™ demo, saves its output to saved_video/ov-output.mjpeg file on the cera_5g_cn machine. + +- To stop the OpenVINO™ demo and interrupt creating the output video file - run on the cera_5g_cn: kubectl delete -f /opt/openvino/yamls/openvino.yaml +- To start the OpenVINO™ demo and start creating the output video file (use this command if ov-openvino pod does not exist) - run on the cera_5g_cn: kubectl apply -f /opt/openvino/yamls/openvino.yaml + +### EIS +Deployment of EIS is completely automated, so there is no need to set or configure anything except providing release package archive. +```yaml +# provide EIS release package archive absolute path +eis_release_package_path: "" +``` + +For more details about `eis-experience-kit` check [README.md](https://github.com/open-ness/edgeapps/blob/master/applications/eis-experience-kit/README.md) + +### gNodeB +#### Overview + +`gNodeB` is a part of 5G Core Architecture and is deployed on `CERA 5G Nere Edge (cera_5g_ne)` node. + +#### Deployment +#### Prerequisites + +To deploy `gNodeB` correctly it is required to provide a Docker image to Docker Repository on target machine(cera_5g_ne). There is a script on the `open-ness/eddgeapps/network-functions/ran/5G/gnb` repository provided by CERA, which builds the image automatically. For `gNodeB` deployment FPGA card is required PAC N3000 and also QAT card. 
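+Before the deployment it can help to confirm that the PAC N3000 FPGA and the QAT device are actually visible on `cera_5g_ne`. The `lspci` check below is a sketch; the grep patterns are assumptions and may need adjusting to the exact device strings reported on a given platform.
+
+```shell
+# Look for the FPGA accelerator (Intel FPGA PAC N3000) and the QAT co-processor;
+# adjust the patterns to match the descriptions lspci prints on your system.
+lspci | grep -i -e fpga -e "processing accelerator"
+lspci | grep -i -e quickassist -e co-processor
+
+# The PCI addresses listed here are the ones later referenced by variables
+# such as fec_vf_pci_addr in cera_config.yaml.
+```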
+ +#### Settings + +The following variables need to be defined in `cera_config.yaml` +```yaml +## gNodeB related config +# Fronthaul require two VFs +gnodeb_fronthaul_vf1: "0000:65:02.0" - PCI bus address of VF, which is used as fronthaul +gnodeb_fronthaul_vf2: "0000:65:02.1" - PCI bus address of VF, which is used as fronthaul + +gnodeb_fronthaul_vf1_mac: "ac:1f:6b:c2:48:ad" - MAC address which will be set on the first VF during deployment +gnodeb_fronthaul_vf2_mac: "ac:1f:6b:c2:48:ab" - MAC address which will be set on the second VF during deployment + +n2_gnodeb_pci_bus_address: "0000:19:0a.3" - PCI bus address of VF, which is used for N2 interface +n3_gnodeb_pci_bus_address: "0000:19:0a.4" - PCI bus address of VF, which is used for N3 interface + +fec_vf_pci_addr: "0000:b8:00.1" - PCI bus address of VF, which is assigned to FEC PAC N3000 accelerator + +# DPDK driver used (vfio-pci/igb_uio) to VFs bindings +dpdk_driver_gnodeb: "igb_uio" - driver for binding interfaces + +## ConfigMap vars +fronthaul_if_name: "enp101s0f0" - name of fronthaul interface +``` + +#### Configuration +The `gNodeB` is configured automatically during the deployment. + +### Time synchronization over PTP for node server +#### Overview +The CERA 5G on Premises node must be synchronized with PTP to allow connection with the RRH. + +#### Prerequisites +Not every NIC supports hardware timestamping. To verify if the NIC supports hardware timestamping, run the ethtool command for the interface in use. + +Example: +```shell +ethtool -T eno3 +``` + +Sample output: +```shell +Time stamping parameters for eno3: +Capabilities: + hardware-transmit (SOF_TIMESTAMPING_TX_HARDWARE) + software-transmit (SOF_TIMESTAMPING_TX_SOFTWARE) + hardware-receive (SOF_TIMESTAMPING_RX_HARDWARE) + software-receive (SOF_TIMESTAMPING_RX_SOFTWARE) + software-system-clock (SOF_TIMESTAMPING_SOFTWARE) + hardware-raw-clock (SOF_TIMESTAMPING_RAW_HARDWARE) +PTP Hardware Clock: 0 +Hardware Transmit Timestamp Modes: + off (HWTSTAMP_TX_OFF) + on (HWTSTAMP_TX_ON) +Hardware Receive Filter Modes: + none (HWTSTAMP_FILTER_NONE) + all (HWTSTAMP_FILTER_ALL) +``` + +For software time stamping support, the parameters list should include: +- SOF_TIMESTAMPING_SOFTWARE +- SOF_TIMESTAMPING_TX_SOFTWARE +- SOF_TIMESTAMPING_RX_SOFTWARE + +For hardware time stamping support, the parameters list should include: +- SOF_TIMESTAMPING_RAW_HARDWARE +- SOF_TIMESTAMPING_TX_HARDWARE +- SOF_TIMESTAMPING_RX_HARDWARE + +GMC must be properly configured and connected to the server's ETH port. + +#### Settings +If the GMC has been properly configured and connected to the server then the node server can be synchronized. +In the `ido-converged-edge-experience-kits/openness_inventory.ini` file, `node01` should be added to `ptp_slave_group` and the content inside the `ptp_master` should be empty or commented. +```ini +[ptp_master] +#controller + +[ptp_slave_group] +node01 +``` +Server synchronization can be enabled inside `ido-converged-edge-experience-kits/openness/flavors/cera_5g_on_premise/edgenode_group.yml` file. +```yaml +ptp_sync_enable: true +``` +Edit file `ido-converged-edge-experience-kits/openness/x-oek/oek/host_vars/node01.yml` if a GMC is connected and the node server should be synchronized. + +For single node setup (this is the default mode for CERA), `ptp_port` keeps the host's interface connected to Grand Master, e.g.: +```yaml +ptp_port: "eno3" +``` + +Variable `ptp_network_transport` keeps network transport for ptp. Choose `"-4"` for default CERA setup. 
+```yaml
+# Valid options:
+# -2 Select the IEEE 802.3 network transport.
+# -4 Select the UDP IPv4 network transport.
+ptp_network_transport: "-4"
+
+# Grand Master IP, e.g.:
+# gm_ip: "169.254.99.9"
+gm_ip: "169.254.99.9"
+
+# - ptp_port_ip contains a static IP for the server port connected to the GMC, e.g.:
+#   ptp_port_ip: "169.254.99.175"
+# - ptp_port_cidr is the CIDR prefix length for that IP, e.g.:
+#   ptp_port_cidr: "24"
+ptp_port_ip: "169.254.99.175"
+ptp_port_cidr: "24"
+```
+
+#### GMC configuration
+
+Important settings:
+- Port State: Master
+- Delay Mechanism: E2E
+- Network Protocol: IPv4
+- Sync Interval: 0
+- Delay Request Interval: 0
+- Pdelay Request Interval: 0
+- Announce Interval: 3
+- Announce Receipt Timeout: 3
+- Multicast/Unicast Operation: Unicast
+- Negotiation: ON
+- DHCP: Enable
+- VLAN: Off
+- Profile: Default (1588 v2)
+- Two step clock: FALSE
+- Clock class: 248
+- Clock accuracy: 254
+- Offset scaled log: 65535
+- Priority 1: 128
+- Priority 2: 128
+- Domain number: 0
+- Slave only: FALSE
+
+## Conclusion
+The CERA 5G On Premises deployment provides a reference implementation of how to use OpenNESS software to efficiently deploy, manage, and optimize the performance of network functions and applications suited to running on an on-premises network. With the power of Intel® architecture CPUs and the flexibility to add hardware accelerators, CERA systems can be customized for a wide range of applications.
+
+## Learn more
+* [Building on NFVI foundation from Core to Cloud to Edge with Intel® Architecture](https://networkbuilders.intel.com/social-hub/video/building-on-nfvi-foundation-from-core-to-cloud-to-edge-with-intel-architecture)
+* [Edge Software Hub](https://software.intel.com/content/www/us/en/develop/topics/iot/edge-solutions.html)
+* [Solution Brief: Converged Edge Reference Architecture (CERA) for On-Premise/Outdoor](https://networkbuilders.intel.com/solutionslibrary/converged-edge-reference-architecture-cera-for-on-premise-outdoor#.XffY5ut7kfI)
+
+## Acronyms
+
+| Acronym     | Description                                                    |
+|-------------|----------------------------------------------------------------|
+| AI          | Artificial Intelligence                                        |
+| AN          | Access Network                                                 |
+| CERA        | Converged Edge Reference Architecture                          |
+| CN          | Core Network                                                   |
+| CNF         | Container Network Function                                     |
+| CommSPs     | Communications Service Providers                               |
+| DPDK        | Data Plane Development Kit                                     |
+| eNB         | e-NodeB                                                        |
+| EPA         | Enhanced Platform Awareness                                    |
+| EPC         | Evolved Packet Core                                            |
+| FPGA        | Field Programmable Gate Array                                  |
+| GMC         | Grand Master Clock                                             |
+| IPSEC       | Internet Protocol Security                                     |
+| MEC         | Multi-Access Edge Computing                                    |
+| OpenNESS    | Open Network Edge Services Software                            |
+| OpenVINO    | Open Visual Inference and Neural Network Optimization          |
+| OpenVX      | Open Vision Acceleration                                       |
+| OVS         | Open Virtual Switch                                            |
+| PF          | Physical Function                                              |
+| RAN         | Radio Access Network                                           |
+| PTP         | Precision Time Protocol                                        |
+| SD-WAN      | Software Defined Wide Area Network                             |
+| uCPE        | Universal Customer Premises Equipment                          |
+| UE          | User Equipment                                                 |
+| VF          | Virtual Function                                               |
+| VM          | Virtual Machine                                                |
diff --git a/doc/reference-architectures/CERA-Near-Edge.md b/doc/reference-architectures/CERA-Near-Edge.md
index c9a16b83..b422da83 100644
--- a/doc/reference-architectures/CERA-Near-Edge.md
+++ b/doc/reference-architectures/CERA-Near-Edge.md
@@ -338,13 +338,35 @@ Example:
    # Servers to be used by NTP instead of the default ones (e.g. 0.centos.pool.ntp.org)
    ntp_servers: ['ger.corp.intel.com']
    ```
-6. Execute the `deploy_openness_for_cera.sh` script in `ido-converged-edge-experience-kits` to start OpenNESS platform deployment process by running following command:
+
+6. Edit the file `ido-converged-edge-experience-kits/openness/flavors/cera_5g_near_edge/edgenode_group.yml` and provide the correct CPU settings.
+
+   ```yaml
+   tuned_vars: |
+     isolated_cores=1-16,25-40
+     nohz=on
+     nohz_full=1-16,25-40
+   # CPUs to be isolated (for RT processes)
+   cpu_isol: "1-16,25-40"
+   # CPUs not to be isolated (for non-RT processes) - a minimum of two OS cores is necessary for the controller
+   cpu_os: "0,17-23,24,41-47"
+   ```
+
+7. Edit the file `ido-converged-edge-experience-kits/openness/flavors/cera_5g_near_edge/controller_group.yml` and provide the names of the network interfaces that are connected to the second server and the number of VFs to be created.
+
+   ```yaml
+   sriov:
+     network_interfaces: {eno1: 5, eno2: 2}
+   ```
+   > NOTE: Interface names can differ between platforms, e.g. `eth1` instead of `eno1`. Verify the interface names before deployment and adjust the settings accordingly.
+
+8. Execute the `deploy_openness_for_cera.sh` script in `ido-converged-edge-experience-kits` to start the OpenNESS platform deployment process by running the following command:
    ```shell
-   ./deploy_openness_for_cera.sh
+   ./deploy_openness_for_cera.sh cera_5g_near_edge
    ```
    It might take few hours.
-7. After successful OpenNESS deployment, edit again `ido-converged-edge-experience-kits/openness_inventory.ini`, change IP address to `CERA 5G CN` server.
+9. After a successful OpenNESS deployment, edit `ido-converged-edge-experience-kits/openness_inventory.ini` again and change the IP address to the `CERA 5G CN` server.
    ```ini
    [all]
    controller ansible_ssh_user=root ansible_host=192.168.1.109 # Second server CN
@@ -357,17 +379,19 @@ Example:
    ```
    All settings in `ido-converged-edge-experience-kits/openness/group_vars/all/10-open.yml` are the same for both servers.
-8. When both servers have deployed OpenNess, login to `CERA 5G CN` server and generate `RSA ssh key`. It's required for AMF/SMF VM deployment.
+10. When OpenNESS has been deployed on both servers, log in to the `CERA 5G CN` server and generate an `RSA ssh key`. It is required for the AMF/SMF VM deployment.
    ```shell
    ssh-keygen -t rsa
    # Press enter key to apply default values
    ```
-9. Now full setup is ready for CERA deployment.
+11. Now the full setup is ready for CERA deployment.
 
 ### CERA Near Edge Experience Kit Deployment
 For CERA deployment some prerequisites have to be fulfilled.
-1. Edit file `ido-converged-edge-experience-kits/group_vars/all.yml` and provide correct settings:
+1. CentOS should use the `kernel-3.10.0-957.el7.x86_64` kernel and have no newer kernels installed.
+
+2. Edit the file `ido-converged-edge-experience-kits/group_vars/all.yml` and provide the correct settings:
   Git token
   ```yaml
@@ -389,7 +413,8 @@ For CERA deployment some prerequisites have to be fulfilled.
   vm_image_path: "/opt/flexcore-5g-rel/ubuntu_18.04.qcow2"
   ```
-2. Edit file `ido-converged-edge-experience-kits/host_vars/localhost.yml` and provide correct proxy if is required.
+3. Edit the file `ido-converged-edge-experience-kits/host_vars/localhost.yml` and provide the correct proxy settings, if required.
+
   ```yaml
   ### Proxy settings
   # Setup proxy on the machine - required if the Internet is accessible via proxy
@@ -405,11 +430,11 @@ For CERA deployment some prerequisites have to be fulfilled.
   proxy_yum_url: "{{ proxy_os_http }}"
   ```
-3. Build all docker images required and provide all necessary binaries.
+4. Build all the required Docker images and provide all the necessary binaries.
  - [dUPF](#dUPF)
  - [UPF](#UPF)
  - [AMF-SMF](#AMF-SMF)
-4. Set all necessary settings for `CERA 5G NE` in `ido-converged-edge-experience-kits/host_vars/cera_5g_ne.yml`.
+5. Set all the necessary settings for `CERA 5G NE` in `ido-converged-edge-experience-kits/host_vars/cera_5g_ne.yml`.
  See [more details](#dUPF) for dUPF configuration
  ```yaml
  # Define PCI addresses (xxxx:xx:xx.x format) for i-upf
@@ -439,7 +464,7 @@ For CERA deployment some prerequisites have to be fulfilled.
  save_video: "enable"
  target_device: "CPU"
  ```
-5. Set all necessary settings for `CERA 5G CN` in `ido-converged-edge-experience-kits/host_vars/cera_5g_cn.yml`.
+6. Set all the necessary settings for `CERA 5G CN` in `ido-converged-edge-experience-kits/host_vars/cera_5g_cn.yml`.
  For more details check:
  - [UPF](#UPF)
  - [AMF-SMF](#AMF-SMF)
@@ -457,6 +482,10 @@ For CERA deployment some prerequisites have to be fulfilled.
  package_name_5gc: "5gc"
  ```
  ```yaml
+  # psa-upf directory path
+  upf_binaries_path: '/opt/flexcore-5g-rel/psa-upf/'
+  ```
+  ```yaml
  ## AMF-SMF vars
  # Define N2/N4
@@ -475,13 +504,13 @@ For CERA deployment some prerequisites have to be fulfilled.
  # PF interface name of N4, N6, N9 created VFs
  host_if_name_N4_N6_n9: "eno1"
  ```
-6. Provide correct IP for target servers in file `ido-converged-edge-experience-kits/cera_inventory.ini`
+7. Provide the correct IPs for the target servers in the file `ido-converged-edge-experience-kits/cera_inventory.ini`
  ```ini
  [all]
  cera_5g_ne ansible_ssh_user=root ansible_host=192.168.1.109
  cera_5g_cn ansible_ssh_user=root ansible_host=192.168.1.43
  ```
-6. Deploy CERA Experience Kit
+8. Deploy the CERA Experience Kit
  ```shell
  ./deploy_cera.sh
  ```
@@ -501,7 +530,7 @@ The `CERA dUPF` component is deployed on `CERA 5G Near Edge (cera_5g_ne)` node.
 
 #### Prerequisites
 
-To deploy dUPF correctly it is needed to provide Docker image to Docker repository on target machine. There is a script on the `open-ness/eddgeapps/network-functions/core-network/5G/UPF` repo provided by CERA , which builds the image automatically.
+To deploy dUPF correctly, a Docker image needs to be provided to the Docker repository on the target machine (cera_5g_ne). There is a script in the `open-ness/edgeapps/network-functions/core-network/5G/UPF` repo, provided by CERA, which builds the image automatically.
 
 #### Settings
 Following variables need to be defined in `/host_vars/cera_5g_ne.yml`
@@ -533,7 +562,7 @@ The CERA UPF component is deployed on `CERA 5G Core Network (cera_5g_cn)` node.
 
 #### Prerequisites
 
-To deploy UPF correctly it is needed to provide a Docker image to Docker Repository on target machine. There is a script on the `open-ness/eddgeapps/network-functions/core-network/5G/UPF` repo provided by CERA , which builds the image automatically.
+To deploy UPF correctly, a Docker image needs to be provided to the Docker repository on the target machines (cera_5g_ne and cera_5g_cn). There is a script in the `open-ness/edgeapps/network-functions/core-network/5G/UPF` repo, provided by CERA, which builds the image automatically.
 
 #### Settings
 
@@ -681,6 +710,8 @@ Steps to do on logged Guest OS
 
 After these steps there will be available `.qcow2` image generated by installed Virtual Machine in `/var/lib/libvirt/images` directory.
 
+If AMF-SMF is not working correctly, installing these packages in the Guest OS should fix it: `qemu-guest-agent,iputils-ping,iproute2,screen,libpcap-dev,tcpdump,libsctp-dev,apache2,python-pip,sudo,ssh`.
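+
+A minimal sketch of installing that package set is shown below. The package names come from the list above; using `apt-get` assumes the Ubuntu 18.04 guest image referenced earlier (`ubuntu_18.04.qcow2`) - adjust for other distributions.
+```shell
+# Run inside the AMF-SMF guest OS (assumes an Ubuntu/Debian guest with apt).
+sudo apt-get update
+sudo apt-get install -y qemu-guest-agent iputils-ping iproute2 screen \
+  libpcap-dev tcpdump libsctp-dev apache2 python-pip sudo ssh
+```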
+ ### Remote-DN #### Overview @@ -742,6 +773,11 @@ Where: wget https://storage.googleapis.com/coverr-main/zip/Rainy_Street.zip ``` +The OpenVINO demo, saves its output to saved_video/ov-output.mjpeg file on the cera_5g_cn machine. + +- To stop the OpenVINO demo and interrupt creating the output video file - run on the cera_5g_cn: kubectl delete -f /opt/openvino/yamls/openvino.yaml +- To start the OpenVINO demo and start creating the output video file (use this command if ov-openvino pod does not exist) - run on the cera_5g_cn: kubectl apply -f /opt/openvino/yamls/openvino.yaml + ### EIS Deployment of EIS is completely automated, so there is no need to set or configure anything except providing release package archive. ```yaml diff --git a/doc/reference-architectures/cera-on-prem-images/3gpp_on_prem.png b/doc/reference-architectures/cera-on-prem-images/3gpp_on_prem.png new file mode 100644 index 00000000..f42ca993 Binary files /dev/null and b/doc/reference-architectures/cera-on-prem-images/3gpp_on_prem.png differ diff --git a/doc/reference-architectures/cera-on-prem-images/cera-full-setup.png b/doc/reference-architectures/cera-on-prem-images/cera-full-setup.png new file mode 100755 index 00000000..49750f55 Binary files /dev/null and b/doc/reference-architectures/cera-on-prem-images/cera-full-setup.png differ diff --git a/doc/reference-architectures/cera-on-prem-images/cera-near-edge-orchestration-domains.png b/doc/reference-architectures/cera-on-prem-images/cera-near-edge-orchestration-domains.png new file mode 100644 index 00000000..0f9f93f5 Binary files /dev/null and b/doc/reference-architectures/cera-on-prem-images/cera-near-edge-orchestration-domains.png differ diff --git a/doc/reference-architectures/cera-on-prem-images/cera-on-prem-arch.png b/doc/reference-architectures/cera-on-prem-images/cera-on-prem-arch.png new file mode 100644 index 00000000..5d3f4385 Binary files /dev/null and b/doc/reference-architectures/cera-on-prem-images/cera-on-prem-arch.png differ diff --git a/doc/reference-architectures/cera-on-prem-images/cera_deployment.png b/doc/reference-architectures/cera-on-prem-images/cera_deployment.png new file mode 100644 index 00000000..0a544163 Binary files /dev/null and b/doc/reference-architectures/cera-on-prem-images/cera_deployment.png differ diff --git a/doc/reference-architectures/cera-on-prem-images/image-20200826-122458.png b/doc/reference-architectures/cera-on-prem-images/image-20200826-122458.png new file mode 100644 index 00000000..1b25e9e0 Binary files /dev/null and b/doc/reference-architectures/cera-on-prem-images/image-20200826-122458.png differ diff --git a/doc/reference-architectures/cera-on-prem-images/network_locations.png b/doc/reference-architectures/cera-on-prem-images/network_locations.png new file mode 100644 index 00000000..9391c97f Binary files /dev/null and b/doc/reference-architectures/cera-on-prem-images/network_locations.png differ diff --git a/doc/core-network/5g-nsa-images/5g-nsa.png b/doc/reference-architectures/core-network/5g-nsa-images/5g-nsa.png similarity index 100% rename from doc/core-network/5g-nsa-images/5g-nsa.png rename to doc/reference-architectures/core-network/5g-nsa-images/5g-nsa.png diff --git a/doc/core-network/5g-nsa-images/distributed-epc.png b/doc/reference-architectures/core-network/5g-nsa-images/distributed-epc.png similarity index 100% rename from doc/core-network/5g-nsa-images/distributed-epc.png rename to doc/reference-architectures/core-network/5g-nsa-images/distributed-epc.png diff --git 
a/doc/core-network/5g-nsa-images/distributed-spgw.png b/doc/reference-architectures/core-network/5g-nsa-images/distributed-spgw.png similarity index 100% rename from doc/core-network/5g-nsa-images/distributed-spgw.png rename to doc/reference-architectures/core-network/5g-nsa-images/distributed-spgw.png diff --git a/doc/core-network/5g-nsa-images/openness-nsa-depc.png b/doc/reference-architectures/core-network/5g-nsa-images/openness-nsa-depc.png similarity index 100% rename from doc/core-network/5g-nsa-images/openness-nsa-depc.png rename to doc/reference-architectures/core-network/5g-nsa-images/openness-nsa-depc.png diff --git a/doc/core-network/5g-nsa-images/option-3.png b/doc/reference-architectures/core-network/5g-nsa-images/option-3.png similarity index 100% rename from doc/core-network/5g-nsa-images/option-3.png rename to doc/reference-architectures/core-network/5g-nsa-images/option-3.png diff --git a/doc/core-network/5g-nsa-images/option-3a.png b/doc/reference-architectures/core-network/5g-nsa-images/option-3a.png similarity index 100% rename from doc/core-network/5g-nsa-images/option-3a.png rename to doc/reference-architectures/core-network/5g-nsa-images/option-3a.png diff --git a/doc/core-network/5g-nsa-images/option-3x-4g-coverage-1.png b/doc/reference-architectures/core-network/5g-nsa-images/option-3x-4g-coverage-1.png similarity index 100% rename from doc/core-network/5g-nsa-images/option-3x-4g-coverage-1.png rename to doc/reference-architectures/core-network/5g-nsa-images/option-3x-4g-coverage-1.png diff --git a/doc/core-network/5g-nsa-images/option-3x-4g-coverage-2.png b/doc/reference-architectures/core-network/5g-nsa-images/option-3x-4g-coverage-2.png similarity index 100% rename from doc/core-network/5g-nsa-images/option-3x-4g-coverage-2.png rename to doc/reference-architectures/core-network/5g-nsa-images/option-3x-4g-coverage-2.png diff --git a/doc/core-network/5g-nsa-images/option-3x-5g-coverage.png b/doc/reference-architectures/core-network/5g-nsa-images/option-3x-5g-coverage.png similarity index 100% rename from doc/core-network/5g-nsa-images/option-3x-5g-coverage.png rename to doc/reference-architectures/core-network/5g-nsa-images/option-3x-5g-coverage.png diff --git a/doc/core-network/5g-nsa-images/option-3x.png b/doc/reference-architectures/core-network/5g-nsa-images/option-3x.png similarity index 100% rename from doc/core-network/5g-nsa-images/option-3x.png rename to doc/reference-architectures/core-network/5g-nsa-images/option-3x.png diff --git a/doc/core-network/5g-nsa-images/sgw-lbo.png b/doc/reference-architectures/core-network/5g-nsa-images/sgw-lbo.png similarity index 100% rename from doc/core-network/5g-nsa-images/sgw-lbo.png rename to doc/reference-architectures/core-network/5g-nsa-images/sgw-lbo.png diff --git a/doc/core-network/epc-images/Openness_highlevel.png b/doc/reference-architectures/core-network/epc-images/Openness_highlevel.png similarity index 100% rename from doc/core-network/epc-images/Openness_highlevel.png rename to doc/reference-architectures/core-network/epc-images/Openness_highlevel.png diff --git a/doc/core-network/epc-images/openness_epc1.png b/doc/reference-architectures/core-network/epc-images/openness_epc1.png similarity index 100% rename from doc/core-network/epc-images/openness_epc1.png rename to doc/reference-architectures/core-network/epc-images/openness_epc1.png diff --git a/doc/core-network/epc-images/openness_epc2.png b/doc/reference-architectures/core-network/epc-images/openness_epc2.png similarity index 100% rename from 
doc/core-network/epc-images/openness_epc2.png rename to doc/reference-architectures/core-network/epc-images/openness_epc2.png diff --git a/doc/core-network/epc-images/openness_epc3.png b/doc/reference-architectures/core-network/epc-images/openness_epc3.png similarity index 100% rename from doc/core-network/epc-images/openness_epc3.png rename to doc/reference-architectures/core-network/epc-images/openness_epc3.png diff --git a/doc/core-network/epc-images/openness_epc_cnca_1.png b/doc/reference-architectures/core-network/epc-images/openness_epc_cnca_1.png similarity index 100% rename from doc/core-network/epc-images/openness_epc_cnca_1.png rename to doc/reference-architectures/core-network/epc-images/openness_epc_cnca_1.png diff --git a/doc/core-network/epc-images/openness_epcconfig.png b/doc/reference-architectures/core-network/epc-images/openness_epcconfig.png similarity index 100% rename from doc/core-network/epc-images/openness_epcconfig.png rename to doc/reference-architectures/core-network/epc-images/openness_epcconfig.png diff --git a/doc/core-network/epc-images/openness_epctest1.png b/doc/reference-architectures/core-network/epc-images/openness_epctest1.png similarity index 100% rename from doc/core-network/epc-images/openness_epctest1.png rename to doc/reference-architectures/core-network/epc-images/openness_epctest1.png diff --git a/doc/core-network/epc-images/openness_epctest2.png b/doc/reference-architectures/core-network/epc-images/openness_epctest2.png similarity index 100% rename from doc/core-network/epc-images/openness_epctest2.png rename to doc/reference-architectures/core-network/epc-images/openness_epctest2.png diff --git a/doc/core-network/epc-images/openness_epctest3.png b/doc/reference-architectures/core-network/epc-images/openness_epctest3.png similarity index 100% rename from doc/core-network/epc-images/openness_epctest3.png rename to doc/reference-architectures/core-network/epc-images/openness_epctest3.png diff --git a/doc/core-network/epc-images/openness_epctest4.png b/doc/reference-architectures/core-network/epc-images/openness_epctest4.png similarity index 100% rename from doc/core-network/epc-images/openness_epctest4.png rename to doc/reference-architectures/core-network/epc-images/openness_epctest4.png diff --git a/doc/core-network/epc-images/openness_epcupf_add.png b/doc/reference-architectures/core-network/epc-images/openness_epcupf_add.png similarity index 100% rename from doc/core-network/epc-images/openness_epcupf_add.png rename to doc/reference-architectures/core-network/epc-images/openness_epcupf_add.png diff --git a/doc/core-network/epc-images/openness_epcupf_del.png b/doc/reference-architectures/core-network/epc-images/openness_epcupf_del.png similarity index 100% rename from doc/core-network/epc-images/openness_epcupf_del.png rename to doc/reference-architectures/core-network/epc-images/openness_epcupf_del.png diff --git a/doc/core-network/epc-images/openness_epcupf_get.png b/doc/reference-architectures/core-network/epc-images/openness_epcupf_get.png similarity index 100% rename from doc/core-network/epc-images/openness_epcupf_get.png rename to doc/reference-architectures/core-network/epc-images/openness_epcupf_get.png diff --git a/doc/core-network/index.html b/doc/reference-architectures/core-network/index.html similarity index 100% rename from doc/core-network/index.html rename to doc/reference-architectures/core-network/index.html diff --git a/doc/core-network/ngc-images/5g_edge_data_paths.png 
b/doc/reference-architectures/core-network/ngc-images/5g_edge_data_paths.png similarity index 100% rename from doc/core-network/ngc-images/5g_edge_data_paths.png rename to doc/reference-architectures/core-network/ngc-images/5g_edge_data_paths.png diff --git a/doc/core-network/ngc-images/5g_edge_deployment_scenario1.png b/doc/reference-architectures/core-network/ngc-images/5g_edge_deployment_scenario1.png similarity index 100% rename from doc/core-network/ngc-images/5g_edge_deployment_scenario1.png rename to doc/reference-architectures/core-network/ngc-images/5g_edge_deployment_scenario1.png diff --git a/doc/core-network/ngc-images/5g_edge_deployment_scenario2.png b/doc/reference-architectures/core-network/ngc-images/5g_edge_deployment_scenario2.png similarity index 100% rename from doc/core-network/ngc-images/5g_edge_deployment_scenario2.png rename to doc/reference-architectures/core-network/ngc-images/5g_edge_deployment_scenario2.png diff --git a/doc/core-network/ngc-images/5g_edge_deployment_scenario3.png b/doc/reference-architectures/core-network/ngc-images/5g_edge_deployment_scenario3.png similarity index 100% rename from doc/core-network/ngc-images/5g_edge_deployment_scenario3.png rename to doc/reference-architectures/core-network/ngc-images/5g_edge_deployment_scenario3.png diff --git a/doc/core-network/ngc-images/5g_openess_components.png b/doc/reference-architectures/core-network/ngc-images/5g_openess_components.png similarity index 100% rename from doc/core-network/ngc-images/5g_openess_components.png rename to doc/reference-architectures/core-network/ngc-images/5g_openess_components.png diff --git a/doc/core-network/ngc-images/5g_openess_microservices.png b/doc/reference-architectures/core-network/ngc-images/5g_openess_microservices.png similarity index 100% rename from doc/core-network/ngc-images/5g_openess_microservices.png rename to doc/reference-architectures/core-network/ngc-images/5g_openess_microservices.png diff --git a/doc/core-network/ngc-images/5g_system_architecture.png b/doc/reference-architectures/core-network/ngc-images/5g_system_architecture.png similarity index 100% rename from doc/core-network/ngc-images/5g_system_architecture.png rename to doc/reference-architectures/core-network/ngc-images/5g_system_architecture.png diff --git a/doc/core-network/ngc-images/AF_Policy_Authorization_Notif.png b/doc/reference-architectures/core-network/ngc-images/AF_Policy_Authorization_Notif.png similarity index 100% rename from doc/core-network/ngc-images/AF_Policy_Authorization_Notif.png rename to doc/reference-architectures/core-network/ngc-images/AF_Policy_Authorization_Notif.png diff --git a/doc/core-network/ngc-images/AF_Policy_Authorization_Notif_Terminate.png b/doc/reference-architectures/core-network/ngc-images/AF_Policy_Authorization_Notif_Terminate.png similarity index 100% rename from doc/core-network/ngc-images/AF_Policy_Authorization_Notif_Terminate.png rename to doc/reference-architectures/core-network/ngc-images/AF_Policy_Authorization_Notif_Terminate.png diff --git a/doc/core-network/ngc-images/AF_Policy_Authorization_Notification.png b/doc/reference-architectures/core-network/ngc-images/AF_Policy_Authorization_Notification.png similarity index 100% rename from doc/core-network/ngc-images/AF_Policy_Authorization_Notification.png rename to doc/reference-architectures/core-network/ngc-images/AF_Policy_Authorization_Notification.png diff --git a/doc/core-network/ngc-images/AF_Policy_Authorization_create.png 
b/doc/reference-architectures/core-network/ngc-images/AF_Policy_Authorization_create.png similarity index 100% rename from doc/core-network/ngc-images/AF_Policy_Authorization_create.png rename to doc/reference-architectures/core-network/ngc-images/AF_Policy_Authorization_create.png diff --git a/doc/core-network/ngc-images/AF_Policy_Authorization_delete.png b/doc/reference-architectures/core-network/ngc-images/AF_Policy_Authorization_delete.png similarity index 100% rename from doc/core-network/ngc-images/AF_Policy_Authorization_delete.png rename to doc/reference-architectures/core-network/ngc-images/AF_Policy_Authorization_delete.png diff --git a/doc/core-network/ngc-images/AF_Policy_Authorization_event_subscription_delete.png b/doc/reference-architectures/core-network/ngc-images/AF_Policy_Authorization_event_subscription_delete.png similarity index 100% rename from doc/core-network/ngc-images/AF_Policy_Authorization_event_subscription_delete.png rename to doc/reference-architectures/core-network/ngc-images/AF_Policy_Authorization_event_subscription_delete.png diff --git a/doc/core-network/ngc-images/AF_Policy_Authorization_event_subscription_put.png b/doc/reference-architectures/core-network/ngc-images/AF_Policy_Authorization_event_subscription_put.png similarity index 100% rename from doc/core-network/ngc-images/AF_Policy_Authorization_event_subscription_put.png rename to doc/reference-architectures/core-network/ngc-images/AF_Policy_Authorization_event_subscription_put.png diff --git a/doc/core-network/ngc-images/AF_Policy_Authorization_get.png b/doc/reference-architectures/core-network/ngc-images/AF_Policy_Authorization_get.png similarity index 100% rename from doc/core-network/ngc-images/AF_Policy_Authorization_get.png rename to doc/reference-architectures/core-network/ngc-images/AF_Policy_Authorization_get.png diff --git a/doc/core-network/ngc-images/AF_Policy_Authorization_patch.png b/doc/reference-architectures/core-network/ngc-images/AF_Policy_Authorization_patch.png similarity index 100% rename from doc/core-network/ngc-images/AF_Policy_Authorization_patch.png rename to doc/reference-architectures/core-network/ngc-images/AF_Policy_Authorization_patch.png diff --git a/doc/core-network/ngc-images/AF_Traffic_Influence_Notification.png b/doc/reference-architectures/core-network/ngc-images/AF_Traffic_Influence_Notification.png similarity index 100% rename from doc/core-network/ngc-images/AF_Traffic_Influence_Notification.png rename to doc/reference-architectures/core-network/ngc-images/AF_Traffic_Influence_Notification.png diff --git a/doc/core-network/ngc-images/AF_traffic_influence_add.png b/doc/reference-architectures/core-network/ngc-images/AF_traffic_influence_add.png similarity index 100% rename from doc/core-network/ngc-images/AF_traffic_influence_add.png rename to doc/reference-architectures/core-network/ngc-images/AF_traffic_influence_add.png diff --git a/doc/core-network/ngc-images/AF_traffic_influence_delete.png b/doc/reference-architectures/core-network/ngc-images/AF_traffic_influence_delete.png similarity index 100% rename from doc/core-network/ngc-images/AF_traffic_influence_delete.png rename to doc/reference-architectures/core-network/ngc-images/AF_traffic_influence_delete.png diff --git a/doc/core-network/ngc-images/AF_traffic_influence_get.png b/doc/reference-architectures/core-network/ngc-images/AF_traffic_influence_get.png similarity index 100% rename from doc/core-network/ngc-images/AF_traffic_influence_get.png rename to 
doc/reference-architectures/core-network/ngc-images/AF_traffic_influence_get.png diff --git a/doc/core-network/ngc-images/AF_traffic_influence_update.png b/doc/reference-architectures/core-network/ngc-images/AF_traffic_influence_update.png similarity index 100% rename from doc/core-network/ngc-images/AF_traffic_influence_update.png rename to doc/reference-architectures/core-network/ngc-images/AF_traffic_influence_update.png diff --git a/doc/core-network/ngc-images/OAuth2Flow.png b/doc/reference-architectures/core-network/ngc-images/OAuth2Flow.png similarity index 100% rename from doc/core-network/ngc-images/OAuth2Flow.png rename to doc/reference-architectures/core-network/ngc-images/OAuth2Flow.png diff --git a/doc/core-network/ngc-images/PFD_Management_transaction_delete.png b/doc/reference-architectures/core-network/ngc-images/PFD_Management_transaction_delete.png similarity index 100% rename from doc/core-network/ngc-images/PFD_Management_transaction_delete.png rename to doc/reference-architectures/core-network/ngc-images/PFD_Management_transaction_delete.png diff --git a/doc/core-network/ngc-images/PFD_Management_transaction_get.png b/doc/reference-architectures/core-network/ngc-images/PFD_Management_transaction_get.png similarity index 100% rename from doc/core-network/ngc-images/PFD_Management_transaction_get.png rename to doc/reference-architectures/core-network/ngc-images/PFD_Management_transaction_get.png diff --git a/doc/core-network/ngc-images/PFD_Managment_transaction_add.png b/doc/reference-architectures/core-network/ngc-images/PFD_Managment_transaction_add.png similarity index 100% rename from doc/core-network/ngc-images/PFD_Managment_transaction_add.png rename to doc/reference-architectures/core-network/ngc-images/PFD_Managment_transaction_add.png diff --git a/doc/core-network/ngc-images/PFD_management_transaction_update.png b/doc/reference-architectures/core-network/ngc-images/PFD_management_transaction_update.png similarity index 100% rename from doc/core-network/ngc-images/PFD_management_transaction_update.png rename to doc/reference-architectures/core-network/ngc-images/PFD_management_transaction_update.png diff --git a/doc/core-network/ngc-images/cntf_in_5G_ref_architecture.png b/doc/reference-architectures/core-network/ngc-images/cntf_in_5G_ref_architecture.png similarity index 100% rename from doc/core-network/ngc-images/cntf_in_5G_ref_architecture.png rename to doc/reference-architectures/core-network/ngc-images/cntf_in_5G_ref_architecture.png diff --git a/doc/core-network/ngc-images/e2e_pfd_pa.png b/doc/reference-architectures/core-network/ngc-images/e2e_pfd_pa.png similarity index 100% rename from doc/core-network/ngc-images/e2e_pfd_pa.png rename to doc/reference-architectures/core-network/ngc-images/e2e_pfd_pa.png diff --git a/doc/core-network/ngc-images/e2e_pfd_tif.png b/doc/reference-architectures/core-network/ngc-images/e2e_pfd_tif.png similarity index 100% rename from doc/core-network/ngc-images/e2e_pfd_tif.png rename to doc/reference-architectures/core-network/ngc-images/e2e_pfd_tif.png diff --git a/doc/core-network/ngc-images/e2e_tif.png b/doc/reference-architectures/core-network/ngc-images/e2e_tif.png similarity index 100% rename from doc/core-network/ngc-images/e2e_tif.png rename to doc/reference-architectures/core-network/ngc-images/e2e_tif.png diff --git a/doc/core-network/ngc-images/ngcoam_af_service_add.png b/doc/reference-architectures/core-network/ngc-images/ngcoam_af_service_add.png similarity index 100% rename from 
doc/core-network/ngc-images/ngcoam_af_service_add.png rename to doc/reference-architectures/core-network/ngc-images/ngcoam_af_service_add.png diff --git a/doc/core-network/ngc-images/ngcoam_af_service_delete.png b/doc/reference-architectures/core-network/ngc-images/ngcoam_af_service_delete.png similarity index 100% rename from doc/core-network/ngc-images/ngcoam_af_service_delete.png rename to doc/reference-architectures/core-network/ngc-images/ngcoam_af_service_delete.png diff --git a/doc/core-network/ngc-images/ngcoam_af_service_get.png b/doc/reference-architectures/core-network/ngc-images/ngcoam_af_service_get.png similarity index 100% rename from doc/core-network/ngc-images/ngcoam_af_service_get.png rename to doc/reference-architectures/core-network/ngc-images/ngcoam_af_service_get.png diff --git a/doc/core-network/ngc-images/ngcoam_af_service_update.png b/doc/reference-architectures/core-network/ngc-images/ngcoam_af_service_update.png similarity index 100% rename from doc/core-network/ngc-images/ngcoam_af_service_update.png rename to doc/reference-architectures/core-network/ngc-images/ngcoam_af_service_update.png diff --git a/doc/core-network/ngc_flows/AF_Policy_Authorization_Notif.uml b/doc/reference-architectures/core-network/ngc_flows/AF_Policy_Authorization_Notif.uml similarity index 100% rename from doc/core-network/ngc_flows/AF_Policy_Authorization_Notif.uml rename to doc/reference-architectures/core-network/ngc_flows/AF_Policy_Authorization_Notif.uml diff --git a/doc/core-network/ngc_flows/AF_Policy_Authorization_Notif_Terminate.uml b/doc/reference-architectures/core-network/ngc_flows/AF_Policy_Authorization_Notif_Terminate.uml similarity index 100% rename from doc/core-network/ngc_flows/AF_Policy_Authorization_Notif_Terminate.uml rename to doc/reference-architectures/core-network/ngc_flows/AF_Policy_Authorization_Notif_Terminate.uml diff --git a/doc/core-network/ngc_flows/AF_Policy_Authorization_Notification.uml b/doc/reference-architectures/core-network/ngc_flows/AF_Policy_Authorization_Notification.uml similarity index 100% rename from doc/core-network/ngc_flows/AF_Policy_Authorization_Notification.uml rename to doc/reference-architectures/core-network/ngc_flows/AF_Policy_Authorization_Notification.uml diff --git a/doc/core-network/ngc_flows/AF_Policy_Authorization_create.uml b/doc/reference-architectures/core-network/ngc_flows/AF_Policy_Authorization_create.uml similarity index 100% rename from doc/core-network/ngc_flows/AF_Policy_Authorization_create.uml rename to doc/reference-architectures/core-network/ngc_flows/AF_Policy_Authorization_create.uml diff --git a/doc/core-network/ngc_flows/AF_Policy_Authorization_delete.uml b/doc/reference-architectures/core-network/ngc_flows/AF_Policy_Authorization_delete.uml similarity index 100% rename from doc/core-network/ngc_flows/AF_Policy_Authorization_delete.uml rename to doc/reference-architectures/core-network/ngc_flows/AF_Policy_Authorization_delete.uml diff --git a/doc/core-network/ngc_flows/AF_Policy_Authorization_event_subscription_delete.uml b/doc/reference-architectures/core-network/ngc_flows/AF_Policy_Authorization_event_subscription_delete.uml similarity index 100% rename from doc/core-network/ngc_flows/AF_Policy_Authorization_event_subscription_delete.uml rename to doc/reference-architectures/core-network/ngc_flows/AF_Policy_Authorization_event_subscription_delete.uml diff --git a/doc/core-network/ngc_flows/AF_Policy_Authorization_event_subscription_put.uml 
b/doc/reference-architectures/core-network/ngc_flows/AF_Policy_Authorization_event_subscription_put.uml similarity index 100% rename from doc/core-network/ngc_flows/AF_Policy_Authorization_event_subscription_put.uml rename to doc/reference-architectures/core-network/ngc_flows/AF_Policy_Authorization_event_subscription_put.uml diff --git a/doc/core-network/ngc_flows/AF_Policy_Authorization_get.uml b/doc/reference-architectures/core-network/ngc_flows/AF_Policy_Authorization_get.uml similarity index 100% rename from doc/core-network/ngc_flows/AF_Policy_Authorization_get.uml rename to doc/reference-architectures/core-network/ngc_flows/AF_Policy_Authorization_get.uml diff --git a/doc/core-network/ngc_flows/AF_Policy_Authorization_patch.uml b/doc/reference-architectures/core-network/ngc_flows/AF_Policy_Authorization_patch.uml similarity index 100% rename from doc/core-network/ngc_flows/AF_Policy_Authorization_patch.uml rename to doc/reference-architectures/core-network/ngc_flows/AF_Policy_Authorization_patch.uml diff --git a/doc/core-network/ngc_flows/AF_Traffic_Influence_Notification.uml b/doc/reference-architectures/core-network/ngc_flows/AF_Traffic_Influence_Notification.uml similarity index 100% rename from doc/core-network/ngc_flows/AF_Traffic_Influence_Notification.uml rename to doc/reference-architectures/core-network/ngc_flows/AF_Traffic_Influence_Notification.uml diff --git a/doc/core-network/ngc_flows/AF_traffic_influence_add.uml b/doc/reference-architectures/core-network/ngc_flows/AF_traffic_influence_add.uml similarity index 96% rename from doc/core-network/ngc_flows/AF_traffic_influence_add.uml rename to doc/reference-architectures/core-network/ngc_flows/AF_traffic_influence_add.uml index 0d0d8eba..148a6a46 100644 --- a/doc/core-network/ngc_flows/AF_traffic_influence_add.uml +++ b/doc/reference-architectures/core-network/ngc_flows/AF_traffic_influence_add.uml @@ -1,50 +1,50 @@ -@startuml -/' SPDX-License-Identifier: Apache-2.0 - Copyright (c) 2020 Intel Corporation -'/ -skinparam monochrome false -skinparam roundcorner 20 -skinparam defaultFontName "Intel Clear" -skinparam defaultFontSize 20 -skinparam maxmessagesize 300 -skinparam sequenceArrowThickness 3 - -header Intel Corporation -footer Proprietary and Confidential -title Traffic influencing flows between OpenNESS controller and 5G Core - -actor "User/Admin" as user -box "OpenNESS Controller components" #LightBlue - participant "UI/CLI" as cnca - participant "AF Microservice" as af -end box -box "5G Core components" #LightGreen - participant "NEF" as nef - note over nef - OpenNESS provided - Core component with - limited functionality - end note - participant "NGC\nCP Functions" as ngccp -end box - -group Traffic influence submission flow - user -> cnca : Traffic influencing request - activate cnca - cnca -> af : /af/v1/subscriptions: POST \n {3GPP TS 29.522v15.3 \n Sec. 5.4}* - activate af - af -> nef : /3gpp-traffic-Influence/v1/{afId}/subscriptions : POST \n {3GPP TS 29.522v15.3 \n Sec. 
5.4} - activate nef - - nef -> ngccp : {Open: 3rd party NGC integration with OpenNESS(NEF)} - ngccp --> nef : - nef --> af : OK: {subscriptionId} \n ERROR: {400/500} - deactivate nef - af --> cnca : OK: {subscriptionId} \n ERROR: {400/500} - deactivate af - cnca --> user : Success: {subscriptionId} - deactivate cnca -end group - -@enduml - +@startuml +/' SPDX-License-Identifier: Apache-2.0 + Copyright (c) 2020 Intel Corporation +'/ +skinparam monochrome false +skinparam roundcorner 20 +skinparam defaultFontName "Intel Clear" +skinparam defaultFontSize 20 +skinparam maxmessagesize 300 +skinparam sequenceArrowThickness 3 + +header Intel Corporation +footer Proprietary and Confidential +title Traffic influencing flows between OpenNESS controller and 5G Core + +actor "User/Admin" as user +box "OpenNESS Controller components" #LightBlue + participant "UI/CLI" as cnca + participant "AF Microservice" as af +end box +box "5G Core components" #LightGreen + participant "NEF" as nef + note over nef + OpenNESS provided + Core component with + limited functionality + end note + participant "NGC\nCP Functions" as ngccp +end box + +group Traffic influence submission flow + user -> cnca : Traffic influencing request + activate cnca + cnca -> af : /af/v1/subscriptions: POST \n {3GPP TS 29.522v15.3 \n Sec. 5.4}* + activate af + af -> nef : /3gpp-traffic-Influence/v1/{afId}/subscriptions : POST \n {3GPP TS 29.522v15.3 \n Sec. 5.4} + activate nef + + nef -> ngccp : {Open: 3rd party NGC integration with OpenNESS(NEF)} + ngccp --> nef : + nef --> af : OK: {subscriptionId} \n ERROR: {400/500} + deactivate nef + af --> cnca : OK: {subscriptionId} \n ERROR: {400/500} + deactivate af + cnca --> user : Success: {subscriptionId} + deactivate cnca +end group + +@enduml + diff --git a/doc/core-network/ngc_flows/AF_traffic_influence_delete.uml b/doc/reference-architectures/core-network/ngc_flows/AF_traffic_influence_delete.uml similarity index 96% rename from doc/core-network/ngc_flows/AF_traffic_influence_delete.uml rename to doc/reference-architectures/core-network/ngc_flows/AF_traffic_influence_delete.uml index 6fb3a66d..22172586 100644 --- a/doc/core-network/ngc_flows/AF_traffic_influence_delete.uml +++ b/doc/reference-architectures/core-network/ngc_flows/AF_traffic_influence_delete.uml @@ -1,51 +1,51 @@ -@startuml -/' SPDX-License-Identifier: Apache-2.0 - Copyright (c) 2020 Intel Corporation -'/ -skinparam monochrome false -skinparam roundcorner 20 -skinparam defaultFontName "Intel Clear" -skinparam defaultFontSize 20 -skinparam maxmessagesize 300 -skinparam sequenceArrowThickness 3 - -header Intel Corporation -footer Proprietary and Confidential -title Traffic influencing flows between OpenNESS controller and 5G Core - -actor "User/Admin" as user -box "OpenNESS Controller components" #LightBlue - participant "UI/CLI" as cnca - participant "AF Microservice" as af -end box -box "5G Core components" #LightGreen - participant "NEF" as nef - note over nef - OpenNESS provided - Core component with - limited functionality - end note - participant "NGC\nCP Functions" as ngccp -end box - - -group Delete a subscribed traffic influence by subscriptionId - user -> cnca : Delete request by subscriptionId - activate cnca - cnca -> af : /af/v1/subscriptions/{subscriptionId} : DELETE - activate af - af -> nef : /3gpp-traffic-Influence/v1/{afId}/subscriptions/{subscriptionId} : DELETE - activate nef - - nef -> ngccp : {Open: 3rd party NGC integration with OpenNESS(NEF)} - ngccp --> nef : - nef --> af : OK : Delete success \n 
ERROR: {400/500} - deactivate nef - af --> cnca : OK : Delete success \n ERROR: {400/500} - deactivate af - cnca --> user : Success/Error - deactivate cnca -end group - -@enduml - +@startuml +/' SPDX-License-Identifier: Apache-2.0 + Copyright (c) 2020 Intel Corporation +'/ +skinparam monochrome false +skinparam roundcorner 20 +skinparam defaultFontName "Intel Clear" +skinparam defaultFontSize 20 +skinparam maxmessagesize 300 +skinparam sequenceArrowThickness 3 + +header Intel Corporation +footer Proprietary and Confidential +title Traffic influencing flows between OpenNESS controller and 5G Core + +actor "User/Admin" as user +box "OpenNESS Controller components" #LightBlue + participant "UI/CLI" as cnca + participant "AF Microservice" as af +end box +box "5G Core components" #LightGreen + participant "NEF" as nef + note over nef + OpenNESS provided + Core component with + limited functionality + end note + participant "NGC\nCP Functions" as ngccp +end box + + +group Delete a subscribed traffic influence by subscriptionId + user -> cnca : Delete request by subscriptionId + activate cnca + cnca -> af : /af/v1/subscriptions/{subscriptionId} : DELETE + activate af + af -> nef : /3gpp-traffic-Influence/v1/{afId}/subscriptions/{subscriptionId} : DELETE + activate nef + + nef -> ngccp : {Open: 3rd party NGC integration with OpenNESS(NEF)} + ngccp --> nef : + nef --> af : OK : Delete success \n ERROR: {400/500} + deactivate nef + af --> cnca : OK : Delete success \n ERROR: {400/500} + deactivate af + cnca --> user : Success/Error + deactivate cnca +end group + +@enduml + diff --git a/doc/core-network/ngc_flows/AF_traffic_influence_get.uml b/doc/reference-architectures/core-network/ngc_flows/AF_traffic_influence_get.uml similarity index 96% rename from doc/core-network/ngc_flows/AF_traffic_influence_get.uml rename to doc/reference-architectures/core-network/ngc_flows/AF_traffic_influence_get.uml index 135db770..b767fcfc 100644 --- a/doc/core-network/ngc_flows/AF_traffic_influence_get.uml +++ b/doc/reference-architectures/core-network/ngc_flows/AF_traffic_influence_get.uml @@ -1,68 +1,68 @@ -@startuml -/' SPDX-License-Identifier: Apache-2.0 - Copyright (c) 2020 Intel Corporation -'/ -skinparam monochrome false -skinparam roundcorner 20 -skinparam defaultFontName "Intel Clear" -skinparam defaultFontSize 20 -skinparam maxmessagesize 300 -skinparam sequenceArrowThickness 3 - -header Intel Corporation -footer Proprietary and Confidential -title Traffic influencing flows between OpenNESS controller and 5G Core - -actor "User/Admin" as user -box "OpenNESS Controller components" #LightBlue - participant "UI/CLI" as cnca - participant "AF Microservice" as af -end box -box "5G Core components" #LightGreen - participant "NEF" as nef - note over nef - OpenNESS provided - Core component with - limited functionality - end note - participant "NGC\nCP Functions" as ngccp -end box - -group Get all subscribed traffic influence info - user -> cnca : Request all traffic influence subscribed - activate cnca - cnca -> af : /af/v1/subscriptions : GET - activate af - af -> nef : /3gpp-traffic-Influence/v1/{afId}/subscriptions : GET - activate nef - - nef -> ngccp : {Open: 3rd party NGC integration with OpenNESS(NEF)} - ngccp --> nef : - nef --> af : OK: traffic influence info \n ERROR: {400/500} - deactivate nef - af --> cnca : OK: traffic influence info \n ERROR: {400/500} - deactivate af - cnca --> user : Traffic influence details - deactivate cnca -end group - -group Get subscribed traffic influence info by 
subscriptionId - user -> cnca : Request traffic influence using subscriptionId - activate cnca - cnca -> af : /af/v1/subscriptions/{subscriptionId} : GET - activate af - af -> nef : /3gpp-traffic-Influence/v1/{afId}/subscriptions/{subscriptionId} : GET - activate nef - - nef -> ngccp : {Open: 3rd party NGC integration with OpenNESS(NEF)} - ngccp --> nef : - nef --> af : OK: traffic influence info \n ERROR: {400/500} - deactivate nef - af --> cnca : OK: traffic influence info \n ERROR: {400/500} - deactivate af - cnca --> user : Traffic influence details - deactivate cnca -end group - -@enduml - +@startuml +/' SPDX-License-Identifier: Apache-2.0 + Copyright (c) 2020 Intel Corporation +'/ +skinparam monochrome false +skinparam roundcorner 20 +skinparam defaultFontName "Intel Clear" +skinparam defaultFontSize 20 +skinparam maxmessagesize 300 +skinparam sequenceArrowThickness 3 + +header Intel Corporation +footer Proprietary and Confidential +title Traffic influencing flows between OpenNESS controller and 5G Core + +actor "User/Admin" as user +box "OpenNESS Controller components" #LightBlue + participant "UI/CLI" as cnca + participant "AF Microservice" as af +end box +box "5G Core components" #LightGreen + participant "NEF" as nef + note over nef + OpenNESS provided + Core component with + limited functionality + end note + participant "NGC\nCP Functions" as ngccp +end box + +group Get all subscribed traffic influence info + user -> cnca : Request all traffic influence subscribed + activate cnca + cnca -> af : /af/v1/subscriptions : GET + activate af + af -> nef : /3gpp-traffic-Influence/v1/{afId}/subscriptions : GET + activate nef + + nef -> ngccp : {Open: 3rd party NGC integration with OpenNESS(NEF)} + ngccp --> nef : + nef --> af : OK: traffic influence info \n ERROR: {400/500} + deactivate nef + af --> cnca : OK: traffic influence info \n ERROR: {400/500} + deactivate af + cnca --> user : Traffic influence details + deactivate cnca +end group + +group Get subscribed traffic influence info by subscriptionId + user -> cnca : Request traffic influence using subscriptionId + activate cnca + cnca -> af : /af/v1/subscriptions/{subscriptionId} : GET + activate af + af -> nef : /3gpp-traffic-Influence/v1/{afId}/subscriptions/{subscriptionId} : GET + activate nef + + nef -> ngccp : {Open: 3rd party NGC integration with OpenNESS(NEF)} + ngccp --> nef : + nef --> af : OK: traffic influence info \n ERROR: {400/500} + deactivate nef + af --> cnca : OK: traffic influence info \n ERROR: {400/500} + deactivate af + cnca --> user : Traffic influence details + deactivate cnca +end group + +@enduml + diff --git a/doc/core-network/ngc_flows/AF_traffic_influence_update.uml b/doc/reference-architectures/core-network/ngc_flows/AF_traffic_influence_update.uml similarity index 96% rename from doc/core-network/ngc_flows/AF_traffic_influence_update.uml rename to doc/reference-architectures/core-network/ngc_flows/AF_traffic_influence_update.uml index eaf712e0..4fb6732b 100644 --- a/doc/core-network/ngc_flows/AF_traffic_influence_update.uml +++ b/doc/reference-architectures/core-network/ngc_flows/AF_traffic_influence_update.uml @@ -1,50 +1,50 @@ -@startuml -/' SPDX-License-Identifier: Apache-2.0 - Copyright (c) 2020 Intel Corporation -'/ -skinparam monochrome false -skinparam roundcorner 20 -skinparam defaultFontName "Intel Clear" -skinparam defaultFontSize 20 -skinparam maxmessagesize 300 -skinparam sequenceArrowThickness 3 - -header Intel Corporation -footer Proprietary and Confidential -title Traffic influencing 
flows between OpenNESS controller and 5G Core - -actor "User/Admin" as user -box "OpenNESS Controller components" #LightBlue - participant "UI/CLI" as cnca - participant "AF Microservice" as af -end box -box "5G Core components" #LightGreen - participant "NEF" as nef - note over nef - OpenNESS provided - Core component with - limited functionality - end note - participant "NGC\nCP Functions" as ngccp -end box - -group Update a subscribed traffic influence by subscriptionId - user -> cnca : Update request by subscriptionId - activate cnca - cnca -> af : /af/v1/subscriptions/{subscriptionId} : PUT - activate af - af -> nef : /3gpp-traffic-Influence/v1/{afId}/subscriptions/{subscriptionId} : PUT - activate nef - - nef -> ngccp : {Open: 3rd party NGC integration with OpenNESS(NEF)} - ngccp --> nef : - nef --> af : OK : Update success, traffic influence info \n ERROR: {400/500} - deactivate nef - af --> cnca : OK : Update success, traffic influence info \n ERROR: {400/500} - deactivate af - cnca --> user : Success/Error - deactivate cnca -end group - -@enduml - +@startuml +/' SPDX-License-Identifier: Apache-2.0 + Copyright (c) 2020 Intel Corporation +'/ +skinparam monochrome false +skinparam roundcorner 20 +skinparam defaultFontName "Intel Clear" +skinparam defaultFontSize 20 +skinparam maxmessagesize 300 +skinparam sequenceArrowThickness 3 + +header Intel Corporation +footer Proprietary and Confidential +title Traffic influencing flows between OpenNESS controller and 5G Core + +actor "User/Admin" as user +box "OpenNESS Controller components" #LightBlue + participant "UI/CLI" as cnca + participant "AF Microservice" as af +end box +box "5G Core components" #LightGreen + participant "NEF" as nef + note over nef + OpenNESS provided + Core component with + limited functionality + end note + participant "NGC\nCP Functions" as ngccp +end box + +group Update a subscribed traffic influence by subscriptionId + user -> cnca : Update request by subscriptionId + activate cnca + cnca -> af : /af/v1/subscriptions/{subscriptionId} : PUT + activate af + af -> nef : /3gpp-traffic-Influence/v1/{afId}/subscriptions/{subscriptionId} : PUT + activate nef + + nef -> ngccp : {Open: 3rd party NGC integration with OpenNESS(NEF)} + ngccp --> nef : + nef --> af : OK : Update success, traffic influence info \n ERROR: {400/500} + deactivate nef + af --> cnca : OK : Update success, traffic influence info \n ERROR: {400/500} + deactivate af + cnca --> user : Success/Error + deactivate cnca +end group + +@enduml + diff --git a/doc/core-network/ngc_flows/OAuth2Flow.uml b/doc/reference-architectures/core-network/ngc_flows/OAuth2Flow.uml similarity index 100% rename from doc/core-network/ngc_flows/OAuth2Flow.uml rename to doc/reference-architectures/core-network/ngc_flows/OAuth2Flow.uml diff --git a/doc/core-network/ngc_flows/PFD_Management_transaction_delete.uml b/doc/reference-architectures/core-network/ngc_flows/PFD_Management_transaction_delete.uml similarity index 100% rename from doc/core-network/ngc_flows/PFD_Management_transaction_delete.uml rename to doc/reference-architectures/core-network/ngc_flows/PFD_Management_transaction_delete.uml diff --git a/doc/core-network/ngc_flows/PFD_Management_transaction_get.uml b/doc/reference-architectures/core-network/ngc_flows/PFD_Management_transaction_get.uml similarity index 100% rename from doc/core-network/ngc_flows/PFD_Management_transaction_get.uml rename to doc/reference-architectures/core-network/ngc_flows/PFD_Management_transaction_get.uml diff --git 
a/doc/core-network/ngc_flows/PFD_Managment_transaction_add.uml b/doc/reference-architectures/core-network/ngc_flows/PFD_Managment_transaction_add.uml similarity index 100% rename from doc/core-network/ngc_flows/PFD_Managment_transaction_add.uml rename to doc/reference-architectures/core-network/ngc_flows/PFD_Managment_transaction_add.uml diff --git a/doc/core-network/ngc_flows/PFD_management_transaction_update.uml b/doc/reference-architectures/core-network/ngc_flows/PFD_management_transaction_update.uml similarity index 100% rename from doc/core-network/ngc_flows/PFD_management_transaction_update.uml rename to doc/reference-architectures/core-network/ngc_flows/PFD_management_transaction_update.uml diff --git a/doc/core-network/ngc_flows/e2e_config_flow_for_5g_edge.uml b/doc/reference-architectures/core-network/ngc_flows/e2e_config_flow_for_5g_edge.uml similarity index 100% rename from doc/core-network/ngc_flows/e2e_config_flow_for_5g_edge.uml rename to doc/reference-architectures/core-network/ngc_flows/e2e_config_flow_for_5g_edge.uml diff --git a/doc/core-network/ngc_flows/e2e_flow_pfd_pa.uml b/doc/reference-architectures/core-network/ngc_flows/e2e_flow_pfd_pa.uml similarity index 100% rename from doc/core-network/ngc_flows/e2e_flow_pfd_pa.uml rename to doc/reference-architectures/core-network/ngc_flows/e2e_flow_pfd_pa.uml diff --git a/doc/core-network/ngc_flows/e2e_flow_pfd_tif.uml b/doc/reference-architectures/core-network/ngc_flows/e2e_flow_pfd_tif.uml similarity index 100% rename from doc/core-network/ngc_flows/e2e_flow_pfd_tif.uml rename to doc/reference-architectures/core-network/ngc_flows/e2e_flow_pfd_tif.uml diff --git a/doc/core-network/ngc_flows/e2e_flow_tif.uml b/doc/reference-architectures/core-network/ngc_flows/e2e_flow_tif.uml similarity index 100% rename from doc/core-network/ngc_flows/e2e_flow_tif.uml rename to doc/reference-architectures/core-network/ngc_flows/e2e_flow_tif.uml diff --git a/doc/core-network/ngc_flows/ngcoam_af_service_add.uml b/doc/reference-architectures/core-network/ngc_flows/ngcoam_af_service_add.uml similarity index 96% rename from doc/core-network/ngc_flows/ngcoam_af_service_add.uml rename to doc/reference-architectures/core-network/ngc_flows/ngcoam_af_service_add.uml index 287f8f16..c657c4bc 100644 --- a/doc/core-network/ngc_flows/ngcoam_af_service_add.uml +++ b/doc/reference-architectures/core-network/ngc_flows/ngcoam_af_service_add.uml @@ -1,46 +1,46 @@ -@startuml -/' SPDX-License-Identifier: Apache-2.0 - Copyright (c) 2020 Intel Corporation -'/ - -skinparam monochrome false -skinparam roundcorner 20 -skinparam defaultFontName "Intel Clear" -skinparam defaultFontSize 20 -skinparam maxmessagesize 400 -skinparam sequenceArrowThickness 3 - -header "Intel Corporation" -footer "Proprietary and Confidential" -title "NGC OAM flows between OpenNESS Controller and NGC Core OAM Component" - -actor "Admin" as user -box "OpenNESS Controller" #LightBlue -participant "UI/CLI" as cnca -end box -box "NGC component" #LightGreen -participant "OAM" as oam -note over oam - OpenNESS provided component - with REST based HTTP interface - (for reference) -end note -participant "NGC \n CP Functions" as ngccp -end box - -== AF services operations with NGC Core through OAM Component == -group AF services registration with 5G Core - user -> cnca : Register AF services (UI): \n {dnn, dnai, snssai, tac, dns-ip, upf-ip} - activate cnca - cnca -> oam : /ngcoam/v1/af/services : POST \n {dnn, dnai, snssai, tac, dns-ip, upf-ip} - activate oam - - oam -> ngccp : {Open: 3rd Party NGC 
integration with OpenNESS(oam)} - ngccp --> oam : - oam --> cnca : OK : {afServiceId} \n ERROR: {400/500} - deactivate oam - cnca --> user : Success/Failure : {afServiceId} - deactivate cnca -end - +@startuml +/' SPDX-License-Identifier: Apache-2.0 + Copyright (c) 2020 Intel Corporation +'/ + +skinparam monochrome false +skinparam roundcorner 20 +skinparam defaultFontName "Intel Clear" +skinparam defaultFontSize 20 +skinparam maxmessagesize 400 +skinparam sequenceArrowThickness 3 + +header "Intel Corporation" +footer "Proprietary and Confidential" +title "NGC OAM flows between OpenNESS Controller and NGC Core OAM Component" + +actor "Admin" as user +box "OpenNESS Controller" #LightBlue +participant "UI/CLI" as cnca +end box +box "NGC component" #LightGreen +participant "OAM" as oam +note over oam + OpenNESS provided component + with REST based HTTP interface + (for reference) +end note +participant "NGC \n CP Functions" as ngccp +end box + +== AF services operations with NGC Core through OAM Component == +group AF services registration with 5G Core + user -> cnca : Register AF services (UI): \n {dnn, dnai, snssai, tac, dns-ip, upf-ip} + activate cnca + cnca -> oam : /ngcoam/v1/af/services : POST \n {dnn, dnai, snssai, tac, dns-ip, upf-ip} + activate oam + + oam -> ngccp : {Open: 3rd Party NGC integration with OpenNESS(oam)} + ngccp --> oam : + oam --> cnca : OK : {afServiceId} \n ERROR: {400/500} + deactivate oam + cnca --> user : Success/Failure : {afServiceId} + deactivate cnca +end + @enduml \ No newline at end of file diff --git a/doc/core-network/ngc_flows/ngcoam_af_service_delete.uml b/doc/reference-architectures/core-network/ngc_flows/ngcoam_af_service_delete.uml similarity index 96% rename from doc/core-network/ngc_flows/ngcoam_af_service_delete.uml rename to doc/reference-architectures/core-network/ngc_flows/ngcoam_af_service_delete.uml index c9e1349c..ebe01659 100644 --- a/doc/core-network/ngc_flows/ngcoam_af_service_delete.uml +++ b/doc/reference-architectures/core-network/ngc_flows/ngcoam_af_service_delete.uml @@ -1,47 +1,47 @@ -@startuml -/' SPDX-License-Identifier: Apache-2.0 - Copyright (c) 2020 Intel Corporation -'/ - -skinparam monochrome false -skinparam roundcorner 20 -skinparam defaultFontName "Intel Clear" -skinparam defaultFontSize 20 -skinparam maxmessagesize 400 -skinparam sequenceArrowThickness 3 - -header "Intel Corporation" -footer "Proprietary and Confidential" -title "NGC OAM flows between OpenNESS Controller and NGC Core OAM Component" - -actor "Admin" as user -box "OpenNESS Controller" #LightBlue -participant "UI/CLI" as cnca -end box -box "NGC component" #LightGreen -participant "OAM" as oam -note over oam - OpenNESS provided component - with REST based HTTP interface - (for reference) -end note -participant "NGC \n CP Functions" as ngccp -end box - -== AF services operations with NGC Core through OAM Component == - -group AF services deregistration with 5G Core - user -> cnca : Deregister AF services from 5G Core (UI): \n {afServiceId} - activate cnca - cnca -> oam : /ngcoam/v1/af/services/{afServiceId}: DELETE - activate oam - - oam -> ngccp : {Open: 3rd Party NGC integration with OpenNESS(oam)} - ngccp --> oam : - oam --> cnca : OK \n ERROR: {400/500} - deactivate oam - cnca --> user : Success/Failure - deactivate cnca -end - +@startuml +/' SPDX-License-Identifier: Apache-2.0 + Copyright (c) 2020 Intel Corporation +'/ + +skinparam monochrome false +skinparam roundcorner 20 +skinparam defaultFontName "Intel Clear" +skinparam defaultFontSize 20 +skinparam 
maxmessagesize 400 +skinparam sequenceArrowThickness 3 + +header "Intel Corporation" +footer "Proprietary and Confidential" +title "NGC OAM flows between OpenNESS Controller and NGC Core OAM Component" + +actor "Admin" as user +box "OpenNESS Controller" #LightBlue +participant "UI/CLI" as cnca +end box +box "NGC component" #LightGreen +participant "OAM" as oam +note over oam + OpenNESS provided component + with REST based HTTP interface + (for reference) +end note +participant "NGC \n CP Functions" as ngccp +end box + +== AF services operations with NGC Core through OAM Component == + +group AF services deregistration with 5G Core + user -> cnca : Deregister AF services from 5G Core (UI): \n {afServiceId} + activate cnca + cnca -> oam : /ngcoam/v1/af/services/{afServiceId}: DELETE + activate oam + + oam -> ngccp : {Open: 3rd Party NGC integration with OpenNESS(oam)} + ngccp --> oam : + oam --> cnca : OK \n ERROR: {400/500} + deactivate oam + cnca --> user : Success/Failure + deactivate cnca +end + @enduml \ No newline at end of file diff --git a/doc/core-network/ngc_flows/ngcoam_af_service_get.uml b/doc/reference-architectures/core-network/ngc_flows/ngcoam_af_service_get.uml similarity index 96% rename from doc/core-network/ngc_flows/ngcoam_af_service_get.uml rename to doc/reference-architectures/core-network/ngc_flows/ngcoam_af_service_get.uml index e23a4112..753819ef 100644 --- a/doc/core-network/ngc_flows/ngcoam_af_service_get.uml +++ b/doc/reference-architectures/core-network/ngc_flows/ngcoam_af_service_get.uml @@ -1,46 +1,46 @@ -@startuml -/' SPDX-License-Identifier: Apache-2.0 - Copyright (c) 2020 Intel Corporation -'/ - -skinparam monochrome false -skinparam roundcorner 20 -skinparam defaultFontName "Intel Clear" -skinparam defaultFontSize 20 -skinparam maxmessagesize 400 -skinparam sequenceArrowThickness 3 - -header "Intel Corporation" -footer "Proprietary and Confidential" -title "NGC OAM flows between OpenNESS Controller and NGC Core OAM Component" - -actor "Admin" as user -box "OpenNESS Controller" #LightBlue -participant "UI/CLI" as cnca -end box -box "NGC component" #LightGreen -participant "OAM" as oam -note over oam - OpenNESS provided component - with REST based HTTP interface - (for reference) -end note -participant "NGC \n CP Functions" as ngccp -end box - - -group Get AF registered DNN services from NGC Core - user -> cnca : Get AF registered DNN services info : {afServiceId} - activate cnca - cnca -> oam : /ngcoam/v1/af/services/{afServiceId}: GET - activate oam - - oam -> ngccp : {Open: 3rd Party NGC integration with OpenNESS(oam)} - ngccp --> oam : - oam --> cnca : OK : {dnn, dnai, snssai, tac, dnsIp, upfIp} \n ERROR: {400/500} - deactivate oam - cnca --> user : DNN services info associated with afServiceId - deactivate cnca -end - +@startuml +/' SPDX-License-Identifier: Apache-2.0 + Copyright (c) 2020 Intel Corporation +'/ + +skinparam monochrome false +skinparam roundcorner 20 +skinparam defaultFontName "Intel Clear" +skinparam defaultFontSize 20 +skinparam maxmessagesize 400 +skinparam sequenceArrowThickness 3 + +header "Intel Corporation" +footer "Proprietary and Confidential" +title "NGC OAM flows between OpenNESS Controller and NGC Core OAM Component" + +actor "Admin" as user +box "OpenNESS Controller" #LightBlue +participant "UI/CLI" as cnca +end box +box "NGC component" #LightGreen +participant "OAM" as oam +note over oam + OpenNESS provided component + with REST based HTTP interface + (for reference) +end note +participant "NGC \n CP Functions" as ngccp +end 
box + + +group Get AF registered DNN services from NGC Core + user -> cnca : Get AF registered DNN services info : {afServiceId} + activate cnca + cnca -> oam : /ngcoam/v1/af/services/{afServiceId}: GET + activate oam + + oam -> ngccp : {Open: 3rd Party NGC integration with OpenNESS(oam)} + ngccp --> oam : + oam --> cnca : OK : {dnn, dnai, snssai, tac, dnsIp, upfIp} \n ERROR: {400/500} + deactivate oam + cnca --> user : DNN services info associated with afServiceId + deactivate cnca +end + @enduml \ No newline at end of file diff --git a/doc/core-network/ngc_flows/ngcoam_af_service_update.uml b/doc/reference-architectures/core-network/ngc_flows/ngcoam_af_service_update.uml similarity index 96% rename from doc/core-network/ngc_flows/ngcoam_af_service_update.uml rename to doc/reference-architectures/core-network/ngc_flows/ngcoam_af_service_update.uml index 09194625..36b94e87 100644 --- a/doc/core-network/ngc_flows/ngcoam_af_service_update.uml +++ b/doc/reference-architectures/core-network/ngc_flows/ngcoam_af_service_update.uml @@ -1,47 +1,47 @@ -@startuml -/' SPDX-License-Identifier: Apache-2.0 - Copyright (c) 2020 Intel Corporation -'/ - -skinparam monochrome false -skinparam roundcorner 20 -skinparam defaultFontName "Intel Clear" -skinparam defaultFontSize 20 -skinparam maxmessagesize 400 -skinparam sequenceArrowThickness 3 - -header "Intel Corporation" -footer "Proprietary and Confidential" -title "NGC OAM flows between OpenNESS Controller and NGC Core OAM Component" - -actor "Admin" as user -box "OpenNESS Controller" #LightBlue -participant "UI/CLI" as cnca -end box -box "NGC component" #LightGreen -participant "OAM" as oam -note over oam - OpenNESS provided component - with REST based HTTP interface - (for reference) -end note -participant "NGC \n CP Functions" as ngccp -end box - -== AF services operations with NGC Core through OAM Component == - -group Update DNS config values for DNN served by Edge DNN - user -> cnca : Update DNS configuration of DNN (UI): \n {afServiceId, dnn, dnai, snssai, tac, dns-ip, upf-ip} - activate cnca - cnca -> oam : /ngcoam/v1/af/services/{afServiceId} : PATCH \n {dnn, dnai, snssai, tac, dns-ip, upf-ip} - activate oam - - oam -> ngccp : {Open: 3rd Party NGC integration with OpenNESS(oam)} - ngccp --> oam : - oam --> cnca : OK \n ERROR: {400/500} - deactivate oam - cnca --> user : Success/Failure - deactivate cnca -end - +@startuml +/' SPDX-License-Identifier: Apache-2.0 + Copyright (c) 2020 Intel Corporation +'/ + +skinparam monochrome false +skinparam roundcorner 20 +skinparam defaultFontName "Intel Clear" +skinparam defaultFontSize 20 +skinparam maxmessagesize 400 +skinparam sequenceArrowThickness 3 + +header "Intel Corporation" +footer "Proprietary and Confidential" +title "NGC OAM flows between OpenNESS Controller and NGC Core OAM Component" + +actor "Admin" as user +box "OpenNESS Controller" #LightBlue +participant "UI/CLI" as cnca +end box +box "NGC component" #LightGreen +participant "OAM" as oam +note over oam + OpenNESS provided component + with REST based HTTP interface + (for reference) +end note +participant "NGC \n CP Functions" as ngccp +end box + +== AF services operations with NGC Core through OAM Component == + +group Update DNS config values for DNN served by Edge DNN + user -> cnca : Update DNS configuration of DNN (UI): \n {afServiceId, dnn, dnai, snssai, tac, dns-ip, upf-ip} + activate cnca + cnca -> oam : /ngcoam/v1/af/services/{afServiceId} : PATCH \n {dnn, dnai, snssai, tac, dns-ip, upf-ip} + activate oam + + oam -> ngccp : {Open: 
3rd Party NGC integration with OpenNESS(oam)} + ngccp --> oam : + oam --> cnca : OK \n ERROR: {400/500} + deactivate oam + cnca --> user : Success/Failure + deactivate cnca +end + @enduml \ No newline at end of file diff --git a/doc/core-network/openness-core.png b/doc/reference-architectures/core-network/openness-core.png similarity index 100% rename from doc/core-network/openness-core.png rename to doc/reference-architectures/core-network/openness-core.png diff --git a/doc/core-network/openness_5g_nsa.md b/doc/reference-architectures/core-network/openness_5g_nsa.md similarity index 100% rename from doc/core-network/openness_5g_nsa.md rename to doc/reference-architectures/core-network/openness_5g_nsa.md diff --git a/doc/core-network/openness_epc.md b/doc/reference-architectures/core-network/openness_epc.md similarity index 100% rename from doc/core-network/openness_epc.md rename to doc/reference-architectures/core-network/openness_epc.md diff --git a/doc/core-network/openness_ngc.md b/doc/reference-architectures/core-network/openness_ngc.md similarity index 100% rename from doc/core-network/openness_ngc.md rename to doc/reference-architectures/core-network/openness_ngc.md diff --git a/doc/core-network/openness_upf.md b/doc/reference-architectures/core-network/openness_upf.md similarity index 99% rename from doc/core-network/openness_upf.md rename to doc/reference-architectures/core-network/openness_upf.md index 70285b8a..b8ea9c61 100644 --- a/doc/core-network/openness_upf.md +++ b/doc/reference-architectures/core-network/openness_upf.md @@ -139,9 +139,9 @@ Below is a list of minimal configuration parameters for VPP-based applications s 3. Enable the vfio-pci/igb-uio driver on the node. The below example shows the enabling of the `igb_uio` driver: ```bash - ne-node# /opt/dpdk-18.11.6/usertools/dpdk-devbind.py -b igb_uio 0000:af:0a.0 + ne-node# /opt/openness/dpdk-18.11.6/usertools/dpdk-devbind.py -b igb_uio 0000:af:0a.0 - ne-node# /opt/dpdk-18.11.6/usertools/dpdk-devbind.py --status + ne-node# /opt/openness/dpdk-18.11.6/usertools/dpdk-devbind.py --status Network devices using DPDK-compatible driver ============================================ 0000:af:0a.0 'Ethernet Virtual Function 700 Series 154c' drv=igb_uio unused=i40evf,vfio-pci diff --git a/doc/reference-architectures/index.html b/doc/reference-architectures/index.html new file mode 100644 index 00000000..4dad3f78 --- /dev/null +++ b/doc/reference-architectures/index.html @@ -0,0 +1,14 @@ + + +--- +title: OpenNESS Documentation +description: Home +layout: openness +--- +

You are being redirected to the OpenNESS Docs.

+ diff --git a/doc/reference-architectures/openness_sdwan.md b/doc/reference-architectures/openness_sdwan.md new file mode 100644 index 00000000..1e1577cb --- /dev/null +++ b/doc/reference-architectures/openness_sdwan.md @@ -0,0 +1,413 @@ +```text +SPDX-License-Identifier: Apache-2.0 +Copyright (c) 2020 Intel Corporation +``` + +# Converged Edge Reference Architecture for SD-WAN +- [Introduction](#introduction) +- [Universal Customer Premises Equipment (u-CPE)](#universal-customer-premises-equipment-u-cpe) +- [Software-Defined Wide Area Network (SD-WAN)](#software-defined-wide-area-network-sd-wan) +- [SD-WAN Implementation](#sd-wan-implementation) + - [SD-WAN CNF](#sd-wan-cnf) + - [SD-WAN CRD Controller](#sd-wan-crd-controller) + - [Custom Resources (CRs)](#custom-resources-crs) +- [CNF Configuration via OpenWRT Packages](#cnf-configuration-via-openwrt-packages) + - [Multi WAN (Mwan3)](#multi-wan-mwan3) + - [Firewall (fw3)](#firewall-fw3) + - [IPSec](#ipsec) +- [SD-WAN CNF Packet Flow](#sd-wan-cnf-packet-flow) +- [OpenNESS Integration](#openness-integration) + - [Goals](#goals) + - [Networking Implementation](#networking-implementation) + - [Converged Edge Reference Architectures (CERA)](#converged-edge-reference-architectures-cera) + - [SD-WAN Edge Reference Architecture](#sd-wan-edge-reference-architecture) + - [SD-WAN Hub Reference Architecture](#sd-wan-hub-reference-architecture) +- [Deployment](#deployment) + - [E2E Scenarios](#e2e-scenarios) + - [Hardware Specification](#hardware-specification) + - [Scenario 1](#scenario-1) + - [Scenario 2](#scenario-2) + - [Scenario 3](#scenario-3) +- [Resource Consumption](#resource-consumption) + - [Methodology](#methodology) + - [Results](#results) +- [References](#references) +- [Acronyms](#acronyms) + +## Introduction +With the growth of global organizations, there is an increased need to connect branch offices distributed across the world. As enterprise applications move from corporate data centers to the cloud or the on-premise edge, their branches require secure and reliable, low latency, and affordable connectivity. One way to achieve this is to deploy a wide area network (WAN) over the public Internet, and create secure links to the branches where applications are running. +The primary role of a traditional WAN is to connect clients to applications hosted anywhere on the Internet. The applications are accessed via public TCP/IP addresses, supported by routing tables on enterprise routers. Branches were also connected to their headquarters data centers via a combination of configurable routers and leased connections. This made WAN connectivity complex and expensive to manage. Additionally, with the move of applications to the cloud and edge, where applications are hosted in private networks without public addresses, accessing these applications requires even more complex rules and policies. + + +Software-defined WAN (SD-WAN) introduces a new way to operate a WAN. First of all, because it is defined by software, its management can be decoupled from the underlying networking hardware (e.g., routers) and managed in a centralized manner, making it more scalable. Secondly, SD-WAN network functions can now be hosted on Universal Customer Premises Equipment (uCPE), which also hosts software versions of traditional customer premises equipment.
Finally, an SD-WAN can be complemented by edge computing solutions, allowing, for example, latency-sensitive traffic to be steered to edge nodes for local processing, and to allow uCPE functions to be hosted in edge nodes. + + +This paper describes how the Open Network Edge Services Software (OpenNESS) integrates uCPE features and SD-WAN capabilities for edge optimization, and how it leverages SD-WAN functionality to allow edge-to-edge communication via a WAN. + +## Universal Customer Premises Equipment (u-CPE) +Universal Customer Premises Equipment (uCPE) is a general-purpose platform that can host network functions, implemented in software, that are traditionally run in hardware-based Customer Premises Equipment (CPE). These network services are implemented as virtual functions or cloud-native network functions. Because they are implemented in software, they are well-suited to be hosted on edge nodes, because the nodes are located close to their end users, but also can be orchestrated by the Controller of an edge computing system. + +## Software-Defined Wide Area Network (SD-WAN) +An SD-WAN is a set of network functions that enable application-aware, intelligent, and secure routing of traffic across the WAN. An SD-WAN typically uses the public internet to interconnect its branch offices, securing the traffic via encrypted tunnels, basically treating the tunnels as "dumb pipes". Traffic at the endpoints can be highly optimized, because the network functions at a branch are virtualized and centrally managed. The SD-WAN manager can also make use of information about the applications running at a branch to optimize traffic. + + +OpenNESS provides an edge computing-based reference architecture for SD-WAN, consisting of building blocks for SD-WAN network functions and reference implementations of branch office functions and services, all running on an OpenNESS edge node and managed by an OpenNESS Controller. + +The figure below shows an example of an OpenNESS-based SD-WAN. In this figure, there are two edge nodes, "Manufacturing Plant" and "Branch Office". In each node are multiple OpenNESS-based clusters, each running the OpenNESS edge platform, but supporting different collections of network functions, such as Private 5G (e.g., the AF, NEF, gNB, UPF functions), SD-WAN network functions, or user applications. + +In this figure, the SD-WAN implementation is depicted in "SD-WAN NFs" boxes appearing in a number of OpenNESS clusters, and an "SD-WAN Controller" appearing in the Orchestration and Management function. Other functions seen in the figure are OpenNESS building blocks that the SD-WAN implementation uses to carry out its function. + + +The next section describes the SD-WAN implementation. + +![OpenNESS reference solution for SD-WAN ](sdwan-images/openness-sdwan-ref.png) + +## SD-WAN Implementation +The CERA SD-WAN is based on OpenWrt, an embedded version of Linux designed for use in routers and other communication devices. OpenWrt is highly customizable, allowing it to be deployed with a small footprint, and has a fully-writable filesystem. More details about OpenWRT can be found [here](https://openwrt.org/). + +The OpenWrt project provides a number of kernel images. The “x86-generic rootfs” image is used in the SD-WAN implementation. + +The OpenWrt project contains a number of packages useful for implementing SD-WAN functional elements, which are written as OpenWrt applications.
These include: + + - mwan3 (for Multiple WAN link support) [mwan](https://openwrt.org/docs/guide-user/network/wan/multiwan/mwan3/) + + - firewall3 (for firewall, SNAT, DNAT) [fw3](https://openwrt.org/docs/guide-user/firewall/overview) + + - strongswan (for IPsec) [strongswan](https://openwrt.org/docs/guide-user/services/vpn/strongswan/start) + + +These packages support the following functionality: + + - IPsec tunnels across K8s clusters; + + - Support of multiple types of K8s clusters: + + - K8s clusters having static public IP address, + + - K8s clusters having dynamic public IP address with static FQDN, and + + - K8s clusters with no public IP; + + - Stateful inspection firewall (for inbound and outbound connections); + + - Source NAT and Destination NAT for K8s clusters whose POD and ClusterIP subnets are overlapping; + + - Multiple WAN links. + + +The SD-WAN implementation uses the following three primary components: + + - SD-WAN Cloud-Native Network Function (CNF) based on OpenWrt packages; + + - Custom Resource Definition (CRD) Controller; + + - Custom Resource Definitions (CRD). + +The CNF contains the OpenWrt services that perform SD-WAN operations. The CRD Controller and CRDs allow Custom Resources (i.e., extensions to Kubernetes APIs) to be created. Together, these components allow information to be sent and received, and commands to be performed, from the Kubernetes Controller to the SD-WAN. + +This behavior is described in the following subsections. + +### SD-WAN CNF +The SD-WAN CNF is deployed as a pod with external network connections. The CNF runs the mwan3, firewall3 (fw3), and strongswan applications, as described in the previous section. The configuration parameters for the CNF include: + + - LAN interface configuration – to create and connect virtual, local networks within the edge cluster (local branch) to the CNF. + + - WAN interface configuration – to initialize interfaces that connect the CNF and connected LANs to the external Internet (WAN) and to initialize the traffic rules (e.g., policy, rules) for the interfaces. The external WAN is also referred to in this document as a provider network. + +SD-WAN traffic rules and WAN interfaces are configured at runtime via a RESTful API. The CNF implements the Luci CGI plugin to provide this API. The API calls are initiated and passed to the CNF by a CRD Controller, described in the next section. The API provides the capability to list available SD-WAN services (e.g., mwan3, firewall, and ipsec), get service status, and execute service operations for adding, viewing, and deleting settings for these services. + +### SD-WAN CRD Controller +The CRD Controller (also referred to in the implementation as a Config Agent) interacts with the SD-WAN CNF via RESTful API calls. It monitors CRs applied through K8s APIs and translates them into API calls that carry the CNF configuration to the CNF instance. + +The CRD Controller includes several functions: + + - Mwan3conf Controller, to monitor the Mwan3Conf CR; + + - FirewallConf Controller, to monitor the FirewallConf CR; + + - IPSec Controller, to monitor the IpSec CRs. + + +### Custom Resources (CRs) + +As explained above, the behavior of the SD-WAN is governed by rules established in the CNF services. +In order to set these rules externally, CRs are defined to allow rules to be transmitted from the Kubernetes API. The CRs are created from the CRDs that are part of the SD-WAN implementation.
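+For illustration only, a CR for an mwan3 rule might look like the sketch below. The API group/version, kind, and field names are assumptions made for this example rather than the exact schema shipped with the SD-WAN CRDs; consult the CRDs installed in the deployment for the authoritative definitions.
+
+```yaml
+# Hypothetical mwan3 rule CR (group/version, kind, and fields are illustrative assumptions)
+apiVersion: batch.sdewan.akraino.org/v1alpha1   # assumed API group/version
+kind: Mwan3Rule                                 # assumed kind corresponding to an mwan3_rule
+metadata:
+  name: http-via-wan1
+  namespace: default
+  labels:
+    sdewanPurpose: sdwan-cnf-1                  # label correlating this CR with a CNF deployment
+spec:
+  policy: wan1-primary                          # mwan3 policy this rule applies
+  src_ip: 10.10.10.0/24                         # match traffic originating from the local LAN
+  dest_port: "80"
+  proto: tcp
+```
+
+Applying such a CR (e.g., with `kubectl apply -f`) would cause the CRD Controller to translate it into a REST call to the CNF, which then updates the corresponding mwan3 settings.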
+ +The types of rules supported by the CRs are: + + - The mwan3 class, with two subclasses: mwan3_policy and mwan3_rule. + + - The firewall class, with five kinds of rules: firewall_zone, firewall_snat, firewall_dnat, firewall_forwarding, and firewall_rule. + + - The IPsec class. + + The rules are defined by the OpenWrt services, and can be found in the OpenWrt documentation, e.g., [here](https://openwrt.org/docs/guide-user/network/wan/multiwan/mwan3). + + Each kind of SD-WAN rule corresponds to a CRD, which is used to instantiate the CRs. + +In a Kubernetes namespace, with more than one CNF deployment and many SD-WAN rule CRDs, labels are used to correlate a CNF with SD-WAN rule CRDs. + +## CNF Configuration via OpenWRT Packages + +As explained earlier, the SD-WAN CNF contains a collection of services, implemented by OpenWRT packages. In this section, the services are described in greater detail. + +### Multi WAN (Mwan3) +The OpenWRT mwan3 service provides capabilities for multiple WAN management: WAN interface management, outbound traffic rules, traffic load balancing, etc. The service allows an edge to connect to WANs of different providers and to specify different rules for the links. + +According to the OpenWRT [website](https://openwrt.org), mwan3 provides the following functionality and capabilities: + + - Provides outbound WAN traffic load balancing or fail-over with multiple WAN interfaces based on a numeric weight assignment. + + - Monitors each WAN connection using repeated ping tests and can automatically route outbound traffic to another WAN interface if a current WAN interface loses connectivity. + + - Creates outbound traffic rules to customize which outbound connections should use which WAN interface (i.e., policy-based routing). This can be customized based on source IP, destination IP, source port(s), destination port(s), type of IP protocol, and other parameters. + + - Supports physical and/or logical WAN interfaces. + + - Uses the firewall mask (default 0x3F00) to mark outgoing traffic, which can be configured in the /etc/config/mwan3 globals section, and can be mixed with other packages that use the firewall masking feature. This value is also used to set the number of supported interfaces. + +Mwan3 is useful for routers with multiple internet connections, where users have control over the traffic that flows to a specific WAN interface. It can handle multiple levels of primary and backup interfaces, where different sources can have different primary or backup WANs. Mwan3 uses a Netfilter mark mask, in order to be compatible with other packages (e.g., OpenVPN, PPTP VPN, QoS-script, Tunnels), so that traffic can also be routed based on the default routing table. + +Mwan3 is triggered by a hotplug event when an interface comes up, causing it to create a new custom routing table and iptables rules for the interface. It then sets up iptables rules and uses iptables MARK to mark certain traffic. Based on these rules, the kernel determines which routing table to use. Once all the routes and rules are initially set up, mwan3 exits. Thereafter, the kernel takes care of all the routing decisions. A monitoring script, mwan3track, runs in the background, running ping to verify that each WAN interface is up. If an interface goes down, mwan3track issues a hotplug event to cause mwan3 to adjust routing tables in response to the interface failure, and to delete all the rules and routes to that interface.
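+On a running OpenWrt instance, the state described above can typically be inspected with standard tools, as in the sketch below (output format and chain names vary with the mwan3 version, so treat this only as a starting point):
+
+```shell
+# Interface, policy, and connected-network status as seen by mwan3
+mwan3 status
+
+# Policy-routing rules and per-interface routing tables installed by mwan3
+ip rule show
+ip route show table all
+
+# Mangle-table rules used to mark outbound traffic
+iptables -t mangle -S | grep -i mwan3
+```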
+ +Another component, mwan3rtmon, keeps the main routing table in sync with the interface routing tables by monitoring routing table changes. + +Mwan3 is configured when it is started, according to a configuration file with the following sections: + + - Global: common configuration spec, used to configure a routable loopback address (for OpenWRT 18.06). + + - Interface: defines how each WAN interface is tested for up/down status. + + - Member: represents an interface with a metric and a weight value. + + - Policy: defines how traffic is routed through the different WAN interface(s). + + - Rule: describes what traffic to match and what policy to assign for that traffic. + +An SD-WAN CNF will be created with Global and Interface sections initialized based on the interfaces allocated to it. Once the CNF starts, the SD-WAN MWAN3 CNF API can be used to get/create/update/delete an mwan3 rule and policy, on a per-member basis. + +### Firewall (fw3) +OpenWrt uses the firewall3 (fw3) netfilter/iptables rule builder application. It runs in user space to parse a configuration file into a set of iptables rules, sending each of the rules to the kernel netfilter modules. The fw3 application is used by OpenWRT to “safely” construct a rule set, while hiding many of the details. The fw3 configuration automatically provides the router with a base set of rules and an understandable configuration file for additional rules. + +Similar to the iptables application, fw3 is based on the libiptc library, which is used to communicate with the netfilter kernel modules. Both the fw3 and iptables applications follow the same steps to apply rules on Netfilter: + + - Establish a socket and read the netfilter table into the application. + + - Modify the chains, rules, etc. in the table (all parsing and error checking is done in user-space by libiptc). + + - Replace the netfilter table in the kernel. + +fw3 is typically managed by invoking the shell script /etc/init.d/firewall, which accepts the following set of arguments (start, stop, restart, reload, flush). Behind the scenes, /etc/init.d/firewall then calls fw3, passing the supplied argument to the binary. + +The OpenWRT firewall is configured when it is started, via a configuration file with the following sections: + + - Default: declares global firewall settings that do not belong to specific zones. + + - Include: used to enable customized firewall scripts. + + - Zone: groups one or more interfaces and serves as a source or destination for forwardings, rules, and redirects. + + - Forwarding: controls the traffic between zones. + + - Redirect: defines port forwarding (NAT) rules. + + - Rule: defines basic accept, drop, or reject rules to allow or restrict access to specific ports or hosts. + +The SD-WAN firewall API provides support to get/create/update/delete Firewall Zone, Redirect, Rule, and Forwardings. + +### IPSec +The SD-WAN leverages IPSec functionality to set up secure tunnels for Edge-to-WAN and Edge-WAN-Edge (i.e., to interconnect two edges) communication. The SD-WAN uses the OpenWrt StrongSwan implementation of IPSec. IPsec rules are integrated with the OpenWRT firewall, which enables custom firewall rules. StrongSwan uses the default firewall mechanism to update the firewall rules and injects all the additionally required settings, according to the IPsec configuration stored in /etc/config/ipsec. + +The SD-WAN configures the IPSec site-to-site tunnels to connect edge networks through a hub located in the external network.
The hub is a server that acts as a proxy between pairs of edges. The hub also runs an SD-WAN CRD Controller and a CNF, configured such that it knows how to access the SD-WAN CNFs deployed on both edges. In that case, to create the IPsec tunnel, the WAN interface on the edge is treated as one side of the tunnel, and the connected WAN interface on the hub is configured as the "responder". Both edges are configured as "initiator". + +## SD-WAN CNF Packet Flow + +Packets that arrive at the edge come through a WAN link that connects the edge to an external provider network. This WAN interface should already be configured with traffic rules. If there is an IPSec tunnel created on the WAN interface, the packet enters the IPSec tunnel and is forwarded according to IPSec and Firewall/NAT rules. The packet eventually leaves the CNF via a LAN link connecting to the OVN network on the edge. + +The following figure shows the typical packet flow through the SD-WAN CNF for Rx (WAN to LAN) when a packet sent from the external network enters the edge cluster: + +![SD-WAN Rx packet flow ](sdwan-images/packet-flow-rx.png) + +Packets that attempt to leave the edge come into the CNF through a LAN link attached to the OVN network on the edge cluster. This packet is then marked by the mwan3 application. This mark is used by the firewall to apply rules on the packet, and steer it to the proper WAN link used by the IPSec tunnel connecting the CNF to the WAN. The packet enters the IPSec tunnel and leaves the edge through the WAN interface. + +The following figure shows the typical packet flow through the SD-WAN CNF for Tx (LAN to WAN), when a packet leaves from the edge cluster to the external network: + +![SD-WAN Tx packet flow ](sdwan-images/packet-flow-tx.png) + +## OpenNESS Integration +The previous sections of this document describe the operation of an SD-WAN implementation built from OpenWrt and its various packages. We now turn to the subject of how the SD-WAN is integrated with OpenNESS. + +### Goals +OpenNESS leverages the SD-WAN project to offer SD-WAN service within an on-premise edge, to enable secure and optimized inter-edge data transfer. This functionality is sought by global corporations with branch offices distributed across many geographical locations, as it creates an optimized WAN between edge locations implemented on top of a public network. + +At least one SD-WAN CNF is expected to run on each OpenNESS cluster (as shown in a previous figure), and act as a proxy for edge application traffic entering and exiting the cluster. The primary task for the CNF is to provide software-defined routes connecting the edge LANs with the (public network) WAN. + +Currently, the OpenNESS SD-WAN is intended only for single-node clusters, accommodating only one instance of a CNF and a CRD Controller. + + + +### Networking Implementation +An OpenNESS deployment featuring SD-WAN implements networking within the cluster with three CNIs: + + - Calico CNI, which acts as the primary CNI. + - ovn4nfv k8s plugin CNI, which acts as the secondary CNI. + - Multus CNI, which allows for attaching multiple network interfaces to pods, required by the CNF pod. Without Multus, Kubernetes pods could support only one network interface. + +The [Calico](https://docs.projectcalico.org/about/about-calico) CNI is used to configure the default network overlay for the OpenNESS cluster. It provides the communication between the pods of the cluster and acts as the management interface.
Calico is considered a lighter solution than Kube-OVN, which is currently the preferred CNI plugin for the primary network in OpenNESS clusters. + +The [ovn4nfv-k8s-plugin](https://github.com/opnfv/ovn4nfv-k8s-plugin) is a CNI plugin based on OVN and OpenVSwitch (OVS). It works with the Multus CNI to add multiple interfaces to the pod. If Multus is used, the net1 interface is by convention the OVN default interface that connects to Multus. The other interfaces are added by ovn4nfv-k8s-plugin according to the pod annotation. With ovn4nfv-k8s-plugin, virtual networks can be created at runtime. The CNI plugin also utilises physical interfaces to connect a pod to an external network (provider network). This is particularly important for the SD-WAN CNF. ovn4nfv also enables Service Function Chaining ([SFC](https://github.com/opnfv/ovn4nfv-k8s-plugin/blob/master/demo/sfc-setup/README.md)). + +In order for the SD-WAN CNF to act as a proxy between the virtual LANs in the cluster and the WAN, it needs to have two types of network interfaces configured: + + - A virtual LAN network on one of the CNF's virtual interfaces. This connects application pods belonging to the same OVN network in the cluster. The ovn4nfv plugin allows for simplified creation of a virtual OVN network based on the provided configuration. The network is then attached to one of the CNF's interfaces. + - A provider network, to connect the CNF pod to an external network (WAN). The provider network is attached to the physical network infrastructure via layer-2 (i.e., via bridging/switching). + +### Converged Edge Reference Architectures (CERA) +CERA is a business program that creates and maintains validated reference architectures of edge networks, including both hardware and software elements. The reference architectures are used by ISVs, system integrators, and others to accelerate the development of production edge computing systems. + +The OpenNESS project has created a CERA reference architecture for SD-WAN edge and SD-WAN hub. They are used, with OpenNESS, to create a uCPE platform for an SD-WAN CNF on the edge and hub, respectively. Even though there is only one implementation of the CNF, it can be used for two different purposes, as described below. + +#### SD-WAN Edge Reference Architecture +The SD-WAN Edge CERA reference implementation is used to deploy the SD-WAN CNF on a single-node edge cluster that will also accommodate enterprise edge applications. The major goal of SD-WAN Edge is to support the creation of a Kubernetes-based platform that boosts the performance of deployed edge applications and reduces resource usage by the Kubernetes system. To accomplish this, the underlying platform must be optimized and made ready to use IA accelerators. OpenNESS provides support for the deployment of OpenVINO™ applications and workload acceleration with the Intel® Movidius™ VPU HDDL-R add-in card. SD-WAN Edge also enables the Node Feature Discovery (NFD) building block on the cluster to provide awareness of the nodes’ features to edge applications. Finally, SD-WAN Edge implements Istio Service Mesh (SM) in the default namespace to connect the edge applications. SM acts as a middleware between edge applications/services and the OpenNESS platform, and provides abstractions for traffic management, observability, and security of the building blocks in the platform. Istio is a cloud-native service mesh that provides capabilities such as Traffic Management, Security, and Observability uniformly across a network of services.
OpenNESS integrates with Istio to reduce the complexity of large-scale edge applications, services, and network functions. More information on SM in OpenNESS can be found on the OpenNESS [website](https://openness.org/developers/). + + +To minimize resource consumption by the cluster, SD-WAN Edge disables services such as EAA, Edge DNS, and Kafka. The Telemetry service stays active for all the Kubernetes deployments. + +The following figure shows the system architecture of the SD-WAN Edge Reference Architecture. + +![OpenNESS SD-WAN Edge Architecture ](sdwan-images/sdwan-edge-arch.png) + + +#### SD-WAN Hub Reference Architecture +The SD-WAN Hub reference architecture prepares an OpenNESS platform for a single-node cluster that functions primarily as an SD-WAN hub. That cluster will also deploy an SD-WAN CRD Controller and a CNF, but no other corporate applications are expected to run on it. That is why the node does not enable support for an HDDL card or for Node Feature Discovery and Service Mesh. + +The Hub is another OpenNESS single-node cluster that acts as a proxy between different edge clusters. The Hub is essential to connect edges through a WAN when applications within the edge clusters have no public IP addresses, which requires additional routing rules to provide access. These rules can be configured globally on a device acting as a hub for the edge locations. + +The Hub node has two expected use cases: + +- If the edge application wants to access the internet, or an external application wants to access a service running in the edge node, the Hub node can act as a gateway with a security policy in force. + +- For communication between a pair of edge nodes located at different locations (and in different clusters), if both edge nodes have public IP addresses, then an IP Tunnel can be configured directly between the edge clusters; otherwise, the Hub node is required to act as a proxy to enable the communication. + +The following figure shows the system architecture of the SD-WAN Hub Reference Architecture. + +![OpenNESS SD-WAN Hub Architecture ](sdwan-images/sdwan-hub-arch.png) + +## Deployment +### E2E Scenarios +Three end-to-end scenarios have been validated to verify deployment of an SD-WAN on OpenNESS. The three scenarios are described in the following sections of this document. + +#### Hardware Specification + +The following table describes the hardware requirements of the scenarios. + +| Hardware | | UE | Edge & Hub | +| ---------|----------------------- | ---------------------------------- | ------------------------------------ | +| CPU | Model name: | Intel(R) Xeon(R) | Intel(R) Xeon(R) D-2145NT | +| | | CPU E5-2658 v3 @ 2.20GHz | CPU @ 1.90GHz | +| | CPU MHz: | 1478.527 | CPU MHz: 1900.000 | +| | L1d cache: | 32K | 32K | +| | L1i cache: | 32K | 32K | +| | L2 cache: | 256K | 1024K | +| | L3 cache: | 30720K | 1126K | +| | NUMA node0 CPU(s): | 0-11 | 0-15 | +| | NUMA node1 CPU(s): | 12-23 | | +| NIC | Ethernet controller: | Intel Corporation | Intel Corporation | +| | | 82599ES 10-Gigabit | Ethernet Connection | +| | | SFI/SFP+ Network Connection | X722 for 10GbE SFP+ | +| | | (rev 01) | Subsystem: Advantech Co. Ltd | +| | | Subsystem: Intel Corporation | Device 301d | +| | | Ethernet Server Adapter X520-2 | | +| HDDL | | | | + +#### Scenario 1 + +In this scenario, two UEs are connected to two separate edge nodes, which are connected to one common hub. The scenario demonstrates basic connectivity across the edge clusters via the SD-WAN.
The traffic flow is initiated on one UE and received on the other UE. + +For this scenario, OpenNESS is deployed on both edges and on the hub. On each edge and hub, an SD-WAN CRD Controller and a CNF are set up. Then CRs are used to configure the CNFs and to set up IPsec tunnels between each edge and the hub, and to configure rules on the WAN interfaces connecting edges with the hub. Each CNF is connected to two provider networks. The CNFs on Edge 1 and Edge 2 use provider network n2 to connect to UEs outside the Edge, and the provider network n3 to connect to the hub in another edge location. Currently, the UE connects to the CNF directly without the switch. In the following figure, UE1 is in the same network (NET1) as the Edge1 port. It is considered a private network. + +This scenario verifies that sample traffic can be sent from the UE connected to Edge2 to another UE connected to Edge1 over secure WAN links connecting the edges to a hub. To demonstrate this connectivity, traffic from the Iperf-client application running on the Edge2 UE is sent toward the Edge1 UE running the Iperf server application. + +The Edge1 node also deploys an OpenVINO app, and, in this way, this scenario also demonstrates Scenario 3 described below. + +![OpenNESS SD-WAN Scenario 1 ](sdwan-images/e2e-scenario1.png) + +A more detailed description of this E2E test is provided under the link in the OpenNESS documentation for this SD-WAN [scenario](https://github.com/open-ness/edgeapps/blob/master/network-functions/sdewan_cnf/e2e-scenarios/three-single-node-clusters/E2E-Overview.md). + +#### Scenario 2 +This scenario demonstrates a simple OpenNESS SD-WAN with a single-node cluster that deploys an SD-WAN CNF and an application pod running an Iperf client. The scenario is depicted in the following figure. + +The CNF pod and Iperf-client pod are attached to one virtual OVN network, using the n3 and n0 interfaces, respectively. The CNF has a provider network configured on interface n2, which is attached to a physical interface on the Edge node to work as a bridge to connect to the external network. This scenario demonstrates that, after configuration of the CNF, the traffic sent from the application pod uses the SD-WAN CNF as a proxy, and arrives at the User Equipment (UE) in the external network. The E2E traffic from the Iperf3 client application on the application pod (which is deployed on the Edge node) travels to the external UE via a 10G NIC port. The UE runs the Iperf3 server application. The OpenNESS cluster, consisting of the Edge Node server, is deployed on the SD-WAN Edge. The Iperf client traffic is expected to pass through the SD-WAN CNF and the attached provider network interface to reach the Iperf server that is listening on the UE. + +A more detailed description of the scenario can be found in this SD-WAN scenario [documentation](https://github.com/open-ness/edgeapps/blob/master/network-functions/sdewan_cnf/e2e-scenarios/one-single-node-cluster/README.md). + +![OpenNESS SD-WAN Scenario 2 ](sdwan-images/e2e-scenario2.png) + + +#### Scenario 3 +This scenario demonstrates a sample OpenVINO benchmark application deployed on an OpenNESS edge platform equipped with an HDDL accelerator card. It reflects the use case in which a high-performance OpenVINO application is executed on an OpenNESS single-node cluster, deployed with an SD-WAN Edge. The SD-WAN Edge enables an HDDL plugin to provide the OpenNESS platform with support for workload acceleration via the HDDL card.
More information on the OpenVINO sample application is provided under the following links: + + - [OpenVINO Sample Application White Paper](https://github.com/open-ness/specs/blob/master/doc/applications/openness_openvino.md) + + - [OpenVINO Sample Application Onboarding](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/network-edge-applications-onboarding.md#onboarding-openvino-application) + + +A more detailed description of this scenario is available in OpenNESS [documentation](https://github.com/open-ness/edgeapps/blob/master/network-functions/sdewan_cnf/e2e-scenarios/openvino-hddl-cluster/README.md). + +![OpenNESS SD-WAN Scenario 3 ](sdwan-images/e2e-scenario3.png) + +## Resource Consumption +### Methodology + +The resource consumption of CPU and memory was measured. + +To measure the CPU and memory resource consumption of the Kubernetes cluster, the `kubectl top pod -A` command was invoked both on the Edge node and the Edge Hub. + +The resource consumption was measured twice: + + - With no IPerf traffic; + + - With IPerf traffic from Edge2-UE to Edge1-UE. + +To measure total memory usage, the command `free -h` was used. + +### Results + +| Option | Resource | Edge | Hub | +| ---------------------- | ------------- | ------------------ | ------------------------------------ | +| Without traffic | CPU | 339m (0.339 CPU) | 327m (0.327 CPU) | +| | RAM | 2050Mi (2.05G) | 2162Mi (2.162G) | +| | Total mem used| 3.1G | 3.1G | +| With Iperf traffic | CPU | 382m (0.382 CPU) | 404m (0.404 CPU) | +| | RAM | 2071Mi (2.071G) | 2186Mi (2.186G) | +| | Total mem used| 3.1G | 3.1G | + +## References +- [ICN SDEWAN documentation](https://wiki.akraino.org/display/AK/ICN+-+SDEWAN) +- [ovn4nfv k8s plugin documentation](https://github.com/opnfv/ovn4nfv-k8s-plugin) +- [Service Function Chaining (SFC) Setup](https://github.com/opnfv/ovn4nfv-k8s-plugin/blob/master/demo/sfc-setup/README.md) +- [Utilizing a Service Mesh for Edge Services in OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/applications/openness_service_mesh.md) +- [Using Intel® Movidius™ Myriad™ X High Density Deep Learning (HDDL) solution in OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness_hddl.md) +- [Node Feature Discovery support in OpenNESS](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-node-feature-discovery.md) +- [OpenVINO™ Sample Application in OpenNESS](https://github.com/open-ness/ido-specs/blob/78d7797cbe0a21ade2fdc61625c2416d8430df23/doc/applications/openness_openvino.md) + +## Acronyms + +| | | +|-------------|---------------------------------------------------------------| +| API | Application Programming Interface | +| CERA | Converged Edge Reference Architectures | +| CR | Custom Resource | +| CRD | Custom Resource Definition | +| CNF | Cloud-native Network Function | +| DNAT | Destination Network Address Translation | +| HDDL | High Density Deep Learning | +| IP | Internet Protocol | +| NAT | Network Address Translation | +| NFD | Node Feature Discovery | +| SM | Service Mesh | +| SD-WAN | Software-Defined Wide Area Network | +| SNAT | Source Network Address Translation | +| TCP | Transmission Control Protocol | +| uCPE | Universal Customer Premises Equipment | + diff --git a/doc/ran/index.html b/doc/reference-architectures/ran/index.html similarity index 100% rename from doc/ran/index.html rename to doc/reference-architectures/ran/index.html diff --git
a/doc/reference-architectures/ran/openness-ran.png b/doc/reference-architectures/ran/openness-ran.png new file mode 100644 index 00000000..68706f8b Binary files /dev/null and b/doc/reference-architectures/ran/openness-ran.png differ diff --git a/doc/ran/openness_ran.md b/doc/reference-architectures/ran/openness_ran.md similarity index 88% rename from doc/ran/openness_ran.md rename to doc/reference-architectures/ran/openness_ran.md index e235a325..bd3bd9f5 100644 --- a/doc/ran/openness_ran.md +++ b/doc/reference-architectures/ran/openness_ran.md @@ -59,7 +59,13 @@ This section explains the steps involved in building the FlexRAN image. Only L1 >**NOTE**: The environmental variables path must be updated according to your installation and file/directory names. 4. Build L1, WLS interface between L1, L2, and L2-Stub (testmac): `./flexran_build.sh -r 5gnr_sub6 -m testmac -m wls -m l1app -b -c` -5. Once the build has completed, copy the required binary files to the folder where the Docker\* image is built. The list of binary files that are used is documented in [dockerfile](https://github.com/open-ness/edgeapps/blob/master/network-functions/ran/5G/flexRAN-gnb/Dockerfile) +5. Once the build has completed, copy the required binary files to the folder where the Docker\* image is built. This can be done by using the provided example [build-du-dev-image.sh](https://github.com/open-ness/edgeapps/blob/master/network-functions/ran/5G/du-dev/build-du-dev-image.sh) script from the Edge Apps OpenNESS repository; it copies the files from the paths provided as environment variables in the previous step into the directory containing the Dockerfile and then starts the Docker build. + ```shell + git clone https://github.com/open-ness/edgeapps.git + cd edgeapps/network-functions/ran/5G/du-dev + ./build-du-dev-image.sh + ``` + The list of binary files that are used is documented in [dockerfile](https://github.com/open-ness/edgeapps/blob/master/network-functions/ran/5G/flexRAN-gnb/Dockerfile) - ICC, IPP mpi and mkl Runtime - DPDK build target directory - FlexRAN test vectors (optional) @@ -67,21 +73,26 @@ This section explains the steps involved in building the FlexRAN image. Only L1 - FlexRAN SDK modules - FlexRAN WLS share library - FlexRAN CPA libraries -6. `cd` to the folder where the Docker image is built and start the docker build `docker build -t : .` -The following example reflects the Docker image [expected by Helm chart](https://github.com/open-ness/edgeapps/blob/master/network-functions/ran/charts/flexran/values.yaml): + + +6. The following example reflects the Docker image [expected by the Helm chart](https://github.com/open-ness/edgeapps/blob/master/network-functions/ran/charts/du-dev/values.yaml); the user needs to adjust the IP address and port of the Harbor registry where the Docker image will be pushed: ```shell - docker build -t flexran5g:3.10.0-1062.12.1.rt56 . + image: + repository: :/intel/flexran5g # Change Me! - please provide IP address and port + # of Harbor registry where FlexRAN docker image is uploaded + tag: 3.10.0-1127.19.1.rt56 # The tag identifying the FlexRAN docker image, + # the kernel version used to build FlexRAN can be used as tag ``` -7.
Tag the image and push to a local Harbor registry (Harbor registry deployed as part of OpenNESS Experience Kit) ```shell - docker tag flexran5g :/intel/flexran5g:3.10.0-1062.12.1.rt56 + docker tag flexran5g :/intel/flexran5g:3.10.0-1127.19.1.rt56 - docker push :/intel/flexran5g:3.10.0-1062.12.1.rt56 + docker push :/intel/flexran5g:3.10.0-1127.19.1.rt56 ``` -By the end of step 7, the FlexRAN Docker image is created and available in the Docker registry. This image is copied to the edge node where FlexRAN will be deployed and that is installed with OpenNESS Network edge with all the required EPA features including Intel® FPGA Programmable Acceleration Card (Intel® FPGA PAC) N3000. Please refer to the document [Using FPGA in OpenNESS: Programming, Resource Allocation, and Configuration](https://github.com/open-ness/ido-specs/blob/master/doc/enhanced-platform-awareness/openness-fpga.md) for details on setting up Intel® FPGA PAC N3000 with vRAN FPGA image. +By the end of step 7, the FlexRAN Docker image is created and available in the Harbor registry. This image is copied to the edge node where FlexRAN will be deployed and that is installed with OpenNESS Network edge with all the required EPA features including Intel® FPGA Programmable Acceleration Card (Intel® FPGA PAC) N3000. Please refer to the document [Using FPGA in OpenNESS: Programming, Resource Allocation, and Configuration](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-fpga.md) for details on setting up Intel® FPGA PAC N3000 with vRAN FPGA image. # FlexRAN hardware platform configuration ## BIOS @@ -99,7 +110,7 @@ Instructions on how to configure the kernel command line in OpenNESS can be foun # Deploying and Running the FlexRAN pod -1. Deploy the OpenNESS cluster with [SRIOV for FPGA enabled](https://github.com/open-ness/ido-specs/blob/master/doc/enhanced-platform-awareness/openness-fpga.md#fpga-fec-ansible-installation-for-openness-network-edge). +1. Deploy the OpenNESS cluster with [SRIOV for FPGA enabled](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-fpga.md#fpga-fec-ansible-installation-for-openness-network-edge). 2. Confirm that there are no FlexRAN pods and the FPGA configuration pods are not deployed using `kubectl get pods`. 3. Confirm that all the EPA microservice and enhancements (part of OpenNESS playbook) are deployed `kubectl get po --all-namespaces`. ```yaml @@ -133,13 +144,13 @@ Instructions on how to configure the kernel command line in OpenNESS can be foun openness syslog-master-894hs 1/1 Running 0 7d19h openness syslog-ng-n7zfm 1/1 Running 16 7d19h ``` -4. Deploy the Kubernetes job to program the [FPGA](https://github.com/open-ness/ido-specs/blob/master/doc/enhanced-platform-awareness/openness-fpga.md#fpga-programming-and-telemetry-on-openness-network-edge) -5. Deploy the Kubernetes job to configure the [BIOS](https://github.com/open-ness/ido-specs/blob/master/doc/enhanced-platform-awareness/openness-bios.md) (note: only works on select Intel development platforms) -6. Deploy the Kubernetes job to configure the [Intel PAC N3000 FPGA](https://github.com/open-ness/ido-specs/blob/master/doc/enhanced-platform-awareness/openness-fpga.md#fec-vf-configuration-for-openness-network-edge) +4. 
Deploy the Kubernetes job to program the [FPGA](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-fpga.md#fpga-programming-and-telemetry-on-openness-network-edge) +5. Deploy the Kubernetes job to configure the [BIOS](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-bios.md) (note: only works on select Intel development platforms) +6. Deploy the Kubernetes job to configure the [Intel PAC N3000 FPGA](https://github.com/open-ness/ido-specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-fpga.md#fec-vf-configuration-for-openness-network-edge) 7. Deploy the FlexRAN Kubernetes pod using a helm chart provided in Edge Apps repository at `edgeapps/network-functions/ran/charts`: ```shell - helm install flexran-pod flexran + helm install flexran-pod du-dev ``` 8. `exec` into FlexRAN pod `kubectl exec -it flexran -- /bin/bash` diff --git a/doc/ran/openness_xran.md b/doc/reference-architectures/ran/openness_xran.md similarity index 99% rename from doc/ran/openness_xran.md rename to doc/reference-architectures/ran/openness_xran.md index 2b607717..ececca9c 100644 --- a/doc/ran/openness_xran.md +++ b/doc/reference-architectures/ran/openness_xran.md @@ -550,7 +550,7 @@ Check the `/proc/cmd` output. It should look similar to: ```shell #cat /proc/cmdline - BOOT_IMAGE=/vmlinuz-3.10.0-957.10.1.rt56.921.el7.x86_64 root=/dev/mapper/centosroot ro crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap intel_iommu=on iommu=pt usbcore.autosuspend=-1 selinux=0 enforcing=0 nmi_watchdog=0 softlockup_panic=0 audit=0 intel_pstate=disable cgroup_memory=1 cgroup_enable=memory mce=off idle=poll hugepagesz=1G hugepages=16 hugepagesz=2M hugepages=0 default_hugepagesz=1G isolcpus=1-19,21-39 rcu_nocbs=1-19,21-39 kthread_cpus=0,20 irqaffinity=0,20 nohz_full=1-19,21-39 + BOOT_IMAGE=/vmlinuz-3.10.0-1127.19.1.rt56.1116.el7.x86_64 root=/dev/mapper/centosroot ro crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap intel_iommu=on iommu=pt usbcore.autosuspend=-1 selinux=0 enforcing=0 nmi_watchdog=0 softlockup_panic=0 audit=0 intel_pstate=disable cgroup_memory=1 cgroup_enable=memory mce=off idle=poll hugepagesz=1G hugepages=16 hugepagesz=2M hugepages=0 default_hugepagesz=1G isolcpus=1-19,21-39 rcu_nocbs=1-19,21-39 kthread_cpus=0,20 irqaffinity=0,20 nohz_full=1-19,21-39 ``` ### Configure Interfaces @@ -597,7 +597,7 @@ Next, the VFs need to be manually bound to the `vfio-pci` driver using the follo Example: ```shell - /opt/dpdk-19.11/usertools/dpdk-devbind.py --bind=vfio_pci 0000:86:0a.1 + /opt/openness/dpdk-19.11/usertools/dpdk-devbind.py --bind=vfio_pci 0000:86:0a.1 ``` Restart the SRIOV device plugin pods from the K8s control plane. 
diff --git a/doc/ran/openness_xran_images/xran_img1.png b/doc/reference-architectures/ran/openness_xran_images/xran_img1.png similarity index 100% rename from doc/ran/openness_xran_images/xran_img1.png rename to doc/reference-architectures/ran/openness_xran_images/xran_img1.png diff --git a/doc/ran/openness_xran_images/xran_img10.png b/doc/reference-architectures/ran/openness_xran_images/xran_img10.png similarity index 100% rename from doc/ran/openness_xran_images/xran_img10.png rename to doc/reference-architectures/ran/openness_xran_images/xran_img10.png diff --git a/doc/ran/openness_xran_images/xran_img11.png b/doc/reference-architectures/ran/openness_xran_images/xran_img11.png similarity index 100% rename from doc/ran/openness_xran_images/xran_img11.png rename to doc/reference-architectures/ran/openness_xran_images/xran_img11.png diff --git a/doc/ran/openness_xran_images/xran_img12.png b/doc/reference-architectures/ran/openness_xran_images/xran_img12.png similarity index 100% rename from doc/ran/openness_xran_images/xran_img12.png rename to doc/reference-architectures/ran/openness_xran_images/xran_img12.png diff --git a/doc/ran/openness_xran_images/xran_img13.png b/doc/reference-architectures/ran/openness_xran_images/xran_img13.png similarity index 100% rename from doc/ran/openness_xran_images/xran_img13.png rename to doc/reference-architectures/ran/openness_xran_images/xran_img13.png diff --git a/doc/ran/openness_xran_images/xran_img14.png b/doc/reference-architectures/ran/openness_xran_images/xran_img14.png similarity index 100% rename from doc/ran/openness_xran_images/xran_img14.png rename to doc/reference-architectures/ran/openness_xran_images/xran_img14.png diff --git a/doc/ran/openness_xran_images/xran_img16.png b/doc/reference-architectures/ran/openness_xran_images/xran_img16.png similarity index 100% rename from doc/ran/openness_xran_images/xran_img16.png rename to doc/reference-architectures/ran/openness_xran_images/xran_img16.png diff --git a/doc/ran/openness_xran_images/xran_img17.png b/doc/reference-architectures/ran/openness_xran_images/xran_img17.png similarity index 100% rename from doc/ran/openness_xran_images/xran_img17.png rename to doc/reference-architectures/ran/openness_xran_images/xran_img17.png diff --git a/doc/ran/openness_xran_images/xran_img18.png b/doc/reference-architectures/ran/openness_xran_images/xran_img18.png similarity index 100% rename from doc/ran/openness_xran_images/xran_img18.png rename to doc/reference-architectures/ran/openness_xran_images/xran_img18.png diff --git a/doc/ran/openness_xran_images/xran_img19.png b/doc/reference-architectures/ran/openness_xran_images/xran_img19.png similarity index 100% rename from doc/ran/openness_xran_images/xran_img19.png rename to doc/reference-architectures/ran/openness_xran_images/xran_img19.png diff --git a/doc/ran/openness_xran_images/xran_img2.png b/doc/reference-architectures/ran/openness_xran_images/xran_img2.png similarity index 100% rename from doc/ran/openness_xran_images/xran_img2.png rename to doc/reference-architectures/ran/openness_xran_images/xran_img2.png diff --git a/doc/ran/openness_xran_images/xran_img20.png b/doc/reference-architectures/ran/openness_xran_images/xran_img20.png similarity index 100% rename from doc/ran/openness_xran_images/xran_img20.png rename to doc/reference-architectures/ran/openness_xran_images/xran_img20.png diff --git a/doc/ran/openness_xran_images/xran_img21.png b/doc/reference-architectures/ran/openness_xran_images/xran_img21.png similarity index 100% rename from 
doc/ran/openness_xran_images/xran_img21.png rename to doc/reference-architectures/ran/openness_xran_images/xran_img21.png diff --git a/doc/ran/openness_xran_images/xran_img22.png b/doc/reference-architectures/ran/openness_xran_images/xran_img22.png similarity index 100% rename from doc/ran/openness_xran_images/xran_img22.png rename to doc/reference-architectures/ran/openness_xran_images/xran_img22.png diff --git a/doc/ran/openness_xran_images/xran_img23.png b/doc/reference-architectures/ran/openness_xran_images/xran_img23.png similarity index 100% rename from doc/ran/openness_xran_images/xran_img23.png rename to doc/reference-architectures/ran/openness_xran_images/xran_img23.png diff --git a/doc/ran/openness_xran_images/xran_img24.png b/doc/reference-architectures/ran/openness_xran_images/xran_img24.png similarity index 100% rename from doc/ran/openness_xran_images/xran_img24.png rename to doc/reference-architectures/ran/openness_xran_images/xran_img24.png diff --git a/doc/ran/openness_xran_images/xran_img25.png b/doc/reference-architectures/ran/openness_xran_images/xran_img25.png similarity index 100% rename from doc/ran/openness_xran_images/xran_img25.png rename to doc/reference-architectures/ran/openness_xran_images/xran_img25.png diff --git a/doc/ran/openness_xran_images/xran_img3.png b/doc/reference-architectures/ran/openness_xran_images/xran_img3.png similarity index 100% rename from doc/ran/openness_xran_images/xran_img3.png rename to doc/reference-architectures/ran/openness_xran_images/xran_img3.png diff --git a/doc/ran/openness_xran_images/xran_img4.png b/doc/reference-architectures/ran/openness_xran_images/xran_img4.png similarity index 100% rename from doc/ran/openness_xran_images/xran_img4.png rename to doc/reference-architectures/ran/openness_xran_images/xran_img4.png diff --git a/doc/ran/openness_xran_images/xran_img5.png b/doc/reference-architectures/ran/openness_xran_images/xran_img5.png similarity index 100% rename from doc/ran/openness_xran_images/xran_img5.png rename to doc/reference-architectures/ran/openness_xran_images/xran_img5.png diff --git a/doc/ran/openness_xran_images/xran_img6.png b/doc/reference-architectures/ran/openness_xran_images/xran_img6.png similarity index 100% rename from doc/ran/openness_xran_images/xran_img6.png rename to doc/reference-architectures/ran/openness_xran_images/xran_img6.png diff --git a/doc/ran/openness_xran_images/xran_img7.png b/doc/reference-architectures/ran/openness_xran_images/xran_img7.png similarity index 100% rename from doc/ran/openness_xran_images/xran_img7.png rename to doc/reference-architectures/ran/openness_xran_images/xran_img7.png diff --git a/doc/ran/openness_xran_images/xran_img8.png b/doc/reference-architectures/ran/openness_xran_images/xran_img8.png similarity index 100% rename from doc/ran/openness_xran_images/xran_img8.png rename to doc/reference-architectures/ran/openness_xran_images/xran_img8.png diff --git a/doc/ran/openness_xran_images/xran_img9.png b/doc/reference-architectures/ran/openness_xran_images/xran_img9.png similarity index 100% rename from doc/ran/openness_xran_images/xran_img9.png rename to doc/reference-architectures/ran/openness_xran_images/xran_img9.png diff --git a/doc/reference-architectures/sdwan-images/e2e-scenario1.png b/doc/reference-architectures/sdwan-images/e2e-scenario1.png new file mode 100644 index 00000000..0b853daa Binary files /dev/null and b/doc/reference-architectures/sdwan-images/e2e-scenario1.png differ diff --git a/doc/reference-architectures/sdwan-images/e2e-scenario2.png 
b/doc/reference-architectures/sdwan-images/e2e-scenario2.png new file mode 100644 index 00000000..c4a0c069 Binary files /dev/null and b/doc/reference-architectures/sdwan-images/e2e-scenario2.png differ diff --git a/doc/reference-architectures/sdwan-images/e2e-scenario3.png b/doc/reference-architectures/sdwan-images/e2e-scenario3.png new file mode 100644 index 00000000..12ce36d6 Binary files /dev/null and b/doc/reference-architectures/sdwan-images/e2e-scenario3.png differ diff --git a/doc/reference-architectures/sdwan-images/openness-sdwan-ref.png b/doc/reference-architectures/sdwan-images/openness-sdwan-ref.png new file mode 100644 index 00000000..4b3f82ee Binary files /dev/null and b/doc/reference-architectures/sdwan-images/openness-sdwan-ref.png differ diff --git a/doc/reference-architectures/sdwan-images/packet-flow-rx.png b/doc/reference-architectures/sdwan-images/packet-flow-rx.png new file mode 100644 index 00000000..53b074ed Binary files /dev/null and b/doc/reference-architectures/sdwan-images/packet-flow-rx.png differ diff --git a/doc/reference-architectures/sdwan-images/packet-flow-tx.png b/doc/reference-architectures/sdwan-images/packet-flow-tx.png new file mode 100644 index 00000000..f7e5b017 Binary files /dev/null and b/doc/reference-architectures/sdwan-images/packet-flow-tx.png differ diff --git a/doc/reference-architectures/sdwan-images/sdwan-edge-arch.png b/doc/reference-architectures/sdwan-images/sdwan-edge-arch.png new file mode 100644 index 00000000..6f4053c2 Binary files /dev/null and b/doc/reference-architectures/sdwan-images/sdwan-edge-arch.png differ diff --git a/doc/reference-architectures/sdwan-images/sdwan-hub-arch.png b/doc/reference-architectures/sdwan-images/sdwan-hub-arch.png new file mode 100644 index 00000000..b8d0a4a4 Binary files /dev/null and b/doc/reference-architectures/sdwan-images/sdwan-hub-arch.png differ diff --git a/index.html b/index.html index bb24dd2a..fd9122d4 100644 --- a/index.html +++ b/index.html @@ -10,5 +10,5 @@ ---

You are being redirected to the OpenNESS Docs.

diff --git a/openness_releasenotes.md b/openness_releasenotes.md index cfdcdf94..386f16ea 100644 --- a/openness_releasenotes.md +++ b/openness_releasenotes.md @@ -6,11 +6,49 @@ Copyright (c) 2019-2020 Intel Corporation # Release Notes This document provides high-level system features, issues, and limitations information for Open Network Edge Services Software (OpenNESS). - [Release history](#release-history) -- [Features for Release](#features-for-release) + - [OpenNESS - 19.06](#openness---1906) + - [OpenNESS - 19.09](#openness---1909) + - [OpenNESS - 19.12](#openness---1912) + - [OpenNESS - 20.03](#openness---2003) + - [OpenNESS - 20.06](#openness---2006) + - [OpenNESS - 20.09](#openness---2009) + - [OpenNESS - 20.12](#openness---2012) - [Changes to Existing Features](#changes-to-existing-features) + - [OpenNESS - 19.06](#openness---1906-1) + - [OpenNESS - 19.06.01](#openness---190601) + - [OpenNESS - 19.09](#openness---1909-1) + - [OpenNESS - 19.12](#openness---1912-1) + - [OpenNESS - 20.03](#openness---2003-1) + - [OpenNESS - 20.06](#openness---2006-1) + - [OpenNESS - 20.09](#openness---2009-1) + - [OpenNESS - 20.12](#openness---2012-1) - [Fixed Issues](#fixed-issues) + - [OpenNESS - 19.06](#openness---1906-2) + - [OpenNESS - 19.06.01](#openness---190601-1) + - [OpenNESS - 19.06.01](#openness---190601-2) + - [OpenNESS - 19.12](#openness---1912-2) + - [OpenNESS - 20.03](#openness---2003-2) + - [OpenNESS - 20.06](#openness---2006-2) + - [OpenNESS - 20.09](#openness---2009-2) + - [OpenNESS - 20.12](#openness---2012-2) - [Known Issues and Limitations](#known-issues-and-limitations) + - [OpenNESS - 19.06](#openness---1906-3) + - [OpenNESS - 19.06.01](#openness---190601-3) + - [OpenNESS - 19.09](#openness---1909-2) + - [OpenNESS - 19.12](#openness---1912-3) + - [OpenNESS - 20.03](#openness---2003-3) + - [OpenNESS - 20.06](#openness---2006-3) + - [OpenNESS - 20.09](#openness---2009-3) + - [OpenNESS - 20.12](#openness---2012-3) - [Release Content](#release-content) + - [OpenNESS - 19.06](#openness---1906-4) + - [OpenNESS - 19.06.01](#openness---190601-4) + - [OpenNESS - 19.09](#openness---1909-3) + - [OpenNESS - 19.12](#openness---1912-4) + - [OpenNESS - 20.03](#openness---2003-4) + - [OpenNESS - 20.06](#openness---2006-4) + - [OpenNESS - 20.09](#openness---2009-4) + - [OpenNESS - 20.12](#openness---2012-4) - [Hardware and Software Compatibility](#hardware-and-software-compatibility) - [Intel® Xeon® D Processor](#intel-xeon-d-processor) - [2nd Generation Intel® Xeon® Scalable Processors](#2nd-generation-intel-xeon-scalable-processors) @@ -18,358 +56,430 @@ This document provides high-level system features, issues, and limitations infor - [Supported Operating Systems](#supported-operating-systems) - [Packages Version](#packages-version) -# Release history -1. OpenNESS - 19.06 -2. OpenNESS - 19.06.01 -3. OpenNESS - 19.09 -4. OpenNESS - 19.12 -5. OpenNESS - 20.03 -6. OpenNESS - 20.06 -7. OpenNESS - 20.09 - -# Features for Release -1. 
OpenNESS - 19.06 - - Edge Cloud Deployment options - - Controller-based deployment of Applications in Docker Containers/VM–using-Libvirt - - Controller + Kubernetes\* based deployment of Applications in Docker\* Containers - - OpenNESS Controller - - Support for Edge Node Orchestration - - Support for Web UI front end - - OpenNESS APIs - - Edge Application APIs - - Edge Virtualization Infrastructure APIs - - Edge Application life cycle APIs - - Core Network Configuration APIs - - Edge Application authentication APIs - - OpenNESS Controller APIs - - Platform Features - - Microservices based Appliance and Controller agent deployment - - Support for DNS for the edge - - CentOSc\* 7.6 / CentOS 7.6 + RT kernel - - Basic telemetry support - - Sample Reference Applications - - OpenVINO™ based Consumer Application - - Producer Application supporting OpenVINO™ - - Dataplane - - DPDK/KNI based Dataplane – NTS - - Support for deployment on IP, LTE (S1, SGi and LTE CUPS) - - Cloud Adapters - - Support for running Amazon\* Greengrass\* cores as an OpenNESS application - - Support for running Baidu\* Cloud as an OpenNESS application - - Documentation - - User Guide Enterprise and Operator Edge - - OpenNESS Architecture - - Swagger/Proto buff External API Guide - - 4G/CUPS API whitepaper - - Cloud Connector App note - - OpenVINO™ on OpenNESS App note -2. OpenNESS - 19.09 - - Edge Cloud Deployment options - - Asyn method for image download to avoid timeout - - Dataplane - - Support for OVN/OVS based Dataplane and network overlay for Network Edge (based on Kubernetes) - - Cloud Adapters - - Support for running Amazon Greengrass cores as an OpenNESS application with OVN/OVS as Dataplane and network overlay - - Support for Inter-App comms - - Support for OVS-DPDK or Linux\* bridge or Default interface for inter-Apps communication for OnPrem deployment - - Accelerator support - - Support for HDDL-R accelerator for interference in a container environment for OnPrem deployment - - Edge Applications - - Early Access Support for Open Visual Cloud (OVC) based Smart City App on OpenNESS OnPrem - - Support for Dynamic use of VPU or CPU for Inferences - - Gateway - - Support for Edge node and OpenNESS Controller gate way to support route-ability - - Documentation - - OpenNESS Architecture (update) - - OpenNESS Support for OVS as dataplane with OVN - - Open Visual Cloud Smart City Application on OpenNESS - Solution Overview - - Using Intel® Movidius™ Myriad™ X High Density Deep Learning (HDDL) solution in OpenNESS - - OpenNESS How-to Guide (update) -3. OpenNESS – 19.12 - - Hardware - - Support for Cascade lake 6252N - - Support for Intel® FPGA Programmable Acceleration Card (Intel® FPGA PAC) N3000 - - Edge Application - - Fully Cloud-native Open Visual Cloud Smart City Application pipeline on OpenNESS Network edge. - - Edge cloud - - EAA and CNCA microservice as native Kubernetes-managed services - - Support for Kubernetes version 1.16.2 - - Edge Compute EPA features support for Network Edge - - CPU Manager: Support deployment of POD with dedicated pinning - - SRIOV NIC: Support deployment of POD with dedicated SRIOV VF from NIC - - SRIOV FPGA: Support deployment of POD with dedicated SRIOV VF from FPGA - - Topology Manager: Support k8s to manage the resources allocated to workloads in a NUMA topology-aware manner - - BIOS/FW Configuration service - Intel SysCfg based BIOS/FW management service - - Hugepages: Support for allocation of 1G/2M huge pages to the Pod. 
- - Multus: Support for Multiple network interface in the PODs deployed by Kubernetes - - Node Feature discovery: Support detection of Silicon and Software features and automation of deployment of CNF and Applications - - FPGA Remote System Update service: Support the Open Programmable Acceleration Engine (OPAE) (fpgautil) based image update service for FPGA. - - Non-Privileged Container: Support deployment of non-privileged pods (CNFs and Applications as reference) - - Edge Compute EPA features support for OnPremises - - Using Intel® Movidius™ Myriad™ X High Density Deep Learning (HDDL) solution in OpenNESS - - OpenNESS Experience Kit for Network and OnPremises edge - - Offline Release Package: Customers should be able to create an installer package that can be used to install OnPremises version of OpenNESS without the need for Internet access. - - 5G NR Edge Cloud deployment support - - 5G NR edge cloud deployment support with SA mode - - AF: Support for 5G NGC Application function as a microservice - - NEF: Support for 5G NGC Network Exposure function as a microservice - - Support for 5G NR UDM, UPF, AMF, PCF and SCF (not part of the release) - - DNS support - - DNS support for UE - - DNS Support for Edge applications - - Documentation - - Completely reorganized documentation structure for ease of navigation - - 5G NR Edge Cloud deployment Whitepaper - - EPA application note for each of the features -4. OpenNESS – 20.03 - - OVN/OVS-DPDK support for dataplane - - Network Edge: Support for kube-ovn CNI with OVS or OVS-DPDK as dataplane. Support for Calico as CNI. - - OnPremises Edge: Support for OVS-DPDK CNI with OVS-DPDK as dataplane supporting application deployed in containers or VMs - - Support for VM deployments on Kubernetes mode - - Kubevirt based VM deployment support - - EPA Support for SRIOV Virtual function allocation to the VMs deployed using K8s - - EPA support - OnPremises - - Support for dedicated core allocation to applications running as VMs or Containers - - Support for dedicated SRIOV VF allocation to applications running in VM or containers - - Support for system resource allocation into the application running as a container - - Mount point for shared storage - - Pass environment variables - - Configure the port rules - - Core Network Feature (5G) - - PFD Management API support (3GPP 23.502 Sec. 52.6.3 PFD Management service) - - AF: Added support for PFD Northbound API - - NEF: Added support for PFD southbound API, and Stubs to loopback the PCF calls. - - kubectl: Enhanced CNCA kubectl plugin to configure PFD parameters - - WEB UI: Enhanced CNCA WEB UI to configure PFD params in OnPerm mode - - Auth2 based authentication between 5G Network functions: (as per 3GPP Standard) - - Implemented oAuth2 based authentication and validation - - AF and NEF communication channel is updated to authenticated based on oAuth2 JWT token in addition to HTTP2. - - HTTPS support - - Enhanced the 5G OAM, CNCA (web-ui and kube-ctl) to HTTPS interface - - Modular Playbook - - Support for customers to choose real-time or non-real-time kernel for an edge node - - Support for the customer to choose CNIs - Validated with Kube-OVN and Calico - - Edge Apps - - FlexRAN: Dockerfile and pod specification for the deployment of 4G or 5G FlexRAN - - AF: Dockerfile and pod specification - - NEF: Dockerfile and pod specification - - UPF: Dockerfile and pod specification -5. 
OpenNESS – 20.06 - - OpenNESS is now available in two distributions - - Open source (Apache 2.0 license) - - Intel Distribution of OpenNESS (Intel Proprietary License) - - Includes all the code from the open source distribution plus additional features and enhancements to improve the user experience - - Access requires a signed license. A request for access can be made at openness.org by navigating to the “Products” section and selecting “Intel Distribution of OpenNESS” - - Both distributions are hosted at github.com/open-ness - - On premises configuration now optionally supports Kubernetes - - Core Network Feature (5G) - - Policy Authorization Service support in AF and CNCA over the N5 Interface(3GPP 29.514 - Chapter 5 Npcf_PolicyAuthorization Service API). - - Core Network Notifications for User Plane Path Change event received through Policy Authorization support in AF. - - NEF South Bound Interfaces support to communicate with the Core Network Functions for Traffic Influence and PFD. - - Core Network Test Function (CNTF) microservice added for validating the AF & NEF South Bound Interface communication. - - Flavors added for Core Network control-plane and user-plane. - - OpenNESS assisted Edge cloud deployment in 5G Non Standalone mode whitepaper. - - OpenNESS 20.06 5G features enablement through the enhanced-OpenNESS release (IDO). - - Dataplane - - Support for Calico eBPF as CNI - - Performance baselining of the CNIs - - Visual Compute and Media Analytics - - Intel Visual Cloud Accelerator Card - Analytics (VCAC-A) Kubernetes deployment support (CPU, GPU, and VPU) - - Node feature discovery of VCAC-A - - Telemetry support for VCAC-A - - Provide ansible and Helm -playbook support for OVC codecs Intel® Xeon® CPU mode - video analytics service (REST API) for developers - - Edge Applications - - Smart City Application Pipeline supporting CPU or VCAC-A mode with Helm chart - - CDN Content Delivery using NGINX with SR-IOV capability for higher performance with Helm chart - - CDN transcode sample application using Intel® Xeon® CPU optimized media SDK with Helm Chart - - Support for Transcoding Service using Intel® Xeon® CPU optimized media SDK with Helm chart - - Intel Edge Insights application support with Helm chart - - Edge Network Functions - - FlexRAN DU with Helm Chart (FlexRAN not part of the release) - - xRAN Fronthaul with Helm CHart (xRAN app not part of the release) - - Core Network Function - Application Function with Helm Chart - - Core Network Function - Network Exposure Function With Helm Chart - - Core Network Function - UPF (UPF app not part of the release) - - Core network Support functions - OAM and CNTF - - Helm Chart for Kubernetes enhancements - - NFD, CMK, SRIOV-Device plugin and Multus\* - - Support for local Docker registry setup - - Support for deployment-specific Flavors - - Minimal - - RAN - 4G and 5G - - Core - User plane and Control Plane - - Media Analytics with VCAC-A and with CPU only mode - - CDN - Transcode - - CDN - Content Delivery - - Azure - Deployment of OpenNESS cluster on Microsoft\* Azure\* cloud - - Support for OpenNESS on CSP Cloud - - Azure - Deployment of OpenNESS cluster on Microsoft Azure cloud - - Telemetry Support - - Support for Collectd backend with hardware from Intel and custom metrics - - Cpu, cpufreq, load, hugepages, intel_pmu, intel_rdt, ipmi, ovs_stats, ovs_pmd_stats - - FPGA – PACN3000 (collectd) - Temp, Power draw - - VPU Device memory, VPU device thermal, VPU Device utilization - - Open Telemetry - Support for collector and 
exporter for metrics (e.g., heartbeat from app) - - Support for PCM counter for Prometheus\* and Grafana\* - - Telemetry Aware Scheduler - - Early Access support for Resource Management Daemon (RMD) - - RMD for cache allocation to the application Pods - - Ability to deploy OpenNESS Master and Node on the same platform -6. OpenNESS – 20.09 - - Native On-premises mode - - Following from the previous release decision of pausing Native on-premises Development the code has been move to a dedicated repository “native-on-prem” - - Kubernetes based solution will now support both Network and on-premises Edge - - Service Mesh support - - Basic support for Service Mesh using istio within an OpenNESS cluster - - Application of Service Mesh openness 5G and Media analytics - A dedicated network for service to service communications - - EAA Update - - EAA microservices has been updated to be more cloud-native friendly - - 5G Core AF and NEF - - User-Plane Path Change event notifications from AF received over N33 I/f [Traffic Influence updates from SMF received through NEF] - - AF/NEF/OAM Configuration and Certificate updates through Configmaps. - - AF and OAM API’s access authorization through Istio Gateway. - - Envoy Sidecar Proxy for all the 5G microservices(AF/NEF/OAM/CNTF) which enables support for telemetry(Request/Response Statistics), certificates management, http 2.0 protocol configuration(with/without TLS) - - Core-cplane flavor is enabled with Istio - - Edge Insights Application (update) - - Industrial Edge Insights Software update to version 2.3. - - Experience Kit now supports multiple detection video’s – Safety equipment detection, PCB default detection and also supports external video streams. - - CERA Near Edge - - Core network and Application reference architecture - - CERA provides reference integration of OpenNESS, Network function 5G UPF (Not part of the release), OpenVINO with EIS application. +# Release history + +## OpenNESS - 19.06 +- Edge Cloud Deployment options + - Controller-based deployment of Applications in Docker Containers/VM–using-Libvirt + - Controller + Kubernetes\* based deployment of Applications in Docker\* Containers +- OpenNESS Controller + - Support for Edge Node Orchestration + - Support for Web UI front end +- OpenNESS APIs + - Edge Application APIs + - Edge Virtualization Infrastructure APIs + - Edge Application life cycle APIs + - Core Network Configuration APIs + - Edge Application authentication APIs + - OpenNESS Controller APIs +- Platform Features + - Microservices based Appliance and Controller agent deployment + - Support for DNS for the edge + - CentOS\* 7.6 / CentOS 7.6 + RT kernel + - Basic telemetry support +- Sample Reference Applications + - OpenVINO™ based Consumer Application + - Producer Application supporting OpenVINO™ +- Dataplane + - DPDK/KNI based Dataplane – NTS + - Support for deployment on IP, LTE (S1, SGi and LTE CUPS) +- Cloud Adapters + - Support for running Amazon\* Greengrass\* cores as an OpenNESS application + - Support for running Baidu\* Cloud as an OpenNESS application +- Documentation + - User Guide Enterprise and Operator Edge + - OpenNESS Architecture + - Swagger/Proto buff External API Guide + - 4G/CUPS API whitepaper + - Cloud Connector App note + - OpenVINO™ on OpenNESS App note + +## OpenNESS - 19.09 +- Edge Cloud Deployment options + - Async method for image download to avoid timeout. 
+- Dataplane + - Support for OVN/OVS based Dataplane and network overlay for Network Edge (based on Kubernetes) +- Cloud Adapters + - Support for running Amazon Greengrass cores as an OpenNESS application with OVN/OVS as Dataplane and network overlay +- Support for Inter-App comms + - Support for OVS-DPDK or Linux\* bridge or Default interface for inter-Apps communication for OnPrem deployment +- Accelerator support + - Support for HDDL-R accelerator for inference in a container environment for OnPrem deployment +- Edge Applications + - Early Access Support for Open Visual Cloud (OVC) based Smart City App on OpenNESS OnPrem + - Support for Dynamic use of VPU or CPU for Inferences +- Gateway + - Support for Edge node and OpenNESS Controller gateway to support route-ability +- Documentation + - OpenNESS Architecture (update) + - OpenNESS Support for OVS as dataplane with OVN + - Open Visual Cloud Smart City Application on OpenNESS - Solution Overview + - Using Intel® Movidius™ Myriad™ X High Density Deep Learning (HDDL) solution in OpenNESS + - OpenNESS How-to Guide (update) + +## OpenNESS - 19.12 +- Hardware + - Support for Cascade Lake 6252N + - Support for Intel® FPGA Programmable Acceleration Card (Intel® FPGA PAC) N3000 +- Edge Application + - Fully cloud-native Open Visual Cloud Smart City Application pipeline on OpenNESS Network edge. +- Edge cloud + - EAA and CNCA microservice as native Kubernetes-managed services + - Support for Kubernetes version 1.16.2 +- Edge Compute EPA features support for Network Edge + - CPU Manager: Support deployment of POD with dedicated pinning + - SRIOV NIC: Support deployment of POD with dedicated SRIOV VF from NIC + - SRIOV FPGA: Support deployment of POD with dedicated SRIOV VF from FPGA + - Topology Manager: Support k8s to manage the resources allocated to workloads in a NUMA topology-aware manner + - BIOS/FW Configuration service - Intel SysCfg based BIOS/FW management service + - Hugepages: Support for allocation of 1G/2M huge pages to the Pod + - Multus: Support for multiple network interfaces in the PODs deployed by Kubernetes + - Node Feature discovery: Support detection of Silicon and Software features and automation of deployment of CNF and Applications + - FPGA Remote System Update service: Support the Open Programmable Acceleration Engine (OPAE) (fpgautil) based image update service for FPGA + - Non-Privileged Container: Support deployment of non-privileged pods (CNFs and Applications as reference) +- Edge Compute EPA features support for On-Premises + - Using Intel® Movidius™ Myriad™ X High Density Deep Learning (HDDL) solution in OpenNESS +- OpenNESS Experience Kit for Network and OnPremises edge + - Offline Release Package: Customers should be able to create an installer package that can be used to install the OnPremises version of OpenNESS without the need for Internet access.
+- 5G NR Edge Cloud deployment support + - 5G NR edge cloud deployment support with SA mode + - AF: Support for 5G NGC Application function as a microservice + - NEF: Support for 5G NGC Network Exposure function as a microservice + - Support for 5G NR UDM, UPF, AMF, PCF and SCF (not part of the release) +- DNS support + - DNS support for UE + - DNS Support for Edge applications +- Documentation + - Completely reorganized documentation structure for ease of navigation + - 5G NR Edge Cloud deployment Whitepaper + - EPA application note for each of the features + +## OpenNESS - 20.03 +- OVN/OVS-DPDK support for dataplane + - Network Edge: Support for kube-ovn CNI with OVS or OVS-DPDK as dataplane. Support for Calico as CNI. + - OnPremises Edge: Support for OVS-DPDK CNI with OVS-DPDK as dataplane supporting applications deployed in containers or VMs +- Support for VM deployments in Kubernetes mode + - Kubevirt based VM deployment support + - EPA Support for SRIOV Virtual function allocation to the VMs deployed using Kubernetes +- EPA support - OnPremises + - Support for dedicated core allocation to applications running as VMs or Containers + - Support for dedicated SRIOV VF allocation to applications running in VM or containers + - Support for system resource allocation into the application running as a container + - Mount point for shared storage + - Pass environment variables + - Configure the port rules +- Core Network Feature (5G) + - PFD Management API support (3GPP 23.502 Sec. 5.2.6.3 PFD Management service) + - AF: Added support for PFD Northbound API + - NEF: Added support for PFD southbound API, and Stubs to loopback the PCF calls. + - kubectl: Enhanced CNCA kubectl plugin to configure PFD parameters + - WEB UI: Enhanced CNCA WEB UI to configure PFD params in OnPrem mode + - OAuth2-based authentication between 5G Network functions (as per 3GPP Standard) + - Implemented OAuth2-based authentication and validation + - AF and NEF communication channel is updated to authenticate based on an OAuth2 JWT token in addition to HTTP2. + - HTTPS support + - Enhanced the 5G OAM, CNCA (web-ui and kube-ctl) to HTTPS interface +- Modular Playbook + - Support for customers to choose real-time or non-real-time kernel for an edge node + - Support for customers to choose CNIs - Validated with Kube-OVN and Calico +- Edge Apps + - FlexRAN: Dockerfile and pod specification for the deployment of 4G or 5G FlexRAN + - AF: Dockerfile and pod specification + - NEF: Dockerfile and pod specification + - UPF: Dockerfile and pod specification + +## OpenNESS - 20.06 +- OpenNESS is now available in two distributions + - Open source (Apache 2.0 license) + - Intel Distribution of OpenNESS (Intel Proprietary License) + - Includes all the code from the open source distribution plus additional features and enhancements to improve the user experience + - Access requires a signed license. A request for access can be made at openness.org by navigating to the "Products" section and selecting "Intel Distribution of OpenNESS" + - Both distributions are hosted at github.com/open-ness +- On-premises configuration now optionally supports Kubernetes +- Core Network Feature (5G) + - Policy Authorization Service support in AF and CNCA over the N5 Interface (3GPP 29.514 - Chapter 5 Npcf_PolicyAuthorization Service API). + - Core Network Notifications for User Plane Path Change event received through Policy Authorization support in AF.
+ - NEF South Bound Interfaces support to communicate with the Core Network Functions for Traffic Influence and PFD. + - Core Network Test Function (CNTF) microservice added for validating the AF & NEF South Bound Interface communication. + - Flavors added for Core Network control-plane and user-plane. + - OpenNESS assisted Edge cloud deployment in 5G Non Standalone mode whitepaper. + - OpenNESS 20.06 5G features enablement through the enhanced-OpenNESS release (IDO). +- Dataplane + - Support for Calico eBPF as CNI + - Performance baselining of the CNIs +- Visual Compute and Media Analytics + - Intel Visual Cloud Accelerator Card - Analytics (VCAC-A) Kubernetes deployment support (CPU, GPU, and VPU) + - Node feature discovery of VCAC-A + - Telemetry support for VCAC-A + - Provide Ansible and Helm playbook support for OVC codecs in Intel® Xeon® CPU mode - video analytics service (REST API) for developers +- Edge Applications + - Smart City Application Pipeline supporting CPU or VCAC-A mode with Helm chart + - CDN Content Delivery using NGINX with SR-IOV capability for higher performance with Helm chart + - CDN transcode sample application using Intel® Xeon® CPU optimized media SDK with Helm chart + - Support for Transcoding Service using Intel® Xeon® CPU optimized media SDK with Helm chart + - Intel Edge Insights application support with Helm chart +- Edge Network Functions + - FlexRAN DU with Helm Chart (FlexRAN not part of the release) + - xRAN Fronthaul with Helm Chart (xRAN app not part of the release) + - Core Network Function - Application Function with Helm chart + - Core Network Function - Network Exposure Function with Helm chart + - Core Network Function - UPF (UPF app not part of the release) + - Core network Support functions - OAM and CNTF +- Helm Chart for Kubernetes enhancements + - NFD, CMK, SRIOV-Device plugin and Multus\* + - Support for local Docker registry setup +- Support for deployment-specific Flavors + - Minimal + - RAN - 4G and 5G + - Core - User plane and Control Plane + - Media Analytics with VCAC-A and with CPU only mode + - CDN - Transcode + - CDN - Content Delivery + - Azure - Deployment of OpenNESS cluster on Microsoft\* Azure\* cloud +- Support for OpenNESS on CSP Cloud + - Azure - Deployment of OpenNESS cluster on Microsoft\* Azure\* cloud +- Telemetry Support + - Support for Collectd backend with hardware from Intel and custom metrics + - cpu, cpufreq, load, hugepages, intel_pmu, intel_rdt, ipmi, ovs_stats, ovs_pmd_stats + - FPGA – PACN3000 (collectd) - Temp, Power draw + - VPU Device memory, VPU device thermal, VPU Device utilization + - Open Telemetry - Support for collector and exporter for metrics (e.g., heartbeat from app) + - Support for PCM counter for Prometheus\* and Grafana\* + - Telemetry Aware Scheduler +- Early Access support for Resource Management Daemon (RMD) + - RMD for cache allocation to the application Pods +- Ability to deploy OpenNESS Master and Node on the same platform + +## OpenNESS - 20.09 +- Native On-premises mode + - Following the previous release's decision to pause Native on-premises development, the code has been moved to a dedicated repository, “native-on-prem” + - The Kubernetes-based solution will now support both Network and on-premises Edge +- Service Mesh support + - Basic support for Service Mesh using Istio within an OpenNESS cluster.
+ > **NOTE**: When deploying Istio Service Mesh in VMs, a minimum of 8 CPU cores and 16GB RAM must be allocated to each worker VM so that Istio operates smoothly + - Application of Service Mesh to OpenNESS 5G and Media Analytics - a dedicated network for service-to-service communications +- EAA Update + - EAA microservices have been updated to be more cloud-native friendly +- 5G Core AF and NEF + - User-Plane Path Change event notifications from AF received over N33 I/f [Traffic Influence updates from SMF received through NEF] + - AF/NEF/OAM Configuration and Certificate updates through Configmaps. + - AF and OAM API access authorization through Istio Gateway. + - Envoy Sidecar Proxy for all the 5G microservices (AF/NEF/OAM/CNTF), which enables support for telemetry (Request/Response statistics), certificate management, and HTTP/2 protocol configuration (with/without TLS) + - Core-cplane flavor is enabled with Istio +- Edge Insights Application (update) + - Industrial Edge Insights Software update to version 2.3. + - Experience Kit now supports multiple detection videos (safety equipment detection and PCB defect detection) and also supports external video streams. +- CERA Near Edge + - Core network and Application reference architecture + - CERA provides a reference integration of OpenNESS, the 5G UPF network function (not part of the release), and OpenVINO with the EIS application. + +## OpenNESS - 20.12 +- Converged Edge Reference Architecture (CERA) On-Premises Edge and Private Wireless deployment focusing on On-Premises, Private Wireless, and Ruggedized Outdoor deployments, presenting a scalable solution across the On-Premises edge. +- Reference deployment with Kubernetes enhancements for high-performance compute and networking for an SD-WAN node (Edge) that runs Applications, Services, and the SD-WAN CNF. AI/ML applications and services are targeted in this flavor with support for hardware offload for inferencing. +- Reference deployment with Kubernetes enhancements for high-performance compute and networking for an SD-WAN node (Hub) that runs the SD-WAN CNF. +- Reference deployment for high-performance computing and networking using SR-IOV for reference Untrusted Non-3GPP Access as defined by 3GPP Release 15. +- Reference implementation of the offline installation package for the CERA Access Edge flavor enabling installation of Kubernetes and related enhancements for Access edge deployments. +- Early access release of Edge Multi-Cluster Orchestration (EMCO), a Geo-distributed application orchestrator for Kubernetes. This release supports EMCO deploying and managing the life cycle of the Smart City Application pipeline on the edge cluster. More details are available in the [EMCO Release Notes](https://github.com/open-ness/EMCO/blob/main/ReleaseNotes.md). +- Azure Development kit (Devkit) supporting the installation of an OpenNESS Kubernetes cluster on the Microsoft* Azure* cloud. This is typically used by a customer who wants to develop applications and services for the edge using OpenNESS building blocks. +- Support for the Intel® vRAN Dedicated Accelerator ACC100: Kubernetes cloud-native deployment supporting higher-capacity 4G/LTE and 5G vRAN cells/carriers for FEC offload. +- Major system upgrades: Kubernetes 1.19.3, CentOS 7.8, Calico 3.16, and Kube-OVN 1.5.2. # Changes to Existing Features - - **OpenNESS 19.06** There are no unsupported or discontinued features relevant to this release. - - **OpenNESS 19.06.01** There are no unsupported or discontinued features relevant to this release.
- - **OpenNESS 19.09** There are no unsupported or discontinued features relevant to this release. - - **OpenNESS 19.12** - - NTS Dataplane support for Network edge is discontinued. - - Controller UI for Network edge has been discontinued except for the CNCA configuration. Customers can optionally leverage the Kubernetes dashboard to onboard applications. - - Edge node only supports non-realtime kernel. - - **OpenNESS 20.03** - - Support for HDDL-R only restricted to non-real-time or non-customized CentOS 7.6 default kernel. - - **OpenNESS 20.06** - - Offline install for Native mode OnPremises has be deprecated - - **OpenNESS 20.09** - - Native on-premises is now located in a dedicated repository with no further feature updates from previous release. + +## OpenNESS - 19.06 +There are no unsupported or discontinued features relevant to this release. + +## OpenNESS - 19.06.01 +There are no unsupported or discontinued features relevant to this release. + +## OpenNESS - 19.09 +There are no unsupported or discontinued features relevant to this release. + +## OpenNESS - 19.12 +- NTS Dataplane support for Network edge is discontinued. +- Controller UI for Network edge has been discontinued except for the CNCA configuration. Customers can optionally leverage the Kubernetes dashboard to onboard applications. +- Edge node only supports the non-realtime kernel. + +## OpenNESS - 20.03 +- Support for HDDL-R is restricted to the non-real-time or non-customized CentOS 7.6 default kernel. + +## OpenNESS - 20.06 +- Offline install for Native mode OnPremises has been deprecated + +## OpenNESS - 20.09 +- Native on-premises is now located in a dedicated repository with no further feature updates from the previous release. + +## OpenNESS - 20.12 +There are no unsupported or discontinued features relevant to this release. # Fixed Issues -- **OpenNESS 19.06** There are no non-Intel issues relevant to this release. -- **OpenNESS 19.06.01** There are no non-Intel issues relevant to this release. -- **OpenNESS 19.06.01** - - VHOST HugePages dependency - - Bug in getting appId by IP address for the container - - Wrong value of appliance verification key printed by ansible script - - NTS is hanging when trying to add same traffic policy to multiple interfaces - - Application in VM cannot be started - - Bug in libvirt deployment - - Invalid status after app un-deployment - - Application memory field is in MB -- **OpenNESS 19.12** - - Improved usability/automation in Ansible scripts -- **OpenNESS 20.03** - - Realtime Kernel support for network edge with K8s. - - Modular playbooks -- **OpenNESS 20.06** - - Optimized the Kubernetes-based deployment by supporting multiple Flavors -- **OpenNESS 20.09** - - Further optimized the Kubernetes based deployment by supporting multiple Flavors - - cAdvisor occasional failure issue is resolved + +## OpenNESS - 19.06 +There are no non-Intel issues relevant to this release. + +## OpenNESS - 19.06.01 +There are no non-Intel issues relevant to this release.
+ +## OpenNESS - 19.06.01 +- VHOST HugePages dependency +- Bug in getting appId by IP address for the container +- Wrong value of appliance verification key printed by ansible script +- NTS is hanging when trying to add same traffic policy to multiple interfaces +- Application in VM cannot be started +- Bug in libvirt deployment +- Invalid status after app un-deployment +- Application memory field is in MB + +## OpenNESS - 19.12 +- Improved usability/automation in Ansible scripts + +## OpenNESS - 20.03 +- Realtime Kernel support for network edge with K8s. +- Modular playbooks + +## OpenNESS - 20.06 +- Optimized the Kubernetes-based deployment by supporting multiple Flavors + +## OpenNESS - 20.09 +- Further optimized the Kubernetes-based deployment by supporting multiple Flavors +- cAdvisor occasional failure issue is resolved +- "Traffic rule creation: cannot parse filled and cleared fields" in Legacy OnPremises is fixed +- Fixed the issue with removing an Edge Node from the Controller when it is offline and a traffic policy is configured or an app is deployed + +## OpenNESS - 20.12 +- The known issue where a Pod that uses hugepages gets stuck in the terminating state on deletion has been fixed after upgrading to Kubernetes 1.19.3 +- Upgraded to Kube-OVN v1.5.2 for further Kube-OVN CNI enhancements # Known Issues and Limitations -- **OpenNESS 19.06** There are no issues relevant to this release. -- **OpenNESS 19.06.01** There is one issue relevant to this release: it is not possible to remove the application from Edge Node in case of error during application deployment. The issue concerns applications in a Virtual Machine. -- **OpenNESS 19.09** - - Gateway in multi-node - will not work when few nodes will have the same public IP (they will be behind one common NAT) - - Ansible in K8s can cause problems when rerun on a machine: - - If after running all 3 scripts - - Script 02 will be run again (it will not remove all necessary K8s related artifacts) - - We would recommend cleaning up the installation on the node -- **OpenNESS 19.12** - - Gateway in multi-node - will not work when few nodes will have the same public IP (they will be behind one common NAT) - - OpenNESS OnPremises: Cannot remove a failed/disconnected the edge node information/state from the controller - - The CNCA API (4G & 5G) supported in this release is an early access reference implementation and does not support authentication - - Real-time kernel support has been temporarily disabled to address the Kubernetes 1.16.2 and Realtime kernel instability. -- **OpenNESS 20.03** - - On-Premises edge installation takes more than 1.5 hours because of the Docker image build for OVS-DPDK - - Network edge installation takes more than 1.5 hours because of the Docker image build for OVS-DPDK - - OpenNESS controller allows management NICs to be in the pool of configuration, which might allow configuration by mistake. Thus, disconnecting the node from master - - When using the SRIOV EPA feature added in 20.03 with OVNCNI, the container cannot access the CNI port. This is due to the SRIOV port being set by changing the network used by the container from default to a custom network. This overwrites the OVNCNI network setting configured before this to enable the container to work with OVNCNI. Another issue with the SRIOV, is that this also overwrites the network configuration with the EAA and edgedns, agents, which prevents the SRIOV enabled container from communicating with the agents.
- - Cannot remove Edge Node from Controller when its offline and traffic policy is configured or the app is deployed. -- **OpenNESS 20.06** - - On-Premises edge installation takes 1.5hrs because of the Docker image build for OVS-DPDK - - Network edge installation takes 1.5hrs because of docker image build for OVS-DPDK - - OpenNESS controller allows management NICs to be in the pool of configuration, which might allow configuration by mistake and thereby disconnect the node from master - - When using the SRIOV EPA feature added in 20.03 with OVNCNI, the container cannot access the CNI port. This is due to the SRIOV port being set by changing the network used by the container from default to a custom network, This overwrites the OVNCNI network setting configured prior to this to enable the container to work with OVNCNI. Another issue with the SRIOV, is that this also overwrites the network configuration with the EAA and edgedns, agents, which prevents the SRIOV enabled container from communicating with the agents. - - Cannot remove Edge Node from Controller when its offline and traffic policy is configured or app is deployed. - - Legacy OnPremises - Traffic rule creation: cannot parse filled and cleared fields - - There is an issue with using CDI when uploading VM images when CMK is enabled due to missing CMK taint toleration. The CDI upload pod does not get deployed and the `virtctl` plugin command times out waiting for the action to complete. A workaround for the issue is to invoke the CDI upload command, edit the taint toleration for the CDI upload to tolerate CMK, update the pod, create the PV, and let the pod run to completion. - - There is a known issue with cAdvisor which in certain scenarios occasionally fails to expose the metrics for the Prometheus endpoint. See the following GitHub\* link: https://github.com/google/cadvisor/issues/2537 -- **OpenNESS 20.09** - - Pod which uses hugepage get stuck in terminating state on deletion. This is a known issue on Kubernetes 1.18.x and is planned to be fixed in 1.19.x - - Calico cannot be used as secondary CNI with Multus in OpenNESS. It will work only as primary CNI. Calico must be the only network provider in each cluster. We do not currently support migrating a cluster with another network provider to use Calico networking. https://docs.projectcalico.org/getting-started/kubernetes/requirements - - collectd Cache telemetry using RDT does not work when RMD is enabled because of resource conflict. Workaround is to disable collectd RDT plugin when using RMD - this by default is implemented globally. With this workaround customers will be able to allocate the Cache but not use Cache related telemetry. In case where RMD is not being enabled customers who desire RDT telemetry can re-enable collectd RDT. - +## OpenNESS - 19.06 +There are no issues relevant to this release. + +## OpenNESS - 19.06.01 +There is one issue relevant to this release: it is not possible to remove the application from Edge Node in case of error during application deployment. The issue concerns applications in a Virtual Machine. 
+ +## OpenNESS - 19.09 +- Gateway in multi-node - will not work when multiple nodes share the same public IP (i.e., they are behind one common NAT) +- Ansible in K8s can cause problems when rerun on a machine: + - If script 02 is run again after all 3 scripts have completed, it will not remove all necessary K8s-related artifacts + - We would recommend cleaning up the installation on the node first + +## OpenNESS - 19.12 +- Gateway in multi-node - will not work when multiple nodes share the same public IP (i.e., they are behind one common NAT) +- OpenNESS On-Premises: Cannot remove a failed/disconnected edge node's information/state from the controller +- The CNCA API (4G & 5G) supported in this release is an early access reference implementation and does not support authentication +- Real-time kernel support has been temporarily disabled to address the Kubernetes 1.16.2 and Realtime kernel instability. + +## OpenNESS - 20.03 +- On-Premises edge installation takes more than 1.5 hours because of the Docker image build for OVS-DPDK +- Network edge installation takes more than 1.5 hours because of the Docker image build for OVS-DPDK +- OpenNESS controller allows management NICs to be in the pool of configuration, which might allow configuration by mistake, thereby disconnecting the node from the control plane +- When using the SRIOV EPA feature added in 20.03 with OVNCNI, the container cannot access the CNI port. This is due to the SRIOV port being set by changing the network used by the container from default to a custom network. This overwrites the OVNCNI network setting configured before this to enable the container to work with OVNCNI. Another issue with SRIOV is that this also overwrites the network configuration for the EAA and edgedns agents, which prevents the SRIOV-enabled container from communicating with the agents. +- Cannot remove an Edge Node from the Controller when it is offline and a traffic policy is configured or the app is deployed. + +## OpenNESS - 20.06 +- On-Premises edge installation takes 1.5 hours because of the Docker image build for OVS-DPDK +- Network edge installation takes 1.5 hours because of the Docker image build for OVS-DPDK +- OpenNESS controller allows management NICs to be in the pool of configuration, which might allow configuration by mistake and thereby disconnect the node from the control plane +- When using the SRIOV EPA feature added in 20.03 with OVNCNI, the container cannot access the CNI port. This is due to the SRIOV port being set by changing the network used by the container from default to a custom network. This overwrites the OVNCNI network setting configured prior to this to enable the container to work with OVNCNI. Another issue with SRIOV is that this also overwrites the network configuration for the EAA and edgedns agents, which prevents the SRIOV-enabled container from communicating with the agents. +- Cannot remove an Edge Node from the Controller when it is offline and a traffic policy is configured or an app is deployed. +- Legacy OnPremises - Traffic rule creation: cannot parse filled and cleared fields +- There is an issue with using CDI when uploading VM images when CMK is enabled due to missing CMK taint toleration. The CDI upload pod does not get deployed and the `virtctl` plugin command times out waiting for the action to complete. A workaround for the issue is to invoke the CDI upload command, edit the taint toleration for the CDI upload to tolerate CMK, update the pod, create the PV, and let the pod run to completion.
+- There is a known issue with cAdvisor, which in certain scenarios occasionally fails to expose the metrics for the Prometheus endpoint. See the following GitHub\* link: https://github.com/google/cadvisor/issues/2537 + +## OpenNESS - 20.09 +- Pods that use hugepages get stuck in the terminating state on deletion. This is a known issue on Kubernetes 1.18.x and is planned to be fixed in 1.19.x +- Calico cannot be used as a secondary CNI with Multus in OpenNESS. It will work only as the primary CNI. Calico must be the only network provider in each cluster. We do not currently support migrating a cluster with another network provider to use Calico networking. https://docs.projectcalico.org/getting-started/kubernetes/requirements +- Collectd cache telemetry using RDT does not work when RMD is enabled because of a resource conflict. The workaround is to disable the collectd RDT plugin when using RMD; this is implemented globally by default. With this workaround, customers can allocate cache but cannot use cache-related telemetry. If RMD is not enabled, customers who want RDT telemetry can re-enable collectd RDT. + +## OpenNESS - 20.12 +- cAdvisor CPU utilization on the Edge Node is high and can delay getting an interactive SSH session. A workaround is to remove cAdvisor, if it is not needed, using `helm uninstall cadvisor -n telemetry` +- When the KubeVirt Containerized Data Importer (CDI) upload pod is deployed with the Kube-OVN CNI, the pod's readiness probe fails and the pod never reaches the ready state. It is advised to use another CNI, such as Calico, when using CDI with OpenNESS +- Limitation of AF/NEF API usage: AF and NEF support only queued requests; hence, API calls should be made in sequence, one after another, using CNCA for deterministic responses. If API calls are made directly from multiple threads concurrently, the behavior is nondeterministic +- Telemetry deployment with PCM enabled will cause a deployment failure in single-node cluster deployments because the PCM dashboards for Grafana are not found + # Release Content -- **OpenNESS 19.06** OpenNESS Edge node, OpenNESS Controller, Common, Spec, and OpenNESS Applications. -- **OpenNESS 19.06.01** OpenNESS Edge node, OpenNESS Controller, Common, Spec, and OpenNESS Applications. -- **OpenNESS 19.09** OpenNESS Edge node, OpenNESS Controller, Common, Spec, and OpenNESS Applications. -- **OpenNESS 19.12** OpenNESS Edge node, OpenNESS Controller, Common, Spec, OpenNESS Applications, and Experience kit. -- **OpenNESS 20.03** OpenNESS Edge node, OpenNESS Controller, Common, Spec, OpenNESS Applications, and Experience kit. -- **OpenNESS 20.06** - - Open Source: Edge node, Controller, Epcforedge, Common, Spec, Applications, and Experience kit. - - IDO: IDO Edge node, IDO Controller, IDO Epcforedge, IDO Spec, and IDO Experience kit. -- **OpenNESS 20.09** - - Open Source: Edge node, Controller, Epcforedge, Common, Spec, Applications and Experience kit. - - IDO: IDO Edge node, IDO Controller, IDO Epcforedge, IDO Spec and IDO Experience kit.> Note: Application repo common to Open Source and IDO - >**NOTE**: Application repo common to Open Source and IDO - + +## OpenNESS - 19.06 +OpenNESS Edge node, OpenNESS Controller, Common, Spec, and OpenNESS Applications. + +## OpenNESS - 19.06.01 +OpenNESS Edge node, OpenNESS Controller, Common, Spec, and OpenNESS Applications. + +## OpenNESS - 19.09 +OpenNESS Edge node, OpenNESS Controller, Common, Spec, and OpenNESS Applications.
+ +## OpenNESS - 19.12 +OpenNESS Edge node, OpenNESS Controller, Common, Spec, OpenNESS Applications, and Experience kit. + +## OpenNESS - 20.03 +OpenNESS Edge node, OpenNESS Controller, Common, Spec, OpenNESS Applications, and Experience kit. + +## OpenNESS - 20.06 +- Open Source: Edge node, Controller, Epcforedge, Common, Spec, Applications, and Experience kit. +- IDO: IDO Edge node, IDO Controller, IDO Epcforedge, IDO Spec, and IDO Experience kit. + +## OpenNESS - 20.09 +- Open Source: Edge node, Controller, Epcforedge, Common, Spec, Applications and Experience kit. +- IDO: IDO Edge node, IDO Controller, IDO Epcforedge, IDO Spec and IDO Experience kit. + +## OpenNESS - 20.12 +- Open Source: Edge node, Controller, Epcforedge, Common, Spec, Applications and Experience kit. +- IDO: IDO Edge node, IDO Controller, IDO Epcforedge, IDO Spec and IDO Experience kit. + +> **NOTE**: Edge applications repo is common to Open Source and IDO + # Hardware and Software Compatibility OpenNESS Edge Node has been tested using the following hardware specification: ## Intel® Xeon® D Processor - - Supermicro\* 3U form factor chassis server, product SKU code: 835TQ-R920B - - Motherboard type: [X11SDV-16C-TP8F](https://www.supermicro.com/products/motherboard/Xeon/D/X11SDV-16C-TP8F.cfm) - - Intel® Xeon® Processor D-2183IT +- Supermicro\* 3U form factor chassis server, product SKU code: 835TQ-R920B +- Motherboard type: [X11SDV-16C-TP8F](https://www.supermicro.com/products/motherboard/Xeon/D/X11SDV-16C-TP8F.cfm) +- Intel® Xeon® Processor D-2183IT + ## 2nd Generation Intel® Xeon® Scalable Processors - - -| | | -|------------------|---------------------------------------------------------------| -| CLX-SP | Compute Node based on CLX-SP(6252N) | -| Board | S2600WFT server board | -| | 2 x Intel(R) Xeon(R) Gold 6252N CPU @ 2.30GHz | -| | 2 x associated Heatsink | -| Memory | 12x Micron 16GB DDR4 2400MHz DIMMS * [2666 for PnP] | -| Chassis | 2U Rackmount Server Enclosure | -| Storage | Intel M.2 SSDSCKJW360H6 360G | -| NIC | 1x Intel Fortville NIC X710DA4 SFP+ ( PCIe card to CPU-0) | -| QAT | Intel Quick Assist Adapter Device 37c8 | -| | (Symmetrical design) LBG integrated | -| NIC on board | Intel-Ethernet-Controller-I210 (for management) | -| Other card | 2x PCIe Riser cards | + +| | | +| ------------ | ---------------------------------------------------------- | +| CLX-SP | Compute Node based on CLX-SP(6252N) | +| Board | S2600WFT server board | +| | 2 x Intel® Xeon® Gold 6252N CPU @ 2.30GHz | +| | 2 x associated Heatsink | +| Memory | 12x Micron 16GB DDR4 2400MHz DIMMS* [2666 for PnP] | +| Chassis | 2U Rackmount Server Enclosure | +| Storage | Intel M.2 SSDSCKJW360H6 360G | +| NIC | 1x Intel® Fortville NIC X710DA4 SFP+ ( PCIe card to CPU-0) | +| QAT | Intel® Quick Assist Adapter Device 37c8 | +| | (Symmetrical design) LBG integrated | +| NIC on board | Intel-Ethernet-Controller-I210 (for management) | +| Other card | 2x PCIe Riser cards | ## Intel® Xeon® Scalable Processors -| | | -|------------------|---------------------------------------------------------------| -| SKX-SP | Compute Node based on SKX-SP(6148) | -| Board | WolfPass S2600WFQ server board(symmetrical QAT)CPU | -| | 2 x Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz | -| | 2 x associated Heatsink | -| Memory | 12x Micron 16GB DDR4 2400MHz DIMMS * [2666 for PnP] | -| Chassis | 2U Rackmount Server Enclosure | -| Storage | Intel M.2 SSDSCKJW360H6 360G | -| NIC | 1x Intel Fortville NIC X710DA4 SFP+ ( PCIe card to CPU-0) | -| QAT | Intel Quick Assist Adapter 
Device 37c8 | -| | (Symmetrical design) LBG integrated | -| NIC on board | Intel-Ethernet-Controller-I210 (for management) | -| Other card | 2x PCIe Riser cards | -| HDDL-R | [Mouser Mustang-V100](https://www.mouser.ie/datasheet/2/763/Mustang-V100_brochure-1526472.pdf) | -| VCAC-A | [VCAC-A Accelerator for Media Analytics](https://www.intel.com/content/dam/www/public/us/en/documents/datasheets/media-analytics-vcac-a-accelerator-card-by-celestica-datasheet.pdf) | -| PAC-N3000 | [Intel® FPGA Programmable Acceleration Card (Intel® FPGA PAC) N3000 ](https://www.intel.com/content/www/us/en/programmable/products/boards_and_kits/dev-kits/altera/intel-fpga-pac-n3000/overview.html) | +| | | +| ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| SKX-SP | Compute Node based on SKX-SP(6148) | +| Board | WolfPass S2600WFQ server board(symmetrical QAT)CPU | +| | 2 x Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz | +| | 2 x associated Heatsink | +| Memory | 12x Micron 16GB DDR4 2400MHz DIMMS* [2666 for PnP] | +| Chassis | 2U Rackmount Server Enclosure | +| Storage | Intel® M.2 SSDSCKJW360H6 360G | +| NIC | 1x Intel® Fortville NIC X710DA4 SFP+ ( PCIe card to CPU-0) | +| QAT | Intel® Quick Assist Adapter Device 37c8 | +| | (Symmetrical design) LBG integrated | +| NIC on board | Intel-Ethernet-Controller-I210 (for management) | +| Other card | 2x PCIe Riser cards | +| HDDL-R | [Mouser Mustang-V100](https://www.mouser.ie/datasheet/2/763/Mustang-V100_brochure-1526472.pdf) | +| VCAC-A | [VCAC-A Accelerator for Media Analytics](https://www.intel.com/content/dam/www/public/us/en/documents/datasheets/media-analytics-vcac-a-accelerator-card-by-celestica-datasheet.pdf) | +| PAC-N3000 | [Intel® FPGA Programmable Acceleration Card (Intel® FPGA PAC) N3000 ](https://www.intel.com/content/www/us/en/programmable/products/boards_and_kits/dev-kits/altera/intel-fpga-pac-n3000/overview.html) | +| ACC100 | [Intel® vRAN Dedicated Accelerator ACC100](https://networkbuilders.intel.com/solutionslibrary/intel-vran-dedicated-accelerator-acc100-product-brief) | # Supported Operating Systems -> OpenNESS was tested on CentOS Linux release 7.6.1810 (Core) : Note: OpenNESS is tested with CentOS 7.6 Pre-empt RT kernel to ensure VNFs and Applications can co-exist. There is no requirement from OpenNESS software to run on a Pre-empt RT kernel. + +OpenNESS was tested on CentOS Linux release 7.8.2003 (Core) +> **NOTE**: OpenNESS is tested with CentOS 7.8 Pre-empt RT kernel to ensure VNFs and Applications can co-exist. There is no requirement from OpenNESS software to run on a Pre-empt RT kernel. 
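As a quick sanity check before installation, the operating system and kernel of a target node can be verified from a shell; a minimal sketch (the exact RT kernel version string varies by installation):

```shell
# Confirm the CentOS release used for validation (expected 7.8.2003)
cat /etc/centos-release

# Confirm which kernel is running; preempt-RT kernels carry an "rt" tag in the version string
uname -r
```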
# Packages Version -Package: telemetry, cadvisor 0.36.0, grafana 7.0.3, prometheus 2.16.0, prometheus: node exporter 1.0.0-rc.0, tas 0., golang 1.14.9, docker 19.03.12, kubernetes 1.18.4, dpdk 18.11.6, ovs 2.12.0, ovn 2.12.0, helm 3.0, kubeovn 1.0.1, flannel 0.12.0, calico 3.14.0 , multus 3.6, sriov cni 2.3, nfd 0.6.0, cmk v1.4.1 TAS we build from specific commit “a13708825e854da919c6fdf05d50753113d04831” \ No newline at end of file + +Package: telemetry, cadvisor 0.36.0, grafana 7.0.3, prometheus 2.16.0, prometheus: node exporter 1.0.0-rc.0, golang 1.15, docker 19.03.12, kubernetes 1.19.3, dpdk 19.11, ovs 2.14.0, ovn 2.14.0, helm 3.0, kubeovn 1.5.2, flannel 0.12.0, calico 3.16.0, multus 3.6, sriov cni 2.3, nfd 0.6.0, cmk v1.4.1, TAS (from specific commit "a13708825e854da919c6fdf05d50753113d04831")
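For convenience, the versions listed above can be cross-checked on a deployed cluster; a minimal sketch assuming the standard CLIs are available on the control-plane node:

```shell
# Kubernetes client and server versions (expected 1.19.3)
kubectl version --short

# Container runtime and Helm versions (expected Docker 19.03.12 and Helm 3.x)
docker version --format '{{.Server.Version}}'
helm version --short

# Image tags of selected telemetry and CNI components running in the cluster
kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}' \
  | grep -Ei 'cadvisor|grafana|prometheus|kube-ovn|multus'
```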