diff --git a/.gitignore b/.gitignore new file mode 100644 index 00000000..9e4a2100 --- /dev/null +++ b/.gitignore @@ -0,0 +1,2 @@ + +*.pdf diff --git a/README.md b/README.md index 3a505b92..09e2057a 100644 --- a/README.md +++ b/README.md @@ -1,6 +1,6 @@ ```text SPDX-License-Identifier: Apache-2.0 -Copyright (c) 2019 Intel Corporation +Copyright (c) 2019-2020 Intel Corporation ``` # OpenNESS Quick Start @@ -40,11 +40,17 @@ Below is the complete list of OpenNESS solution documentation * [openness-interface-service.md: Using network interfaces management service](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/openness-interface-service.md) * [using-openness-cnca.md: Steps for configuring 4G CUPS or 5G Application Function for Edge deployment for Network and OnPremises Edge](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/using-openness-cnca.md) +## Radio Access Network (RAN) +* [ran: Folder containing details of 4G and 5G RAN deployment support](https://github.com/open-ness/specs/tree/master/doc/ran) + * [openness_ran.md: Whitepaper detailing the 4G and 5G RAN deployment support on OpenNESS for Network Edge](https://github.com/open-ness/specs/blob/master/doc/ran/openness_ran.md) + + ## Core Network - 4G and 5G * [core-network: Folder containing details of 4G CUPS and 5G edge cloud deployment support](https://github.com/open-ness/specs/tree/master/doc/core-network) * [openness_epc.md: Whitepaper detailing the 4G CUPS support for Edge cloud deployment in OpenNESS for Network and OnPremises Edge](https://github.com/open-ness/specs/blob/master/doc/core-network/openness_epc.md) * [openness_ngc.md: Whitepaper detailing the 5G Edge Cloud deployment support in OpenNESS for Network and OnPremises Edge](https://github.com/open-ness/specs/blob/master/doc/core-network/openness_ngc.md) + * [openness_upf.md: Whitepaper detailing the UPF, AF, NEF deployment support on OpenNESS for Network Edge](https://github.com/open-ness/specs/blob/master/doc/core-network/openness_upf.md) ## Enhanced Platform Awareness @@ -57,6 +63,10 @@ Below is the complete list of OpenNESS solution documentation * [openness-fpga.md: Dedicated FPGA IP resource allocation support for Edge Applications and Network Functions](https://github.com/open-ness/specs/blob/master/doc/enhanced-platform-awareness/openness-fpga.md) * [openness_hddl.md: Using Intel® Movidius™ Myriad™ X High Density Deep Learning (HDDL) solution in OpenNESS](https://github.com/open-ness/specs/blob/master/doc/enhanced-platform-awareness/openness_hddl.md) * [openness-topology-manager.md: Resource Locality awareness support through Topology manager in OpenNESS](https://github.com/open-ness/specs/blob/master/doc/enhanced-platform-awareness/openness-topology-manager.md) + * [openness-environment-variables.md: Environment Variable configuration support in OpenNESS](https://github.com/open-ness/specs/blob/master/doc/enhanced-platform-awareness/openness-environment-variables.md) + * [openness-tunable-exec.md: Configurable startup command support for containers in OpenNESS](https://github.com/open-ness/specs/blob/master/doc/enhanced-platform-awareness/openness-tunable-exec.md) + * [openness-port-forward.md: Support for setting up port forwarding of a container in OpenNESS On-Prem mode](https://github.com/open-ness/specs/blob/master/doc/enhanced-platform-awareness/openness-port-forward.md) + * [openness-shared-storage.md: Shared storage for containers in OpenNESS On-Prem 
mode](https://github.com/open-ness/specs/blob/master/doc/enhanced-platform-awareness/openness-shared-storage.md) ## Dataplane @@ -64,6 +74,7 @@ Below is the complete list of OpenNESS solution documentation * [openness-interapp.md: InterApp Communication support in OpenNESS](https://github.com/open-ness/specs/blob/master/doc/dataplane/openness-interapp.md) * [openness-ovn.md: OpenNESS Support for OVS as dataplane with OVN](https://github.com/open-ness/specs/blob/master/doc/dataplane/openness-ovn.md) * [openness-nts.md: Dataplane support for Edge Cloud between ENB and EPC (S1-U) Deployment](https://github.com/open-ness/specs/blob/master/doc/dataplane/openness-nts.md) + * [openness-userspace-cni.md: Userspace CNI - Container Network Interface Kubernetes plugin](https://github.com/open-ness/specs/blob/master/doc/dataplane/openness-userspace-cni.md) ## Edge Applications diff --git a/doc/applications-onboard/network-edge-app-onboarding-images/ovc-smartcity-setup.png b/doc/applications-onboard/network-edge-app-onboarding-images/ovc-smartcity-setup.png index bceda2dc..0173827a 100644 Binary files a/doc/applications-onboard/network-edge-app-onboarding-images/ovc-smartcity-setup.png and b/doc/applications-onboard/network-edge-app-onboarding-images/ovc-smartcity-setup.png differ diff --git a/doc/applications-onboard/network-edge-applications-onboarding.md b/doc/applications-onboard/network-edge-applications-onboarding.md index 0adfa919..9b034869 100644 --- a/doc/applications-onboard/network-edge-applications-onboarding.md +++ b/doc/applications-onboard/network-edge-applications-onboarding.md @@ -1,6 +1,6 @@ ```text SPDX-License-Identifier: Apache-2.0 -Copyright (c) 2019 Intel Corporation +Copyright (c) 2019-2020 Intel Corporation ``` - [Introduction](#introduction) @@ -23,8 +23,9 @@ Copyright (c) 2019 Intel Corporation - [Installing OpenNESS](#installing-openness) - [Building Smart City ingredients](#building-smart-city-ingredients) - [Running Smart City](#running-smart-city) -- [Inter Application Communication](#inter-application-communication) + - [Inter Application Communication](#inter-application-communication) - [Enhanced Platform Awareness](#enhanced-platform-awareness) +- [VM support for Network Edge](#vm-support-for-network-edge) - [Troubleshooting](#troubleshooting) - [Useful Commands:](#useful-commands) @@ -255,6 +256,22 @@ The purpose of this section is to guide the user on the complete process of onbo ip a a 192.168.1.10/24 dev route add -net 10.16.0.0/24 gw 192.168.1.1 dev ``` + + > **NOTE:** The subnet `192.168.1.0/24` is allocated by Ansible playbook to the physical interface which is attached to the first edge node. The second edge node joined to the cluster is allocated the next subnet `192.168.2.0/24` and so on. + + > **NOTE:** To identify which subnet is allocated to which node, use this command: + > ```shell + > $ kubectl get subnets + > NAME PROTOCOL CIDR PRIVATE NAT DEFAULT GATEWAYTYPE USED AVAILABLE + > jfsdm001-local IPv4 192.168.1.0/24 false false false distributed 0 255 + > jfsdm002-local IPv4 192.168.2.0/24 false false false distributed 0 255 + > ... + > ``` + > + > The list presents which subnet (CIDR) is bridged to which edgenode, e.g: node `jfsdm001` is bridged to subnet `192.168.1.0/24` and node `jfsdm002` is bridged to subnet `192.168.2.0/24` + + > **NOTE:** Ingress traffic originating from `192.168.1.0/24` can *only* reach the pods deployed on `jfsdm001`, and similarly for `192.168.2.0/24` can reach the pods deployed on `jfsdm002`. + 2. 
From the Edge Controller, set up the interface service to connect the Edge Node's physical interface used for the communication between Edge Node and traffic generating host to OVS. This allows the Client Simulator to communicate with the OpenVINO application K8s Pod located on the Edge Node (sample output separated by `"..."`, PCI Bus Function ID of the interface used my vary). ``` kubectl interfaceservice get @@ -275,11 +292,11 @@ The purpose of this section is to guide the user on the complete process of onbo ## Deploying the Application -1. An application `yaml` specification file for the OpenVINO producer used to deploy the K8s pod can be found in the Edge Apps repository at [./openvino/producer/openvino-prod-app.yaml](https://github.com/open-ness/edgeapps/blob/master/openvino/producer/openvino-prod-app.yaml). The pod will use the Docker image which must be [built](#building-openvino-application-images) and available on the platform. Deploy the producer application by running: +1. An application `yaml` specification file for the OpenVINO producer used to deploy the K8s pod can be found in the Edge Apps repository at [./applications/openvino/producer/openvino-prod-app.yaml](https://github.com/open-ness/edgeapps/blob/master/applications/openvino/producer/openvino-prod-app.yaml). The pod will use the Docker image which must be [built](#building-openvino-application-images) and available on the platform. Deploy the producer application by running: ``` kubectl apply -f openvino-prod-app.yaml ``` -2. An application `yaml` specification file for the OpenVINO consumer used to deploy K8s pod can be found in the Edge Apps repository at [./build/openvino/producer/openvino-cons-app.yaml](https://github.com/open-ness/edgeapps/blob/master/openvino/producer/openvino-cons-app.yaml). The pod will use the Docker image which must be [built](#building-openvino-application-images) and available on the platform. Deploy consumer application by running: +2. An application `yaml` specification file for the OpenVINO consumer used to deploy K8s pod can be found in the Edge Apps repository at [./applications/openvino/consumer/openvino-cons-app.yaml](https://github.com/open-ness/edgeapps/blob/master/applications/openvino/consumer/openvino-cons-app.yaml). The pod will use the Docker image which must be [built](#building-openvino-application-images) and available on the platform. Deploy consumer application by running: ``` kubectl apply -f openvino-cons-app.yaml ``` @@ -379,7 +396,7 @@ The following is an example of how to set up DNS resolution for OpenVINO consume dig openvino.openness ``` 3. On the traffic generating host build the image for the [Client Simulator](#building-openvino-application-images) -4. Run the following from [edgeapps/openvino/clientsim](https://github.com/open-ness/edgeapps/blob/master/openvino/clientsim/run-docker.sh) to start the video traffic via the containerized Client Simulator. Graphical user environment is required to observed the results of the returning augmented videos stream. +4. Run the following from [edgeapps/applications/openvino/clientsim](https://github.com/open-ness/edgeapps/blob/master/applications/openvino/clientsim/run-docker.sh) to start the video traffic via the containerized Client Simulator. Graphical user environment is required to observed the results of the returning augmented videos stream. 
``` ./run_docker.sh ``` @@ -389,11 +406,16 @@ The following is an example of how to set up DNS resolution for OpenVINO consume > $ setenforce 0 > ``` -> **NOTE:** If the video window is not popping up and/or an error like `Could not find codec parameters for stream 0` appears, add a rule in firewall to permit ingress traffic on port `5001`: +> **NOTE:** If the video window is not popping up and/or an error like `Could not find codec parameters for stream 0` appears, 1) check with tcpdump whether traffic is being received on port `5001`, and 2) if `host administratively prohibited` is observed, add a rule in the firewall to permit ingress traffic on port `5001`: > ```shell -> firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 0 -p tcp --dport 5001 -j ACCEPT +> firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 0 -p udp --dport 5001 -j ACCEPT > firewall-cmd --reload > ``` +> or shut down the firewall directly: +> ```shell +> systemctl stop firewalld.service +> ``` + # Onboarding Smart City Sample Application @@ -401,7 +423,7 @@ Smart City sample application is a sample applications that is built on top of t The full pipeline of the Smart City sample application on OpenNESS is distributed across three regions: - 1. Client-side Cameras Simulator + 1. Client-side Cameras Simulator(s) 2. OpenNESS Cluster 3. Smart City Cloud Cluster @@ -415,74 +437,91 @@ _Figure - Smart City Setup with OpenNESS_ ## Installing OpenNESS The OpenNESS must be installed before going forward with Smart City application deployment. Installation is performed through [OpenNESS playbooks](https://github.com/open-ness/specs/blob/master/doc/getting-started/network-edge/controller-edge-node-setup.md). -> **NOTE**: At the time of writing this guide, there was no [Network Policy for Kubernetes](https://kubernetes.io/docs/concepts/services-networking/network-policies/) defined yet for the Smart City application. So, it is advised to remove the default OpenNESS network policy using this command: +> **NOTE**: At the time of writing this guide, there was no [Network Policy for Kubernetes](https://kubernetes.io/docs/concepts/services-networking/network-policies/) defined yet for the Smart City application. So, it is advised to remove the default OpenNESS network policies using this command: > ```shell -> kubectl delete netpol block-all-ingress +> kubectl delete netpol block-all-ingress cdi-upload-proxy-policy > ``` From the OpenNESS Controller, attach the physical ethernet interface to be used for dataplane traffic using the `interfaceservice` kubectl plugin by providing the office hostname and the PCI Function ID corresponding to the ethernet interface (the PCI ID below is just a sample and may vary on other setups): ```shell -kubectl interfaceservice get +kubectl interfaceservice get ... 0000:86:00.0 | 3c:fd:fe:b2:42:d0 | detached ... -kubectl interfaceservice attach 0000:86:00.0 +kubectl interfaceservice attach 0000:86:00.0 ... Interface: 0000:86:00.0 successfully attached ... -kubectl interfaceservice get +kubectl interfaceservice get ... 0000:86:00.0 | 3c:fd:fe:b2:42:d0 | attached ... ``` +> **NOTE:** When adding office 2 and so on, attach their corresponding physical interfaces accordingly. + ## Building Smart City ingredients -1. Clone the Smart City Reference Pipeline source code from [GitHub](https://github.com/OpenVisualCloud/Smart-City-Sample.git) to: (1) Camera simulator machine, (2) OpenNESS Controller machine, and (3) Smart City cloud master machine. + 1.
Clone the Smart City Reference Pipeline source code from [GitHub](https://github.com/OpenVisualCloud/Smart-City-Sample.git) to: (1) Camera simulator machines, (2) OpenNESS Controller machine, and (3) Smart City cloud master machine. -2. Build the Smart City application on each of the 3 machines as explained in [Smart City deployment on OpenNESS](https://github.com/OpenVisualCloud/Smart-City-Sample/tree/openness-k8s/deployment/openness). At least 2 offices must be installed on OpenNESS. + 2. Build the Smart City application on all of the machines as explained in [Smart City deployment on OpenNESS](https://github.com/OpenVisualCloud/Smart-City-Sample/tree/openness-k8s/deployment/openness). At least 2 offices (edge nodes) must be installed on OpenNESS. ## Running Smart City -1. On the Camera simulator machine, assign IP address to the ethernet interface which the dataplane traffic will be transmitted through to the edge office1 node: - ```shell - ip a a 192.168.1.10/24 dev - route add -net 10.16.0.0/24 gw 192.168.1.1 dev - ``` + 1. On the Camera simulator machines, assign IP address to the ethernet interface which the dataplane traffic will be transmitted through to the edge office1 & office2 nodes: + + On camera-sim1: + ```shell + ip a a 192.168.1.10/24 dev + route add -net 10.16.0.0/24 gw 192.168.1.1 dev + ``` -2. On the Camera simulator machine, run the camera simulator containers - ```shell - make start_openness_camera - ``` + > **NOTE:** When adding office 2 and so on, change the CIDR (i.e: `192.168.1.0/24`) to corresponding subnet. Allocated subnets to individual offices can be retrieved by entering this command in the OpenNESS controller shell: + > ```shell + > kubectl get subnets + > ``` + > + > The subnet name represents the node which is allocated to it and appended with `-local`. -3. On the Smart City cloud master machine, run the Smart City cloud containers - ```shell - make start_openness_cloud - ``` - > **NOTE**: At the time of writing this guide, there was no firewall rules defined for the Camera simulator & Smart City cloud containers. If none is defined, firewall must be stopped or disabled before continuing. All communication back to the office nodes will be blocked. Run the below on both machines. - > ```shell - > systemctl stop firewalld - > ``` + On camera-sim2: + ```shell + ip a a 192.168.2.10/24 dev + route add -net 10.16.0.0/24 gw 192.168.2.1 dev + ``` - > **NOTE**: Do not stop firewall on OpenNESS nodes. + 2. On the Camera simulator machines, run the camera simulator containers + ```shell + make start_openness_camera + ``` -4. On the OpenNESS Controller machine, build & run the Smart City cloud containers - ```shell - export CAMERA_HOST=192.168.1.10 - export CLOUD_HOST= + 3. On the Smart City cloud master machine, run the Smart City cloud containers + ```shell + make start_openness_cloud + ``` - make - make update - make start_openness_office - ``` + > **NOTE**: At the time of writing this guide, there was no firewall rules defined for the camera simulators & Smart City cloud containers. If none is defined, firewall must be stopped or disabled before continuing. All communication back to the office nodes will be blocked. Run the below on both machines. + > ```shell + > systemctl stop firewalld + > ``` - > **NOTE**: `` is where the Smart City cloud master machine can be reached on the management/cloud network. + > **NOTE**: Do not stop firewall on OpenNESS nodes. -5. From the web browser, launch the Smart City web UI at URL `https:///` + 4. 
On the OpenNESS Controller machine, build & run the Smart City cloud containers + ```shell + export CAMERA_HOSTS=192.168.1.10,192.168.2.10 + export CLOUD_HOST= + make + make update + make start_openness_office + ``` + + > **NOTE**: `` is where the Smart City cloud master machine can be reached on the management/cloud network. + + 5. From the web browser, launch the Smart City web UI at URL `https:///` ## Inter Application Communication The IAC is available via the default overlay network used by Kubernetes - Kube-OVN. @@ -491,7 +530,10 @@ For more information on Kube-OVN refer to the Kube-OVN support in OpenNESS [docu # Enhanced Platform Awareness Enhanced platform awareness is supported in OpenNESS via the use of the Kubernetes NFD plugin. This plugin is enabled in OpenNESS for Network Edge by default please refer to the [NFD whitepaper](https://github.com/open-ness/specs/blob/master/doc/enhanced-platform-awareness/openness-node-feature-discovery.md) for information on how to make your application pods aware of the supported platform capabilities. -Refer to [supported-epa.md](https://github.com/open-ness/specs/blob/master/doc/getting-started/network-edge/supported-epa.md) for the list of supported EPA features on OpenNESS network edge. +Refer to [supported-epa.md](https://github.com/open-ness/specs/blob/master/doc/getting-started/network-edge/supported-epa.md) for the list of supported EPA features on OpenNESS network edge. + +# VM support for Network Edge +Support for VM deployment on OpenNESS for Network Edge is available and enabled by default, certain configuration and pre-requisites may need to be fulfilled in order to use all capabilities. For information on application deployment in VM please see [openness-network-edge-vm-support.md](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/openness-network-edge-vm-support.md). # Troubleshooting In this sections steps for debugging Edge applications in Network Edge will be covered. 
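+As a starting point, the state, events, and logs of an application pod can be inspected with standard `kubectl` commands. A minimal sketch is shown below; it uses the OpenVINO consumer pod from the earlier sections as an example, so substitute the name of the pod being debugged:
+
+```shell
+kubectl get pods -o wide                      # confirm the application pod is Running and note the node it runs on
+kubectl describe pod openvino-cons-app        # inspect pod events, e.g. image pull or scheduling failures
+kubectl logs openvino-cons-app                # view the application logs
+kubectl exec -it openvino-cons-app -- ip a    # check the addresses assigned inside the pod
+```
+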
diff --git a/doc/applications-onboard/on-premises-app-onboarding-images/AddingTrafficPolicyToInterface1.png b/doc/applications-onboard/on-premises-app-onboarding-images/AddingTrafficPolicyToInterface1.png new file mode 100644 index 00000000..c141aa38 Binary files /dev/null and b/doc/applications-onboard/on-premises-app-onboarding-images/AddingTrafficPolicyToInterface1.png differ diff --git a/doc/applications-onboard/on-premises-app-onboarding-images/AddingTrafficPolicyToInterface2.png b/doc/applications-onboard/on-premises-app-onboarding-images/AddingTrafficPolicyToInterface2.png new file mode 100644 index 00000000..9a3695c7 Binary files /dev/null and b/doc/applications-onboard/on-premises-app-onboarding-images/AddingTrafficPolicyToInterface2.png differ diff --git a/doc/applications-onboard/on-premises-app-onboarding-images/AddingTrafficPolicyToInterface3.png b/doc/applications-onboard/on-premises-app-onboarding-images/AddingTrafficPolicyToInterface3.png new file mode 100644 index 00000000..dc93d487 Binary files /dev/null and b/doc/applications-onboard/on-premises-app-onboarding-images/AddingTrafficPolicyToInterface3.png differ diff --git a/doc/applications-onboard/on-premises-app-onboarding-images/CreatingTrafficPolicy.png b/doc/applications-onboard/on-premises-app-onboarding-images/CreatingTrafficPolicy.png new file mode 100644 index 00000000..204ddac1 Binary files /dev/null and b/doc/applications-onboard/on-premises-app-onboarding-images/CreatingTrafficPolicy.png differ diff --git a/doc/applications-onboard/on-premises-app-onboarding-images/CreatingTrafficPolicy2.png b/doc/applications-onboard/on-premises-app-onboarding-images/CreatingTrafficPolicy2.png new file mode 100644 index 00000000..8dbad754 Binary files /dev/null and b/doc/applications-onboard/on-premises-app-onboarding-images/CreatingTrafficPolicy2.png differ diff --git a/doc/applications-onboard/on-premises-app-onboarding-images/CreatingTrafficPolicy3.png b/doc/applications-onboard/on-premises-app-onboarding-images/CreatingTrafficPolicy3.png new file mode 100644 index 00000000..44d878d1 Binary files /dev/null and b/doc/applications-onboard/on-premises-app-onboarding-images/CreatingTrafficPolicy3.png differ diff --git a/doc/applications-onboard/on-premises-applications-onboarding.md b/doc/applications-onboard/on-premises-applications-onboarding.md index 95a47980..ebc5a8e1 100644 --- a/doc/applications-onboard/on-premises-applications-onboarding.md +++ b/doc/applications-onboard/on-premises-applications-onboarding.md @@ -8,6 +8,9 @@ Copyright (c) 2019 Intel Corporation - [How to setup apache step by step for IP address](#how-to-setup-apache-step-by-step-for-ip-address) - [Instruction to generate certificate for a domain](#instruction-to-generate-certificate-for-a-domain) - [Instruction to upload and access images](#instruction-to-upload-and-access-images) + - [Instruction to create Traffic Policy and assign it to Interface](#instruction-to-create-traffic-policy-and-assign-it-to-interface) + - [Creating Traffic Policy](#creating-traffic-policy) + - [Adding Traffic Policy to Interface](#adding-traffic-policy-to-interface) - [Building Applications](#building-applications) - [Building the OpenVINO Application images](#building-the-openvino-application-images) - [Onboarding applications](#onboarding-applications) @@ -84,8 +87,79 @@ chmod a+r /var/www/html/* ``` The URL (Source in Controller UI) should be constructed as: `https://controller_hostname/test_image.tar.gz` +### Instruction to create Traffic Policy and assign it to 
Interface + +#### Creating Traffic Policy + +Prerequisites: + +- Enrollment phase completed successfully. +- User is logged in to UI. + +The steps to create a sample traffic policy are as follows: + +1. From the UI navigate to the 'TRAFFIC POLICIES' tab. +2. Click 'ADD POLICY'. + +> Note: This specific traffic policy is only an example. + +![Creating Traffic Policy 1](on-premises-app-onboarding-images/CreatingTrafficPolicy.png) + +3. Give the policy a name. +4. Click 'ADD' next to the 'Traffic Rules*' field. +5. Fill in the following fields: + +- Description: "Sample Description" +- Priority: 99 +- Source -> IP Filter -> IP Address: 1.1.1.1 +- Source -> IP Filter -> Mask: 24 +- Source -> IP Filter -> Begin Port: 10 +- Source -> IP Filter -> End Port: 20 +- Source -> IP Filter -> Protocol: all +- Target -> Description: "Sample Description" +- Target -> Action: accept + +6. Click on "CREATE". + +![Creating Traffic Policy 2](on-premises-app-onboarding-images/CreatingTrafficPolicy2.png) + +After creating the Traffic Policy, it will be visible under 'List of Traffic Policies' in the 'TRAFFIC POLICIES' tab. + +![Creating Traffic Policy 3](on-premises-app-onboarding-images/CreatingTrafficPolicy3.png) + +#### Adding Traffic Policy to Interface + +Prerequisites: + +- Enrollment phase completed successfully. +- User is logged in to UI. +- Traffic Policy Created. + +To add a previously created traffic policy to an interface available on the Edge Node, the following steps need to be completed: + +1. From the UI navigate to the "NODES" tab. +2. Find the Edge Node on the 'List Of Edge Nodes'. +3. Click "EDIT". + +> Note: This step is instructional only; users can decide if they need/want a traffic policy designated for their interface, or if they desire a traffic policy designated per application instead. + +![Adding Traffic Policy To Interface 1](on-premises-app-onboarding-images/AddingTrafficPolicyToInterface1.png) + +4. Navigate to the "INTERFACES" tab. +5. Find the desired interface which will be used to add the traffic policy. +6. Click 'ADD' under the 'Traffic Policy' column for that interface. +7. A window titled 'Assign Traffic Policy to interface' will pop up. Select a previously created traffic policy. +8. Click on 'ASSIGN'. + +![Adding Traffic Policy To Interface 2](on-premises-app-onboarding-images/AddingTrafficPolicyToInterface2.png) + +On success, the user is able to see 'EDIT' and 'REMOVE POLICY' buttons under the 'Traffic Policy' column for the desired interface. These buttons can be used to edit and remove the traffic rule policy on that interface, respectively. + +![Adding Traffic Policy To Interface 3](on-premises-app-onboarding-images/AddingTrafficPolicyToInterface3.png) + # Building Applications -User needs to prepare the applications that will be deployed on the OpenNESS platform in OnPromises mode. Applications should be built as Docker images and should be hosted on some HTTPS server that is available to the EdgeNode. +The user needs to prepare the applications that will be deployed on the OpenNESS platform in OnPremises mode. Applications should be built as Docker container images or VirtualBox VM images and should be hosted on some HTTPS server that is available to the EdgeNode. The format for a Docker application image is .tar.gz; the format for a VirtualBox one is qcow2. +Currently the applications are limited to 4096 MB RAM and 8 cores. The memory limit can be raised up to 16384 MB in the eva.json file. The OpenNESS [EdgeApps](https://github.com/open-ness/edgeapps) repository provides images for OpenNESS supported applications.
They should be downloaded on machine where docker is installed. @@ -104,6 +178,8 @@ To build sample application Docker images for testing OpenVINO consumer and prod ``` ./build-image.sh ``` + **Note**: Default consumer inference process is using 'CPU 8' to avoid conflicts with NTS. If the desired CPU is changed, environmental variable `OPENVINO_TASKSET_CPU` must be set within Dockerfile available in the directory + **NOTE**: fwd.sh is using 'CPU 1'. If the desired CPU is changed, user can change fwd.sh accordingly. 3. Check that the images built successfully and are available in local Docker image registry: ``` docker images | grep openvino-prod-app @@ -247,6 +323,8 @@ This chapter describes how to deploy OpenVINO applications on OpenNESS platform ![Defining openvino traffic policy](on-premises-app-onboarding-images/openvino-policy2.png) +> Note: For creating Traffic Policy refer to [Instruction to create Traffic Policy and assign it to Interface](#instruction-to-create-traffic-policy-and-assign-it-to-interface) + 4. Move to the EdgeNode interfaces setup. It should be available under button `Edit` next to the EdgeNode position on Dashboard page. * Find the port that is directly connected to the OpenVINO client machine port (eg. 0000:04:00.1) * Edit interface settings: @@ -293,7 +371,7 @@ This chapter describes how to deploy OpenVINO applications on OpenNESS platform ### Starting traffic from Client Simulator 1. On the traffic generating host build the image for the [Client Simulator](#building-openvino-application-images), before building the image, in `tx_video.sh` in the directory containing the image Dockerfile edit the RTP endpoint with IP address of OpenVINO consumer application pod (to get IP address of the pod run: `kubectl exec -it openvino-cons-app ip a`) -2. Run the following from [edgeapps/openvino/clientsim](https://github.com/open-ness/edgeapps/blob/master/openvino/clientsim/run-docker.sh) to start the video traffic via the containerized Client Simulator. Graphical user environment is required to observe the results of the returning video stream. +2. Run the following from [edgeapps/applications/openvino/clientsim](https://github.com/open-ness/edgeapps/blob/master/applications/openvino/clientsim/run-docker.sh) to start the video traffic via the containerized Client Simulator. Graphical user environment is required to observe the results of the returning video stream. 
``` ./run_docker.sh ``` @@ -305,6 +383,6 @@ This chapter describes how to deploy OpenVINO applications on OpenNESS platform > **NOTE:** If the video window is not popping up and/or an error like `Could not find codec parameters for stream 0` appears, add a rule in firewall to permit ingress traffic on port `5001`: > ```shell -> firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 0 -p tcp --dport 5001 -j ACCEPT +> firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 0 -p udp --dport 5001 -j ACCEPT > firewall-cmd --reload > ``` diff --git a/doc/applications-onboard/openness-edgedns.md b/doc/applications-onboard/openness-edgedns.md index 5b1ab2d9..d1c64324 100644 --- a/doc/applications-onboard/openness-edgedns.md +++ b/doc/applications-onboard/openness-edgedns.md @@ -69,10 +69,10 @@ Additionally manual configuration needs to be run from a terminal on the EdgeNod Configure DNS container's KNI interface: ``` -docker exec -it ip link set dev vEth0 arp off -docker exec -it ip a a 53.53.53.53/24 dev vEth0 -docker exec -it ip link set dev vEth0 up -docker exec -it ip route add 192.168.200.0/24 dev vEth0 +docker exec -it sudo ip link set dev vEth0 arp off +docker exec -it sudo ip a a 53.53.53.53/24 dev vEth0 +docker exec -it sudo ip link set dev vEth0 up +docker exec -it sudo ip route add 192.168.200.0/24 dev vEth0 ``` Make a request on the DNS interface subnet to register the KNI interface with NTS client (press CTRL + C buttons as soon as a request is made (no expectation for hostname to resolve)): diff --git a/doc/applications-onboard/openness-interface-service.md b/doc/applications-onboard/openness-interface-service.md index a5653ae7..a382ca3c 100644 --- a/doc/applications-onboard/openness-interface-service.md +++ b/doc/applications-onboard/openness-interface-service.md @@ -1,57 +1,186 @@ SPDX-License-Identifier: Apache-2.0 -Copyright © 2019 Intel Corporation +Copyright © 2019-2020 Intel Corporation # OpenNESS Interface Service + - [Overview](#overview) - [Traffic from external host](#traffic-from-external-host) - [Usage](#usage) - - [Example usage](#example-usage) + - [Default parameters](#default-parameters) + - [Supported drivers](#supported-drivers) + - [Userspace (DPDK) bridge](#userspace-dpdk-bridge) + - [Hugepages (DPDK)](#hugepages-dpdk) + - [Examples](#examples) + - [Getting information about node's interfaces](#getting-information-about-nodes-interfaces) + - [Attaching kernel interfaces](#attaching-kernel-interfaces) + - [Attaching DPDK interfaces](#attaching-dpdk-interfaces) + - [Detaching interfaces](#detaching-interfaces) ## Overview -Interface service is an application running in K8s pod on each worker node of OpenNESS K8s cluster. It allows to attach additional network interfaces of the worker host to `br-local` OVS bridge, enabling external traffic scenarios for applications deployed in K8s pods. Services on each worker can be controlled from master node using kubectl plugin. +Interface service is an application running in Kubernetes pod on each worker node of OpenNESS Kubernetes cluster. It allows to attach additional network interfaces of the worker host to provided OVS bridge, enabling external traffic scenarios for applications deployed in Kubernetes pods. Services on each worker can be controlled from master node using kubectl plugin. + +Interface service can attach both kernel and userspace (DPDK) network interfaces to OVS bridges of suitable type. 
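+
+As a quick sanity check after deployment, the plugin can be invoked from the master node against any worker. A minimal sketch, assuming a worker node named `worker1`, is:
+
+```shell
+kubectl get nodes                      # find the worker node's hostname
+kubectl interfaceservice get worker1   # list that worker's interfaces and their attachment state
+```
+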
## Traffic from external host -Machines connected to attached interface can communicate with K8s pods of the worker node (`10.16.0.0/16` subnet) through `192.168.1.1` gateway. Therefore, correct address and routing should be used. Eg: +A machine (client-sim) that is physically connected to OpenNESS edge node over a cable would be able to communicate to the pods in the Kubernetes cluster when the physical network interface (through which the cable is attached) is bridged over to the Kubernetes cluster subnet. This is done by providing the PCI ID or MAC address to the `interfaceservice` kubectl plugin. + +The machine that is connected to the edge node must be configured as below in order to allow the traffic originating from the client-sim (`192.168.1.0/24` subnet) to be routed over to the Kubernetes cluster (`10.16.0.0/16` subnet). + +Update the physical ethernet interface with an IP from `192.168.1.0/24` subnet and the Linux IP routing table with the routing rule as: ```bash - ip a a 192.168.1.5/24 dev eth1 + ip a a 192.168.1.10/24 dev eth1 route add -net 10.16.0.0/16 gw 192.168.1.1 dev eth1 ``` -> NOTE: Default OpenNESS network policy applies to pods in `default` namespace and blocks all ingress traffic. Refer to [Kubernetes NetworkPolicies](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/openness_howto.md#kubernetes-networkpolicies) for example policy allowing ingress traffic from `192.168.1.0` subnet on specific port. +> **NOTE:** Default OpenNESS network policy applies to pods in `default` namespace and blocks all ingress traffic. Refer to [Kubernetes NetworkPolicies](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/network-edge-applications-onboarding.md#applying-kubernetes-network-policies) for example policy allowing ingress traffic from `192.168.1.0/24` subnet on specific port. + +> **NOTE:** The subnet `192.168.1.0/24` is allocated by Ansible playbook to the physical interface which is attached to the first edge node. The second edge node joined to the cluster is allocated the next subnet `192.168.2.0/24` and so on. + +> **NOTE:** To identify which subnet is allocated to which node, use this command: +> ```shell +> $ kubectl get subnets +> NAME PROTOCOL CIDR PRIVATE NAT DEFAULT GATEWAYTYPE USED AVAILABLE +> jfsdm001-local IPv4 192.168.1.0/24 false false false distributed 0 255 +> jfsdm002-local IPv4 192.168.2.0/24 false false false distributed 0 255 +> ... +> ``` +> +> The list presents which subnet (CIDR) is bridged to which edgenode, e.g: node `jfsdm001` is bridged to subnet `192.168.1.0/24` and node `jfsdm002` is bridged to subnet `192.168.2.0/24` + +> **NOTE:** Ingress traffic originating from `192.168.1.0/24` can *only* reach the pods deployed on `jfsdm001`, and similarly for `192.168.2.0/24` can reach the pods deployed on `jfsdm002`. ## Usage * `kubectl interfaceservice --help` to learn about usage * `kubectl interfaceservice get ` to list network interfaces of node -* `kubectl interfaceservice attach ` to attach interfaces to OVS br_local bridge +* `kubectl interfaceservice attach ` to attach interfaces to OVS bridge `` using specified `driver`. 
* `kubectl interfaceservice detach ` to detach interfaces from OVS br_local bridge > NOTE: `node_hostname` must be valid worker node name - can be found using `kubectl get nodes` -> NOTE: Invalid/non-existent PCI addreses passed to attach/detach requests will be ignored +> NOTE: Invalid/non-existent PCI addresses passed to attach/detach requests will be ignored + +## Default parameters + +The bridge and driver parameters are optional for the `attach` command. The default values are `br-local` for the OVS bridge and `kernel` for the driver, respectively. The user can omit both values, or only the driver. + +## Supported drivers -### Example usage +Currently the interface service supports the following values of the `driver` parameter: +- `kernel` - this will use the default kernel driver +- `dpdk` - the userspace driver `igb_uio` will be used -```bash - [root@master1 ~] kubectl interfaceservice get worker1 - 0000:86:00.0 | 3c:fd:fe:b2:42:d0 | detached - 0000:86:00.1 | 3c:fd:fe:b2:42:d1 | detached - 0000:86:00.2 | 3c:fd:fe:b2:42:d2 | detached - 0000:86:00.3 | 3c:fd:fe:b2:42:d3 | detached +> NOTE: Please remember that `dpdk` devices can only be attached to DPDK-enabled bridges, and `kernel` devices can only be attached to OVS `system` bridges. + +## Userspace (DPDK) bridge + +The default DPDK-enabled bridge `br-userspace` will only be available if OpenNESS was deployed with support for [Userspace CNI](https://github.com/open-ness/specs/blob/master/doc/dataplane/openness-userspace-cni.md) and at least one pod was deployed using Userspace CNI. You can check whether the `br-userspace` bridge exists by executing the following command on your node: + +```shell +ovs-vsctl list-br +``` + +The output may be similar to: + +```shell +[root@node01 ~]# ovs-vsctl list-br +br-int +br-local +br-userspace +``` + +If `br-userspace` does not exist, you can create it manually by executing the following on your node: + +```shell +ovs-vsctl add-br br-userspace -- set bridge br-userspace datapath_type=netdev +``` +## Hugepages (DPDK) - [root@master1 ~] kubectl interfaceservice attach worker1 0000:86:00.0,0000:86:00.1,0000:86:00.4,00:123:123 - Invalid PCI address: 00:123:123. Skipping... - Interface: 0000:86:00.4 not found. Skipping... - Interface: 0000:86:00.0 successfully attached - Interface: 0000:86:00.1 successfully attached +Please be aware that DPDK apps will require a specific amount of HugePages enabled. By default the Ansible scripts will enable 1024 of 2M HugePages in the system, and then start OVS-DPDK with 1GB of those HugePages reserved for NUMA node 0. If you would like to change these settings to reflect your specific requirements, please set the Ansible variables as defined in the example below. This example enables 4 of 1GB HugePages and assigns 2GB to OVS-DPDK, leaving 2 pages for DPDK applications that will be running in the pods. This example uses an Edge Node with 2 NUMA nodes, each one with 1GB of HugePages reserved. +```yaml +# network_edge.yml +- hosts: controller_group + vars: + hugepage_amount: "4" + +- hosts: edgenode_group + vars: + hugepage_amount: "4" +``` + +```yaml +# roles/machine_setup/grub/defaults/main.yml +hugepage_size: "1G" +``` + +>The variable `hugepage_amount` that can be found in `roles/machine_setup/grub/defaults/main.yml` can be left at its default value of `5000` as this value will be overridden by the values of the `hugepage_amount` variables that were set earlier in `network_edge.yml`. + +```yaml +# roles/kubernetes/cni/kubeovn/common/defaults/main.yml +ovs_dpdk_socket_mem: "1024,1024" # Will reserve 1024MB of hugepages for NUMA node 0 and NUMA node 1 respectively.
+ovs_dpdk_hugepage_size: "1Gi" # This is the size of a single hugepage to be used by DPDK. Can be 1Gi or 2Mi. +ovs_dpdk_hugepages: "2Gi" # This is the overall amount of hugepages available to DPDK. +``` + +> NOTE: A DPDK PCI device connected to a specific NUMA node cannot be attached to OVS if hugepages for this NUMA node are not reserved with the `ovs_dpdk_socket_mem` variable. + +## Examples + +### Getting information about node's interfaces +```shell +[root@master1 ~] kubectl interfaceservice get worker1 + +Kernel interfaces: + 0000:02:00.0 | 00:1e:67:d2:f2:06 | detached + 0000:02:00.1 | 00:1e:67:d2:f2:07 | detached + 0000:04:00.0 | a4:bf:01:02:20:c4 | detached + 0000:04:00.3 | a4:bf:01:02:20:c5 | detached + 0000:07:00.0 | 3c:fd:fe:a1:34:c8 | attached | br-local + 0000:07:00.2 | 3c:fd:fe:a1:34:ca | detached + 0000:07:00.3 | 3c:fd:fe:a1:34:cb | detached + 0000:82:00.0 | 68:05:ca:3a:a7:1c | detached + 0000:82:00.1 | 68:05:ca:3a:a7:1d | detached + +DPDK interfaces: + 0000:07:00.1 | attached | br-userspace +``` + +### Attaching kernel interfaces +```shell +[root@master1 ~] kubectl interfaceservice attach worker1 0000:07:00.2,0000:99:00.9,0000:07:00.3,00:123:123 br-local kernel +Invalid PCI address: 00:123:123. Skipping... +Interface: 0000:99:00.9 not found. Skipping... +Interface: 0000:07:00.2 successfully attached +Interface: 0000:07:00.3 successfully attached +``` + +Attaching to kernel-space bridges can be shortened to: + +```shell +kubectl interfaceservice attach worker1 0000:07:00.2 +``` +or: + +```shell +kubectl interfaceservice attach worker1 0000:07:00.2 bridge-name +``` + +### Attaching DPDK interfaces + +> NOTE: Please remember that the device that is intended to be attached to a DPDK bridge should initially use a kernel-space driver and should not be attached to any bridges. +```shell +[root@master1 ~] kubectl interfaceservice attach worker1 0000:07:00.2,0000:07:00.3 br-userspace dpdk +Interface: 0000:07:00.2 successfully attached +Interface: 0000:07:00.3 successfully attached +``` - [root@master1 ~] kubectl interfaceservice get worker1 - 0000:86:00.0 | 3c:fd:fe:b2:42:d0 | attached - 0000:86:00.1 | 3c:fd:fe:b2:42:d1 | attached - 0000:86:00.2 | 3c:fd:fe:b2:42:d2 | detached - 0000:86:00.3 | 3c:fd:fe:b2:42:d3 | detached +### Detaching interfaces +```shell +[root@master1 ~] kubectl interfaceservice detach worker1 0000:07:00.2,0000:07:00.3 +Interface: 0000:07:00.2 successfully detached +Interface: 0000:07:00.3 successfully detached ``` diff --git a/doc/applications-onboard/openness-network-edge-vm-support.md b/doc/applications-onboard/openness-network-edge-vm-support.md new file mode 100644 index 00000000..81449633 --- /dev/null +++ b/doc/applications-onboard/openness-network-edge-vm-support.md @@ -0,0 +1,446 @@ +```text +SPDX-License-Identifier: Apache-2.0 +Copyright (c) 2020 Intel Corporation +``` + +# VM support in OpenNESS for Network Edge - Setup, deployment, and management considerations.
+ +- [VM support in OpenNESS for Network Edge - Setup, deployment, and management considerations.](#vm-support-in-openness-for-network-edge---setup-deployment-and-management-considerations) + - [Overview](#overview) + - [KubeVirt](#kubevirt) + - [Stateless vs Stateful VMs](#stateless-vs-stateful-vms) + - [VMs with ephemeral storage](#vms-with-ephemeral-storage) + - [VMs with persistent Local Storage](#vms-with-persistent-local-storage) + - [VMs with Cloud Storage](#vms-with-cloud-storage) + - [Creating Docker image for stateless VM](#creating-docker-image-for-stateless-vm) + - [Enabling in OpenNESS](#enabling-in-openness) + - [VM deployment](#vm-deployment) + - [Stateless VM deployment](#stateless-vm-deployment) + - [Stateful VM deployment](#stateful-vm-deployment) + - [VM deployment with SRIOV NIC support](#vm-deployment-with-sriov-nic-support) + - [VM snapshot](#vm-snapshot) + - [Limitations](#limitations) + - [Cloud Storage](#cloud-storage) + - [Storage Orchestration and PV/PVC management](#storage-orchestration-and-pvpvc-management) + - [Snapshot Creation](#snapshot-creation) + - [Useful Commands and Troubleshooting](#useful-commands-and-troubleshooting) + - [Commands](#commands) + - [Troubleshooting](#troubleshooting) + - [Helpful Links](#helpful-links) + +## Overview + +This document explores the support of `VM` (Virtual machine) deployment in OpennNESS for Network Edge. Items covered include but are not limited to: design considerations and currently available solutions, limitations, configuration of the environment for OpenNESS, deployment of VMs with various requirements for storage and SRIOV support, and lastly troubleshooting. + +When designing support for VM deployment for Network Edge, a key consideration was how this support will fit into a `K8s` (Kubernetes) based solution. Two popular open source projects allowing VM deployments within a K8s environment were identified; `KubeVirt` and `Virtlet`, both of these projects support deployment of the VMs running inside pods. + +Virtlet VMs are treated as ordinary pods and can be controlled from `kubectl` natively but the deployment requires the introduction of an additional CRI (Container Runtime Interface) and CRI proxy into the cluster. In comparison KubeVirt VMs are enabled by extending the functionality of K8s via CRDs (Custom Resource Definition) and easy to deploy KubeVirt agents and controllers - no CRI proxy introduction is necessary in the cluster. + +Due to easy deployment and no impact on the existing OpenNESS architecture, KubeVirt is the solution of choice in OpenNESS. + +## KubeVirt + +`KubeVirt` is an open source project extending K8s with VM support via the previously mentioned CRDs, agents and controllers. It addresses a need to allow non-containerizable applications/workloads inside VMs to be treated as K8s managed workloads. This allows for both VM and Container/Pod applications to coexist within a shared K8s environment, allowing for communication between the Pods, VMs, and services on same cluster. KubeVirt provides a command line utility (`virtctl`) which allows for management of the VM (Create, start, stop, etc). Additionally, it provides a `CDI` (Containerized Data Importer) utility which allows for loading existing `qcow2` VM images (and other data) into the created VM. Support for the K8s `CNI` (Container Network Interface) plugin Multus and SRIOV is also provided which allows to attach `NICs` (Network Interface Card) `VFs` (Virtual Functions) to a deployed VM. 
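+
+To illustrate the CRD-based approach, a VM is described declaratively like any other Kubernetes resource and then managed with `kubectl`/`virtctl`. The snippet below is a minimal sketch modelled on the upstream KubeVirt cirros containerDisk demo; the VM name and image are illustrative only, and the ready-made OpenNESS example files referenced later in this document should be preferred:
+
+```shell
+# Minimal VirtualMachine definition handled by the KubeVirt CRDs (illustrative name/image).
+cat <<EOF | kubectl apply -f -
+apiVersion: kubevirt.io/v1alpha3
+kind: VirtualMachine
+metadata:
+  name: example-vm
+spec:
+  running: false
+  template:
+    spec:
+      domain:
+        devices:
+          disks:
+          - name: containerdisk
+            disk:
+              bus: virtio
+        resources:
+          requests:
+            memory: 64M
+      volumes:
+      - name: containerdisk
+        containerDisk:
+          image: kubevirt/cirros-container-disk-demo
+EOF
+
+# The VM object is created in a stopped state and can then be started via the virtctl plugin:
+kubectl virt start example-vm
+```
+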
More information about KubeVirt can be found on the [official website](https://kubevirt.io/) and [github](https://github.com/kubevirt/kubevirt). + +## Stateless vs Stateful VMs + +The types of VM deployments can be split into two categories based on the storage required by the workload. Stateless VMs are backed by ephemeral storage - meaning that the data will disappear with VM restart/reboot. Stateful VMs are backed by persistent storage - meaning that data will persist after VM restart/reboot. The type of storage required will be determined based on the use-case. In OpenNESS support for both types of VM is available with aid of KubeVirt. + +### VMs with ephemeral storage +These are VMs with ephemeral storage, such as ContainerDisk storage that would be deployed in similar manner to ordinary container pods. The data contained in the VM would be erased on each VM deletion/restart thus it is suitable for stateless applications running inside the pods. Although usually a better fit for such application would be running the workload in container pod, for various reasons such as a Legacy application a user may not want to containerize their workload. The advantage of this deployment from an K8s/OpenNESS perspective is that there is no additional storage configuration and the user only needs to have a cluster with KubeVirt deployed and a dockerized version of a VM image in order to spin up the VM. + +### VMs with persistent Local Storage +These are VMs which require persistent storage, the data of this kind of VM stays persistent between restarts of the VM. In case of local storage, Kubernetes provides ‘local volume plugin' which can be used to define a `PV` (Persistent Volume). In the case of the local volume plugin there is no support for dynamic creation (k8s 1.17.0) and the PVs must be created by a user before a Persistent Volume Claim (`PVC`) can be claimed by the pod/VM. This manual creation of PVs must be taken into consideration for an OpenNESS deployment as a PV will need to be created per each VM per each node as the storage is local to the Edge Node. In case of persistent storage the VM image must be loaded to the PVC, this can be done via use of KubeVirt's Container Data Importer (CDI). This kind of storage is meant for use with stateful workloads, and being local to the node is suitable for Edge deployments. + +### VMs with Cloud Storage +Support for persistent storage via Cloud Storage providers is possible in K8s but it is currently not supported in OpenNESS. There are no plans to support it from OpenNESS right now - this may change in the future depending on demand. + +### Creating Docker image for stateless VM +In order to create a Docker image for a stateless VM, the VM image needs to be wrapped inside the Docker image. In order to do this the user needs to create a `Dockerfile` and place the VM image in the same directory, then build the Docker image as per the example below: +``` +#Dockerfile +FROM scratch +ADD CentOS-7-x86_64-GenericCloud.qcow2 /disk +``` +```shell +docker build -t centosimage:1.0 . +``` +## Enabling in OpenNESS + +The KubeVirt role responsible for bringing up KubeVirt components is enabled by default in the OpenNESS experience kit via Ansible automation. In this default state it does not support SRIOV in a VM so additional steps are required to enable it. The following is a complete list of steps to bring up all components related to VM support in Network Edge. VM support also requires Virtualization and VT-d to be enabled in BIOS of the Edge Node. + + 1. 
Configure Ansible for KubeVirt: + + - Enable kubeovn CNI and SRIOV: + ```yaml + # group_vars/all.yml + kubernetes_cnis: + - kubeovn + - sriov + ``` + - Enable SRIOV for KubeVirt: + ```yaml + # roles/kubernetes/cni/sriov/common/defaults/main.yml + # VM support + kubevirt: + enabled: true + ``` + - Enable necessary Network Interfaces with SRIOV: + ```yaml + # host_vars/node01.yml + sriov: + network_interfaces: {: 1} + ``` + - Make sure the kubevirt role is enabled: + ```yaml + # "network_edge.yml" + roles: + - role: kubevirt/master + roles: + - role: kubevirt/worker + ``` + - Set up the maximum number of stateful VMs and directory where the Virtual Disks will be stored on Edge Node: + ```yaml + # roles/kubevirt/worker/defaults/main.yml + default_pv_dir: /var/vd/ + default_pv_vol_name: vol + pv_vm_max_num: 64 + ``` + 2. Set up other common configurations for the cluster and enable other EPA features as needed and deploy the cluster using the `deploy_ne.sh` script in OpenNESS experience kit top level directory. + + 3. On successful deployment the following pods will be in running state: + ```shell + [root@controller ~]# kubectl get pods -n kubevirt + + kubevirt virt-api-684fdfbd57-9zwt4 1/1 Running + kubevirt virt-api-684fdfbd57-nctfx 1/1 Running + kubevirt virt-controller-64db8cd74c-cn44r 1/1 Running + kubevirt virt-controller-64db8cd74c-dbslw 1/1 Running + kubevirt virt-handler-jdsdx 1/1 Running + kubevirt virt-operator-c5cbfb9ff-9957z 1/1 Running + kubevirt virt-operator-c5cbfb9ff-dj5zj 1/1 Running + + [root@controller ~]# kubectl get pods -n cdi + + cdi cdi-apiserver-5f6457f4cb-8m9cb 1/1 Running + cdi cdi-deployment-db8c54f8d-t4559 1/1 Running + cdi cdi-operator-7796c886c5-sfmsb 1/1 Running + cdi cdi-uploadproxy-556bf8d455-f8hn4 1/1 Running + ``` + +## VM deployment +Provided below are sample deployment instructions for different types of VMs. +Please use sample `.yaml` specification files provided in OpenNESS Edge Controller repo - [edgecontroller/kubevirt/examples/](https://github.com/open-ness/edgecontroller/tree/master/kubevirt/examples) in order to deploy the workloads - some of the files will require modification in order to suit the environment they will be deployed in, specific instructions on modifications are provided in steps below. + +### Stateless VM deployment +To deploy a sample stateless VM with containerDisk storage: + + 1. Deploy VM + ```shell + [root@controller ~]# kubectl create -f /opt/edgecontroller/kubevirt/examples/statelessVM.yaml + ``` + 2. Start the VM: + ```shell + [root@controller ~]# kubectl virt start cirros-stateless-vm + ``` + 3. Check that the VM pod got deployed and VM is deployed: + ```shell + [root@controller ~]# kubectl get pods | grep launcher + [root@controller ~]# kubectl get vms + ``` + 4. Execute into the VM (pass/login cirros/gocubsgo): + ```shell + [root@controller ~]# kubectl virt console cirros-stateless-vm + ``` + 5. Check that IP address of OpenNESS/K8s overlay network is available in VM. + ```shell + [root@vm ~]# ip addr + ``` + +### Stateful VM deployment +To deploy a sample stateful VM with persistent storage and additionally use Generic Cloud CentOS image which requires user to initially log in with ssh key instead of login/password over ssh: + +> Note: Please note that each stateful VM with a new `PVC` (Persistent Volume Claim) requires a new `PV` (Persistent Volume) to be created - see more in [limitations section](#limitations). 
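+
+When the stateless VM is no longer needed, it can be stopped and removed with the same tooling. A short sketch, assuming the `cirros-stateless-vm` example above, is:
+
+```shell
+kubectl virt stop cirros-stateless-vm    # shut the VM down; with ephemeral storage its data is discarded
+kubectl delete vm cirros-stateless-vm    # remove the VirtualMachine object and its virt-launcher pod
+```
+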
Also note that CDI needs two PVs when creating a PVC and loading VM image from qcow2 file, one PV for the actual PVC to be created and one PV to translate the qcow2 image to raw input. + + 1. Create a persistent volume for the VM: + + - Edit the sample yaml with hostname of the worker node: + ```yaml + # /opt/edgecontroller/kubevirt/examples/persistentLocalVolume.yaml + # For both pv0 and pv1 enter correct hostname + - key: kubernetes.io/hostname + operator: In + values: + - + ``` + - Create the PV: + ```shell + [root@controller ~]# kubectl create -f /opt/edgecontroller/kubevirt/examples/persistentLocalVolume.yaml + ``` + - Check that PV is created: + ```shell + [root@controller ~]# kubectl get pv + NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE + pv0 15Gi RWO Retain Available local-storage 7s + pv1 15Gi RWO Retain Available local-storage 7s + ``` + 2. Download the Generic Cloud qcow image for CentOS 7 + ```shell + [root@controller ~]# wget https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-1907.qcow2 + ``` + 3. Get the address of the CDI upload proxy: + ```shell + [root@controller ~]# kubectl get services -A | grep cdi-uploadproxy + ``` + 4. Create and upload the image to PVC via CDI + ```shell + [root@controller ~]# kubectl virt image-upload dv centos-dv --image-path=/root/kubevirt/CentOS-7-x86_64-GenericCloud-1907.qcow2 --insecure --size=15Gi --storage-class=local-storage --uploadproxy-url=https://:443 + + DataVolume default/centos-dv created + Waiting for PVC centos-dv upload pod to be ready... + Pod now ready + Uploading data to https://:443 + + 898.75 MiB / 898.75 MiB [======================================================================================================================================================================================] 100.00% 15s + + Uploading data completed successfully, waiting for processing to complete, you can hit ctrl-c without interrupting the progress + Processing completed successfully + Uploading /root/kubevirt/CentOS-7-x86_64-GenericCloud-1907.qcow2 completed successfully + ``` + 5. Check that PV, DV, PVC are correctly created: + ```shell + [root@controller ~]# kubectl get pv + pv0 15Gi RWO Retain Bound default/centos-dv local-storage 2m48s + pv1 15Gi RWO Retain Released default/ centos-dv-scratch local-storage 2m48s + [root@controller ~]# kubectl get dv + centos-dv Succeeded 105s + [root@controller ~]# kubectl get pvc + centos-dv Bound pv0 15Gi RWO local-storage 109s + ``` + 6. Create ssh key: + ```shell + [root@controller ~]# ssh-keygen + ``` + 7. Get the Controllers public key: + ```shell + [root@controller ~]# cat ~/.ssh/id_rsa.pub + ``` + 8. Edit .yaml file for the VM with updated public key: + ```yaml + # /opt/edgecontroller/kubevirt/examples/cloudGenericVM.yaml + users: + - name: root + password: root + sudo: ALL=(ALL) NOPASSWD:ALL + ssh_authorized_keys: + - ssh-rsa @ + ``` + 9. Deploy the VM: + ```shell + [root@controller ~]# kubectl create -f /opt/edgecontroller/kubevirt/examples/cloudGenericVM.yaml + ``` + 10. Start the VM: + ```shell + [root@controller ~]# kubectl virt start centos-vm + ``` + 11. Check that the VM container has deployed: + ```shell + [root@controller ~]# kubectl get pods | grep virt-launcher-centos + ``` + 12. Get the IP address of the VM: + ```shell + [root@controller ~]# kubectl get vmi + ``` + 13. SSH into the VM: + ```shell + [root@controller ~]# ssh + ``` + +### VM deployment with SRIOV NIC support + +To deploy a VM requesting SRIOV VF of NIC: + 1. 
Bind SRIOV interface to VFIO driver on Edge Node: + ```shell + [root@worker ~]# /opt/dpdk-18.11.2/usertools/dpdk-devbind.py --bind=vfio-pci + ``` + 2. Delete/Restart SRIOV device plugin on the node: + ```shell + [root@controller ~]# kubectl delete pod kube-sriov-device-plugin-amd64- -n kube-system + ``` + 3. Check that the SRIOV VF for VM is available as allocatable resource for DP (wait a few seconds after restart): + ``` + [root@controller ~]# kubectl get node -o json | jq '.status.allocatable' + { + "cpu": "79", + "devices.kubevirt.io/kvm": "110", + "devices.kubevirt.io/tun": "110", + "devices.kubevirt.io/vhost-net": "110", + "ephemeral-storage": "241473945201", + "hugepages-1Gi": "0", + "hugepages-2Mi": "2Gi", + "intel.com/intel_sriov_dpdk": "0", + "intel.com/intel_sriov_netdevice": "0", + "intel.com/intel_sriov_vm": "1", <---- This one + "memory": "194394212Ki", + "pods": "110" + } + ``` + 4. Deploy VM requesting SRIOV device (adjust the amount of Hugepages Required in .yaml if a smaller amount available on the platform): + ```shell + [root@controller ~]# kubectl create -f /opt/edgecontroller/kubevirt/examples/sriovVM.yaml + ``` + 5. Start VM: + ```shell + [root@controller ~]# kubectl virt start debian-sriov-vm + ``` + 6. Execute into VM (login/pass root/toor): + ```shell + [root@controller ~]# kubectl virt console debian-sriov-vm + ``` + 7. Fix networking in the VM for Eth1: + ```shell + [root@vm ~]# vim /etc/network/interfaces + # Replace info for Eth1 + # Maybe the VM has 2 NICs? + auto eth1 + iface eth1 inet static + address 192.168.100.2 + netmask 255.255.255.0 + network 192.168.100.0 + broadcast 192.168.100.255 + gateway 192.168.100.1 + + + #restart networking service + [root@vm ~]# sudo /etc/init.d/networking restart + ``` + + 8. Check the SRIOV interface has an assigned IP address. + ```shell + [root@vm ~]# ip addr + 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1 + link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 + inet 127.0.0.1/8 scope host lo + valid_lft forever preferred_lft forever + inet6 ::1/128 scope host + valid_lft forever preferred_lft forever + 2: eth0: mtu 1400 qdisc pfifo_fast state UP group default qlen 1000 + link/ether aa:10:79:10:00:18 brd ff:ff:ff:ff:ff:ff + inet 10.16.0.23/16 brd 10.16.255.255 scope global eth0 + valid_lft forever preferred_lft forever + inet6 fe80::a810:79ff:fe10:18/64 scope link + valid_lft forever preferred_lft forever + 3: eth1: mtu 1500 qdisc mq state DOWN group default qlen 1000 + link/ether 4a:b0:80:d8:a9:b3 brd ff:ff:ff:ff:ff:ff + inet 192.168.100.2/24 brd 192.168.100.255 scope global eth1 + valid_lft forever preferred_lft forever + ``` + +### VM snapshot + +Currently support for VM snapshot is allowed by manually taking snapshot of the VMs virtual disk with `QEMU` utility - see more in [limitations](#limitations). In order to restore the snapshot or create a new VM the user is required to copy new qcow2 (snapshot) file to controller and create the VM as per [stateless VM example](#Statefull-VM-deployment) using new qcow2 image instead of the one provided in example. The following are steps to create a snapshot: + + 1. Log into the Edge Node + 2. Go to virtual disk directory for previously created VM: + ```shell + [root@worker ~]# cd /var/vd/vol0/ && ls + ``` + 3. 
3. Create a qcow2 snapshot image out of the virtual disk present in the directory (`disk.img`): + ```shell + /opt/qemu-4.0.0/qemu-img convert -f raw -O qcow2 disk.img ./my-vm-snapshot.qcow2 + ``` + +## Limitations + +### Cloud Storage + +Currently there is no support for Cloud Storage in OpenNESS. + +### Storage Orchestration and PV/PVC management + +Currently, Network Edge OpenNESS provides no mechanism to manage storage; the assumption is that the default HDD/SSD of the Edge Node is used for storage. Additionally, various methods of optimizing storage for performance (i.e., by using various file system types, etc.) are not in the scope of OpenNESS at this time. In a Network Edge deployment using K8s with local persistent volumes, a directory needs to be created for each PV, which will be used to store the VM's virtual disk. The creation of the directories used to store PVs is automated by OpenNESS, but the creation of the PV itself and keeping track of which PV corresponds to which VM is currently the responsibility of the user - this is because the local volume plugin enabling local storage in K8s does not provide dynamic PV creation when a PVC claim is made. A consideration of how to automate and simplify PV management for the user will be made in the future - an evaluation of currently available solutions is needed. + +### Snapshot Creation + +Currently, snapshot creation of the stateful VM has to be done manually by the user using the QEMU utility. K8s does provide Volume Snapshot and Volume Snapshot Restore functionality, but at the time of writing it is only available for out-of-tree K8s storage plugins supporting the CSI driver - the local volume plugin used in this implementation is not yet supported as a CSI plugin. A consideration of how to automate and simplify VM snapshots for the user will be made in the future. + +## Useful Commands and Troubleshooting + +### Commands + +``` +kubectl get pv # get Persistent Volumes +kubectl get pvc # get Persistent Volume Claims +kubectl get dv # get Data Volumes +kubectl get sc # get Storage Classes +kubectl get vms # get VM state +kubectl get vmi # get VM IP +kubectl virt start # start VM +kubectl virt restart # restart VM +kubectl virt stop # stop VM +kubectl virt pause # pause VM +kubectl virt console # get console connection to VM +kubectl virt help # see info about the rest of the virtctl commands +``` +### Troubleshooting + +1. PVC image not being uploaded through CDI - check that the IP address of cdi-upload-proxy is correct and that the Network Traffic policy for CDI is applied: + ```shell + kubectl get services -A | grep cdi-uploadproxy + kubectl get networkpolicy | grep cdi-upload-proxy-policy + ``` + +2. Cannot SSH to a stateful VM with the Cloud Generic Image due to the public key being denied - double-check that the public key provided in `/opt/edgecontroller/kubevirt/examples/cloudGenericVM.yaml` is valid and in a correct format. Example of a correct format: + ```yaml + # /opt/edgecontroller/kubevirt/examples/cloudGenericVM.yaml + users: + - name: root + password: root + sudo: ALL=(ALL) NOPASSWD:ALL + ssh_authorized_keys: + - ssh-rsa Askdfjdiisd?-SomeLongSHAkey-?dishdxxxx root@controller + ``` +3. Completely deleting a stateful VM - delete the VM, DV, PV, PVC and the virtual disk related to the VM from the Edge Node (see the worked example below): + ```shell + [controller]# kubectl delete vm + [controller]# kubectl delete dv + [controller]# kubectl delete pv + [node]# rm /var/vd/vol/disk.img + ```
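+     For example, to completely remove the stateful CentOS VM created earlier in this guide, the sequence could look as follows (an illustrative sketch only; `centos-vm`, `centos-dv`, `pv0`/`pv1` and the virtual disk path are the example names used above and may differ in your deployment):
+     ```shell
+     [controller]# kubectl virt stop centos-vm
+     [controller]# kubectl delete vm centos-vm
+     [controller]# kubectl delete dv centos-dv
+     [controller]# kubectl delete pv pv0 pv1
+     [node]# rm /var/vd/vol0/disk.img
+     ```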
4. Cleanup script `cleanup_ne.sh` does not properly clean up KubeVirt/CDI components if the user has intentionally/unintentionally deleted one of these components outside the script - the KubeVirt/CDI components must be cleaned up/deleted in a specific order to wipe them successfully, and the cleanup script does that for the user. If the user tries to delete the KubeVirt/CDI operator in the wrong order, the namespace for the component may be stuck indefinitely in a `terminating` state. This is not an issue if the user runs the script to completely clean the cluster, but it might be troublesome if the user wants to run the cleanup for KubeVirt only. To fix this, the user must do the following: + + 1. Check which namespace is stuck in `terminating` state: + ```shell + [controller]# kubectl get namespace + NAME STATUS AGE + cdi Active 30m + default Active 6d1h + kube-node-lease Active 6d1h + kube-ovn Active 6d1h + kube-public Active 6d1h + kube-system Active 6d1h + kubevirt Terminating 31m + openness Active 6d1h + ``` + + 2. Delete the finalizer for the terminating namespace: + ```shell + ## replace instances of `kubevirt` with `cdi` in the command if CDI is the issue. + [controller]# kubectl get namespace "kubevirt" -o json | tr -d "\n" | sed "s/\"finalizers\": \[[^]]\+\]/\"finalizers\": []/" | kubectl replace --raw /api/v1/namespaces/kubevirt/finalize -f - + ``` + + 3. Run the cleanup script for KubeVirt again: + ```shell + [controller]# ./cleanup_ne.sh + ``` + +## Helpful Links + +- [KubeVirt](https://kubevirt.io/) + +- [KubeVirt components](https://github.com/kubevirt/kubevirt/blob/master/docs/components.md) +- [Container Storage Interface](https://kubernetes.io/blog/2019/01/15/container-storage-interface-ga/) +- [K8s Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) +- [Containerized Data Importer](https://github.com/kubevirt/containerized-data-importer/blob/master/README.md) +- [Local Volume Plugin](https://kubernetes.io/docs/concepts/storage/volumes/#local) \ No newline at end of file diff --git a/doc/applications-onboard/using-openness-cnca-images/af_pfd_transaction_home.png b/doc/applications-onboard/using-openness-cnca-images/af_pfd_transaction_home.png new file mode 100644 index 00000000..77d735fd Binary files /dev/null and b/doc/applications-onboard/using-openness-cnca-images/af_pfd_transaction_home.png differ diff --git a/doc/applications-onboard/using-openness-cnca-images/ngc_af_service_config_log.png b/doc/applications-onboard/using-openness-cnca-images/ngc_af_service_config_log.png index 49a2ef32..6648a62f 100644 Binary files a/doc/applications-onboard/using-openness-cnca-images/ngc_af_service_config_log.png and b/doc/applications-onboard/using-openness-cnca-images/ngc_af_service_config_log.png differ diff --git a/doc/applications-onboard/using-openness-cnca-images/ngc_homepage.png b/doc/applications-onboard/using-openness-cnca-images/ngc_homepage.png index c6a0796e..e62e6329 100644 Binary files a/doc/applications-onboard/using-openness-cnca-images/ngc_homepage.png and b/doc/applications-onboard/using-openness-cnca-images/ngc_homepage.png differ diff --git a/doc/applications-onboard/using-openness-cnca-images/oam_services_display.png b/doc/applications-onboard/using-openness-cnca-images/oam_services_display.png index 4a849849..8d05e7bd 100644 Binary files a/doc/applications-onboard/using-openness-cnca-images/oam_services_display.png and b/doc/applications-onboard/using-openness-cnca-images/oam_services_display.png differ diff --git 
a/doc/applications-onboard/using-openness-cnca-images/oam_services_home.png b/doc/applications-onboard/using-openness-cnca-images/oam_services_home.png index b2e1bd28..f72145fd 100644 Binary files a/doc/applications-onboard/using-openness-cnca-images/oam_services_home.png and b/doc/applications-onboard/using-openness-cnca-images/oam_services_home.png differ diff --git a/doc/applications-onboard/using-openness-cnca-images/pfd_transaction_create.png b/doc/applications-onboard/using-openness-cnca-images/pfd_transaction_create.png new file mode 100644 index 00000000..c7c2887c Binary files /dev/null and b/doc/applications-onboard/using-openness-cnca-images/pfd_transaction_create.png differ diff --git a/doc/applications-onboard/using-openness-cnca-images/pfd_transaction_delete.png b/doc/applications-onboard/using-openness-cnca-images/pfd_transaction_delete.png new file mode 100644 index 00000000..585e73ee Binary files /dev/null and b/doc/applications-onboard/using-openness-cnca-images/pfd_transaction_delete.png differ diff --git a/doc/applications-onboard/using-openness-cnca-images/pfd_transaction_delete_appID.png b/doc/applications-onboard/using-openness-cnca-images/pfd_transaction_delete_appID.png new file mode 100644 index 00000000..aac04c4b Binary files /dev/null and b/doc/applications-onboard/using-openness-cnca-images/pfd_transaction_delete_appID.png differ diff --git a/doc/applications-onboard/using-openness-cnca-images/pfd_transaction_display.png b/doc/applications-onboard/using-openness-cnca-images/pfd_transaction_display.png new file mode 100644 index 00000000..394be15e Binary files /dev/null and b/doc/applications-onboard/using-openness-cnca-images/pfd_transaction_display.png differ diff --git a/doc/applications-onboard/using-openness-cnca-images/pfd_transaction_edit.png b/doc/applications-onboard/using-openness-cnca-images/pfd_transaction_edit.png new file mode 100644 index 00000000..30822138 Binary files /dev/null and b/doc/applications-onboard/using-openness-cnca-images/pfd_transaction_edit.png differ diff --git a/doc/applications-onboard/using-openness-cnca-images/pfd_transaction_edit_appID.png b/doc/applications-onboard/using-openness-cnca-images/pfd_transaction_edit_appID.png new file mode 100644 index 00000000..5c6c055b Binary files /dev/null and b/doc/applications-onboard/using-openness-cnca-images/pfd_transaction_edit_appID.png differ diff --git a/doc/applications-onboard/using-openness-cnca.md b/doc/applications-onboard/using-openness-cnca.md index 210177a5..67d7d2fb 100644 --- a/doc/applications-onboard/using-openness-cnca.md +++ b/doc/applications-onboard/using-openness-cnca.md @@ -1,5 +1,7 @@ -SPDX-License-Identifier: Apache-2.0 -Copyright © 2019 Intel Corporation +```text +SPDX-License-Identifier: Apache-2.0 +Copyright (c) 2019-2020 Intel Corporation +``` - [4G/LTE Core Configuration using CNCA](#4glte-core-configuration-using-cnca) - [Configuring in Network Edge mode](#configuring-in-network-edge-mode) @@ -20,12 +22,16 @@ Copyright © 2019 Intel Corporation - [Registration of UPF services associated with Edge-node with 5G Core](#registration-of-upf-services-associated-with-edge-node-with-5g-core) - [Traffic influence operations with 5G Core (through AF interface)](#traffic-influence-operations-with-5g-core-through-af-interface) - [Sample YAML NGC AF subscription configuration](#sample-yaml-ngc-af-subscription-configuration) + - [Packet Flow Description operations with 5G Core (through AF interface)](#packet-flow-description-operations-with-5g-core-through-af-interface) + 
- [Sample YAML NGC AF PFD transaction configuration](#sample-yaml-ngc-af-pfd-transaction-configuration) - [On-Premises mode](#on-premises-mode) - [Bringing up NGC components in On-Premises mode](#bringing-up-ngc-components-in-on-premises-mode) - [Configuring in On-Premises mode](#configuring-in-on-premises-mode-1) + - [Certificates Management for communicating with 5G core micro-services](#certificates-management-for-communicating-with-5g-core-micro-services) - [Edge Node services operations with 5G Core (through OAM interface)](#edge-node-services-operations-with-5g-core-through-oam-interface-1) - [Registration of UPF services associated with Edge-node with 5G Core](#registration-of-upf-services-associated-with-edge-node-with-5g-core-1) - [Traffic influence operations with 5G Core (through AF interface)](#traffic-influence-operations-with-5g-core-through-af-interface-1) + - [Packet Flow Description operation with 5G Core (through AF interface)](#packet-flow-description-operation-with-5g-core-through-af-interface) - [Traffic Influence Subscription description](#traffic-influence-subscription-description) - [Identification (Mandatory)](#identification-mandatory) - [Traffic Description Group (Mandatory)](#traffic-description-group-mandatory) @@ -36,6 +42,7 @@ Copyright © 2019 Intel Corporation - [Temporal Validity (Optional)](#temporal-validity-optional) - [UPF Event Notifications (Optional)](#upf-event-notifications-optional) - [AF to NEF specific (Optional)](#af-to-nef-specific-optional) + - [Packet Flow Description transaction description](#packet-flow-description-transaction-description) # 4G/LTE Core Configuration using CNCA @@ -120,7 +127,7 @@ In case of On-Premises deployment mode, Core network can be configured through t ### CUPS UI Prerequisites - Controller installation, configuration and run as root. Before building, setup the controller env file for CUPS as below: - + ``` REACT_APP_CONTROLLER_API=http://>:8080 REACT_APP_CUPS_API=http://<>:8080 @@ -135,15 +142,15 @@ In case of On-Premises deployment mode, Core network can be configured through t `make all-up` > NOTE: To bring up just the CUPS UI run `make cups-ui-up` - - Check whether controller CUPS UI already bring up by: + - Check whether controller CUPS UI already bring up by: ``` - Docker ps + Docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS 0eaaafc01013 cups:latest "docker-entrypoint.s…" 8 days ago Up 8 days 0.0.0.0:3010->80/tcp d732e5b93326 ui:latest "docker-entrypoint.s…" 9 days ago Up 9 days 0.0.0.0:3000->80/tcp 8f055896c767 cce:latest "/cce -adminPass cha…" 9 days ago Up 9 days 0.0.0.0:6514->6514/tcp, 0.0.0.0:8080-8081->8080-8081/tcp, 0.0.0.0:8125->8125/tcp - d02b5179990c mysql:8.0 "docker-entrypoint.s…" 13 days ago Up 9 days 33060/tcp, 0.0.0.0:8083->3306/tcp + d02b5179990c mysql:8.0 "docker-entrypoint.s…" 13 days ago Up 9 days 33060/tcp, 0.0.0.0:8083->3306/tcp ``` - OAMAgent(called EPC-OAM) and EPC Control plane installation, configuration and run as `root`. @@ -158,11 +165,11 @@ In case of On-Premises deployment mode, Core network can be configured through t - REACT_APP_CUPS_API=http://<>:8080 added to Controller's "~/controller/.env" file. - Controller full stack including CUPS UI are running. - Oamagent and EPC are running. -- Confirm connection between controller and oamagent (EPC). +- Confirm connection between controller and oamagent (EPC). #### Steps to access UI -- Open any internet browser +- Open any internet browser - Type in "http://:3010/userplanes" in address bar. 
- This will display all the existing EPC user planes list as shown below:   @@ -172,16 +179,16 @@ In case of On-Premises deployment mode, Core network can be configured through t - Identify the specific userplane using the UUID to get additional information - Click on **EDIT** as shown below -   +   ![Edit screen](cups-howto-images/edit.png)   - User plane information is displayed as shown below -   +   ![Userplane5 screen](cups-howto-images/userplane5.png)   -- Update parameters: any of the parameters _{S1-U , S5-U(SGW), S5-U(PGW), MNC,MCC, TAC, APN}_ as needed and then click on **Save**. +- Update parameters: any of the parameters _{S1-U , S5-U(SGW), S5-U(PGW), MNC,MCC, TAC, APN}_ as needed and then click on **Save**. **NOTE** A pop up window will appear with “successfully updated userplane”   ![Userplane5Update screen](cups-howto-images/userplane5_update.png) @@ -202,7 +209,7 @@ In case of On-Premises deployment mode, Core network can be configured through t   - After that, web page will automatically return back to the updated user plane list as shown below -   +   ![UserplaneCreateList screen](cups-howto-images/userplane_create_thenlist.png)   @@ -211,7 +218,7 @@ In case of On-Premises deployment mode, Core network can be configured through t - Find the user plane to delete using UUID and click **EDIT** - Then web page will list the user plane information, and then click on **DELETE USERPLANE** with popup message with **successfully deleted userplane** as shown below -   +   ![UserplaneDelete screen](cups-howto-images/userplane_delete.png)   @@ -232,18 +239,18 @@ OpenNESS provides ansible scripts for setting up NGC components for two scenario ## Network Edge mode -### Bring-up of NGC components in Network Edge mode +### Bring-up of NGC components in Network Edge mode -1. If the Edge controller is not yet deployed through openness-experience-kit then: - Enable the role for ngc by un-commenting the line `role: ngc_test/master` in the file `openness-experience-kits/ne_controller.yml` before starting `deploy_ne_controller.sh` or `deploy_ne.sh` as described in [OpenNESS Network Edge: Controller and Edge node setup](../getting-started/network-edge/controller-edge-node-setup.md) document, **otherwise skip this step.** +1. If the Edge controller is not yet deployed through openness-experience-kit then: + Enable the role for ngc by un-commenting the line `role: ngc_test/master` in the file `openness-experience-kits/network_edge.yml` before running `deploy_ne.sh controller` or `deploy_ne.sh` as described in [OpenNESS Network Edge: Controller and Edge node setup](../getting-started/network-edge/controller-edge-node-setup.md) document, **otherwise skip this step.** 2. If Edge-controller is already deployed (but without enabling ngc role) and at a later stage you want to enable NGC components on edge-controller then, - Enable the role for ngc by un-commenting the line `role: ngc_test/master` in the file `openness-experience-kits/ne_controller.yml` and then re-run `deploy_ne_controller.sh` as described in [OpenNESS Network Edge: Controller and Edge node setup](../getting-started/network-edge/controller-edge-node-setup.md) document. + Enable the role for ngc by un-commenting the line `role: ngc_test/master` in the file `openness-experience-kits/network_edge.yml` and then re-run `deploy_ne.sh controller` as described in [OpenNESS Network Edge: Controller and Edge node setup](../getting-started/network-edge/controller-edge-node-setup.md) document. 
- **NOTE:** - In addition to the OpenNESS controller bringup, by enabling the ngc rule the playbook scripts performs: Clone epcforedge repo from github, builds AF, NEF and OAM micro services, generates certificate files, creates docker images and starts PODs. + **NOTE:** + In addition to the OpenNESS controller bringup, enabling the ngc role makes the playbook clone the epcforedge repo from GitHub, build the AF, NEF and OAM micro services, generate certificate files, create docker images and start the PODs. -3. On successful start of AF, NEF and OAM PODs, status of PODS and Services can verified using the below commands: +3. On successful start of the AF, NEF and OAM PODs, the status of the PODs and Services can be verified using the below commands: - `kubectl get pods --all-namespaces` expected out as below: ![NGC list of PODS](using-openness-cnca-images/ngc_pods_list_output.png) @@ -251,12 +258,12 @@ OpenNESS provides ansible scripts for setting up NGC components for two scenario - `kubectl get services--all-namespaces` expected out as below: ![NGC list of PODS](using-openness-cnca-images/ngc_services_list_output.png) - + *NOTE: In general, below steps #4 and #5 are not needed. If user wants to change the hostname/ip-address parameters for AF/NEF/OAM then #4 and #5 will provide the guidance.* 4. After all the PODs are successfully up and running, few AF and OAM configuration parameters need to be updated (as per your deployment configuration) and then re-start the AF. - * Open the file `/etc/openness/configs/ngc/af.json` and modify the below parameters. + * Open the file `/etc/openness/configs/ngc/af.json` and modify the below parameters. * `"UIEndpoint": "http://localhost:3020"` : Replace the `localhost` with `IP Address` of edge-controller, and no change to port number. * `"NEFHostname": "localhost"` : Replace the `localhost` with `nefservice` ie., service name NEF POD. * Save and exit. @@ -268,7 +275,7 @@ OpenNESS provides ansible scripts for setting up NGC components for two scenario ![NGC list of PODS](using-openness-cnca-images/ngc_af_service_config_log.png) 5. To update OAM configuration and restart OAM micro service: - * Open the file `/etc/openness/configs/ngc/oam.json` and modify the below parameters. + * Open the file `/etc/openness/configs/ngc/oam.json` and modify the below parameters. * `"UIEndpoint": "http://localhost:3020"` : Replace the `localhost` with `IP Address` of edge-controller, and no change to port number. * Save and exit. * Now restart OAM POD using the below command: @@ -385,19 +392,148 @@ policy: routeProfId: default ``` +#### Packet Flow Description operations with 5G Core (through AF interface) + +Supported operations through the `kube-cnca` plugin: + + * Creation of packet flow description (PFD) transactions through the AF micro service to perform accurate detection of application traffic for UPF in 5G Core + * Deletion of transactions and applications within a transaction + * Updating (patching) transactions and applications within a transaction + * Get or get-all transactions + * Get a specific application within a transaction + +Creation of the AF PFD transaction is performed based on the configuration provided by the given YAML file. The YAML configuration should follow the provided sample YAML in the [Sample YAML NGC AF PFD transaction configuration](#sample-yaml-ngc-af-pfd-transaction-configuration) section. 
Use the `apply` command as below to post a PFD transaction creation request onto AF: +```shell +kubectl cnca pfd apply -f +``` + +When the PFD transaction is successfully created, the `apply` command will return the transaction URL, which includes the transaction identifier at the end of the string. Only this transaction identifier `` should be used in further correspondence with AF concerning this particular transaction. For example, for https://localhost:8050/af/v1/pfd/transactions/10000 the transaction-id is 10000. **It is the responsibility of the user to retain the `` as `kube-cnca` is a stateless function.** + +To retrieve an existing PFD transaction with a known transaction ID, use the below command: +```shell +kubectl cnca pfd get transaction +``` + +To retrieve all active PFD transactions at AF, execute this command: +```shell +kubectl cnca pfd get transactions +``` + +To modify an active PFD transaction, use the `patch` command providing a YAML file with the subset of the configuration to be modified: +```shell +kubectl cnca pfd patch transaction -f +``` + +To delete an active PFD transaction, use the `delete` command as below: +```shell +kubectl cnca pfd delete transaction +``` + +To retrieve an existing application within a PFD transaction with a known application ID and transaction ID, use the below command: +```shell +kubectl cnca pfd get transaction application +``` + +To modify an application within an active PFD transaction, use the `patch` command providing a YAML file with the subset of the configuration to be modified: +```shell +kubectl cnca pfd patch transaction application -f +``` + +To delete an application within an active PFD transaction, use the `delete` command as below: +```shell +kubectl cnca pfd delete transaction application +``` + + +##### Sample YAML NGC AF PFD transaction configuration + +The `kube-cnca pfd apply` command expects the YAML configuration in the format below. The file must contain the top-level configurations: `apiVersion`, `kind` and `policy`. The configuration `policy` retains the NGC AF-specific transaction information. + +```yaml +apiVersion: v1 +kind: ngc_pfd +policy: + pfdDatas: + - externalAppID: afApp01 + allowedDelay: 1000 + cachingTime: 1000 + pfds: + - pfdID: pfdId01 + flowDescriptions: + - "permit in ip from 10.11.12.123 80 to any" + domainNames: + - "www.google.com" + - pfdID: pfdId02 + urls: + - "^http://test.example2.net(/\\S*)?$" + - pfdID: pfdId03 + domainNames: + - "www.example.com" + - externalAppID: afApp02 + allowedDelay: 1000 + cachingTime: 1000 + pfds: + - pfdID: pfdId03 + flowDescriptions: + - "permit in ip from 10.68.28.39 80 to any" + - pfdID: pfdId04 + urls: + - "^http://test.example1.net(/\\S*)?$" + - pfdID: pfdId05 + domainNames: + - "www.example.com" +``` + +Sample YAML file for updating a single application: + +```yaml +apiVersion: v1 +kind: ngc_pfd +policy: + externalAppID: afApp01 + allowedDelay: 1000 + cachingTime: 1000 + pfds: + - pfdID: pfdId01 + flowDescriptions: + - "permit in ip from 10.11.12.123 80 to any" + - pfdID: pfdId02 + urls: + - "^http://test.example2.net(/\\S*)?$" + - pfdID: pfdId03 + domainNames: + - "www.latest_example.com" +```
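+For example, assuming the transaction created with the `apply` command above was assigned the identifier `10000`, and the single-application YAML above is saved as `sample_pfd_app_update.yml` (both names are only illustrative), the application `afApp01` could be updated with:
+```shell
+kubectl cnca pfd patch transaction 10000 application afApp01 -f sample_pfd_app_update.yml
+```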
## On-Premises mode ### Bringing up NGC components in On-Premises mode - To bring-up the NGC components in on-premises mode, enable the rule `ngc_test/onprem/master` in the file: `openness-experience-kits/onprem_controller.yml`. and then run the script `deploy_onprem_controller.sh` as described in [OpenNESS On-Premise: Controller and Edge node setup document](../getting-started/on-premises/controller-edge-node-setup.md). + To bring-up the NGC components in on-premises mode, enable the role `ngc_test/onprem/master` in the file `openness-experience-kits/on_premises.yml`, and then run the script `deploy_onprem.sh controller` as described in the [OpenNESS On-Premise: Controller and Edge node setup document](../getting-started/on-premises/controller-edge-node-setup.md). ### Configuring in On-Premises mode OpenNESS On-Premises management homepage: - sample url: http://:3000/landing + sample url: http:///landing ![OpenNESS NGC homepage](using-openness-cnca-images/ngc_homepage.png) + **NOTE**: `LANDING_UI_URL` can be retrieved from the `.env` file. + +#### Certificates Management for communicating with 5G core micro-services + 5G Core micro-services use the HTTPS protocol over HTTP2 for communication. To communicate with the 5G micro-services, the certificates used by the 5G core micro-services (AF/NEF/OAM) should be imported into the web browser. The root certificate `root-ca-cert.pem`, which needs to be imported, is available at the location `/etc/openness/certs/ngc/` where the OpenNESS Experience Kit is installed. + + **NOTE:** The certificates generated as part of the OpenNESS Experience Kit are self-signed certificates and are for testing purposes only. + + The certificate can be imported into the different browsers as follows: + + * Google Chrome (ver 80.0.3987): Go to Settings --> under the "Privacy and security" section click on "More" --> select "Manage Certificates" --> in the pop-up window select "Intermediate Certification Authorities" --> select "Import" and provide the downloaded certificate file (root-ca-cert.pem). + * Mozilla Firefox (ver 72.0.2): Go to Options --> under the "Privacy and security" section click on "View Certificates..." --> under the "Authorities" section click on "Import" --> provide the certificate (root-ca-cert.pem) and import it for accessing websites. + + **NOTE:** If a user does not want to import the certificate into the browser, or fails to import the certificates, the following steps can be used to trust the certificates instead: + * The user needs to access the specific 5G core component URLs to trust the certificates used by the 5G core components. + * First access the URLs `https://controller_ip:8070/ngcoam/v1/af/services` and `https://controller_ip:8050/af/v1/pfd/transactions`. + * On accessing these URLs, the browser will show a warning about trusting the self-signed certificate. Proceed by trusting the certificates. #### Edge Node services operations with 5G Core (through OAM interface) @@ -416,7 +552,7 @@ policy: * Display of registered edge servers with 5G Core ![Edge services display](using-openness-cnca-images/oam_services_display.png) - * To edit a registered services + * To edit a registered service ![Edge services edit](using-openness-cnca-images/oam_services_edit.png) * To delete a registered service @@ -430,7 +566,7 @@ policy: * Edge traffic subscription submissions with 5G-Core (NEF) click on the "Create" button on the above homepage - NOTE: "AF Service Id" field should be the same as the value returned through the AF services create request. In the below sample screen capture shows a different value. + NOTE: The "AF Service Id" field should be the same as the value returned through the AF services create request. The sample screen capture below shows a different value. 
![Subscription service create](using-openness-cnca-images/af_subscription_create_part1.png) ![Subscription service create](using-openness-cnca-images/af_subscription_create_part2.png) ![Subscription service create](using-openness-cnca-images/af_subscription_create_part3.png) @@ -446,77 +582,119 @@ policy: * To delete a submitted edge traffic subscription ![Subscription service delete](using-openness-cnca-images/af_subscription_delete.png) + +#### Packet Flow Description operation with 5G Core (through AF interface) + + * Edge traffic PFD transaction submission homepage + sample url: http://:3020/pfds + ![PFD transaction services homepage](using-openness-cnca-images/af_pfd_transaction_home.png) + + * Edge PFD transaction submissions with 5G-Core (NEF) + click on the "Create" button on the above homepage + ![Subscription service create](using-openness-cnca-images/pfd_transaction_create.png) + + * Display of submitted Edge PFD transaction + ![PFD transaction service display](using-openness-cnca-images/pfd_transaction_display.png) + + * To edit a submitted edge PFD transaction + ![PFD transaction service edit](using-openness-cnca-images/pfd_transaction_edit.png) + + * To edit a submitted edge PFD transaction application + ![PFD transaction service patch](using-openness-cnca-images/pfd_transaction_edit_appID.png) + + * To delete a submitted edge PFD transaction + ![PFD transaction service delete](using-openness-cnca-images/pfd_transaction_delete.png) + + * To delete a submitted edge PFD transaction application + ![PFD transaction service delete](using-openness-cnca-images/pfd_transaction_delete_appID.png) + ## Traffic Influence Subscription description This sections describes the paramters that are used in the Traffic Influce subscription POST request. Groups mentioned as Mandatory needs te provided, in the absence of the Mandatory parameters a 400 response would be returned. ### Identification (Mandatory) -|Attribute name|Description| -|--------------|-----------| -|afTransId | Identifies an NEF Northbound interface transaction, generated by the AF | -|self| Link to this resource. This parameter shall be supplied by the NEF in HTTP POST responses, which is used by AF for further operations | +| Attribute name | Description | +| -------------- | ------------------------------------------------------------------------------------------------------------------------------------- | +| afTransId | Identifies an NEF Northbound interface transaction, generated by the AF | +| self | Link to this resource. 
This parameter shall be supplied by the NEF in HTTP POST responses, which is used by AF for further operations | ### Traffic Description Group (Mandatory) -|Attribute name|Description| -|--------------|-----------| -|afServiceId|Identifies a service on behalf of which the AF is issuing the request| -|dnn|Identifies a DNN| -|snssai|Identifies an S-NSSAI| +| Attribute name | Description | +| -------------- | --------------------------------------------------------------------- | +| afServiceId | Identifies a service on behalf of which the AF is issuing the request | +| dnn | Identifies a DNN | +| snssai | Identifies an S-NSSAI | -Note: One of afServiceId or dnn shall be included +Note: One of afServiceId or dnn shall be included -|Attribute name|Description| -|--------------|-----------| -|afAppId|Identifies an application| -|trafficFilters|Identifies IP packet filters| -|ethTrafficFilters|Identifies Ethernet packet filters| +| Attribute name | Description | +| ----------------- | ---------------------------------- | +| afAppId | Identifies an application | +| trafficFilters | Identifies IP packet filters | +| ethTrafficFilters | Identifies Ethernet packet filters | -Note: One of "afAppId", "trafficFilters" or "ethTrafficFilters" shall be included +Note: One of "afAppId", "trafficFilters" or "ethTrafficFilters" shall be included ### Target UE Identifier (Mandatory) -|Attribute name|Description| -|--------------|-----------| -|externalGroupId|Identifies a group of users| -|anyUeInd|Identifies whether the AF request applies to any UE. This attribute shall set to "true" if applicable for any UE, otherwise, set to "false"| -|gpsi|Identifies a user| -|ipv4Addr|Identifies the IPv4 address| -|ipv6Addr|Identifies the IPv6 address| -|macAddr|Identifies the MAC address| +| Attribute name | Description | +| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------- | +| externalGroupId | Identifies a group of users | +| anyUeInd | Identifies whether the AF request applies to any UE. This attribute shall set to "true" if applicable for any UE, otherwise, set to "false" | +| gpsi | Identifies a user | +| ipv4Addr | Identifies the IPv4 address | +| ipv6Addr | Identifies the IPv6 address | +| macAddr | Identifies the MAC address | Note: One of individual UE identifier (i.e. "gpsi", "ipv4Addr", "ipv6Addr" or macAddr), External Group Identifier (i.e. "externalGroupId") or any UE indication "anyUeInd" shall be included ### Application Relocation (Optional) -|Attribute name|Description| -|--------------|-----------| -|appReloInd |Identifies whether an application can be relocated once a location of the application has been selected. Set to "true" if it can be relocated; otherwise set to "false". Default value is "false" if omitted | +| Attribute name | Description | +| -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| appReloInd | Identifies whether an application can be relocated once a location of the application has been selected. Set to "true" if it can be relocated; otherwise set to "false". 
Default value is "false" if omitted | -### Traffic Routing (Optional) -|Attribute name|Description| -|--------------|-----------| -|trafficRoutes|Identifies the N6 traffic routing requirement| +### Traffic Routing (Optional) +| Attribute name | Description | +| -------------- | --------------------------------------------- | +| trafficRoutes | Identifies the N6 traffic routing requirement | ### Spatial Validity (Optional) -|Attribute name|Description| -|--------------|-----------| -|validGeoZoneIds|Identifies a geographic zone that the AF request applies only to the traffic of UE(s) located in this specific zone | +| Attribute name | Description | +| --------------- | ------------------------------------------------------------------------------------------------------------------- | +| validGeoZoneIds | Identifies a geographic zone that the AF request applies only to the traffic of UE(s) located in this specific zone | ### Temporal Validity (Optional) -|Attribute name|Description| -|--------------|-----------| -|tempValidities|Indicates the time interval(s) during which the AF request is to be applied| +| Attribute name | Description | +| -------------- | --------------------------------------------------------------------------- | +| tempValidities | Indicates the time interval(s) during which the AF request is to be applied | ### UPF Event Notifications (Optional) -|Attribute name|Description| -|--------------|-----------| -|subscribedEvents|Identifies the requirement to be notified of the event(s)| -|dnaiChgType|Identifies a type of notification regarding UP path management event| -|notificationDestination|Contains the Callback URL to receive the notification from the NEF. It shall be present if the "subscribedEvents" is present| +| Attribute name | Description | +| ----------------------- | ---------------------------------------------------------------------------------------------------------------------------- | +| subscribedEvents | Identifies the requirement to be notified of the event(s) | +| dnaiChgType | Identifies a type of notification regarding UP path management event | +| notificationDestination | Contains the Callback URL to receive the notification from the NEF. It shall be present if the "subscribedEvents" is present | ### AF to NEF specific (Optional) -|Attribute name|Description| -|--------------|-----------| -|suppFeat|Indicates the list of Supported features used as described in subclause 5.4.4. This attribute shall be provided in the POST request and in the response of successful resource creation. Values 1 - Notification_websocket 2 - Notification_test_event | -|requestTestNotification|Set to true by the AF to request the NEF to send a test notification as defined in subclause 5.2.5.3 of 3GPP TS 29.122 [4]. Set to false or omitted otherwise| -|websockNotifConfig|Configuration parameters to set up notification delivery over Websocket protocol| - +| Attribute name | Description | +| ----------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| suppFeat | Indicates the list of Supported features used as described in subclause 5.4.4. This attribute shall be provided in the POST request and in the response of successful resource creation. 
Values 1 - Notification_websocket 2 - Notification_test_event | +| requestTestNotification | Set to true by the AF to request the NEF to send a test notification as defined in subclause 5.2.5.3 of 3GPP TS 29.122 [4]. Set to false or omitted otherwise | +| websockNotifConfig | Configuration parameters to set up notification delivery over Websocket protocol | + +## Packet Flow Description transaction description + +This sections describes the parameters that are used in the Packet flow description POST request. Groups mentioned as Mandatory needs to be provided, in the absence of the Mandatory parameters a 400 response would be returned. + +| Attribute name | Mandatory | Description | +| ---------------- | --------- | -------------------------------------------------------------------------------------------------------------------------------------- | +| externalAppID | Yes | Unique Application identifier of a PFD | +| Allowed Delay | No | Indicates that the list of PFDs in this request should be deployed within the time interval indicated by the Allowed Delay | +| Caching Time | No | It shall be included when the allowed delayed cannot be satisfied, i.e. it is smaller than the caching time configured in fetching PFD | +| pfdId | Yes | Identifies a PFD of an application identifier. | +| flowDescriptions | NOTE | Represents a 3-tuple with protocol, server ip and server port for UL/DL application traffic. | +| Urls | NOTE | Indicates a URL or a regular expression which is used to match the significant parts of the URL. | +| domainName | NOTE | Indicates an FQDN or a regular expression as a domain name matching criteria. | + + **NOTE:** + One of the attribute of flowDescriptions, URls and domainName is mandatory. diff --git a/doc/applications/openness_ovc.md b/doc/applications/openness_ovc.md index 13b97c5e..1506e2cf 100644 --- a/doc/applications/openness_ovc.md +++ b/doc/applications/openness_ovc.md @@ -10,17 +10,18 @@ Copyright (c) 2019 Intel Corporation - [Smart City Edge Application Introduction](#smart-city-edge-application-introduction) - [The Smart City Building Blocks](#the-smart-city-building-blocks) - [Smart City App Deployment with OpenNESS](#smart-city-app-deployment-with-openness) - - [Open Visual Cloud and OpenNESS Integration](#open-visual-cloud-and-openness-integration) + - [Open Visual Cloud and OpenNESS Integration using Virtual Machines](#open-visual-cloud-and-openness-integration-using-virtual-machines) - [The Infrastructure Challenges](#the-infrastructure-challenges) - [The Smart City Application Challenges](#the-smart-city-application-challenges) + - [Open Visual Cloud and OpenNESS Integration as cloud-native](#open-visual-cloud-and-openness-integration-as-cloud-native) - [Conclusion](#conclusion) ## OpenNESS Introduction -OpenNESS is an open source software toolkit to enable easy orchestration of edge services across diverse network platform and access technologies in multi-cloud environments. It is inspired by the edge computing architecture defined by the ETSI Multi-access Edge Computing standards (e.g., [ETSI_MEC 003]), as well as the 5G network architecture ([3GPP_23501]). +OpenNESS is an open source software toolkit that enables easy orchestration of edge services across diverse network platform and access technologies in multi-cloud environments. It is inspired by the edge computing architecture defined by the ETSI Multi-access Edge Computing standards (e.g., [ETSI_MEC 003]), as well as the 5G network architecture ([3GPP_23501]). 
-It leverages major industry edge orchestration frameworks, such as Kubernetes and OpenStack, to implement a cloud-native architecture that is multi-platform, multi-access, and multi-cloud. It goes beyond these frameworks, however, by providing the ability for applications to publish their presence and capabilities on the platform, and for other applications to subscribe to those services. Services may be very diverse, from providing location and radio network information, to operating a computer vision system that recognize pedestrians and cars, and forwards metadata from those objects to to downstream traffic safety applications. +OpenNESS leverages major industry edge orchestration frameworks, such as Kubernetes and OpenStack, to implement a cloud-native architecture that is multi-platform, multi-access, and multi-cloud. It goes beyond these frameworks, however, by providing the ability for applications to publish their presence and capabilities on the platform, and for other applications to subscribe to those services. Services may be very diverse, from providing location and radio network information, to operating a computer vision system that recognizes pedestrians and cars and forwards metadata from those objects to downstream traffic safety applications. -OpenNESS is access network agnostic, as it provides an architecture that interoperates with LTE, 5G, WiFi, and wired networks. In edge computing, dataplane flows must be routed to edge nodes with regard to physical location (e.g., proximity to the endpoint, system load on the edge node, special hardware requirements). OpenNESS provides APIs that allow network orchestrators and edge computing controllers to configure routing policies in a uniform manner. +OpenNESS is access network agnostic, as it provides an architecture that interoperates with LTE, 5G, WiFi, and wired networks. In edge computing, dataplane flows must be routed to edge nodes with respect to physical location (e.g., proximity to the endpoint, system load on the edge node, special hardware requirements). OpenNESS provides APIs that allow network orchestrators and edge computing controllers to configure routing policies in a uniform manner. ## Open Visual Cloud Introduction The Open Visual Cloud is an open source project that offers a set of pre-defined reference pipelines for various target visual cloud use cases. These reference pipelines are based on optimized open source ingredients across four core building blocks (encode, decode, inference, and render), which are used to deliver visual cloud services. @@ -36,7 +37,7 @@ OpenNESS provides the underpinning network edge infrastructure which comprises o ![Smart City Architecure Deployed with OpenNESS](ovc-images/smart-city-architecture.png) -The Open Visual Cloud website is located at [Open Visual Cloud project](https://01.org/openvisualcloud). The Smart City sample source code & documentation are available on [GitHub](https://github.com/OpenVisualCloud/Smart-City-Sample) and its integration with OpenNESS is available at this [branch](https://github.com/OpenVisualCloud/Smart-City-Sample/tree/openness). +The Open Visual Cloud website is located at [Open Visual Cloud project](https://01.org/openvisualcloud). The Smart City sample source code & documentation are available on [GitHub](https://github.com/OpenVisualCloud/Smart-City-Sample) and its integration with OpenNESS is available in the [OpenNESS branch](https://github.com/OpenVisualCloud/Smart-City-Sample/tree/openness). 
## The Smart City Building Blocks The Smart City sample consists of the following major building blocks: @@ -58,17 +59,14 @@ Each building block is implemented as one or a set of container services that qu For example, the analytics service when launched queries the database for available camera and its service URI. Then the service connects to the camera and analyzes the camera feeds. The resulted analytics data is stored back to the database for any subsequent processing such as triggering alerts and actions. -## Smart City App Deployment with OpenNESS +## Smart City App Deployment with OpenNESS + +The Smart City application is deployed through the OpenNESS Network Edge architecture which required the application micro-services to be adapted in order to match the distributed nature of the telco network. The application micro-services are deployed across the following sub-networks: -For simplicity, the Smart City sample provides a deployment script that deploys all sample building blocks to a docker swarm cluster. Working with OpenNESS, we need to adapt the sample to deploy the building blocks to different networks: - **Cloud**: The UI and the database master run in the cloud, where the UI displays a summarization view of the active offices and the database master coordinates the database requests. - **Office**: Most processing logics (multiple containers) and a local database reside in a regional office. The services include camera discovery, object detection, and other maintenance tasks such as clean up and health check. There can be multiple offices. - **Camera**: A set of cameras, possibly connected through the wireless network, are hosted on a different camera network. -The deployment solution is described as follows: - -![OVC Smart City Solution Deployment](ovc-images/setup.png) - The three edge nodes (representing three regional offices) are connected to the OpenNESS controller. All the three nodes also have connectivity to the public/private cloud. The following are the typical steps involved in the deployment of the application using OpenNESS. 1. The OpenNESS controller enrolls the three Edge nodes. 2. Each Edge node sends the request for interface configuration. @@ -87,9 +85,9 @@ The **Cloud** and **Camera** parts of the Smart City Application are not part of ![Smart City Application UI](ovc-images/screenshot.gif) -## Open Visual Cloud and OpenNESS Integration +## Open Visual Cloud and OpenNESS Integration using Virtual Machines -The integration of the Smart City application with the OpenNESS infrastructure presents unique challenges on both the application and the infrastructure. +The integration of the Smart City application with the OpenNESS infrastructure presents unique challenges on both the application and the infrastructure. The following challenges were faced when packaging and deploying the Smart City application as Virtual Machines (VM) on OpenNESS: ### The Infrastructure Challenges @@ -105,6 +103,10 @@ OpenNESS limits service requests initiated from the cloud to the Edge nodes. The The deployment script is also rewritten to separate the launch of the services into three networks: cloud, edge and camera. Using VM as a launch vehicle, we also have to develop automation scripts to bring up the containers within VM and to establish secure connections to the cloud for registration and service redirection. 
+## Open Visual Cloud and OpenNESS Integration as cloud-native + +Integrating the [cloud-native Smart City application](https://github.com/OpenVisualCloud/Smart-City-Sample/blob/master/deployment/kubernetes/README.md) with OpenNESS was a seamless process due to the OpenNESS adoption of Kubernetes standard features such as: Namespaces, Services, DaemonSets and Network Policies. In one step, The Smart City application is deployed on the OpenNESS setup based on the reference deployment on vanilla Kubernetes. More details on onboarding the cloud-native Smart City application with OpenNESS is covered at the [application onboarding guide](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/network-edge-applications-onboarding.md#onboarding-smart-city-sample-application). + ## Conclusion The Smart City sample when deployed on the Edge nodes based on OpenNESS creates an impactful edge computing use case that utilizes the capability of OpenNESS, Open Visual Cloud and OpenVINO. The integration shows that the three can run together to show scalable Edge deployment and low-latency analytics processing on Edge nodes. diff --git a/doc/applications/ovc-images/setup.png b/doc/applications/ovc-images/setup.png deleted file mode 100644 index 28e4620a..00000000 Binary files a/doc/applications/ovc-images/setup.png and /dev/null differ diff --git a/doc/arch-images/openness-cera.png b/doc/arch-images/openness-cera.png new file mode 100644 index 00000000..5a771168 Binary files /dev/null and b/doc/arch-images/openness-cera.png differ diff --git a/doc/arch-images/openness-core.png b/doc/arch-images/openness-core.png new file mode 100644 index 00000000..099dc748 Binary files /dev/null and b/doc/arch-images/openness-core.png differ diff --git a/doc/arch-images/openness-flexran.png b/doc/arch-images/openness-flexran.png new file mode 100644 index 00000000..60ab63c4 Binary files /dev/null and b/doc/arch-images/openness-flexran.png differ diff --git a/doc/arch-images/openness-onprem.png b/doc/arch-images/openness-onprem.png new file mode 100644 index 00000000..9254b6e1 Binary files /dev/null and b/doc/arch-images/openness-onprem.png differ diff --git a/doc/arch-images/openness-ovc.png b/doc/arch-images/openness-ovc.png new file mode 100644 index 00000000..1599d3cb Binary files /dev/null and b/doc/arch-images/openness-ovc.png differ diff --git a/doc/arch-images/openness_apponboard.png b/doc/arch-images/openness_apponboard.png index 5e20ea1b..61a4c104 100644 Binary files a/doc/arch-images/openness_apponboard.png and b/doc/arch-images/openness_apponboard.png differ diff --git a/doc/arch-images/openness_epcconfig.png b/doc/arch-images/openness_epcconfig.png index 008689cc..5634a61d 100644 Binary files a/doc/arch-images/openness_epcconfig.png and b/doc/arch-images/openness_epcconfig.png differ diff --git a/doc/arch-images/openness_flow.png b/doc/arch-images/openness_flow.png index 359634a0..abd78e05 100644 Binary files a/doc/arch-images/openness_flow.png and b/doc/arch-images/openness_flow.png differ diff --git a/doc/arch-images/openness_k8sapponboard.png b/doc/arch-images/openness_k8sapponboard.png index 72d124c3..445f3b99 100644 Binary files a/doc/arch-images/openness_k8sapponboard.png and b/doc/arch-images/openness_k8sapponboard.png differ diff --git a/doc/arch-images/openness_k8snodemicro.png b/doc/arch-images/openness_k8snodemicro.png index 38f26a0b..1a208355 100644 Binary files a/doc/arch-images/openness_k8snodemicro.png and b/doc/arch-images/openness_k8snodemicro.png differ diff --git 
a/doc/arch-images/openness_networkedge_ovs.png b/doc/arch-images/openness_networkedge_ovs.png index fccef67e..9349047b 100644 Binary files a/doc/arch-images/openness_networkedge_ovs.png and b/doc/arch-images/openness_networkedge_ovs.png differ diff --git a/doc/arch-images/openness_ngc.png b/doc/arch-images/openness_ngc.png index 456697b9..ba3aa17f 100644 Binary files a/doc/arch-images/openness_ngc.png and b/doc/arch-images/openness_ngc.png differ diff --git a/doc/arch-images/openness_nodeauth.png b/doc/arch-images/openness_nodeauth.png index 451093fe..ce9c329d 100644 Binary files a/doc/arch-images/openness_nodeauth.png and b/doc/arch-images/openness_nodeauth.png differ diff --git a/doc/arch-images/openness_nodemicro.png b/doc/arch-images/openness_nodemicro.png index 45a1d9f9..914d853a 100644 Binary files a/doc/arch-images/openness_nodemicro.png and b/doc/arch-images/openness_nodemicro.png differ diff --git a/doc/arch-images/openness_onprem.png b/doc/arch-images/openness_onprem.png index df2eed5f..20422184 100644 Binary files a/doc/arch-images/openness_onprem.png and b/doc/arch-images/openness_onprem.png differ diff --git a/doc/architecture.md b/doc/architecture.md index 51ed386e..28873e6c 100644 --- a/doc/architecture.md +++ b/doc/architecture.md @@ -1,6 +1,6 @@ ```text SPDX-License-Identifier: Apache-2.0 -Copyright (c) 2019 Intel Corporation +Copyright (c) 2020 Intel Corporation ``` # OpenNESS Architecture and Solution overview @@ -9,8 +9,8 @@ Copyright (c) 2019 Intel Corporation - [Key Terminologies defining OpenNESS](#key-terminologies-defining-openness) - [Overview](#overview) - [OpenNESS Controller Community Edition](#openness-controller-community-edition) - - [Details of Edge Controller Microservices functionality in Native deployment mode:](#details-of-edge-controller-microservices-functionality-in-native-deployment-mode) - - [Details of Edge Controller Microservices functionality in Infrastructure deployment mode:](#details-of-edge-controller-microservices-functionality-in-infrastructure-deployment-mode) + - [Details of Edge Controller Microservices functionality in Native deployment mode](#details-of-edge-controller-microservices-functionality-in-native-deployment-mode) + - [Details of Edge Controller Microservices functionality in Infrastructure deployment mode](#details-of-edge-controller-microservices-functionality-in-infrastructure-deployment-mode) - [Edge Application Onboarding](#edge-application-onboarding) - [Application onboarding in OpenNESS Native deployment mode](#application-onboarding-in-openness-native-deployment-mode) - [Application onboarding in OpenNESS Infrastructure deployment mode](#application-onboarding-in-openness-infrastructure-deployment-mode) @@ -25,6 +25,12 @@ Copyright (c) 2019 Intel Corporation - [Deployment Scenarios](#deployment-scenarios) - [On-Premises Edge Deployment Scenario](#on-premises-edge-deployment-scenario) - [Network Edge Deployment Scenario](#network-edge-deployment-scenario) + - [OpenNESS Support for Deployment flavors](#openness-support-for-deployment-flavors) + - [RAN node flavor](#ran-node-flavor) + - [Core node flavor](#core-node-flavor) + - [Application node flavor](#application-node-flavor) + - [OnPremises application node flavor](#onpremises-application-node-flavor) + - [OnPremises all-in-one node - CERA](#onpremises-all-in-one-node---cera) - [Enhanced Platform Awareness through OpenNESS](#enhanced-platform-awareness-through-openness) - [OpenNESS Edge Node Applications](#openness-edge-node-applications) - [Producer 
Application](#producer-application) @@ -59,7 +65,7 @@ Because it is an open source platform, OpenNESS enables operators, ISVs, and OSV ### Key Terminologies defining OpenNESS - **Orchestration**: Orchestration in the context of OpenNESS refers to exposing northbound APIs for Deploying, Managing, Automating the Edge compute cluster and Applications that run on the cluster. E.g. OpenNESS northbound APIs that can be used by Orchestrators like ONAP for managing the OpenNESS edge solution. - **Edge Services**: Edge Services in the context of OpenNESS refers to the Applications that service end-user traffic and Applications that provide services to other Edge compute Applications. E.g. CDN is an Edge application that services end-user traffic whereas Transcoding services is an application that provides service to CDN application. -- **Network Functions**: Network Functions in the context of OpenNESS refers to typical Container Networking Functions (CNFs) that enable edge cloud deployment in Wireless access, Wireline and WiFi deployments. E.g. 5G UPF is as CNF supports steering application traffic towards edge cloud applications, gNodeB that servers User equipment (UE) in 5G NR millimeter wave or Sub6 deployments etc. +- **Network Functions**: Network Functions in the context of OpenNESS refers to typical Container Networking Functions (CNFs) that enable edge cloud deployment in Wireless access, Wireline and WiFi deployments. E.g. 5G UPF is as CNF supports steering application traffic towards edge cloud applications, gNodeB that serves User equipment (UE) in 5G NR millimeter wave or Sub6 GHz deployments etc. - **Network platform**: Network platform in the context of OpenNESS refers to nodes that are deployed in Network or On-Premises edge compute processing. These are typically COTS platforms which can host both Applications and VNFs. - **Access technologies**: Access technologies in the context of OpenNESS refers to various types of traffic that OpenNESS solution can be handled. They include 5G, LTE (GTP/IP), Wireline (IP) and Wifi (IP). - **Multi Cloud**: Multi Cloud in the context of OpenNESS refers to support in OpenNESS to host multiple Public or Private cloud application on the same node or in the OpenNESS compute cluster. These cloud applications can come from e.g. Amazon AWS Greengrass, Baidu cloud etc. @@ -107,7 +113,7 @@ Hence OpenNESS Controller deployment is described in these two modes: - OpenNESS Native deployment: OpenNESS deployed using controller which interfaces NFV infrastructure directly (libvirt/docker runtime) - OpenNESS Infrastructure deployment: OpenNESS deployed using Kubernetes as an orchestrator -#### Details of Edge Controller Microservices functionality in Native deployment mode: +#### Details of Edge Controller Microservices functionality in Native deployment mode - **Web UI front end**: Reference HTML5 based web front end for Administrator management of Edge Nodes. - **User account management**: Create administrator and user accounts for Edge Node management. - **Edge compute application catalogue**: Provide capability of adding applications to Controller catalogue. 
@@ -117,18 +123,21 @@ Hence OpenNESS Controller deployment is described in these two modes: - Configuration of interfaces and microservices on Edge Nodes - Configuration of traffic policy for the interfaces including Local Breakout (LBO) interface - **Edge Application Lifecycle Management**: Support applications through their lifecycle: - - Expose the silicon micro architecture features on CPU, Accelerator, Network interface etc. through Enhanced Platform Awareness (EPA) framework to the applications for lower overhead and high performance execution. + - Expose the silicon micro architecture features on CPU, HW Accelerator, Network interface etc. through Enhanced Platform Awareness (EPA) framework to the applications for lower overhead and high performance execution. - Deploy edge compute applications from the image repository - Configure the Edge compute application specific Traffic policy - Configure the Edge compute application specific DNS policy +- **Node Feature Discovery (NFD)**: Contains two microservices. One on the controller (master) and one on the edge nodes (worker). NFD workers gather the hardware and software features of the edge node and the information is passed on to the NFD Master on the controller node. This information can be used by the user to deploy the applications to the edge node that meets the specific resource requirements. This ensures reliable performance. +- **Enhanced Platform Awareness**: A subsystem of the application life cycle management that enables users to provide key hardware or software features that need to be made available to the applications when deployed on the edge node. The user is presented with key:value pairs to choose from among the supported EPA features. NFD, when combined with EPA, provides a powerful mechanism for achieving application performance reliability. - **Edge virtualization infrastructure management**: Use underlying virtualization infrastructure, whether directly via libvirt or Docker, or indirectly via Kubernetes, to manage the Edge Node platform and applications. - **Telemetry**: Get basic edge compute microservices telemetry from connected Edge Nodes. - + The Controller microservices make extensive use of the Go programming language and its runtime libraries. -#### Details of Edge Controller Microservices functionality in Infrastructure deployment mode: +#### Details of Edge Controller Microservices functionality in Infrastructure deployment mode - **Core Network Configuration**: Configure the access network (e.g., LTE/CUPS, 5G) control plane. - **Telemetry**: Get basic edge compute microservices telemetry from connected Edge Nodes. +- **Microservices and Enhancements for K8s master**: Set of microservices deployed as daemon sets on the Master to enable deployment, e.g. NFD (master), SR-IOV device plugin, etc. When deployed using Kubernetes, OpenNESS supports key features that expose the silicon micro architecture features of the platform to the applications and network functions to achieve better and reliable performance. This will be described in the Enhanced Platform Awareness (EPA) section later in the document. @@ -149,11 +158,12 @@ OpenNESS Controller is used to onboard an application to the OpenNESS Edge Node. 1. User sets up the HTTPS based Application image server. The image source needs to support HTTPS download. Edge Node trusts public CAs and the one from the controller. 2. 
User uploads the application image (container tar.gz image or VM qcow2) to the HTTPs server and ensures uploaded image is available for download over HTTPS. -3. User initiates the Application deploy step using the Controller UI. This step initiates the download of the image from the HTTPS server to the Edge Node. After this step EVA registers the Application image. -4. User starts the Application, which kick starts the Container/Pod/VM. +3. User uses NFD to identify the edge node that supports the required EPA. +4. User initiates the Application deploy step using the Controller UI to the node that provides the required EPA for the application. This step initiates the download of the image from the HTTPS server to the Edge Node. After this step EVA registers the Application image. +5. User starts the Application, which kick starts the Container/Pod/VM. ##### Application onboarding in OpenNESS Infrastructure deployment mode -OpenNESS users need to use the Kubernetes Master to onboard and application to the OpenNESS Edge Node. OpenNESS support applications that can run in a docker container. Docker image tar.gz. The image source can be a docker registry or HTTPS image repository. The image repository can be an external image server or one that can be deployed on the controller. The figure below shows the steps involved in application onboarding. +OpenNESS users need to use the Kubernetes Master to onboard and application to the OpenNESS Edge Node. OpenNESS support applications that can run in a docker container (Docker image tar.gz). The image source can be a docker registry or HTTPS image repository. The image repository can be an external image server or one that can be deployed on the controller. The figure below shows the steps involved in application onboarding. ![Edge Application Onboarding](arch-images/openness_k8sapponboard.png) @@ -161,7 +171,7 @@ OpenNESS users need to use the Kubernetes Master to onboard and application to t 1. User sets up the HTTPS based Application image server / Docker Registry where application container image is stored. 2. User uploads the application image (container tar.gz) to the HTTPs server. User downloads the application container image to the Edge Node. -3. User initiates the Application deploy step using Kubernetes (kubectl). +3. User initiates the Application deploy step using Kubernetes (kubectl). In this step user uses all the Kubernetes enhancements for EPA using NFD to deploy the application pod on the right node. ### OpenNESS Edge Node @@ -170,7 +180,7 @@ OpenNESS users need to use the Kubernetes Master to onboard and application to t OpenNESS Edge Node hosts a set of microservices to enable Edge compute deployment. The type of OpenNESS microservices deployed on the Edge Node depends on the type of deployment - OpenNESS Infrastructure deployment or OpenNESS Native deployment. This is similar to OpenNESS controller deployment types described above. ##### Edge Node Microservices OpenNESS Native deployment mode -Microservices deployed on the Edge Node in this mode include ELA, EVA, EAA, Syslog, DNS Server and NTS Dataplane. Although ELA, EVA and EAA can be deployed in separate containers. +Microservices deployed on the Edge Node in this mode include ELA, EVA, EAA, Syslog, DNS Server and OVN/OVS-DPDK dataplane or optionally NTS Dataplane. 
![Edge Node Microservices](arch-images/openness_nodemicro.png) @@ -190,8 +200,13 @@ Details of Edge Node Microservices functionality: - **DNS service**: Support DNS resolution and forwarding services for the application deployed on the edge compute. DNS server is implemented based on Go DNS library. DNS service supports resolving DNS requests from User Equipment (UE) and Applications on the edge cloud. - **Edge Node Virtualization infrastructure**: Receive commands from the controller/NFV infrastructure managers to start and stop Applications. This functionality is implemented in the EVA (Edge virtualization Agent) microservice and is implemented in Go lang. - **Edge application traffic policy**: Interface to set traffic policy for application deployed on the Edge Node. This functionality is implemented in the EDA (Edge Dataplane Agent) microservice and is implemented in Go lang. -- **Dataplane Service**: Steers traffic towards applications running on the Edge Node or the Local Break-out Port. - - Utilizing the Data Plane NTS (Network Transport Service), which runs on every Edge Node. It is implemented in C lang using DPDK for high performance IO. This is the recommended dataplane when incoming and outgoing flows is mix of pure IP + S1u (GTPu). +- **Dataplane Service**: Steers traffic towards applications running on the Edge Node or the Local Break-out Port. + + NTS + - NTS (Network Transport Service) is the primary dataplane supported + - It is mainly developed to support S1u deployments + - When NTS is used as the dataplane, OVS-DPDK can be used as the inter-app service. + - Utilizing the Data Plane NTS (Network Transport Service), which runs on every Edge Node. It is implemented in C lang using DPDK for high performance IO. This is the recommended dataplane when incoming and outgoing flows are a mix of pure IP + S1u (GTPu). - Provide Reference ACL based Application specific packet tuple filtering - Provide reference GTPU base packet learning for S1 deployment - Provide reference Simultaneous IP and S1 deployment @@ -205,6 +220,15 @@ Details of Edge Node Microservices functionality: - Dedicated interface created for dataplane based on vhost-user for VM, dpdk-kni for Containers - Container or VM default Interface can be used for Inter-App, management and Internet access from application - Dedicated OVS-DPDK interface for inter-apps communication can be created in case of On-Premises deployment. + +OVN/OVS-DPDK + - The secondary dataplane that is supported in native mode is OVN/OVS-DPDK. + - For non-S1u deployments this should be the dataplane of choice + - OVN manages the IP addresses allocated to the applications + - In this mode both north-south and east-west traffic is supported by OVS-DPDK. + - A vEth pair is used as the interface for containers and virtio for VMs + +>Note: In future releases OVN/OVS-DPDK will be the primary supported dataplane. - **Application Authentication**: Ability to authenticate an Edge compute application deployed from the Controller so that application can avail/call Edge Application APIs. Only applications that intend to call the Edge Application APIs need to be authenticated. TLS certificate based Authentication is implemented. @@ -228,11 +252,12 @@ Details of Edge Node Microservices functionality: - **DNS service**: Support DNS resolution and forwarding services for the application deployed on the edge compute. DNS server is implemented based on Go DNS library. DNS service supports resolving DNS requests from User Equipment (UE) and Applications on the edge cloud. 
- **Dataplane Service**: Steers traffic towards applications running on the Edge Node or the Local Break-out Port. - Using OVN/OVS as Dataplane - recommended dataplane when incoming and outgoing flows are based on pure IP. + - Dataplane is supported in both OVS only or OVS-DPDK mode for higher performance. - Implemented using [kube-ovn](https://github.com/alauda/kube-ovn) - Provides IP 5-tuple based flow filtering and forwarding - Same Interface can be used for Inter-App, management, Internet and Dataplane interface - -- **Application Authentication**: Ability to authenticate Edge compute application deployed from Controller so that application can avail/call Edge Application APIs. Only applications that intend to call the Edge Application APIs need to be authenticated. TLS certificate based Authentication is implemented. +- **Application Authentication**: Ability to authenticate Edge compute application deployed from the Controller so that the application can avail/call of Edge Application APIs. Only applications that intend to call the Edge Application APIs need to be authenticated. TLS certificate based Authentication is implemented. +- **Microservices and Enhancements for node**: Set of microservice as daemon/replica set deployed on node to enable Cloud Native deployment. E.g. NFD (worker), multus, SRI-OV device plugin, etc. ![OpenNESS Application Authentication](arch-images/openness_k8sappauth.png) @@ -258,8 +283,9 @@ API endpoint for edge applications is implemented in the EAA (Edge Application A - **Edge Node telemetry**: Utilizing the rsyslog, all OpenNESS microservices send telemetry updates which includes the logging and packet forwarding statistics data from the dataplane. This is also the mechanism that is encouraged for OpenNESS users for Debugging and Troubleshooting. **OpenNESS Edge Node Resource usage**: -- All non-critical/non-realtime microservices on the OpenNESS Edge Node execute OS core typically Core 0. -- Dataplane NTS and DPDK PMD thread requires a dedicated core/thread for high performance. +- All non-critical/non-realtime microservices on the OpenNESS Edge Node execute on OS core typically Core 0. +- Dataplane NTS and DPDK PMD thread requires a dedicated core/thread for high performance. +- Dataplane OVS-DPDK requires dedicated core/thread for high performance. - DPDK library is used for the dataplane implementation 1G/2M hugepages support is required on the host. #### Edge Compute Applications: Native on the Edge Node @@ -282,14 +308,11 @@ OpenNESS may be deployed on 5G, LTE or IP (wireless or wireline) networks. The n OpenNESS supports multiple deployment options on an 5G Stand alone and LTE cellular network, as shown in Figure below. - - 5G Standalone edge cloud deployment OpenNESS supports deployment of the Edge cloud as per the [3GPP_29.522 Rel v15.3]. In this mode, OpenNESS uses the 3GPP defined Service Based Architecture (SBA) REST APIs. The APIs use the "traffic influence" feature of the Application Function (AF) for Local Data Network (Edge cloud) processing. - Edge cloud deployment on CUPS or SGi The Edge Node may be attached to the SGi interface of an EPC. Traffic from the EPC arrives as IP traffic, and is steered as appropriate to edge applications. EPCs may combine the control or user plane, or they may follow the Control-User Plane Separation (CUPS) architecture of [3GPP_23214], which provides for greater flexibility in routing data plane traffic through the LTE network. 
When EPC CUPS is deployed OpenNESS supports reference Core Network Configuration for APN based traffic steering for local edge cloud. - - S1-U deployment in On-Premises Private LTE deployment : Following [3GPP_23401], the Edge Node may be deployed on the S1 interface from an eNB. In this mode, traffic is intercepted by the Edge Node dataplane, which either redirects the traffic to edge applications or passes it through an upstream EPC. In this option, arriving traffic is encapsulated in a GTP tunnel; the dataplane handles decapsulation/encapsulation as required. - ![OpenNESS Multi-access support](arch-images/openness_multiaccess.png) @@ -313,7 +336,7 @@ Certain On-Premises Edge deployments might not have a dedicated infrastructure m ![On-Premises Edge compute](arch-images/openness_onprem.png) -_Figure - On-Premises Edge Deployment Scenario without external Orchestrator_ +_Figure - On-Premises Edge Deployment Scenario without external Orchestrator_ ### Network Edge Deployment Scenario The network edge deployment scenario is depicted in Figure below. In this scenario, Edge Nodes are located in facilities owned by a network operator (e.g., a central office, Regional Data Center), and to be part of a data network including access network (4G, 5GNR), core network (EPC, NGC), and edge computing infrastructure owned by a network operator. For economy of scale, this network is likely to be multi-tenant, and to be of very large scale (a national network operator may have thousands, or tens of thousands, of Edge Nodes). This network is likely to employ managed virtualization (e.g., OpenStack, Kubernetes) and be integrated with an operations and support system through which not only the edge computing infrastructure, but the network infrastructure, is managed. @@ -332,6 +355,36 @@ In this mode OVN/OVS can support: _Figure - Network Edge Deployment Scenario with OVS as dataplane_ +## OpenNESS Support for Deployment flavors +Having looked at the Deployment scenarios, let us now look at the individual Deployment flavors supported by OpenNESS. Deployment flavors here refer to the types of nodes that are typically deployed at the edge using OpenNESS. Flavors are mainly categorized by the workloads that are running on the node. Below are examples of flavors supported on the network edge: + +### RAN node flavor +RAN node here typically refers to RAN DU and CU 4G/5G nodes deployed on the edge or far edge. In some cases the DU might be integrated into the radio. The example RAN deployment flavor uses FlexRAN as the reference DU. + +![RAN node flavor](arch-images/openness-flexran.png) + +### Core node flavor +Core nodes here typically refer to User plane and Control plane Core workloads for 4G and 5G deployed on the edge and in central locations. In most of the edge deployments the UPF/SPGW-U user plane is located on the edge along with the applications and services. For ease of representation the diagram shows how OpenNESS can be used to deploy both User plane and Control plane Core nodes. + +![Core node flavor](arch-images/openness-core.png) + +### Application node flavor +Application nodes here typically refer to nodes running edge applications and services. The Applications can be Smart City, CDN, AR/VR, Cloud Gaming, etc. In the example flavor below, the Smart City application pipeline is used. 
+ +![Application node flavor](arch-images/openness-ovc.png) + +Below are the example flavors for the On-premises deployment: + +### OnPremises application node flavor +OnPremises nodes typically host the user plane core network function and edge applications. + +![OnPremises application node flavor](arch-images/openness-onprem.png) + +### OnPremises all-in-one node - CERA +CERA (Converged Edge Reference Architecture) is another OnPremises flavor where, along with the user plane, the wireless access/RAN is also part of the node. + +![OnPremises CERA node flavor](arch-images/openness-cera.png) + ## Enhanced Platform Awareness through OpenNESS Enhanced Platform Awareness (EPA) represents a methodology and a related suite of changes across multiple layers of the orchestration stack targeting intelligent platform capability, configuration & capacity consumption. EPA features include Huge Pages support, NUMA topology awareness, CPU pinning, integration with OVS-DPDK, support for I/O Pass-through via SR-IOV, HDDL support, FPGA resource allocation support and many others. @@ -341,8 +394,8 @@ OpenNESS provides a one-stop solution to integrate key EPA features that are cri Edge Compute EPA- feature for Network edge and availability for CNF, Apps and Services on the edge - CPU Manager: Support deployment of a POD with dedicated pinning using CPU manager for K8s -- SRIOV NIC: Support deployment of a POD with dedicated SRIOV Virtual Function (VF) from Network Interface Card (NIC) -- SRIOV FPGA: Support deployment of a POD with dedicated SRIOV VF from FPGA (Demonstrated through Intel® FPGA Programmable Acceleration Card PAC N3000 with FPGA IP Wireless 5G FEC/LDPC) +- SR-IOV NIC: Support deployment of a POD with dedicated SR-IOV Virtual Function (VF) from Network Interface Card (NIC) +- SR-IOV FPGA: Support deployment of a POD with dedicated SR-IOV VF from FPGA (Demonstrated through Intel® FPGA Programmable Acceleration Card PAC N3000 with FPGA IP Wireless 5G FEC/LDPC) - Topology Manager: Supports k8s to manage the resources allocated to workloads in a Non-uniform memory access (NUMA) topology-aware manner - BIOS/Firmware Configuration service: Use intel syscfg tool to build a Pod that is scheduled by K8s as a job that configures the BIOS/FW with the given specification - Hugepages: Support for allocation of 1G/2M huge pages to the Pod. Huge page allocation is done through K8s @@ -350,17 +403,28 @@ Edge Compute EPA- feature for Network edge and availability for CNF, Apps and Se - Node Feature discovery: Support detection of Silicon and Software features and automation of deployment of CNF, Applications and services - FPGA Remote System Update service: Support Intel OPAE (fpgautil) tool to build a Pod that is scheduled by K8s as a job that updated the FPGA with the new RTL - Real-time Kernel - Support for the K8s Edge Node running real time kernel +- Support for running legacy applications in VM mode using KubeVirt and allocation of SR-IOV Ethernet interfaces to VMs - Non-Privileged Container: Support deployment of non-privileged pods (CNFs and Applications as reference) + +Edge Compute EPA- feature for On-Premises edge - Support for allocation of Intel® Movidius™ VPUs to the OnPrem applications running in Docker containers. +- Support for dedicated core allocation to applications running as VMs or Containers +- Support for dedicated SR-IOV VF allocation to applications running in VMs or containers +> Note: when using SR-IOV VFs in containers the VF is bound to the kernel driver. 
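The Network Edge EPA features listed above are consumed through standard Kubernetes resource requests when the application pod is deployed. A minimal sketch is shown below; the SR-IOV resource name (`intel.com/intel_sriov_netdevice`), the image name and the sizing are illustrative assumptions that depend on how the SR-IOV device plugin and the node are configured in a given deployment.

```shell
# Hedged sketch: a pod requesting dedicated cores (CPU Manager), 1G hugepages and an
# SR-IOV VF. Resource name and image are placeholders for a concrete deployment.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: epa-sample-app
spec:
  containers:
  - name: epa-sample-app
    image: epa-sample-app:1.0
    resources:
      requests:
        cpu: "4"                              # integer CPUs with limits == requests -> Guaranteed QoS, pinned cores
        memory: "2Gi"
        hugepages-1Gi: "2Gi"
        intel.com/intel_sriov_netdevice: "1"  # assumed SR-IOV device plugin resource name
      limits:
        cpu: "4"
        memory: "2Gi"
        hugepages-1Gi: "2Gi"
        intel.com/intel_sriov_netdevice: "1"
    volumeMounts:
    - name: hugepages
      mountPath: /dev/hugepages
  volumes:
  - name: hugepages
    emptyDir:
      medium: HugePages
EOF
```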
+- Support for system resource allocation into the application running as container + - Mount point for shared storage + - Pass environment variables + - Configure the port rules +- Non-Privileged Container: Support deployment of non-privileged containers ## OpenNESS Edge Node Applications -OpenNESS Applications are onboarded and provisioned on the Edge Node through OpenNESS Controller in Native mode and K8s master in K8s mode. In K8s mode OpenNESS also supports onboarding of the Network Functions like RAN, Core, Firewall, etc. +OpenNESS Applications are onboarded and provisioned on the Edge Node through OpenNESS Controller in Native mode, and through K8s master in K8s mode. In K8s mode OpenNESS also supports onboarding of the Network Functions like RAN, Core, Firewall, etc. OpenNESS application can be categorized in different ways depending on the scenarios. - Depending on the OpenNESS APIs support - Edge Cloud applications: Applications calling EAA APIs for providing or consuming services on the edge compute along with servicing end-users traffic - - Unmodified cloud applications: Applications not availing of any services on the edge compute just servicing end-user traffic + - Unmodified cloud applications: Applications not availing of any services on the edge compute, just servicing end-user traffic - Depending on the Application Execution platform - Application running natively on Edge Node in a VM/Container provisioned by the OpenNESS controller @@ -401,7 +465,7 @@ OpenNESS APIs provide a mechanism to utilize platform resources efficiently in t ![OpenNESS Reference Application](arch-images/openness_hddlr.png) -More details about HDDL-R support in OpenNESS for Applications using OpenVINO SDK can be found here [Using Intel® Movidius™ Myriad™ X High Density Deep Learning (HDDL) solution in OpenNESS](https://github.com/open-ness/specs/blob/master/doc/openness_hddl.md). +More details about HDDL-R support in OpenNESS for Applications using OpenVINO SDK can be found here [Using Intel® Movidius™ Myriad™ X High Density Deep Learning (HDDL) solution in OpenNESS](https://github.com/open-ness/specs/blob/master/doc/enhanced-platform-awareness/openness_hddl.md). ### Cloud Adapter Edge compute Application All the major Cloud Service providers are implementing frameworks to deploy edge applications that link back to their cloud via connectors. For example, Amazon Greengrass enables lambda functions to be deployed on the edge and connecting to the AWS cloud using the GreenGrass service. While it was originally intended to host this type of edge software on IoT gateways, the same framework can be utilized by Service Providers and Enterprises, to implement a multi-cloud strategy for their Edge Nodes. @@ -414,9 +478,9 @@ OpenNESS supports this by providing the ability to deploy public cloud IOT gatew _Figure - Example of Cloud Adapter Edge Application in OpenNESS Platform_ -More details about running Baidu OpenEdge as OpenNESS application can be found here [Baidu OpenEdge Edge Application](https://github.com/open-ness/specs/blob/master/doc/openness_baiducloud.md). +More details about running Baidu OpenEdge as OpenNESS application can be found here [Baidu OpenEdge Edge Application](https://github.com/open-ness/specs/blob/master/doc/cloud-adapters/openness_baiducloud.md). -More details about running Amazon AWS IoT Greengrass as OpenNESS application can be found here [Amazon AWS IoT Greengrass Edge Application](https://github.com/open-ness/specs/blob/master/doc/openness_awsgreengrass.md). 
+More details about running Amazon AWS IoT Greengrass as OpenNESS application can be found here [Amazon AWS IoT Greengrass Edge Application](https://github.com/open-ness/specs/blob/master/doc/cloud-adapters/openness_awsgreengrass.md). ## OpenNESS Microservices and APIs @@ -438,7 +502,7 @@ Edge Application APIs are implemented by the EAA. Edge Application APIs are impo OpenNESS supports deployment of both types of applications mentioned above. The Edge Application Agent is a service that runs on the Edge Node and operates as a discovery service and basic message bus between applications via pubsub. The connectivity and discoverability of applications by one another is governed by an entitlement system and is controlled by policies set with the OpenNESS Controller. The entitlement system is still in its infancy, however, and currently allows all applications on the executing Edge Node to discover one another as well as publish and subscribe to all notifications. The Figure below provides the sequence diagram of the supported APIs for the application -More details about the APIs can be found here [Edge Application APIs](https://www.openness.org/resources) +More details about the APIs can be found here [Edge Application APIs](https://www.openness.org/api-documentation/?api=eaa) ![Edge Application APIs](arch-images/openness_eaa.png) @@ -453,12 +517,12 @@ For applications executing on the Local breakout the Authentication is not appli Authentication APIs are implemented as HTTP REST APIs. -More details about the APIs can be found here [Application Authentication APIs](https://www.openness.org/resources) +More details about the APIs can be found here [Application Authentication APIs](https://www.openness.org/api-documentation/?api=auth) ### Edge Lifecycle Management APIs ELA APIs are implemented by the ELA microservice on the Edge Node. The ELA runs on the Edge Node and operates as a deployment and lifecycle service for Edge applications and VNFs (Virtual Network Functions) that are needed for Edge compute deployment like e.g. 4G EPC CUPS User plane and DNS server. It also provides network interface, network zone, and application/interface policy services. -ELA APIs are implemented over gRPC. For the purpose of visualization they are converted to json and can be found here [Edge Lifecycle Management APIs](https://www.openness.org/resources) +ELA APIs are implemented over gRPC. For the purpose of visualization they are converted to json and can be found here [Edge Lifecycle Management APIs](https://github.com/open-ness/specs/blob/master/schema/pb/ela.proto) ### Edge Virtualization Infrastructure APIs EVA APIs are implemented by the EVA microservice on the Edge Node. The EVA operates as a mediator between the infrastructure that the apps run on and the other edge components. @@ -467,7 +531,7 @@ The EVA abstracts how applications were deployed. In order to achieve this, ther As an example, an RPC to list the running applications on the node is achieved by calling the Docker daemon and virsh list on the Edge Node, get its response data and show the status of the running applications. -EVA APIs are implemented over gRPC. For the purpose of visualization they are converted to json and can be found here [Edge Virtualization Infrastructure APIs](https://www.openness.org/resources) +EVA APIs are implemented over gRPC. 
For the purpose of visualization they are converted to json and can be found here [Edge Virtualization Infrastructure APIs](https://github.com/open-ness/specs/blob/master/schema/pb/eva.proto) ### Core Network Configuration APIs for edge compute @@ -478,9 +542,33 @@ OpenNESS controller community edition supports configuration of the 5G Applicati _Figure - OpenNESS 5G end-to-end test setup_ -More details about the APIs can be found here [CNCA APIs](https://www.openness.org/resources). +Features supported by 5G Components of OpenNESS (AF, NEF, CNCA, WEB UI): + +Traffic Influence Submission API support: 3GPP 23.502 Sec. 5.2.6.7 Traffic Influence Service +- AF: Added support for traffic influence submission northbound API +- NEF: Added support for traffic influence submission API with stubs to loopback UDM and PCF calls +- kubectl support for traffic influence config in Network Edge mode +- WEB UI support for traffic influence config in OnPrem mode + +OAM service +- Registration of AF service with 5G CP + +PFD Management API support (3GPP 23.502 Sec. 5.2.6.3 PFD Management service) +- AF: Added support for PFD Northbound API +- NEF: Added support for PFD southbound API, and Stubs to loopback the PCF calls. +- kubectl: Enhanced CNCA kube-ctl plugin to configure PFD parameters +- WEB UI: Enhanced CNCA WEB UI to configure PFD params in OnPrem mode + +OAuth2 based authentication between 5G Network Functions (as per 3GPP Standard) +- Implemented OAuth2 based authentication and validation +- The AF and NEF communication channel is updated to authenticate based on an OAuth2 JWT token in addition to HTTP2. + +HTTPS support +- Enhanced the OAM and CNCA (web-ui and kube-ctl) to use an HTTPS interface + +More details about the APIs can be found here [CNCA APIs](https://www.openness.org/api-documentation/?api=cups). -Whitepaper describing the details of the Edge Computing support in 5G NGC can be found here [5G Edge Compute supports in OpenNESS](https://github.com/open-ness/specs/blob/master/doc/openness_ngc.md). +Whitepaper describing the details of the Edge Computing support in 5G NGC can be found here [5G Edge Compute supports in OpenNESS](https://github.com/open-ness/specs/blob/master/doc/core-network/openness_ngc.md). ### Core Network Configuration API for 4G CUPS As part of the OpenNESS reference edge stack the OpenNESS controller community edition is used for configuring the traffic policy for CUPS EPC to steer traffic towards the edge compute, This API is based on HTTP REST. Since 3GPP or ETSI MEC does not provide a reference for these APIs various implementation of this Edge Controller to CUPS EPC might exist. OpenNESS has tried to take the approach of minimal changes to 3GPP CUPS EPC to achieve the edge compute deployment. OpenNESS and HTTP REST APIs for the EPC CUPS is a reference implementation to enable customers using OpenNESS to integrate their own HTTP REST APIs to the EPC CUPS into the OpenNESS Controller. Special care has been taken to make these components Modular microservices. The diagram below show the LTE environment that was used for testing OpenNESS edge compute end-to-end. @@ -495,9 +583,9 @@ The OpenNESS reference solution provides a framework for managing multiple Edge _Figure - LTE EPC Configuration_ -More details about the APIs can be found here [CNCA APIs](https://www.openness.org/resources). 
-Whitepaper describing the details of the CUPS support in EPC can be found here [4G CUPS Edge Compute supports in OpenNESS](https://github.com/open-ness/specs/blob/master/doc/openness_epc.md). +Whitepaper describing the details of the CUPS support in EPC can be found here [4G CUPS Edge Compute supports in OpenNESS](https://github.com/open-ness/specs/blob/master/doc/core-network/openness_epc.md). ### OpenNESS Controller APIs OpenNESS Controller APIs are important APIs for those managing one or more OpenNESS Edge Nodes. OpenNESS Controller APIs are called by the UI frontend and can be called by external orchestrators. These APIs allow centralized management of OpenNESS Edge Nodes. The API enables a developer to maintain a list of OpenNESS Edge Nodes, configure apps, manage policies and DNS, and more. The OpenNESS Controller API represents an abstraction layer for an operations administrator. While individual OpenNESS Edge Nodes may be managed singularly, the OpenNESS Controller API allows for management in a scalable way. Furthermore, it allows for secure communication to the many Edge Nodes. @@ -587,6 +675,6 @@ _Figure - Setting up OpenNESS_ - UPF: User Plane Function - DN: Data Network - AF: Application Function -- SRIOV: Single Root I/O Virtualization +- SR-IOV: Single Root I/O Virtualization - NUMA: Non-Uniform Memory Access - COTS: Commercial Off-The-Shelf diff --git a/doc/core-network/ngc-images/OAuth2.png b/doc/core-network/ngc-images/OAuth2.png new file mode 100644 index 00000000..e8c531db Binary files /dev/null and b/doc/core-network/ngc-images/OAuth2.png differ diff --git a/doc/core-network/ngc-images/PFD_Management_transaction_add.png b/doc/core-network/ngc-images/PFD_Management_transaction_add.png new file mode 100644 index 00000000..76a5bf0b Binary files /dev/null and b/doc/core-network/ngc-images/PFD_Management_transaction_add.png differ diff --git a/doc/core-network/ngc-images/PFD_Management_transaction_del.png b/doc/core-network/ngc-images/PFD_Management_transaction_del.png new file mode 100644 index 00000000..7322575c Binary files /dev/null and b/doc/core-network/ngc-images/PFD_Management_transaction_del.png differ diff --git a/doc/core-network/ngc-images/PFD_Management_transaction_get.png b/doc/core-network/ngc-images/PFD_Management_transaction_get.png new file mode 100644 index 00000000..442b58d7 Binary files /dev/null and b/doc/core-network/ngc-images/PFD_Management_transaction_get.png differ diff --git a/doc/core-network/ngc-images/PFD_Management_transaction_update.png b/doc/core-network/ngc-images/PFD_Management_transaction_update.png new file mode 100644 index 00000000..7222d94c Binary files /dev/null and b/doc/core-network/ngc-images/PFD_Management_transaction_update.png differ diff --git a/doc/core-network/ngc-images/e2e_edge_deployment_flows.png b/doc/core-network/ngc-images/e2e_edge_deployment_flows.png index 4e9d90e4..b0f22bd0 100644 Binary files a/doc/core-network/ngc-images/e2e_edge_deployment_flows.png and b/doc/core-network/ngc-images/e2e_edge_deployment_flows.png differ diff --git a/doc/core-network/ngc_flows/AF_traffic_influence_add.uml b/doc/core-network/ngc_flows/AF_traffic_influence_add.uml new file mode 100644 index 00000000..618e72d4 --- /dev/null +++ b/doc/core-network/ngc_flows/AF_traffic_influence_add.uml @@ -0,0 +1,50 @@ +@startuml +/' SPDX-License-Identifier: Apache-2.0 + Copyright (c) 2020 Intel Corporation +'/ +skinparam monochrome false +skinparam roundcorner 20 +skinparam defaultFontName "Intel Clear" +skinparam defaultFontSize 20 +skinparam 
maxmessagesize 300 +skinparam sequenceArrowThickness 3 + +header Intel Corporation +footer Proprietary and Confidential +title Traffic influencing flows between OpenNESS controller and 5G Core + +actor "User/Admin" as user +box "OpenNESS Controller components" #LightBlue + participant "UI/CLI" as cnca + participant "AF Microservice" as af +end box +box "5G Core components" #LightGreen + participant "NEF" as nef + note over nef + OpenNESS provided + Core component with + limited functionality + end note + participant "NGC\nCP Functions" as ngccp +end box + +group Traffic influence submission flow + user -> cnca : Traffic influencing request + activate cnca + cnca -> af : /af/v1/subscriptions: POST \n {3GPP TS 29.522v15.3 \n Sec. 5.4}* + activate af + af -> nef : /3gpp-traffic-Influence/v1/{afId}/subscriptions : POST \n {3GPP TS 29.522v15.3 \n Sec. 5.4} + activate nef + nef -> nef : NGC_STUB(PCF,UDR,BSF) + nef -> ngccp : {Open: 3rd party NGC integration with OpenNESS(NEF)} + ngccp --> nef : + nef --> af : OK: {subscriptionId} \n ERROR: {400/500} + deactivate nef + af --> cnca : OK: {subscriptionId} \n ERROR: {400/500} + deactivate af + cnca --> user : Success: {subscriptionId} + deactivate cnca +end group + +@enduml + diff --git a/doc/core-network/ngc_flows/AF_traffic_influence_delete.uml b/doc/core-network/ngc_flows/AF_traffic_influence_delete.uml new file mode 100644 index 00000000..037c12c8 --- /dev/null +++ b/doc/core-network/ngc_flows/AF_traffic_influence_delete.uml @@ -0,0 +1,51 @@ +@startuml +/' SPDX-License-Identifier: Apache-2.0 + Copyright (c) 2020 Intel Corporation +'/ +skinparam monochrome false +skinparam roundcorner 20 +skinparam defaultFontName "Intel Clear" +skinparam defaultFontSize 20 +skinparam maxmessagesize 300 +skinparam sequenceArrowThickness 3 + +header Intel Corporation +footer Proprietary and Confidential +title Traffic influencing flows between OpenNESS controller and 5G Core + +actor "User/Admin" as user +box "OpenNESS Controller components" #LightBlue + participant "UI/CLI" as cnca + participant "AF Microservice" as af +end box +box "5G Core components" #LightGreen + participant "NEF" as nef + note over nef + OpenNESS provided + Core component with + limited functionality + end note + participant "NGC\nCP Functions" as ngccp +end box + + +group Delete a subscribed traffic influence by subscriptionId + user -> cnca : Delete request by subscriptionId + activate cnca + cnca -> af : /af/v1/subscriptions/{subscriptionId} : DELETE + activate af + af -> nef : /3gpp-traffic-Influence/v1/{afId}/subscriptions/{subscriptionId} : DELETE + activate nef + nef -> nef : NGC_STUB(PCF,UDR,BSF) + nef -> ngccp : {Open: 3rd party NGC integration with OpenNESS(NEF)} + ngccp --> nef : + nef --> af : OK : Delete success \n ERROR: {400/500} + deactivate nef + af --> cnca : OK : Delete success \n ERROR: {400/500} + deactivate af + cnca --> user : Success/Error + deactivate cnca +end group + +@enduml + diff --git a/doc/core-network/ngc_flows/AF_traffic_influence_get.uml b/doc/core-network/ngc_flows/AF_traffic_influence_get.uml new file mode 100644 index 00000000..fb051437 --- /dev/null +++ b/doc/core-network/ngc_flows/AF_traffic_influence_get.uml @@ -0,0 +1,68 @@ +@startuml +/' SPDX-License-Identifier: Apache-2.0 + Copyright (c) 2020 Intel Corporation +'/ +skinparam monochrome false +skinparam roundcorner 20 +skinparam defaultFontName "Intel Clear" +skinparam defaultFontSize 20 +skinparam maxmessagesize 300 +skinparam sequenceArrowThickness 3 + +header Intel Corporation +footer Proprietary 
and Confidential +title Traffic influencing flows between OpenNESS controller and 5G Core + +actor "User/Admin" as user +box "OpenNESS Controller components" #LightBlue + participant "UI/CLI" as cnca + participant "AF Microservice" as af +end box +box "5G Core components" #LightGreen + participant "NEF" as nef + note over nef + OpenNESS provided + Core component with + limited functionality + end note + participant "NGC\nCP Functions" as ngccp +end box + +group Get all subscribed traffic influence info + user -> cnca : Request all traffic influence subscribed + activate cnca + cnca -> af : /af/v1/subscriptions : GET + activate af + af -> nef : /3gpp-traffic-Influence/v1/{afId}/subscriptions : GET + activate nef + nef -> nef : NGC_STUB(PCF,UDR,BSF) + nef -> ngccp : {Open: 3rd party NGC integration with OpenNESS(NEF)} + ngccp --> nef : + nef --> af : OK: traffic influence info \n ERROR: {400/500} + deactivate nef + af --> cnca : OK: traffic influence info \n ERROR: {400/500} + deactivate af + cnca --> user : Traffic influence details + deactivate cnca +end group + +group Get subscribed traffic influence info by subscriptionId + user -> cnca : Request traffic influence using subscriptionId + activate cnca + cnca -> af : /af/v1/subscriptions/{subscriptionId} : GET + activate af + af -> nef : /3gpp-traffic-Influence/v1/{afId}/subscriptions/{subscriptionId} : GET + activate nef + nef -> nef : NGC_STUB(PCF,UDR,BSF) + nef -> ngccp : {Open: 3rd party NGC integration with OpenNESS(NEF)} + ngccp --> nef : + nef --> af : OK: traffic influence info \n ERROR: {400/500} + deactivate nef + af --> cnca : OK: traffic influence info \n ERROR: {400/500} + deactivate af + cnca --> user : Traffic influence details + deactivate cnca +end group + +@enduml + diff --git a/doc/core-network/ngc_flows/AF_traffic_influence_update.uml b/doc/core-network/ngc_flows/AF_traffic_influence_update.uml new file mode 100644 index 00000000..eb5c26a8 --- /dev/null +++ b/doc/core-network/ngc_flows/AF_traffic_influence_update.uml @@ -0,0 +1,50 @@ +@startuml +/' SPDX-License-Identifier: Apache-2.0 + Copyright (c) 2020 Intel Corporation +'/ +skinparam monochrome false +skinparam roundcorner 20 +skinparam defaultFontName "Intel Clear" +skinparam defaultFontSize 20 +skinparam maxmessagesize 300 +skinparam sequenceArrowThickness 3 + +header Intel Corporation +footer Proprietary and Confidential +title Traffic influencing flows between OpenNESS controller and 5G Core + +actor "User/Admin" as user +box "OpenNESS Controller components" #LightBlue + participant "UI/CLI" as cnca + participant "AF Microservice" as af +end box +box "5G Core components" #LightGreen + participant "NEF" as nef + note over nef + OpenNESS provided + Core component with + limited functionality + end note + participant "NGC\nCP Functions" as ngccp +end box + +group Update a subscribed traffic influence by subscriptionId + user -> cnca : Update request by subscriptionId + activate cnca + cnca -> af : /af/v1/subscriptions/{subscriptionId} : PUT + activate af + af -> nef : /3gpp-traffic-Influence/v1/{afId}/subscriptions/{subscriptionId} : PUT + activate nef + nef -> nef : NGC_STUB(PCF,UDR,BSF) + nef -> ngccp : {Open: 3rd party NGC integration with OpenNESS(NEF)} + ngccp --> nef : + nef --> af : OK : Update success, traffic influence info \n ERROR: {400/500} + deactivate nef + af --> cnca : OK : Update success, traffic influence info \n ERROR: {400/500} + deactivate af + cnca --> user : Success/Error + deactivate cnca +end group + +@enduml + diff --git 
a/doc/core-network/ngc_flows/PFD_Management_transaction_delete.uml b/doc/core-network/ngc_flows/PFD_Management_transaction_delete.uml new file mode 100644 index 00000000..dfaca683 --- /dev/null +++ b/doc/core-network/ngc_flows/PFD_Management_transaction_delete.uml @@ -0,0 +1,60 @@ +@startuml +/' SPDX-License-Identifier: Apache-2.0 + Copyright (c) 2020 Intel Corporation +'/ +skinparam monochrome false +skinparam roundcorner 20 +skinparam defaultFontName "Intel Clear" +skinparam defaultFontSize 20 +skinparam maxmessagesize 300 +skinparam sequenceArrowThickness 3 + +header Intel Corporation +footer Proprietary and Confidential + +title PFD flow between Openness and 5G core + +actor Admin as user +participant "UI/CLI" as UI +participant "Af Microservice" as AF +participant "NEF" as NEF +participant "NGC CP Function" as 5GC + +box "OpenNESS Controller Components" #LightBlue + participant UI + participant AF +end box + +box "5G Core Components" #LightGreen + participant NEF + note over NEF + OpenNESS provided + Core component with + limited functionality + end note + participant 5GC +end box + +group Detete PFD transaction with transactionId + user -> UI : Detete request by transactionId + UI -> AF : af/v1/pfd/transactions/{transactionId} DELETE + AF -> NEF : 3gpp-pfd-management/v1/{scsAsId}/transactions/{transactionId} DELETE + NEF -> NEF : NGC_STUB (UDR SMF) + + NEF -> AF : OK \ ERROR + AF -> UI : OK: \ ERROR + UI -> user : Success \ ERROR +end + +group Detete applicationId in PFD transaction + user -> UI : Detete request by applicationId + UI -> AF : af/v1/pfd/transactions/{transactionId}/{transactionId}/applications/{appId} DELETE + AF -> NEF : 3gpp-pfd-management/v1/{scsAsId}/transactions/{transactionId}/{transactionId}/applications/{appId} DELETE + NEF -> NEF : NGC_STUB (UDR SMF) + + NEF -> AF : OK \ ERROR + AF -> UI : OK: \ ERROR + UI -> user : Success \ ERROR +end + +@enduml \ No newline at end of file diff --git a/doc/core-network/ngc_flows/PFD_Management_transaction_get.uml b/doc/core-network/ngc_flows/PFD_Management_transaction_get.uml new file mode 100644 index 00000000..b521f8c1 --- /dev/null +++ b/doc/core-network/ngc_flows/PFD_Management_transaction_get.uml @@ -0,0 +1,72 @@ +@startuml +/' SPDX-License-Identifier: Apache-2.0 + Copyright (c) 2020 Intel Corporation +'/ +skinparam monochrome false +skinparam roundcorner 20 +skinparam defaultFontName "Intel Clear" +skinparam defaultFontSize 20 +skinparam maxmessagesize 300 +skinparam sequenceArrowThickness 3 + +header Intel Corporation +footer Proprietary and Confidential + +title PFD flow between Openness and 5G core + + +actor Admin as user +participant "UI/CLI" as UI +participant "Af Microservice" as AF +participant "NEF" as NEF +participant "NGC CP Function" as 5GC + +box "OpenNESS Controller Components" #LightBlue + participant UI + participant AF +end box + +box "5G Core Components" #LightGreen + participant NEF + note over NEF + OpenNESS provided + Core component with + limited functionality + end note + participant 5GC +end box + +group Get all transactions with PFD + user -> UI : Request all PFD transactions + UI -> AF : af/v1/pfd/transaction GET + AF -> NEF : 3gpp-pfd-management/v1/{scsAsId}/transactions/ GET + NEF -> NEF : NGC_STUB (UDR SMF) + + NEF -> AF : OK: PFD information \ ERROR + AF -> UI : OK: PFD information \ ERROR + UI -> user : PFD details +end + +group Get transactions with transactionId + user -> UI : Request transaction with transactionId + UI -> AF : af/v1/pfd/transactions/{transactionId} GET + AF -> NEF : 
3gpp-pfd-management/v1/{scsAsId}/transactions/{transactionId} GET + NEF -> NEF : NGC_STUB (UDR SMF) + + NEF -> AF : OK: PFD information \ ERROR + AF -> UI : OK: PFD information \ ERROR + UI -> user : PFD details +end + +group Get transactions with applicationId + user -> UI : Request transactions with applicationId + UI -> AF : af/v1/pfd/transactions/{transactionId}/applications/{appId} GET + AF -> NEF : 3gpp-pfd-management/v1/{scsAsId}/transactions/{transactionId}/applications/{appId} GET + NEF -> NEF : NGC_STUB (UDR SMF) + + NEF -> AF : OK: PFD information \ ERROR + AF -> UI : OK: PFD information \ ERROR + UI -> user : PFD details +end + +@enduml \ No newline at end of file diff --git a/doc/core-network/ngc_flows/PFD_Managment_transaction_add.uml b/doc/core-network/ngc_flows/PFD_Managment_transaction_add.uml new file mode 100644 index 00000000..3e279dbd --- /dev/null +++ b/doc/core-network/ngc_flows/PFD_Managment_transaction_add.uml @@ -0,0 +1,49 @@ +@startuml +/' SPDX-License-Identifier: Apache-2.0 + Copyright (c) 2020 Intel Corporation +'/ +skinparam monochrome false +skinparam roundcorner 20 +skinparam defaultFontName "Intel Clear" +skinparam defaultFontSize 20 +skinparam maxmessagesize 300 +skinparam sequenceArrowThickness 3 + +header Intel Corporation +footer Proprietary and Confidential + +title PFD flow between Openness and 5G core + + +actor "User/Admin" as user +participant "UI/CLI" as UI +participant "Af Microservice" as AF +participant "NEF" as NEF +participant "NGC CP Function" as 5GC + +box "OpenNESS Controller Components" #LightBlue + participant UI + participant AF +end box + +box "5G Core Components" #LightGreen + participant NEF + note over NEF + OpenNESS provided + Core component with + limited functionality + end note + participant 5GC +end box + +group PFD transaction creation flow + user -> UI : PFD transaction create + UI -> AF : /af/v1/pfd/transaction POST + AF -> NEF : /3gpp-pfd-management/v1/{scsAsId}/transactions/ POST + NEF -> NEF : NGC_STUB (UDR SMF) + + NEF -> AF : OK: transactionId \ ERROR + AF -> UI : OK: transactionId \ ERROR + UI -> user : Success: transactionId +end +@enduml \ No newline at end of file diff --git a/doc/core-network/ngc_flows/PFD_management_transaction_update.uml b/doc/core-network/ngc_flows/PFD_management_transaction_update.uml new file mode 100644 index 00000000..31d0cc4a --- /dev/null +++ b/doc/core-network/ngc_flows/PFD_management_transaction_update.uml @@ -0,0 +1,60 @@ +@startuml +/' SPDX-License-Identifier: Apache-2.0 + Copyright (c) 2020 Intel Corporation +'/ +skinparam monochrome false +skinparam roundcorner 20 +skinparam defaultFontName "Intel Clear" +skinparam defaultFontSize 20 +skinparam maxmessagesize 300 +skinparam sequenceArrowThickness 3 + +header Intel Corporation +footer Proprietary and Confidential + +title PFD flow between Openness and 5G core + +actor Admin as user +participant "UI/CLI" as UI +participant "Af Microservice" as AF +participant "NEF" as NEF +participant "NGC CP Function" as 5GC + +box "OpenNESS Controller Components" #LightBlue + participant UI + participant AF +end box + +box "5G Core Components" #LightGreen + participant NEF + note over NEF + OpenNESS provided + Core component with + limited functionality + end note + + participant 5GC +end box + +group Update PFD transaction with transactionId + user -> UI : Update request by transactionId + UI -> AF : af/v1/pfd/transactions/{transactionId} PUT + AF -> NEF : 3gpp-pfd-management/v1/{scsAsId}/transactions/{transactionId} PUT + NEF -> NEF : NGC_STUB (UDR 
SMF) + + NEF -> AF : OK: Updated PFD information \ ERROR + AF -> UI : OK: Updated PFD information \ ERROR + UI -> user : Updated PFD details \ ERROR +end + +group Update applicationId PFD in a transaction + user -> UI : Update request with applicationId + UI -> AF : af/v1/pfd/transactions/{transactionId}/applications/{appId} PUT + AF -> NEF : 3gpp-pfd-management/v1/{scsAsId}/transactions/{transactionId}/applications/{appId} PUT + NEF -> NEF : NGC_STUB (UDR SMF) + + NEF -> AF : OK: Updated PFD information \ ERROR + AF -> UI : OK: Updated PFD information \ ERROR + UI -> user : PFD details updated \ERROR +end +@enduml \ No newline at end of file diff --git a/doc/core-network/ngc_flows/e2e_config_flow_for_5g_edge.uml b/doc/core-network/ngc_flows/e2e_config_flow_for_5g_edge.uml new file mode 100644 index 00000000..5ab7c1f0 --- /dev/null +++ b/doc/core-network/ngc_flows/e2e_config_flow_for_5g_edge.uml @@ -0,0 +1,82 @@ +@startuml +/' SPDX-License-Identifier: Apache-2.0 + Copyright (c) 2020 Intel Corporation +'/ +skinparam monochrome false +skinparam roundcorner 20 +skinparam defaultFontName "Intel Clear" +skinparam defaultFontSize 20 +skinparam maxmessagesize 400 +skinparam sequenceArrowThickness 3 + +header "Intel Corporation" +footer "Proprietary and Confidential" +title "5G End to End Edge deployment flows" + +participant "UE" as ue + +box "5G Core NF components" #LightGreen +participant "AMF" as amf +participant "SMF" as smf +participant "PCF/UDR" as pcf +participant "NEF" as nef +end box + +box "Edge components" #LightBlue +participant "OpenNESS\nAF" as af +participant "OSS" as oss +end box + +box "Data path components" +participant "UPF" as upf +participant "Edge" as edge +participant "Cloud" as cloud +end box + +== 5G CN pre-established == + +group PFD Profile management in PCF/UDR by Operator or AF + note over nef + PFD profiles/rules can be created + Network operator or can be pushed + an external NF. + end note +af -> nef : Create PFD rules \n(as suggested in \n3GPP TS 29.502 Sec. 4.18) + +note over pcf + CP components are updated + with new UPF information. +end note + +nef --> pcf : +nef --> af : +smf --> nef : Fetch the added/modified PFD's + +note over upf + New UPF or UPF with new service + is started to serve edge location. +end note +end + +group Traffic influence in UPF by AF +oss -> af : or traffic influence create action\n can be initiated by external component +af -> nef : Create traffic influence rule and \n Request for notification \n DNAI change \n (as suggested in \n3GPP TS 23.502 Sec. 4.3.6.2) +nef --> pcf : configured to PCF or UDR \n based on traffic rule info. + + +smf -> pcf : UE initiated PDU Session Modification Request \n (as suggested in TS 23.502 Sec. 4.3.3.2.1) +pcf --> pcf : or Network iniated PDU Session Modification Request +pcf --> smf : Updated PFD rules (as suggested by AF) +smf --> upf +smf -> nef : DNAI change notification \n (as suggested in TS 23.502 Sec. 
4.3.6.3.1) +end + +group Data traffic from UE +ue -> upf : Edge traffic +upf --> edge : Based on updated PFD rules by AF + +ue -> upf : Traffic towards Cloud (or all other traffic) +upf --> cloud : +end + +@enduml \ No newline at end of file diff --git a/doc/core-network/ngc_flows/ngcoam_af_service_add.uml b/doc/core-network/ngc_flows/ngcoam_af_service_add.uml new file mode 100644 index 00000000..da2bd7c7 --- /dev/null +++ b/doc/core-network/ngc_flows/ngcoam_af_service_add.uml @@ -0,0 +1,46 @@ +@startuml +/' SPDX-License-Identifier: Apache-2.0 + Copyright (c) 2020 Intel Corporation +'/ + +skinparam monochrome false +skinparam roundcorner 20 +skinparam defaultFontName "Intel Clear" +skinparam defaultFontSize 20 +skinparam maxmessagesize 400 +skinparam sequenceArrowThickness 3 + +header "Intel Corporation" +footer "Proprietary and Confidential" +title "NGC OAM flows between OpenNESS Controller and NGC Core OAM Component" + +actor "Admin" as user +box "OpenNESS Controller" #LightBlue +participant "UI/CLI" as cnca +end box +box "NGC component" #LightGreen +participant "OAM" as oam +note over oam + OpenNESS provided component + with REST based HTTP interface + (for reference) +end note +participant "NGC \n CP Functions" as ngccp +end box + +== AF services operations with NGC Core through OAM Component == +group AF services registration with 5G Core + user -> cnca : Register AF services (UI): \n {dnn, dnai, snssai, tac, dns-ip, upf-ip} + activate cnca + cnca -> oam : /ngcoam/v1/af/services : POST \n {dnn, dnai, snssai, tac, dns-ip, upf-ip} + activate oam + oam -> oam : NGC_OAM_STUB() + oam -> ngccp : {Open: 3rd Party NGC integration with OpenNESS(oam)} + ngccp --> oam : + oam --> cnca : OK : {afServiceId} \n ERROR: {400/500} + deactivate oam + cnca --> user : Success/Failure : {afServiceId} + deactivate cnca +end + +@enduml \ No newline at end of file diff --git a/doc/core-network/ngc_flows/ngcoam_af_service_delete.uml b/doc/core-network/ngc_flows/ngcoam_af_service_delete.uml new file mode 100644 index 00000000..aa299743 --- /dev/null +++ b/doc/core-network/ngc_flows/ngcoam_af_service_delete.uml @@ -0,0 +1,47 @@ +@startuml +/' SPDX-License-Identifier: Apache-2.0 + Copyright (c) 2020 Intel Corporation +'/ + +skinparam monochrome false +skinparam roundcorner 20 +skinparam defaultFontName "Intel Clear" +skinparam defaultFontSize 20 +skinparam maxmessagesize 400 +skinparam sequenceArrowThickness 3 + +header "Intel Corporation" +footer "Proprietary and Confidential" +title "NGC OAM flows between OpenNESS Controller and NGC Core OAM Component" + +actor "Admin" as user +box "OpenNESS Controller" #LightBlue +participant "UI/CLI" as cnca +end box +box "NGC component" #LightGreen +participant "OAM" as oam +note over oam + OpenNESS provided component + with REST based HTTP interface + (for reference) +end note +participant "NGC \n CP Functions" as ngccp +end box + +== AF services operations with NGC Core through OAM Component == + +group AF services deregistration with 5G Core + user -> cnca : Deregister AF services from 5G Core (UI): \n {afServiceId} + activate cnca + cnca -> oam : /ngcoam/v1/af/services/{afServiceId}: DELETE + activate oam + oam -> oam : NGC_OAM_STUB() + oam -> ngccp : {Open: 3rd Party NGC integration with OpenNESS(oam)} + ngccp --> oam : + oam --> cnca : OK \n ERROR: {400/500} + deactivate oam + cnca --> user : Success/Failure + deactivate cnca +end + +@enduml \ No newline at end of file diff --git a/doc/core-network/ngc_flows/ngcoam_af_service_get.uml 
b/doc/core-network/ngc_flows/ngcoam_af_service_get.uml new file mode 100644 index 00000000..ae77bece --- /dev/null +++ b/doc/core-network/ngc_flows/ngcoam_af_service_get.uml @@ -0,0 +1,46 @@ +@startuml +/' SPDX-License-Identifier: Apache-2.0 + Copyright (c) 2020 Intel Corporation +'/ + +skinparam monochrome false +skinparam roundcorner 20 +skinparam defaultFontName "Intel Clear" +skinparam defaultFontSize 20 +skinparam maxmessagesize 400 +skinparam sequenceArrowThickness 3 + +header "Intel Corporation" +footer "Proprietary and Confidential" +title "NGC OAM flows between OpenNESS Controller and NGC Core OAM Component" + +actor "Admin" as user +box "OpenNESS Controller" #LightBlue +participant "UI/CLI" as cnca +end box +box "NGC component" #LightGreen +participant "OAM" as oam +note over oam + OpenNESS provided component + with REST based HTTP interface + (for reference) +end note +participant "NGC \n CP Functions" as ngccp +end box + + +group Get AF registered DNN services from NGC Core + user -> cnca : Get AF registered DNN services info : {afServiceId} + activate cnca + cnca -> oam : /ngcoam/v1/af/services/{afServiceId}: GET + activate oam + oam -> oam : NGC_OAM_STUB() + oam -> ngccp : {Open: 3rd Party NGC integration with OpenNESS(oam)} + ngccp --> oam : + oam --> cnca : OK : {dnn, dnai, snssai, tac, dnsIp, upfIp} \n ERROR: {400/500} + deactivate oam + cnca --> user : DNN services info associated with afServiceId + deactivate cnca +end + +@enduml \ No newline at end of file diff --git a/doc/core-network/ngc_flows/ngcoam_af_service_update.uml b/doc/core-network/ngc_flows/ngcoam_af_service_update.uml new file mode 100644 index 00000000..9c566161 --- /dev/null +++ b/doc/core-network/ngc_flows/ngcoam_af_service_update.uml @@ -0,0 +1,47 @@ +@startuml +/' SPDX-License-Identifier: Apache-2.0 + Copyright (c) 2020 Intel Corporation +'/ + +skinparam monochrome false +skinparam roundcorner 20 +skinparam defaultFontName "Intel Clear" +skinparam defaultFontSize 20 +skinparam maxmessagesize 400 +skinparam sequenceArrowThickness 3 + +header "Intel Corporation" +footer "Proprietary and Confidential" +title "NGC OAM flows between OpenNESS Controller and NGC Core OAM Component" + +actor "Admin" as user +box "OpenNESS Controller" #LightBlue +participant "UI/CLI" as cnca +end box +box "NGC component" #LightGreen +participant "OAM" as oam +note over oam + OpenNESS provided component + with REST based HTTP interface + (for reference) +end note +participant "NGC \n CP Functions" as ngccp +end box + +== AF services operations with NGC Core through OAM Component == + +group Update DNS config values for DNN served by Edge DNN + user -> cnca : Update DNS configuration of DNN (UI): \n {afServiceId, dnn, dnai, snssai, tac, dns-ip, upf-ip} + activate cnca + cnca -> oam : /ngcoam/v1/af/services/{afServiceId} : PATCH \n {dnn, dnai, snssai, tac, dns-ip, upf-ip} + activate oam + oam -> oam : NGC_OAM_STUB() + oam -> ngccp : {Open: 3rd Party NGC integration with OpenNESS(oam)} + ngccp --> oam : + oam --> cnca : OK \n ERROR: {400/500} + deactivate oam + cnca --> user : Success/Failure + deactivate cnca +end + +@enduml \ No newline at end of file diff --git a/doc/core-network/openness-core.png b/doc/core-network/openness-core.png new file mode 100644 index 00000000..b3865c0f Binary files /dev/null and b/doc/core-network/openness-core.png differ diff --git a/doc/core-network/openness_ngc.md b/doc/core-network/openness_ngc.md index 5025248e..abc7e615 100644 --- a/doc/core-network/openness_ngc.md +++ 
b/doc/core-network/openness_ngc.md @@ -1,6 +1,9 @@ SPDX-License-Identifier: Apache-2.0 Copyright © 2019 Intel Corporation +# Edge Cloud Deployment with 3GPP 5G Stand Alone + +- [Edge Cloud Deployment with 3GPP 5G Stand Alone](#edge-cloud-deployment-with-3gpp-5g-stand-alone) - [Introduction](#introduction) - [5G Systems Architecture](#5g-systems-architecture) - [Edge 5G Architecture view](#edge-5g-architecture-view) @@ -13,13 +16,19 @@ Copyright © 2019 Intel Corporation - [Application Function](#application-function) - [Traffic steering NB APIs](#traffic-steering-nb-apis) - [AF supported Traffic steering API (South bound)](#af-supported-traffic-steering-api-south-bound) + - [PFD Management NB APIs](#pfd-management-nb-apis) + - [AF supported PFD management API (South bound)](#af-supported-pfd-management-api-south-bound) - [NGC notifications](#ngc-notifications) - [Network Exposure Function](#network-exposure-function) - [OAM Interface](#oam-interface) - [Edge service registration](#edge-service-registration) - [Core Network Configuration Agent](#core-network-configuration-agent) + - [Security between OpenNess 5GC micro-services](#security-between-openness-5gc-micro-services) + - [HTTPS support](#https-support) + - [OAuth2 Support between AF and NEF micro-services](#oauth2-support-between-af-and-nef-micro-services) - [REST based API flows](#rest-based-api-flows) - [AF-NEF interface for traffic influence](#af-nef-interface-for-traffic-influence) + - [AF-NEF interface for PFD Management](#af-nef-interface-for-pfd-management) - [OAM interface for edge service registration](#oam-interface-for-edge-service-registration) - [OAM API flows](#oam-api-flows) - [5G End to End flows for Edge by OpenNESS](#5g-end-to-end-flows-for-edge-by-openness) @@ -146,9 +155,9 @@ Below pictures shows the Micro service architectural view of OpenNESS solution w ### Application Function -An Application Function (AF) is a micro service in the OpenNESS edge controller solution, developed in golang. In the scope of the current release (OpenNESS 19.12), AF supports the Traffic influencing subscription functionality to help in steering the Edge specific traffic in UPF towards the applications deployed on the OpenNESS edge node. +An Application Function (AF) is a micro service in the OpenNESS edge controller solution, developed in golang. In the scope of the current release (OpenNESS 20.03), AF supports the Traffic influencing subscription and Packet Flow Description Management functionality to help in steering the Edge specific traffic in UPF towards the applications deployed on the OpenNESS edge node. -Other AF functionalities as discussed in 3GPP 5G standard [3GPP_29122], PFD Management Section 4.4.10, Changing chargeable party Section 4.4.4, configuration QoS for AF sessions Section 4.4.13, Monitoring Section 4.4.2, Device triggering Section 4.4.6 and resource management of Background Data Transfer (BDT) Section 4.4.3 are in under consideration for implementation in future OpenNESS releases. +Other AF functionalities as discussed in 3GPP 5G standard [3GPP_29122], Changing chargeable party Section 4.4.4, configuration QoS for AF sessions Section 4.4.13, Monitoring Section 4.4.2, Device triggering Section 4.4.6 and resource management of Background Data Transfer (BDT) Section 4.4.3 are in under consideration for implementation in future OpenNESS releases. The OpenNESS AF micro service provides a northbound (NB) REST based API interface for other micro services which provide a user interface (i.e. 
CNCA/UI or CLI) and also these NB API can be invoked from external services which provides infrastructure for automation and/or orchestration. @@ -164,6 +173,18 @@ The OpenNESS AF micro service provides a northbound (NB) REST based API interfac * Supported methods: POST,PUT,PATCH,GET,DELETE * Request/Response body: _5G NEF North Bound APIs schema at openness.org_ +#### PFD Management NB APIs + +* API End point: _/af/v1/pfd/transactions_ +* Supported methods: POST,PUT,PATCH,GET,DELETE +* Request/Response body: _5G AF North Bound APIs schema at openness.org_ + +#### AF supported PFD management API (South bound) + +* API End point: _/3gpp-pfd-management/v1/{scsAsId}/transactions_ +* Supported methods: POST,PUT,PATCH,GET,DELETE +* Request/Response body: _5G NEF North Bound APIs schema at openness.org_ + #### NGC notifications As part of the traffic subscription API exchange, SMF generated notifications related to DNAI change can be forwarded to AF through NEF. NEF Reference implementation has place holders to integrate with 5G Core control plane. @@ -176,7 +197,7 @@ According to 3GPP 5G System Architecture [3GPP TS 23.501-f30], NEF is a function * Trivial, but still may be helpful for 5G Core partners who are looking for NEF service to add to their solution for OpenNESS integration. -In the OpenNESS provided NEF reference implementation for Traffic influence is as per 3GPP TS 23.502 Section 5.2.6. Supported API endpoints, Nnef_TrafficInfluence {CREATE,UPDATE,DELETE}, are terminated and looped back at NEF itself, which allows partner the flexibility to integrate and validate without a Core solution. +The OpenNESS provided NEF reference implementation for Traffic influence and PFD management is as per 3GPP TS 23.502 Section 5.2.6. Supported API endpoints, Nnef_TrafficInfluence {CREATE,UPDATE,DELETE} and Nnef_PfdManagement {CREATE, UPDATE, DELETE}, are terminated and looped back at NEF itself, which allows partners the flexibility to integrate and validate without a Core solution. ### OAM Interface @@ -195,6 +216,25 @@ OAM agent functionality is another component which should be part of 5G Core sol Core Network Configuration Agent (CNCA) is a micro service that provides an interface for end users of OpenNESS controller to interact with 5G Core network solution. CNCA provides a web based UI and CLI (kube-ctl plugin) interface to interact with the AF and OAM services. +### Security between OpenNESS 5GC micro-services + +Security among the OpenNESS 5GC micro-services is supported through HTTPS and OAuth2. + +#### HTTPS support + +From release 20.03 onwards, the OpenNESS 5GC micro-services such as OAM, CNCA-UI, the CLI kube-ctl plugin, AF and NEF communicate with each other using REST APIs over an HTTPS interface. + +#### OAuth2 Support between AF and NEF micro-services + +The AF and NEF micro-services support OAuth2 with the "client_credentials" grant type over an HTTPS interface. This is in accordance with subclause 13.4.1 of 3GPP TS 33.501 (also refer to 3GPP 29.122, 3GPP 29.500 and 3GPP 29.510). A reference OAuth2 library is provided which generates the OAuth2 token and validates it. + +*Note: When using a 5GC core from any vendor, the OAuth2 library needs to be implemented as described by the vendor.* + +The OAuth2 flow between AF and NEF is shown in the diagram below. + +![OAuth2 flow between AF and NEF](ngc-images/OAuth2.png) + + ## REST based API flows The flow diagrams below depict the scenarios for the traffic influence subscription operations from an end user of OpenNESS controller towards 5G core.
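As a complement to the flow diagrams, the sketch below shows how the AF NB PFD management endpoint might be exercised directly with curl. The AF host/port, certificate path and request body file are placeholders, and the body has to follow the 5G AF North Bound APIs schema published at openness.org; this is not the documented CNCA/CLI path, only an illustration of the endpoint listed above.

```shell
# Placeholder AF northbound address and CA certificate (deployment specific).
AF_NB="https://<af-host>:<af-port>"

# List the PFD management transactions currently held by the AF.
curl --cacert ./root-ca.pem -X GET "${AF_NB}/af/v1/pfd/transactions"

# Create a new PFD transaction; transaction.json must follow the published NB API schema.
curl --cacert ./root-ca.pem -X POST "${AF_NB}/af/v1/pfd/transactions" \
     -H "Content-Type: application/json" -d @transaction.json
```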
@@ -213,6 +253,20 @@ The flow diagrams below depict the scenarios for the traffic influence subscript * Deletion of traffic influencing rules subscription through AF ![Traffic influence subscription Delete](ngc-images/traffic_subscription_del.png) +### AF-NEF interface for PFD Management + +* Addition of PFD Management transaction rules through AF +![PFD Management transaction Addition](ngc-images/PFD_Management_transaction_add.png) + +* Update of PFD Management transaction rules through AF +![PFD Management transaction update](ngc-images/PFD_Management_transaction_update.png) + +* Get PFD Management transaction rules through AF +![PFD Management transaction Get](ngc-images/PFD_Management_transaction_get.png) + +* Deletion of PFD Management transaction rules through AF +![PFD Management transaction Delete](ngc-images/PFD_Management_transaction_del.png) + ### OAM interface for edge service registration #### OAM API flows @@ -239,18 +293,19 @@ The flow diagram below depicts a possible end to end edge deployment scenario in ![AF Service Delete](ngc-images/e2e_edge_deployment_flows.png) -* AF registration and PFD management - * AF authenticates and registers with the 5G Core - * AF registers for DNAI change notifications through the TrafficInfluence request API - * When a new UPF is deployed in the 5G network or a new DN service is started on an existing UPF, SMF may generate a trigger to AF about the DNAI change notification. - * PFD profiles can be created based on the trigger in PCF by AF or the 5G Network operator can create PFD profiles. +* AF authenticates and registers with the 5G Core + +* PFD Profile management + * PFD profiles can be created based on the trigger in PCF/UDR by AF or the 5G Network operator can create PFD profiles. + * SMF on getting notification from NEF on PFD's addition/modification, pulls the PFD's from NEF * Traffic influence in UPF by AF - * Traffic influence requests can be sent by AF towards the PCF (via NEF) for PFD profiles created in PCF. The action of a traffic influence request created in AF can be triggered by an external applications like OSS or a DNAI change notification events from the NEF or Device triggering events from NEF. + * Traffic influence requests can be sent by AF towards the PCF (via NEF) for PFD profiles created in PCF. The action of a traffic influence request created in AF can be triggered by an external applications like OSS or a DNAI change notification events from the NEF or Device triggering events from NEF. AF registers for DNAI change notifications through the TrafficInfluence request API * Traffic influence requests will be consumed by PCF or UDR based on the requested information. * UE may initiate the PDU Session Modification procedure towards SMF, because of the location change event. Or the PCF may initiate a Network initiated PDU Session Modification request procedure towards SMF because of a traffic influence request generated by AF. * SMF may push this updated PFD profiles to UPF - + * When a new UPF is deployed in the 5G network or a new DN service is started on an existing UPF, SMF may generate a trigger to AF about the DNAI change notification. + * Data path from UE * Edge traffic sent by the UE reaches the UPF, the UPF routes the edge-traffic towards the local DN where the OpenNESS Edge Node is configured. * All other traffic sent by the UE that reaches the UPF will be sent to another UPF or to a remote gateway. 
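The OAM edge service registration flow shown earlier maps onto a small set of REST calls against the `/ngcoam/v1/af/services` endpoint. The sketch below is hedged: the host, port, certificate and all field values are illustrative placeholders, only the field names are taken from the OAM flow diagrams above.

```shell
# Placeholder OAM address and CA certificate (deployment specific).
OAM="https://<oam-host>:<oam-port>"

# Register an AF service (edge DNN) with the 5G Core; a successful response carries an afServiceId.
curl --cacert ./root-ca.pem -X POST "${OAM}/ngcoam/v1/af/services" \
     -H "Content-Type: application/json" \
     -d '{"dnn": "edge", "dnai": "edge-dnai-01", "snssai": "01000001", "tac": "1", "dns-ip": "192.168.1.10", "upf-ip": "192.168.1.20"}'

# Inspect or remove the registration using the returned afServiceId.
curl --cacert ./root-ca.pem -X GET    "${OAM}/ngcoam/v1/af/services/<afServiceId>"
curl --cacert ./root-ca.pem -X DELETE "${OAM}/ngcoam/v1/af/services/<afServiceId>"
```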
diff --git a/doc/core-network/openness_upf.md b/doc/core-network/openness_upf.md new file mode 100644 index 00000000..87dedc88 --- /dev/null +++ b/doc/core-network/openness_upf.md @@ -0,0 +1,143 @@ +```text +SPDX-License-Identifier: Apache-2.0 +Copyright (c) 2020 Intel Corporation +``` +- [Introduction](#introduction) +- [How to build](#how-to-build) +- [UPF configure](#upf-configure) + - [Platform specific information:](#platform-specific-information) + - [UPF application specific information:](#upf-application-specific-information) +- [How to start](#how-to-start) + - [Deploy UPF POD from OpenNESS controller](#deploy-upf-pod-from-openness-controller) + - [To start UPF](#to-start-upf) + +# Introduction + +The User Plane Function (UPF) is the evolution of the Control and User Plane Separation (CUPS) which part of the Rel.14 in Evolved Packet core (EPC). CUPS enabled PGW to be split in to PGW-C and PGW-U. By doing this PGW-U can be distributed and could be used for Edge Cloud deployment. + +Defined in 3GPP technical specification 23.501, the UPF provides: + +- Anchor point for Intra-/Inter-RAT mobility (when applicable). +- External PDU Session point of interconnect to Data Network. +- Packet routing & forwarding (e.g. support of Uplink classifier to route traffic flows to an instance of a data network, support of Branching point to support multi-homed PDU Session). +- Packet inspection (e.g. Application detection based on service data flow template and the optional PFDs received from the SMF in addition). +- User Plane part of policy rule enforcement, e.g. Gating, Redirection, Traffic steering). +- Lawful intercept (UP collection). +- Traffic usage reporting. +- QoS handling for user plane, e.g. UL/DL rate enforcement, Reflective QoS marking in DL. +- Uplink Traffic verification (SDF to QoS Flow mapping). +- Transport level packet marking in the uplink and downlink. +- Downlink packet buffering and downlink data notification triggering. +- Sending and forwarding of one or more "end marker" to the source NG-RAN node. +- Functionality to respond to Address Resolution Protocol (ARP) requests and / or IPv6 Neighbor Solicitation requests based on local cache information for the Ethernet PDUs. The UPF responds to the ARP and / or the IPv6 Neighbor Solicitation Request by providing the MAC address corresponding to the IP address sent in the request. + +As part of the end-to-end integration of the Edge cloud deployment using OpenNESS a reference 5G Core network is used along with reference RAN (FlexRAN). The diagram below shows UPF and NGC Control plane deployed on the OpenNESS platform with the necessary microservice and Kubernetes enhancements required for high throughput user plane workload deployment. + +![UPF and NGC Control plane deployed on OpenNESS](openness-core.png) + +> Note: UPF source or binary is not released as part of the OpenNESS. + +This document aims to provide the steps involved in deploying UPF on the OpenNESS platform. 4G/LTE or 5G User Plane Functions (UPF) can run as network functions on Edge node in a virtualized environment. The reference [Dockerfile](https://github.com/open-ness/edgeapps/blob/master/network-functions/core-network/5G/UPF/Dockerfile) and [5g-upf.yaml](https://github.com/open-ness/edgeapps/blob/master/network-functions/core-network/5G/UPF/5g-upf.yaml) provide refrence on how to deploy UPF as a Container Networking function (CNF) in a K8s pod on OpenNESS edge node using OpenNESS Enhanced Platform Awareness (EPA) features. 
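Before building and deploying the UPF it can help to confirm that the target edge node actually advertises the EPA resources the pod will request. This is only a sketch: the node name is a placeholder and the exact SR-IOV resource name depends on how the SR-IOV device plugin was configured in the deployment.

```shell
# Substitute the edge node's hostname.
kubectl get node <edge-node> -o jsonpath='{.status.allocatable}'; echo

# Hugepages appear as hugepages-1Gi / hugepages-2Mi; SR-IOV VFs appear under the
# resource name configured for the SR-IOV device plugin in this deployment.
kubectl describe node <edge-node> | grep -iE 'hugepages|sriov'
```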
+ +These scripts are validated through a reference UPF solution (implementation based Vector Packet Processing (VPP)), is not part of OpenNESS release. + +> Note: AF and NEF dockerfile and pod specification can be found here +> - AF - [dockerfile](https://github.com/open-ness/epcforedge/blob/master/ngc/build/networkedge/af/Dockerfile). [Pod Specification](https://github.com/open-ness/epcforedge/blob/master/ngc/scripts/networkedge/ngctest/podAF.yaml) +> - NEF - [dockerfile](https://github.com/open-ness/epcforedge/blob/master/ngc/build/networkedge/nef/Dockerfile). [Pod Specification](https://github.com/open-ness/epcforedge/blob/master/ngc/scripts/networkedge/ngctest/podNEF.yaml) +> - OAM - [dockerfile](https://github.com/open-ness/epcforedge/blob/master/ngc/build/networkedge/oam/Dockerfile). [Pod Specification](https://github.com/open-ness/epcforedge/blob/master/ngc/scripts/networkedge/ngctest/podOAM.yaml) + +# How to build + +To keep the build and deploy process simple for reference, docker build and image are stored on the Edge node itself. + +```code +ne-node02# cd <5g-upf-binary-package> +``` + +Copy Dockerfile and 5g-upf.yaml files + +```code +ne-node02# docker build --build-arg http_proxy=$http_proxy --build-arg https_proxy=$https_proxy --build-arg no_proxy=$no_proxy -t 5g-upf:1.0 . +``` + +# UPF configure + +To keep the bring-up setup simple and to the point, UPF configuration was made static through config files placed inside the UPF binary package. However one can think of ConfigMaps and/or Secrets services in Kubernetes to provide configuration information to UPF workloads. + +Below are the list of minimal configuration parameters that one can think of for a VPP based applications like UPF, + +## Platform specific information: + +- SR-IOV PCIe interface(s) bus address +- CPU core dedicated for UPF workloads +- Amount of Huge pages + +## UPF application specific information: +- N3, N4, N6 and N9 Interface IP addresses + +# How to start + +Ensure all the EPA microservice and Enhancements (part of OpenNESS play book) are deployed `kubectl get po --all-namespaces` + ```yaml + NAMESPACE NAME READY STATUS RESTARTS AGE + kube-ovn kube-ovn-cni-8x5hc 1/1 Running 17 7d19h + kube-ovn kube-ovn-cni-p6v6s 1/1 Running 1 7d19h + kube-ovn kube-ovn-controller-578786b499-28lvh 1/1 Running 1 7d19h + kube-ovn kube-ovn-controller-578786b499-d8d2t 1/1 Running 3 5d19h + kube-ovn ovn-central-5f456db89f-l2gps 1/1 Running 0 7d19h + kube-ovn ovs-ovn-56c4c 1/1 Running 17 7d19h + kube-ovn ovs-ovn-fm279 1/1 Running 5 7d19h + kube-system coredns-6955765f44-2lqm7 1/1 Running 0 7d19h + kube-system coredns-6955765f44-bpk8q 1/1 Running 0 7d19h + kube-system etcd-silpixa00394960 1/1 Running 0 7d19h + kube-system kube-apiserver-silpixa00394960 1/1 Running 0 7d19h + kube-system kube-controller-manager-silpixa00394960 1/1 Running 0 7d19h + kube-system kube-multus-ds-amd64-bpq6s 1/1 Running 17 7d18h + kube-system kube-multus-ds-amd64-jf8ft 1/1 Running 0 7d19h + kube-system kube-proxy-2rh9c 1/1 Running 0 7d19h + kube-system kube-proxy-7jvqg 1/1 Running 17 7d19h + kube-system kube-scheduler-silpixa00394960 1/1 Running 0 7d19h + kube-system kube-sriov-cni-ds-amd64-crn2h 1/1 Running 17 7d19h + kube-system kube-sriov-cni-ds-amd64-j4jnt 1/1 Running 0 7d19h + kube-system kube-sriov-device-plugin-amd64-vtghv 1/1 Running 0 7d19h + kube-system kube-sriov-device-plugin-amd64-w4px7 1/1 Running 0 4d21h + openness eaa-78b89b4757-7phb8 1/1 Running 3 5d19h + openness edgedns-mdvds 1/1 Running 16 7d18h + openness interfaceservice-tkn6s 
1/1 Running 16 7d18h + openness nfd-master-82dhc 1/1 Running 0 7d19h + openness nfd-worker-h4jlt 1/1 Running 37 7d19h + openness syslog-master-894hs 1/1 Running 0 7d19h + openness syslog-ng-n7zfm 1/1 Running 16 7d19h + ``` + +## Deploy UPF POD from OpenNESS controller + +```code +ne-controller# kubectl create -f 5g-upf.yaml +``` + +## To start UPF +- In this reference validation, UPF application will be started manually after UPF POD deployed successfully. +```code +ne-controller# kubectl exec -it test1-app -- /bin/bash + +5g-upf# cd /root/upf +5g-upf# ./start_upf.sh +``` + +- Verify UPF pod is up and running `kubectl get po` +```code +[root@ne-controller ~]# kubectl get po +NAME READY STATUS RESTARTS AGE +udp-server-app 1/1 Running 0 6d19h +upf 1/1 Running 0 6d19h +``` + +- Verify AF, NEF and OAM pods are running `kubectl get po -n ngc` +```code +[root@ne-controller ~]# kubectl get po -n ngc +NAME READY STATUS RESTARTS AGE +af 1/1 Running 0 172m +nef 1/1 Running 0 173m +oam 1/1 Running 0 173m +``` \ No newline at end of file diff --git a/doc/dataplane/openness-interapp.md b/doc/dataplane/openness-interapp.md index c0456d8b..b6c98031 100644 --- a/doc/dataplane/openness-interapp.md +++ b/doc/dataplane/openness-interapp.md @@ -3,7 +3,7 @@ SPDX-License-Identifier: Apache-2.0 Copyright (c) 2019 Intel Corporation ``` -# InterApp Communication support in OpenNESS +# InterApp Communication support in OpenNESS - [InterApp Communication support in OpenNESS](#interapp-communication-support-in-openness) - [Overview](#overview) @@ -13,27 +13,27 @@ Copyright (c) 2019 Intel Corporation ## Overview -Multi-core edge cloud platforms typically host multiple containers or virtual machines as PODs. These applications sometimes need to communicate with each other as part of a service or consuming services from another application instance. This means that an edge cloud platform should provide not just the Dataplane interface but also the infrastructure to enable applications communicate with each other whether they are on the same platform or spanning across multiple platform. OpenNESS provides the infrastructure for both the On-Premises and Network edge modes. +Multi-core edge cloud platforms typically host multiple containers or virtual machines as PODs. These applications sometimes need to communicate with each other as part of a service or consuming services from another application instance. This means that an edge cloud platform should provide not just the Dataplane interface but also the infrastructure to enable applications communicate with each other whether they are on the same platform or spanning across multiple platform. OpenNESS provides the infrastructure for both the On-Premises and Network edge modes. -> Note: InterApps Communication mentioned here are not just for applications but also applicable for Network functions like Core Network User plane, Base station and so on. +> Note: InterApps Communication mentioned here are not just for applications but also applicable for Network functions like Core Network User plane, Base station and so on. -## InterApp Communication support in OpenNESS On-Premises Edge -InterApp communication on the OpenNESS On-Premises version is supported using OVS-DPDK as the infrastructure. +## InterApp Communication support in OpenNESS On-Premises Edge +InterApp communication on the OpenNESS On-Premises version is supported using OVS-DPDK as the infrastructure. 
![OpenNESS On-Premises InterApp Interface](iap-images/iap3.png) - + _Figure - OpenNESS On-Premises InterApp Interface_ -The current version of OpenNESS On-Premises is mainly targeted at private LTE deployments. The Network Transport Services (NTS) Dataplane is used which supports Edge Cloud deployment on S1-U and IP (WiFi/Wireline/SGi). Because the Dataplane is separate to the InterApp interface there are at least three interfaces into the Application. +The current version of OpenNESS On-Premises is mainly targeted at private LTE deployments. The Network Transport Services (NTS) Dataplane is used which supports Edge Cloud deployment on S1-U and IP (WiFi/Wireline/SGi). Because the Dataplane is separate to the InterApp interface there are at least three interfaces into the Application. ![OpenNESS OnPremises Application Interfaces](iap-images/iap1.png) - + _Figure - OpenNESS On-Premises Application Interfaces_ Each container and VM running an Application will have an additional OVS-DPDK managed interface added. Additionally, the OVS-DPDK interface would be automatically added for every deployed app. OVS-DPDK will run as a host service. In total three interfaces would be allocated: -- Default interface connected to a kernel bridge. This interface is used for management of the App and also communication to the cloud if the throughput requirement is low. -- NTS interface for Dataplane traffic - Support S1-U and IP (WiFi/Wireline/SGi) upstream and downstream traffic +- Default interface connected to a kernel bridge. This interface is used for management of the App and also communication to the cloud if the throughput requirement is low. +- NTS interface for Dataplane traffic - Support S1-U and IP (WiFi/Wireline/SGi) upstream and downstream traffic - OVS-DPDK interface for inter-app communication. OVS will be used with DPDK and physical ports may be assigned to it(PMD drivers) Ports assigned to OVS will be ignored by ELA, so it is not possible for them to be used by NTS. It should be possible to optionally install and configure OVS-DPDK using ansible automation scripts. @@ -42,7 +42,7 @@ Ports assigned to OVS will be ignored by ELA, so it is not possible for them to To enable OVS-DPDK for inter app communication follow the steps below. -1. Enable `ovs` role in `openness-experience-kits/onprem_node.yml` +1. Enable `ovs` role in `openness-experience-kits/on_premises.yml` 2. Set the `ovs_ports` variable in `host_vars/node-name-in-inventory.yml`. Example: ``` @@ -50,15 +50,15 @@ To enable OVS-DPDK for inter app communication follow the steps below. ``` 3. Setup the cluster using automation scripts -## InterApp Communication support in OpenNESS Network Edge +## InterApp Communication support in OpenNESS Network Edge InterApp communication on the OpenNESS Network edge version is supported using OVN/OVS as the infrastructure. OVN/OVS in the network edge is supported through the Kubernetes kube-OVN Container Network Interface (CNI). OVN/OVS is used as default networking infrastructure for: -- Dataplane Interface: UE's to edge applications -- InterApp Interface : Communication infrastructure for applications to communicate +- Dataplane Interface: UE's to edge applications +- InterApp Interface : Communication infrastructure for applications to communicate - Default Interface: Interface for managing the Application POD (e.g. 
ssh to application POD) -- Cloud/Internet Interface: Interface for Edge applications to communicate with the cloud/Internet +- Cloud/Internet Interface: Interface for Edge applications to communicate with the cloud/Internet ![OpenNESS Network Edge Interfaces](iap-images/iap2.png) - + _Figure - OpenNESS Network Edge Interfaces_ diff --git a/doc/dataplane/openness-ovn.md b/doc/dataplane/openness-ovn.md index 2d847208..8f9ba5e2 100644 --- a/doc/dataplane/openness-ovn.md +++ b/doc/dataplane/openness-ovn.md @@ -1,20 +1,29 @@ ```text SPDX-License-Identifier: Apache-2.0 -Copyright (c) 2019 Intel Corporation +Copyright (c) 2019-2020 Intel Corporation ``` -# OpenNESS Support for OVS as dataplane with OVN +# OpenNESS Support for OVS as dataplane with OVN - [OpenNESS Support for OVS as dataplane with OVN](#openness-support-for-ovs-as-dataplane-with-ovn) - [OVN Introduction](#ovn-introduction) - - [OVN/OVS support in OpenNESS](#ovnovs-support-in-openness) + - [OVN/OVS support in OpenNESS Network Edge](#ovnovs-support-in-openness-network-edge) + - [OVS/OVN support in OpenNESS On Premises (OVN CNI)](#ovsovn-support-in-openness-on-premises-ovn-cni) + - [Enable OVNCNI](#enable-ovncni) + - [OVS-DPDK Parameters](#ovs-dpdk-parameters) + - [CNI Implementation](#cni-implementation) + - [The Network](#the-network) + - [Cluster architecture](#cluster-architecture) + - [Additional physical ports](#additional-physical-ports) + - [Traffic rules](#traffic-rules) + - [Example: Block pod's ingress IP traffic but allow ICMP](#example-block-pods-ingress-ip-traffic-but-allow-icmp) - [Summary](#summary) ## OVN Introduction Open Virtual Network (OVN) is an open source solution based on the Open vSwitch-based (OVS) software-defined networking (SDN) solution for providing network services to instances. OVN adds to the capabilities of OVS to provide native support for virtual network abstractions, such as virtual L2 and L3 overlays and security groups. Further information about the OVN architecture can be found [here](https://www.openvswitch.org/support/dist-docs/ovn-architecture.7.html) -## OVN/OVS support in OpenNESS -The primary objective of supporting OVN/OVS in OpenNESS is to demonstrate the capability of using a standard dataplane like OVS for an Edge Compute platform. Using OVN/OVS further provides standard SDN based flow configuration for the edge Dataplane. +## OVN/OVS support in OpenNESS Network Edge +The primary objective of supporting OVN/OVS in OpenNESS is to demonstrate the capability of using a standard dataplane like OVS for an Edge Compute platform. Using OVN/OVS further provides standard SDN based flow configuration for the edge Dataplane. The diagram below shows OVS as a dataplane and OVN overlay. This mode of deployment is recommended when the Edge node terminates IP traffic (Wireline, Wireless, LTE CUPS, SGi) @@ -23,13 +32,260 @@ The diagram below shows OVS as a dataplane and OVN overlay. This mode of deploym [Kube-OVN](https://github.com/alauda/kube-ovn) has been chosen as the CNI implementation for OpenNESS. Additionally, in the following configuration, OpenNESS applications on Edge Nodes are deployed as DaemonSet Pods (in separate "openness" namespace) and exposed to client applications by k8s services. 
OVN/OVS is used as the default networking infrastructure for: -- Dataplane Interface: UE's to edge applications -- InterApp Interface : Communication infrastructure for applications to communicate +- Dataplane Interface: UE's to edge applications +- InterApp Interface: Communication infrastructure for applications to communicate - Default Interface: Interface for managing the Application POD (e.g. ssh to application POD) -- Cloud/Internet Interface: Interface for Edge applications to communicate with the Cloud/Internet +- Cloud/Internet Interface: Interface for Edge applications to communicate with the Cloud/Internet -> Note: It should be noted that the current release does not support OVS-DPDK as a dataplane, but it is planned to be added in the future. +The platform supports OVS-DPDK as a dataplane. OVS-DPDK can be used to high-performance data transmission scenarios in userspace. More about OVS-DPDK can be found [here](http://docs.openvswitch.org/en/latest/howto/dpdk/). -## Summary -OpenNESS is built with a microservices architecture. Depending on the deployment, there may be a requirement to service pure IP traffic and configure the dataplane using standard SDN based tools. OpenNESS demonstrates such a requirement this by providing OVS as a dataplane in the place of NTS without changing the APIs from an end user perspective. +## OVS/OVN support in OpenNESS On Premises (OVN CNI) +For On Premises mode OVS/OVN can be used in place of the default On Premises dataplane which is NTS. +To distinguish it from OVS InterApp this dataplane is often referred to as OVN CNI. +OVN CNI supports both virtual machines and docker containers. + +For information on deploying On Premises mode with OVS/OVN instead of NTS refer to [On Premises setup guide](../getting-started/on-premises/controller-edge-node-setup.md#dataplanes). + +OVNCNI plugin has been implemented as the CNI for OpenNESS in On-Premises mode. The plugin has been developed based on the specifications provided as part of the [CNCF](https://www.cncf.io/) project. OVNCNI provides network connectivity for Edge applications on the OpenNESS Edge nodes. The applications can be deployed as Docker containers or VMs and are exposed to client applications by Docker services. + +The OpenNESS platform supports OVN/OVS-DPDK as a dataplane. However, it is a work in progress. OVN dataplane implementation is not complete, thus, it is not the default networking infrastructure and [NTS](openness-nts.md) still works as such. + +OVN/OVS can be used as: +- InterApp Interface: Communication infrastructure for applications to communicate +- Default Interface: Interface for managing the application container and VM (e.g. ssh to application container or VM) +- Cloud/Internet Interface: Interface for Edge applications to communicate with the Cloud/Internet + +### Enable OVNCNI +To enable OVNCNI instead of NTS, "onprem_dataplane" variable needs to be set to "ovncni", before executing deploy_onprem.yml file to start OpenNESS installation. + +```yaml +# group_vars/all.yml +onprem_dataplane: "ovncni" +``` +OVS role used for _Inter App Communication_ with _nts_ dataplane has to be disabled(Disabled by default): +```yaml +# on_premises.yml +# - role: ovs +``` +> NOTE: When deploying virtual machine with OVNCNI dataplane, `/etc/resolv.conf` must be edited to use `192.168.122.1` nameserver. + +The ansible scripts configure the OVN infrastructure to be used by OpenNESS. 
OVN-OVS container is created on each controller and Edge node where OVS is installed and configured to use DPDK. Network connectivity is set for the controller and all the nodes in the OpenNESS cluster. On each Edge node the CNI plugin is built which can be later used to add and delete OVN ports to connect/disconnect Edge applications to/from the cluster. + +CNI configuration is retrieved from roles/openness/onprem/dataplane/ovncni/master/files/cni.conf file. Additional arguments used by CNI are stored in roles/openness/onprem/dataplane/ovncni/master/files/cni_args.json file. The user is not expected to modify the files. + +### OVS-DPDK Parameters +The following parameters are used to configure DPDK within OVS. They are set in roles/openness/onprem/dataplane/ovncni/common/defaults/main.yml. + +"ovs_dpdk_lcore_mask" parameter is used to set core bitmask that is used for DPDK initialization. Its default value is "0x2" to select core 1. + +"ovs_dpdk_pmd_cpu_mask" parameter is used to set the cores that are used by OVS-DPDK for datapath packet processing. Its default value is "0x4" to select core 2. It can be set at any time using ovs-vsctl: + +```shell +ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x4 +``` + +"ovs_dpdk_socket_mem" parameter is used to set how hugepage memory is allocated across NUMA nodes. By default it is set to "1024,0" to allocate 1G of hugepage memory on numa 0. + +### CNI Implementation +OpenNESS EdgeNode has two built-in packages that are used for handling OVN: + +"cni" package is implemented based on a CNI skeleton available as a GO package [here](https://godoc.org/github.com/containernetworking/cni/pkg/skel). OpenNESS adds its own implementations of functions the skeleton calls to ADD, DELETE and CHECK OVN ports to existing OVS bridges for connecting applications. + +"ovncni" package provides OVN client implementation used to add, delete and get OVN ports. This client is part of the CNI context used for ADD, DELETE and GET commands issued towards OVN. Additionally, the package provides helper functions used by EVA to deploy application VMs and containers. + +### The Network +The Controller node acts as the OVN-Central node. The OVN-OVS container deployed on the Controller contains ovn-northd server with north and south databases that store the information on logical ports, switches and routes as well as the physical network components spread across all the connected nodes. The OVN-OVS container deployed on each node runs ovn-controller that connects the node to the south DB on the Controller, and ovs-vswitch daemon that manages the switches on the node. + +OVNCNI plugin is installed on each node to provide networking connectivity for its application containers and VMs, keeping their deployment differences transparent to the user and providing homogenous networks in terms of IP addressing. + +### Cluster architecture +Following diagram contains overview on cluster architecture for OVN CNI dataplane. +![OVN CNI cluster architecture](ovn_images/ovncni_cluster.png) + +* Machines in OpenNESS deployment are connected via `node-switch` switch. Each machine contains `br-int` OVS bridge which is accessible by `ovn-local` interface. +* Deployed applications are attached to `cluster-switch` OVN switch which has `10.100.0.0/16` network. +* Attaching other physical interfaces to OVN cluster is possible using `br-local` OVS bridge. 
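To sanity-check that the topology described above has been created, the logical and physical views can be inspected from the ovs-ovn container used throughout this guide. A short sketch follows; the switch and bridge names are the conventions described in this document.

```shell
# Logical view: cluster-switch, node-switch and cluster-router should appear here.
docker exec -it ovs-ovn ovn-nbctl show

# Physical view on a given node: the br-int and br-local OVS bridges and their ports.
docker exec -it ovs-ovn ovs-vsctl show
```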
+ +### Additional physical ports +To enable more physical port connections run the following once, from the controller: +``` +NET="192.168.1.1/24" + + +mac="0e:00:$(openssl rand -hex 4 | sed 's/\(..\)/\1:/g; s/:$//')" +sw="local" +swp="local-to-cluster-router" +rp="cluster-router-to-local" +docker exec -it ovs-ovn ovn-nbctl --may-exist lrp-add cluster-router "${rp}" "${mac}" "${NET}" +docker exec -it ovs-ovn ovn-nbctl --may-exist lsp-add "${sw}" "${swp}" -- \ + set logical_switch_port "${swp}" type=router -- \ + set logical_switch_port "${swp}" options:router-port="${rp}" -- \ + set logical_switch_port "${swp}" addresses=\""${mac}"\" +``` + +To add the desired interface run the following command on a specific node: +``` +docker exec -it ovs-ovn ovs-vsctl add-port br-local +``` +Configure ovn routing for the externally connected interface: +``` +docker exec -it ovs-ovn ovn-nbctl --may-exist --policy=dst-ip lr-route-add cluster-router 192.168.1.1 +``` +where `client_ip` is the IP address of a client connected to the new OVN port(192.168.1.0/24 subnet) + +### Traffic rules +For On Premises deployment with OVN CNI dataplane traffic rules are handled by OVN's ACL (Access-Control List). +> NOTE: Currently, managing the rules is only possible using `ovn-nbctl` utility. + +As per [`ovn-nbctl`](http://www.openvswitch.org/support/dist-docs/ovn-nbctl.8.html) manual, adding new ACL using `ovn-nbctl`: +``` +ovn-nbctl + [--type={switch | port-group}] + [--log] [--meter=meter] [--severity=severity] [--name=name] + [--may-exist] + acl-add entity direction priority match verdict +``` +Deleting ACL: +``` +ovn-nbctl + [--type={switch | port-group}] + acl-del entity [direction [priority match]] +``` + +Listing ACLs: +``` +ovn-nbctl + [--type={switch | port-group}] + acl-list entity +``` + +Where: +* `--type={switch | port-group}` allows to specify type of the _entity_ if there's a switch and a port-group with same name +* `--log` enables packet logging for the ACL + * `--meter=meter` can be used to limit packet logging, _meter_ is created using `ovn-nbctl meter-add` + * `--severity=severity` is a log level: `alert, debug, info, notice, warning` + * `--name=name` specifies a log name +* `--may-exist` don't return an error when creating duplicated rule +* `entity` entity (switch or port-group) to which ACL will be applied, can be UUID or name, in case of OVN CNI it's most likely to be a `cluster-switch` +* `direction` either `from-lport` or `to-lport`: + * `from-lport` for filters on traffic arriving from a logical port (logical switch's ingress) + * `to-lport` for filters on traffic forwarded to a logical port (logical switch's egress) +* `priority` integer in range from 0 to 32767, greater the number, more important is the rule +* `match` rule for matching the packets +* `verdict` action, one of the `allow`, `allow-related`, `drop`, `reject` + +> NOTE: By default all traffic is allowed. When restricting traffic, remember to allow flows like ARP or other essential networking protocols. + +For more information refer to [`ovn-nbctl`](http://www.openvswitch.org/support/dist-docs/ovn-nbctl.8.html) and [`ovn-nb`](http://www.openvswitch.org/support/dist-docs/ovn-nb.5.html) manuals. + +#### Example: Block pod's ingress IP traffic but allow ICMP + +Following examples use nginx container which is a HTTP server for OVN cluster. In the examples, it has IP address `10.100.0.4` and application ID = `f5bd3404-df38-47e7-8907-4adbc4d24a7f`. +Second container acts as a supposed client of the server. 
+ +First, make sure that there is connectivity between the two containers: +```shell +$ ping 10.100.0.4 -w 3 + +PING 10.100.0.4 (10.100.0.4) 56(84) bytes of data. +64 bytes from 10.100.0.4: icmp_seq=1 ttl=64 time=0.640 ms +64 bytes from 10.100.0.4: icmp_seq=2 ttl=64 time=0.580 ms +64 bytes from 10.100.0.4: icmp_seq=3 ttl=64 time=0.221 ms + +--- 10.100.0.4 ping statistics --- +3 packets transmitted, 3 received, 0% packet loss, time 54ms +rtt min/avg/max/mdev = 0.221/0.480/0.640/0.185 ms + +$ curl 10.100.0.4 + +<!DOCTYPE html> +<html> +<head> +<title>Welcome to nginx!</title> +</head> +<body> +<h1>Welcome to nginx!</h1> +<p>If you see this page, the nginx web server is successfully installed and +working. Further configuration is required.</p> + +<p>For online documentation and support please refer to +<a href="http://nginx.org/">nginx.org</a>.<br/> +Commercial support is available at +<a href="http://nginx.com/">nginx.com</a>.</p> + +<p><em>Thank you for using nginx.</em></p> +</body> +</html>
+``` + +To block the pod's IP traffic but allow ICMP, run the following commands either on the Edge Node or the Edge Controller: + +```shell +$ docker exec ovs-ovn ovn-nbctl acl-add cluster-switch to-lport 100 'outport == "f5bd3404-df38-47e7-8907-4adbc4d24a7f" && ip && icmp' allow-related +$ docker exec ovs-ovn ovn-nbctl acl-add cluster-switch to-lport 99 'outport == "f5bd3404-df38-47e7-8907-4adbc4d24a7f" && ip' drop +``` + +Explanation: +* `docker exec ovs-ovn` allows us to enter the ovs-ovn container which has access to OVN's north bridge +* `ovn-nbctl acl-add` adds an ACL rule +* `cluster-switch` is the switch to which all application containers are connected +* `to-lport` means that we're adding a rule affecting traffic going from the switch to the logical port (application) +* `100` or `99` is the priority; the ICMP rule has the greater priority so it is considered before the DROP on all IP traffic +* `'outport == "<application ID>" && ip && icmp'` is the match string for the rule; the rule will be executed for traffic going out via the switch's port named "<application ID>" (which is connected to the container's internal port) when the protocols are IP and ICMP +* `allow-related` means that both the incoming request and the outgoing response will not be dropped or rejected +* `drop` drops all the traffic + +Result of the ping and curl after applying these two rules: +```shell +$ ping 10.100.0.4 -w 3 + +PING 10.100.0.4 (10.100.0.4) 56(84) bytes of data. +64 bytes from 10.100.0.4: icmp_seq=1 ttl=64 time=2.48 ms +64 bytes from 10.100.0.4: icmp_seq=2 ttl=64 time=0.852 ms +^C +--- 10.100.0.4 ping statistics --- +2 packets transmitted, 2 received, 0% packet loss, time 3ms +rtt min/avg/max/mdev = 0.852/1.664/2.477/0.813 ms + +$ curl --connect-timeout 10 10.100.0.4 +curl: (28) Connection timed out after 10001 milliseconds +``` + +If we run the command `ovn-nbctl acl-list cluster-switch` we'll receive the list of ACLs: +```shell +$ docker exec ovs-ovn ovn-nbctl acl-list cluster-switch + + to-lport 1000 (ip4.src==10.20.0.0/16) allow-related + to-lport 100 (outport == "f5bd3404-df38-47e7-8907-4adbc4d24a7f" && ip && icmp) allow-related + to-lport 99 (outport == "f5bd3404-df38-47e7-8907-4adbc4d24a7f" && ip) drop +``` + +Now, let's remove the rule for ICMP: +```shell +$ docker exec ovs-ovn ovn-nbctl acl-del cluster-switch to-lport 100 'outport == "f5bd3404-df38-47e7-8907-4adbc4d24a7f" && ip && icmp' + +$ docker exec ovs-ovn ovn-nbctl acl-list cluster-switch + + to-lport 1000 (ip4.src==10.20.0.0/16) allow-related + to-lport 99 (outport == "f5bd3404-df38-47e7-8907-4adbc4d24a7f" && ip) drop +``` +and make a ping once again to see that it is dropped: +```shell +$ ping 10.100.0.4 -w 3 + +PING 10.100.0.4 (10.100.0.4) 56(84) bytes of data. + +--- 10.100.0.4 ping statistics --- +3 packets transmitted, 0 received, 100% packet loss, time 36ms +``` + +## Summary +OpenNESS is built with a microservices architecture. Depending on the deployment, there may be a requirement to service pure IP traffic and configure the dataplane using standard SDN based tools. OpenNESS demonstrates this by providing OVS as a dataplane in the place of NTS without changing the APIs from an end user perspective. 
diff --git a/doc/dataplane/openness-userspace-cni.md b/doc/dataplane/openness-userspace-cni.md new file mode 100644 index 00000000..39869961 --- /dev/null +++ b/doc/dataplane/openness-userspace-cni.md @@ -0,0 +1,100 @@ +```text +SPDX-License-Identifier: Apache-2.0 +Copyright (c) 2019 Intel Corporation +``` + +- [Userspace CNI](#userspace-cni) + - [Setup Userspace CNI](#setup-userspace-cni) + - [HugePages configuration](#hugepages-configuration) + - [Pod deployment](#pod-deployment) + - [Virtual interface usage](#virtual-interface-usage) + +# Userspace CNI + +Userspace CNI is a Container Network Interface Kubernetes plugin that was designed to simplify the process of deployment of DPDK based applications in Kubernetes pods. The plugin uses Kubernetes and Multus CNI's CRD to provide pod with virtual DPDK-enabled ethernet port. In this document you can find details about how to install OpenNESS with Userspace CNI support and how to use it's main features. + +## Setup Userspace CNI + +OpenNESS for Network Edge has been integrated with Userspace CNI to allow user to easily run DPDK based applications inside Kubernetes pods. To install OpenNESS Network Edge with Userspace CNI support, please add value `userspace` to variable `kubernetes_cnis` in `group_vars/all.yml` and set value of the variable `ovs_dpdk` in `roles/kubernetes/cni/kubeovn/common/defaults/main.yml` to `true`: + +```yaml +# group_vars/all.yml +kubernetes_cnis: +- kubeovn +- userspace +``` + +```yaml +# roles/kubernetes/cni/kubeovn/common/defaults/main.yml +ovs_dpdk: true +``` + +## HugePages configuration + +Please be aware that DPDK apps will require specific amount of HugePages enabled. By default the ansible scripts will enable 1024 of 2M HugePages in system, and then start OVS-DPDK with 1Gb of those HugePages. If you would like to change this settings to reflect your specific requirements please set ansible variables as defined in the example below. This example enables 4 of 1GB HugePages and appends 1 GB to OVS-DPDK leaving 3 pages for DPDK applications that will be running in the pods. + +```yaml +# network_edge.yml +- hosts: controller_group + vars: + hugepage_amount: "4" + +- hosts: edgenode_group + vars: + hugepage_amount: "4" +``` + +```yaml +# roles/machine_setup/grub/defaults/main.yml +hugepage_size: "1G" +``` + +>The variable `hugepage_amount` that can be found in `roles/machine_setup/grub/defaults/main.yml` can be left at default value of `5000` as this value will be overridden by values of `hugepage_amount` variables that were set earlier in `network_edge.yml`. + +```yaml +# roles/kubernetes/cni/kubeovn/common/defaults/main.yml +ovs_dpdk_hugepage_size: "1Gi" # This is the size of single hugepage to be used by DPDK. Can be 1Gi or 2Mi. +ovs_dpdk_hugepages: "1Gi" # This is overall amount of hugepags available to DPDK. +``` + +## Pod deployment + +To deploy pod with DPDK interface please create pod with `hugepages` mounted to `/dev/hugepages`, host directory `/var/run/openvswitch/` (with mandatory trailing slash character) mounted into pod with the volume name `shared-dir` (the name `shared-dir` is mandatory) and `userspace-openness` network annotation. 
You can find example pod definition with two DPDK ports below: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: userspace-example + annotations: + k8s.v1.cni.cncf.io/networks: userspace-openness, userspace-openness +spec: + containers: + - name: userspace-example + image: image-name + imagePullPolicy: Never + securityContext: + privileged: true + volumeMounts: + - mountPath: /ovs + name: shared-dir + - mountPath: /dev/hugepages + name: hugepages + resources: + requests: + memory: 1Gi + limits: + hugepages-1Gi: 2Gi + command: ["sleep", "infinity"] + volumes: + - name: shared-dir + hostPath: + path: /var/run/openvswitch/ + - name: hugepages + emptyDir: + medium: HugePages +``` + +## Virtual interface usage + +Socket files for virtual interfaces generated by Userspace CNI are created on host machine in `/var/run/openvswitch` directory. This directory has to be mounted into your pod by volume with **obligatory name `shared-dir`** (in our [example pod definition](#pod-deployment) `/var/run/openvswitch` is mounted to pod as `/ovs`). Then you can use sockets available in your mount-point directory in your DPDK-enabled application deployed inside pod. You can find further example in [Userspace CNI's documentation](https://github.com/intel/userspace-cni-network-plugin#testing-with-dpdk-testpmd-application). diff --git a/doc/dataplane/ovn_images/ovncni_cluster.png b/doc/dataplane/ovn_images/ovncni_cluster.png new file mode 100644 index 00000000..9aa04456 Binary files /dev/null and b/doc/dataplane/ovn_images/ovncni_cluster.png differ diff --git a/doc/enhanced-platform-awareness/multussriov-images/sriov-onprem-container.png b/doc/enhanced-platform-awareness/multussriov-images/sriov-onprem-container.png new file mode 100644 index 00000000..3ba72754 Binary files /dev/null and b/doc/enhanced-platform-awareness/multussriov-images/sriov-onprem-container.png differ diff --git a/doc/enhanced-platform-awareness/multussriov-images/sriov-onprem-vm.png b/doc/enhanced-platform-awareness/multussriov-images/sriov-onprem-vm.png new file mode 100644 index 00000000..3c7b6f66 Binary files /dev/null and b/doc/enhanced-platform-awareness/multussriov-images/sriov-onprem-vm.png differ diff --git a/doc/enhanced-platform-awareness/nfd-images/nfd3_onp_app.png b/doc/enhanced-platform-awareness/nfd-images/nfd3_onp_app.png new file mode 100644 index 00000000..10ad35f4 Binary files /dev/null and b/doc/enhanced-platform-awareness/nfd-images/nfd3_onp_app.png differ diff --git a/doc/enhanced-platform-awareness/openness-bios.md b/doc/enhanced-platform-awareness/openness-bios.md index 31c37d0b..4fa46beb 100644 --- a/doc/enhanced-platform-awareness/openness-bios.md +++ b/doc/enhanced-platform-awareness/openness-bios.md @@ -13,27 +13,27 @@ Copyright (c) 2019 Intel Corporation - [Usage](#usage) - [Reference](#reference) -## Overview +## Overview -BIOS and Firmware are the fundamental platform configurations of a typical Commercial off-the-shelf (COTS) platform. BIOS and Firmware configuration has very low level configurations that can determine the environment that will be available for the Network Functions or Applications. A typical BIOS configuration that would be of relevance for a network function or application may include CPU configuration, Cache and Memory configuration, PCIe Configuration, Power and Performance configuration, etc. Some Network Functions and Applications need certain BIOS and Firmware settings to be configured in a specific way for optimal functionality and behavior. 
+BIOS and Firmware are the fundamental platform configurations of a typical Commercial off-the-shelf (COTS) platform. BIOS and Firmware configuration has very low level configurations that can determine the environment that will be available for the Network Functions or Applications. A typical BIOS configuration that would be of relevance for a network function or application may include CPU configuration, Cache and Memory configuration, PCIe Configuration, Power and Performance configuration, etc. Some Network Functions and Applications need certain BIOS and Firmware settings to be configured in a specific way for optimal functionality and behavior. -## Usecase for edge +## Usecase for edge -Let's take an AI Inference Application as an example that uses an Accelerator like an FPGA. To get optimal performance, when this application is deployed by the Resource Orchestrator, it is recommended to place the Application on the same Node and CPU Socket to which the Accelerator is attached. To ensure this, NUMA, PCIe Memory mapped IO and Cache configuration needs to be configured optimally. Similarly for a Network Function like a Base station or Core network instruction set, cache and hyper threading play an important role in the performance and density. +Let's take an AI Inference Application as an example that uses an Accelerator like an FPGA. To get optimal performance, when this application is deployed by the Resource Orchestrator, it is recommended to place the Application on the same Node and CPU Socket to which the Accelerator is attached. To ensure this, NUMA, PCIe Memory mapped IO and Cache configuration needs to be configured optimally. Similarly for a Network Function like a Base station or Core network instruction set, cache and hyper threading play an important role in the performance and density. -OpenNESS provides a reference implementation demonstrating how to configure the low level platform settings like BIOS and Firmware and the capability to check if they are configured as per a required profile. To implement this feature, OpenNESS uses the Intel® System Configuration Utility. The Intel® System Configuration Utility (Syscfg) is a command-line utility that can be used to save and restore BIOS and firmware settings to a file or to set and display individual settings. +OpenNESS provides a reference implementation demonstrating how to configure the low level platform settings like BIOS and Firmware and the capability to check if they are configured as per a required profile. To implement this feature, OpenNESS uses the Intel® System Configuration Utility. The Intel® System Configuration Utility (Syscfg) is a command-line utility that can be used to save and restore BIOS and firmware settings to a file or to set and display individual settings. -> Important Note: Intel® System Configuration Utility is only supported on certain Intel® Server platforms. Please refer to the Intel® System Configuration Utility user guide for the supported server products. +> Important Note: Intel® System Configuration Utility is only supported on certain Intel® Server platforms. Please refer to the Intel® System Configuration Utility user guide for the supported server products. -> Important Note: Intel® System Configuration Utility is not intended for and should not be used on any non-Intel Server Products. +> Important Note: Intel® System Configuration Utility is not intended for and should not be used on any non-Intel Server Products. 
-The OpenNESS Network Edge implementation goes a step further and provides an automated process using Kubernetes to save and restore BIOS and firmware settings. To do this, the Intel® System Configuration Utility is packaged as a Pod deployed as a Kubernetes job that uses ConfigMap. This ConfigMap provides a mount point that has the BIOS and Firmware profile that needs to be used for the Worker node. A platform reboot is required for the BIOS and Firmware configuration to be applied. To enable this, the BIOS and Firmware Job is deployed as a privileged Pod. +The OpenNESS Network Edge implementation goes a step further and provides an automated process using Kubernetes to save and restore BIOS and firmware settings. To do this, the Intel® System Configuration Utility is packaged as a Pod deployed as a Kubernetes job that uses ConfigMap. This ConfigMap provides a mount point that has the BIOS and Firmware profile that needs to be used for the Worker node. A platform reboot is required for the BIOS and Firmware configuration to be applied. To enable this, the BIOS and Firmware Job is deployed as a privileged Pod. ![BIOS and Firmware configuration on OpenNESS](biosfw-images/openness_biosfw.png) _Figure - BIOS and Firmware configuration on OpenNESS_ -## Details: BIOS and Firmware Configuration on OpenNESS Network Edge +## Details: BIOS and Firmware Configuration on OpenNESS Network Edge BIOS and Firmware Configuration feature is wrapped in a kubectl plugin. Knowledge of Intel SYSCFG utility is required for usage. @@ -42,11 +42,10 @@ Intel SYSCFG must be manually downloaded by user after accepting the license. ### Setup In order to enable BIOSFW following steps need to be performed: -1. `biosfw/master` role needs to be uncommented in OpenNESS Experience Kits' `ne_controller.yml` -2. SYSCFG package must be downloaded and stored inside OpenNESS Experience Kits' `biosfw/` directory as a syscfg_package.zip, i.e. +1. SYSCFG package must be downloaded and stored inside OpenNESS Experience Kits' `biosfw/` directory as a syscfg_package.zip, i.e. `openness-experience-kits/biosfw/syscfg_package.zip` -3. `biosfw/worker` role needs to be uncommented in OpenNESS Experience Kits' `ne_node.yml` -4. OpenNESS Experience Kits' NetworkEdge deployment for both controller and nodes can be started. +2. `biosfw/master` and `biosfw/worker` roles must be uncommented in OpenNESS Experience Kits' `network_edge.yml` +3. OpenNESS Experience Kits' NetworkEdge deployment for both controller and nodes can be started. 
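For the final setup step, a minimal sketch of starting the Network Edge deployment with the deploy_ne.sh helper referenced elsewhere in this documentation; it assumes the commands are run from the openness-experience-kits checkout with the inventory and role changes already in place.

```shell
# Run from the openness-experience-kits directory after uncommenting the biosfw/master
# and biosfw/worker roles and placing biosfw/syscfg_package.zip.
./deploy_ne.sh controller   # deploy/update the Edge Controller
./deploy_ne.sh node         # deploy/update the Edge Nodes
```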
### Usage @@ -61,4 +60,3 @@ In order to enable BIOSFW following steps need to be performed: ## Reference - [Intel Save and Restore System Configuration Utility (SYSCFG)](https://downloadcenter.intel.com/download/28713/Save-and-Restore-System-Configuration-Utility-SYSCFG-) - diff --git a/doc/enhanced-platform-awareness/openness-dedicated-core.md b/doc/enhanced-platform-awareness/openness-dedicated-core.md index e74a9fc9..4d018b0f 100644 --- a/doc/enhanced-platform-awareness/openness-dedicated-core.md +++ b/doc/enhanced-platform-awareness/openness-dedicated-core.md @@ -1,6 +1,6 @@ ```text SPDX-License-Identifier: Apache-2.0 -Copyright (c) 2019 Intel Corporation +Copyright (c) 2019-2020 Intel Corporation ``` # Dedicated CPU core for workload support in OpenNESS @@ -10,9 +10,10 @@ Copyright (c) 2019 Intel Corporation - [Details - CPU Manager support in OpenNESS](#details---cpu-manager-support-in-openness) - [Setup](#setup) - [Usage](#usage) + - [OnPremises Usage](#onpremises-usage) - [Reference](#reference) -## Overview +## Overview Multi-core COTS platforms are typical in any cloud or Cloudnative deployment. Parallel processing on multiple cores helps achieve better density. On a Multi-core platform, one challenge for applications and network functions that are latency and throughput density is deterministic compute. To achieve deterministic compute allocating dedicated resources is important. Dedicated resource allocation avoids interference with other applications (Noisy Neighbor). When deploying on a cloud native platform, applications are deployed as PODs, therefore, providing required information to the container orchestrator on dedicated CPU cores is key. CPU manager allows provisioning of a POD to dedicated cores. @@ -32,7 +33,7 @@ _Figure - CPU Manager support on OpenNESS_ The following section outlines some considerations for using CPU Manager(CMK): -- If the workload already uses a threading library (e.g. pthread) and uses set affinity like APIs then CMK might not be needed. For such workloads, in order to provide cores to use for deployment, Kubernetes ConfigMaps is the recommended methodology. ConfigMaps can be used to pass the CPU core mask to the application to use. +- If the workload already uses a threading library (e.g. pthread) and uses set affinity like APIs then CMK might not be needed. For such workloads, in order to provide cores to use for deployment, Kubernetes ConfigMaps is the recommended methodology. ConfigMaps can be used to pass the CPU core mask to the application to use. - The workload is a medium to long-lived process with inter-arrival times of the order of ones to tens of seconds or greater. - After a workload has started executing, there is no need to dynamically update its CPU assignments. - Machines running workloads explicitly isolated by cmk must be guarded against other workloads that do not consult the cmk tool chain. The recommended way to do this is for the operator to taint the node. The provided cluster-init subcommand automatically adds such a taint. @@ -56,21 +57,20 @@ CMK documentation available on github includes: **Edge Controller / Kubernetes master** -1. Configure Edge Controller in Network Edge mode using `ne_controller.yml`, following roles must be enabled kubernetes/master, kubeovn/master and cmk/master. +1. Configure Edge Controller in Network Edge mode using `network_edge.yml`, following roles must be enabled `kubernetes/master`, `kubernetes/cni` (both enabled by default) and `cmk/master` (disabled by default). 2. 
CMK is enabled with following default values of parameters in `roles/cmk/master/defaults/main.yml` (adjust the values if needed): - `cmk_num_exclusive_cores` set to `4` - `cmk_num_shared_cores` set to `1` - `cmk_host_list` set to `node01,node02` (it should contain comma separated list of nodes' hostnames). -3. Deploy the controller with deploy_ne_controller.sh. +3. Deploy the controller with `deploy_ne.sh controller`. **Edge Node / Kubernetes worker** -1. Configure Edge Node in Network Edge mode using ne_node.yml, following roles must be enabled kubernetes/worker, kubeovn/worker and cmk/worker. -2. To change core isolation and tuned realtime profile settings edit `os_kernel_rt_tuned_vars` in `roles/os_kernelrt/defaults/main.yml`. -The changes will affect all edge nodes in the inventory, to set the parameter only for a specific node add the variable `os_kernel_rt_tuned_vars` to host_vars/node-name-in-inventory.yml. -3. Deploy the node with deploy_ne_node.sh. +1. Configure Edge Node in Network Edge mode using `network_edge.yml`, following roles must be enabled `kubernetes/worker`, `kubernetes/cni` (both enabled by default) and `cmk/worker` (disabled by default). +2. To change core isolation set isolated cores in `host_vars/node-name-in-inventory.yml` as `additional_grub_params` for your node e.g. in `host_vars/node01.yml` set `additional_grub_params: "isolcpus=1-10,49-58"` +3. Deploy the node with `deploy_ne.sh node`. Environment setup can be validated using steps from [CMK operator manual](https://github.com/intel/CPU-Manager-for-Kubernetes/blob/master/docs/operator.md#validating-the-environment). @@ -129,6 +129,32 @@ spec: name: cmk-conf-dir EOF ``` -## Reference + +> NOTE: CMK requires modification of deployed pod manifest for **all** deployed pods: +> - nodeName: must be added under pod spec section before deploying application (to point node on which pod is to be deployed) +> +> alternatively +> - toleration must be added to deployed pod under spec: +> +> ```yaml +> ... +> tolerations: +> +> - ... +> +> - effect: NoSchedule +> key: cmk +> operator: Exists +> ``` + +### OnPremises Usage +Dedicated core pinning is also supported for container and virtual machine deployment in OnPremises mode. This is done using the EPA Features section provided when creating applications for onboarding. For more details on application creation and onboarding in OnPremises mode, please see the [Application Onboarding Document](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/on-premises-applications-onboarding.md). + +To set dedicated core pinning for an application, *EPA Feature Key* should be set to `cpu_pin` and *EPA Feature Value* should be set to one of the following options: + +1. A single core e.g. `EPA Feature Value = 3` if pinning to core 3 only. +2. A sequential series of cores, e.g. `EPA Feature Value = 2-7` if pinning to cores 2 to 7 inclusive. +3. A comma separated list of cores, e.g. `EPA Feature Value = 1,3,6,7,9` if pinning to cores 1,3,6,7 and 9 only. +## Reference - [CPU Manager Repo](https://github.com/intel/CPU-Manager-for-Kubernetes) - More examples of Kubernetes manifests available in [CMK repository](https://github.com/intel/CPU-Manager-for-Kubernetes/tree/master/resources/pods) and [documentation](https://github.com/intel/CPU-Manager-for-Kubernetes/blob/master/docs/user.md). 
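As a sketch of the pod manifest modification described in the note above, the `nodeName` and `cmk` toleration entries sit directly under the pod `spec`; the pod name, image and node name below are illustrative only:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cmk-sample-pod              # illustrative name
spec:
  nodeName: node01                  # points at the node on which the pod is to be deployed
  tolerations:
  - effect: NoSchedule              # tolerate the taint added by cmk cluster-init
    key: cmk
    operator: Exists
  containers:
  - name: sample
    image: busybox                  # illustrative image
    command: ["sleep", "infinity"]
```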
diff --git a/doc/enhanced-platform-awareness/openness-environment-variables.md b/doc/enhanced-platform-awareness/openness-environment-variables.md new file mode 100644 index 00000000..c918884d --- /dev/null +++ b/doc/enhanced-platform-awareness/openness-environment-variables.md @@ -0,0 +1,24 @@ +```text +SPDX-License-Identifier: Apache-2.0 +Copyright (c) 2020 Intel Corporation +``` + +# Support for setting Environment Variables in OpenNESS + +- [Support for setting Environment Variables in OpenNESS](#support-for-setting-environment-variables-in-openness) + - [Overview](#overview) + - [Details of Environment Variable support in OpenNESS](#details-of-environment-variable-support-in-openness) + +## Overview + +Environment variables can be configured when creating a new Docker container. Once the container is running, any Application located in that container can detect and use the variable. These variables can be used to point to information needed by the processes being run by the Application. For example, an environment variable can be set to point to a file containing information to be read in by an Application or to the address of a device that the Application needs to use. + +When using environment variables, the value should be either a static value or some environment information that the Application cannot easily determine. Care should also be taken when setting environment variables, as using an incorrect variable name or value may cause the Application to operate in an unexpected way. + +## Details of Environment Variable support in OpenNESS + +Setting environment variables is supported when deploying containers in OnPrem mode during application onboarding. Please refer to the [Application Onboarding Document](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/on-premises-applications-onboarding.md) for more details on onboarding an application in OpenNESS. The general steps outlined in the document should be performed with the following addition. When creating a new application to add to the controller's application library, the user should set the *EPA Feature Key* and *EPA Feature Value* settings. The key to be used for environment variables is `env_vars` and the value should be set as `VARIABLE_NAME=VARIABLE_VALUE`. + +***Note:*** When setting environment variables, multiple variables can be provided in the *EPA Feature Value* field for a single `env_vars` key. To do so place a semi-colon (;) between each variable as follows: + + VAR1_NAME=VAR1_VALUE;VAR2_NAME=VAR2_VALUE diff --git a/doc/enhanced-platform-awareness/openness-fpga.md b/doc/enhanced-platform-awareness/openness-fpga.md index ba92de53..456bf507 100644 --- a/doc/enhanced-platform-awareness/openness-fpga.md +++ b/doc/enhanced-platform-awareness/openness-fpga.md @@ -1,6 +1,6 @@ ```text SPDX-License-Identifier: Apache-2.0 -Copyright (c) 2019 Intel Corporation +Copyright (c) 2019-2020 Intel Corporation ``` # Using FPGA in OpenNESS: Programming, Resource Allocation and Configuration @@ -28,18 +28,18 @@ The FPGA Programmable Acceleration Card plays a key role in accelerating certain - Integration - Today’s FPGAs include on-die processors, transceiver I/O’s at 28 Gbps (or faster), RAM blocks, DSP engines, and more. - Total Cost of Ownership (TCO) - While ASICs may cost less per unit than an equivalent FPGA, building them requires a non-recurring expense (NRE), expensive software tools, specialized design teams, and long manufacturing cycles. 
-Deployment of AI/ML applications at the edge is increasing the adoption of FPGA acceleration. This trend of devices performing machine learning at the edge locally versus relying solely on the cloud is driven by the need to lower latency, persistent availability, lower costs and address privacy concerns. +Deployment of AI/ML applications at the edge is increasing the adoption of FPGA acceleration. This trend of devices performing machine learning at the edge locally versus relying solely on the cloud is driven by the need to lower latency, persistent availability, lower costs and address privacy concerns. -This paper explains how the FPGA resource can be used on the OpenNESS platform for accelerating Network Functions and Edge application workloads. We use the Intel® FPGA Programmable Acceleration Card (Intel FPGA PAC) N3000 as a reference FPGA and use LTE/5G Forward Error Correction (FEC) as an example workload that accelerates the 5G or 4G L1 Base station Network function. The same concept and mechanism is applicable for application acceleration workloads like AI/ML on FPGA for Inference applications. +This paper explains how the FPGA resource can be used on the OpenNESS platform for accelerating Network Functions and Edge application workloads. We use the Intel® FPGA Programmable Acceleration Card (Intel FPGA PAC) N3000 as a reference FPGA and use LTE/5G Forward Error Correction (FEC) as an example workload that accelerates the 5G or 4G L1 Base station Network function. The same concept and mechanism is applicable for application acceleration workloads like AI/ML on FPGA for Inference applications. -The Intel® FPGA Programmable Acceleration Card (Intel FPGA PAC) N3000 is a full-duplex 100 Gbps in-system re-programmable acceleration card for multi-workload networking application acceleration. It has the right memory mixture designed for network functions, with integrated network interface card (NIC) in a small form factor that enables high throughput, low latency, low-power/bit for custom networking pipeline. +The Intel® FPGA Programmable Acceleration Card (Intel FPGA PAC) N3000 is a full-duplex 100 Gbps in-system re-programmable acceleration card for multi-workload networking application acceleration. It has the right memory mixture designed for network functions, with integrated network interface card (NIC) in a small form factor that enables high throughput, low latency, low-power/bit for custom networking pipeline. -FlexRAN is a reference Layer 1 pipeline of 4G eNb and 5G gNb on Intel architecture. The FlexRAN reference pipeline consists of L1 pipeline, optimized L1 processing modules, BBU pooling framework, Cloud and Cloud native deployment support and accelerator support for hardware offload. Intel® PAC N3000 card is used by FlexRAN to offload FEC (Forward Error Correction) for 4G and 5G and IO for Fronthaul/Midhaul. +FlexRAN is a reference Layer 1 pipeline of 4G eNb and 5G gNb on Intel architecture. The FlexRAN reference pipeline consists of L1 pipeline, optimized L1 processing modules, BBU pooling framework, Cloud and Cloud native deployment support and accelerator support for hardware offload. Intel® PAC N3000 card is used by FlexRAN to offload FEC (Forward Error Correction) for 4G and 5G and IO for Fronthaul/Midhaul. ## Intel® PAC N3000 FlexRAN host interface overview -The PAC N3000 card used in the FlexRAN solution exposes the following physical functions to the CPU host. 
-- 2x25G Ethernet interface that can be used for Fronthaul or Midhaul -- One FEC Interface that can be used of 4G or 5G FEC acceleration +The PAC N3000 card used in the FlexRAN solution exposes the following physical functions to the CPU host. +- 2x25G Ethernet interface that can be used for Fronthaul or Midhaul +- One FEC Interface that can be used of 4G or 5G FEC acceleration - The LTE FEC IP components have Turbo Encoder / Turbo decoder and rate matching / de-matching - The 5GNR FEC IP components have Low-density parity-check (LDPC) Encoder / LDPC Decoder, rate matching / de-matching, UL HARQ combining - Interface for managing and updating the FPGA Image – Remote system Update (RSU). @@ -55,16 +55,16 @@ FlexRAN is the network function that implements the FEC and is a low latency net _Figure - Intel PAC N3000 Orchestration and deployment with OpenNESS Network Edge for FlexRAN_ -## Intel PAC N3000 remote system update flow in OpenNESS Network edge Kubernetes -Remote System Update (RSU) of the FPGA is enabled through Open Programmable Acceleration Engine (OPAE). The OPAE package consists of a kernel driver and user space FPGA utils package that enables programming of the FPGA. OpenNESS automates the process of deploying the OPAE stack as a Kubernetes POD that detects the FPGA and programs it. There is a separate FPGA Configuration POD which is deployed as a Kubernetes job which configures the FPGA resources such as Virtual Functions and queues. +## Intel PAC N3000 remote system update flow in OpenNESS Network edge Kubernetes +Remote System Update (RSU) of the FPGA is enabled through Open Programmable Acceleration Engine (OPAE). The OPAE package consists of a kernel driver and user space FPGA utils package that enables programming of the FPGA. OpenNESS automates the process of deploying the OPAE stack as a Kubernetes POD that detects the FPGA and programs it. There is a separate FPGA Configuration POD which is deployed as a Kubernetes job which configures the FPGA resources such as Virtual Functions and queues. ![OpenNESS Network Edge Intel PAC N3000 RSU and resource allocation](fpga-images/openness-fpga3.png) _Figure - OpenNESS Network Edge Intel PAC N3000 RSU and resource allocation_ -## Using FPGA on OpenNESS - Details +## Using FPGA on OpenNESS - Details -Further sections provide instructions on how to use all three FPGA features - Programming, Configuration and accessing from application on OpenNESS Network and OnPremises Edge. +Further sections provide instructions on how to use all three FPGA features - Programming, Configuration and accessing from application on OpenNESS Network and OnPremises Edge. When the PAC N3000 FPGA is programmed with a vRAN 5G FPGA image it exposes the Single Root I/O Virtualization (SRIOV) Virtual Function (VF) devices which can be used to accelerate the FEC in the vRAN workload. In order to take advantage of this functionality for a Cloud Native deployment the PF (Physical Function) of the device must be bound to DPDK IGB_UIO user-space driver in order to create a number of VFs (Virtual Functions). Once the VFs are created they must also be bound to a DPDK user-space driver in order to allocate them to specific K8s pods running the vRAN workload. @@ -81,17 +81,26 @@ It is assumed that the FPGA is always used with OpenNESS Network Edge, paired wi ### FPGA (FEC) Ansible installation for OpenNESS Network Edge To run the OpenNESS package with FPGA (FEC) functionality the feature needs to be enabled on both Edge Controller and Edge Node. 
-#### Edge Controller +#### Edge Controller -To enable on Edge Controller set/uncomment following in `ne_controller.yml` in OpenNESS-Experience-Kits top level directory: -``` +To enable on Edge Controller set/uncomment following in `network_edge.yml` in OpenNESS-Experience-Kits top level directory: +```yaml +# network_edge.yml - role: opae_fpga/master -- role: multus -- role: sriov/master ``` -Also enable/configure following options in `roles/sriov/common/defaults/main.yml`. -The following device config is the default config for the PAC N3000 with 5GNR vRAN user image tested (this configuration is common both to EdgeNode and EdgeController setup). + +Additionally SRIOV must be enabled in OpenNESS: +```yaml +# group_vars/all.yml +kubernetes_cnis: +- kubeovn +- sriov ``` + +Also enable/configure following options in `roles/kubernetes/cni/sriov/common/defaults/main.yml`. +The following device config is the default config for the PAC N3000 with 5GNR vRAN user image tested (this configuration is common both to EdgeNode and EdgeController setup). +```yaml +# roles/kubernetes/cni/sriov/common/defaults/main.yml fpga_sriov_userspace: enabled: true fpga_userspace_vf: @@ -100,17 +109,15 @@ fpga_userspace_vf: vf_device_id: "0d90" pf_device_id: "0d8f" vf_number: "2" + vf_driver: "vfio-pci" ``` -Run setup script `deploy_ne_controller.sh`. +#### Edge Node -#### Edge Node - -To enable on the Edge Node set following in `ne_node.yml` (Please note that the `sriov/worker` role needs to be executed before `kubernetes/worker` role): +To enable on the Edge Node set following in `network_edge.yml`: ``` - role: opae_fpga/worker -- role: sriov/worker ``` The following packages need to be placed into specific directories in order for the feature to work: @@ -121,7 +128,42 @@ The following packages need to be placed into specific directories in order for 3. Factory image configuration package `n3000-1-3-5-beta-cfg-2x2x25g-setup.zip` needs to be placed inside `openness-experience-kits/opae_fpga` directory. The package can be obtained as part of PAC N3000 OPAE beta release (Please contact your Intel representative or visit [Resource Design Centre](https://cdrdv2.intel.com/v1/dl/getContent/616080 ) to obtain the package) -Run setup script `deploy_ne_node.sh`. +Run setup script `deploy_ne.sh`. 
+ +On successful deployment following pods will be available in the cluster: +```shell +kubectl get pods -A + +NAMESPACE NAME READY STATUS RESTARTS AGE +kube-ovn kube-ovn-cni-hdgrl 1/1 Running 0 3d19h +kube-ovn kube-ovn-cni-px79b 1/1 Running 0 3d18h +kube-ovn kube-ovn-controller-578786b499-74vzm 1/1 Running 0 3d19h +kube-ovn kube-ovn-controller-578786b499-j22gl 1/1 Running 0 3d19h +kube-ovn ovn-central-5f456db89f-z7d6x 1/1 Running 0 3d19h +kube-ovn ovs-ovn-46k8f 1/1 Running 0 3d18h +kube-ovn ovs-ovn-5r2p6 1/1 Running 0 3d19h +kube-system coredns-6955765f44-mrc82 1/1 Running 0 3d19h +kube-system coredns-6955765f44-wlvhc 1/1 Running 0 3d19h +kube-system etcd-silpixa00394960 1/1 Running 0 3d19h +kube-system kube-apiserver-silpixa00394960 1/1 Running 0 3d19h +kube-system kube-controller-manager-silpixa00394960 1/1 Running 0 3d19h +kube-system kube-multus-ds-amd64-2zdqt 1/1 Running 0 3d18h +kube-system kube-multus-ds-amd64-db8fd 1/1 Running 0 3d19h +kube-system kube-proxy-dd259 1/1 Running 0 3d19h +kube-system kube-proxy-sgn9g 1/1 Running 0 3d18h +kube-system kube-scheduler-silpixa00394960 1/1 Running 0 3d19h +kube-system kube-sriov-cni-ds-amd64-k9wnd 1/1 Running 0 3d18h +kube-system kube-sriov-cni-ds-amd64-pclct 1/1 Running 0 3d19h +kube-system kube-sriov-device-plugin-amd64-fhbv8 1/1 Running 0 3d18h +kube-system kube-sriov-device-plugin-amd64-lmx9k 1/1 Running 0 3d19h +openness eaa-78b89b4757-xzh84 1/1 Running 0 3d18h +openness edgedns-dll9x 1/1 Running 0 3d18h +openness interfaceservice-grjlb 1/1 Running 0 3d18h +openness nfd-master-dd4ch 1/1 Running 0 3d19h +openness nfd-worker-c24wn 1/1 Running 0 3d18h +openness syslog-master-9x8hc 1/1 Running 0 3d19h +openness syslog-ng-br92z 1/1 Running 0 3d18h +``` ### FPGA Programming and telemetry on OpenNESS Network Edge In order to program the FPGA factory image (One Time Secure Upgrade) or the user image (5GN FEC vRAN) of the PAC N3000 via OPAE a `kubectl` plugin for K8s is provided. The plugin also allows for obtaining basic FPGA telemetry. This plugin will deploy K8s jobs which will run to completion on desired host and display the logs/output of the command. @@ -154,6 +196,19 @@ kubectl rsu program -f -n -d kubectl rsu get power -n kubectl rsu get fme -n + + +# Sample output for correctly programmed card with `get fme` command +//****** FME ******// +Object Id : 0xED00000 +PCIe s\:b\:d.f : 0000:1b:00.0 +Device Id : 0x0b30 +Numa Node : 0 +Ports Num : 01 +Bitstream Id : 0x2145042A010304 +Bitstream Version : 0.2.1 +Pr Interface Id : a5d72a3c-c8b0-4939-912c-f715e5dc10ca +Boot Page : user ``` 7. For more information on usage of each `kubectl rsu` plugin capability run each command with `-h` argument. @@ -228,10 +283,14 @@ Expected: `Mode of operation = VF-mode FPGA_LTE PF [0000:xx:00.0] configuration ### Requesting resources and running pods for OpenNESS Network Edge As part of OpenNESS Ansible automation a K8s device plugin to orchestrate the FPGA VFs bound to user-space driver is running. This will enable scheduling of pods requesting this device/devices. Number of devices available on the Edge Node can be checked from Edge Controller by running: -`kubectl get node -o json | jq '.status.allocatable'` +```shell +kubectl get node silpixa00400489 -o json | jq '.status.allocatable' -To request the device as a resource in the pod add the request for the resource into the pod specification file, by specifying its name and amount of resources required. 
If the resource is not available or the amount of resources requested is greater than the amount of resources available, the pod status will be 'Pending' until the resource is available. -Note that the name of the resource must match the name specified in the configMap for the K8s devices plugin (`./fpga/configMap.yml`). +"intel.com/intel_fec_5g": "2" +``` + +To request the device as a resource in the pod add the request for the resource into the pod specification file, by specifying its name and amount of resources required. If the resource is not available or the amount of resources requested is greater than the amount of resources available, the pod status will be 'Pending' until the resource is available. +Note that the name of the resource must match the name specified in the configMap for the K8s devices plugin (`./fpga/configMap.yml`). A sample pod requesting the FPGA (FEC) VF may look like this: @@ -252,8 +311,8 @@ spec: requests: intel.com/intel_fec_5g: '1' limits: - intel.com/intel_fec_5g: '1' -``` + intel.com/intel_fec_5g: '1' +``` In order to test the resource allocation to the pod, save the above snippet to the sample.yaml file and create the pod. @@ -277,7 +336,7 @@ Navigate to: `edgeapps/fpga-sample-app` -Copy the necessary `flexran-dpdk-bbdev-v19-10.patch` file into the directory. This patch is available as part of FlexRAN 19.10 release package. To obtain this FlexRAN patch allowing 5G functionality for BBDEV in DPDK please contact your Intel representative or visit [Resource Design Centre](https://cdrdv2.intel.com/v1/dl/getContent/615743 ) +Copy the necessary `dpdk_19.11_new.patch` file into the directory. This patch is available as part of FlexRAN 20.02 release package. To obtain this FlexRAN patch allowing 5G functionality for BBDEV in DPDK please contact your Intel representative or visit [Resource Design Centre](https://cdrdv2.intel.com/v1/dl/getContent/615743 ) Build image: @@ -288,16 +347,53 @@ From the Edge Controller deploy the application pod, pod specification located a `kubectl create -f fpga-sample-app.yaml` Execute into the application pod and run the sample app: -``` +```shell +# enter the pod kubectl exec -it pod-bbdev-sample-app -- /bin/bash +# run test application ./test-bbdev.py --testapp-path ./testbbdev -e="-w ${PCIDEVICE_INTEL_COM_INTEL_FEC_5G}" -i -n 1 -b 1 -l 1 -c validation -v ./test_vectors/ldpc_dec_v7813.data + +# sample output +EAL: Detected 48 lcore(s) +EAL: Detected 2 NUMA nodes +EAL: Multi-process socket /var/run/dpdk/rte/mp_socket +EAL: Selected IOVA mode 'VA' +EAL: No available hugepages reported in hugepages-1048576kB +EAL: Probing VFIO support... 
+EAL: VFIO support initialized +EAL: PCI device 0000:20:00.1 on NUMA socket 0 +EAL: probe driver: 8086:d90 intel_fpga_5gnr_fec_vf +EAL: using IOMMU type 1 (Type 1) + +=========================================================== +Starting Test Suite : BBdev Validation Tests +Test vector file = ./test_vectors/ldpc_dec_v7813.data +mcp fpga_setup_queuesDevice 0 queue 16 setup failed +Allocated all queues (id=16) at prio0 on dev0 +Device 0 queue 16 setup failed +All queues on dev 0 allocated: 16 ++ ------------------------------------------------------- + +== test: validation/latency +dev: 0000:20:00.1, burst size: 1, num ops: 1, op type: RTE_BBDEV_OP_LDPC_DEC +Operation latency: + avg: 17744 cycles, 12.6743 us + min: 17744 cycles, 12.6743 us + max: 17744 cycles, 12.6743 us +TestCase [ 0] : latency_tc passed + + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + + + Test Suite Summary : BBdev Validation Tests + + Tests Total : 1 + + Tests Skipped : 0 + + Tests Passed : 1 + + Tests Failed : 0 + + Tests Lasted : 95.2308 ms + + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + ``` The output of the application should indicate total of ‘1’ tests and ‘1’ test passing, this concludes the validation of the FPGA VF working correctly inside K8s pod. -## Reference +## Reference - [Intel® FPGA Programmable Acceleration Card N3000](https://www.intel.com/content/www/us/en/programmable/products/boards_and_kits/dev-kits/altera/intel-fpga-pac-n3000/overview.html) - [FlexRAN 19.10 release - Resource Design Centre](https://cdrdv2.intel.com/v1/dl/getContent/615743) - [PAC N3000 OPAE beta release - Resource Design Centre](https://cdrdv2.intel.com/v1/dl/getContent/616082) -- [PAC N3000 OPAE beta release (2) - Resource Design Centre](https://cdrdv2.intel.com/v1/dl/getContent/616080) - +- [PAC N3000 OPAE beta release (2) - Resource Design Centre](https://cdrdv2.intel.com/v1/dl/getContent/616080) diff --git a/doc/enhanced-platform-awareness/openness-hugepage.md b/doc/enhanced-platform-awareness/openness-hugepage.md index dc68a688..c0c2dc2f 100644 --- a/doc/enhanced-platform-awareness/openness-hugepage.md +++ b/doc/enhanced-platform-awareness/openness-hugepage.md @@ -1,85 +1,125 @@ ```text SPDX-License-Identifier: Apache-2.0 -Copyright (c) 2019 Intel Corporation +Copyright (c) 2019-2020 Intel Corporation ``` -# Hugepage support on OpenNESS +# Hugepage support on OpenNESS - [Hugepage support on OpenNESS](#hugepage-support-on-openness) - [Overview](#overview) - [Details of Hugepage support on OpenNESS](#details-of-hugepage-support-on-openness) - - [Network edge mode](#network-edge-mode) - - [OnPrem mode](#onprem-mode) + - [Examples](#examples) + - [Changing size of the hugepage for both controllers and nodes](#changing-size-of-the-hugepage-for-both-controllers-and-nodes) + - [Setting different hugepage amount for Edge Controller or Edge Nodes in Network Edge mode](#setting-different-hugepage-amount-for-edge-controller-or-edge-nodes-in-network-edge-mode) + - [Setting different hugepage amount for Edge Controller or Edge Nodes in On Premises mode](#setting-different-hugepage-amount-for-edge-controller-or-edge-nodes-in-on-premises-mode) + - [Setting hugepage size for Edge Controller or Edge Node in Network Edge mode](#setting-hugepage-size-for-edge-controller-or-edge-node-in-network-edge-mode) + - [Setting hugepage size for Edge Controller or Edge Node in On Premises mode](#setting-hugepage-size-for-edge-controller-or-edge-node-in-on-premises-mode) + - [Customizing hugepages for specific 
machine](#customizing-hugepages-for-specific-machine) - [Reference](#reference) -## Overview +## Overview -Memory is allocated to application processes in terms of pages - by default the 4K pages are supported. For Applications dealing with larger datasets, using 4K pages may lead to performance degradation and overhead because of TLB misses. To address this, modern CPUs support huge pages which are typically 2M and 1G. This helps avoid TLB miss overhead and therefore improves performance. +Memory is allocated to application processes in terms of pages - by default the 4K pages are supported. For Applications dealing with larger datasets, using 4K pages may lead to performance degradation and overhead because of TLB misses. To address this, modern CPUs support huge pages which are typically 2M and 1G. This helps avoid TLB miss overhead and therefore improves performance. -Both Applications and Network functions can gain in performance from using hugepages. Huge page support, added to Kubernetes v1.8, enables the discovery, scheduling and allocation of huge pages as a native first-class resource. This support addresses low latency and deterministic memory access requirements. +Both Applications and Network functions can gain in performance from using hugepages. Huge page support, added to Kubernetes v1.8, enables the discovery, scheduling and allocation of huge pages as a native first-class resource. This support addresses low latency and deterministic memory access requirements. ## Details of Hugepage support on OpenNESS -Hugepages are enabled by default. There are two parameters that are describing the hugepages: the size of single page (can be 2MB or 1GB) and amount of those pages. In network edge deployment there is, enabled by default, 500 of 2MB hugepages (which equals to 2GB of memory) per node/controller, and in OnPrem deployment hugepages are enabled only for nodes and the default is 5000 of 2MB pages (10GB). If you want to change those settings you will need to edit config files as described below. All the settings have to be adjusted before OpenNESS installation. - -### Network edge mode - -You can change the size of single page editing the variable `hugepage_size` in `roles/grub/defaults/main.yml`: - -To set the page size of 2 MB: - +OpenNESS deployment enables the hugepages by default and provides parameters for tuning the hugepages: +* `hugepage_size` - size, which can be either `2M` or `1G` +* `hugepage_amount` - amount + +By default, these variables have values: +| Mode | Machine type | `hugepage_amount` | `hugepage_size` | Comments | +|--------------|--------------|:-----------------:|:---------------:|----------------------------------------------| +| Network Edge | Controller | `1024` | `2M` | | +| | Node | `1024` | `2M` | | +| On-Premises | Controller | `1024` | `2M` | For OVNCNI dataplane, otherwise no hugepages | +| | Node | `5000` | `2M` | | + +Guide on changing these values is below. Customizations must be made before OpenNESS deployment. + +Variables for hugepage customization can be placed in several files: +* `group_vars/all.yml` will affect all modes and machine types +* `group_vars/controller_group.yml` and `group_vars/edgenode_group.yml` will affect Edge Controller and Edge Nodes respectively in all modes +* `host_vars/.yml` will only affect `` host present in `inventory.ini` (in all modes) +* To configure hugepages for specific mode, they can be placed in `network_edge.yml` and `on_premises.yml` under + ```yaml + - hosts: # e.g. 
controller_group or edgenode_group + vars: + hugepage_amount: "10" + hugepage_size: "1G" + ``` + +This is summarized in a following table: + +| File | Network Edge | On Premises | Edge Controller | Edge Node | Comment | +|---------------------------------------|:------------:|:-----------:|:--------------------------------------:|:------------------------------------:|:-------------------------------------------------------------------------------:| +| `group_vars/all.yml` | yes | yes | yes | yes - every node | | +| `group_vars/controller_group.yml` | yes | yes | yes | | | +| `group_vars/edgenode_group.yml` | yes | yes | | yes - every node | | +| `host_vars/.yml` | yes | yes | yes | yes | affects machine specified in `inventory.ini` with name `` | +| `network_edge.yml` | yes | | `vars` under `hosts: controller_group` | `vars` under `hosts: edgenode_group` - every node | | +| `on_premises.yml` | | yes | `vars` under `hosts: controller_group` | `vars` under `hosts: edgenode_group` - every node| | + +Note that variables have a precedence: +1. `network_edge.yml` and `on_premises.yml` will always take precedence for files from this list (override every var) +2. `host_vars/` +3. `group_vars/` +4. `default/main.yml` in roles' directory + +### Examples + +#### Changing size of the hugepage for both controllers and nodes +Add following line to the `group_vars/all.yml`: +* To set the page size of 2 MB (which is default value): + ```yaml + hugepage_size: "2M" + ``` +* To set the page size of 1GB: + ```yaml + hugepage_size: "1G" + ``` + +#### Setting different hugepage amount for Edge Controller or Edge Nodes in Network Edge mode +The amount of hugepages can be set separately for both controller and nodes. To set the amount of hugepages for controller please change the value of variable `hugepage_amount` in `network_edge.yml`, for example: ```yaml -hugepage_size: "2M" -``` - -To set the page size of 1GB: - -```yaml -hugepage_size: "1G" -``` - -The amount of hugepages can be set separately for both controller and nodes. To set the amount of hugepages for controller please change the value of variable `hugepage_amount` in `ne_controller.yml`: - -For example: - -```yaml -vars: +- hosts: controller_group + vars: hugepage_amount: "1500" ``` - will enable 1500 pages of the size specified by `hugepage_size` variable. -To set the amount of hugepages for nodes please change the value of variable `hugepage_amount` in `ne_node.yml`: - -For example: - +To set the amount of hugepages for all of the nodes please change the value of variable `hugepage_amount` in `network_edge.yml`, for example: ```yaml -vars: +- hosts: edgenode_group + vars: hugepage_amount: "3000" ``` will enable 3000 pages of the size specified by `hugepage_size` variable for each deployed node. -### OnPrem mode +#### Setting different hugepage amount for Edge Controller or Edge Nodes in On Premises mode -The hugepages are enabled only for the nodes. 
You can change the size of single page and amount of the pages editing the variables `hugepage_size` and `hugepage_amount` in `roles/grub/defaults/main.yml`: - -For example: +[Instruction for Network Edge](#setting-different-hugepage-amount-for-edge-controller-or-edge-nodes-in-network-edge-mode) is applicable for On Premises mode with the exception of the file to be edited: `on_premises.yml` +#### Setting hugepage size for Edge Controller or Edge Node in Network Edge mode +Different hugepage size for node or controller can be done by adding `hugepage_size` to the playbook (`network_edge.yml` file), e.g. ```yaml -hugepage_size: "2M" -hugepage_amount: "2000" +- hosts: controller_group # or edgenode_group + vars: + hugepage_amount: "5" + hugepage_size: "1G" ``` -will enable 2000 of 2MB pages, and: +#### Setting hugepage size for Edge Controller or Edge Node in On Premises mode -```yaml -hugepage_size: "1G" -hugepage_amount: "5" -``` +[Instruction for Network Edge](#setting-hugepage-size-for-edge-controller-or-edge-node-in-network-edge-mode) is applicable for On Premises mode with the exception of the file to be edited: `on_premises.yml` -will enable 5 pages, 1GB each. +#### Customizing hugepages for specific machine +To specify size or amount only for specific machine, `hugepage_size` and/or `hugepage_amount` can be provided in `host_vars/.yml` (i.e. if host is named `node01`, then the file is `host_vars/node01.yml`). -## Reference -- [Hugepages support in Kubernetes](https://kubernetes.io/docs/tasks/manage-hugepages/scheduling-hugepages/) +Note that vars in `on_premises.yml` have greater precedence than ones in `host_vars/`, therefore to provide greater control over hugepage variables, `hugepage_amount` from `network_edge.yml` and/or `on_premises.yml` should be removed. +## Reference +- [Hugepages support in Kubernetes](https://kubernetes.io/docs/tasks/manage-hugepages/scheduling-hugepages/) diff --git a/doc/enhanced-platform-awareness/openness-node-feature-discovery.md b/doc/enhanced-platform-awareness/openness-node-feature-discovery.md index 131fb1e8..b5b37e53 100644 --- a/doc/enhanced-platform-awareness/openness-node-feature-discovery.md +++ b/doc/enhanced-platform-awareness/openness-node-feature-discovery.md @@ -3,23 +3,26 @@ SPDX-License-Identifier: Apache-2.0 Copyright (c) 2019 Intel Corporation ``` -# Node Feature Discovery support in OpenNESS +# Node Feature Discovery support in OpenNESS - [Node Feature Discovery support in OpenNESS](#node-feature-discovery-support-in-openness) - [Overview of NFD and Edge usecase](#overview-of-nfd-and-edge-usecase) - - [Details - Node Feature Discovery support in OpenNESS](#details---node-feature-discovery-support-in-openness) - - [Usage](#usage) + - [Details](#details) + - [Node Feature Discovery support in OpenNESS Network Edge](#node-feature-discovery-support-in-openness-network-edge) + - [Usage](#usage) + - [Node Feature Discovery support in OpenNESS On Premises](#node-feature-discovery-support-in-openness-on-premises) + - [Usage](#usage-1) - [Reference](#reference) -## Overview of NFD and Edge usecase +## Overview of NFD and Edge usecase -COTS Platforms used for edge deployment come with many features that enable workloads take advantage of, to provide better performance and meet the SLA. When such COTS platforms are deployed in a cluster as part of a Cloudnative deployment it becomes important to detect the hardware and software features on all nodes that are part of that cluster. 
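For example, to give a single node 1 GB pages while leaving the rest of the cluster on the defaults, the per-node file could contain (values are illustrative):

```yaml
# host_vars/node01.yml
hugepage_size: "1G"
hugepage_amount: "8"
```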
It should also be noted that some of the nodes might have special accelerator hardware like FPGA, GPU, NVMe, etc. +COTS Platforms used for edge deployment come with many features that enable workloads take advantage of, to provide better performance and meet the SLA. When such COTS platforms are deployed in a cluster as part of a Cloudnative deployment it becomes important to detect the hardware and software features on all nodes that are part of that cluster. It should also be noted that some of the nodes might have special accelerator hardware like FPGA, GPU, NVMe, etc. Let us consider an edge application like CDN that needs to be deployed in the cloud native edge cloud. It would be favorable for a Container orchestrator like Kubernetes to detect the nodes that have CDN friendly hardware and software features like NVMe, media extensions and so on. Now let us consider a Container Network Function (CNF) like 5G gNb that implements L1 5G NR base station. It would be favorable for the Container orchestrator like Kubernetes to detect nodes that have hardware and software features like FPGA acceleration for Forward error correction, Advanced vector instructions to implement math functions, real-time kernel and so on. -OpenNESS supports the discovery of such features using Node Feature Discovery (NFD). NFD is a Kubernetes add-on that detects and advertises hardware and software capabilities of a platform that can, in turn, be used to facilitate intelligent scheduling of a workload. Node Feature Discovery is one of the Intel technologies that supports targeting of intelligent configuration and capacity consumption of platform capabilities. NFD runs as a separate container on each individual node of the cluster, discovers capabilities of the node, and finally, publishes these as node labels using the Kubernetes API. NFD only handles non-allocatable features. +OpenNESS supports the discovery of such features using Node Feature Discovery (NFD). NFD is a Kubernetes add-on that detects and advertises hardware and software capabilities of a platform that can, in turn, be used to facilitate intelligent scheduling of a workload. Node Feature Discovery is one of the Intel technologies that supports targeting of intelligent configuration and capacity consumption of platform capabilities. NFD runs as a separate container on each individual node of the cluster, discovers capabilities of the node, and finally, publishes these as node labels using the Kubernetes API. NFD only handles non-allocatable features. Some of the Node features that NFD can detect include: @@ -32,7 +35,7 @@ At its core, NFD detects hardware features available on each node in a Kubernete NFD consists of two software components: 1) nfd-master is responsible for labeling Kubernetes node objects -2) nfd-worker detects features and communicates them to the nfd-master. One instance of nfd-worker should be run on each node of the cluster +2) nfd-worker detects features and communicates them to the nfd-master. One instance of nfd-worker should be run on each node of the cluster The figure below illustrates how the CDN application will be deployed on the right platform when NFD is utilized, where the required key hardware like NVMe and AVX instruction set support is available. 
@@ -46,13 +49,15 @@ _Figure - CDN app deployment with NFD Features_ > UEFI Secure Boot: Boot Firmware verification and authorization of OS Loader/Kernel components -## Details - Node Feature Discovery support in OpenNESS +## Details -Node Feature Discovery is enabled by default. It does not require any configuration or user input. It can be disabled by editing the `ne_controller.yml` file and commenting out `nfd` role before OpenNESS installation. +### Node Feature Discovery support in OpenNESS Network Edge + +Node Feature Discovery is enabled by default. It does not require any configuration or user input. It can be disabled by editing the `network_edge.yml` file and commenting out `nfd/network_edge` role before OpenNESS installation. Connection between nfd-workers and nfd-master is secured by certificates generated before running nfd pods. -### Usage +#### Usage NFD is working automatically and does not require any user action to collect the features from nodes. Features found by NFD and labeled in Kubernetes can be shown by command: `kubectl get no -o json | jq '.items[].metadata.labels'`. @@ -111,5 +116,27 @@ spec: feature.node.kubernetes.io/cpu-pstate.turbo: 'true' ``` -## Reference +### Node Feature Discovery support in OpenNESS On Premises + +Node Feature Discovery is enabled by default. It does not require any configuration or user input. It can be disabled by editing the `on_premises.yml` file and commenting out `nfd/onprem/master` role and `nfd/onprem/master` role before OpenNESS installation. + +NFD service in OpenNESS On Premises consists of two software components: + +- *nfd-worker*, which is taken from https://github.com/kubernetes-sigs/node-feature-discovery (downloaded as image) +- *nfd-master*: stand alone service run on Edge Controller. + +Nfd-worker connects to nfd-master server. Connection between nfd-workers and nfd-master is secured by TLS based certificates used in Edge Node enrollment: nfd-worker uses certificates of Edge Node, nfd-master generates certificate based on Edge Controller root certificate. Nfd-worker provides hardware features to nfd-master which stores that data to the controller mysql database. It can be used then as EPA Feature requirement while defining and deploying app on node. + +#### Usage + +NFD is working automatically and does not require any user action to collect the features from nodes. +Default version of nfd-worker downloaded by ansible scripts during deployment is v.0.5.0. It can be changed by setting variable `nfd_version` in `roles/nfd/onprem/worker/defaults/main.yml`. + +Features found by NFD are visible in Edge Controller UI in node's NFD tab. While defining edge application (Controller UI->APPLICATIONS->ADD APPLICATION), `EPA Feature` fields can be used as definition of NFD requirement for app deployment. Eg: if application requires Multi-Precision Add-Carry Instruction Extensions (ADX), user can set EPA Feature Key to `nfd:cpu-cpuid.ADX` and EPA Feature Value to `true`. + +![Sample application with NFD Feature required](nfd-images/nfd3_onp_app.png) + +Deployment of such application will fail for nodes that don't provide this feature with this particular value. List of features supported by nfd-worker service can be found: https://github.com/kubernetes-sigs/node-feature-discovery#feature-discovery. Please note that `nfd:` prefix always has to be added when used as EPA Feature Key. 
+ +## Reference More details about NFD can be found here: https://github.com/Intel-Corp/node-feature-discovery diff --git a/doc/enhanced-platform-awareness/openness-port-forward.md b/doc/enhanced-platform-awareness/openness-port-forward.md new file mode 100644 index 00000000..aa2f82cd --- /dev/null +++ b/doc/enhanced-platform-awareness/openness-port-forward.md @@ -0,0 +1,21 @@ +```text +SPDX-License-Identifier: Apache-2.0 +Copyright (c) 2020 Intel Corporation +``` + +# Support for setting up port forwarding of a container in OpenNESS On-Prem mode + +- [Support for setting up port forwarding of a container in OpenNESS On-Prem mode](#support-for-setting-up-port-forwarding-of-a-container-in-openness-on-prem-mode) + - [Overview](#overview) + - [Usage](#usage) + +## Overview + +This feature enables the user to set up external network ports for their application (container) - so that applications running on other hosts can connect. + +## Usage +To take advantage of this feature, all you have to do is fill in the port and protocol fields during application creation. +OpenNESS will pass that information down to Docker, and assuming all goes well, when you start this container your ports will be exposed. + +For more details on the application onboarding (including other fields to set), please refer to +[Application Onboarding Document](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/on-premises-applications-onboarding.md) diff --git a/doc/enhanced-platform-awareness/openness-shared-storage.md b/doc/enhanced-platform-awareness/openness-shared-storage.md new file mode 100644 index 00000000..a95b5606 --- /dev/null +++ b/doc/enhanced-platform-awareness/openness-shared-storage.md @@ -0,0 +1,32 @@ +```text +SPDX-License-Identifier: Apache-2.0 +Copyright (c) 2020 Intel Corporation +``` + +# Shared storage for containers in OpenNESS On-Prem mode + + +- [Shared storage for containers in OpenNESS On-Prem mode](#shared-storage-for-containers-in-openness-on-prem-mode) + - [Overview](#overview) + - [Usage](#usage) + +## Overview + +OpenNESS On-Prem mode provides possibility to use volume and bind mount storage models known from docker. For detailed information please refer to: https://docs.docker.com/storage/volumes/ and https://docs.docker.com/storage/bind-mounts/. In OpenNESS On-Prem it is achieved by simply adding mount items to the containers `HostConfig` structure. + +## Usage + +In order to add volume/bindmount to node container application user should use `EPA Feature` part of application creation form on +ControllerUI->APPLICATIONS->ADD APPLICATION by adding item with `mount` EPA Feature Key. 
Valid syntax of EPA Feature Value in such case should be `...;type,source,target,readonly;...` where: +- multiple mounts can be added in one EPA Feature by delimiting with semicolons +- supported types are `volume` and `bind` which corresponds to volume and bind mount known from docker +- source: + - volume name (volume will be automatically created if not exists) for `volume` type + - location on the Host machine for `bind` type +- taget is location inside the container +- readonly: setting to `true` will set volume/bind mount to read-only mode +- invalid entries will be skipped + +Example valid EPA Feature entry: +- EPA Feature Key: `mount` +- EPA Feature Value: `volume,testvolume,/testvol,false;bind,/home/testdir,/testbind,true` diff --git a/doc/enhanced-platform-awareness/openness-sriov-multiple-interfaces.md b/doc/enhanced-platform-awareness/openness-sriov-multiple-interfaces.md index 06d196f1..c5447272 100644 --- a/doc/enhanced-platform-awareness/openness-sriov-multiple-interfaces.md +++ b/doc/enhanced-platform-awareness/openness-sriov-multiple-interfaces.md @@ -1,9 +1,9 @@ ```text SPDX-License-Identifier: Apache-2.0 -Copyright (c) 2019 Intel Corporation +Copyright (c) 2019-2020 Intel Corporation ``` -# Multiple Interface and PCIe SRIOV support in OpenNESS +# Multiple Interface and PCIe SRIOV support in OpenNESS - [Multiple Interface and PCIe SRIOV support in OpenNESS](#multiple-interface-and-pcie-sriov-support-in-openness) - [Overview](#overview) @@ -12,25 +12,28 @@ Copyright (c) 2019 Intel Corporation - [Overview of SR-IOV Device Plugin](#overview-of-sr-iov-device-plugin) - [Details - Multiple Interface and PCIe SRIOV support in OpenNESS](#details---multiple-interface-and-pcie-sriov-support-in-openness) - [Multus usage](#multus-usage) - - [SRIOV](#sriov) - - [Edgecontroller setup](#edgecontroller-setup) - - [Edgenode setup](#edgenode-setup) + - [SRIOV for Network-Edge](#sriov-for-network-edge) + - [Edge Node SRIOV interfaces configuration](#edge-node-sriov-interfaces-configuration) - [Usage](#usage) + - [SRIOV for On-Premises](#sriov-for-on-premises) + - [Edgenode Setup](#edgenode-setup) + - [Docker Container Deployment Usage](#docker-container-deployment-usage) + - [Virtual Machine Deployment Usage](#virtual-machine-deployment-usage) - [Reference](#reference) -## Overview +## Overview Edge deployments consist of both Network Functions and Applications. Cloud Native solutions like Kubernetes typically expose only one interface to the Application or Network function PODs. These interfaces are typically bridged interfaces. This means that Network Functions like Base station or Core network User plane functions and Applications like CDN etc. are limited by the default interface. -To address this we need to enable two key networking features: -1) Enable a Kubernetes like orchestration environment to provision more than one interface to the application and Network function PODs -2) Enable the allocation of dedicated hardware interfaces to application and Network Function PODs +To address this we need to enable two key networking features: +1) Enable a Kubernetes like orchestration environment to provision more than one interface to the application and Network function PODs +2) Enable the allocation of dedicated hardware interfaces to application and Network Function PODs -### Overview of Multus +### Overview of Multus To enable multiple interface support in PODs, OpenNESS Network Edge uses the Multus container network interface. 
Multus CNI is a container network interface (CNI) plugin for Kubernetes that enables the attachment of multiple network interfaces to pods. Typically, in Kubernetes each pod only has one network interface (apart from a loopback) – with Multus you can create a multi-homed pod that has multiple interfaces. This is accomplished by Multus acting as a “meta-plugin”, a CNI plugin that can call multiple other CNI plugins. Multus CNI follows the Kubernetes Network Custom Resource Definition De-facto Standard to provide a standardized method by which to specify the configurations for additional network interfaces. This standard is put forward by the Kubernetes Network Plumbing Working Group. -Below is an illustration of the network interfaces attached to a pod, as provisioned by the Multus CNI. The diagram shows the pod with three interfaces: eth0, net0 and net1. eth0 connects to the Kubernetes cluster network to connect with the Kubernetes server/services (e.g. kubernetes api-server, kubelet and so on). net0 and net1 are additional network attachments and connect to other networks by using other CNI plugins (e.g. vlan/vxlan/ptp). +Below is an illustration of the network interfaces attached to a pod, as provisioned by the Multus CNI. The diagram shows the pod with three interfaces: eth0, net0 and net1. eth0 connects to the Kubernetes cluster network to connect with the Kubernetes server/services (e.g. kubernetes api-server, kubelet and so on). net0 and net1 are additional network attachments and connect to other networks by using other CNI plugins (e.g. vlan/vxlan/ptp). ![Multus overview](multussriov-images/multus-pod-image.svg) @@ -55,14 +58,13 @@ _Figure - SR-IOV Device plugin_ ## Details - Multiple Interface and PCIe SRIOV support in OpenNESS -The Multus role is enabled by default in ansible(`ne_controller.yml`): - -``` - - role: multus +In Network Edge mode Multus CNI, which provides possibility for attaching multiple interfaces to the pod, is deployed automatically when `kubernetes_cnis` variable list (in the `group_vars/all.yml` file) contains at least two elements, e.g.: +```yaml +kubernetes_cnis: +- kubeovn +- sriov ``` ->NOTE: Multus is installed only for Network Edge mode. - ### Multus usage [Custom resource definition](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#custom-resources) (CRD) is used to define additional network that can be used by Multus. @@ -100,7 +102,12 @@ EOF name: samplepod annotations: k8s.v1.cni.cncf.io/networks: macvlan + spec: + containers: + - name: multitoolcont + image: praqma/network-multitool ``` + > NOTE: More networks can be added after a coma in the same annotation 4. To verify that the additional interface was configured run `ip a` in the deployed pod. The output should look similar to the following: ```bash @@ -118,23 +125,18 @@ EOF valid_lft forever preferred_lft forever ``` -### SRIOV - -#### Edgecontroller setup -To install the OpenNESS controller with SR-IOV support please uncomment `role: sriov/master` in `ne_controller.yml` of Ansible scripts. Please also remember, that `role: multus` has to be enabled as well. +### SRIOV for Network-Edge +To deploy the OpenNESS' Network Edge with SR-IOV `sriov` must be added to the `kubernetes_cnis` list in `group_vars/all.yml`: ```yaml -- role: sriov/master +kubernetes_cnis: +- kubeovn +- sriov ``` -#### Edgenode setup -To install the OpenNESS node with SR-IOV support please uncomment `role: sriov/worker` in `ne_node.yml` of Ansible scripts. 
- -```yaml -- role: sriov/worker -``` +#### Edge Node SRIOV interfaces configuration -For the installer to turn on the specified number of SR-IOV VFs for selected network interface of node, please provide that information in format `{interface_name: VF_NUM, ...}` in `sriov.network_interfaces` variable inside config files in `host_vars` ansible directory. +For the installer to turn on the specified number of SR-IOV VFs for selected network interface of node, please provide that information in format `{interface_name: VF_NUM, ...}` in `sriov.network_interfaces` variable inside config files in `host_vars` ansible directory. Due to the technical reasons, each node has to be configured separately. Copy the example file `host_vars/node1.yml` and then create a similar one for each node being deployed. Please also remember, that each node must be added to Ansible inventory file `inventory.ini`. @@ -177,42 +179,127 @@ spec: > Note: Users can create network with different CRD if they need to. 1. To create a POD with an attached SR-IOV device, add the network annotation to the POD definition and `request` access to the SR-IOV capable device (`intel.com/intel_sriov_netdevice`): + ```yaml + apiVersion: v1 + kind: Pod + metadata: + name: samplepod + annotations: + k8s.v1.cni.cncf.io/networks: sriov-openness + spec: + containers: + - name: samplecnt + image: centos/tools + resources: + requests: + intel.com/intel_sriov_netdevice: "1" + limits: + intel.com/intel_sriov_netdevice: "1" + command: ["sleep", "infinity"] + ``` + +2. To verify that the additional interface was configured run `ip a` in the deployed pod. The output should look similar to the following: + ```bash + 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 + link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 + inet 127.0.0.1/8 scope host lo + valid_lft forever preferred_lft forever + 41: net1: mtu 1500 qdisc mq state UP group default qlen 1000 + link/ether aa:37:23:b5:63:bc brd ff:ff:ff:ff:ff:ff + inet 192.168.2.2/24 brd 192.168.2.255 scope global net1 + valid_lft forever preferred_lft forever + 169: eth0@if170: mtu 1400 qdisc noqueue state UP group default + link/ether 0a:00:00:10:00:0b brd ff:ff:ff:ff:ff:ff link-netnsid 0 + inet 10.16.0.10/16 brd 10.16.255.255 scope global eth0 + valid_lft forever preferred_lft forever + ``` + +### SRIOV for On-Premises +Support for providing SR-IOV interfaces to containers and virtual machines is also available for OpenNESS On-Premises deployments. + +#### Edgenode Setup +To install the OpenNESS node with SR-IOV support, the option `role: sriov_device_init/onprem` must be uncommented in the `edgenode_group` in `on_premises.yml` of the ansible scripts. + ```yaml - apiVersion: v1 - kind: Pod - metadata: - name: samplepod - annotations: - k8s.v1.cni.cncf.io/networks: sriov-openness - spec: - containers: - - name: samplecnt - image: centos/tools - resources: - requests: - intel.com/intel_sriov_netdevice: "1" +- role: sriov_device_init/onprem ``` -2. To verify that the additional interface was configured run `ip a` in the deployed pod. The output should look similar to the following: +In order to configure the number of SR-IOV VFs on the node, the `network_interfaces` variable located under `sriov` in `host_vars/node01.yml` needs to be updated with the physical network interfaces on the node where the VFs should be created, along with the number of VFs to be created for each interface. The format this information should be provided in is `{interface_name: number_of_vfs, ...}`. 
+ +> Note: Remember that each node must be added to the ansible inventory file `inventory.ini` if they are to be deployed by the ansible scripts. + +To inform the installer of the number of VFs to configure for use with virtual machine deployments, the variable `vm_vf_ports` must be set, e.g. `vm_vf_ports: 4` tells the installer to configure four VFs for use with virtual machines. The installer will use this setting to assign that number of VFs to the kernel pci-stub driver so that they can be passed to virtual machines at deployment. + +When deploying containers in On-Premises mode, additional settings in the `host_vars/node01.yml` file are required so the installer can configure the VFs correctly. Each VF will be assigned to a Docker network configuration which will be created by the installer. To do this, the following variables must be configured: +- `interface_subnets`: This contains the subnet information for the Docker network that the VF will be assigned to. Must be provided in the format `[subnet_ip/subnet_mask,...]`. +- `interface_ips`: This contains the gateway IP address for the Docker network which will be assigned to the VF in the container. The address must be located within the subnet provided above. Must be provided in the format `[ip_address,...]`. +- `network_name`: This contains the name of the Docker network to be created by the installer. Must be in the format `[name_of_network,...]`. + +An example `host_vars/node01.yml` which enables 4 VFs across two interfaces with two VFs configured for virtual machines and two VFs configured for containers is shown below: +```yaml +sriov: + network_interfaces: {enp24s0f0: 2, enp24s0f1: 2} + interface_subnets: [192.168.1.0/24, 192.168.2.0/24] + interface_ips: [192.168.1.1, 192.168.2.1] + network_name: [test_network1, test_network2] + vm_vf_ports: 2 +``` + +> Note: When setting VFs for On-Premises mode the total number of VFs assigned to virtual machines and containers *must* match the total number of VFs requested, i.e. if requesting 8 VFs in total, the amount assigned to virtual machines and containers *must* also total to 8. + +#### Docker Container Deployment Usage + +To assign a VF to a Docker container at deployment, the following steps are required once the Edge Node has been set up by the ansible scripts with VFs created. + +1. On the Edge Node, run `docker network ls` to get the list of Docker networks available. These should include the Docker networks assigned to VFs by the installer. +```bash +# docker network ls +NETWORK ID NAME DRIVER SCOPE +74d9cb38603e bridge bridge local +57411c1ca4c6 host host local +b8910de9ad89 none null local +c227f1b184bc test_network1 macvlan local +3742881cf9ff test_network2 macvlan local +``` +> Note: if you want to check the network settings for a specific network, simply run `docker network inspect ` on the Edge Node. +2. Log into the controller UI and go to the Applications tab to create a new container application with the *EPA Feature Key* set to `sriov_nic` and the *EPA Feature Value* set to `network_name`. +![SR-IOV On-Premises Container Deployment](multussriov-images/sriov-onprem-container.png) +3. To verify that the additional interface was configured run `docker exec -it ip a s` on the deployed container. The output should be similar to the following, with the new interface labelled as eth0. 
```bash - 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 +1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever - 41: net1: mtu 1500 qdisc mq state UP group default qlen 1000 - link/ether aa:37:23:b5:63:bc brd ff:ff:ff:ff:ff:ff - inet 192.168.2.2/24 brd 192.168.2.255 scope global net1 - valid_lft forever preferred_lft forever - 169: eth0@if170: mtu 1400 qdisc noqueue state UP group default - link/ether 0a:00:00:10:00:0b brd ff:ff:ff:ff:ff:ff link-netnsid 0 - inet 10.16.0.10/16 brd 10.16.255.255 scope global eth0 - valid_lft forever preferred_lft forever +111: eth0@if50: mtu 1500 qdisc noqueue state LOWERLAYERDOWN group default + link/ether 02:42:c0:a8:01:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0 + inet 192.168.1.2/24 brd 192.168.1.255 scope global eth0 + valid_lft forever preferred_lft forever +112: vEth1: mtu 1500 qdisc noop state DOWN group default qlen 1000 + link/ether 9a:09:f3:84:f9:7b brd ff:ff:ff:ff:ff:ff +``` + +#### Virtual Machine Deployment Usage + +To assign a VF to a virtual machine at deployment, the following steps are required on the Edge Node that has been set up by the ansible scripts with VFs created. + +1. On the Edge Node, get the list of PCI address bound to the pci-stub kernel driver by running `ls /sys/bus/pci/drivers/pci-stub`. The output should look similar to the following: +```bash +# ls /sys/bus/pci/drivers/pci-stub +0000:18:02.0 0000:18:02.1 bind new_id remove_id uevent unbind +``` +2. Log into the controller UI and go to the Applications tab to create a new virtual machine application with the *EPA Feature Key* set to `sriov_nic` and the *EPA Feature Value* set to `pci_address`. +![SR-IOV On-Premises Virtual Machine Deployment](multussriov-images/sriov-onprem-vm.png) +3. To verify that the additional interface was configured run `virsh domiflist ` on the Edge Node. The output should be similar to the following, with the hostdev device for the VF interface shown. +```bash +Interface Type Source Model MAC +------------------------------------------------------- +- network default virtio 52:54:00:39:3d:80 +- vhostuser - virtio 52:54:00:90:44:ee +- hostdev - - 52:54:00:eb:f0:10 ``` -## Reference -For further details +## Reference +For further details - SR-IOV CNI: https://github.com/intel/sriov-cni - Multus: https://github.com/Intel-Corp/multus-cni - SR-IOV network device plugin: https://github.com/intel/intel-device-plugins-for-kubernetes - - diff --git a/doc/enhanced-platform-awareness/openness-tunable-exec.md b/doc/enhanced-platform-awareness/openness-tunable-exec.md new file mode 100644 index 00000000..c09dc8c7 --- /dev/null +++ b/doc/enhanced-platform-awareness/openness-tunable-exec.md @@ -0,0 +1,18 @@ +```text +SPDX-License-Identifier: Apache-2.0 +Copyright (c) 2020 Intel Corporation +``` + +# Support for overriding the startup command of a container in OpenNESS On-Prem mode + +## Overview + +This feature enables you to override the startup command for a container, thus removing the need to rebuild it just to make this change. +It also allows you to create multiple containers using the same image but with each container using a different startup command. + +## Usage +To take advantage of this feature, all you have to do is add a new 'EPA Feature Key' (on the application details page) called 'cmd', +with the value of the command you want to run instead of the default. 
OpenNESS will pass that information down to Docker and, assuming all goes well (for example, the command is correct and the path is valid), the next time you start this container your command will be run. + +For more details on the application onboarding (including other fields to set), please refer to +[Application Onboarding Document](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/on-premises-applications-onboarding.md) diff --git a/doc/enhanced-platform-awareness/openness_hddl.md b/doc/enhanced-platform-awareness/openness_hddl.md index 73c15a86..b6561e39 100644 --- a/doc/enhanced-platform-awareness/openness_hddl.md +++ b/doc/enhanced-platform-awareness/openness_hddl.md @@ -3,7 +3,7 @@ SPDX-License-Identifier: Apache-2.0 Copyright (c) 2019 Intel Corporation ``` -# Using Intel® Movidius™ Myriad™ X High Density Deep Learning (HDDL) solution in OpenNESS +# Using Intel® Movidius™ Myriad™ X High Density Deep Learning (HDDL) solution in OpenNESS - [Using Intel® Movidius™ Myriad™ X High Density Deep Learning (HDDL) solution in OpenNESS](#using-intel%c2%ae-movidius%e2%84%a2-myriad%e2%84%a2-x-high-density-deep-learning-hddl-solution-in-openness) - [HDDL Introduction](#hddl-introduction) @@ -16,7 +16,7 @@ Copyright (c) 2019 Intel Corporation - [Summary](#summary) - [Reference](#reference) -Deployment of AI based Machine Learning (ML) applications on the edge is becoming more prevalent. Supporting hardware resources that accelerate AI/ML applications on the edge is key to improve the capacity of edge cloud deployment. It is also important to use CPU instruction set to execute AI/ML tasks when load is less. This paper explains these topics in the context of inference as a edge workload. +Deployment of AI-based Machine Learning (ML) applications on the edge is becoming more prevalent. Supporting hardware resources that accelerate AI/ML applications on the edge is key to improving the capacity of edge cloud deployment. It is also important to use the CPU instruction set to execute AI/ML tasks when the load is low. This paper explains these topics in the context of inference as an edge workload. ## HDDL Introduction Intel® Movidius™ Myriad™ X High Density Deep Learning solution integrates multiple Myriad™ X SoCs in a PCIe add-in card form factor or a module form factor to build a scalable, high capacity deep learning solution. It provides hardware and software reference for customers. The following figure shows the HDDL-R concept. @@ -61,16 +61,48 @@ Further sections provide information on how to use the HDDL setup on OpenNESS On ### HDDL-R PCI card Ansible installation for OpenNESS OnPremise Edge To run the OpenNESS package with HDDL-R functionality the feature needs to be enabled on Edge Node. -To enable on the Edge Node set following in `onprem_node.yml` (Please note that the hddl role needs to be executed after openness/onprem/worker role): +To enable it on the Edge Node, set the following in `on_premises.yml` (please note that the hddl precheck and role need to be executed after the openness/onprem/worker role): ``` -- role: hddl +- include_tasks: ./roles/hddl/common/tasks/precheck.yml + +- role: hddl/onprem/worker ``` -Run setup script `deploy_onprem_node.sh`. +Run the setup script `deploy_onprem.sh nodes`. + +NOTE: For this release, HDDL supports only the default OS kernel (3.10.0-957.el7.x86_64), and the kernel_skip flag needs to be set to true before running the OpenNESS installation scripts.
(kernel_skip in the roles/machine_setup/custom_kernel/defaults/main.yml) +NOTE: The HDDL precheck will check the current role and playbooks variables whether they satisfy the HDDL running pre-conditions. + +To check HDDL service running status on the edgenode after deploy, docker logs should look like: +``` +docker ps +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +ca7e9bf9e570 hddlservice:1.0 "./start.sh" 20 hours ago Up 20 hours openvino-hddl-service +ea82cbc0d84a 004fddc9c299 "/usr/sbin/syslog-ng…" 21 hours ago Up 21 hours 601/tcp, 514/udp, 6514/tcp edgenode_syslog-ng_1 +3b4daaac1bc6 appliance:1.0 "sudo -E ./entrypoin…" 21 hours ago Up 21 hours 0.0.0.0:42101-42102->42101-42102/tcp, 192.168.122.1:42103->42103/tcp edgenode_appliance_1 +2262b4fa875b eaa:1.0 "sudo ./entrypoint_e…" 21 hours ago Up 21 hours 192.168.122.1:80->80/tcp, 192.168.122.1:443->443/tcp edgenode_eaa_1 +eedf4355ec98 edgednssvr:1.0 "sudo ./edgednssvr -…" 21 hours ago Up 19 hours 192.168.122.128:53->53/udp mec-app-edgednssvr +5c94f7203023 nts:1.0 "sudo -E ./entrypoin…" 21 hours ago Up 19 hours nts +docker logs --tail 20 ca7e9bf9e570 ++-------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+ +| status | WAIT_TASK | WAIT_TASK | WAIT_TASK | WAIT_TASK | WAIT_TASK | RUNNING | WAIT_TASK | WAIT_TASK | +| fps | 1.61 | 1.62 | 1.63 | 1.65 | 1.59 | 1.58 | 1.67 | 1.60 | +| curGraph | icv-ped...sd-v2.0 | icv-ped...sd-v2.0 | icv-ped...sd-v2.0 | icv-ped...sd-v2.0 | icv-ped...sd-v2.0 | icv-ped...sd-v2.0 | icv-ped...sd-v2.0 | icv-ped...sd-v2.0 | +| rPriority | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | +| loadTime | 20200330 05:34:34 | 20200330 05:34:35 | 20200330 05:34:35 | 20200330 05:34:35 | 20200330 05:34:35 | 20200330 05:34:35 | 20200330 05:34:35 | 20200330 05:34:35 | +| runTime | 00:00:41 | 00:00:41 | 00:00:41 | 00:00:40 | 00:00:40 | 00:00:40 | 00:00:40 | 00:00:40 | +| inference | 64 | 64 | 64 | 64 | 63 | 63 | 64 | 63 | +| prevGraph | | | | | | | | | +| loadTime | | | | | | | | | +| unloadTime | | | | | | | | | +| runTime | | | | | | | | | +| inference | | | | | | | | | +``` + ### Building Docker image with HDDL only or dynamic CPU/VPU usage -In order to enable HDDL or mixed CPU/VPU operation by the containerized OpenVINO application set the `OPENVINO_ACCL` environmental variable to `HDDL` or `CPU_HDDL` inside producer application Dockerfile, located in Edge Apps repo - [edgeapps/openvino/producer](https://github.com/open-ness/edgeapps/blob/master/openvino/producer/Dockerfile). Build the image using the ./build-image.sh located in same directory. Making the image accessible by Edge Controller via HTTPs server is out of scope of this documentation - please refer to [Application Onboard Document](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/on-premises-applications-onboarding.md). +In order to enable HDDL or mixed CPU/VPU operation by the containerized OpenVINO application set the `OPENVINO_ACCL` environmental variable to `HDDL` or `CPU_HDDL` inside producer application Dockerfile, located in Edge Apps repo - [edgeapps/applications/openvino/producer](https://github.com/open-ness/edgeapps/blob/master/applications/openvino/producer/Dockerfile). Build the image using the ./build-image.sh located in same directory. 
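For example, a minimal build sequence might look like the following sketch (the directory and script names are those referenced above; editing the `OPENVINO_ACCL` value in the Dockerfile is done beforehand):

```shell
# enter the producer application directory in the Edge Apps repo
cd edgeapps/applications/openvino/producer
# the Dockerfile here is where OPENVINO_ACCL (HDDL or CPU_HDDL) is set
./build-image.sh
```
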
Making the image accessible by Edge Controller via HTTPs server is out of scope of this documentation - please refer to [Application Onboard Document](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/on-premises-applications-onboarding.md). ### Deploying application with HDDL support @@ -79,6 +111,5 @@ Application onboarding is out of scope of this document - please refer to [Appli ## Summary Intel® Movidius™ Myriad™ X High Density Deep Learning solution integrates multiple Myriad™ X SoCs in a PCIe add-in card form factor or a module form factor to build a scalable, high capacity deep learning solution. OpenNESS provides a toolkit for customers to put together Deep learning solution at the edge. To take it further for efficient resource usage OpenNESS provides mechanism to use CPU or VPU depending on the load or any other criteria. -## Reference +## Reference - [HDDL-R: Mouser Mustang-V100](https://www.mouser.ie/datasheet/2/763/Mustang-V100_brochure-1526472.pdf) - diff --git a/doc/getting-started/network-edge/controller-edge-node-setup.md b/doc/getting-started/network-edge/controller-edge-node-setup.md index 245495cc..3100155e 100644 --- a/doc/getting-started/network-edge/controller-edge-node-setup.md +++ b/doc/getting-started/network-edge/controller-edge-node-setup.md @@ -1,6 +1,6 @@ ```text SPDX-License-Identifier: Apache-2.0 -Copyright (c) 2019 Intel Corporation +Copyright (c) 2019-2020 Intel Corporation ``` # OpenNESS Network Edge: Controller and Edge node setup @@ -10,9 +10,13 @@ Copyright (c) 2019 Intel Corporation - [Network Edge Playbooks](#network-edge-playbooks) - [Cleanup playbooks](#cleanup-playbooks) - [Supported EPA features](#supported-epa-features) + - [VM support for Network Edge](#vm-support-for-network-edge) - [Quickstart](#quickstart) - [Application on-boarding](#application-on-boarding) -- [Q&A](#qampa) + - [Kubernetes cluster networking plugins (Network Edge)](#kubernetes-cluster-networking-plugins-network-edge) + - [Selecting cluster networking plugins (CNI)](#selecting-cluster-networking-plugins-cni) + - [Adding additional interfaces to pods](#adding-additional-interfaces-to-pods) +- [Q&A](#qa) - [Configuring time](#configuring-time) - [Setup static hostname](#setup-static-hostname) - [Configuring inventory](#configuring-inventory) @@ -22,6 +26,7 @@ Copyright (c) 2019 Intel Corporation - [GitHub Token](#github-token) - [Customize tag/commit/sha to checkout](#customize-tagcommitsha-to-checkout) - [Installing Kubernetes Dashboard](#installing-kubernetes-dashboard) + - [Customization of kernel, grub parameters and tuned profile](#customization-of-kernel-grub-parameters-and-tuned-profile) # Preconditions @@ -45,11 +50,11 @@ For convenience, playbooks can be executed by running helper deployment scripts. > NOTE: All nodes provided in the inventory may reboot during the installation. -Convention for the scripts is: `action_mode[_group].sh`. Following scripts are available for Network Edge mode: - - `deploy_ne.sh` - sets up cluster (first controller, then nodes) - - `cleanup_ne.sh` - - `deploy_ne_controller.sh` - - `deploy_ne_node.sh` +Convention for the scripts is: `action_mode.sh [group]`. Following scripts are available for Network Edge mode: + - `deploy_ne.sh [ controller | nodes ]` + - `cleanup_ne.sh [ controller | nodes ] ` + +To run deploy of only Edge Nodes or Edge Controller use `deploy_ne.sh nodes` and `deploy_ne.sh controller` respectively. 
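For example, assuming the helper scripts are run from the root of the openness-experience-kits repository, a typical sequence might be:

```shell
./deploy_ne.sh controller   # set up the Edge Controller / Kubernetes master first
./deploy_ne.sh nodes        # then join the Edge Nodes to the cluster
# or run the full deployment (controller first, then nodes) in one go:
./deploy_ne.sh
```
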
> NOTE: Playbooks for Edge Controller/Kubernetes master must be executed before playbooks for Edge Nodes. @@ -57,7 +62,7 @@ Convention for the scripts is: `action_mode[_group].sh`. Following scripts are a ## Network Edge Playbooks -The `ne_controller.yml`, `ne_node.yml` and `ne_cleanup.yml` files contain playbooks for Network Edge mode. +The `network_edge.yml` and `network_edge_cleanup.yml` files contain playbooks for Network Edge mode. Playbooks can be customized by (un)commenting roles that are optional and by customizing variables where needed. ### Cleanup playbooks @@ -72,6 +77,9 @@ Note that there might be some leftovers created by installed software. ### Supported EPA features A number of enhanced platform capabilities/features are available in OpenNESS for Network Edge. For the full list of features supported see [supported-epa.md](https://github.com/open-ness/specs/blob/master/doc/getting-started/network-edge/supported-epa.md), the documents referenced in the list provide detailed description of the features and step by step instructions how to enable them. The user is advised to get familiarized with the features available before executing the deployment playbooks. +### VM support for Network Edge +Support for VM deployment on OpenNESS for Network Edge is available and enabled by default, certain configuration and pre-requisites may need to be fulfilled in order to use all capabilities. The user is advised to get familiarized with the VM support documentation before executing the deployment playbooks. Please see [openness-network-edge-vm-support.md](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/openness-network-edge-vm-support.md) for more information. + ### Quickstart The following is a complete set of actions that need to be completed to successfully set up OpenNESS cluster. @@ -86,6 +94,93 @@ The following is a complete set of actions that need to be completed to successf Please refer to [network-edge-applications-onboarding.md](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/network-edge-applications-onboarding.md) document for instructions on how to deploy edge applications for OpenNESS Network Edge. +## Kubernetes cluster networking plugins (Network Edge) + +Kubernetes uses 3rd party networking plugins to provide [cluster networking](https://kubernetes.io/docs/concepts/cluster-administration/networking/). +These plugins are based on [CNI (Container Network Interface) specification](https://github.com/containernetworking/cni). + +OpenNESS Experience Kits provides several ready-to-use Ansible roles deploying CNIs. +Following CNIs are currently supported: +* [kube-ovn](https://github.com/alauda/kube-ovn) + * **Only as primary CNI** + * CIDR: 10.16.0.0/16 +* [flannel](https://github.com/coreos/flannel) + * IPAM: host-local + * CIDR: 10.244.0.0/16 + * Network attachment definition: openness-flannel +* [calico](https://github.com/projectcalico/cni-plugin) + * IPAM: host-local + * CIDR: 10.243.0.0/16 + * Network attachment definition: openness-calico +* [SR-IOV](https://github.com/intel/sriov-cni) (cannot be used as a standalone or primary CNI - [sriov setup](doc/enhanced-platform-awareness/openness-sriov-multiple-interfaces.md)) + +Multiple CNIs can be requested to be set up for the cluster. To provide such functionality [Multus CNI](https://github.com/intel/multus-cni) is used. 
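As an illustration (the output below is only an example), once the cluster is deployed the secondary CNIs set up through Multus should be visible as network attachment definitions; the names depend on which CNIs were requested (e.g. `openness-calico`, `openness-flannel` as listed above):

```shell
kubectl get network-attachment-definitions
# NAME               AGE
# openness-calico    1h
# openness-flannel   1h
```
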
+ +> NOTE: For guide on how to add new CNI role to the OpenNESS Experience Kits refer to [the OpenNESS Experience Kits guide](../openness-experience-kits.md#adding-new-cni-plugins-for-kubernetes-network-edge) + +### Selecting cluster networking plugins (CNI) + +> Note: When using non-default CNI (default is kube-ovn) remember to add CNI's networks (CIDR for pods and other CIDRs used by the CNI) to `proxy_os_noproxy` in `group_vars/all.yml` + +In order to customize which CNI are to be deployed for the Network Edge cluster edit `kubernetes_cnis` variable in `group_vars/all.yml` file. +CNIs are applied in requested order. +By default `kube-ovn` and `calico` are set up (with `multus` in between): +```yaml +kubernetes_cnis: +- kubeovn +- calico +``` + +For example, to add SR-IOV just add another item on the list. That'll result in following CNIs being applied: `kube-ovn`, `multus`, `calico` and `sriov`. +```yaml +kubernetes_cnis: +- kubeovn +- calico +- sriov +``` + +### Adding additional interfaces to pods + +In order to add additional interface from secondary CNIs annotation is required. +Below is an example pod yaml file for a scenario with `kube-ovn` as a primary CNI and `calico` and `flannel` as additional CNIs. +Multus will create an interface named `calico` using network attachment definition `openness-calico` and interface `flannel` using network attachment definition `openness-flannel`: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: cni-test-pod + annotations: + k8s.v1.cni.cncf.io/networks: openness-calico@calico, openness-flannel@flannel +spec: + containers: + - name: cni-test-pod + image: docker.io/centos/tools:latest + command: + - /sbin/init +``` + +Below is an output (some lines were cut out for readability) of `ip a` command executed in the pod. +Following interfaces are available: `calico@if142`, `flannel@if143` and `eth0@if141` (`kubeovn`). +``` +# kubectl exec -ti cni-test-pod ip a + +1: lo: + inet 127.0.0.1/8 scope host lo + +2: tunl0@NONE: + link/ipip 0.0.0.0 brd 0.0.0.0 + +4: calico@if142: + inet 10.243.0.3/32 scope global calico + +6: flannel@if143: + inet 10.244.0.3/16 scope global flannel + +140: eth0@if141: + inet 10.16.0.5/16 brd 10.16.255.255 scope global eth0 +``` + # Q&A ## Configuring time @@ -258,7 +353,7 @@ proxy_os_ftp: "http://proxy.example.com:3128" proxy_os_noproxy: "localhost,127.0.0.1,10.244.0.0/24,10.96.0.0/12,192.168.0.1/24" ``` > NOTE: Ensure the no_proxy environment variable in your profile is set -> +> > export no_proxy="localhost,127.0.0.1,10.244.0.0/24,10.96.0.0/12,192.168.0.1/24" ## Setting Git @@ -341,7 +436,7 @@ Follow the below steps to get the Kubernetes dashboard installed after OpenNESS 7. Open the dashboard from the browser at `https://:/`, use the port that was noted in the previous steps -> **NOTE**: Firefox browser can be an alternative to Chrome and Internet Explorer in case the dashboard web page is blocked due to certification issue. +> **NOTE**: Firefox browser can be an alternative to Chrome and Internet Explorer in case the dashboard web page is blocked due to certification issue. 8. Capture the bearer token using this command @@ -354,7 +449,7 @@ Paste the Token in the browser to log in as shown in this diagram ![Dashboard Login](controller-edge-node-setup-images/dashboard-login.png) _Figure - Kubernetes Dashboard Login_ -9. 
Go to the OpenNESS Controller installation directory and edit the `.env` file with the dashboard link `INFRASTRUCTURE_UI_URL=https://:/#` in order to get it integrated with the OpenNESS controller UI (note the `#` symbole at the end of the URL) +9. Go to the OpenNESS Controller installation directory and edit the `.env` file with the dashboard link `INFRASTRUCTURE_UI_URL=https://:/` in order to get it integrated with the OpenNESS controller UI ```shell cd /opt/edgecontroller/ @@ -370,3 +465,8 @@ _Figure - Kubernetes Dashboard Login_ 11. The OpenNESS controller landing page is accessible at `http:///`. > **NOTE**: `LANDING_UI_URL` can be retrieved from `.env` file. + + +## Customization of kernel, grub parameters and tuned profile + +OpenNESS Experience Kits provides easy way to customize kernel version, grub parameters and tuned profile - for more information refer to [the OpenNESS Experience Kits guide](https://github.com/open-ness/specs/blob/master/doc/getting-started/openness-experience-kits.md). diff --git a/doc/getting-started/on-premises/controller-edge-node-setup-images/AddingInterfaceToNTS.png b/doc/getting-started/on-premises/controller-edge-node-setup-images/AddingInterfaceToNTS.png new file mode 100644 index 00000000..58612072 Binary files /dev/null and b/doc/getting-started/on-premises/controller-edge-node-setup-images/AddingInterfaceToNTS.png differ diff --git a/doc/getting-started/on-premises/controller-edge-node-setup-images/AddingInterfaceToNTS1.png b/doc/getting-started/on-premises/controller-edge-node-setup-images/AddingInterfaceToNTS1.png new file mode 100644 index 00000000..81d52a52 Binary files /dev/null and b/doc/getting-started/on-premises/controller-edge-node-setup-images/AddingInterfaceToNTS1.png differ diff --git a/doc/getting-started/on-premises/controller-edge-node-setup-images/AddingInterfaceToNTS2.png b/doc/getting-started/on-premises/controller-edge-node-setup-images/AddingInterfaceToNTS2.png new file mode 100644 index 00000000..ea753fe3 Binary files /dev/null and b/doc/getting-started/on-premises/controller-edge-node-setup-images/AddingInterfaceToNTS2.png differ diff --git a/doc/getting-started/on-premises/controller-edge-node-setup-images/CheckingNodeInterfaces.png b/doc/getting-started/on-premises/controller-edge-node-setup-images/CheckingNodeInterfaces.png new file mode 100644 index 00000000..7c975778 Binary files /dev/null and b/doc/getting-started/on-premises/controller-edge-node-setup-images/CheckingNodeInterfaces.png differ diff --git a/doc/getting-started/on-premises/controller-edge-node-setup-images/CheckingNodeInterfaces1.png b/doc/getting-started/on-premises/controller-edge-node-setup-images/CheckingNodeInterfaces1.png new file mode 100644 index 00000000..014466cc Binary files /dev/null and b/doc/getting-started/on-premises/controller-edge-node-setup-images/CheckingNodeInterfaces1.png differ diff --git a/doc/getting-started/on-premises/controller-edge-node-setup-images/Enroll1.png b/doc/getting-started/on-premises/controller-edge-node-setup-images/Enroll1.png new file mode 100644 index 00000000..4691a9a7 Binary files /dev/null and b/doc/getting-started/on-premises/controller-edge-node-setup-images/Enroll1.png differ diff --git a/doc/getting-started/on-premises/controller-edge-node-setup-images/Enroll2.png b/doc/getting-started/on-premises/controller-edge-node-setup-images/Enroll2.png new file mode 100644 index 00000000..8d254aac Binary files /dev/null and b/doc/getting-started/on-premises/controller-edge-node-setup-images/Enroll2.png differ diff 
--git a/doc/getting-started/on-premises/controller-edge-node-setup-images/Enroll3.png b/doc/getting-started/on-premises/controller-edge-node-setup-images/Enroll3.png new file mode 100644 index 00000000..84d71dab Binary files /dev/null and b/doc/getting-started/on-premises/controller-edge-node-setup-images/Enroll3.png differ diff --git a/doc/getting-started/on-premises/controller-edge-node-setup-images/LBP_rule.png b/doc/getting-started/on-premises/controller-edge-node-setup-images/LBP_rule.png new file mode 100644 index 00000000..b089c6eb Binary files /dev/null and b/doc/getting-started/on-premises/controller-edge-node-setup-images/LBP_rule.png differ diff --git a/doc/getting-started/on-premises/controller-edge-node-setup-images/LBP_set_up.png b/doc/getting-started/on-premises/controller-edge-node-setup-images/LBP_set_up.png new file mode 100644 index 00000000..ce105605 Binary files /dev/null and b/doc/getting-started/on-premises/controller-edge-node-setup-images/LBP_set_up.png differ diff --git a/doc/getting-started/on-premises/controller-edge-node-setup-images/StartingNTS.png b/doc/getting-started/on-premises/controller-edge-node-setup-images/StartingNTS.png new file mode 100644 index 00000000..8bc99773 Binary files /dev/null and b/doc/getting-started/on-premises/controller-edge-node-setup-images/StartingNTS.png differ diff --git a/doc/getting-started/on-premises/controller-edge-node-setup-images/StartingNTS2.png b/doc/getting-started/on-premises/controller-edge-node-setup-images/StartingNTS2.png new file mode 100644 index 00000000..dfc4f8c2 Binary files /dev/null and b/doc/getting-started/on-premises/controller-edge-node-setup-images/StartingNTS2.png differ diff --git a/doc/getting-started/on-premises/controller-edge-node-setup-images/controller_ui_landing.png b/doc/getting-started/on-premises/controller-edge-node-setup-images/controller_ui_landing.png index 505972a8..231b9c74 100644 Binary files a/doc/getting-started/on-premises/controller-edge-node-setup-images/controller_ui_landing.png and b/doc/getting-started/on-premises/controller-edge-node-setup-images/controller_ui_landing.png differ diff --git a/doc/getting-started/on-premises/controller-edge-node-setup-images/login.png b/doc/getting-started/on-premises/controller-edge-node-setup-images/login.png new file mode 100644 index 00000000..4a533002 Binary files /dev/null and b/doc/getting-started/on-premises/controller-edge-node-setup-images/login.png differ diff --git a/doc/getting-started/on-premises/controller-edge-node-setup.md b/doc/getting-started/on-premises/controller-edge-node-setup.md index 8a229da9..eff72bb8 100644 --- a/doc/getting-started/on-premises/controller-edge-node-setup.md +++ b/doc/getting-started/on-premises/controller-edge-node-setup.md @@ -1,6 +1,6 @@ ```text SPDX-License-Identifier: Apache-2.0 -Copyright (c) 2019 Intel Corporation +Copyright (c) 2019-2020 Intel Corporation ``` # OpenNESS OnPremises: Controller and Edge node setup @@ -11,18 +11,24 @@ Copyright (c) 2019 Intel Corporation - [Running playbooks](#running-playbooks) - [On Premise Playbooks](#on-premise-playbooks) - [Cleanup playbooks](#cleanup-playbooks) + - [Dataplanes](#dataplanes) - [Manual steps](#manual-steps) - [Enrolling Nodes with Controller](#enrolling-nodes-with-controller) - [First Login](#first-login) - - [Enrollment](#enrollment) + - [Manual enrollment](#manual-enrollment) - [NTS Configuration](#nts-configuration) - [Displaying Edge Node's Interfaces](#displaying-edge-nodes-interfaces) - - [Creating Traffic 
Policy](#creating-traffic-policy) - - [Adding Traffic Policy to Interface](#adding-traffic-policy-to-interface) - [Configuring Interface](#configuring-interface) - [Starting NTS](#starting-nts) -- [Q&A](#qampa) + - [Preparing set-up for Local Breakout Point (LBP)](#preparing-set-up-for-local-breakout-point-lbp) + - [Controller and Edge Node deployment](#controller-and-edge-node-deployment) + - [Network configuration](#network-configuration) + - [Configuration in Controller](#configuration-in-controller) + - [Verification](#verification) + - [Configuring DNS](#configuring-dns) +- [Q&A](#qa) - [Configuring time](#configuring-time) + - [Setup static hostname](#setup-static-hostname) - [Configuring inventory](#configuring-inventory) - [Exchanging SSH keys with hosts](#exchanging-ssh-keys-with-hosts) - [Setting proxy](#setting-proxy) @@ -30,6 +36,7 @@ Copyright (c) 2019 Intel Corporation - [GitHub Token](#github-token) - [Customize tag/commit/sha to checkout](#customize-tagcommitsha-to-checkout) - [Obtaining Edge Node's serial with command](#obtaining-edge-nodes-serial-with-command) + - [Customization of kernel, grub parameters and tuned profile](#customization-of-kernel-grub-parameters-and-tuned-profile) # Purpose @@ -41,6 +48,8 @@ In order to use the playbooks several preconditions must be fulfilled: - Time must be configured on all hosts (refer to [Configuring time](#configuring-time)) +- Hosts for the Edge Controller and Edge Nodes must have a proper and unique hostname (not `localhost`). This hostname must be specified in `/etc/hosts` (refer to [Setup static hostname](#Setup-static-hostname)). + - Inventory must be configured (refer to [Configuring inventory](#configuring-inventory)) - SSH keys must be exchanged with hosts (refer to [Exchanging SSH keys with hosts](#Exchanging-SSH-keys-with-hosts)) @@ -52,10 +61,11 @@ In order to use the playbooks several preconditions must be fulfilled: # Running playbooks For convenience, playbooks can be played by running helper deploy scripts. -Convention for the scripts is: `action_mode[_group].sh`. Following scripts are available for On Premise mode: - - `cleanup_onprem.sh` - - `deploy_onprem_controller.sh` - - `deploy_onprem_node.sh` +Convention for the scripts is: `action_mode.sh [group]`. Following scripts are available for On Premise mode: + - `cleanup_onprem.sh [ controller | nodes ]` + - `deploy_onprem.sh [ controller | nodes ]` + +To deploy only the Edge Nodes or only the Edge Controller, use `deploy_onprem.sh nodes` or `deploy_onprem.sh controller` respectively. > NOTE: All nodes provided in the inventory might get rebooted during the installation. @@ -65,7 +75,7 @@ Convention for the scripts is: `action_mode[_group].sh`. Following scripts are a ## On Premise Playbooks -`onprem_controller.yml`, `onprem_node.yml` and `onprem_cleanup.yml` contain playbooks for On Premise mode. Playbooks can be customized by (un)commenting roles that are optional and by customizing variables where needed. +`on_premises.yml` and `on_premises_cleanup.yml` contain playbooks for On Premise mode. Playbooks can be customized by (un)commenting roles that are optional and by customizing variables where needed. ### Cleanup playbooks @@ -76,6 +86,17 @@ For example, when installing Docker - RPM repository is added and Docker install Note that there might be some leftovers created by installed software.
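For example, to revert and then re-run the installation on the Edge Nodes only, the helper scripts described above can be used:

```shell
./cleanup_onprem.sh nodes   # remove changes applied to the Edge Nodes
./deploy_onprem.sh nodes    # redeploy the Edge Nodes
```
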
+### Dataplanes +OpenNESS' On Premises delivers two dataplanes to be used: +* NTS (default) +* OVS/OVN + +In order to use OVS/OVN instead of NTS, `onprem_dataplane` variable must be edited in `group_vars/all.yml` file before running the deployment scripts: +```yaml +onprem_dataplane: "ovncni" +``` +> NOTE: When deploying virtual machine with OVNCNI dataplane, `/etc/resolv.conf` must be edited to use `192.168.122.1` nameserver. + ## Manual steps > *Ansible Controller* is a machine with [openness-experience-kits](https://github.com/open-ness/openness-experience-kits) repo and it's used to configure *Edge Controller* and *Edge Nodes*. Please be careful not to confuse them. @@ -95,7 +116,7 @@ Prerequisites (*Ansible Controller*): The following steps need to be done for successful login: 1. Open internet browser on *Ansible Controller*. -2. Type in `http://:3000` in address bar. +2. Type in `http:///` in address bar. `LANDING_UI_URL` can be retrieved from `.env` file. 3. Click on "INFRASTRUCTURE MANAGER" button. ![Landing page](controller-edge-node-setup-images/controller_ui_landing.png) @@ -103,9 +124,11 @@ The following steps need to be done for successful login: 4. Enter you username and password (default username: admin) (the password to be used is the password provided during Controller bring-up with the **cce_admin_password** in *openness-experience-kits/group_vars/all.yml*). 5. Click on "SIGN IN" button. -![Login screen](../../applications-onboard/howto-images/login.png) +![Login screen](controller-edge-node-setup-images/login.png) + +#### Manual enrollment -#### Enrollment +> NOTE: Following steps are now part of Ansible automated platform setup. Manual steps are left for reference. In order for the Controller and Edge Node to work together the Edge Node needs to enroll with the Controller. The Edge Node will continuously try to connect to the controller until its serial key is recognized by the Controller. @@ -119,17 +142,17 @@ In order to enroll and add new Edge Node to be managed by the Controller the fol 2. Navigate to 'NODES' tab. 3. Click on 'ADD EDGE NODE' button. -![Add Edge Node 1](../../applications-onboard/howto-images/Enroll1.png) +![Add Edge Node 1](controller-edge-node-setup-images/Enroll1.png) 4. Enter previously obtained Edge Node Serial Key into 'Serial*' field (Step 1). 5. Enter the name and location of Edge Node. 6. Press 'ADD EDGE NODE'. -![Add Edge Node 2](../../applications-onboard/howto-images/Enroll2.png) +![Add Edge Node 2](controller-edge-node-setup-images/Enroll2.png) 7. Check that your Edge Node is visible under 'List of Edge Nodes'. -![Add Edge Node 3](../../applications-onboard/howto-images/Enroll3.png) +![Add Edge Node 3](controller-edge-node-setup-images/Enroll3.png) ### NTS Configuration OpenNESS data-plane interface configuration. @@ -144,72 +167,12 @@ To check the interfaces available on the Edge Node execute following steps: 2. Find you Edge Node on the list. 3. Click 'EDIT'. -![Check Edge Node Interfaces 1](../../applications-onboard/howto-images/CheckingNodeInterfaces.png) +![Check Edge Node Interfaces 1](controller-edge-node-setup-images/CheckingNodeInterfaces.png) 5. Navigate to 'INTERFACES' tab. 6. Available interfaces are listed. -![Check Edge Node Interfaces 2](../../applications-onboard/howto-images/CheckingNodeInterfaces1.png) - -#### Creating Traffic Policy -Prerequisites: -- Enrollment phase completed successfully. -- User is logged in to UI. - -The steps to create a sample traffic policy are as follows: -1. 
From UI navigate to 'TRAFFIC POLICIES' tab. -2. Click 'ADD POLICY'. - -> Note: This specific traffic policy is only an example. - -![Creating Traffic Policy 1](../../applications-onboard/howto-images/CreatingTrafficPolicy.png) - -3. Give policy a name. -4. Click 'ADD' next to 'Traffic Rules*' field. -5. Fill in following fields: - - Description: "Sample Description" - - Priority: 99 - - Source -> IP Filter -> IP Address: 1.1.1.1 - - Source -> IP Filter -> Mask: 24 - - Source -> IP Filter -> Begin Port: 10 - - Source -> IP Filter -> End Port: 20 - - Source -> IP Filter -> Protocol: all - - Target -> Description: "Sample Description" - - Target -> Action: accept -6. Click on "CREATE". - -![Creating Traffic Policy 2](../../applications-onboard/howto-images/CreatingTrafficPolicy2.png) - -After creating Traffic Policy it will be visible under 'List of Traffic Policies' in 'TRAFFIC POLICIES' tab. - -![Creating Traffic Policy 3](../../applications-onboard/howto-images/CreatingTrafficPolicy3.png) - -#### Adding Traffic Policy to Interface -Prerequisites: -- Enrollment phase completed successfully. -- User is logged in to UI. -- Traffic Policy Created. - -To add a previously created traffic policy to an interface available on Edge Node the following steps need to be completed: -1. From UI navigate to "NODES" tab. -2. Find Edge Node on the 'List Of Edge Nodes'. -3. Click "EDIT". - -> Note: This step is instructional only, users can decide if they need/want a traffic policy designated for their interface, or if they desire traffic policy designated per application instead. - -![Adding Traffic Policy To Interface 1](../../applications-onboard/howto-images/AddingTrafficPolicyToInterface1.png) - -4. Navigate to "INTERFACES" tab. -5. Find desired interface which will be used to add traffic policy. -6. Click 'ADD' under 'Traffic Policy' column for that interface. -7. A window titled 'Assign Traffic Policy to interface' will pop-up. Select a previously created traffic policy. -8. Click on 'ASSIGN'. - -![Adding Traffic Policy To Interface 2](../../applications-onboard/howto-images/AddingTrafficPolicyToInterface2.png) - -On success the user is able to see 'EDIT' and 'REMOVE POLICY' buttons under 'Traffic Policy' column for desired interface. These buttons can be respectively used for editing and removing traffic rule policy on that interface. - -![Adding Traffic Policy To Interface 3](../../applications-onboard/howto-images/AddingTrafficPolicyToInterface3.png) +![Check Edge Node Interfaces 2](controller-edge-node-setup-images/CheckingNodeInterfaces1.png) #### Configuring Interface Prerequisites: @@ -223,7 +186,9 @@ In order to configure interface available on the Edge Node for the NTS the follo | WARNING: do not modify a NIC which is used for Internet connection! | | --- | -![Configuring Interface 1](../../applications-onboard/howto-images/AddingInterfaceToNTS.png) +> Note: For adding traffic policy to interface refere to following section in on-premises-applications-onboarding.md: [Instruction to create Traffic Policy and assign it to Interface](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/on-premises-applications-onboarding.md#instruction-to-create-traffic-policy-and-assign-it-to-interface) + +![Configuring Interface 1](controller-edge-node-setup-images/AddingInterfaceToNTS.png) 1. A window will pop-up titled "Edit Interface". 
The following fields need to be set: - Driver: userspace @@ -232,11 +197,11 @@ In order to configure interface available on the Edge Node for the NTS the follo - In case of two interfaces being configured, one for 'Upstream' another for 'Downstream', the fallback interface for 'Upstream' is the 'Downstream' interface and vice versa. 2. Click 'SAVE'. -![Configuring Interface 2](../../applications-onboard/howto-images/AddingInterfaceToNTS1.png) +![Configuring Interface 2](controller-edge-node-setup-images/AddingInterfaceToNTS1.png) 3. The interface's 'Driver' and 'Type' columns will reflect changes made. -![Configuring Interface 3](../../applications-onboard/howto-images/AddingInterfaceToNTS2.png) +![Configuring Interface 3](controller-edge-node-setup-images/AddingInterfaceToNTS2.png) #### Starting NTS Prerequisite: @@ -253,16 +218,162 @@ Once the interfaces are configured accordingly the following steps need to be do 1. From UI navigate to 'INTERFACES' tab of the Edge Node. 2. Click 'COMMIT CHANGES' -![Starting NTS 1](../../applications-onboard/howto-images/StartingNTS.png) +![Starting NTS 1](controller-edge-node-setup-images/StartingNTS.png) 3. NTS will start -![Starting NTS 2](../../applications-onboard/howto-images/StartingNTS2.png) +![Starting NTS 2](controller-edge-node-setup-images/StartingNTS2.png) 4. Make sure that the **nts** and **edgednssvr** containers are running on an *Edge Node* machine: ![Starting NTS 3](controller-edge-node-setup-images/StartingNTS3.png) +#### Preparing set-up for Local Breakout Point (LBP) + +It is possible in a set up with NTS used as dataplane to prepare following LBP configuration +- LBP set-up requirements: five machines are used as following set-up elements + - Controller + - Edge Node + - UE + - LBP + - EPC +- Edge Node is connected via 10GB cards to UE, LBP, EPC +- network configuration of all elements is given on the diagram: + + ![LBP set-up ](controller-edge-node-setup-images/LBP_set_up.png "LBP set-up") + +- configuration of interfaces for each server is done in Controller +- ARP configuration is done on servers +- IP addresses 10.103.104.X are addresses of machines from local subnet used for building set-up +- IP addresses 192.168.100.X are addresses given for LBP test purpose + +##### Controller and Edge Node deployment + +Build and deploy Controller and Edge Node using ansible scripts and instructions in this document. + +##### Network configuration + +Find interface with following commands +- `ifconfig` +or +- `ip a` + +Command `ethtool -p ` can be used to identify port (port on physical machine will start to blink and it will be possible to verify if it is valid port). + +Use following commands to configure network on servers in set up +- UE + - `ifconfig 192.168.100.1/24 up` + - `arp -s 192.168.100.2 ` (e.g. `arp -s 192.168.100.2 3c:fd:fe:a7:c0:eb`) +- LBP + - `ifconfig 192.168.100.2/24 up` + - `arp -s 192.168.100.1 ` (e.g. `arp -s 192.168.100.1 90:e2:ba:ac:6a:d5`) +- EPC + - `ifconfig 192.168.100.3/24 up` + + +Alternatively to using `ifconfig` configuration can be done with `ip` command: +`ip address add
dev ` (e.g.`ip address add 192.168.100.1/24 dev enp23s0f0`) + +##### Configuration in Controller + +Add traffic policy with rule for LBP: + +- Name: LBP rule +- Priority: 99 +- IP filter: + - IP address: 192.168.100.2 + - Mask: 32 + - Protocol: all +- Target: + - Action: accept +- MAC Modifier + - MAC address: 3c:fd:fe:a7:c0:eb + +![LBP rule adding](controller-edge-node-setup-images/LBP_rule.png) + +Update interfaces: +- edit interfaces to UE, LBP, EPC as shown on diagram (Interface set-up) +- add Traffic policy (LBP rule) to LBP interface (0000:88.00.2) + +After configuring NTS send PING (it is needed by NTS) from UE to EPC (`ping 192.168.100.3`). + +##### Verification + +1. NES client + - SSH to UE machine and ping LBP (`ping 192.168.100.2`) + - SSH to Edge Node server + - Set following environment variable: `export NES_SERVER_CONF=/var/lib/appliance/nts/nts.cfg` + - Run NES client: `/internal/nts/client/build/nes_client` + - connect to NTS using command `connect` + - use command `route list` to verify traffic rule for LBP + - use command `show all` to verify packet flow (received and sent packet should increase) + - use command `quit` to exit (use `help` for information on available commands) + + ```shell + # connect + Connection is established. + # route list + +-------+------------+--------------------+--------------------+--------------------+--------------------+-------------+-------------+--------+----------------------+ + | ID | PRIO | ENB IP | EPC IP | UE IP | SRV IP | UE PORT | SRV PORT | ENCAP | Destination | + +-------+------------+--------------------+--------------------+--------------------+--------------------+-------------+-------------+--------+----------------------+ + | 0 | 99 | n/a | n/a | 192.168.100.2/32 | * | * | * | IP | 3c:fd:fe:a7:c0:eb | + | 1 | 99 | n/a | n/a | * | 192.168.100.2/32 | * | * | IP | 3c:fd:fe:a7:c0:eb | + | 2 | 5 | n/a | n/a | * | 53.53.53.53/32 | * | * | IP | 8a:68:41:df:fa:d5 | + | 3 | 5 | n/a | n/a | 53.53.53.53/32 | * | * | * | IP | 8a:68:41:df:fa:d5 | + | 4 | 5 | * | * | * | 53.53.53.53/32 | * | * | GTPU | 8a:68:41:df:fa:d5 | + | 5 | 5 | * | * | 53.53.53.53/32 | * | * | * | GTPU | 8a:68:41:df:fa:d5 | + +-------+------------+--------------------+--------------------+--------------------+--------------------+-------------+-------------+--------+----------------------+ + # show all + ID: Name: Received: Sent: Dropped(TX full): Dropped(HW): IP Fragmented(Forwarded): + 0 0000:88:00.1 1303 pkts 776 pkts 0 pkts 0 pkts 0 pkts + (3c:fd:fe:b2:44:b1) 127432 bytes 75820 bytes 0 bytes + 1 0000:88:00.2 1261 pkts 1261 pkts 0 pkts 0 pkts 0 pkts + (3c:fd:fe:b2:44:b2) 123578 bytes 123578 bytes 0 bytes + 2 0000:88:00.3 40 pkts 42 pkts 0 pkts 0 pkts 0 pkts + (3c:fd:fe:b2:44:b3) 3692 bytes 3854 bytes 0 bytes + 3 KNI 0 pkts 0 pkts 0 pkts 0 pkts 0 pkts + (not registered) 0 bytes 0 bytes 0 bytes + # show all + ID: Name: Received: Sent: Dropped(TX full): Dropped(HW): IP Fragmented(Forwarded): + 0 0000:88:00.1 1304 pkts 777 pkts 0 pkts 0 pkts 0 pkts + (3c:fd:fe:b2:44:b1) 127530 bytes 75918 bytes 0 bytes + 1 0000:88:00.2 1262 pkts 1262 pkts 0 pkts 0 pkts 0 pkts + (3c:fd:fe:b2:44:b2) 123676 bytes 123676 bytes 0 bytes + 2 0000:88:00.3 40 pkts 42 pkts 0 pkts 0 pkts 0 pkts + (3c:fd:fe:b2:44:b3) 3692 bytes 3854 bytes 0 bytes + 3 KNI 0 pkts 0 pkts 0 pkts 0 pkts 0 pkts + (not registered) 0 bytes 0 bytes 0 bytes + ``` + +2. Tcpdump + +- SSH to UE machine and ping LBP (`ping 192.168.100.2`) +- SSH to LBP server. 
+ - Run tcpdump with name of interface connected to Edge Node, verify data flow, use Ctrl+c to stop. + + ```shell + # tcpdump -i enp23s0f3 + tcpdump: verbose output suppressed, use -v or -vv for full protocol decode + listening on enp23s0f3, link-type EN10MB (Ethernet), capture size 262144 bytes + 10:29:14.678250 IP 192.168.100.1 > twesolox-mobl.ger.corp.intel.com: ICMP echo request, id 9249, seq 320, length 64 + 10:29:14.678296 IP twesolox-mobl.ger.corp.intel.com > 192.168.100.1: ICMP echo reply, id 9249, seq 320, length 64 + 10:29:15.678240 IP 192.168.100.1 > twesolox-mobl.ger.corp.intel.com: ICMP echo request, id 9249, seq 321, length 64 + 10:29:15.678283 IP twesolox-mobl.ger.corp.intel.com > 192.168.100.1: ICMP echo reply, id 9249, seq 321, length 64 + 10:29:16.678269 IP 192.168.100.1 > twesolox-mobl.ger.corp.intel.com: ICMP echo request, id 9249, seq 322, length 64 + 10:29:16.678312 IP twesolox-mobl.ger.corp.intel.com > 192.168.100.1: ICMP echo reply, id 9249, seq 322, length 64 + 10:29:17.678241 IP 192.168.100.1 > twesolox-mobl.ger.corp.intel.com: ICMP echo request, id 9249, seq 323, length 64 + 10:29:17.678285 IP twesolox-mobl.ger.corp.intel.com > 192.168.100.1: ICMP echo reply, id 9249, seq 323, length 64 + 10:29:18.678215 IP 192.168.100.1 > twesolox-mobl.ger.corp.intel.com: ICMP echo request, id 9249, seq 324, length 64 + 10:29:18.678258 IP twesolox-mobl.ger.corp.intel.com > 192.168.100.1: ICMP echo reply, id 9249, seq 324, length 64 + ^C + 10 packets captured + 10 packets received by filter + 0 packets dropped by kernel + ``` + +### Configuring DNS +* [Instructions for configuring DNS](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/openness-edgedns.md) + # Q&A ## Configuring time @@ -314,6 +425,22 @@ Update interval : 130.2 seconds Leap status : Normal ``` +## Setup static hostname + +In order to set some custom static hostname a command can be used: + +``` +hostnamectl set-hostname +``` + +Make sure that static hostname provided is proper and unique. +The hostname provided needs to be defined in /etc/hosts as well: + +``` +127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 +::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 +``` + ## Configuring inventory In order to execute playbooks, `inventory.ini` must be configure to include specific hosts to run the playbooks on. @@ -444,3 +571,7 @@ Alternatively to reading from /opt/edgenode/verification_key.txt Edge Node's ser ```bash openssl pkey -pubout -in /var/lib/appliance/certs/key.pem -inform pem -outform der | md5sum | xxd -r -p | openssl enc -a | tr -d '=' | tr '/+' '_-' ``` + +## Customization of kernel, grub parameters and tuned profile + +OpenNESS Experience Kits provides easy way to customize kernel version, grub parameters and tuned profile - for more information refer to [the OpenNESS Experience Kits guide](https://github.com/open-ness/specs/blob/master/doc/getting-started/openness-experience-kits.md). diff --git a/doc/getting-started/on-premises/offline-deployment.md b/doc/getting-started/on-premises/offline-deployment.md index 9cdaf89c..d7802fe4 100644 --- a/doc/getting-started/on-premises/offline-deployment.md +++ b/doc/getting-started/on-premises/offline-deployment.md @@ -147,7 +147,7 @@ In extracted offline package, in `openness-experience-kits` folder, you will fin 9. Update `inventory.ini` file and enter IP address of this controller machine machine in `[all]` section. Do not use localhost or 127.0.0.1. 10. 
Run deploy script: ``` - ./deploy_onprem_controller.sh + ./deploy_onprem.sh controller ``` This operation may take 40 minutes or more.
Controller functionality will be installed on this server as defined in `group_vars/all.yml` using its IP address obtained from `[all]` section.
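For illustration only, a controller entry in the `[all]` section could look like the generic Ansible inventory sketch below (the host alias and IP address are placeholders, and the exact variable set used by the kits may differ):

```
[all]
controller ansible_host=192.0.2.10 ansible_user=root
```
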
@@ -178,12 +178,12 @@ Steps to follow on each node from `[edgenode_group]`: ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 YOUR_NEW_HOSTNAME ``` -And finally, run `deploy_onprem_node.sh` script from controller: +And finally, run `deploy_onprem.sh nodes` script from controller: 1. Log into controller as root user. 2. Go to extracted `openness_experience_kits` folder: 3. Run deploy script for nodes: ``` - ./deploy_onprem_node.sh + ./deploy_onprem.sh nodes ``` Note: This operation may take one hour or more, depending on the amount of chosen hosts in inventory.
Node functionality will be installed on chosen list of hosts.
@@ -219,7 +219,7 @@ Offline prepare and restore of the HDDL image is not enabled by default due to i In order to prepare and later restore the HDDL image, `- role: offline/prepare/hddl` line must be uncommented in `offline_prepare.yml` playbook before running `prepare_offline_package.sh` script. This will result in OpenVINO (tm) toolkit being downloaded and the intermediate HDDL Docker image being built. -During offline package restoration HDDL role must be enabled in order to finish the building. It is done by uncommenting `- role: hddl` line in `onprem_node.yml` before `deploy_onprem_node.sh` is executed. +During offline package restoration HDDL role must be enabled in order to finish the building. It is done by uncommenting `- role: hddl` line in `on_premises.yml` before `deploy_onprem.sh nodes` is executed. # Troubleshooting Q:
diff --git a/doc/getting-started/on-premises/supported-epa.md b/doc/getting-started/on-premises/supported-epa.md index e8f31823..7a0613ec 100644 --- a/doc/getting-started/on-premises/supported-epa.md +++ b/doc/getting-started/on-premises/supported-epa.md @@ -1,6 +1,6 @@ ```text SPDX-License-Identifier: Apache-2.0 -Copyright (c) 2019 Intel Corporation +Copyright (c) 2019-2020 Intel Corporation ``` # OpenNESS OnPremises - Enhanced Platform Awareness Features supported @@ -19,4 +19,8 @@ Enhanced Platform Awareness features are supported in OnPremises using EVA APIs. ## Features Following are the EPA features supported in OpenNESS OnPremises Edge 1. [openness_hddl.md: Using Intel® Movidius™ Myriad™ X High Density Deep Learning (HDDL) solution in OpenNESS](https://github.com/open-ness/specs/blob/master/doc/enhanced-platform-awareness/openness_hddl.md) - +2. [openness-environment-variables.md: Support for setting Environment Variables in OpenNESS](https://github.com/open-ness/specs/blob/master/doc/enhanced-platform-awareness/openness-environment-variables.md) +3. [openness-dedicated-core.md: Dedicated CPU core allocation support for Edge Applications and Network Functions](https://github.com/open-ness/specs/blob/master/doc/enhanced-platform-awareness/openness-dedicated-core.md) +4. [openness-tunable-exec.md: Tunable executable command in OpenNESS On-Prem mode](https://github.com/open-ness/specs/blob/master/doc/enhanced-platform-awareness/openness-tunable-exec.md) +5. [openness-sriov-mulitple-interfaces.md: Multiple Interface and PCIe SRIOV support in OpenNESS](https://github.com/open-ness/specs/blob/master/doc/enhanced-platform-awareness/openness-sriov-multiple-interfaces.md) +6. [openness-port-forward.md: Support for setting up port forwarding of a container in OpenNESS On-Prem mode](https://github.com/open-ness/specs/blob/master/doc/enhanced-platform-awareness/openness-port-forward.md) diff --git a/doc/getting-started/openness-experience-kits.md b/doc/getting-started/openness-experience-kits.md index 20829f38..ec1ff0a0 100644 --- a/doc/getting-started/openness-experience-kits.md +++ b/doc/getting-started/openness-experience-kits.md @@ -7,8 +7,21 @@ Copyright (c) 2019 Intel Corporation - [OpenNESS Experience Kits](#openness-experience-kits) - [Purpose](#purpose) - - [OpenNess setup playbooks](#openness-setup-playbooks) + - [OpenNESS setup playbooks](#openness-setup-playbooks) - [Playbooks for OpenNESS offline deployment](#playbooks-for-openness-offline-deployment) + - [Customizing kernel, grub parameters, and tuned profile & variables per host.](#customizing-kernel-grub-parameters-and-tuned-profile--variables-per-host) + - [Default values](#default-values) + - [Use newer realtime kernel (3.10.0-1062)](#use-newer-realtime-kernel-3100-1062) + - [Use newer non-rt kernel (3.10.0-1062)](#use-newer-non-rt-kernel-3100-1062) + - [Use tuned 2.9](#use-tuned-29) + - [Default kernel and configure tuned](#default-kernel-and-configure-tuned) + - [Change amount of hugepages](#change-amount-of-hugepages) + - [Change the size of hugepages](#change-the-size-of-hugepages) + - [Change amount & size of hugepages](#change-amount--size-of-hugepages) + - [Remove Intel IOMMU from grub params](#remove-intel-iommu-from-grub-params) + - [Add custom GRUB parameter](#add-custom-grub-parameter) + - [Configure OVS-DPDK in kube-ovn](#configure-ovs-dpdk-in-kube-ovn) + - [Adding new CNI plugins for Kubernetes (Network Edge)](#adding-new-cni-plugins-for-kubernetes-network-edge) ## Purpose @@ -17,7 +30,7 @@ OpenNESS Experience 
Kits repository contains set of Ansible playbooks for: - easy setup of OpenNESS in **Network Edge** and **On-Premise** modes - preparation and deployment of the **offline package** (i.e. package for OpenNESS offline deployment in On-Premise mode) -## OpenNess setup playbooks +## OpenNESS setup playbooks @@ -27,3 +40,174 @@ When Edge Controller and Edge Node machines have no internet access and the netw - playbooks that download all the packages and dependencies to the local folder and create offline package archive file; - playbooks that unpack the archive file and install packages. + +## Customizing kernel, grub parameters, and tuned profile & variables per host. + +>NOTE: Following per-host customizations in host_vars files are not currently supported in Offline On-Premises mode. + +OpenNESS Experience Kits allows user to customize kernel, grub parameters, and tuned profile by leveraging Ansible's feature of host_vars. + +OpenNESS Experience Kits contains `host_vars/` directory that can be used to place a YAML file (`nodes-inventory-name.yml`, e.g. `node01.yml`). The file would contain variables that would override roles' default values. + +To override the default value, place the variable's name and new value in the host's vars file, e.g. contents of `host_vars/node01.yml` that would result in skipping kernel customization on that node: + +```yaml +kernel_skip: true +``` + +Below are several common customization scenarios. + + +### Default values +Here are several default values: + +```yaml +# --- machine_setup/custom_kernel +kernel_skip: false # use this variable to disable custom kernel installation for host + +kernel_repo_url: http://linuxsoft.cern.ch/cern/centos/7/rt/CentOS-RT.repo +kernel_repo_key: http://linuxsoft.cern.ch/cern/centos/7/os/x86_64/RPM-GPG-KEY-cern +kernel_package: kernel-rt-kvm +kernel_devel_package: kernel-rt-devel +kernel_version: 3.10.0-957.21.3.rt56.935.el7.x86_64 + +kernel_dependencies_urls: [] +kernel_dependencies_packages: [] + + +# --- machine_setup/grub +hugepage_size: "2M" # Or 1G +hugepage_amount: "5000" + +default_grub_params: "hugepagesz={{ hugepage_size }} hugepages={{ hugepage_amount }} intel_iommu=on iommu=pt" +additional_grub_params: "" + + +# --- machine_setup/configure_tuned +tuned_skip: false # use this variable to skip tuned profile configuration for host +tuned_packages: +- http://linuxsoft.cern.ch/cern/centos/7/updates/x86_64/Packages/tuned-2.11.0-5.el7_7.1.noarch.rpm +- http://linuxsoft.cern.ch/scientific/7x/x86_64/updates/fastbugs/tuned-profiles-realtime-2.11.0-5.el7_7.1.noarch.rpm +tuned_profile: realtime +tuned_vars: | + isolated_cores=2-3 + nohz=on + nohz_full=2-3 +``` + +### Use newer realtime kernel (3.10.0-1062) +By default, `kernel-rt-kvm-3.10.0-957.21.3.rt56.935.el7.x86_64` from `http://linuxsoft.cern.ch/cern/centos/$releasever/rt/$basearch/` repository is installed. + +In order to use another version, e.g. `kernel-rt-kvm-3.10.0-1062.9.1.rt56.1033.el7.x86_64` just create host_var file for the host with content: +```yaml +kernel_version: 3.10.0-1062.9.1.rt56.1033.el7.x86_64 +``` + +### Use newer non-rt kernel (3.10.0-1062) +The OEK installs realtime kernel by default from specific repository. However, the non-rt kernel are present in the official CentOS repository. 
+Therefore, in order to use a newer non-rt kernel, the following overrides must be applied: +```yaml +kernel_repo_url: "" # package is in default repository, no need to add new repository +kernel_package: kernel # instead of kernel-rt-kvm +kernel_devel_package: kernel-devel # instead of kernel-rt-devel +kernel_version: 3.10.0-1062.el7.x86_64 + +dpdk_kernel_devel: "" # kernel-devel is in the repository, no need for url with RPM + +# Since we're not using the rt kernel, we don't need tuned-profiles-realtime but want to keep tuned 2.11 +tuned_packages: +- http://linuxsoft.cern.ch/cern/centos/7/updates/x86_64/Packages/tuned-2.11.0-5.el7_7.1.noarch.rpm +tuned_profile: balanced +tuned_vars: "" +``` + +### Use tuned 2.9 +```yaml +tuned_packages: +- tuned-2.9.0-1.el7fdp +- tuned-profiles-realtime-2.9.0-1.el7fdp +``` + +### Default kernel and configure tuned +```yaml +kernel_skip: true # skip kernel customization altogether + +# update tuned to 2.11, but don't install tuned-profiles-realtime since we're not using rt kernel +tuned_packages: +- http://linuxsoft.cern.ch/cern/centos/7/updates/x86_64/Packages/tuned-2.11.0-5.el7_7.1.noarch.rpm +tuned_profile: balanced +tuned_vars: "" +``` + +### Change amount of hugepages +```yaml +hugepage_amount: "1000" # default is 5000 +``` + +### Change the size of hugepages +```yaml +hugepage_size: "1G" # default is 2M +``` + +### Change amount & size of hugepages +```yaml +hugepage_amount: "10" # default is 5000 +hugepage_size: "1G" # default is 2M +``` + +### Remove Intel IOMMU from grub params +```yaml +default_grub_params: "hugepagesz={{ hugepage_size }} hugepages={{ hugepage_amount }}" +``` + +### Add custom GRUB parameter +```yaml +additional_grub_params: "debug" +``` + +### Configure OVS-DPDK in kube-ovn +By default, OVS-DPDK is enabled. To disable it, set the flag: +```yaml +ovs_dpdk: false +``` + +>NOTE: This flag should be set in `roles/kubernetes/cni/kubeovn/common/defaults/main.yml` or added to `group_vars/all.yml`. + +Additionally, hugepages in the OVS pod can be adjusted once the default hugepage settings are changed. +```yaml +ovs_dpdk_hugepage_size: "2Mi" +ovs_dpdk_hugepages: "1Gi" +``` +OVS pod limits are configured by: +```yaml +ovs_dpdk_resources_requests: "1Gi" +ovs_dpdk_resources_limits: "1Gi" +``` +CPU settings can be configured using: +```yaml +ovs_dpdk_pmd_cpu_mask: "0x4" +ovs_dpdk_lcore_mask: "0x2" +``` + +## Adding new CNI plugins for Kubernetes (Network Edge) + +* The role that handles CNI deployment must be placed in the `roles/kubernetes/cni/` directory, e.g. `roles/kubernetes/cni/kube-ovn/`. +* Subroles for master and worker (if needed) should be placed in `master/` and `worker/` dirs, e.g. `roles/kubernetes/cni/kube-ovn/{master,worker}`. +* If there is a part common to both master and worker, an additional subrole can be created - `common` (e.g. `roles/kubernetes/cni/sriov/common`).
+
+## Adding new CNI plugins for Kubernetes (Network Edge)
+
+* The role that handles the CNI deployment must be placed in the `roles/kubernetes/cni/` directory, e.g. `roles/kubernetes/cni/kube-ovn/`.
+* Subroles for master and worker (if needed) should be placed in the `master/` and `worker/` directories, e.g. `roles/kubernetes/cni/kube-ovn/{master,worker}`.
+* If some tasks are common to both master and worker, an additional subrole can be created - `common` (e.g. `roles/kubernetes/cni/sriov/common`).
+Note that automatic inclusion of the `common` role should be handled by Ansible mechanisms (e.g. usage of meta's `dependencies` or the `include_role` module).
+* The name of the main role must be added to the `available_kubernetes_cnis` variable in `roles/kubernetes/cni/defaults/main.yml`.
+* If there are additional requirements that should be checked before running the playbook (so that it does not fail in the middle of execution), they can be placed in the `roles/kubernetes/cni/tasks/precheck.yml` file, which is included as a pre_task in plays for both the Edge Controller and the Edge Node.
+The basic prechecks currently executed are:
+  * Check if any CNI is requested (i.e. `kubernetes_cni` is not empty),
+  * Check if `sriov` is not requested as primary (first on the list) or standalone (the only CNI on the list),
+  * Check if `kubeovn` is requested as the primary CNI (first on the list),
+  * Check if the requested CNI is available (i.e. that no CNI is requested that is not present in the `available_kubernetes_cnis` list).
+* CNI roles should be as self-contained as possible (CNI-specific tasks should not be present in `kubernetes/{master,worker,common}` or `openness/network_edge/{master,worker}` unless absolutely necessary).
+* If a CNI needs a custom OpenNESS service (like the Interface Service in the case of `kube-ovn`), then it can be added to `openness/network_edge/{master,worker}`.
+  It is best if such tasks are contained in a separate task file (like `roles/openness/network_edge/master/tasks/kube-ovn.yml`) and executed only if the CNI is requested, for example:
+  ```yaml
+  - name: deploy interface service for kube-ovn
+    include_tasks: kube-ovn.yml
+    when: "'kubeovn' in kubernetes_cnis"
+  ```
+* If the CNI is to be used as an additional CNI (with Multus), a Network Attachment Definition must be supplied ([refer to the Multus docs for more info](https://github.com/intel/multus-cni/blob/master/doc/quickstart.md#storing-a-configuration-as-a-custom-resource)).
diff --git a/doc/ran/openness-ran.png b/doc/ran/openness-ran.png
new file mode 100644
index 00000000..1f46c47e
Binary files /dev/null and b/doc/ran/openness-ran.png differ
diff --git a/doc/ran/openness_ran.md b/doc/ran/openness_ran.md
new file mode 100644
index 00000000..304130bb
--- /dev/null
+++ b/doc/ran/openness_ran.md
@@ -0,0 +1,312 @@
+SPDX-License-Identifier: Apache-2.0
+Copyright © 2020 Intel Corporation
+
+- [Introduction](#introduction)
+- [Building the FlexRAN image](#building-the-flexran-image)
+- [FlexRAN hardware platform configuration](#flexran-hardware-platform-configuration)
+  - [BIOS](#bios)
+  - [Host kernel command line](#host-kernel-command-line)
+- [Deploying and Running the FlexRAN pod](#deploying-and-running-the-flexran-pod)
+- [Setting up 1588 - PTP based Time synchronization](#setting-up-1588---ptp-based-time-synchronization)
+  - [Setting up PTP](#setting-up-ptp)
+  - [Grandmaster clock](#grandmaster-clock)
+  - [Slave clock](#slave-clock)
+- [BIOS configuration](#bios-configuration)
+- [References](#references)
+
+# Introduction
+
+The Radio Access Network (RAN) is the edge of the wireless network. 4G and 5G base stations form the key network function for the edge deployment. In OpenNESS, Intel FlexRAN is used as a reference 4G and 5G base station for 4G and 5G end-to-end testing.
+
+FlexRAN offers high-density baseband pooling that can run on a distributed Telco Cloud to provide a smart indoor coverage solution and a next-generation fronthaul architecture. This flexible 4G and 5G platform provides the open platform ‘smarts’ for both connectivity and new applications at the edge of the network, along with the developer tools to create these new services. FlexRAN running on the Telco Cloud provides low-latency compute, storage, and network offload from the edge, thus saving network bandwidth.
+
+Intel FlexRAN 5GNR Reference PHY is a baseband PHY reference design for a 4G and 5G base station, using the Intel® Xeon® processor family with Intel Architecture. The 5GNR Reference PHY consists of a library of C-callable functions that are validated on Intel® Xeon® Broadwell / Skylake / Cascade Lake / Ice Lake platforms and demonstrates the capabilities of the software running different 5GNR L1 features. The functionality of these library functions is defined by the relevant sections in [3GPP TS 38.211, 212, 213, 214 and 215]. The performance of the Intel 5GNR Reference PHY meets the requirements defined by the base station conformance tests in [3GPP TS 38.141]. This library of Intel functions will be used by Intel partners and end customers as a foundation for their own product development. The Reference PHY is integrated with a third-party L2 and L3 to complete the base station pipeline.
+
+The diagram below shows FlexRAN DU (Real-time L1 and L2) deployed on the OpenNESS platform with the necessary microservices and Kubernetes enhancements required for real-time workload deployment.
+
+![FlexRAN DU deployed on OpenNESS](openness-ran.png)
+
+This document aims to provide the steps involved in deploying FlexRAN 5G (gNb) on the OpenNESS platform.
+
+> Note: This document covers both FlexRAN 4G and 5G. All the steps mentioned in this document use 5G for reference. Please refer to the FlexRAN 4G document for the minor updates needed in order to build, deploy, and test FlexRAN 4G.
+
+# Building the FlexRAN image
+
+This section explains the steps involved in building the FlexRAN image. Only L1 and the L2-stub are part of these steps. Real-time L2 (MAC and RLC) and non-real-time L2 and L3 are out of scope, as they are part of the third-party component.
+
+1. Please contact your Intel representative to obtain the package.
+2. Untar the FlexRAN package.
+3. Set the required environment variables:
+   ```
+   export RTE_SDK=$localPath/dpdk-19.11
+   export RTE_TARGET=x86_64-native-linuxapp-icc
+   export WIRELESS_SDK_TARGET_ISA=avx512
+   export RPE_DIR=${flexranPath}/libs/ferrybridge
+   export ROE_DIR=${flexranPath}/libs/roe
+   export XRAN_DIR=${localPath}/flexran_xran
+   export WIRELESS_SDK_TOOLCHAIN=icc
+   export DIR_WIRELESS_SDK_ROOT=${localPath}/wireless_sdk
+   export DIR_WIRELESS_FW=${localPath}/wireless_convergence_l1/framework
+   export DIR_WIRELESS_TEST_4G=${localPath}/flexran_l1_4g_test
+   export DIR_WIRELESS_TEST_5G=${localPath}/flexran_l1_5g_test
+   export SDK_BUILD=build-${WIRELESS_SDK_TARGET_ISA}-icc
+   export DIR_WIRELESS_SDK=${DIR_WIRELESS_SDK_ROOT}/${SDK_BUILD}
+   export FLEXRAN_SDK=${DIR_WIRELESS_SDK}/install
+   export DIR_WIRELESS_TABLE_5G=${flexranPath}/bin/nr5g/gnb/l1/table
+   ```
+   > Note: these environment variable paths have to be updated according to your installation and file/directory names.
+4. Build L1, the WLS interface between L1 and L2, and the L2-stub (testmac):
+   `./flexran_build.sh -r 5gnr_sub6 -m testmac -m wls -m l1app -b -c`
+5. Once the build has completed successfully, copy the required binary files to the folder where the Docker image is built. The list of binary files that are used is documented in the [dockerfile](https://github.com/open-ness/edgeapps/blob/master/network-functions/ran/5G/flexRAN-gnb/Dockerfile):
+   - ICC, IPP, MPI, and MKL runtimes
+   - DPDK build target directory
+   - FlexRAN test vectors (optional)
+   - FlexRAN L1 and testmac (L2-stub) binaries
+   - FlexRAN SDK modules
+   - FlexRAN WLS shared library
+   - FlexRAN CPA libraries
+6. `cd` to the folder where the Docker image is built and start the build: `docker build -t flexran-va:1.0 .`
+
+By the end of step 6 the FlexRAN Docker image is created. This image is copied to the edge node where FlexRAN will be deployed, which is installed with OpenNESS Network Edge and all the required EPA features, including the Intel PAC N3000 FPGA. Please refer to the [Using FPGA in OpenNESS: Programming, Resource Allocation and Configuration](https://github.com/open-ness/specs/blob/master/doc/enhanced-platform-awareness/openness-fpga.md) document for further details on setting up the Intel PAC N3000 vRAN FPGA.
+
+# FlexRAN hardware platform configuration
+## BIOS
+FlexRAN on Skylake and Cascade Lake requires a special BIOS configuration, which involves disabling C-states and enabling Config TDP Level 2. Please refer to the [BIOS configuration](#bios-configuration) section in this document.
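+
+A quick way to sanity-check these BIOS settings from the booted host is to inspect the C-state and frequency configuration exposed by the kernel. This is only an illustrative sketch; it assumes the `kernel-tools` package (providing `cpupower`) is installed, and the expected values depend on the platform:
+
+```shell
+# Deepest C-state the intel_idle driver is allowed to use (expected to be limited when C-states are disabled)
+cat /sys/module/intel_idle/parameters/max_cstate
+
+# Per-core idle state and driver summary
+cpupower idle-info
+
+# Current, min, and max core frequencies reported by the kernel
+lscpu | grep -i "mhz"
+```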
+ +## Host kernel command line + +``` +usbcore.autosuspend=-1 selinux=0 enforcing=0 nmi_watchdog=0 softlockup_panic=0 audit=0 intel_pstate=disable cgroup_memory=1 cgroup_enable=memory mce=off idle=poll isolcpus=1-23,25-47 rcu_nocbs=1-23,25-47 kthread_cpus=0,24 irqaffinity=0,24 nohz_full=1-23,25-47 hugepagesz=1G hugepages=50 default_hugepagesz=1G intel_iommu=on iommu=pt pci=realloc pci=assign-busses +``` + +Host kernel version - 3.10.0-1062.12.1.rt56.1042.el7.x86_64 + +Instructions on how to configure kernel command line in OpenNESS can be found in [OpenNESS getting started documentation](https://github.com/open-ness/specs/blob/master/doc/getting-started/openness-experience-kits.md#customizing-kernel-grub-parameters-and-tuned-profile--variables-per-host) + +# Deploying and Running the FlexRAN pod + +1. Deploy the OpenNESS cluster with [SRIOV for FPGA enabled](https://github.com/open-ness/specs/blob/master/doc/enhanced-platform-awareness/openness-fpga.md#fpga-fec-ansible-installation-for-openness-network-edge) . +2. Ensure there are no FlexRAN pods and FPGA configuration pods are not deployed using `kubectl get pods` +3. Ensure all the EPA microservice and Enhancements (part of OpenNESS play book) are deployed `kubectl get po --all-namespaces` + ```yaml + NAMESPACE NAME READY STATUS RESTARTS AGE + kube-ovn kube-ovn-cni-8x5hc 1/1 Running 17 7d19h + kube-ovn kube-ovn-cni-p6v6s 1/1 Running 1 7d19h + kube-ovn kube-ovn-controller-578786b499-28lvh 1/1 Running 1 7d19h + kube-ovn kube-ovn-controller-578786b499-d8d2t 1/1 Running 3 5d19h + kube-ovn ovn-central-5f456db89f-l2gps 1/1 Running 0 7d19h + kube-ovn ovs-ovn-56c4c 1/1 Running 17 7d19h + kube-ovn ovs-ovn-fm279 1/1 Running 5 7d19h + kube-system coredns-6955765f44-2lqm7 1/1 Running 0 7d19h + kube-system coredns-6955765f44-bpk8q 1/1 Running 0 7d19h + kube-system etcd-silpixa00394960 1/1 Running 0 7d19h + kube-system kube-apiserver-silpixa00394960 1/1 Running 0 7d19h + kube-system kube-controller-manager-silpixa00394960 1/1 Running 0 7d19h + kube-system kube-multus-ds-amd64-bpq6s 1/1 Running 17 7d18h + kube-system kube-multus-ds-amd64-jf8ft 1/1 Running 0 7d19h + kube-system kube-proxy-2rh9c 1/1 Running 0 7d19h + kube-system kube-proxy-7jvqg 1/1 Running 17 7d19h + kube-system kube-scheduler-silpixa00394960 1/1 Running 0 7d19h + kube-system kube-sriov-cni-ds-amd64-crn2h 1/1 Running 17 7d19h + kube-system kube-sriov-cni-ds-amd64-j4jnt 1/1 Running 0 7d19h + kube-system kube-sriov-device-plugin-amd64-vtghv 1/1 Running 0 7d19h + kube-system kube-sriov-device-plugin-amd64-w4px7 1/1 Running 0 4d21h + openness eaa-78b89b4757-7phb8 1/1 Running 3 5d19h + openness edgedns-mdvds 1/1 Running 16 7d18h + openness interfaceservice-tkn6s 1/1 Running 16 7d18h + openness nfd-master-82dhc 1/1 Running 0 7d19h + openness nfd-worker-h4jlt 1/1 Running 37 7d19h + openness syslog-master-894hs 1/1 Running 0 7d19h + openness syslog-ng-n7zfm 1/1 Running 16 7d19h + ``` +4. Deploy the Kubernetes job to program the [FPGA](https://github.com/open-ness/specs/blob/master/doc/enhanced-platform-awareness/openness-fpga.md) +5. Deploy the Kubernetes job to configure the [BIOS](https://github.com/open-ness/specs/blob/master/doc/enhanced-platform-awareness/openness-bios.md) (note: only works on select Intel development platforms) +6. Deploy the Kubernetes job to configure the Intel PAC N3000 FPGA `kubectl create -f /opt/edgecontroller/fpga/fpga-config-job.yaml` +7. 
Deploy the FlexRAN Kubernetes pod `kubectl create -f flexran-va.yaml` - more info [here](https://github.com/open-ness/edgeapps/blob/master/network-functions/ran/5G/flexRAN-gnb/flexran-va.yaml) +8. `exec` into FlexRAN pod `kubectl exec -it flexran -- /bin/bash` +9. Find the PCI Bus function device ID of the FPGA VF assigned to the pod: + + ```shell + printenv | grep FEC + ``` + +10. Edit `phycfg_timer.xml` used for configuration of L1 application with the PCI Bus function device ID from previous step in order to offload FEC to this device: + + ```xml + + 1 + + 0000:1d:00.1 + ``` +11. Once in the FlexRAN pod L1 and test-L2 (testmac) can be started. + +# Setting up 1588 - PTP based Time synchronization +This section provides an overview of setting up PTP based Time synchronization in a cloud Native Kubernetes/docker environment. For FlexRAN specific xRAN Front haul tests and configuration please refer to the xRAN specific document in the reference section. + +> Note: PTP based Time synchronization method described here is applicable only for containers. For VMs methods based on Virtual PTP needs to be applied and is not covered in this document. + +## Setting up PTP +In the environment that needs to be synchronized install linuxptp package. It provides ptp4l and phc2sys applications. PTP setup needs Grandmaster clock and Slave clock setup. Slave clock will be synchronized to the Grandmaster clock. At first, Grandmaster clock will be configured. To use Hardware Time Stamps supported NIC is required. To check if NIC is supporting Hardware Time Stamps run ethtool. Similar output should appear: + +```shell +# ethtool -T eno4 +Time stamping parameters for eno4: +Capabilities: + hardware-transmit (SOF_TIMESTAMPING_TX_HARDWARE) + software-transmit (SOF_TIMESTAMPING_TX_SOFTWARE) + hardware-receive (SOF_TIMESTAMPING_RX_HARDWARE) + software-receive (SOF_TIMESTAMPING_RX_SOFTWARE) + software-system-clock (SOF_TIMESTAMPING_SOFTWARE) + hardware-raw-clock (SOF_TIMESTAMPING_RAW_HARDWARE) +PTP Hardware Clock: 3 +Hardware Transmit Timestamp Modes: + off (HWTSTAMP_TX_OFF) + on (HWTSTAMP_TX_ON) +Hardware Receive Filter Modes: + none (HWTSTAMP_FILTER_NONE) + ptpv1-l4-sync (HWTSTAMP_FILTER_PTP_V1_L4_SYNC) + ptpv1-l4-delay-req (HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ) + ptpv2-event (HWTSTAMP_FILTER_PTP_V2_EVENT) +``` + +Time in containers is the same as on the host machine so it is enough to synchronize the host to Grandmaster clock. + +PTP requires a few Kernel configuration options to be enabled: +- CONFIG_PPS +- CONFIG_NETWORK_PHY_TIMESTAMPING +- CONFIG_PTP_1588_CLOCK + +## Grandmaster clock +This is an optional step if you already have a grandmaster. The below steps explain how to setup a linux system to behave like ptp GM. + +On Grandmaster clock side take a look at `/etc/sysconfig/ptp4l` file. It is `ptp4l` daemon configuration file where starting options will be provided. Its content should look like this: +```shell +OPTIONS=”-f /etc/ptp4l.conf -i ” +``` +Where `` is interface name that will be used for time stamping and `/etc/ptp4l.conf` is a configuration file for `ptp4l` instance. + +To determine a Grandmaster clock PTP protocol is using BMC algorithm and it is not obvious which clock will be chosen as master. However, user can set the timer that is preferable to be master clock. It can be changed in `/etc/ptp4l.conf`. Set `priority1 property` to `127`. + +After that start ptp4l service. 
+ +```shell +service ptp4l start +``` + +Output from the service can be checked at `/var/log/messages` and for master clock should be like: + +```shell +Mar 16 17:08:57 localhost ptp4l: ptp4l[23627.304]: selected /dev/ptp2 as PTP clock +Mar 16 17:08:57 localhost ptp4l: [23627.304] selected /dev/ptp2 as PTP clock +Mar 16 17:08:57 localhost ptp4l: [23627.306] port 1: INITIALIZING to LISTENING on INITIALIZE +Mar 16 17:08:57 localhost ptp4l: ptp4l[23627.306]: port 1: INITIALIZING to LISTENING on INITIALIZE +Mar 16 17:08:57 localhost ptp4l: [23627.307] port 0: INITIALIZING to LISTENING on INITIALIZE +Mar 16 17:08:57 localhost ptp4l: ptp4l[23627.307]: port 0: INITIALIZING to LISTENING on INITIALIZE +Mar 16 17:08:57 localhost ptp4l: [23627.308] port 1: link up +Mar 16 17:08:57 localhost ptp4l: ptp4l[23627.308]: port 1: link up +Mar 16 17:09:03 localhost ptp4l: [23633.664] port 1: LISTENING to MASTER on ANNOUNCE_RECEIPT_TIMEOUT_EXPIRES +Mar 16 17:09:03 localhost ptp4l: ptp4l[23633.664]: port 1: LISTENING to MASTER on ANNOUNCE_RECEIPT_TIMEOUT_EXPIRES +Mar 16 17:09:03 localhost ptp4l: ptp4l[23633.664]: selected best master clock 001e67.fffe.d2f206 +Mar 16 17:09:03 localhost ptp4l: ptp4l[23633.665]: assuming the grand master role +Mar 16 17:09:03 localhost ptp4l: [23633.664] selected best master clock 001e67.fffe.d2f206 +Mar 16 17:09:03 localhost ptp4l: [23633.665] assuming the grand master role +``` + +The next step is to synchronize PHC timer to the system time. To do that `phc2sys` daemon will be used. Firstly edit configuration file at `/etc/sysconfig/phc2sys`. + +```shell +OPTIONS="-c -s CLOCK_REALTIME -w" +``` + +Replace `` with interface name. Start phc2sys service. +```shell +service phc2sys start +``` +Logs can be viewed at `/var/log/messages` and look like: + +```shell +phc2sys[3656456.969]: Waiting for ptp4l... +phc2sys[3656457.970]: sys offset -6875996252 s0 freq -22725 delay 1555 +phc2sys[3656458.970]: sys offset -6875996391 s1 freq -22864 delay 1542 +phc2sys[3656459.970]: sys offset -52 s2 freq -22916 delay 1536 +phc2sys[3656460.970]: sys offset -29 s2 freq -22909 delay 1548 +phc2sys[3656461.971]: sys offset -25 s2 freq -22913 delay 1549 +``` + +## Slave clock +Slave clock configuration will be the same as for Grandmaster clock except `phc2sys` options and priority1 property for `ptp4l`. For slave clock priority1 property in `/etc/ptp4l.conf` should stay with default value (128). Run `ptp4l` service. To keep system time synchronized to PHC time change `phc2sys` options in `/etc/sysconfig/phc2sys` to: + +```shell +OPTIONS=”phc2sys -s -w" +``` +Replace `` with interface name. Logs will be available at `/var/log/messages`. + +```shell +phc2sys[28917.406]: Waiting for ptp4l... +phc2sys[28918.406]: phc offset -42928591735 s0 freq +24545 delay 1046 +phc2sys[28919.407]: phc offset -42928611122 s1 freq +5162 delay 955 +phc2sys[28920.407]: phc offset 308 s2 freq +5470 delay 947 +phc2sys[28921.407]: phc offset 408 s2 freq +5662 delay 947 +phc2sys[28922.407]: phc offset 394 s2 freq +5771 delay 947 +``` +Since this moment both clocks should be synchronized. Any docker container running in a pod is using the same clock as host so its clock will be synchronized as well. + + +# BIOS configuration + +Below is the subset of the BIOS configuration. It contains the list of BIOS features that are recommended to be configured for FlexRAN DU deployment. 
+ +```shell +[BIOS::Advanced] + +[BIOS::Advanced::Processor Configuration] +Intel(R) Hyper-Threading Tech=Enabled +Active Processor Cores=All +Intel(R) Virtualization Technology=Enabled +MLC Streamer=Enabled +MLC Spatial Prefetcher=Enabled +DCU Data Prefetcher=Enabled +DCU Instruction Prefetcher=Enabled +LLC Prefetch=Enabled + +[BIOS::Advanced::Power & Performance] +CPU Power and Performance Policy=Performance +Workload Configuration=I/O Sensitive + +[BIOS::Advanced::Power & Performance::CPU C State Control] +Package C-State=C0/C1 state +C1E=Disabled ; Can be enabled Power savings +Processor C6=Disabled + +[BIOS::Advanced::Power & Performance::Hardware P States] +Hardware P-States=Disabled + +[BIOS::Advanced::Power & Performance::CPU P State Control] +Enhanced Intel SpeedStep(R) Tech=Enabled +Intel Configurable TDP=Enabled +Configurable TDP Level=Level 2 +Intel(R) Turbo Boost Technology=Enabled +Energy Efficient Turbo=Disabled + +[BIOS::Advanced::Power & Performance::Uncore Power Management] +Uncore Frequency Scaling=Enabled +Performance P-limit=Enabled + +[BIOS::Advanced::Memory Configuration::Memory RAS and Performance Configuration] +NUMA Optimized=Enabled +Sub_NUMA Cluster=Disabled + +[BIOS::Advanced::PCI Configuration] +Memory Mapped I/O above 4 GB=Enabled +SR-IOV Support=Enabled +``` + +# References +- FlexRAN Reference Solution Software Release Notes - Document ID:575822 +- FlexRAN Reference Solution LTE eNB L2-L1 API Specification - Document ID:571742 +- FlexRAN 5G New Radio Reference Solution L2-L1 API Specification - Document ID:603575 +- FlexRAN 4G Reference Solution L1 User Guide - Document ID:570228 +- FlexRAN 5G NR Reference Solution L1 User Guide - Document ID:603576 +- FlexRAN Reference Solution L1 XML Configuration User Guide - Document ID:571741 +- FlexRAN 5G New Radio FPGA User Guide - Document ID:603578 +- FlexRAN Reference Solution xRAN FrontHaul SAS - Document ID:611268 \ No newline at end of file diff --git a/openness_releasenotes.md b/openness_releasenotes.md index 4fe66008..89d97c87 100644 --- a/openness_releasenotes.md +++ b/openness_releasenotes.md @@ -23,6 +23,7 @@ This document provides high level system features, issues and limitations inform 2. OpenNESS - 19.06.01 3. OpenNESS - 19.09 4. OpenNESS - 19.12 +5. OpenNESS - 20.03 # Features for Release 1. OpenNESS - 19.06 @@ -82,7 +83,7 @@ This document provides high level system features, issues and limitations inform - Open Visual Cloud Smart City Application on OpenNESS - Solution Overview - Using Intel® Movidius™ Myriad™ X High Density Deep Learning (HDDL) solution in OpenNESS - OpenNESS How-to Guide (update) -3. OpenNESS – 19.12 +3. OpenNESS – 19.12 - Hardware - Support for Cascade lake 6252N - Support for Intel FPGA PAC N3000 @@ -118,15 +119,50 @@ This document provides high level system features, issues and limitations inform - Completely reorganized documentation structure for ease of navigation - 5G NR Edge Cloud deployment Whitepaper - EPA application note for each of the features +4. OpenNESS – 20.03 + - OVN/OVS-DPDK support for dataplane + - Network Edge: Support for kube-ovn CNI with OVS or OVS-DPDK as dataplane. Support for Calico as CNI. 
+ - OnPremises Edge: Support for OVS-DPDK CNI with OVS-DPDK as dataplane supporting application deployed in containers or VMs + - Support for VM deployments on Kubernetes mode + - Kubevirt based VM deployment support + - EPA Support for SRIOV Virtual function allocation to the VMs deployed using K8s + - EPA support - OnPremises + - Support for dedicated core allocation to application running as VMs or Containers + - Support for dedicated SRIOV VF allocation to application running in VM or containers + - Support for system resource allocation into the application running as container + - Mount point for shared storage + - Pass environment variables + - Configure the port rules + - 5G Components + - PFD Management API support (3GPP 23.502 Sec. 52.6.3 PFD Management service) + - AF: Added support for PFD Northbound API + - NEF: Added support for PFD southbound API, and Stubs to loopback the PCF calls. + - kubectl: Enhanced CNCA kubectl plugin to configure PFD parameters + - WEB UI: Enhanced CNCA WEB UI to configure PFD params in OnPerm mode + - Auth2 based authentication between 5G Network functions: (as per 3GPP Standard) + - Implemented oAuth2 based authentication and validation + - AF and NEF communication channel is updated to authenticated based on oAuth2 JWT token in addition to HTTP2. + - HTTPS support + - Enhanced the 5G OAM, CNCA (web-ui and kube-ctl) to HTTPS interface + - Modular Playbook + - Support for customers to choose real-time or non-realtime kernel for a edge node + - Support for customer to choose CNIs - Validated with Kube-OVN and Calico + - Edge Apps + - FlexRAN: dockerfile and pod specification for deployment of 4G or 5G FlexRAN + - AF: dockerfile and pod specification + - NEF: dockerfile and pod specification + - UPF: dockerfile and pod specification # Changes to Existing Features - **OpenNESS 19.06** There are no unsupported or discontinued features relevant to this release. - **OpenNESS 19.06.01** There are no unsupported or discontinued features relevant to this release. - **OpenNESS 19.09** There are no unsupported or discontinued features relevant to this release. - - **OpenNESS 19.12** : + - **OpenNESS 19.12** - NTS Dataplane support for Network edge is discontinued. - Controller UI for Network edge has be discontinued except for the CNCA configuration. Customers can optionally leverage Kubernetes dashboard to onboard applications. - Edge node only supports non-realtime kernel. + - **OpenNESS 20.03** + - Support for HDDL-R only restricted to non-real-time or non-customized CentOS 7.6 default kernel. # Fixed Issues - **OpenNESS 19.06** There are no non-Intel issues relevant to this release. @@ -142,6 +178,9 @@ This document provides high level system features, issues and limitations inform - Application memory field is in MB - **OpenNESS 19.12** - Improved usability/automation in Ansible scripts +- **OpenNESS 20.03** + - Realtime Kernel support for network edge with K8s. + - Modular playbooks # Known Issues and Limitations - **OpenNESS 19.06** There are no issues relevant to this release. @@ -157,12 +196,19 @@ This document provides high level system features, issues and limitations inform - OpenNESS OnPremises: Can not remove a failed/disconnected the edge node information/state from the controller - The CNCA APIs (4G & 5G) supported in this release is an early access reference implementation and does not support authentication - Realtime kernel support has been temporarily disabled to address the Kubernetes 1.16.2 and Realtime kernel instability. 
- +- **OpenNESS 20.03** + - On-Premises edge installation takes more than 1.5hrs because of docker image build for OVS-DPDK + - Network edge installation takes more than 1.5hrs because of docker image build for OVS-DPDK + - OpenNESS controller allows management NICs to be in the pool of configuration which might allow configuration by mistake there by disconnecting the node from master + - When using the SRIOV EPA feature added in 20.03 with OVNCNI, the container cannot access the CNI port. This is due to the SRIOV port being set by changing the network used by the container from default to a custom network, This overwrites the OVNCNI network setting configured prior to this to enable the container to work with OVNCNI. Another issue with the SRIOV, is that this also overwrites the network configuration with the EAA and edgedns, agents, which prevents the SRIOV enabled container from communicating with the agents. + - Cannot remove Edge Node from Controller when its offline and traffic policy is configured or app is deployed. + # Release Content - **OpenNESS 19.06** OpenNESS Edge node, OpenNESS Controller, Common, Spec and OpenNESS Applications. - **OpenNESS 19.06.01** OpenNESS Edge node, OpenNESS Controller, Common, Spec and OpenNESS Applications. - **OpenNESS 19.09** OpenNESS Edge node, OpenNESS Controller, Common, Spec and OpenNESS Applications. - **OpenNESS 19.12** OpenNESS Edge node, OpenNESS Controller, Common, Spec, OpenNESS Applications and Experience kit. +- **OpenNESS 20.03** OpenNESS Edge node, OpenNESS Controller, Common, Spec, OpenNESS Applications and Experience kit. # Hardware and Software Compatibility OpenNESS Edge Node has been tested using the following hardware specification: @@ -205,5 +251,5 @@ OpenNESS Edge Node has been tested using the following hardware specification: | HDDL-R | [Mouser Mustang-V100](https://www.mouser.ie/datasheet/2/763/Mustang-V100_brochure-1526472.pdf) | # Supported Operating Systems -> OpenNESS was tested on CentOS Linux release 7.6.1810 (Core) : Note: OpenNESS is tested with CentOS 7.6 Pre-empt RT kernel to make sure VNFs and Applications can co-exist. There is not requirement from OpenNESS software to run on a Pre-empt RT kernel. +> OpenNESS was tested on CentOS Linux release 7.6.1810 (Core) : Note: OpenNESS is tested with CentOS 7.6 Pre-empt RT kernel to ensure VNFs and Applications can co-exist. There is not a requirement from OpenNESS software to run on a Pre-empt RT kernel. diff --git a/schema/5goam/5goam.swagger.json b/schema/5goam/5goam.swagger.json index c8f76f6d..c0278b08 100644 --- a/schema/5goam/5goam.swagger.json +++ b/schema/5goam/5goam.swagger.json @@ -1,17 +1,3 @@ -# Copyright 2019 Intel Corporation and Smart-Edge.com, Inc. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- { "swagger": "2.0", "info": { @@ -293,4 +279,5 @@ "description": "The afService post response body info" } } -} \ No newline at end of file +} + diff --git a/schema/af/af.openapi.json b/schema/af/af.openapi.json index 27843ce1..c749a633 100644 --- a/schema/af/af.openapi.json +++ b/schema/af/af.openapi.json @@ -28,7 +28,7 @@ "get": { "summary": "read all of the active subscriptions for this AF", "tags": [ - "Traffic Influence API, AF level GET operation" + "Traffic Influence API AF level GET operation" ], "responses": { "200": { @@ -132,7 +132,7 @@ "get": { "summary": "Reads an active subscriptions for the AF and for the subscription ID", "tags": [ - "Traffic Influence API, subscription level GET operation" + "Traffic Influence API Subscription level GET operation" ], "responses": { "200": { @@ -168,7 +168,7 @@ "put": { "summary": "Replaces an existing subscription resource based on subscription ID", "tags": [ - "Traffic Influence API, subscription level PUT operation" + "Traffic Influence API Subscription level PUT operation" ], "requestBody": { "description": "Parameters to replace the existing subscription", @@ -218,7 +218,7 @@ "patch": { "summary": "Updates an existing subscription resource based on subscription ID", "tags": [ - "Traffic Influence API, subscription level PATCH operation" + "Traffic Influence API Subscription level PATCH operation" ], "requestBody": { "required": true, @@ -267,7 +267,7 @@ "delete": { "summary": "Deletes an already existing subscription based on subscription ID", "tags": [ - "Traffic Influence API, subscription level DELETE operation" + "Traffic Influence API Subscription level DELETE operation" ], "responses": { "204": { diff --git a/schema/af/af.openapi.yaml b/schema/af/af.openapi.yaml index 4d379fa2..8e2b2697 100644 --- a/schema/af/af.openapi.yaml +++ b/schema/af/af.openapi.yaml @@ -26,7 +26,7 @@ paths: get: summary: read all of the active subscriptions for this AF tags: - - Traffic Influence API, AF level GET operation + - Traffic Influence API AF level GET operation responses: '200': description: OK. 
@@ -93,7 +93,7 @@ paths: get: summary: Reads an active subscriptions for the AF and for the subscription ID tags: - - Traffic Influence API, subscription level GET operation + - Traffic Influence API Subscription level GET operation responses: '200': description: OK (Successful get the active subscription) @@ -116,7 +116,7 @@ paths: put: summary: Replaces an existing subscription resource based on subscription ID tags: - - Traffic Influence API, subscription level PUT operation + - Traffic Influence API Subscription level PUT operation requestBody: description: Parameters to replace the existing subscription required: true @@ -148,7 +148,7 @@ paths: patch: summary: Updates an existing subscription resource based on subscription ID tags: - - Traffic Influence API, subscription level PATCH operation + - Traffic Influence API Subscription level PATCH operation requestBody: required: true content: @@ -179,7 +179,7 @@ paths: delete: summary: Deletes an already existing subscription based on subscription ID tags: - - Traffic Influence API, subscription level DELETE operation + - Traffic Influence API Subscription level DELETE operation responses: '204': description: No Content (Successful deletion of the existing subscription) diff --git a/schema/af/af_pfd.openapi.json b/schema/af/af_pfd.openapi.json new file mode 100644 index 00000000..91428fcd --- /dev/null +++ b/schema/af/af_pfd.openapi.json @@ -0,0 +1,1003 @@ +{ + "openapi": "3.0.0", + "info": { + "title": "Application Function PFD APIs", + "version": "1.0.0" + }, + "externalDocs": { + "description": "3GPP TS 29.122 V15.3.0 T8 reference point for Northbound APIs", + "url": "http://www.3gpp.org/ftp/Specs/archive/29_series/29.122/" + }, + "servers": [ + { + "url": "{apiRoot}/af/v1/pfd", + "variables": { + "apiRoot": { + "default": "https://example.com", + "description": "apiRoot as defined in subclause 5.2.4 of 3GPP TS 29.122." + } + } + } + ], + "paths": { + "/transactions": { + "get": { + "summary": "read all the PFD transactions for this AF", + "tags": [ + "PFD Management API AF level GET operation" + ], + "responses": { + "200": { + "description": "OK. All transactions related to the request URI are returned.", + "content": { + "application/json": { + "schema": { + "type": "array", + "items": { + "$ref": "#/components/schemas/PfdManagement" + } + } + } + } + }, + "400": { + "$ref": "#/components/responses/400" + }, + "401": { + "$ref": "#/components/responses/401" + }, + "403": { + "$ref": "#/components/responses/403" + }, + "404": { + "$ref": "#/components/responses/404" + }, + "406": { + "$ref": "#/components/responses/406" + }, + "429": { + "$ref": "#/components/responses/429" + }, + "500": { + "$ref": "#/components/responses/500" + }, + "503": { + "$ref": "#/components/responses/503" + }, + "default": { + "$ref": "#/components/responses/default" + } + } + }, + "post": { + "summary": "Creates a new PFD Management resource", + "tags": [ + "PFD Management API Transaction level POST Operation" + ], + "requestBody": { + "required": true, + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/PfdManagement" + } + } + }, + "description": "Create a new transaction for PFD management." + }, + "responses": { + "201": { + "description": "Created. The transaction was created successfully. The SCEF shall return the created transaction in the response payload body. 
PfdReport may be included to provide detailed failure information for some applications.", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/PfdManagement" + } + } + }, + "headers": { + "Location": { + "description": "Contains the URI of the newly created resource", + "required": true, + "schema": { + "type": "string" + } + } + } + }, + "400": { + "$ref": "#/components/responses/400" + }, + "401": { + "$ref": "#/components/responses/401" + }, + "403": { + "$ref": "#/components/responses/403" + }, + "404": { + "$ref": "#/components/responses/404" + }, + "411": { + "$ref": "#/components/responses/411" + }, + "413": { + "$ref": "#/components/responses/413" + }, + "415": { + "$ref": "#/components/responses/415" + }, + "429": { + "$ref": "#/components/responses/429" + }, + "500": { + "description": "The PFDs for all applications were not created successfully. PfdReport is included with detailed information.", + "content": { + "application/json": { + "schema": { + "type": "array", + "items": { + "$ref": "#/components/schemas/PfdReport" + }, + "minItems": 1 + } + }, + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "503": { + "$ref": "#/components/responses/503" + }, + "default": { + "$ref": "#/components/responses/default" + } + } + } + }, + "/transactions/{transactionId}": { + "parameters": [ + { + "name": "transactionId", + "in": "path", + "description": "Transaction ID", + "required": true, + "schema": { + "type": "string" + } + } + ], + "get": { + "summary": "Reads an active transaction for the AF based on the transaction ID", + "tags": [ + "PFD Management API Transaction level GET Operation" + ], + "responses": { + "200": { + "description": "OK. The transaction information related to the request URI is returned.", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/PfdManagement" + } + } + } + }, + "400": { + "$ref": "#/components/responses/400" + }, + "401": { + "$ref": "#/components/responses/401" + }, + "403": { + "$ref": "#/components/responses/403" + }, + "404": { + "$ref": "#/components/responses/404" + }, + "406": { + "$ref": "#/components/responses/406" + }, + "429": { + "$ref": "#/components/responses/429" + }, + "500": { + "$ref": "#/components/responses/500" + }, + "503": { + "$ref": "#/components/responses/503" + }, + "default": { + "$ref": "#/components/responses/default" + } + } + }, + "put": { + "summary": "Replaces an active transaction based on the transaction ID", + "tags": [ + "PFD Management API Transaction level PUT Operation" + ], + "requestBody": { + "required": true, + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/PfdManagement" + } + } + }, + "description": "Change information in PFD management transaction." + }, + "responses": { + "200": { + "description": "OK. The transaction was modified successfully. 
The SCEF shall return an updated transaction in the response payload body.", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/PfdManagement" + } + } + } + }, + "400": { + "$ref": "#/components/responses/400" + }, + "401": { + "$ref": "#/components/responses/401" + }, + "403": { + "$ref": "#/components/responses/403" + }, + "404": { + "$ref": "#/components/responses/404" + }, + "411": { + "$ref": "#/components/responses/411" + }, + "413": { + "$ref": "#/components/responses/413" + }, + "415": { + "$ref": "#/components/responses/415" + }, + "429": { + "$ref": "#/components/responses/429" + }, + "500": { + "description": "The PFDs for all applications were not updated successfully. PfdReport is included with detailed information.", + "content": { + "application/json": { + "schema": { + "type": "array", + "items": { + "$ref": "#/components/schemas/PfdReport" + }, + "minItems": 1 + } + }, + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "503": { + "$ref": "#/components/responses/503" + }, + "default": { + "$ref": "#/components/responses/default" + } + } + }, + "delete": { + "summary": "Deletes an already existing transaction based on transaction ID", + "tags": [ + "PFD Management API Transaction level DELETE Operation" + ], + "responses": { + "204": { + "description": "No Content. The transaction was deleted successfully. The payload body shall be empty." + }, + "400": { + "$ref": "#/components/responses/400" + }, + "401": { + "$ref": "#/components/responses/401" + }, + "403": { + "$ref": "#/components/responses/403" + }, + "404": { + "$ref": "#/components/responses/404" + }, + "429": { + "$ref": "#/components/responses/429" + }, + "500": { + "$ref": "#/components/responses/500" + }, + "503": { + "$ref": "#/components/responses/503" + }, + "default": { + "$ref": "#/components/responses/default" + } + } + } + }, + "/transactions/{transactionId}/applications/{appId}": { + "parameters": [ + { + "name": "transactionId", + "in": "path", + "description": "Transaction ID", + "required": true, + "schema": { + "type": "string" + } + }, + { + "name": "appId", + "in": "path", + "description": "Identifier of the application", + "required": true, + "schema": { + "type": "string" + } + } + ], + "get": { + "summary": "Reads PFD data for an application based on transaction ID and application ID", + "tags": [ + "PFD Management API Application level GET Operation" + ], + "responses": { + "200": { + "description": "OK. 
The application information related to the request URI is returned.", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/PfdData" + } + } + } + }, + "400": { + "$ref": "#/components/responses/400" + }, + "401": { + "$ref": "#/components/responses/401" + }, + "403": { + "$ref": "#/components/responses/403" + }, + "404": { + "$ref": "#/components/responses/404" + }, + "406": { + "$ref": "#/components/responses/406" + }, + "429": { + "$ref": "#/components/responses/429" + }, + "500": { + "$ref": "#/components/responses/500" + }, + "503": { + "$ref": "#/components/responses/503" + }, + "default": { + "$ref": "#/components/responses/default" + } + } + }, + "put": { + "summary": "Replaces PFD data for an application based on transaction ID and application ID", + "tags": [ + "PFD Management API Application level PUT Operation" + ], + "requestBody": { + "required": true, + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/PfdData" + } + } + }, + "description": "Change information in application." + }, + "responses": { + "200": { + "description": "OK. The application resource was modified successfully. The SCEF shall return an updated application resource in the response payload body.", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/PfdData" + } + } + } + }, + "400": { + "$ref": "#/components/responses/400" + }, + "401": { + "$ref": "#/components/responses/401" + }, + "403": { + "description": "The PFDs for the application were not updated successfully.", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/PfdReport" + } + }, + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "404": { + "$ref": "#/components/responses/404" + }, + "409": { + "description": "The PFDs for the application were not updated successfully.", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/PfdReport" + } + }, + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "411": { + "$ref": "#/components/responses/411" + }, + "413": { + "$ref": "#/components/responses/413" + }, + "415": { + "$ref": "#/components/responses/415" + }, + "429": { + "$ref": "#/components/responses/429" + }, + "500": { + "description": "The PFDs for the application were not updated successfully.", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/PfdReport" + } + }, + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "503": { + "$ref": "#/components/responses/503" + }, + "default": { + "$ref": "#/components/responses/default" + } + } + }, + "patch": { + "summary": "Updates PFD data for an application based on transaction ID and application ID", + "tags": [ + "PFD Management API Application level PATCH Operation" + ], + "requestBody": { + "required": true, + "content": { + "application/merge-patch+json": { + "schema": { + "$ref": "#/components/schemas/PfdData" + } + } + }, + "description": "Change information in PFD management transaction." + }, + "responses": { + "200": { + "description": "OK. The transaction was modified successfully. 
The SCEF shall return an updated transaction in the response payload body.", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/PfdData" + } + } + } + }, + "400": { + "$ref": "#/components/responses/400" + }, + "401": { + "$ref": "#/components/responses/401" + }, + "403": { + "description": "The PFDs for the application were not updated successfully.", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/PfdReport" + } + }, + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "404": { + "$ref": "#/components/responses/404" + }, + "409": { + "description": "The PFDs for the application were not updated successfully.", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/PfdReport" + } + }, + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "411": { + "$ref": "#/components/responses/411" + }, + "413": { + "$ref": "#/components/responses/413" + }, + "415": { + "$ref": "#/components/responses/415" + }, + "429": { + "$ref": "#/components/responses/429" + }, + "500": { + "description": "The PFDs for the application were not updated successfully.", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/PfdReport" + } + }, + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "503": { + "$ref": "#/components/responses/503" + }, + "default": { + "$ref": "#/components/responses/default" + } + } + }, + "delete": { + "summary": "Deletes PFD data for an application based on transaction ID and application ID", + "tags": [ + "PFD Management API Application level DELETE Operation" + ], + "responses": { + "204": { + "description": "No Content. The application was deleted successfully. The payload body shall be empty." 
+ }, + "400": { + "$ref": "#/components/responses/400" + }, + "401": { + "$ref": "#/components/responses/401" + }, + "403": { + "$ref": "#/components/responses/403" + }, + "404": { + "$ref": "#/components/responses/404" + }, + "429": { + "$ref": "#/components/responses/429" + }, + "500": { + "$ref": "#/components/responses/500" + }, + "503": { + "$ref": "#/components/responses/503" + }, + "default": { + "$ref": "#/components/responses/default" + } + } + } + } + }, + "components": { + "responses": { + "400": { + "description": "Bad request", + "content": { + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "401": { + "description": "Unauthorized", + "content": { + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "403": { + "description": "Forbidden", + "content": { + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "404": { + "description": "Not Found", + "content": { + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "406": { + "description": "Not Acceptable", + "content": { + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "409": { + "description": "Conflict", + "content": { + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "411": { + "description": "Length Required", + "content": { + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "412": { + "description": "Precondition Failed", + "content": { + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "413": { + "description": "Payload Too Large", + "content": { + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "414": { + "description": "URI Too Long", + "content": { + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "415": { + "description": "Unsupported Media Type", + "content": { + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "429": { + "description": "Too Many Requests", + "content": { + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "500": { + "description": "Internal Server Error", + "content": { + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "503": { + "description": "Service Unavailable", + "content": { + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "default": { + "description": "Generic Error" + } + }, + "schemas": { + "DurationSec": { + "type": "integer", + "minimum": 0, + "description": "Unsigned integer identifying a period of time in units of seconds." 
+ }, + "DurationSecRm": { + "type": "integer", + "minimum": 0, + "description": "Unsigned integer identifying a period of time in units of seconds with \"nullable=true\" property.", + "nullable": true + }, + "DurationSecRo": { + "type": "integer", + "minimum": 0, + "description": "Unsigned integer identifying a period of time in units of seconds with \"readOnly=true\" property.", + "readOnly": true + }, + "SupportedFeatures": { + "type": "string", + "pattern": "^[A-Fa-f0-9]*$" + }, + "Link": { + "type": "string", + "description": "string formatted according to IETF RFC 3986 identifying a referenced resource." + }, + "Uri": { + "type": "string", + "description": "string providing an URI formatted according to IETF RFC 3986." + }, + "ProblemDetails": { + "type": "object", + "properties": { + "type": { + "$ref": "#/components/schemas/Uri" + }, + "title": { + "type": "string", + "description": "A short, human-readable summary of the problem type. It should not change from occurrence to occurrence of the problem." + }, + "status": { + "type": "integer", + "description": "The HTTP status code for this occurrence of the problem." + }, + "detail": { + "type": "string", + "description": "A human-readable explanation specific to this occurrence of the problem." + }, + "instance": { + "$ref": "#/components/schemas/Uri" + }, + "cause": { + "type": "string", + "description": "A machine-readable application error cause specific to this occurrence of the problem. This IE should be present and provide application-related error information, if available." + }, + "invalidParams": { + "type": "array", + "items": { + "$ref": "#/components/schemas/InvalidParam" + }, + "minItems": 1, + "description": "Description of invalid parameters, for a request rejected due to invalid parameters." + } + } + }, + "InvalidParam": { + "type": "object", + "properties": { + "param": { + "type": "string", + "description": "Attribute's name encoded as a JSON Pointer, or header's name." + }, + "reason": { + "type": "string", + "description": "A human-readable reason, e.g. \"must be a positive integer\"." + } + }, + "required": [ + "param" + ] + }, + "PfdManagement": { + "type": "object", + "properties": { + "self": { + "$ref": "#/components/schemas/Link" + }, + "supportedFeatures": { + "$ref": "#/components/schemas/SupportedFeatures" + }, + "pfdDatas": { + "type": "object", + "additionalProperties": { + "$ref": "#/components/schemas/PfdData" + }, + "minProperties": 1, + "description": "Each element uniquely identifies the PFDs for an external application identifier. Each element is identified in the map via an external application identifier as key. The response shall include successfully provisioned PFD data of application(s)." + }, + "pfdReports": { + "type": "object", + "additionalProperties": { + "$ref": "#/components/schemas/PfdReport" + }, + "minProperties": 1, + "description": "Supplied by the SCEF and contains the external application identifiers for which PFD(s) are not added or modified successfully. The failure reason is also included. 
Each element provides the related information for one or more external application identifier(s) and is identified in the map via the failure identifier as key.", + "readOnly": true + } + }, + "required": [ + "pfdDatas" + ] + }, + "PfdData": { + "type": "object", + "properties": { + "externalAppId": { + "type": "string", + "description": "Each element uniquely external application identifier" + }, + "self": { + "$ref": "#/components/schemas/Link" + }, + "pfds": { + "type": "object", + "additionalProperties": { + "$ref": "#/components/schemas/Pfd" + }, + "description": "Contains the PFDs of the external application identifier. Each PFD is identified in the map via a key containing the PFD identifier." + }, + "allowedDelay": { + "$ref": "#/components/schemas/DurationSecRm" + }, + "cachingTime": { + "$ref": "#/components/schemas/DurationSecRo" + } + }, + "required": [ + "externalAppId", + "pfds" + ] + }, + "Pfd": { + "type": "object", + "properties": { + "pfdId": { + "type": "string", + "description": "Identifies a PDF of an application identifier." + }, + "flowDescriptions": { + "type": "array", + "items": { + "type": "string" + }, + "minItems": 1, + "description": "Represents a 3-tuple with protocol, server ip and server port for UL/DL application traffic. The content of the string has the same encoding as the IPFilterRule AVP value as defined in IETF RFC 6733." + }, + "urls": { + "type": "array", + "items": { + "type": "string" + }, + "minItems": 1, + "description": "Indicates a URL or a regular expression which is used to match the significant parts of the URL." + }, + "domainNames": { + "type": "array", + "items": { + "type": "string" + }, + "minItems": 1, + "description": "Indicates an FQDN or a regular expression as a domain name matching criteria." + } + }, + "required": [ + "pfdId" + ] + }, + "PfdReport": { + "type": "object", + "properties": { + "externalAppIds": { + "type": "array", + "items": { + "type": "string" + }, + "minItems": 1, + "description": "Identifies the external application identifier(s) which PFD(s) are not added or modified successfully" + }, + "failureCode": { + "$ref": "#/components/schemas/FailureCode" + }, + "cachingTime": { + "$ref": "#/components/schemas/DurationSec" + } + }, + "required": [ + "externalAppIds", + "failureCode" + ] + }, + "FailureCode": { + "anyOf": [ + { + "type": "string", + "enum": [ + "MALFUNCTION", + "RESOURCE_LIMITATION", + "SHORT_DELAY", + "APP_ID_DUPLICATED", + "OTHER_REASON" + ] + }, + { + "type": "string", + "description": "This string provides forward-compatibility with future extensions to the enumeration but is not used to encode content defined in the present version of this API.\n" + } + ], + "description": "Possible values are - MALFUNCTION: This value indicates that something functions wrongly in PFD provisioning or the PFD provisioning does not function at all. - RESOURCE_LIMITATION: This value indicates there is resource limitation for PFD storage. - SHORT_DELAY: This value indicates that the allowed delay is too short and PFD(s) are not stored. - APP_ID_DUPLICATED: The received external application identifier(s) are already provisioned. 
- OTHER_REASON: Other reason unspecified.\n" + } + } + } +} \ No newline at end of file diff --git a/schema/af/af_pfd.openapi.yaml b/schema/af/af_pfd.openapi.yaml new file mode 100644 index 00000000..fb486afb --- /dev/null +++ b/schema/af/af_pfd.openapi.yaml @@ -0,0 +1,666 @@ +# SPDX-License-Identifier: Apache-2.0 +# Copyright (c) 2020 Intel Corporation + +# The source of this file is from 3GPP 29.522 Release 15 version 3 +# taken from http://www.3gpp.org/ftp/Specs/archive/29_series/29.522/ + +openapi: 3.0.0 +info: + title: Application Function PFD APIs + version: "1.0.0" +externalDocs: + description: 3GPP TS 29.122 V15.3.0 T8 reference point for Northbound APIs + url: 'http://www.3gpp.org/ftp/Specs/archive/29_series/29.122/' +servers: + - url: '{apiRoot}/af/v1/pfd' + variables: + apiRoot: + default: https://example.com + description: apiRoot as defined in subclause 5.2.4 of 3GPP TS 29.122. +paths: + '/transactions': + get: + summary: read all the PFD transactions for this AF + tags: + - PFD Management API AF level GET operation + responses: + '200': + description: OK. All transactions related to the request URI are returned. + content: + application/json: + schema: + type: array + items: + $ref: '#/components/schemas/PfdManagement' + '400': + $ref: '#/components/responses/400' + '401': + $ref: '#/components/responses/401' + '403': + $ref: '#/components/responses/403' + '404': + $ref: '#/components/responses/404' + '406': + $ref: '#/components/responses/406' + '429': + $ref: '#/components/responses/429' + '500': + $ref: '#/components/responses/500' + '503': + $ref: '#/components/responses/503' + default: + $ref: '#/components/responses/default' + post: + summary: Creates a new PFD Management resource + tags: + - PFD Management API Transaction level POST Operation + requestBody: + required: true + content: + application/json: + schema: + $ref: '#/components/schemas/PfdManagement' + description: Create a new transaction for PFD management. + responses: + '201': + description: Created. The transaction was created successfully. The SCEF shall return the created transaction in the response payload body. PfdReport may be included to provide detailed failure information for some applications. + content: + application/json: + schema: + $ref: '#/components/schemas/PfdManagement' + headers: + Location: + description: 'Contains the URI of the newly created resource' + required: true + schema: + type: string + '400': + $ref: '#/components/responses/400' + '401': + $ref: '#/components/responses/401' + '403': + $ref: '#/components/responses/403' + '404': + $ref: '#/components/responses/404' + '411': + $ref: '#/components/responses/411' + '413': + $ref: '#/components/responses/413' + '415': + $ref: '#/components/responses/415' + '429': + $ref: '#/components/responses/429' + '500': + description: The PFDs for all applications were not created successfully. PfdReport is included with detailed information. 
+ content: + application/json: + schema: + type: array + items: + $ref: '#/components/schemas/PfdReport' + minItems: 1 + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + '503': + $ref: '#/components/responses/503' + default: + $ref: '#/components/responses/default' + '/transactions/{transactionId}': + parameters: + - name: transactionId + in: path + description: Transaction ID + required: true + schema: + type: string + get: + summary: Reads an active transaction for the AF based on the transaction ID + tags: + - PFD Management API Transaction level GET Operation + responses: + '200': + description: OK. The transaction information related to the request URI is returned. + content: + application/json: + schema: + $ref: '#/components/schemas/PfdManagement' + '400': + $ref: '#/components/responses/400' + '401': + $ref: '#/components/responses/401' + '403': + $ref: '#/components/responses/403' + '404': + $ref: '#/components/responses/404' + '406': + $ref: '#/components/responses/406' + '429': + $ref: '#/components/responses/429' + '500': + $ref: '#/components/responses/500' + '503': + $ref: '#/components/responses/503' + default: + $ref: '#/components/responses/default' + put: + summary: Replaces an active transaction based on the transaction ID + tags: + - PFD Management API Transaction level PUT Operation + requestBody: + required: true + content: + application/json: + schema: + $ref: '#/components/schemas/PfdManagement' + description: Change information in PFD management transaction. + responses: + '200': + description: OK. The transaction was modified successfully. The SCEF shall return an updated transaction in the response payload body. + content: + application/json: + schema: + $ref: '#/components/schemas/PfdManagement' + '400': + $ref: '#/components/responses/400' + '401': + $ref: '#/components/responses/401' + '403': + $ref: '#/components/responses/403' + '404': + $ref: '#/components/responses/404' + '411': + $ref: '#/components/responses/411' + '413': + $ref: '#/components/responses/413' + '415': + $ref: '#/components/responses/415' + '429': + $ref: '#/components/responses/429' + '500': + description: The PFDs for all applications were not updated successfully. PfdReport is included with detailed information. + content: + application/json: + schema: + type: array + items: + $ref: '#/components/schemas/PfdReport' + minItems: 1 + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + '503': + $ref: '#/components/responses/503' + default: + $ref: '#/components/responses/default' + delete: + summary: Deletes an already existing transaction based on transaction ID + tags: + - PFD Management API Transaction level DELETE Operation + responses: + '204': + description: No Content. The transaction was deleted successfully. The payload body shall be empty. 
+ '400': + $ref: '#/components/responses/400' + '401': + $ref: '#/components/responses/401' + '403': + $ref: '#/components/responses/403' + '404': + $ref: '#/components/responses/404' + '429': + $ref: '#/components/responses/429' + '500': + $ref: '#/components/responses/500' + '503': + $ref: '#/components/responses/503' + default: + $ref: '#/components/responses/default' + '/transactions/{transactionId}/applications/{appId}': + parameters: + - name: transactionId + in: path + description: Transaction ID + required: true + schema: + type: string + - name: appId + in: path + description: Identifier of the application + required: true + schema: + type: string + get: + summary: Reads PFD data for an application based on transaction ID and application ID + tags: + - PFD Management API Application level GET Operation + responses: + '200': + description: OK. The application information related to the request URI is returned. + content: + application/json: + schema: + $ref: '#/components/schemas/PfdData' + '400': + $ref: '#/components/responses/400' + '401': + $ref: '#/components/responses/401' + '403': + $ref: '#/components/responses/403' + '404': + $ref: '#/components/responses/404' + '406': + $ref: '#/components/responses/406' + '429': + $ref: '#/components/responses/429' + '500': + $ref: '#/components/responses/500' + '503': + $ref: '#/components/responses/503' + default: + $ref: '#/components/responses/default' + put: + summary: Replaces PFD data for an application based on transaction ID and application ID + tags: + - PFD Management API Application level PUT Operation + requestBody: + required: true + content: + application/json: + schema: + $ref: '#/components/schemas/PfdData' + description: Change information in application. + responses: + '200': + description: OK. The application resource was modified successfully. The SCEF shall return an updated application resource in the response payload body. + content: + application/json: + schema: + $ref: '#/components/schemas/PfdData' + '400': + $ref: '#/components/responses/400' + '401': + $ref: '#/components/responses/401' + '403': + description: The PFDs for the application were not updated successfully. + content: + application/json: + schema: + $ref: '#/components/schemas/PfdReport' + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + '404': + $ref: '#/components/responses/404' + '409': + description: The PFDs for the application were not updated successfully. + content: + application/json: + schema: + $ref: '#/components/schemas/PfdReport' + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + '411': + $ref: '#/components/responses/411' + '413': + $ref: '#/components/responses/413' + '415': + $ref: '#/components/responses/415' + '429': + $ref: '#/components/responses/429' + '500': + description: The PFDs for the application were not updated successfully. + content: + application/json: + schema: + $ref: '#/components/schemas/PfdReport' + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + '503': + $ref: '#/components/responses/503' + default: + $ref: '#/components/responses/default' + patch: + summary: Updates PFD data for an application based on transaction ID and application ID + tags: + - PFD Management API Application level PATCH Operation + requestBody: + required: true + content: + application/merge-patch+json: + schema: + $ref: '#/components/schemas/PfdData' + description: Change information in PFD management transaction. 
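+      # Editor's note, illustrative only (not part of the 3GPP-derived schema): assuming an
+      # application already provisioned under this transaction, a merge-patch body that
+      # replaces only its PFD "pfd-http" might look like:
+      #   {
+      #     "pfds": {
+      #       "pfd-http": {
+      #         "pfdId": "pfd-http",
+      #         "urls": [ "^http://example\\.com/.*$" ]
+      #       }
+      #     }
+      #   }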
+ responses: + '200': + description: OK. The transaction was modified successfully. The SCEF shall return an updated transaction in the response payload body. + content: + application/json: + schema: + $ref: '#/components/schemas/PfdData' + '400': + $ref: '#/components/responses/400' + '401': + $ref: '#/components/responses/401' + '403': + description: The PFDs for the application were not updated successfully. + content: + application/json: + schema: + $ref: '#/components/schemas/PfdReport' + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + '404': + $ref: '#/components/responses/404' + '409': + description: The PFDs for the application were not updated successfully. + content: + application/json: + schema: + $ref: '#/components/schemas/PfdReport' + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + '411': + $ref: '#/components/responses/411' + '413': + $ref: '#/components/responses/413' + '415': + $ref: '#/components/responses/415' + '429': + $ref: '#/components/responses/429' + '500': + description: The PFDs for the application were not updated successfully. + content: + application/json: + schema: + $ref: '#/components/schemas/PfdReport' + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + '503': + $ref: '#/components/responses/503' + default: + $ref: '#/components/responses/default' + delete: + summary: Deletes PFD data for an application based on transaction ID and application ID + tags: + - PFD Management API Application level DELETE Operation + responses: + '204': + description: No Content. The application was deleted successfully. The payload body shall be empty. + '400': + $ref: '#/components/responses/400' + '401': + $ref: '#/components/responses/401' + '403': + $ref: '#/components/responses/403' + '404': + $ref: '#/components/responses/404' + '429': + $ref: '#/components/responses/429' + '500': + $ref: '#/components/responses/500' + '503': + $ref: '#/components/responses/503' + default: + $ref: '#/components/responses/default' +components: + responses: + '400': + description: Bad request + content: + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + '401': + description: Unauthorized + content: + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + '403': + description: Forbidden + content: + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + '404': + description: Not Found + content: + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + '406': + description: Not Acceptable + content: + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + '409': + description: Conflict + content: + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + '411': + description: Length Required + content: + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + '412': + description: Precondition Failed + content: + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + '413': + description: Payload Too Large + content: + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + '414': + description: URI Too Long + content: + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + '415': + description: Unsupported Media Type + content: + application/problem+json: + schema: + $ref: 
'#/components/schemas/ProblemDetails' + '429': + description: Too Many Requests + content: + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + '500': + description: Internal Server Error + content: + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + '503': + description: Service Unavailable + content: + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + default: + description: Generic Error + + schemas: + DurationSec: + type: integer + minimum: 0 + description: Unsigned integer identifying a period of time in units of seconds. + DurationSecRm: + type: integer + minimum: 0 + description: Unsigned integer identifying a period of time in units of seconds with "nullable=true" property. + nullable: true + DurationSecRo: + type: integer + minimum: 0 + description: Unsigned integer identifying a period of time in units of seconds with "readOnly=true" property. + readOnly: true + SupportedFeatures: + type: string + pattern: '^[A-Fa-f0-9]*$' + Link: + type: string + description: string formatted according to IETF RFC 3986 identifying a referenced resource. + Uri: + type: string + description: string providing an URI formatted according to IETF RFC 3986. + ProblemDetails: + type: object + properties: + type: + $ref: '#/components/schemas/Uri' + title: + type: string + description: A short, human-readable summary of the problem type. It should not change from occurrence to occurrence of the problem. + status: + type: integer + description: The HTTP status code for this occurrence of the problem. + detail: + type: string + description: A human-readable explanation specific to this occurrence of the problem. + instance: + $ref: '#/components/schemas/Uri' + cause: + type: string + description: A machine-readable application error cause specific to this occurrence of the problem. This IE should be present and provide application-related error information, if available. + invalidParams: + type: array + items: + $ref: '#/components/schemas/InvalidParam' + minItems: 1 + description: Description of invalid parameters, for a request rejected due to invalid parameters. + InvalidParam: + type: object + properties: + param: + type: string + description: Attribute's name encoded as a JSON Pointer, or header's name. + reason: + type: string + description: A human-readable reason, e.g. "must be a positive integer". + required: + - param + + PfdManagement: + type: object + properties: + self: + $ref: '#/components/schemas/Link' + supportedFeatures: + $ref: '#/components/schemas/SupportedFeatures' + pfdDatas: + type: object + additionalProperties: + $ref: '#/components/schemas/PfdData' + minProperties: 1 + description: Each element uniquely identifies the PFDs for an external application identifier. Each element is identified in the map via an external application identifier as key. The response shall include successfully provisioned PFD data of application(s). + pfdReports: + type: object + additionalProperties: + $ref: '#/components/schemas/PfdReport' + minProperties: 1 + description: Supplied by the SCEF and contains the external application identifiers for which PFD(s) are not added or modified successfully. The failure reason is also included. Each element provides the related information for one or more external application identifier(s) and is identified in the map via the failure identifier as key. 
+ readOnly: true + required: + - pfdDatas + PfdData: + type: object + properties: + externalAppId: + type: string + description: Each element uniquely external application identifier + self: + $ref: '#/components/schemas/Link' + pfds: + type: object + additionalProperties: + $ref: '#/components/schemas/Pfd' + description: Contains the PFDs of the external application identifier. Each PFD is identified in the map via a key containing the PFD identifier. + allowedDelay: + $ref: '#/components/schemas/DurationSecRm' + cachingTime: + $ref: '#/components/schemas/DurationSecRo' + required: + - externalAppId + - pfds + Pfd: + type: object + properties: + pfdId: + type: string + description: Identifies a PDF of an application identifier. + flowDescriptions: + type: array + items: + type: string + minItems: 1 + description: Represents a 3-tuple with protocol, server ip and server port for UL/DL application traffic. The content of the string has the same encoding as the IPFilterRule AVP value as defined in IETF RFC 6733. + urls: + type: array + items: + type: string + minItems: 1 + description: Indicates a URL or a regular expression which is used to match the significant parts of the URL. + domainNames: + type: array + items: + type: string + minItems: 1 + description: Indicates an FQDN or a regular expression as a domain name matching criteria. + required: + - pfdId + PfdReport: + type: object + properties: + externalAppIds: + type: array + items: + type: string + minItems: 1 + description: Identifies the external application identifier(s) which PFD(s) are not added or modified successfully + failureCode: + $ref: '#/components/schemas/FailureCode' + cachingTime: + $ref: '#/components/schemas/DurationSec' + required: + - externalAppIds + - failureCode + FailureCode: + anyOf: + - type: string + enum: + - MALFUNCTION + - RESOURCE_LIMITATION + - SHORT_DELAY + - APP_ID_DUPLICATED + - OTHER_REASON + - type: string + description: > + This string provides forward-compatibility with future + extensions to the enumeration but is not used to encode + content defined in the present version of this API. + description: > + Possible values are + - MALFUNCTION: This value indicates that something functions wrongly in PFD provisioning or the PFD provisioning does not function at all. + - RESOURCE_LIMITATION: This value indicates there is resource limitation for PFD storage. + - SHORT_DELAY: This value indicates that the allowed delay is too short and PFD(s) are not stored. + - APP_ID_DUPLICATED: The received external application identifier(s) are already provisioned. + - OTHER_REASON: Other reason unspecified. + diff --git a/schema/controller/api.swagger.yml b/schema/controller/api.swagger.yml index bf524164..f49278f7 100644 --- a/schema/controller/api.swagger.yml +++ b/schema/controller/api.swagger.yml @@ -1,5 +1,5 @@ # SPDX-License-Identifier: Apache-2.0 -# Copyright (c) 2019 Intel Corporation +# Copyright (c) 2019-2020 Intel Corporation openapi: 3.0.0 info: @@ -351,7 +351,7 @@ components: ipModifier: type: object properties: - address: + address: oneOf: - $ref: '#/components/schemas/ipv4Address' - $ref: '#/components/schemas/ipv6Address' diff --git a/schema/eaa/README.md b/schema/eaa/README.md index 0eb0ea81..2944110e 100644 --- a/schema/eaa/README.md +++ b/schema/eaa/README.md @@ -180,18 +180,3 @@ return ""HTTP 204: Deactivated"" @enduml -### License - -Copyright 2019 Smart-Edge.com, Inc. All rights reserved. 
- -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. diff --git a/schema/nef/nef_pfd_management_openapi.json b/schema/nef/nef_pfd_management_openapi.json new file mode 100644 index 00000000..9a8fb8a3 --- /dev/null +++ b/schema/nef/nef_pfd_management_openapi.json @@ -0,0 +1,1049 @@ +{ + "openapi": "3.0.0", + "info": { + "title": "3gpp-pfd-management", + "version": "1.0.0" + }, + "externalDocs": { + "description": "3GPP TS 29.122 V15.3.0 T8 reference point for Northbound APIs", + "url": "http://www.3gpp.org/ftp/Specs/archive/29_series/29.122/" + }, + "security": [ + {}, + { + "oAuth2ClientCredentials": [] + } + ], + "servers": [ + { + "url": "{apiRoot}/3gpp-pfd-management/v1", + "variables": { + "apiRoot": { + "default": "https://example.com", + "description": "apiRoot as defined in subclause 5.2.4 of 3GPP TS 29.122." + } + } + } + ], + "paths": { + "/{scsAsId}/transactions": { + "parameters": [ + { + "name": "scsAsId", + "in": "path", + "description": "Identifier of the SCS/AS as defined in subclause subclause 5.2.4 of 3GPP TS 29.122.", + "required": true, + "schema": { + "type": "string" + } + } + ], + "get": { + "summary": "read all the PFD transactions for SCS/AS", + "tags": [ + "PFD Management API SCS/AS level GET operation" + ], + "responses": { + "200": { + "description": "OK. All transactions related to the request URI are returned.", + "content": { + "application/json": { + "schema": { + "type": "array", + "items": { + "$ref": "#/components/schemas/PfdManagement" + } + } + } + } + }, + "400": { + "$ref": "#/components/responses/400" + }, + "401": { + "$ref": "#/components/responses/401" + }, + "403": { + "$ref": "#/components/responses/403" + }, + "404": { + "$ref": "#/components/responses/404" + }, + "406": { + "$ref": "#/components/responses/406" + }, + "429": { + "$ref": "#/components/responses/429" + }, + "500": { + "$ref": "#/components/responses/500" + }, + "503": { + "$ref": "#/components/responses/503" + }, + "default": { + "$ref": "#/components/responses/default" + } + } + }, + "post": { + "summary": "Creates a new PFD Management resource", + "tags": [ + "PFD Management API Transaction level POST Operation" + ], + "requestBody": { + "required": true, + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/PfdManagement" + } + } + }, + "description": "Create a new transaction for PFD management." + }, + "responses": { + "201": { + "description": "Created. The transaction was created successfully. The SCEF shall return the created transaction in the response payload body. 
PfdReport may be included to provide detailed failure information for some applications.", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/PfdManagement" + } + } + }, + "headers": { + "Location": { + "description": "Contains the URI of the newly created resource", + "required": true, + "schema": { + "type": "string" + } + } + } + }, + "400": { + "$ref": "#/components/responses/400" + }, + "401": { + "$ref": "#/components/responses/401" + }, + "403": { + "$ref": "#/components/responses/403" + }, + "404": { + "$ref": "#/components/responses/404" + }, + "411": { + "$ref": "#/components/responses/411" + }, + "413": { + "$ref": "#/components/responses/413" + }, + "415": { + "$ref": "#/components/responses/415" + }, + "429": { + "$ref": "#/components/responses/429" + }, + "500": { + "description": "The PFDs for all applications were not created successfully. PfdReport is included with detailed information.", + "content": { + "application/json": { + "schema": { + "type": "array", + "items": { + "$ref": "#/components/schemas/PfdReport" + }, + "minItems": 1 + } + }, + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "503": { + "$ref": "#/components/responses/503" + }, + "default": { + "$ref": "#/components/responses/default" + } + } + } + }, + "/{scsAsId}/transactions/{transactionId}": { + "parameters": [ + { + "name": "scsAsId", + "in": "path", + "description": "Identifier of the SCS/AS as defined in subclause subclause 5.2.4 of 3GPP TS 29.122.", + "required": true, + "schema": { + "type": "string" + } + }, + { + "name": "transactionId", + "in": "path", + "description": "Transaction ID", + "required": true, + "schema": { + "type": "string" + } + } + ], + "get": { + "summary": "Reads an active transaction based on the transaction ID", + "tags": [ + "PFD Management API Transaction level GET Operation" + ], + "responses": { + "200": { + "description": "OK. The transaction information related to the request URI is returned.", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/PfdManagement" + } + } + } + }, + "400": { + "$ref": "#/components/responses/400" + }, + "401": { + "$ref": "#/components/responses/401" + }, + "403": { + "$ref": "#/components/responses/403" + }, + "404": { + "$ref": "#/components/responses/404" + }, + "406": { + "$ref": "#/components/responses/406" + }, + "429": { + "$ref": "#/components/responses/429" + }, + "500": { + "$ref": "#/components/responses/500" + }, + "503": { + "$ref": "#/components/responses/503" + }, + "default": { + "$ref": "#/components/responses/default" + } + } + }, + "put": { + "summary": "Replaces an active transaction based on the transaction ID", + "tags": [ + "PFD Management API Transaction level PUT Operation" + ], + "requestBody": { + "required": true, + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/PfdManagement" + } + } + }, + "description": "Change information in PFD management transaction." + }, + "responses": { + "200": { + "description": "OK. The transaction was modified successfully. 
The SCEF shall return an updated transaction in the response payload body.", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/PfdManagement" + } + } + } + }, + "400": { + "$ref": "#/components/responses/400" + }, + "401": { + "$ref": "#/components/responses/401" + }, + "403": { + "$ref": "#/components/responses/403" + }, + "404": { + "$ref": "#/components/responses/404" + }, + "411": { + "$ref": "#/components/responses/411" + }, + "413": { + "$ref": "#/components/responses/413" + }, + "415": { + "$ref": "#/components/responses/415" + }, + "429": { + "$ref": "#/components/responses/429" + }, + "500": { + "description": "The PFDs for all applications were not updated successfully. PfdReport is included with detailed information.", + "content": { + "application/json": { + "schema": { + "type": "array", + "items": { + "$ref": "#/components/schemas/PfdReport" + }, + "minItems": 1 + } + }, + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "503": { + "$ref": "#/components/responses/503" + }, + "default": { + "$ref": "#/components/responses/default" + } + } + }, + "delete": { + "summary": "Deletes an already existing transaction based on transaction ID", + "tags": [ + "PFD Management API Transaction level DELETE Operation" + ], + "responses": { + "204": { + "description": "No Content. The transaction was deleted successfully. The payload body shall be empty." + }, + "400": { + "$ref": "#/components/responses/400" + }, + "401": { + "$ref": "#/components/responses/401" + }, + "403": { + "$ref": "#/components/responses/403" + }, + "404": { + "$ref": "#/components/responses/404" + }, + "429": { + "$ref": "#/components/responses/429" + }, + "500": { + "$ref": "#/components/responses/500" + }, + "503": { + "$ref": "#/components/responses/503" + }, + "default": { + "$ref": "#/components/responses/default" + } + } + } + }, + "/{scsAsId}/transactions/{transactionId}/applications/{appId}": { + "parameters": [ + { + "name": "scsAsId", + "in": "path", + "description": "Identifier of the SCS/AS as defined in subclause subclause 5.2.4 of 3GPP TS 29.122.", + "required": true, + "schema": { + "type": "string" + } + }, + { + "name": "transactionId", + "in": "path", + "description": "Transaction ID", + "required": true, + "schema": { + "type": "string" + } + }, + { + "name": "appId", + "in": "path", + "description": "Identifier of the application", + "required": true, + "schema": { + "type": "string" + } + } + ], + "get": { + "summary": "Reads PFD data for an application based on transaction ID and application ID", + "tags": [ + "PFD Management API Application level GET Operation" + ], + "responses": { + "200": { + "description": "OK. 
The application information related to the request URI is returned.", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/PfdData" + } + } + } + }, + "400": { + "$ref": "#/components/responses/400" + }, + "401": { + "$ref": "#/components/responses/401" + }, + "403": { + "$ref": "#/components/responses/403" + }, + "404": { + "$ref": "#/components/responses/404" + }, + "406": { + "$ref": "#/components/responses/406" + }, + "429": { + "$ref": "#/components/responses/429" + }, + "500": { + "$ref": "#/components/responses/500" + }, + "503": { + "$ref": "#/components/responses/503" + }, + "default": { + "$ref": "#/components/responses/default" + } + } + }, + "put": { + "summary": "Replaces PFD data for an application based on transaction ID and application ID", + "tags": [ + "PFD Management API Application level PUT Operation" + ], + "requestBody": { + "required": true, + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/PfdData" + } + } + }, + "description": "Change information in application." + }, + "responses": { + "200": { + "description": "OK. The application resource was modified successfully. The SCEF shall return an updated application resource in the response payload body.", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/PfdData" + } + } + } + }, + "400": { + "$ref": "#/components/responses/400" + }, + "401": { + "$ref": "#/components/responses/401" + }, + "403": { + "description": "The PFDs for the application were not updated successfully.", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/PfdReport" + } + }, + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "404": { + "$ref": "#/components/responses/404" + }, + "409": { + "description": "The PFDs for the application were not updated successfully.", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/PfdReport" + } + }, + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "411": { + "$ref": "#/components/responses/411" + }, + "413": { + "$ref": "#/components/responses/413" + }, + "415": { + "$ref": "#/components/responses/415" + }, + "429": { + "$ref": "#/components/responses/429" + }, + "500": { + "description": "The PFDs for the application were not updated successfully.", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/PfdReport" + } + }, + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "503": { + "$ref": "#/components/responses/503" + }, + "default": { + "$ref": "#/components/responses/default" + } + } + }, + "patch": { + "summary": "Updates PFD data for an application based on transaction ID and application ID", + "tags": [ + "PFD Management API Application level PATCH Operation" + ], + "requestBody": { + "required": true, + "content": { + "application/merge-patch+json": { + "schema": { + "$ref": "#/components/schemas/PfdData" + } + } + }, + "description": "Change information in PFD management transaction." + }, + "responses": { + "200": { + "description": "OK. The transaction was modified successfully. 
The SCEF shall return an updated transaction in the response payload body.", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/PfdData" + } + } + } + }, + "400": { + "$ref": "#/components/responses/400" + }, + "401": { + "$ref": "#/components/responses/401" + }, + "403": { + "description": "The PFDs for the application were not updated successfully.", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/PfdReport" + } + }, + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "404": { + "$ref": "#/components/responses/404" + }, + "409": { + "description": "The PFDs for the application were not updated successfully.", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/PfdReport" + } + }, + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "411": { + "$ref": "#/components/responses/411" + }, + "413": { + "$ref": "#/components/responses/413" + }, + "415": { + "$ref": "#/components/responses/415" + }, + "429": { + "$ref": "#/components/responses/429" + }, + "500": { + "description": "The PFDs for the application were not updated successfully.", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/PfdReport" + } + }, + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "503": { + "$ref": "#/components/responses/503" + }, + "default": { + "$ref": "#/components/responses/default" + } + } + }, + "delete": { + "summary": "Deletes PFD data for an application based on transaction ID and application ID", + "tags": [ + "PFD Management API Application level DELETE Operation" + ], + "responses": { + "204": { + "description": "No Content. The application was deleted successfully. The payload body shall be empty." 
+ }, + "400": { + "$ref": "#/components/responses/400" + }, + "401": { + "$ref": "#/components/responses/401" + }, + "403": { + "$ref": "#/components/responses/403" + }, + "404": { + "$ref": "#/components/responses/404" + }, + "429": { + "$ref": "#/components/responses/429" + }, + "500": { + "$ref": "#/components/responses/500" + }, + "503": { + "$ref": "#/components/responses/503" + }, + "default": { + "$ref": "#/components/responses/default" + } + } + } + } + }, + "components": { + "securitySchemes": { + "oAuth2ClientCredentials": { + "type": "oauth2", + "flows": { + "clientCredentials": { + "tokenUrl": "{tokenUrl}", + "scopes": {} + } + } + } + }, + "responses": { + "400": { + "description": "Bad request", + "content": { + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "401": { + "description": "Unauthorized", + "content": { + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "403": { + "description": "Forbidden", + "content": { + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "404": { + "description": "Not Found", + "content": { + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "406": { + "description": "Not Acceptable", + "content": { + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "409": { + "description": "Conflict", + "content": { + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "411": { + "description": "Length Required", + "content": { + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "412": { + "description": "Precondition Failed", + "content": { + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "413": { + "description": "Payload Too Large", + "content": { + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "414": { + "description": "URI Too Long", + "content": { + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "415": { + "description": "Unsupported Media Type", + "content": { + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "429": { + "description": "Too Many Requests", + "content": { + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "500": { + "description": "Internal Server Error", + "content": { + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "503": { + "description": "Service Unavailable", + "content": { + "application/problem+json": { + "schema": { + "$ref": "#/components/schemas/ProblemDetails" + } + } + } + }, + "default": { + "description": "Generic Error" + } + }, + "schemas": { + "DurationSec": { + "type": "integer", + "minimum": 0, + "description": "Unsigned integer identifying a period of time in units of seconds." 
+ }, + "DurationSecRm": { + "type": "integer", + "minimum": 0, + "description": "Unsigned integer identifying a period of time in units of seconds with \"nullable=true\" property.", + "nullable": true + }, + "DurationSecRo": { + "type": "integer", + "minimum": 0, + "description": "Unsigned integer identifying a period of time in units of seconds with \"readOnly=true\" property.", + "readOnly": true + }, + "SupportedFeatures": { + "type": "string", + "pattern": "^[A-Fa-f0-9]*$" + }, + "Link": { + "type": "string", + "description": "string formatted according to IETF RFC 3986 identifying a referenced resource." + }, + "Uri": { + "type": "string", + "description": "string providing an URI formatted according to IETF RFC 3986." + }, + "ProblemDetails": { + "type": "object", + "properties": { + "type": { + "$ref": "#/components/schemas/Uri" + }, + "title": { + "type": "string", + "description": "A short, human-readable summary of the problem type. It should not change from occurrence to occurrence of the problem." + }, + "status": { + "type": "integer", + "description": "The HTTP status code for this occurrence of the problem." + }, + "detail": { + "type": "string", + "description": "A human-readable explanation specific to this occurrence of the problem." + }, + "instance": { + "$ref": "#/components/schemas/Uri" + }, + "cause": { + "type": "string", + "description": "A machine-readable application error cause specific to this occurrence of the problem. This IE should be present and provide application-related error information, if available." + }, + "invalidParams": { + "type": "array", + "items": { + "$ref": "#/components/schemas/InvalidParam" + }, + "minItems": 1, + "description": "Description of invalid parameters, for a request rejected due to invalid parameters." + } + } + }, + "InvalidParam": { + "type": "object", + "properties": { + "param": { + "type": "string", + "description": "Attribute's name encoded as a JSON Pointer, or header's name." + }, + "reason": { + "type": "string", + "description": "A human-readable reason, e.g. \"must be a positive integer\"." + } + }, + "required": [ + "param" + ] + }, + "PfdManagement": { + "type": "object", + "properties": { + "self": { + "$ref": "#/components/schemas/Link" + }, + "supportedFeatures": { + "$ref": "#/components/schemas/SupportedFeatures" + }, + "pfdDatas": { + "type": "object", + "additionalProperties": { + "$ref": "#/components/schemas/PfdData" + }, + "minProperties": 1, + "description": "Each element uniquely identifies the PFDs for an external application identifier. Each element is identified in the map via an external application identifier as key. The response shall include successfully provisioned PFD data of application(s)." + }, + "pfdReports": { + "type": "object", + "additionalProperties": { + "$ref": "#/components/schemas/PfdReport" + }, + "minProperties": 1, + "description": "Supplied by the SCEF and contains the external application identifiers for which PFD(s) are not added or modified successfully. The failure reason is also included. 
Each element provides the related information for one or more external application identifier(s) and is identified in the map via the failure identifier as key.", + "readOnly": true + } + }, + "required": [ + "pfdDatas" + ] + }, + "PfdData": { + "type": "object", + "properties": { + "externalAppId": { + "type": "string", + "description": "Each element uniquely external application identifier" + }, + "self": { + "$ref": "#/components/schemas/Link" + }, + "pfds": { + "type": "object", + "additionalProperties": { + "$ref": "#/components/schemas/Pfd" + }, + "description": "Contains the PFDs of the external application identifier. Each PFD is identified in the map via a key containing the PFD identifier." + }, + "allowedDelay": { + "$ref": "#/components/schemas/DurationSecRm" + }, + "cachingTime": { + "$ref": "#/components/schemas/DurationSecRo" + } + }, + "required": [ + "externalAppId", + "pfds" + ] + }, + "Pfd": { + "type": "object", + "properties": { + "pfdId": { + "type": "string", + "description": "Identifies a PDF of an application identifier." + }, + "flowDescriptions": { + "type": "array", + "items": { + "type": "string" + }, + "minItems": 1, + "description": "Represents a 3-tuple with protocol, server ip and server port for UL/DL application traffic. The content of the string has the same encoding as the IPFilterRule AVP value as defined in IETF RFC 6733." + }, + "urls": { + "type": "array", + "items": { + "type": "string" + }, + "minItems": 1, + "description": "Indicates a URL or a regular expression which is used to match the significant parts of the URL." + }, + "domainNames": { + "type": "array", + "items": { + "type": "string" + }, + "minItems": 1, + "description": "Indicates an FQDN or a regular expression as a domain name matching criteria." + } + }, + "required": [ + "pfdId" + ] + }, + "PfdReport": { + "type": "object", + "properties": { + "externalAppIds": { + "type": "array", + "items": { + "type": "string" + }, + "minItems": 1, + "description": "Identifies the external application identifier(s) which PFD(s) are not added or modified successfully" + }, + "failureCode": { + "$ref": "#/components/schemas/FailureCode" + }, + "cachingTime": { + "$ref": "#/components/schemas/DurationSec" + } + }, + "required": [ + "externalAppIds", + "failureCode" + ] + }, + "FailureCode": { + "anyOf": [ + { + "type": "string", + "enum": [ + "MALFUNCTION", + "RESOURCE_LIMITATION", + "SHORT_DELAY", + "APP_ID_DUPLICATED", + "OTHER_REASON" + ] + }, + { + "type": "string", + "description": "This string provides forward-compatibility with future extensions to the enumeration but is not used to encode content defined in the present version of this API.\n" + } + ], + "description": "Possible values are - MALFUNCTION: This value indicates that something functions wrongly in PFD provisioning or the PFD provisioning does not function at all. - RESOURCE_LIMITATION: This value indicates there is resource limitation for PFD storage. - SHORT_DELAY: This value indicates that the allowed delay is too short and PFD(s) are not stored. - APP_ID_DUPLICATED: The received external application identifier(s) are already provisioned. 
- OTHER_REASON: Other reason unspecified.\n" + } + } + } +} \ No newline at end of file diff --git a/schema/nef/nef_pfd_management_openapi.yaml b/schema/nef/nef_pfd_management_openapi.yaml new file mode 100644 index 00000000..a1edd704 --- /dev/null +++ b/schema/nef/nef_pfd_management_openapi.yaml @@ -0,0 +1,693 @@ +# SPDX-License-Identifier: Apache-2.0 +# Copyright (c) 2020 Intel Corporation +# The source of this file is from 3GPP 29.522 Release 15 version 3 +# taken from http://www.3gpp.org/ftp/Specs/archive/29_series/29.522/ +openapi: 3.0.0 +info: + title: 3gpp-pfd-management + version: "1.0.0" +externalDocs: + description: 3GPP TS 29.122 V15.3.0 T8 reference point for Northbound APIs + url: 'http://www.3gpp.org/ftp/Specs/archive/29_series/29.122/' +security: + - {} + - oAuth2ClientCredentials: [] +servers: + - url: '{apiRoot}/3gpp-pfd-management/v1' + variables: + apiRoot: + default: https://example.com + description: apiRoot as defined in subclause 5.2.4 of 3GPP TS 29.122. +paths: + /{scsAsId}/transactions: + parameters: + - name: scsAsId + in: path + description: Identifier of the SCS/AS as defined in subclause subclause 5.2.4 of 3GPP TS 29.122. + required: true + schema: + type: string + get: + summary: read all the PFD transactions for SCS/AS + tags: + - PFD Management API SCS/AS level GET operation + responses: + '200': + description: OK. All transactions related to the request URI are returned. + content: + application/json: + schema: + type: array + items: + $ref: '#/components/schemas/PfdManagement' + '400': + $ref: '#/components/responses/400' + '401': + $ref: '#/components/responses/401' + '403': + $ref: '#/components/responses/403' + '404': + $ref: '#/components/responses/404' + '406': + $ref: '#/components/responses/406' + '429': + $ref: '#/components/responses/429' + '500': + $ref: '#/components/responses/500' + '503': + $ref: '#/components/responses/503' + default: + $ref: '#/components/responses/default' + post: + summary: Creates a new PFD Management resource + tags: + - PFD Management API Transaction level POST Operation + requestBody: + required: true + content: + application/json: + schema: + $ref: '#/components/schemas/PfdManagement' + description: Create a new transaction for PFD management. + responses: + '201': + description: Created. The transaction was created successfully. The SCEF shall return the created transaction in the response payload body. PfdReport may be included to provide detailed failure information for some applications. + content: + application/json: + schema: + $ref: '#/components/schemas/PfdManagement' + headers: + Location: + description: 'Contains the URI of the newly created resource' + required: true + schema: + type: string + '400': + $ref: '#/components/responses/400' + '401': + $ref: '#/components/responses/401' + '403': + $ref: '#/components/responses/403' + '404': + $ref: '#/components/responses/404' + '411': + $ref: '#/components/responses/411' + '413': + $ref: '#/components/responses/413' + '415': + $ref: '#/components/responses/415' + '429': + $ref: '#/components/responses/429' + '500': + description: The PFDs for all applications were not created successfully. PfdReport is included with detailed information. 
+ content: + application/json: + schema: + type: array + items: + $ref: '#/components/schemas/PfdReport' + minItems: 1 + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + '503': + $ref: '#/components/responses/503' + default: + $ref: '#/components/responses/default' + /{scsAsId}/transactions/{transactionId}: + parameters: + - name: scsAsId + in: path + description: Identifier of the SCS/AS as defined in subclause subclause 5.2.4 of 3GPP TS 29.122. + required: true + schema: + type: string + - name: transactionId + in: path + description: Transaction ID + required: true + schema: + type: string + get: + summary: Reads an active transaction based on the transaction ID + tags: + - PFD Management API Transaction level GET Operation + responses: + '200': + description: OK. The transaction information related to the request URI is returned. + content: + application/json: + schema: + $ref: '#/components/schemas/PfdManagement' + '400': + $ref: '#/components/responses/400' + '401': + $ref: '#/components/responses/401' + '403': + $ref: '#/components/responses/403' + '404': + $ref: '#/components/responses/404' + '406': + $ref: '#/components/responses/406' + '429': + $ref: '#/components/responses/429' + '500': + $ref: '#/components/responses/500' + '503': + $ref: '#/components/responses/503' + default: + $ref: '#/components/responses/default' + put: + summary: Replaces an active transaction based on the transaction ID + tags: + - PFD Management API Transaction level PUT Operation + requestBody: + required: true + content: + application/json: + schema: + $ref: '#/components/schemas/PfdManagement' + description: Change information in PFD management transaction. + responses: + '200': + description: OK. The transaction was modified successfully. The SCEF shall return an updated transaction in the response payload body. + content: + application/json: + schema: + $ref: '#/components/schemas/PfdManagement' + '400': + $ref: '#/components/responses/400' + '401': + $ref: '#/components/responses/401' + '403': + $ref: '#/components/responses/403' + '404': + $ref: '#/components/responses/404' + '411': + $ref: '#/components/responses/411' + '413': + $ref: '#/components/responses/413' + '415': + $ref: '#/components/responses/415' + '429': + $ref: '#/components/responses/429' + '500': + description: The PFDs for all applications were not updated successfully. PfdReport is included with detailed information. + content: + application/json: + schema: + type: array + items: + $ref: '#/components/schemas/PfdReport' + minItems: 1 + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + '503': + $ref: '#/components/responses/503' + default: + $ref: '#/components/responses/default' + delete: + summary: Deletes an already existing transaction based on transaction ID + tags: + - PFD Management API Transaction level DELETE Operation + responses: + '204': + description: No Content. The transaction was deleted successfully. The payload body shall be empty. 
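+          # Editor's note, illustrative only: with the default apiRoot "https://example.com",
+          # an SCS/AS identifier of "af01" and a transaction ID of "10" (both hypothetical),
+          # this operation corresponds to:
+          #   DELETE https://example.com/3gpp-pfd-management/v1/af01/transactions/10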
+ '400': + $ref: '#/components/responses/400' + '401': + $ref: '#/components/responses/401' + '403': + $ref: '#/components/responses/403' + '404': + $ref: '#/components/responses/404' + '429': + $ref: '#/components/responses/429' + '500': + $ref: '#/components/responses/500' + '503': + $ref: '#/components/responses/503' + default: + $ref: '#/components/responses/default' + /{scsAsId}/transactions/{transactionId}/applications/{appId}: + parameters: + - name: scsAsId + in: path + description: Identifier of the SCS/AS as defined in subclause subclause 5.2.4 of 3GPP TS 29.122. + required: true + schema: + type: string + - name: transactionId + in: path + description: Transaction ID + required: true + schema: + type: string + - name: appId + in: path + description: Identifier of the application + required: true + schema: + type: string + get: + summary: Reads PFD data for an application based on transaction ID and application ID + tags: + - PFD Management API Application level GET Operation + responses: + '200': + description: OK. The application information related to the request URI is returned. + content: + application/json: + schema: + $ref: '#/components/schemas/PfdData' + '400': + $ref: '#/components/responses/400' + '401': + $ref: '#/components/responses/401' + '403': + $ref: '#/components/responses/403' + '404': + $ref: '#/components/responses/404' + '406': + $ref: '#/components/responses/406' + '429': + $ref: '#/components/responses/429' + '500': + $ref: '#/components/responses/500' + '503': + $ref: '#/components/responses/503' + default: + $ref: '#/components/responses/default' + put: + summary: Replaces PFD data for an application based on transaction ID and application ID + tags: + - PFD Management API Application level PUT Operation + requestBody: + required: true + content: + application/json: + schema: + $ref: '#/components/schemas/PfdData' + description: Change information in application. + responses: + '200': + description: OK. The application resource was modified successfully. The SCEF shall return an updated application resource in the response payload body. + content: + application/json: + schema: + $ref: '#/components/schemas/PfdData' + '400': + $ref: '#/components/responses/400' + '401': + $ref: '#/components/responses/401' + '403': + description: The PFDs for the application were not updated successfully. + content: + application/json: + schema: + $ref: '#/components/schemas/PfdReport' + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + '404': + $ref: '#/components/responses/404' + '409': + description: The PFDs for the application were not updated successfully. + content: + application/json: + schema: + $ref: '#/components/schemas/PfdReport' + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + '411': + $ref: '#/components/responses/411' + '413': + $ref: '#/components/responses/413' + '415': + $ref: '#/components/responses/415' + '429': + $ref: '#/components/responses/429' + '500': + description: The PFDs for the application were not updated successfully. 
+ content: + application/json: + schema: + $ref: '#/components/schemas/PfdReport' + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + '503': + $ref: '#/components/responses/503' + default: + $ref: '#/components/responses/default' + patch: + summary: Updates PFD data for an application based on transaction ID and application ID + tags: + - PFD Management API Application level PATCH Operation + requestBody: + required: true + content: + application/merge-patch+json: + schema: + $ref: '#/components/schemas/PfdData' + description: Change information in PFD management transaction. + responses: + '200': + description: OK. The transaction was modified successfully. The SCEF shall return an updated transaction in the response payload body. + content: + application/json: + schema: + $ref: '#/components/schemas/PfdData' + '400': + $ref: '#/components/responses/400' + '401': + $ref: '#/components/responses/401' + '403': + description: The PFDs for the application were not updated successfully. + content: + application/json: + schema: + $ref: '#/components/schemas/PfdReport' + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + '404': + $ref: '#/components/responses/404' + '409': + description: The PFDs for the application were not updated successfully. + content: + application/json: + schema: + $ref: '#/components/schemas/PfdReport' + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + '411': + $ref: '#/components/responses/411' + '413': + $ref: '#/components/responses/413' + '415': + $ref: '#/components/responses/415' + '429': + $ref: '#/components/responses/429' + '500': + description: The PFDs for the application were not updated successfully. + content: + application/json: + schema: + $ref: '#/components/schemas/PfdReport' + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + '503': + $ref: '#/components/responses/503' + default: + $ref: '#/components/responses/default' + delete: + summary: Deletes PFD data for an application based on transaction ID and application ID + tags: + - PFD Management API Application level DELETE Operation + responses: + '204': + description: No Content. The application was deleted successfully. The payload body shall be empty. 
+ '400': + $ref: '#/components/responses/400' + '401': + $ref: '#/components/responses/401' + '403': + $ref: '#/components/responses/403' + '404': + $ref: '#/components/responses/404' + '429': + $ref: '#/components/responses/429' + '500': + $ref: '#/components/responses/500' + '503': + $ref: '#/components/responses/503' + default: + $ref: '#/components/responses/default' +components: + securitySchemes: + oAuth2ClientCredentials: + type: oauth2 + flows: + clientCredentials: + tokenUrl: '{tokenUrl}' + scopes: {} + + responses: + '400': + description: Bad request + content: + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + '401': + description: Unauthorized + content: + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + '403': + description: Forbidden + content: + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + '404': + description: Not Found + content: + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + '406': + description: Not Acceptable + content: + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + '409': + description: Conflict + content: + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + '411': + description: Length Required + content: + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + '412': + description: Precondition Failed + content: + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + '413': + description: Payload Too Large + content: + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + '414': + description: URI Too Long + content: + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + '415': + description: Unsupported Media Type + content: + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + '429': + description: Too Many Requests + content: + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + '500': + description: Internal Server Error + content: + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + '503': + description: Service Unavailable + content: + application/problem+json: + schema: + $ref: '#/components/schemas/ProblemDetails' + default: + description: Generic Error + + schemas: + DurationSec: + type: integer + minimum: 0 + description: Unsigned integer identifying a period of time in units of seconds. + DurationSecRm: + type: integer + minimum: 0 + description: Unsigned integer identifying a period of time in units of seconds with "nullable=true" property. + nullable: true + DurationSecRo: + type: integer + minimum: 0 + description: Unsigned integer identifying a period of time in units of seconds with "readOnly=true" property. + readOnly: true + SupportedFeatures: + type: string + pattern: '^[A-Fa-f0-9]*$' + Link: + type: string + description: string formatted according to IETF RFC 3986 identifying a referenced resource. + Uri: + type: string + description: string providing an URI formatted according to IETF RFC 3986. + ProblemDetails: + type: object + properties: + type: + $ref: '#/components/schemas/Uri' + title: + type: string + description: A short, human-readable summary of the problem type. It should not change from occurrence to occurrence of the problem. 
+ status: + type: integer + description: The HTTP status code for this occurrence of the problem. + detail: + type: string + description: A human-readable explanation specific to this occurrence of the problem. + instance: + $ref: '#/components/schemas/Uri' + cause: + type: string + description: A machine-readable application error cause specific to this occurrence of the problem. This IE should be present and provide application-related error information, if available. + invalidParams: + type: array + items: + $ref: '#/components/schemas/InvalidParam' + minItems: 1 + description: Description of invalid parameters, for a request rejected due to invalid parameters. + InvalidParam: + type: object + properties: + param: + type: string + description: Attribute's name encoded as a JSON Pointer, or header's name. + reason: + type: string + description: A human-readable reason, e.g. "must be a positive integer". + required: + - param + + PfdManagement: + type: object + properties: + self: + $ref: '#/components/schemas/Link' + supportedFeatures: + $ref: '#/components/schemas/SupportedFeatures' + pfdDatas: + type: object + additionalProperties: + $ref: '#/components/schemas/PfdData' + minProperties: 1 + description: Each element uniquely identifies the PFDs for an external application identifier. Each element is identified in the map via an external application identifier as key. The response shall include successfully provisioned PFD data of application(s). + pfdReports: + type: object + additionalProperties: + $ref: '#/components/schemas/PfdReport' + minProperties: 1 + description: Supplied by the SCEF and contains the external application identifiers for which PFD(s) are not added or modified successfully. The failure reason is also included. Each element provides the related information for one or more external application identifier(s) and is identified in the map via the failure identifier as key. + readOnly: true + required: + - pfdDatas + PfdData: + type: object + properties: + externalAppId: + type: string + description: Each element uniquely external application identifier + self: + $ref: '#/components/schemas/Link' + pfds: + type: object + additionalProperties: + $ref: '#/components/schemas/Pfd' + description: Contains the PFDs of the external application identifier. Each PFD is identified in the map via a key containing the PFD identifier. + allowedDelay: + $ref: '#/components/schemas/DurationSecRm' + cachingTime: + $ref: '#/components/schemas/DurationSecRo' + required: + - externalAppId + - pfds + Pfd: + type: object + properties: + pfdId: + type: string + description: Identifies a PDF of an application identifier. + flowDescriptions: + type: array + items: + type: string + minItems: 1 + description: Represents a 3-tuple with protocol, server ip and server port for UL/DL application traffic. The content of the string has the same encoding as the IPFilterRule AVP value as defined in IETF RFC 6733. + urls: + type: array + items: + type: string + minItems: 1 + description: Indicates a URL or a regular expression which is used to match the significant parts of the URL. + domainNames: + type: array + items: + type: string + minItems: 1 + description: Indicates an FQDN or a regular expression as a domain name matching criteria. 
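+      # Editor's note, illustrative only (all values are hypothetical): the schemas above
+      # combine into a PfdManagement payload such as:
+      #   {
+      #     "pfdDatas": {
+      #       "app-video": {
+      #         "externalAppId": "app-video",
+      #         "pfds": {
+      #           "pfd1": {
+      #             "pfdId": "pfd1",
+      #             "flowDescriptions": [ "permit out 6 from 198.51.100.10 8080 to any" ],
+      #             "domainNames": [ "video.example.com" ]
+      #           }
+      #         }
+      #       }
+      #     }
+      #   }
+      # where the pfdDatas map is keyed by the external application identifier and each
+      # pfds map is keyed by the PFD identifier, as described above.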
+ required: + - pfdId + PfdReport: + type: object + properties: + externalAppIds: + type: array + items: + type: string + minItems: 1 + description: Identifies the external application identifier(s) which PFD(s) are not added or modified successfully + failureCode: + $ref: '#/components/schemas/FailureCode' + cachingTime: + $ref: '#/components/schemas/DurationSec' + required: + - externalAppIds + - failureCode + FailureCode: + anyOf: + - type: string + enum: + - MALFUNCTION + - RESOURCE_LIMITATION + - SHORT_DELAY + - APP_ID_DUPLICATED + - OTHER_REASON + - type: string + description: > + This string provides forward-compatibility with future + extensions to the enumeration but is not used to encode + content defined in the present version of this API. + description: > + Possible values are + - MALFUNCTION: This value indicates that something functions wrongly in PFD provisioning or the PFD provisioning does not function at all. + - RESOURCE_LIMITATION: This value indicates there is resource limitation for PFD storage. + - SHORT_DELAY: This value indicates that the allowed delay is too short and PFD(s) are not stored. + - APP_ID_DUPLICATED: The received external application identifier(s) are already provisioned. + - OTHER_REASON: Other reason unspecified. diff --git a/schema/nef/nef_traffic_influence_openapi.json b/schema/nef/nef_traffic_influence_openapi.json index 3bc38cbc..c2ad4df3 100644 --- a/schema/nef/nef_traffic_influence_openapi.json +++ b/schema/nef/nef_traffic_influence_openapi.json @@ -9,7 +9,10 @@ "url": "http://www.3gpp.org/ftp/Specs/archive/29_series/29.522/" }, "security": [ - {} + {}, + { + "oAuth2ClientCredentials": [] + } ], "servers": [ { @@ -238,7 +241,7 @@ "put": { "summary": "Updates/replaces an existing subscription resource", "tags": [ - "TrafficInfluence API subscription level PUT Operation" + "TrafficInfluence API Subscription level PUT Operation" ], "requestBody": { "description": "Parameters to update/replace the existing subscription", @@ -288,7 +291,7 @@ "patch": { "summary": "Updates/replaces an existing subscription resource", "tags": [ - "TrafficInfluence API subscription level PATCH Operation" + "TrafficInfluence API Subscription level PATCH Operation" ], "requestBody": { "required": true, @@ -363,6 +366,17 @@ } }, "components": { + "securitySchemes": { + "oAuth2ClientCredentials": { + "type": "oauth2", + "flows": { + "clientCredentials": { + "tokenUrl": "{tokenUrl}", + "scopes": {} + } + } + } + }, "schemas": { "TrafficInfluSub": { "type": "object", @@ -1085,4 +1099,4 @@ } } } -} +} \ No newline at end of file diff --git a/schema/nef/nef_traffic_influence_openapi.yaml b/schema/nef/nef_traffic_influence_openapi.yaml index 766a2dbc..225e1ecb 100644 --- a/schema/nef/nef_traffic_influence_openapi.yaml +++ b/schema/nef/nef_traffic_influence_openapi.yaml @@ -11,6 +11,7 @@ externalDocs: url: 'http://www.3gpp.org/ftp/Specs/archive/29_series/29.522/' security: - {} + - oAuth2ClientCredentials: [] servers: - url: '{apiRoot}/3gpp-traffic-influence/v1' variables: @@ -117,7 +118,7 @@ paths: $ref: '#/components/responses/503' default: $ref: '#/components/responses/default' - + /{afId}/subscriptions/{subscriptionId}: parameters: - name: afId @@ -158,7 +159,7 @@ paths: put: summary: Updates/replaces an existing subscription resource tags: - - TrafficInfluence API subscription level PUT Operation + - TrafficInfluence API Subscription level PUT Operation requestBody: description: Parameters to update/replace the existing subscription required: true @@ -190,7 +191,7 @@ 
paths: patch: summary: Updates/replaces an existing subscription resource tags: - - TrafficInfluence API subscription level PATCH Operation + - TrafficInfluence API Subscription level PATCH Operation requestBody: required: true content: @@ -236,6 +237,14 @@ paths: default: $ref: '#/components/responses/default' components: + securitySchemes: + oAuth2ClientCredentials: + type: oauth2 + flows: + clientCredentials: + tokenUrl: '{tokenUrl}' + scopes: {} + schemas: TrafficInfluSub: type: object diff --git a/schema/pb/eva.proto b/schema/pb/eva.proto index 19e9647e..58a2c749 100644 --- a/schema/pb/eva.proto +++ b/schema/pb/eva.proto @@ -80,6 +80,18 @@ message Application { // (Enhanced App Configuration). This is in Json format - but is at top level // an array of string key-value pairs. Specific keys are defined by their respective features. string EACJsonBlob = 11; + + // CNI configuration for the application + CNIConfiguration cniConf = 12; +} + +// CNIConfiguration stores CNI configuration data. +// CNI specification is available at https://github.com/containernetworking/cni/blob/master/SPEC.md +message CNIConfiguration { + string cniConfig = 1; // CNI configuration in form of a JSON + string interfaceName = 2; // Name of the interface + string path = 3; // CNI's path + string args = 4; // CNI's extra args passed as a CNI_ARGS env variable } message ApplicationID {
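
For reference, the sketch below shows one way a client of the EVA API might populate the new `CNIConfiguration` message added to `eva.proto` above. It is only an illustration: the `pb` import path is hypothetical (the real path depends on where the Go bindings for `eva.proto` are generated), and the bridge/host-local network configuration is just an example of the JSON format defined by the CNI specification referenced in the proto comment, not a configuration mandated by OpenNESS.

```go
package main

import (
	"fmt"

	// Hypothetical import path; substitute the location of the generated eva.proto bindings.
	pb "example.com/openness/eva/pb"
)

func main() {
	// Illustrative CNI network configuration (bridge plugin with host-local IPAM),
	// carried as a JSON string in the cniConfig field.
	cniJSON := `{
  "cniVersion": "0.4.0",
  "name": "example-net",
  "type": "bridge",
  "bridge": "br0",
  "ipam": {
    "type": "host-local",
    "subnet": "10.10.0.0/16"
  }
}`

	cniConf := &pb.CNIConfiguration{
		CniConfig:     cniJSON,           // CNI configuration in JSON form
		InterfaceName: "eth0",            // interface name to be configured in the container
		Path:          "/opt/cni/bin",    // directory searched for CNI plugin binaries
		Args:          "IgnoreUnknown=1", // extra arguments passed via the CNI_ARGS environment variable
	}

	fmt.Println(cniConf.InterfaceName)
}
```

Whichever plugin is chosen, the full network configuration travels as an opaque JSON string in `cniConfig`; only the interface name, the plugin path, and the `CNI_ARGS` value are carried as separate fields.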