192.168.1.1
+```
+where `client_ip` is the IP address of a client connected to the new OVN port (192.168.1.0/24 subnet).
+
+### Traffic rules
+For an On Premises deployment with the OVN CNI dataplane, traffic rules are handled by OVN's ACLs (Access Control Lists).
+> NOTE: Currently, managing the rules is only possible using the `ovn-nbctl` utility.
+
+As per the [`ovn-nbctl`](http://www.openvswitch.org/support/dist-docs/ovn-nbctl.8.html) manual, a new ACL is added using:
+```
+ovn-nbctl
+ [--type={switch | port-group}]
+ [--log] [--meter=meter] [--severity=severity] [--name=name]
+ [--may-exist]
+ acl-add entity direction priority match verdict
+```
+Deleting an ACL:
+```
+ovn-nbctl
+ [--type={switch | port-group}]
+ acl-del entity [direction [priority match]]
+```
+
+Listing ACLs:
+```
+ovn-nbctl
+ [--type={switch | port-group}]
+ acl-list entity
+```
+
+Where:
+* `--type={switch | port-group}` allows specifying the type of the _entity_ if a switch and a port-group share the same name
+* `--log` enables packet logging for the ACL
+ * `--meter=meter` can be used to limit packet logging, _meter_ is created using `ovn-nbctl meter-add`
+ * `--severity=severity` is a log level: `alert, debug, info, notice, warning`
+ * `--name=name` specifies a log name
+* `--may-exist` suppresses the error when creating a duplicate rule
+* `entity` the entity (switch or port-group) to which the ACL will be applied; it can be a UUID or a name, and in the case of OVN CNI it is most likely `cluster-switch`
+* `direction` either `from-lport` or `to-lport`:
+ * `from-lport` for filters on traffic arriving from a logical port (logical switch's ingress)
+ * `to-lport` for filters on traffic forwarded to a logical port (logical switch's egress)
+* `priority` an integer in the range 0 to 32767; the greater the number, the higher the rule's precedence
+* `match` the expression used to match packets
+* `verdict` the action, one of `allow`, `allow-related`, `drop`, `reject`
+
+> NOTE: By default all traffic is allowed. When restricting traffic, remember to allow essential networking flows such as ARP, as in the sketch below.
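+
+For example, ARP towards the application ports could be explicitly allowed as follows (a sketch; the priority `200` is an arbitrary illustrative value):
+```shell
+docker exec ovs-ovn ovn-nbctl acl-add cluster-switch to-lport 200 'arp' allow
+```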
+
+For more information refer to [`ovn-nbctl`](http://www.openvswitch.org/support/dist-docs/ovn-nbctl.8.html) and [`ovn-nb`](http://www.openvswitch.org/support/dist-docs/ovn-nb.5.html) manuals.
+
+#### Example: Block pod's ingress IP traffic but allow ICMP
+
+The following examples use an nginx container, which serves as an HTTP server in the OVN cluster. In the examples, it has the IP address `10.100.0.4` and the application ID `f5bd3404-df38-47e7-8907-4adbc4d24a7f`.
+The second container acts as a client of the server.
+
+First, make sure that there is connectivity between the two containers:
+```shell
+$ ping 10.100.0.4 -w 3
+
+PING 10.100.0.4 (10.100.0.4) 56(84) bytes of data.
+64 bytes from 10.100.0.4: icmp_seq=1 ttl=64 time=0.640 ms
+64 bytes from 10.100.0.4: icmp_seq=2 ttl=64 time=0.580 ms
+64 bytes from 10.100.0.4: icmp_seq=3 ttl=64 time=0.221 ms
+
+--- 10.100.0.4 ping statistics ---
+3 packets transmitted, 3 received, 0% packet loss, time 54ms
+rtt min/avg/max/mdev = 0.221/0.480/0.640/0.185 ms
+
+$ curl 10.100.0.4
+
+<!DOCTYPE html>
+<html>
+<head>
+<title>Welcome to nginx!</title>
+<style>
+    body {
+        width: 35em;
+        margin: 0 auto;
+        font-family: Tahoma, Verdana, Arial, sans-serif;
+    }
+</style>
+</head>
+<body>
+<h1>Welcome to nginx!</h1>
+<p>If you see this page, the nginx web server is successfully installed and
+working. Further configuration is required.</p>
+
+<p>For online documentation and support please refer to
+<a href="http://nginx.org/">nginx.org</a>.<br/>
+Commercial support is available at
+<a href="http://nginx.com/">nginx.com</a>.</p>
+
+<p><em>Thank you for using nginx.</em></p>
+</body>
+</html>
+```
+
+To block the pod's IP traffic but allow ICMP, run the following commands on either the Edge Node or the Edge Controller:
+
+```shell
+$ docker exec ovs-ovn ovn-nbctl acl-add cluster-switch to-lport 100 'outport == "f5bd3404-df38-47e7-8907-4adbc4d24a7f" && ip && icmp' allow-related
+$ docker exec ovs-ovn ovn-nbctl acl-add cluster-switch to-lport 99 'outport == "f5bd3404-df38-47e7-8907-4adbc4d24a7f" && ip' drop
+```
+
+Explanation:
+* `docker exec ovs-ovn` runs the command inside the `ovs-ovn` container, which has access to OVN's northbound database
+* `ovn-nbctl acl-add` adds an ACL rule
+* `cluster-switch` is the switch to which all application containers are connected
+* `to-lport` means that we are adding a rule affecting traffic going from the switch to the logical port (application)
+* `100` and `99` are the priorities; the ICMP rule has the greater priority so that it is considered before the drop rule for all IP traffic
+* `'outport == "f5bd3404-df38-47e7-8907-4adbc4d24a7f" && ip && icmp'` is the match string; the rule is executed for IP and ICMP traffic going out via the switch's port of that name (which is connected to the container's internal port)
+* `allow-related` means that both the incoming request and the outgoing response are allowed
+* `drop` drops all matching traffic
+
+Result of the ping and curl after applying these two rules:
+```shell
+$ ping 10.100.0.4 -w 3
+
+PING 10.100.0.4 (10.100.0.4) 56(84) bytes of data.
+64 bytes from 10.100.0.4: icmp_seq=1 ttl=64 time=2.48 ms
+64 bytes from 10.100.0.4: icmp_seq=2 ttl=64 time=0.852 ms
+^C
+--- 10.100.0.4 ping statistics ---
+2 packets transmitted, 2 received, 0% packet loss, time 3ms
+rtt min/avg/max/mdev = 0.852/1.664/2.477/0.813 ms
+
+$ curl --connect-timeout 10 10.100.0.4
+curl: (28) Connection timed out after 10001 milliseconds
+```
+
+If we run the command `ovn-nbctl acl-list cluster-switch` we'll receive a list of ACLs:
+```shell
+$ docker exec ovs-ovn ovn-nbctl acl-list cluster-switch
+
+ to-lport 1000 (ip4.src==10.20.0.0/16) allow-related
+ to-lport 100 (outport == "f5bd3404-df38-47e7-8907-4adbc4d24a7f" && ip && icmp) allow-related
+ to-lport 99 (outport == "f5bd3404-df38-47e7-8907-4adbc4d24a7f" && ip) drop
+```
+
+Now, let's remove the rule for ICMP (note that the priority and match must be the same as when the rule was added):
+```shell
+$ docker exec ovs-ovn ovn-nbctl acl-del cluster-switch to-lport 100 'outport == "f5bd3404-df38-47e7-8907-4adbc4d24a7f" && ip && icmp'
+
+$ docker exec ovs-ovn ovn-nbctl acl-list cluster-switch
+
+ to-lport 1000 (ip4.src==10.20.0.0/16) allow-related
+ to-lport 99 (outport == "f5bd3404-df38-47e7-8907-4adbc4d24a7f" && ip) drop
+```
+and ping once again to see that the traffic is now dropped:
+```shell
+$ ping 10.100.0.4 -w 3
+
+PING 10.100.0.4 (10.100.0.4) 56(84) bytes of data.
+
+--- 10.100.0.4 ping statistics ---
+3 packets transmitted, 0 received, 100% packet loss, time 36ms
+```
+
+## Summary
OpenNESS is built with a microservices architecture. Depending on the deployment, there may be a requirement to service pure IP traffic and configure the dataplane using standard SDN-based tools. OpenNESS addresses this requirement by providing OVS as a dataplane in place of NTS, without changing the APIs from an end-user perspective.
diff --git a/doc/dataplane/openness-userspace-cni.md b/doc/dataplane/openness-userspace-cni.md
new file mode 100644
index 00000000..39869961
--- /dev/null
+++ b/doc/dataplane/openness-userspace-cni.md
@@ -0,0 +1,100 @@
+```text
+SPDX-License-Identifier: Apache-2.0
+Copyright (c) 2019 Intel Corporation
+```
+
+- [Userspace CNI](#userspace-cni)
+ - [Setup Userspace CNI](#setup-userspace-cni)
+ - [HugePages configuration](#hugepages-configuration)
+ - [Pod deployment](#pod-deployment)
+ - [Virtual interface usage](#virtual-interface-usage)
+
+# Userspace CNI
+
+Userspace CNI is a Container Network Interface Kubernetes plugin that was designed to simplify the process of deploying DPDK-based applications in Kubernetes pods. The plugin uses Kubernetes and Multus CNI's CRD to provide a pod with a virtual DPDK-enabled Ethernet port. In this document you can find details on how to install OpenNESS with Userspace CNI support and how to use its main features.
+
+## Setup Userspace CNI
+
+OpenNESS for Network Edge has been integrated with Userspace CNI to allow users to easily run DPDK-based applications inside Kubernetes pods. To install OpenNESS Network Edge with Userspace CNI support, please add the value `userspace` to the `kubernetes_cnis` variable in `group_vars/all.yml` and set the `ovs_dpdk` variable in `roles/kubernetes/cni/kubeovn/common/defaults/main.yml` to `true`:
+
+```yaml
+# group_vars/all.yml
+kubernetes_cnis:
+- kubeovn
+- userspace
+```
+
+```yaml
+# roles/kubernetes/cni/kubeovn/common/defaults/main.yml
+ovs_dpdk: true
+```
+
+## HugePages configuration
+
+Please be aware that DPDK apps require a specific amount of HugePages to be enabled. By default the Ansible scripts enable 1024 2 MB HugePages in the system and then start OVS-DPDK with 1 GB of those HugePages. If you would like to change these settings to reflect your specific requirements, please set the Ansible variables as defined in the example below. This example enables four 1 GB HugePages and assigns 1 GB to OVS-DPDK, leaving 3 pages for DPDK applications running in the pods.
+
+```yaml
+# network_edge.yml
+- hosts: controller_group
+ vars:
+ hugepage_amount: "4"
+
+- hosts: edgenode_group
+ vars:
+ hugepage_amount: "4"
+```
+
+```yaml
+# roles/machine_setup/grub/defaults/main.yml
+hugepage_size: "1G"
+```
+
+>The variable `hugepage_amount` found in `roles/machine_setup/grub/defaults/main.yml` can be left at its default value of `5000`, as this value will be overridden by the `hugepage_amount` values set earlier in `network_edge.yml`.
+
+```yaml
+# roles/kubernetes/cni/kubeovn/common/defaults/main.yml
+ovs_dpdk_hugepage_size: "1Gi" # This is the size of a single hugepage to be used by DPDK. Can be 1Gi or 2Mi.
+ovs_dpdk_hugepages: "1Gi" # This is the overall amount of hugepages available to DPDK.
+```
+
+## Pod deployment
+
+To deploy a pod with a DPDK interface, please create a pod with `hugepages` mounted to `/dev/hugepages`, the host directory `/var/run/openvswitch/` (with the mandatory trailing slash character) mounted into the pod with the volume name `shared-dir` (the name `shared-dir` is mandatory), and the `userspace-openness` network annotation. You can find an example pod definition with two DPDK ports below:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: userspace-example
+ annotations:
+ k8s.v1.cni.cncf.io/networks: userspace-openness, userspace-openness
+spec:
+ containers:
+ - name: userspace-example
+ image: image-name
+ imagePullPolicy: Never
+ securityContext:
+ privileged: true
+ volumeMounts:
+ - mountPath: /ovs
+ name: shared-dir
+ - mountPath: /dev/hugepages
+ name: hugepages
+ resources:
+ requests:
+ memory: 1Gi
+ limits:
+ hugepages-1Gi: 2Gi
+ command: ["sleep", "infinity"]
+ volumes:
+ - name: shared-dir
+ hostPath:
+ path: /var/run/openvswitch/
+ - name: hugepages
+ emptyDir:
+ medium: HugePages
+```
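+
+Assuming the definition above is saved as `userspace-example.yaml` (the file name is illustrative), the pod can be created and its mounted socket directory inspected with:
+```shell
+kubectl apply -f userspace-example.yaml
+kubectl exec -it userspace-example -- ls /ovs
+```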
+
+## Virtual interface usage
+
+Socket files for the virtual interfaces generated by Userspace CNI are created on the host machine in the `/var/run/openvswitch` directory. This directory has to be mounted into your pod by a volume with the **obligatory name `shared-dir`** (in our [example pod definition](#pod-deployment) `/var/run/openvswitch` is mounted into the pod as `/ovs`). You can then use the sockets available in your mount-point directory in the DPDK-enabled application deployed inside the pod, as sketched below. You can find a further example in the [Userspace CNI documentation](https://github.com/intel/userspace-cni-network-plugin#testing-with-dpdk-testpmd-application).
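+
+For illustration, a minimal sketch of attaching DPDK's `testpmd` to such a socket from inside the pod (the socket file name `eth0` below is an assumption; check your mount-point directory, `/ovs` in the example above, for the actual name):
+```shell
+# Attach testpmd to a vhost-user socket via DPDK's virtio_user PMD
+./testpmd -l 1-2 --no-pci --vdev=virtio_user0,path=/ovs/eth0 -- -i
+```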
diff --git a/doc/dataplane/ovn_images/ovncni_cluster.png b/doc/dataplane/ovn_images/ovncni_cluster.png
new file mode 100644
index 00000000..9aa04456
Binary files /dev/null and b/doc/dataplane/ovn_images/ovncni_cluster.png differ
diff --git a/doc/enhanced-platform-awareness/multussriov-images/sriov-onprem-container.png b/doc/enhanced-platform-awareness/multussriov-images/sriov-onprem-container.png
new file mode 100644
index 00000000..3ba72754
Binary files /dev/null and b/doc/enhanced-platform-awareness/multussriov-images/sriov-onprem-container.png differ
diff --git a/doc/enhanced-platform-awareness/multussriov-images/sriov-onprem-vm.png b/doc/enhanced-platform-awareness/multussriov-images/sriov-onprem-vm.png
new file mode 100644
index 00000000..3c7b6f66
Binary files /dev/null and b/doc/enhanced-platform-awareness/multussriov-images/sriov-onprem-vm.png differ
diff --git a/doc/enhanced-platform-awareness/nfd-images/nfd3_onp_app.png b/doc/enhanced-platform-awareness/nfd-images/nfd3_onp_app.png
new file mode 100644
index 00000000..10ad35f4
Binary files /dev/null and b/doc/enhanced-platform-awareness/nfd-images/nfd3_onp_app.png differ
diff --git a/doc/enhanced-platform-awareness/openness-bios.md b/doc/enhanced-platform-awareness/openness-bios.md
index 31c37d0b..4fa46beb 100644
--- a/doc/enhanced-platform-awareness/openness-bios.md
+++ b/doc/enhanced-platform-awareness/openness-bios.md
@@ -13,27 +13,27 @@ Copyright (c) 2019 Intel Corporation
- [Usage](#usage)
- [Reference](#reference)
-## Overview
+## Overview
-BIOS and Firmware are the fundamental platform configurations of a typical Commercial off-the-shelf (COTS) platform. BIOS and Firmware configuration has very low level configurations that can determine the environment that will be available for the Network Functions or Applications. A typical BIOS configuration that would be of relevance for a network function or application may include CPU configuration, Cache and Memory configuration, PCIe Configuration, Power and Performance configuration, etc. Some Network Functions and Applications need certain BIOS and Firmware settings to be configured in a specific way for optimal functionality and behavior.
+BIOS and Firmware are the fundamental platform configurations of a typical Commercial off-the-shelf (COTS) platform. BIOS and Firmware configuration has very low level configurations that can determine the environment that will be available for the Network Functions or Applications. A typical BIOS configuration that would be of relevance for a network function or application may include CPU configuration, Cache and Memory configuration, PCIe Configuration, Power and Performance configuration, etc. Some Network Functions and Applications need certain BIOS and Firmware settings to be configured in a specific way for optimal functionality and behavior.
-## Usecase for edge
+## Usecase for edge
-Let's take an AI Inference Application as an example that uses an Accelerator like an FPGA. To get optimal performance, when this application is deployed by the Resource Orchestrator, it is recommended to place the Application on the same Node and CPU Socket to which the Accelerator is attached. To ensure this, NUMA, PCIe Memory mapped IO and Cache configuration needs to be configured optimally. Similarly for a Network Function like a Base station or Core network instruction set, cache and hyper threading play an important role in the performance and density.
+Let's take an AI Inference Application as an example that uses an Accelerator like an FPGA. To get optimal performance, when this application is deployed by the Resource Orchestrator, it is recommended to place the Application on the same Node and CPU Socket to which the Accelerator is attached. To ensure this, NUMA, PCIe Memory mapped IO and Cache configuration needs to be configured optimally. Similarly for a Network Function like a Base station or Core network, instruction set, cache and hyper-threading play an important role in the performance and density.
-OpenNESS provides a reference implementation demonstrating how to configure the low level platform settings like BIOS and Firmware and the capability to check if they are configured as per a required profile. To implement this feature, OpenNESS uses the Intel® System Configuration Utility. The Intel® System Configuration Utility (Syscfg) is a command-line utility that can be used to save and restore BIOS and firmware settings to a file or to set and display individual settings.
+OpenNESS provides a reference implementation demonstrating how to configure the low level platform settings like BIOS and Firmware and the capability to check if they are configured as per a required profile. To implement this feature, OpenNESS uses the Intel® System Configuration Utility. The Intel® System Configuration Utility (Syscfg) is a command-line utility that can be used to save and restore BIOS and firmware settings to a file or to set and display individual settings.
-> Important Note: Intel® System Configuration Utility is only supported on certain Intel® Server platforms. Please refer to the Intel® System Configuration Utility user guide for the supported server products.
+> Important Note: Intel® System Configuration Utility is only supported on certain Intel® Server platforms. Please refer to the Intel® System Configuration Utility user guide for the supported server products.
-> Important Note: Intel® System Configuration Utility is not intended for and should not be used on any non-Intel Server Products.
+> Important Note: Intel® System Configuration Utility is not intended for and should not be used on any non-Intel Server Products.
-The OpenNESS Network Edge implementation goes a step further and provides an automated process using Kubernetes to save and restore BIOS and firmware settings. To do this, the Intel® System Configuration Utility is packaged as a Pod deployed as a Kubernetes job that uses ConfigMap. This ConfigMap provides a mount point that has the BIOS and Firmware profile that needs to be used for the Worker node. A platform reboot is required for the BIOS and Firmware configuration to be applied. To enable this, the BIOS and Firmware Job is deployed as a privileged Pod.
+The OpenNESS Network Edge implementation goes a step further and provides an automated process using Kubernetes to save and restore BIOS and firmware settings. To do this, the Intel® System Configuration Utility is packaged as a Pod deployed as a Kubernetes job that uses ConfigMap. This ConfigMap provides a mount point that has the BIOS and Firmware profile that needs to be used for the Worker node. A platform reboot is required for the BIOS and Firmware configuration to be applied. To enable this, the BIOS and Firmware Job is deployed as a privileged Pod.
![BIOS and Firmware configuration on OpenNESS](biosfw-images/openness_biosfw.png)
_Figure - BIOS and Firmware configuration on OpenNESS_
-## Details: BIOS and Firmware Configuration on OpenNESS Network Edge
+## Details: BIOS and Firmware Configuration on OpenNESS Network Edge
BIOS and Firmware Configuration feature is wrapped in a kubectl plugin.
Knowledge of Intel SYSCFG utility is required for usage.
@@ -42,11 +42,10 @@ Intel SYSCFG must be manually downloaded by user after accepting the license.
### Setup
In order to enable BIOSFW following steps need to be performed:
-1. `biosfw/master` role needs to be uncommented in OpenNESS Experience Kits' `ne_controller.yml`
-2. SYSCFG package must be downloaded and stored inside OpenNESS Experience Kits' `biosfw/` directory as a syscfg_package.zip, i.e.
+1. SYSCFG package must be downloaded and stored inside OpenNESS Experience Kits' `biosfw/` directory as `syscfg_package.zip`, i.e.
`openness-experience-kits/biosfw/syscfg_package.zip`
-3. `biosfw/worker` role needs to be uncommented in OpenNESS Experience Kits' `ne_node.yml`
-4. OpenNESS Experience Kits' NetworkEdge deployment for both controller and nodes can be started.
+2. `biosfw/master` and `biosfw/worker` roles must be uncommented in OpenNESS Experience Kits' `network_edge.yml`
+3. OpenNESS Experience Kits' NetworkEdge deployment for both controller and nodes can be started.
### Usage
@@ -61,4 +60,3 @@ In order to enable BIOSFW following steps need to be performed:
## Reference
- [Intel Save and Restore System Configuration Utility (SYSCFG)](https://downloadcenter.intel.com/download/28713/Save-and-Restore-System-Configuration-Utility-SYSCFG-)
-
diff --git a/doc/enhanced-platform-awareness/openness-dedicated-core.md b/doc/enhanced-platform-awareness/openness-dedicated-core.md
index e74a9fc9..4d018b0f 100644
--- a/doc/enhanced-platform-awareness/openness-dedicated-core.md
+++ b/doc/enhanced-platform-awareness/openness-dedicated-core.md
@@ -1,6 +1,6 @@
```text
SPDX-License-Identifier: Apache-2.0
-Copyright (c) 2019 Intel Corporation
+Copyright (c) 2019-2020 Intel Corporation
```
# Dedicated CPU core for workload support in OpenNESS
@@ -10,9 +10,10 @@ Copyright (c) 2019 Intel Corporation
- [Details - CPU Manager support in OpenNESS](#details---cpu-manager-support-in-openness)
- [Setup](#setup)
- [Usage](#usage)
+ - [OnPremises Usage](#onpremises-usage)
- [Reference](#reference)
-## Overview
+## Overview
Multi-core COTS platforms are typical in any cloud or Cloudnative deployment. Parallel processing on multiple cores helps achieve better density. On a Multi-core platform, one challenge for applications and network functions that are latency and throughput density is deterministic compute. To achieve deterministic compute allocating dedicated resources is important. Dedicated resource allocation avoids interference with other applications (Noisy Neighbor). When deploying on a cloud native platform, applications are deployed as PODs, therefore, providing required information to the container orchestrator on dedicated CPU cores is key. CPU manager allows provisioning of a POD to dedicated cores.
@@ -32,7 +33,7 @@ _Figure - CPU Manager support on OpenNESS_
The following section outlines some considerations for using CPU Manager(CMK):
-- If the workload already uses a threading library (e.g. pthread) and uses set affinity like APIs then CMK might not be needed. For such workloads, in order to provide cores to use for deployment, Kubernetes ConfigMaps is the recommended methodology. ConfigMaps can be used to pass the CPU core mask to the application to use.
+- If the workload already uses a threading library (e.g. pthread) and uses set affinity like APIs then CMK might not be needed. For such workloads, in order to provide cores to use for deployment, Kubernetes ConfigMaps is the recommended methodology. ConfigMaps can be used to pass the CPU core mask to the application to use.
- The workload is a medium to long-lived process with inter-arrival times of the order of ones to tens of seconds or greater.
- After a workload has started executing, there is no need to dynamically update its CPU assignments.
- Machines running workloads explicitly isolated by cmk must be guarded against other workloads that do not consult the cmk tool chain. The recommended way to do this is for the operator to taint the node. The provided cluster-init subcommand automatically adds such a taint.
@@ -56,21 +57,20 @@ CMK documentation available on github includes:
**Edge Controller / Kubernetes master**
-1. Configure Edge Controller in Network Edge mode using `ne_controller.yml`, following roles must be enabled kubernetes/master, kubeovn/master and cmk/master.
+1. Configure the Edge Controller in Network Edge mode using `network_edge.yml`; the following roles must be enabled: `kubernetes/master`, `kubernetes/cni` (both enabled by default) and `cmk/master` (disabled by default).
2. CMK is enabled with following default values of parameters in `roles/cmk/master/defaults/main.yml` (adjust the values if needed):
- `cmk_num_exclusive_cores` set to `4`
- `cmk_num_shared_cores` set to `1`
- `cmk_host_list` set to `node01,node02` (it should contain comma separated list of nodes' hostnames).
-3. Deploy the controller with deploy_ne_controller.sh.
+3. Deploy the controller with `deploy_ne.sh controller`.
**Edge Node / Kubernetes worker**
-1. Configure Edge Node in Network Edge mode using ne_node.yml, following roles must be enabled kubernetes/worker, kubeovn/worker and cmk/worker.
-2. To change core isolation and tuned realtime profile settings edit `os_kernel_rt_tuned_vars` in `roles/os_kernelrt/defaults/main.yml`.
-The changes will affect all edge nodes in the inventory, to set the parameter only for a specific node add the variable `os_kernel_rt_tuned_vars` to host_vars/node-name-in-inventory.yml.
-3. Deploy the node with deploy_ne_node.sh.
+1. Configure the Edge Node in Network Edge mode using `network_edge.yml`; the following roles must be enabled: `kubernetes/worker`, `kubernetes/cni` (both enabled by default) and `cmk/worker` (disabled by default).
+2. To change core isolation, set the isolated cores in `host_vars/node-name-in-inventory.yml` as `additional_grub_params` for your node, e.g. in `host_vars/node01.yml` set `additional_grub_params: "isolcpus=1-10,49-58"`
+3. Deploy the node with `deploy_ne.sh node`.
Environment setup can be validated using steps from [CMK operator manual](https://github.com/intel/CPU-Manager-for-Kubernetes/blob/master/docs/operator.md#validating-the-environment).
@@ -129,6 +129,32 @@ spec:
name: cmk-conf-dir
EOF
```
-## Reference
+
+> NOTE: CMK requires modification of deployed pod manifest for **all** deployed pods:
+> - `nodeName:` must be added under the pod `spec` section before deploying the application (to indicate the node on which the pod is to be deployed)
+>
+> alternatively
+> - toleration must be added to deployed pod under spec:
+>
+> ```yaml
+> ...
+> tolerations:
+>
+> - ...
+>
+> - effect: NoSchedule
+> key: cmk
+> operator: Exists
+> ```
+
+### OnPremises Usage
+Dedicated core pinning is also supported for container and virtual machine deployment in OnPremises mode. This is done using the EPA Features section provided when creating applications for onboarding. For more details on application creation and onboarding in OnPremises mode, please see the [Application Onboarding Document](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/on-premises-applications-onboarding.md).
+
+To set dedicated core pinning for an application, *EPA Feature Key* should be set to `cpu_pin` and *EPA Feature Value* should be set to one of the following options:
+
+1. A single core e.g. `EPA Feature Value = 3` if pinning to core 3 only.
+2. A sequential series of cores, e.g. `EPA Feature Value = 2-7` if pinning to cores 2 to 7 inclusive.
+3. A comma separated list of cores, e.g. `EPA Feature Value = 1,3,6,7,9` if pinning to cores 1,3,6,7 and 9 only.
+
+## Reference
- [CPU Manager Repo](https://github.com/intel/CPU-Manager-for-Kubernetes)
- More examples of Kubernetes manifests available in [CMK repository](https://github.com/intel/CPU-Manager-for-Kubernetes/tree/master/resources/pods) and [documentation](https://github.com/intel/CPU-Manager-for-Kubernetes/blob/master/docs/user.md).
diff --git a/doc/enhanced-platform-awareness/openness-environment-variables.md b/doc/enhanced-platform-awareness/openness-environment-variables.md
new file mode 100644
index 00000000..c918884d
--- /dev/null
+++ b/doc/enhanced-platform-awareness/openness-environment-variables.md
@@ -0,0 +1,24 @@
+```text
+SPDX-License-Identifier: Apache-2.0
+Copyright (c) 2020 Intel Corporation
+```
+
+# Support for setting Environment Variables in OpenNESS
+
+- [Support for setting Environment Variables in OpenNESS](#support-for-setting-environment-variables-in-openness)
+ - [Overview](#overview)
+ - [Details of Environment Variable support in OpenNESS](#details-of-environment-variable-support-in-openness)
+
+## Overview
+
+Environment variables can be configured when creating a new Docker container. Once the container is running, any Application located in that container can detect and use the variable. These variables can be used to point to information needed by the processes being run by the Application. For example, an environment variable can be set to point to a file containing information to be read in by an Application or to the address of a device that the Application needs to use.
+
+When using environment variables, the value should be either a static value or some environment information that the Application cannot easily determine. Care should also be taken when setting environment variables, as using an incorrect variable name or value may cause the Application to operate in an unexpected way.
+
+## Details of Environment Variable support in OpenNESS
+
+Setting environment variables is supported when deploying containers in OnPrem mode during application onboarding. Please refer to the [Application Onboarding Document](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/on-premises-applications-onboarding.md) for more details on onboarding an application in OpenNESS. The general steps outlined in the document should be performed with the following addition. When creating a new application to add to the controller's application library, the user should set the *EPA Feature Key* and *EPA Feature Value* settings. The key to be used for environment variables is `env_vars` and the value should be set as `VARIABLE_NAME=VARIABLE_VALUE`.
+
+***Note:*** When setting environment variables, multiple variables can be provided in the *EPA Feature Value* field for a single `env_vars` key. To do so place a semi-colon (;) between each variable as follows:
+
+ VAR1_NAME=VAR1_VALUE;VAR2_NAME=VAR2_VALUE
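+
+For example (the variable names and values below are purely illustrative), an application reading a configuration file path and a log level could be onboarded with *EPA Feature Key* set to `env_vars` and *EPA Feature Value* set to:
+
+    CONFIG_PATH=/etc/myapp/config.json;LOG_LEVEL=debug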
diff --git a/doc/enhanced-platform-awareness/openness-fpga.md b/doc/enhanced-platform-awareness/openness-fpga.md
index ba92de53..456bf507 100644
--- a/doc/enhanced-platform-awareness/openness-fpga.md
+++ b/doc/enhanced-platform-awareness/openness-fpga.md
@@ -1,6 +1,6 @@
```text
SPDX-License-Identifier: Apache-2.0
-Copyright (c) 2019 Intel Corporation
+Copyright (c) 2019-2020 Intel Corporation
```
# Using FPGA in OpenNESS: Programming, Resource Allocation and Configuration
@@ -28,18 +28,18 @@ The FPGA Programmable Acceleration Card plays a key role in accelerating certain
- Integration - Today’s FPGAs include on-die processors, transceiver I/O’s at 28 Gbps (or faster), RAM blocks, DSP engines, and more.
- Total Cost of Ownership (TCO) - While ASICs may cost less per unit than an equivalent FPGA, building them requires a non-recurring expense (NRE), expensive software tools, specialized design teams, and long manufacturing cycles.
-Deployment of AI/ML applications at the edge is increasing the adoption of FPGA acceleration. This trend of devices performing machine learning at the edge locally versus relying solely on the cloud is driven by the need to lower latency, persistent availability, lower costs and address privacy concerns.
+Deployment of AI/ML applications at the edge is increasing the adoption of FPGA acceleration. This trend of devices performing machine learning at the edge locally, versus relying solely on the cloud, is driven by the need for lower latency, persistent availability and lower costs, and by privacy concerns.
-This paper explains how the FPGA resource can be used on the OpenNESS platform for accelerating Network Functions and Edge application workloads. We use the Intel® FPGA Programmable Acceleration Card (Intel FPGA PAC) N3000 as a reference FPGA and use LTE/5G Forward Error Correction (FEC) as an example workload that accelerates the 5G or 4G L1 Base station Network function. The same concept and mechanism is applicable for application acceleration workloads like AI/ML on FPGA for Inference applications.
+This paper explains how the FPGA resource can be used on the OpenNESS platform for accelerating Network Functions and Edge application workloads. We use the Intel® FPGA Programmable Acceleration Card (Intel FPGA PAC) N3000 as a reference FPGA and use LTE/5G Forward Error Correction (FEC) as an example workload that accelerates the 5G or 4G L1 Base station Network function. The same concept and mechanism is applicable for application acceleration workloads like AI/ML on FPGA for Inference applications.
-The Intel® FPGA Programmable Acceleration Card (Intel FPGA PAC) N3000 is a full-duplex 100 Gbps in-system re-programmable acceleration card for multi-workload networking application acceleration. It has the right memory mixture designed for network functions, with integrated network interface card (NIC) in a small form factor that enables high throughput, low latency, low-power/bit for custom networking pipeline.
+The Intel® FPGA Programmable Acceleration Card (Intel FPGA PAC) N3000 is a full-duplex 100 Gbps in-system re-programmable acceleration card for multi-workload networking application acceleration. It has the right memory mixture designed for network functions, with integrated network interface card (NIC) in a small form factor that enables high throughput, low latency, low-power/bit for custom networking pipeline.
-FlexRAN is a reference Layer 1 pipeline of 4G eNb and 5G gNb on Intel architecture. The FlexRAN reference pipeline consists of L1 pipeline, optimized L1 processing modules, BBU pooling framework, Cloud and Cloud native deployment support and accelerator support for hardware offload. Intel® PAC N3000 card is used by FlexRAN to offload FEC (Forward Error Correction) for 4G and 5G and IO for Fronthaul/Midhaul.
+FlexRAN is a reference Layer 1 pipeline of 4G eNb and 5G gNb on Intel architecture. The FlexRAN reference pipeline consists of L1 pipeline, optimized L1 processing modules, BBU pooling framework, Cloud and Cloud native deployment support and accelerator support for hardware offload. Intel® PAC N3000 card is used by FlexRAN to offload FEC (Forward Error Correction) for 4G and 5G and IO for Fronthaul/Midhaul.
## Intel® PAC N3000 FlexRAN host interface overview
-The PAC N3000 card used in the FlexRAN solution exposes the following physical functions to the CPU host.
-- 2x25G Ethernet interface that can be used for Fronthaul or Midhaul
-- One FEC Interface that can be used of 4G or 5G FEC acceleration
+The PAC N3000 card used in the FlexRAN solution exposes the following physical functions to the CPU host.
+- 2x25G Ethernet interface that can be used for Fronthaul or Midhaul
+- One FEC Interface that can be used for 4G or 5G FEC acceleration
- The LTE FEC IP components have Turbo Encoder / Turbo decoder and rate matching / de-matching
- The 5GNR FEC IP components have Low-density parity-check (LDPC) Encoder / LDPC Decoder, rate matching / de-matching, UL HARQ combining
- Interface for managing and updating the FPGA Image – Remote system Update (RSU).
@@ -55,16 +55,16 @@ FlexRAN is the network function that implements the FEC and is a low latency net
_Figure - Intel PAC N3000 Orchestration and deployment with OpenNESS Network Edge for FlexRAN_
-## Intel PAC N3000 remote system update flow in OpenNESS Network edge Kubernetes
-Remote System Update (RSU) of the FPGA is enabled through Open Programmable Acceleration Engine (OPAE). The OPAE package consists of a kernel driver and user space FPGA utils package that enables programming of the FPGA. OpenNESS automates the process of deploying the OPAE stack as a Kubernetes POD that detects the FPGA and programs it. There is a separate FPGA Configuration POD which is deployed as a Kubernetes job which configures the FPGA resources such as Virtual Functions and queues.
+## Intel PAC N3000 remote system update flow in OpenNESS Network edge Kubernetes
+Remote System Update (RSU) of the FPGA is enabled through the Open Programmable Acceleration Engine (OPAE). The OPAE package consists of a kernel driver and a user space FPGA utils package that enables programming of the FPGA. OpenNESS automates the process of deploying the OPAE stack as a Kubernetes POD that detects the FPGA and programs it. There is a separate FPGA Configuration POD, deployed as a Kubernetes job, which configures the FPGA resources such as Virtual Functions and queues.
![OpenNESS Network Edge Intel PAC N3000 RSU and resource allocation](fpga-images/openness-fpga3.png)
_Figure - OpenNESS Network Edge Intel PAC N3000 RSU and resource allocation_
-## Using FPGA on OpenNESS - Details
+## Using FPGA on OpenNESS - Details
-Further sections provide instructions on how to use all three FPGA features - Programming, Configuration and accessing from application on OpenNESS Network and OnPremises Edge.
+Further sections provide instructions on how to use all three FPGA features: programming, configuration, and access from an application on OpenNESS Network Edge and OnPremises Edge.
When the PAC N3000 FPGA is programmed with a vRAN 5G FPGA image it exposes the Single Root I/O Virtualization (SRIOV) Virtual Function (VF) devices which can be used to accelerate the FEC in the vRAN workload. In order to take advantage of this functionality for a Cloud Native deployment the PF (Physical Function) of the device must be bound to DPDK IGB_UIO user-space driver in order to create a number of VFs (Virtual Functions). Once the VFs are created they must also be bound to a DPDK user-space driver in order to allocate them to specific K8s pods running the vRAN workload.
@@ -81,17 +81,26 @@ It is assumed that the FPGA is always used with OpenNESS Network Edge, paired wi
### FPGA (FEC) Ansible installation for OpenNESS Network Edge
To run the OpenNESS package with FPGA (FEC) functionality the feature needs to be enabled on both Edge Controller and Edge Node.
-#### Edge Controller
+#### Edge Controller
-To enable on Edge Controller set/uncomment following in `ne_controller.yml` in OpenNESS-Experience-Kits top level directory:
-```
+To enable on the Edge Controller, set/uncomment the following in `network_edge.yml` in the OpenNESS-Experience-Kits top level directory:
+```yaml
+# network_edge.yml
- role: opae_fpga/master
-- role: multus
-- role: sriov/master
```
-Also enable/configure following options in `roles/sriov/common/defaults/main.yml`.
-The following device config is the default config for the PAC N3000 with 5GNR vRAN user image tested (this configuration is common both to EdgeNode and EdgeController setup).
+
+Additionally SRIOV must be enabled in OpenNESS:
+```yaml
+# group_vars/all.yml
+kubernetes_cnis:
+- kubeovn
+- sriov
```
+
+Also enable/configure the following options in `roles/kubernetes/cni/sriov/common/defaults/main.yml`.
+The following device config is the default config for the PAC N3000 with the 5GNR vRAN user image tested (this configuration is common to both the EdgeNode and EdgeController setup).
+```yaml
+# roles/kubernetes/cni/sriov/common/defaults/main.yml
fpga_sriov_userspace:
enabled: true
fpga_userspace_vf:
@@ -100,17 +109,15 @@ fpga_userspace_vf:
vf_device_id: "0d90"
pf_device_id: "0d8f"
vf_number: "2"
+ vf_driver: "vfio-pci"
```
-Run setup script `deploy_ne_controller.sh`.
+#### Edge Node
-#### Edge Node
-
-To enable on the Edge Node set following in `ne_node.yml` (Please note that the `sriov/worker` role needs to be executed before `kubernetes/worker` role):
+To enable on the Edge Node, set the following in `network_edge.yml`:
```
- role: opae_fpga/worker
-- role: sriov/worker
```
The following packages need to be placed into specific directories in order for the feature to work:
@@ -121,7 +128,42 @@ The following packages need to be placed into specific directories in order for
3. Factory image configuration package `n3000-1-3-5-beta-cfg-2x2x25g-setup.zip` needs to be placed inside `openness-experience-kits/opae_fpga` directory. The package can be obtained as part of PAC N3000 OPAE beta release (Please contact your Intel representative or visit [Resource Design Centre](https://cdrdv2.intel.com/v1/dl/getContent/616080 ) to obtain the package)
-Run setup script `deploy_ne_node.sh`.
+Run setup script `deploy_ne.sh`.
+
+On successful deployment the following pods will be available in the cluster:
+```shell
+kubectl get pods -A
+
+NAMESPACE NAME READY STATUS RESTARTS AGE
+kube-ovn kube-ovn-cni-hdgrl 1/1 Running 0 3d19h
+kube-ovn kube-ovn-cni-px79b 1/1 Running 0 3d18h
+kube-ovn kube-ovn-controller-578786b499-74vzm 1/1 Running 0 3d19h
+kube-ovn kube-ovn-controller-578786b499-j22gl 1/1 Running 0 3d19h
+kube-ovn ovn-central-5f456db89f-z7d6x 1/1 Running 0 3d19h
+kube-ovn ovs-ovn-46k8f 1/1 Running 0 3d18h
+kube-ovn ovs-ovn-5r2p6 1/1 Running 0 3d19h
+kube-system coredns-6955765f44-mrc82 1/1 Running 0 3d19h
+kube-system coredns-6955765f44-wlvhc 1/1 Running 0 3d19h
+kube-system etcd-silpixa00394960 1/1 Running 0 3d19h
+kube-system kube-apiserver-silpixa00394960 1/1 Running 0 3d19h
+kube-system kube-controller-manager-silpixa00394960 1/1 Running 0 3d19h
+kube-system kube-multus-ds-amd64-2zdqt 1/1 Running 0 3d18h
+kube-system kube-multus-ds-amd64-db8fd 1/1 Running 0 3d19h
+kube-system kube-proxy-dd259 1/1 Running 0 3d19h
+kube-system kube-proxy-sgn9g 1/1 Running 0 3d18h
+kube-system kube-scheduler-silpixa00394960 1/1 Running 0 3d19h
+kube-system kube-sriov-cni-ds-amd64-k9wnd 1/1 Running 0 3d18h
+kube-system kube-sriov-cni-ds-amd64-pclct 1/1 Running 0 3d19h
+kube-system kube-sriov-device-plugin-amd64-fhbv8 1/1 Running 0 3d18h
+kube-system kube-sriov-device-plugin-amd64-lmx9k 1/1 Running 0 3d19h
+openness eaa-78b89b4757-xzh84 1/1 Running 0 3d18h
+openness edgedns-dll9x 1/1 Running 0 3d18h
+openness interfaceservice-grjlb 1/1 Running 0 3d18h
+openness nfd-master-dd4ch 1/1 Running 0 3d19h
+openness nfd-worker-c24wn 1/1 Running 0 3d18h
+openness syslog-master-9x8hc 1/1 Running 0 3d19h
+openness syslog-ng-br92z 1/1 Running 0 3d18h
+```
### FPGA Programming and telemetry on OpenNESS Network Edge
In order to program the FPGA factory image (One Time Secure Upgrade) or the user image (5GN FEC vRAN) of the PAC N3000 via OPAE a `kubectl` plugin for K8s is provided. The plugin also allows for obtaining basic FPGA telemetry. This plugin will deploy K8s jobs which will run to completion on desired host and display the logs/output of the command.
@@ -154,6 +196,19 @@ kubectl rsu program -f -n -d
kubectl rsu get power -n
kubectl rsu get fme -n
+
+
+# Sample output for correctly programmed card with `get fme` command
+//****** FME ******//
+Object Id : 0xED00000
+PCIe s\:b\:d.f : 0000:1b:00.0
+Device Id : 0x0b30
+Numa Node : 0
+Ports Num : 01
+Bitstream Id : 0x2145042A010304
+Bitstream Version : 0.2.1
+Pr Interface Id : a5d72a3c-c8b0-4939-912c-f715e5dc10ca
+Boot Page : user
```
7. For more information on usage of each `kubectl rsu` plugin capability run each command with `-h` argument.
@@ -228,10 +283,14 @@ Expected: `Mode of operation = VF-mode FPGA_LTE PF [0000:xx:00.0] configuration
### Requesting resources and running pods for OpenNESS Network Edge
As part of OpenNESS Ansible automation a K8s device plugin to orchestrate the FPGA VFs bound to user-space driver is running. This will enable scheduling of pods requesting this device/devices. Number of devices available on the Edge Node can be checked from Edge Controller by running:
-`kubectl get node -o json | jq '.status.allocatable'`
+```shell
+kubectl get node silpixa00400489 -o json | jq '.status.allocatable'
-To request the device as a resource in the pod add the request for the resource into the pod specification file, by specifying its name and amount of resources required. If the resource is not available or the amount of resources requested is greater than the amount of resources available, the pod status will be 'Pending' until the resource is available.
-Note that the name of the resource must match the name specified in the configMap for the K8s devices plugin (`./fpga/configMap.yml`).
+"intel.com/intel_fec_5g": "2"
+```
+
+To request the device as a resource in the pod, add the request for the resource into the pod specification file by specifying its name and the amount of resources required. If the resource is not available, or the amount of resources requested is greater than the amount available, the pod status will be 'Pending' until the resource is available.
+Note that the name of the resource must match the name specified in the configMap for the K8s device plugin (`./fpga/configMap.yml`).
A sample pod requesting the FPGA (FEC) VF may look like this:
@@ -252,8 +311,8 @@ spec:
requests:
intel.com/intel_fec_5g: '1'
limits:
- intel.com/intel_fec_5g: '1'
-```
+ intel.com/intel_fec_5g: '1'
+```
In order to test the resource allocation to the pod, save the above snippet to the sample.yaml file and create the pod.
@@ -277,7 +336,7 @@ Navigate to:
`edgeapps/fpga-sample-app`
-Copy the necessary `flexran-dpdk-bbdev-v19-10.patch` file into the directory. This patch is available as part of FlexRAN 19.10 release package. To obtain this FlexRAN patch allowing 5G functionality for BBDEV in DPDK please contact your Intel representative or visit [Resource Design Centre](https://cdrdv2.intel.com/v1/dl/getContent/615743 )
+Copy the necessary `dpdk_19.11_new.patch` file into the directory. This patch is available as part of FlexRAN 20.02 release package. To obtain this FlexRAN patch allowing 5G functionality for BBDEV in DPDK please contact your Intel representative or visit [Resource Design Centre](https://cdrdv2.intel.com/v1/dl/getContent/615743 )
Build image:
@@ -288,16 +347,53 @@ From the Edge Controller deploy the application pod, pod specification located a
`kubectl create -f fpga-sample-app.yaml`
Execute into the application pod and run the sample app:
-```
+```shell
+# enter the pod
kubectl exec -it pod-bbdev-sample-app -- /bin/bash
+# run test application
./test-bbdev.py --testapp-path ./testbbdev -e="-w ${PCIDEVICE_INTEL_COM_INTEL_FEC_5G}" -i -n 1 -b 1 -l 1 -c validation -v ./test_vectors/ldpc_dec_v7813.data
+
+# sample output
+EAL: Detected 48 lcore(s)
+EAL: Detected 2 NUMA nodes
+EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
+EAL: Selected IOVA mode 'VA'
+EAL: No available hugepages reported in hugepages-1048576kB
+EAL: Probing VFIO support...
+EAL: VFIO support initialized
+EAL: PCI device 0000:20:00.1 on NUMA socket 0
+EAL: probe driver: 8086:d90 intel_fpga_5gnr_fec_vf
+EAL: using IOMMU type 1 (Type 1)
+
+===========================================================
+Starting Test Suite : BBdev Validation Tests
+Test vector file = ./test_vectors/ldpc_dec_v7813.data
+mcp fpga_setup_queuesDevice 0 queue 16 setup failed
+Allocated all queues (id=16) at prio0 on dev0
+Device 0 queue 16 setup failed
+All queues on dev 0 allocated: 16
++ ------------------------------------------------------- +
+== test: validation/latency
+dev: 0000:20:00.1, burst size: 1, num ops: 1, op type: RTE_BBDEV_OP_LDPC_DEC
+Operation latency:
+ avg: 17744 cycles, 12.6743 us
+ min: 17744 cycles, 12.6743 us
+ max: 17744 cycles, 12.6743 us
+TestCase [ 0] : latency_tc passed
+ + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +
+ + Test Suite Summary : BBdev Validation Tests
+ + Tests Total : 1
+ + Tests Skipped : 0
+ + Tests Passed : 1
+ + Tests Failed : 0
+ + Tests Lasted : 95.2308 ms
+ + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +
```
The output of the application should indicate total of ‘1’ tests and ‘1’ test passing, this concludes the validation of the FPGA VF working correctly inside K8s pod.
-## Reference
+## Reference
- [Intel® FPGA Programmable Acceleration Card N3000](https://www.intel.com/content/www/us/en/programmable/products/boards_and_kits/dev-kits/altera/intel-fpga-pac-n3000/overview.html)
- [FlexRAN 19.10 release - Resource Design Centre](https://cdrdv2.intel.com/v1/dl/getContent/615743)
- [PAC N3000 OPAE beta release - Resource Design Centre](https://cdrdv2.intel.com/v1/dl/getContent/616082)
-- [PAC N3000 OPAE beta release (2) - Resource Design Centre](https://cdrdv2.intel.com/v1/dl/getContent/616080)
-
+- [PAC N3000 OPAE beta release (2) - Resource Design Centre](https://cdrdv2.intel.com/v1/dl/getContent/616080)
diff --git a/doc/enhanced-platform-awareness/openness-hugepage.md b/doc/enhanced-platform-awareness/openness-hugepage.md
index dc68a688..c0c2dc2f 100644
--- a/doc/enhanced-platform-awareness/openness-hugepage.md
+++ b/doc/enhanced-platform-awareness/openness-hugepage.md
@@ -1,85 +1,125 @@
```text
SPDX-License-Identifier: Apache-2.0
-Copyright (c) 2019 Intel Corporation
+Copyright (c) 2019-2020 Intel Corporation
```
-# Hugepage support on OpenNESS
+# Hugepage support on OpenNESS
- [Hugepage support on OpenNESS](#hugepage-support-on-openness)
- [Overview](#overview)
- [Details of Hugepage support on OpenNESS](#details-of-hugepage-support-on-openness)
- - [Network edge mode](#network-edge-mode)
- - [OnPrem mode](#onprem-mode)
+ - [Examples](#examples)
+ - [Changing size of the hugepage for both controllers and nodes](#changing-size-of-the-hugepage-for-both-controllers-and-nodes)
+ - [Setting different hugepage amount for Edge Controller or Edge Nodes in Network Edge mode](#setting-different-hugepage-amount-for-edge-controller-or-edge-nodes-in-network-edge-mode)
+ - [Setting different hugepage amount for Edge Controller or Edge Nodes in On Premises mode](#setting-different-hugepage-amount-for-edge-controller-or-edge-nodes-in-on-premises-mode)
+ - [Setting hugepage size for Edge Controller or Edge Node in Network Edge mode](#setting-hugepage-size-for-edge-controller-or-edge-node-in-network-edge-mode)
+ - [Setting hugepage size for Edge Controller or Edge Node in On Premises mode](#setting-hugepage-size-for-edge-controller-or-edge-node-in-on-premises-mode)
+ - [Customizing hugepages for specific machine](#customizing-hugepages-for-specific-machine)
- [Reference](#reference)
-## Overview
+## Overview
-Memory is allocated to application processes in terms of pages - by default the 4K pages are supported. For Applications dealing with larger datasets, using 4K pages may lead to performance degradation and overhead because of TLB misses. To address this, modern CPUs support huge pages which are typically 2M and 1G. This helps avoid TLB miss overhead and therefore improves performance.
+Memory is allocated to application processes in terms of pages - by default the 4K pages are supported. For Applications dealing with larger datasets, using 4K pages may lead to performance degradation and overhead because of TLB misses. To address this, modern CPUs support huge pages which are typically 2M and 1G. This helps avoid TLB miss overhead and therefore improves performance.
-Both Applications and Network functions can gain in performance from using hugepages. Huge page support, added to Kubernetes v1.8, enables the discovery, scheduling and allocation of huge pages as a native first-class resource. This support addresses low latency and deterministic memory access requirements.
+Both Applications and Network functions can gain in performance from using hugepages. Huge page support, added to Kubernetes v1.8, enables the discovery, scheduling and allocation of huge pages as a native first-class resource. This support addresses low latency and deterministic memory access requirements.
## Details of Hugepage support on OpenNESS
-Hugepages are enabled by default. There are two parameters that are describing the hugepages: the size of single page (can be 2MB or 1GB) and amount of those pages. In network edge deployment there is, enabled by default, 500 of 2MB hugepages (which equals to 2GB of memory) per node/controller, and in OnPrem deployment hugepages are enabled only for nodes and the default is 5000 of 2MB pages (10GB). If you want to change those settings you will need to edit config files as described below. All the settings have to be adjusted before OpenNESS installation.
-
-### Network edge mode
-
-You can change the size of single page editing the variable `hugepage_size` in `roles/grub/defaults/main.yml`:
-
-To set the page size of 2 MB:
-
+OpenNESS deployment enables hugepages by default and provides parameters for tuning them:
+* `hugepage_size` - the size of a single page, which can be either `2M` or `1G`
+* `hugepage_amount` - the number of pages
+
+By default, these variables have values:
+| Mode | Machine type | `hugepage_amount` | `hugepage_size` | Comments |
+|--------------|--------------|:-----------------:|:---------------:|----------------------------------------------|
+| Network Edge | Controller | `1024` | `2M` | |
+| | Node | `1024` | `2M` | |
+| On-Premises | Controller | `1024` | `2M` | For OVNCNI dataplane, otherwise no hugepages |
+| | Node | `5000` | `2M` | |
+
+A guide on changing these values is provided below. Customizations must be made before the OpenNESS deployment.
+
+Variables for hugepage customization can be placed in several files:
+* `group_vars/all.yml` will affect all modes and machine types
+* `group_vars/controller_group.yml` and `group_vars/edgenode_group.yml` will affect Edge Controller and Edge Nodes respectively in all modes
+* `host_vars/<node_name>.yml` will only affect the `<node_name>` host present in `inventory.ini` (in all modes)
+* To configure hugepages for specific mode, they can be placed in `network_edge.yml` and `on_premises.yml` under
+ ```yaml
+ - hosts: # e.g. controller_group or edgenode_group
+ vars:
+ hugepage_amount: "10"
+ hugepage_size: "1G"
+ ```
+
+This is summarized in the following table:
+
+| File | Network Edge | On Premises | Edge Controller | Edge Node | Comment |
+|---------------------------------------|:------------:|:-----------:|:--------------------------------------:|:------------------------------------:|:-------------------------------------------------------------------------------:|
+| `group_vars/all.yml` | yes | yes | yes | yes - every node | |
+| `group_vars/controller_group.yml` | yes | yes | yes | | |
+| `group_vars/edgenode_group.yml` | yes | yes | | yes - every node | |
+| `host_vars/<node_name>.yml` | yes | yes | yes | yes | affects the machine specified in `inventory.ini` with the name `<node_name>` |
+| `network_edge.yml` | yes | | `vars` under `hosts: controller_group` | `vars` under `hosts: edgenode_group` - every node | |
+| `on_premises.yml` | | yes | `vars` under `hosts: controller_group` | `vars` under `hosts: edgenode_group` - every node| |
+
+Note that the variables have a precedence order:
+1. `network_edge.yml` and `on_premises.yml` always take precedence among the files on this list (they override every var)
+2. `host_vars/`
+3. `group_vars/`
+4. `default/main.yml` in roles' directory
+
+### Examples
+
+#### Changing size of the hugepage for both controllers and nodes
+Add following line to the `group_vars/all.yml`:
+* To set the page size of 2 MB (which is default value):
+ ```yaml
+ hugepage_size: "2M"
+ ```
+* To set the page size of 1GB:
+ ```yaml
+ hugepage_size: "1G"
+ ```
+
+#### Setting different hugepage amount for Edge Controller or Edge Nodes in Network Edge mode
+The amount of hugepages can be set separately for both controller and nodes. To set the amount of hugepages for controller please change the value of variable `hugepage_amount` in `network_edge.yml`, for example:
```yaml
-hugepage_size: "2M"
-```
-
-To set the page size of 1GB:
-
-```yaml
-hugepage_size: "1G"
-```
-
-The amount of hugepages can be set separately for both controller and nodes. To set the amount of hugepages for controller please change the value of variable `hugepage_amount` in `ne_controller.yml`:
-
-For example:
-
-```yaml
-vars:
+- hosts: controller_group
+ vars:
hugepage_amount: "1500"
```
-
will enable 1500 pages of the size specified by `hugepage_size` variable.
-To set the amount of hugepages for nodes please change the value of variable `hugepage_amount` in `ne_node.yml`:
-
-For example:
-
+To set the amount of hugepages for all of the nodes please change the value of variable `hugepage_amount` in `network_edge.yml`, for example:
```yaml
-vars:
+- hosts: edgenode_group
+ vars:
hugepage_amount: "3000"
```
will enable 3000 pages of the size specified by `hugepage_size` variable for each deployed node.
-### OnPrem mode
+#### Setting different hugepage amount for Edge Controller or Edge Nodes in On Premises mode
-The hugepages are enabled only for the nodes. You can change the size of single page and amount of the pages editing the variables `hugepage_size` and `hugepage_amount` in `roles/grub/defaults/main.yml`:
-
-For example:
+[Instruction for Network Edge](#setting-different-hugepage-amount-for-edge-controller-or-edge-nodes-in-network-edge-mode) is applicable for On Premises mode with the exception of the file to be edited: `on_premises.yml`
+#### Setting hugepage size for Edge Controller or Edge Node in Network Edge mode
+Different hugepage size for node or controller can be done by adding `hugepage_size` to the playbook (`network_edge.yml` file), e.g.
```yaml
-hugepage_size: "2M"
-hugepage_amount: "2000"
+- hosts: controller_group # or edgenode_group
+ vars:
+ hugepage_amount: "5"
+ hugepage_size: "1G"
```
-will enable 2000 of 2MB pages, and:
+#### Setting hugepage size for Edge Controller or Edge Node in On Premises mode
-```yaml
-hugepage_size: "1G"
-hugepage_amount: "5"
-```
+[Instruction for Network Edge](#setting-hugepage-size-for-edge-controller-or-edge-node-in-network-edge-mode) is applicable for On Premises mode with the exception of the file to be edited: `on_premises.yml`
-will enable 5 pages, 1GB each.
+#### Customizing hugepages for specific machine
+To specify a size or amount only for a specific machine, `hugepage_size` and/or `hugepage_amount` can be provided in `host_vars/<node_name>.yml` (i.e. if the host is named `node01`, then the file is `host_vars/node01.yml`).
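+
+For example, assuming a node named `node01` in `inventory.ini`, the file could contain:
+```yaml
+# host_vars/node01.yml (values chosen for illustration)
+hugepage_size: "1G"
+hugepage_amount: "8"
+```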
-## Reference
-- [Hugepages support in Kubernetes](https://kubernetes.io/docs/tasks/manage-hugepages/scheduling-hugepages/)
+Note that vars in `on_premises.yml` have greater precedence than those in `host_vars/`; therefore, to give the `host_vars/` files control over the hugepage variables, `hugepage_amount` should be removed from `network_edge.yml` and/or `on_premises.yml`.
+## Reference
+- [Hugepages support in Kubernetes](https://kubernetes.io/docs/tasks/manage-hugepages/scheduling-hugepages/)
diff --git a/doc/enhanced-platform-awareness/openness-node-feature-discovery.md b/doc/enhanced-platform-awareness/openness-node-feature-discovery.md
index 131fb1e8..b5b37e53 100644
--- a/doc/enhanced-platform-awareness/openness-node-feature-discovery.md
+++ b/doc/enhanced-platform-awareness/openness-node-feature-discovery.md
@@ -3,23 +3,26 @@ SPDX-License-Identifier: Apache-2.0
Copyright (c) 2019 Intel Corporation
```
-# Node Feature Discovery support in OpenNESS
+# Node Feature Discovery support in OpenNESS
- [Node Feature Discovery support in OpenNESS](#node-feature-discovery-support-in-openness)
- [Overview of NFD and Edge usecase](#overview-of-nfd-and-edge-usecase)
- - [Details - Node Feature Discovery support in OpenNESS](#details---node-feature-discovery-support-in-openness)
- - [Usage](#usage)
+ - [Details](#details)
+ - [Node Feature Discovery support in OpenNESS Network Edge](#node-feature-discovery-support-in-openness-network-edge)
+ - [Usage](#usage)
+ - [Node Feature Discovery support in OpenNESS On Premises](#node-feature-discovery-support-in-openness-on-premises)
+ - [Usage](#usage-1)
- [Reference](#reference)
-## Overview of NFD and Edge usecase
+## Overview of NFD and Edge usecase
-COTS Platforms used for edge deployment come with many features that enable workloads take advantage of, to provide better performance and meet the SLA. When such COTS platforms are deployed in a cluster as part of a Cloudnative deployment it becomes important to detect the hardware and software features on all nodes that are part of that cluster. It should also be noted that some of the nodes might have special accelerator hardware like FPGA, GPU, NVMe, etc.
+COTS platforms used for edge deployments come with many features that workloads can take advantage of to provide better performance and meet SLAs. When such COTS platforms are deployed in a cluster as part of a cloud-native deployment, it becomes important to detect the hardware and software features on all nodes that are part of that cluster. It should also be noted that some of the nodes might have special accelerator hardware like FPGA, GPU, NVMe, etc.
Let us consider an edge application like CDN that needs to be deployed in the cloud native edge cloud. It would be favorable for a Container orchestrator like Kubernetes to detect the nodes that have CDN friendly hardware and software features like NVMe, media extensions and so on.
Now let us consider a Container Network Function (CNF) like 5G gNb that implements L1 5G NR base station. It would be favorable for the Container orchestrator like Kubernetes to detect nodes that have hardware and software features like FPGA acceleration for Forward error correction, Advanced vector instructions to implement math functions, real-time kernel and so on.
-OpenNESS supports the discovery of such features using Node Feature Discovery (NFD). NFD is a Kubernetes add-on that detects and advertises hardware and software capabilities of a platform that can, in turn, be used to facilitate intelligent scheduling of a workload. Node Feature Discovery is one of the Intel technologies that supports targeting of intelligent configuration and capacity consumption of platform capabilities. NFD runs as a separate container on each individual node of the cluster, discovers capabilities of the node, and finally, publishes these as node labels using the Kubernetes API. NFD only handles non-allocatable features.
+OpenNESS supports the discovery of such features using Node Feature Discovery (NFD). NFD is a Kubernetes add-on that detects and advertises hardware and software capabilities of a platform that can, in turn, be used to facilitate intelligent scheduling of a workload. Node Feature Discovery is one of the Intel technologies that supports targeting of intelligent configuration and capacity consumption of platform capabilities. NFD runs as a separate container on each individual node of the cluster, discovers capabilities of the node, and finally, publishes these as node labels using the Kubernetes API. NFD only handles non-allocatable features.
Some of the Node features that NFD can detect include:
@@ -32,7 +35,7 @@ At its core, NFD detects hardware features available on each node in a Kubernete
NFD consists of two software components:
1) nfd-master is responsible for labeling Kubernetes node objects
-2) nfd-worker detects features and communicates them to the nfd-master. One instance of nfd-worker should be run on each node of the cluster
+2) nfd-worker detects features and communicates them to the nfd-master. One instance of nfd-worker should be run on each node of the cluster
The figure below illustrates how the CDN application will be deployed on the right platform when NFD is utilized, where the required key hardware like NVMe and AVX instruction set support is available.
@@ -46,13 +49,15 @@ _Figure - CDN app deployment with NFD Features_
> UEFI Secure Boot: Boot Firmware verification and authorization of OS Loader/Kernel components
-## Details - Node Feature Discovery support in OpenNESS
+## Details
-Node Feature Discovery is enabled by default. It does not require any configuration or user input. It can be disabled by editing the `ne_controller.yml` file and commenting out `nfd` role before OpenNESS installation.
+### Node Feature Discovery support in OpenNESS Network Edge
+
+Node Feature Discovery is enabled by default. It does not require any configuration or user input. It can be disabled by editing the `network_edge.yml` file and commenting out the `nfd/network_edge` role before the OpenNESS installation, as sketched below.
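+
+A sketch of the relevant fragment of `network_edge.yml` (assuming the role entry follows the same `- role:` format used elsewhere in the playbooks):
+```yaml
+# in network_edge.yml: comment out the role to disable NFD
+#    - role: nfd/network_edge
+```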
Connection between nfd-workers and nfd-master is secured by certificates generated before running nfd pods.
-### Usage
+#### Usage
NFD is working automatically and does not require any user action to collect the features from nodes. Features found by NFD and labeled in Kubernetes can be shown by command: `kubectl get no -o json | jq '.items[].metadata.labels'`.
@@ -111,5 +116,27 @@ spec:
feature.node.kubernetes.io/cpu-pstate.turbo: 'true'
```
-## Reference
+### Node Feature Discovery support in OpenNESS On Premises
+
+Node Feature Discovery is enabled by default. It does not require any configuration or user input. It can be disabled by editing the `on_premises.yml` file and commenting out the `nfd/onprem/master` and `nfd/onprem/worker` roles before the OpenNESS installation.
+
+NFD service in OpenNESS On Premises consists of two software components:
+
+- *nfd-worker*, which is taken from https://github.com/kubernetes-sigs/node-feature-discovery (downloaded as an image)
+- *nfd-master*, a standalone service running on the Edge Controller.
+
+Nfd-worker connects to the nfd-master server. The connection between nfd-workers and nfd-master is secured by the TLS certificates used in Edge Node enrollment: nfd-worker uses the Edge Node's certificates, while nfd-master generates its certificate from the Edge Controller root certificate. Nfd-worker reports the hardware features to nfd-master, which stores that data in the controller's MySQL database. The data can then be used as an EPA Feature requirement when defining and deploying an app on a node.
+
+#### Usage
+
+NFD is working automatically and does not require any user action to collect the features from nodes.
+The default version of nfd-worker downloaded by the ansible scripts during deployment is v0.5.0. It can be changed by setting the `nfd_version` variable in `roles/nfd/onprem/worker/defaults/main.yml`, as sketched below.
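+
+A sketch of the variable (the exact value format is an assumption based on the upstream release tags):
+```yaml
+# roles/nfd/onprem/worker/defaults/main.yml
+nfd_version: "v0.5.0"
+```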
+
+Features found by NFD are visible in the Edge Controller UI, in the node's NFD tab. When defining an edge application (Controller UI->APPLICATIONS->ADD APPLICATION), the `EPA Feature` fields can be used to define an NFD requirement for app deployment. E.g. if an application requires Multi-Precision Add-Carry Instruction Extensions (ADX), the user can set the EPA Feature Key to `nfd:cpu-cpuid.ADX` and the EPA Feature Value to `true`.
+
+![Sample application with NFD Feature required](nfd-images/nfd3_onp_app.png)
+
+Deployment of such an application will fail on nodes that do not provide this feature with this particular value. The list of features supported by the nfd-worker service can be found at https://github.com/kubernetes-sigs/node-feature-discovery#feature-discovery. Please note that the `nfd:` prefix always has to be added when used as an EPA Feature Key.
+
+## Reference
More details about NFD can be found here: https://github.com/Intel-Corp/node-feature-discovery
diff --git a/doc/enhanced-platform-awareness/openness-port-forward.md b/doc/enhanced-platform-awareness/openness-port-forward.md
new file mode 100644
index 00000000..aa2f82cd
--- /dev/null
+++ b/doc/enhanced-platform-awareness/openness-port-forward.md
@@ -0,0 +1,21 @@
+```text
+SPDX-License-Identifier: Apache-2.0
+Copyright (c) 2020 Intel Corporation
+```
+
+# Support for setting up port forwarding of a container in OpenNESS On-Prem mode
+
+- [Support for setting up port forwarding of a container in OpenNESS On-Prem mode](#support-for-setting-up-port-forwarding-of-a-container-in-openness-on-prem-mode)
+ - [Overview](#overview)
+ - [Usage](#usage)
+
+## Overview
+
+This feature enables the user to set up external network ports for their application (container), so that applications running on other hosts can connect to it.
+
+## Usage
+To take advantage of this feature, all you have to do is fill in the port and protocol fields during application creation.
+OpenNESS will pass that information down to Docker and, assuming all goes well, your ports will be exposed when you start the container, as sketched below.
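+
+Conceptually, this corresponds to Docker's port publishing; a hedged sketch of what OpenNESS effectively requests from Docker (the image name and port are hypothetical):
+```shell
+# expose TCP port 8080 of the container on the host
+docker run -d -p 8080:8080/tcp sample-edge-app:1.0
+```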
+
+For more details on the application onboarding (including other fields to set), please refer to
+[Application Onboarding Document](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/on-premises-applications-onboarding.md)
diff --git a/doc/enhanced-platform-awareness/openness-shared-storage.md b/doc/enhanced-platform-awareness/openness-shared-storage.md
new file mode 100644
index 00000000..a95b5606
--- /dev/null
+++ b/doc/enhanced-platform-awareness/openness-shared-storage.md
@@ -0,0 +1,32 @@
+```text
+SPDX-License-Identifier: Apache-2.0
+Copyright (c) 2020 Intel Corporation
+```
+
+# Shared storage for containers in OpenNESS On-Prem mode
+
+
+- [Shared storage for containers in OpenNESS On-Prem mode](#shared-storage-for-containers-in-openness-on-prem-mode)
+ - [Overview](#overview)
+ - [Usage](#usage)
+
+## Overview
+
+OpenNESS On-Prem mode provides the possibility to use the volume and bind mount storage models known from Docker. For detailed information please refer to: https://docs.docker.com/storage/volumes/ and https://docs.docker.com/storage/bind-mounts/. In OpenNESS On-Prem this is achieved by simply adding mount items to the container's `HostConfig` structure.
+
+## Usage
+
+In order to add a volume/bind mount to a node container application, the user should use the `EPA Feature` part of the application creation form on
+ControllerUI->APPLICATIONS->ADD APPLICATION, adding an item with the `mount` EPA Feature Key. The valid syntax of the EPA Feature Value in such a case is `...;type,source,target,readonly;...`, where:
+- multiple mounts can be added in one EPA Feature by delimiting them with semicolons
+- supported types are `volume` and `bind`, which correspond to the volume and bind mount known from Docker
+- source is:
+  - the volume name (the volume will be created automatically if it does not exist) for the `volume` type
+  - the location on the host machine for the `bind` type
+- target is the location inside the container
+- readonly: setting it to `true` will set the volume/bind mount to read-only mode
+- invalid entries will be skipped
+
+Example valid EPA Feature entry:
+- EPA Feature Key: `mount`
+- EPA Feature Value: `volume,testvolume,/testvol,false;bind,/home/testdir,/testbind,true`
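+
+For reference, the example above corresponds conceptually to the following Docker CLI invocation (a sketch; the image name is hypothetical):
+```shell
+docker run -d \
+  --mount type=volume,source=testvolume,target=/testvol \
+  --mount type=bind,source=/home/testdir,target=/testbind,readonly \
+  sample-edge-app:1.0
+```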
diff --git a/doc/enhanced-platform-awareness/openness-sriov-multiple-interfaces.md b/doc/enhanced-platform-awareness/openness-sriov-multiple-interfaces.md
index 06d196f1..c5447272 100644
--- a/doc/enhanced-platform-awareness/openness-sriov-multiple-interfaces.md
+++ b/doc/enhanced-platform-awareness/openness-sriov-multiple-interfaces.md
@@ -1,9 +1,9 @@
```text
SPDX-License-Identifier: Apache-2.0
-Copyright (c) 2019 Intel Corporation
+Copyright (c) 2019-2020 Intel Corporation
```
-# Multiple Interface and PCIe SRIOV support in OpenNESS
+# Multiple Interface and PCIe SRIOV support in OpenNESS
- [Multiple Interface and PCIe SRIOV support in OpenNESS](#multiple-interface-and-pcie-sriov-support-in-openness)
- [Overview](#overview)
@@ -12,25 +12,28 @@ Copyright (c) 2019 Intel Corporation
- [Overview of SR-IOV Device Plugin](#overview-of-sr-iov-device-plugin)
- [Details - Multiple Interface and PCIe SRIOV support in OpenNESS](#details---multiple-interface-and-pcie-sriov-support-in-openness)
- [Multus usage](#multus-usage)
- - [SRIOV](#sriov)
- - [Edgecontroller setup](#edgecontroller-setup)
- - [Edgenode setup](#edgenode-setup)
+ - [SRIOV for Network-Edge](#sriov-for-network-edge)
+ - [Edge Node SRIOV interfaces configuration](#edge-node-sriov-interfaces-configuration)
- [Usage](#usage)
+ - [SRIOV for On-Premises](#sriov-for-on-premises)
+ - [Edgenode Setup](#edgenode-setup)
+ - [Docker Container Deployment Usage](#docker-container-deployment-usage)
+ - [Virtual Machine Deployment Usage](#virtual-machine-deployment-usage)
- [Reference](#reference)
-## Overview
+## Overview
Edge deployments consist of both Network Functions and Applications. Cloud Native solutions like Kubernetes typically expose only one interface to the Application or Network function PODs. These interfaces are typically bridged interfaces. This means that Network Functions like Base station or Core network User plane functions and Applications like CDN etc. are limited by the default interface.
-To address this we need to enable two key networking features:
-1) Enable a Kubernetes like orchestration environment to provision more than one interface to the application and Network function PODs
-2) Enable the allocation of dedicated hardware interfaces to application and Network Function PODs
+To address this, we need to enable two key networking features:
+1) Enable a Kubernetes-like orchestration environment to provision more than one interface to the application and Network Function PODs
+2) Enable the allocation of dedicated hardware interfaces to application and Network Function PODs
-### Overview of Multus
+### Overview of Multus
To enable multiple interface support in PODs, OpenNESS Network Edge uses the Multus container network interface. Multus CNI is a container network interface (CNI) plugin for Kubernetes that enables the attachment of multiple network interfaces to pods. Typically, in Kubernetes each pod only has one network interface (apart from a loopback) – with Multus you can create a multi-homed pod that has multiple interfaces. This is accomplished by Multus acting as a “meta-plugin”, a CNI plugin that can call multiple other CNI plugins. Multus CNI follows the Kubernetes Network Custom Resource Definition De-facto Standard to provide a standardized method by which to specify the configurations for additional network interfaces. This standard is put forward by the Kubernetes Network Plumbing Working Group.
-Below is an illustration of the network interfaces attached to a pod, as provisioned by the Multus CNI. The diagram shows the pod with three interfaces: eth0, net0 and net1. eth0 connects to the Kubernetes cluster network to connect with the Kubernetes server/services (e.g. kubernetes api-server, kubelet and so on). net0 and net1 are additional network attachments and connect to other networks by using other CNI plugins (e.g. vlan/vxlan/ptp).
+Below is an illustration of the network interfaces attached to a pod, as provisioned by the Multus CNI. The diagram shows the pod with three interfaces: eth0, net0 and net1. eth0 connects to the Kubernetes cluster network to connect with the Kubernetes server/services (e.g. kubernetes api-server, kubelet and so on). net0 and net1 are additional network attachments and connect to other networks by using other CNI plugins (e.g. vlan/vxlan/ptp).
![Multus overview](multussriov-images/multus-pod-image.svg)
@@ -55,14 +58,13 @@ _Figure - SR-IOV Device plugin_
## Details - Multiple Interface and PCIe SRIOV support in OpenNESS
-The Multus role is enabled by default in ansible(`ne_controller.yml`):
-
-```
- - role: multus
+In Network Edge mode Multus CNI, which provides the possibility of attaching multiple interfaces to a pod, is deployed automatically when the `kubernetes_cnis` variable list (in the `group_vars/all.yml` file) contains at least two elements, e.g.:
+```yaml
+kubernetes_cnis:
+- kubeovn
+- sriov
```
->NOTE: Multus is installed only for Network Edge mode.
-
### Multus usage
[Custom resource definition](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#custom-resources) (CRD) is used to define additional network that can be used by Multus.
@@ -100,7 +102,12 @@ EOF
name: samplepod
annotations:
k8s.v1.cni.cncf.io/networks: macvlan
+ spec:
+ containers:
+ - name: multitoolcont
+ image: praqma/network-multitool
```
+
> NOTE: More networks can be added after a comma in the same annotation
4. To verify that the additional interface was configured run `ip a` in the deployed pod. The output should look similar to the following:
```bash
@@ -118,23 +125,18 @@ EOF
valid_lft forever preferred_lft forever
```
-### SRIOV
-
-#### Edgecontroller setup
-To install the OpenNESS controller with SR-IOV support please uncomment `role: sriov/master` in `ne_controller.yml` of Ansible scripts. Please also remember, that `role: multus` has to be enabled as well.
+### SRIOV for Network-Edge
+To deploy OpenNESS Network Edge with SR-IOV, `sriov` must be added to the `kubernetes_cnis` list in `group_vars/all.yml`:
```yaml
-- role: sriov/master
+kubernetes_cnis:
+- kubeovn
+- sriov
```
-#### Edgenode setup
-To install the OpenNESS node with SR-IOV support please uncomment `role: sriov/worker` in `ne_node.yml` of Ansible scripts.
-
-```yaml
-- role: sriov/worker
-```
+#### Edge Node SRIOV interfaces configuration
-For the installer to turn on the specified number of SR-IOV VFs for selected network interface of node, please provide that information in format `{interface_name: VF_NUM, ...}` in `sriov.network_interfaces` variable inside config files in `host_vars` ansible directory.
+For the installer to turn on the specified number of SR-IOV VFs for a selected network interface of a node, provide that information in the format `{interface_name: VF_NUM, ...}` in the `sriov.network_interfaces` variable inside the config files in the `host_vars` ansible directory (see the sketch below).
Due to technical reasons, each node has to be configured separately. Copy the example file `host_vars/node1.yml` and create a similar one for each node being deployed.
Please also remember that each node must be added to the Ansible inventory file `inventory.ini`.
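+
+A sketch of the relevant `host_vars/node1.yml` fragment (the interface names and VF counts are examples):
+```yaml
+sriov:
+  network_interfaces: {enp24s0f0: 4, enp24s0f1: 2}
+```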
@@ -177,42 +179,127 @@ spec:
> Note: Users can create network with different CRD if they need to.
1. To create a POD with an attached SR-IOV device, add the network annotation to the POD definition and `request` access to the SR-IOV capable device (`intel.com/intel_sriov_netdevice`):
+ ```yaml
+ apiVersion: v1
+ kind: Pod
+ metadata:
+ name: samplepod
+ annotations:
+ k8s.v1.cni.cncf.io/networks: sriov-openness
+ spec:
+ containers:
+ - name: samplecnt
+ image: centos/tools
+ resources:
+ requests:
+ intel.com/intel_sriov_netdevice: "1"
+ limits:
+ intel.com/intel_sriov_netdevice: "1"
+ command: ["sleep", "infinity"]
+ ```
+
+2. To verify that the additional interface was configured run `ip a` in the deployed pod. The output should look similar to the following:
+ ```bash
+ 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
+ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
+ inet 127.0.0.1/8 scope host lo
+ valid_lft forever preferred_lft forever
+ 41: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
+ link/ether aa:37:23:b5:63:bc brd ff:ff:ff:ff:ff:ff
+ inet 192.168.2.2/24 brd 192.168.2.255 scope global net1
+ valid_lft forever preferred_lft forever
+ 169: eth0@if170: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default
+ link/ether 0a:00:00:10:00:0b brd ff:ff:ff:ff:ff:ff link-netnsid 0
+ inet 10.16.0.10/16 brd 10.16.255.255 scope global eth0
+ valid_lft forever preferred_lft forever
+ ```
+
+### SRIOV for On-Premises
+Support for providing SR-IOV interfaces to containers and virtual machines is also available for OpenNESS On-Premises deployments.
+
+#### Edgenode Setup
+To install the OpenNESS node with SR-IOV support, the option `role: sriov_device_init/onprem` must be uncommented in the `edgenode_group` in `on_premises.yml` of the ansible scripts.
+
```yaml
- apiVersion: v1
- kind: Pod
- metadata:
- name: samplepod
- annotations:
- k8s.v1.cni.cncf.io/networks: sriov-openness
- spec:
- containers:
- - name: samplecnt
- image: centos/tools
- resources:
- requests:
- intel.com/intel_sriov_netdevice: "1"
+- role: sriov_device_init/onprem
```
-2. To verify that the additional interface was configured run `ip a` in the deployed pod. The output should look similar to the following:
+In order to configure the number of SR-IOV VFs on the node, the `network_interfaces` variable located under `sriov` in `host_vars/node01.yml` needs to be updated with the physical network interfaces on the node where the VFs should be created, along with the number of VFs to be created for each interface. This information should be provided in the format `{interface_name: number_of_vfs, ...}`.
+
+> Note: Remember that each node must be added to the ansible inventory file `inventory.ini` if it is to be deployed by the ansible scripts.
+
+To inform the installer of the number of VFs to configure for use with virtual machine deployments, the variable `vm_vf_ports` must be set, e.g. `vm_vf_ports: 4` tells the installer to configure four VFs for use with virtual machines. The installer will use this setting to assign that number of VFs to the kernel pci-stub driver so that they can be passed to virtual machines at deployment.
+
+When deploying containers in On-Premises mode, additional settings in the `host_vars/node01.yml` file are required so the installer can configure the VFs correctly. Each VF will be assigned to a Docker network configuration which will be created by the installer. To do this, the following variables must be configured:
+- `interface_subnets`: This contains the subnet information for the Docker network that the VF will be assigned to. Must be provided in the format `[subnet_ip/subnet_mask,...]`.
+- `interface_ips`: This contains the gateway IP address for the Docker network which will be assigned to the VF in the container. The address must be located within the subnet provided above. Must be provided in the format `[ip_address,...]`.
+- `network_name`: This contains the name of the Docker network to be created by the installer. Must be in the format `[name_of_network,...]`.
+
+An example `host_vars/node01.yml` which enables 4 VFs across two interfaces with two VFs configured for virtual machines and two VFs configured for containers is shown below:
+```yaml
+sriov:
+ network_interfaces: {enp24s0f0: 2, enp24s0f1: 2}
+ interface_subnets: [192.168.1.0/24, 192.168.2.0/24]
+ interface_ips: [192.168.1.1, 192.168.2.1]
+ network_name: [test_network1, test_network2]
+ vm_vf_ports: 2
+```
+
+> Note: When setting VFs for On-Premises mode the total number of VFs assigned to virtual machines and containers *must* match the total number of VFs requested, i.e. if requesting 8 VFs in total, the amount assigned to virtual machines and containers *must* also total to 8.
+
+#### Docker Container Deployment Usage
+
+To assign a VF to a Docker container at deployment, the following steps are required once the Edge Node has been set up by the ansible scripts with VFs created.
+
+1. On the Edge Node, run `docker network ls` to get the list of Docker networks available. These should include the Docker networks assigned to VFs by the installer.
+```bash
+# docker network ls
+NETWORK ID NAME DRIVER SCOPE
+74d9cb38603e bridge bridge local
+57411c1ca4c6 host host local
+b8910de9ad89 none null local
+c227f1b184bc test_network1 macvlan local
+3742881cf9ff test_network2 macvlan local
+```
+> Note: if you want to check the network settings for a specific network, simply run `docker network inspect <network_name>` on the Edge Node.
+2. Log into the controller UI and go to the Applications tab to create a new container application with the *EPA Feature Key* set to `sriov_nic` and the *EPA Feature Value* set to `<network_name>` (the name of the chosen Docker network).
+![SR-IOV On-Premises Container Deployment](multussriov-images/sriov-onprem-container.png)
+3. To verify that the additional interface was configured run `docker exec -it <container_name> ip a s` on the deployed container. The output should be similar to the following, with the new interface labelled as eth0.
```bash
- 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
+1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
- 41: net1: mtu 1500 qdisc mq state UP group default qlen 1000
- link/ether aa:37:23:b5:63:bc brd ff:ff:ff:ff:ff:ff
- inet 192.168.2.2/24 brd 192.168.2.255 scope global net1
- valid_lft forever preferred_lft forever
- 169: eth0@if170: mtu 1400 qdisc noqueue state UP group default
- link/ether 0a:00:00:10:00:0b brd ff:ff:ff:ff:ff:ff link-netnsid 0
- inet 10.16.0.10/16 brd 10.16.255.255 scope global eth0
- valid_lft forever preferred_lft forever
+111: eth0@if50: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state LOWERLAYERDOWN group default
+ link/ether 02:42:c0:a8:01:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
+ inet 192.168.1.2/24 brd 192.168.1.255 scope global eth0
+ valid_lft forever preferred_lft forever
+112: vEth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
+ link/ether 9a:09:f3:84:f9:7b brd ff:ff:ff:ff:ff:ff
+```
+
+#### Virtual Machine Deployment Usage
+
+To assign a VF to a virtual machine at deployment, the following steps are required on the Edge Node that has been set up by the ansible scripts with VFs created.
+
+1. On the Edge Node, get the list of PCI addresses bound to the pci-stub kernel driver by running `ls /sys/bus/pci/drivers/pci-stub`. The output should look similar to the following:
+```bash
+# ls /sys/bus/pci/drivers/pci-stub
+0000:18:02.0 0000:18:02.1 bind new_id remove_id uevent unbind
+```
+2. Log into the controller UI and go to the Applications tab to create a new virtual machine application with the *EPA Feature Key* set to `sriov_nic` and the *EPA Feature Value* set to `<pci_address>` (one of the addresses obtained in the previous step).
+![SR-IOV On-Premises Virtual Machine Deployment](multussriov-images/sriov-onprem-vm.png)
+3. To verify that the additional interface was configured run `virsh domiflist <vm_name>` on the Edge Node. The output should be similar to the following, with the hostdev device for the VF interface shown.
+```bash
+Interface Type Source Model MAC
+-------------------------------------------------------
+- network default virtio 52:54:00:39:3d:80
+- vhostuser - virtio 52:54:00:90:44:ee
+- hostdev - - 52:54:00:eb:f0:10
```
-## Reference
-For further details
+## Reference
+For further details
- SR-IOV CNI: https://github.com/intel/sriov-cni
- Multus: https://github.com/Intel-Corp/multus-cni
- SR-IOV network device plugin: https://github.com/intel/intel-device-plugins-for-kubernetes
-
-
diff --git a/doc/enhanced-platform-awareness/openness-tunable-exec.md b/doc/enhanced-platform-awareness/openness-tunable-exec.md
new file mode 100644
index 00000000..c09dc8c7
--- /dev/null
+++ b/doc/enhanced-platform-awareness/openness-tunable-exec.md
@@ -0,0 +1,18 @@
+```text
+SPDX-License-Identifier: Apache-2.0
+Copyright (c) 2020 Intel Corporation
+```
+
+# Support for overriding the startup command of a container in OpenNESS On-Prem mode
+
+## Overview
+
+This feature enables you to override the startup command for a container, thus removing the need to rebuild it just to make this change.
+It also allows you to create multiple containers using the same image but with each container using a different startup command.
+
+## Usage
+To take advantage of this feature, all you have to do is add a new 'EPA Feature Key' (on the application details page) called 'cmd',
+with the value of the command you want to run instead of the default. OpenNESS will pass that information down to Docker, and assuming all goes well (for example your command is correct / the path is valid), next time you start this container your command will be run.
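+
+An example entry (the command value is hypothetical):
+- EPA Feature Key: `cmd`
+- EPA Feature Value: `/bin/sh -c "sleep infinity"`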
+
+For more details on the application onboarding (including other fields to set), please refer to
+[Application Onboarding Document](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/on-premises-applications-onboarding.md)
diff --git a/doc/enhanced-platform-awareness/openness_hddl.md b/doc/enhanced-platform-awareness/openness_hddl.md
index 73c15a86..b6561e39 100644
--- a/doc/enhanced-platform-awareness/openness_hddl.md
+++ b/doc/enhanced-platform-awareness/openness_hddl.md
@@ -3,7 +3,7 @@ SPDX-License-Identifier: Apache-2.0
Copyright (c) 2019 Intel Corporation
```
-# Using Intel® Movidius™ Myriad™ X High Density Deep Learning (HDDL) solution in OpenNESS
+# Using Intel® Movidius™ Myriad™ X High Density Deep Learning (HDDL) solution in OpenNESS
- [Using Intel® Movidius™ Myriad™ X High Density Deep Learning (HDDL) solution in OpenNESS](#using-intel%c2%ae-movidius%e2%84%a2-myriad%e2%84%a2-x-high-density-deep-learning-hddl-solution-in-openness)
- [HDDL Introduction](#hddl-introduction)
@@ -16,7 +16,7 @@ Copyright (c) 2019 Intel Corporation
- [Summary](#summary)
- [Reference](#reference)
-Deployment of AI based Machine Learning (ML) applications on the edge is becoming more prevalent. Supporting hardware resources that accelerate AI/ML applications on the edge is key to improve the capacity of edge cloud deployment. It is also important to use CPU instruction set to execute AI/ML tasks when load is less. This paper explains these topics in the context of inference as a edge workload.
+Deployment of AI-based Machine Learning (ML) applications on the edge is becoming more prevalent. Supporting hardware resources that accelerate AI/ML applications on the edge is key to improving the capacity of edge cloud deployments. It is also important to use the CPU instruction set to execute AI/ML tasks when the load is low. This paper explains these topics in the context of inference as an edge workload.
## HDDL Introduction
Intel® Movidius™ Myriad™ X High Density Deep Learning solution integrates multiple Myriad™ X SoCs in a PCIe add-in card form factor or a module form factor to build a scalable, high capacity deep learning solution. It provides hardware and software reference for customers. The following figure shows the HDDL-R concept.
@@ -61,16 +61,48 @@ Further sections provide information on how to use the HDDL setup on OpenNESS On
### HDDL-R PCI card Ansible installation for OpenNESS OnPremise Edge
To run the OpenNESS package with HDDL-R functionality the feature needs to be enabled on Edge Node.
-To enable on the Edge Node set following in `onprem_node.yml` (Please note that the hddl role needs to be executed after openness/onprem/worker role):
+To enable it on the Edge Node, set the following in `on_premises.yml` (please note that the hddl precheck and role need to be executed after the openness/onprem/worker role):
```
-- role: hddl
+- include_tasks: ./roles/hddl/common/tasks/precheck.yml
+
+- role: hddl/onprem/worker
```
-Run setup script `deploy_onprem_node.sh`.
+Run the setup script `deploy_onprem.sh nodes`.
+
+NOTE: For this release, HDDL only supports the default OS kernel (3.10.0-957.el7.x86_64), so the `kernel_skip` flag (in `roles/machine_setup/custom_kernel/defaults/main.yml`) needs to be set to `true` before running the OpenNESS installation scripts.
+NOTE: The HDDL precheck verifies whether the current role and playbook variables satisfy the HDDL runtime preconditions.
+
+To check the HDDL service status on the Edge Node after deployment, the docker logs should look like:
+```
+docker ps
+CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
+ca7e9bf9e570 hddlservice:1.0 "./start.sh" 20 hours ago Up 20 hours openvino-hddl-service
+ea82cbc0d84a 004fddc9c299 "/usr/sbin/syslog-ng…" 21 hours ago Up 21 hours 601/tcp, 514/udp, 6514/tcp edgenode_syslog-ng_1
+3b4daaac1bc6 appliance:1.0 "sudo -E ./entrypoin…" 21 hours ago Up 21 hours 0.0.0.0:42101-42102->42101-42102/tcp, 192.168.122.1:42103->42103/tcp edgenode_appliance_1
+2262b4fa875b eaa:1.0 "sudo ./entrypoint_e…" 21 hours ago Up 21 hours 192.168.122.1:80->80/tcp, 192.168.122.1:443->443/tcp edgenode_eaa_1
+eedf4355ec98 edgednssvr:1.0 "sudo ./edgednssvr -…" 21 hours ago Up 19 hours 192.168.122.128:53->53/udp mec-app-edgednssvr
+5c94f7203023 nts:1.0 "sudo -E ./entrypoin…" 21 hours ago Up 19 hours nts
+docker logs --tail 20 ca7e9bf9e570
++-------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+
+| status | WAIT_TASK | WAIT_TASK | WAIT_TASK | WAIT_TASK | WAIT_TASK | RUNNING | WAIT_TASK | WAIT_TASK |
+| fps | 1.61 | 1.62 | 1.63 | 1.65 | 1.59 | 1.58 | 1.67 | 1.60 |
+| curGraph | icv-ped...sd-v2.0 | icv-ped...sd-v2.0 | icv-ped...sd-v2.0 | icv-ped...sd-v2.0 | icv-ped...sd-v2.0 | icv-ped...sd-v2.0 | icv-ped...sd-v2.0 | icv-ped...sd-v2.0 |
+| rPriority | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| loadTime | 20200330 05:34:34 | 20200330 05:34:35 | 20200330 05:34:35 | 20200330 05:34:35 | 20200330 05:34:35 | 20200330 05:34:35 | 20200330 05:34:35 | 20200330 05:34:35 |
+| runTime | 00:00:41 | 00:00:41 | 00:00:41 | 00:00:40 | 00:00:40 | 00:00:40 | 00:00:40 | 00:00:40 |
+| inference | 64 | 64 | 64 | 64 | 63 | 63 | 64 | 63 |
+| prevGraph | | | | | | | | |
+| loadTime | | | | | | | | |
+| unloadTime | | | | | | | | |
+| runTime | | | | | | | | |
+| inference | | | | | | | | |
+```
+
### Building Docker image with HDDL only or dynamic CPU/VPU usage
-In order to enable HDDL or mixed CPU/VPU operation by the containerized OpenVINO application set the `OPENVINO_ACCL` environmental variable to `HDDL` or `CPU_HDDL` inside producer application Dockerfile, located in Edge Apps repo - [edgeapps/openvino/producer](https://github.com/open-ness/edgeapps/blob/master/openvino/producer/Dockerfile). Build the image using the ./build-image.sh located in same directory. Making the image accessible by Edge Controller via HTTPs server is out of scope of this documentation - please refer to [Application Onboard Document](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/on-premises-applications-onboarding.md).
+In order to enable HDDL or mixed CPU/VPU operation by the containerized OpenVINO application, set the `OPENVINO_ACCL` environment variable to `HDDL` or `CPU_HDDL` inside the producer application Dockerfile (see the sketch below), located in the Edge Apps repo - [edgeapps/applications/openvino/producer](https://github.com/open-ness/edgeapps/blob/master/applications/openvino/producer/Dockerfile). Build the image using the `./build-image.sh` script located in the same directory. Making the image accessible to the Edge Controller via an HTTPS server is out of the scope of this documentation - please refer to [Application Onboard Document](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/on-premises-applications-onboarding.md).
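+
+A sketch of the relevant Dockerfile fragment (the exact Dockerfile layout is an assumption; only the variable and its values come from this document):
+```
+# enable mixed CPU/VPU inference
+ENV OPENVINO_ACCL=CPU_HDDL
+```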
### Deploying application with HDDL support
@@ -79,6 +111,5 @@ Application onboarding is out of scope of this document - please refer to [Appli
## Summary
Intel® Movidius™ Myriad™ X High Density Deep Learning solution integrates multiple Myriad™ X SoCs in a PCIe add-in card form factor or a module form factor to build a scalable, high capacity deep learning solution. OpenNESS provides a toolkit for customers to put together Deep learning solution at the edge. To take it further for efficient resource usage OpenNESS provides mechanism to use CPU or VPU depending on the load or any other criteria.
-## Reference
+## Reference
- [HDDL-R: Mouser Mustang-V100](https://www.mouser.ie/datasheet/2/763/Mustang-V100_brochure-1526472.pdf)
-
diff --git a/doc/getting-started/network-edge/controller-edge-node-setup.md b/doc/getting-started/network-edge/controller-edge-node-setup.md
index 245495cc..3100155e 100644
--- a/doc/getting-started/network-edge/controller-edge-node-setup.md
+++ b/doc/getting-started/network-edge/controller-edge-node-setup.md
@@ -1,6 +1,6 @@
```text
SPDX-License-Identifier: Apache-2.0
-Copyright (c) 2019 Intel Corporation
+Copyright (c) 2019-2020 Intel Corporation
```
# OpenNESS Network Edge: Controller and Edge node setup
@@ -10,9 +10,13 @@ Copyright (c) 2019 Intel Corporation
- [Network Edge Playbooks](#network-edge-playbooks)
- [Cleanup playbooks](#cleanup-playbooks)
- [Supported EPA features](#supported-epa-features)
+ - [VM support for Network Edge](#vm-support-for-network-edge)
- [Quickstart](#quickstart)
- [Application on-boarding](#application-on-boarding)
-- [Q&A](#qampa)
+ - [Kubernetes cluster networking plugins (Network Edge)](#kubernetes-cluster-networking-plugins-network-edge)
+ - [Selecting cluster networking plugins (CNI)](#selecting-cluster-networking-plugins-cni)
+ - [Adding additional interfaces to pods](#adding-additional-interfaces-to-pods)
+- [Q&A](#qa)
- [Configuring time](#configuring-time)
- [Setup static hostname](#setup-static-hostname)
- [Configuring inventory](#configuring-inventory)
@@ -22,6 +26,7 @@ Copyright (c) 2019 Intel Corporation
- [GitHub Token](#github-token)
- [Customize tag/commit/sha to checkout](#customize-tagcommitsha-to-checkout)
- [Installing Kubernetes Dashboard](#installing-kubernetes-dashboard)
+ - [Customization of kernel, grub parameters and tuned profile](#customization-of-kernel-grub-parameters-and-tuned-profile)
# Preconditions
@@ -45,11 +50,11 @@ For convenience, playbooks can be executed by running helper deployment scripts.
> NOTE: All nodes provided in the inventory may reboot during the installation.
-Convention for the scripts is: `action_mode[_group].sh`. Following scripts are available for Network Edge mode:
- - `deploy_ne.sh` - sets up cluster (first controller, then nodes)
- - `cleanup_ne.sh`
- - `deploy_ne_controller.sh`
- - `deploy_ne_node.sh`
+The convention for the scripts is `action_mode.sh [group]`. The following scripts are available for Network Edge mode:
+ - `deploy_ne.sh [ controller | nodes ]`
+ - `cleanup_ne.sh [ controller | nodes ] `
+
+To deploy only the Edge Nodes or only the Edge Controller, use `deploy_ne.sh nodes` or `deploy_ne.sh controller` respectively.
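+
+For example:
+```shell
+./deploy_ne.sh              # full deployment: controller first, then nodes
+./deploy_ne.sh controller   # Edge Controller only
+./deploy_ne.sh nodes        # Edge Nodes only
+```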
> NOTE: Playbooks for Edge Controller/Kubernetes master must be executed before playbooks for Edge Nodes.
@@ -57,7 +62,7 @@ Convention for the scripts is: `action_mode[_group].sh`. Following scripts are a
## Network Edge Playbooks
-The `ne_controller.yml`, `ne_node.yml` and `ne_cleanup.yml` files contain playbooks for Network Edge mode.
+The `network_edge.yml` and `network_edge_cleanup.yml` files contain playbooks for Network Edge mode.
Playbooks can be customized by (un)commenting roles that are optional and by customizing variables where needed.
### Cleanup playbooks
@@ -72,6 +77,9 @@ Note that there might be some leftovers created by installed software.
### Supported EPA features
A number of enhanced platform capabilities/features are available in OpenNESS for Network Edge. For the full list of features supported see [supported-epa.md](https://github.com/open-ness/specs/blob/master/doc/getting-started/network-edge/supported-epa.md), the documents referenced in the list provide detailed description of the features and step by step instructions how to enable them. The user is advised to get familiarized with the features available before executing the deployment playbooks.
+### VM support for Network Edge
+Support for VM deployment on OpenNESS for Network Edge is available and enabled by default. Certain configuration and prerequisites may need to be fulfilled in order to use all capabilities. The user is advised to get familiar with the VM support documentation before executing the deployment playbooks. Please see [openness-network-edge-vm-support.md](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/openness-network-edge-vm-support.md) for more information.
+
### Quickstart
The following is a complete set of actions that need to be completed to successfully set up OpenNESS cluster.
@@ -86,6 +94,93 @@ The following is a complete set of actions that need to be completed to successf
Please refer to [network-edge-applications-onboarding.md](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/network-edge-applications-onboarding.md) document for instructions on how to deploy edge applications for OpenNESS Network Edge.
+## Kubernetes cluster networking plugins (Network Edge)
+
+Kubernetes uses 3rd party networking plugins to provide [cluster networking](https://kubernetes.io/docs/concepts/cluster-administration/networking/).
+These plugins are based on [CNI (Container Network Interface) specification](https://github.com/containernetworking/cni).
+
+OpenNESS Experience Kits provides several ready-to-use Ansible roles for deploying CNIs.
+The following CNIs are currently supported:
+* [kube-ovn](https://github.com/alauda/kube-ovn)
+ * **Only as primary CNI**
+ * CIDR: 10.16.0.0/16
+* [flannel](https://github.com/coreos/flannel)
+ * IPAM: host-local
+ * CIDR: 10.244.0.0/16
+ * Network attachment definition: openness-flannel
+* [calico](https://github.com/projectcalico/cni-plugin)
+ * IPAM: host-local
+ * CIDR: 10.243.0.0/16
+ * Network attachment definition: openness-calico
+* [SR-IOV](https://github.com/intel/sriov-cni) (cannot be used as a standalone or primary CNI - [sriov setup](doc/enhanced-platform-awareness/openness-sriov-multiple-interfaces.md))
+
+Multiple CNIs can be requested to be set up for the cluster. To provide such functionality [Multus CNI](https://github.com/intel/multus-cni) is used.
+
+> NOTE: For guide on how to add new CNI role to the OpenNESS Experience Kits refer to [the OpenNESS Experience Kits guide](../openness-experience-kits.md#adding-new-cni-plugins-for-kubernetes-network-edge)
+
+### Selecting cluster networking plugins (CNI)
+
+> Note: When using a non-default CNI (the default is kube-ovn), remember to add the CNI's networks (the CIDR for pods and any other CIDRs used by the CNI) to `proxy_os_noproxy` in `group_vars/all.yml`
+
+In order to customize which CNIs are to be deployed for the Network Edge cluster, edit the `kubernetes_cnis` variable in the `group_vars/all.yml` file.
+CNIs are applied in the requested order.
+By default `kube-ovn` and `calico` are set up (with `multus` in between):
+```yaml
+kubernetes_cnis:
+- kubeovn
+- calico
+```
+
+For example, to add SR-IOV, just add another item to the list. That will result in the following CNIs being applied: `kube-ovn`, `multus`, `calico` and `sriov`.
+```yaml
+kubernetes_cnis:
+- kubeovn
+- calico
+- sriov
+```
+
+### Adding additional interfaces to pods
+
+In order to add additional interfaces from secondary CNIs, an annotation is required.
+Below is an example pod yaml file for a scenario with `kube-ovn` as a primary CNI and `calico` and `flannel` as additional CNIs.
+Multus will create an interface named `calico` using network attachment definition `openness-calico` and interface `flannel` using network attachment definition `openness-flannel`:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: cni-test-pod
+ annotations:
+ k8s.v1.cni.cncf.io/networks: openness-calico@calico, openness-flannel@flannel
+spec:
+ containers:
+ - name: cni-test-pod
+ image: docker.io/centos/tools:latest
+ command:
+ - /sbin/init
+```
+
+Below is the output (some lines were cut out for readability) of the `ip a` command executed in the pod.
+The following interfaces are available: `calico@if142`, `flannel@if143` and `eth0@if141` (`kubeovn`).
+```
+# kubectl exec -ti cni-test-pod ip a
+
+1: lo: <LOOPBACK,UP,LOWER_UP>
+ inet 127.0.0.1/8 scope host lo
+
+2: tunl0@NONE: <NOARP>
+ link/ipip 0.0.0.0 brd 0.0.0.0
+
+4: calico@if142: <BROADCAST,MULTICAST,UP,LOWER_UP>
+ inet 10.243.0.3/32 scope global calico
+
+6: flannel@if143: <BROADCAST,MULTICAST,UP,LOWER_UP>
+ inet 10.244.0.3/16 scope global flannel
+
+140: eth0@if141: <BROADCAST,MULTICAST,UP,LOWER_UP>
+ inet 10.16.0.5/16 brd 10.16.255.255 scope global eth0
+```
+
# Q&A
## Configuring time
@@ -258,7 +353,7 @@ proxy_os_ftp: "http://proxy.example.com:3128"
proxy_os_noproxy: "localhost,127.0.0.1,10.244.0.0/24,10.96.0.0/12,192.168.0.1/24"
```
> NOTE: Ensure the no_proxy environment variable in your profile is set
->
+>
> export no_proxy="localhost,127.0.0.1,10.244.0.0/24,10.96.0.0/12,192.168.0.1/24"
## Setting Git
@@ -341,7 +436,7 @@ Follow the below steps to get the Kubernetes dashboard installed after OpenNESS
7. Open the dashboard from the browser at `https://<ip>:<port>/`, use the port that was noted in the previous steps
-> **NOTE**: Firefox browser can be an alternative to Chrome and Internet Explorer in case the dashboard web page is blocked due to certification issue.
+> **NOTE**: Firefox browser can be an alternative to Chrome and Internet Explorer in case the dashboard web page is blocked due to certification issue.
8. Capture the bearer token using this command
@@ -354,7 +449,7 @@ Paste the Token in the browser to log in as shown in this diagram
![Dashboard Login](controller-edge-node-setup-images/dashboard-login.png)
_Figure - Kubernetes Dashboard Login_
-9. Go to the OpenNESS Controller installation directory and edit the `.env` file with the dashboard link `INFRASTRUCTURE_UI_URL=https://:/#` in order to get it integrated with the OpenNESS controller UI (note the `#` symbole at the end of the URL)
+9. Go to the OpenNESS Controller installation directory and edit the `.env` file with the dashboard link `INFRASTRUCTURE_UI_URL=https://<ip>:<port>/` in order to get it integrated with the OpenNESS controller UI
```shell
cd /opt/edgecontroller/
@@ -370,3 +465,8 @@ _Figure - Kubernetes Dashboard Login_
11. The OpenNESS controller landing page is accessible at `http://<LANDING_UI_URL>/`.
> **NOTE**: `LANDING_UI_URL` can be retrieved from `.env` file.
+
+
+## Customization of kernel, grub parameters and tuned profile
+
+OpenNESS Experience Kits provides easy way to customize kernel version, grub parameters and tuned profile - for more information refer to [the OpenNESS Experience Kits guide](https://github.com/open-ness/specs/blob/master/doc/getting-started/openness-experience-kits.md).
diff --git a/doc/getting-started/on-premises/controller-edge-node-setup-images/AddingInterfaceToNTS.png b/doc/getting-started/on-premises/controller-edge-node-setup-images/AddingInterfaceToNTS.png
new file mode 100644
index 00000000..58612072
Binary files /dev/null and b/doc/getting-started/on-premises/controller-edge-node-setup-images/AddingInterfaceToNTS.png differ
diff --git a/doc/getting-started/on-premises/controller-edge-node-setup-images/AddingInterfaceToNTS1.png b/doc/getting-started/on-premises/controller-edge-node-setup-images/AddingInterfaceToNTS1.png
new file mode 100644
index 00000000..81d52a52
Binary files /dev/null and b/doc/getting-started/on-premises/controller-edge-node-setup-images/AddingInterfaceToNTS1.png differ
diff --git a/doc/getting-started/on-premises/controller-edge-node-setup-images/AddingInterfaceToNTS2.png b/doc/getting-started/on-premises/controller-edge-node-setup-images/AddingInterfaceToNTS2.png
new file mode 100644
index 00000000..ea753fe3
Binary files /dev/null and b/doc/getting-started/on-premises/controller-edge-node-setup-images/AddingInterfaceToNTS2.png differ
diff --git a/doc/getting-started/on-premises/controller-edge-node-setup-images/CheckingNodeInterfaces.png b/doc/getting-started/on-premises/controller-edge-node-setup-images/CheckingNodeInterfaces.png
new file mode 100644
index 00000000..7c975778
Binary files /dev/null and b/doc/getting-started/on-premises/controller-edge-node-setup-images/CheckingNodeInterfaces.png differ
diff --git a/doc/getting-started/on-premises/controller-edge-node-setup-images/CheckingNodeInterfaces1.png b/doc/getting-started/on-premises/controller-edge-node-setup-images/CheckingNodeInterfaces1.png
new file mode 100644
index 00000000..014466cc
Binary files /dev/null and b/doc/getting-started/on-premises/controller-edge-node-setup-images/CheckingNodeInterfaces1.png differ
diff --git a/doc/getting-started/on-premises/controller-edge-node-setup-images/Enroll1.png b/doc/getting-started/on-premises/controller-edge-node-setup-images/Enroll1.png
new file mode 100644
index 00000000..4691a9a7
Binary files /dev/null and b/doc/getting-started/on-premises/controller-edge-node-setup-images/Enroll1.png differ
diff --git a/doc/getting-started/on-premises/controller-edge-node-setup-images/Enroll2.png b/doc/getting-started/on-premises/controller-edge-node-setup-images/Enroll2.png
new file mode 100644
index 00000000..8d254aac
Binary files /dev/null and b/doc/getting-started/on-premises/controller-edge-node-setup-images/Enroll2.png differ
diff --git a/doc/getting-started/on-premises/controller-edge-node-setup-images/Enroll3.png b/doc/getting-started/on-premises/controller-edge-node-setup-images/Enroll3.png
new file mode 100644
index 00000000..84d71dab
Binary files /dev/null and b/doc/getting-started/on-premises/controller-edge-node-setup-images/Enroll3.png differ
diff --git a/doc/getting-started/on-premises/controller-edge-node-setup-images/LBP_rule.png b/doc/getting-started/on-premises/controller-edge-node-setup-images/LBP_rule.png
new file mode 100644
index 00000000..b089c6eb
Binary files /dev/null and b/doc/getting-started/on-premises/controller-edge-node-setup-images/LBP_rule.png differ
diff --git a/doc/getting-started/on-premises/controller-edge-node-setup-images/LBP_set_up.png b/doc/getting-started/on-premises/controller-edge-node-setup-images/LBP_set_up.png
new file mode 100644
index 00000000..ce105605
Binary files /dev/null and b/doc/getting-started/on-premises/controller-edge-node-setup-images/LBP_set_up.png differ
diff --git a/doc/getting-started/on-premises/controller-edge-node-setup-images/StartingNTS.png b/doc/getting-started/on-premises/controller-edge-node-setup-images/StartingNTS.png
new file mode 100644
index 00000000..8bc99773
Binary files /dev/null and b/doc/getting-started/on-premises/controller-edge-node-setup-images/StartingNTS.png differ
diff --git a/doc/getting-started/on-premises/controller-edge-node-setup-images/StartingNTS2.png b/doc/getting-started/on-premises/controller-edge-node-setup-images/StartingNTS2.png
new file mode 100644
index 00000000..dfc4f8c2
Binary files /dev/null and b/doc/getting-started/on-premises/controller-edge-node-setup-images/StartingNTS2.png differ
diff --git a/doc/getting-started/on-premises/controller-edge-node-setup-images/controller_ui_landing.png b/doc/getting-started/on-premises/controller-edge-node-setup-images/controller_ui_landing.png
index 505972a8..231b9c74 100644
Binary files a/doc/getting-started/on-premises/controller-edge-node-setup-images/controller_ui_landing.png and b/doc/getting-started/on-premises/controller-edge-node-setup-images/controller_ui_landing.png differ
diff --git a/doc/getting-started/on-premises/controller-edge-node-setup-images/login.png b/doc/getting-started/on-premises/controller-edge-node-setup-images/login.png
new file mode 100644
index 00000000..4a533002
Binary files /dev/null and b/doc/getting-started/on-premises/controller-edge-node-setup-images/login.png differ
diff --git a/doc/getting-started/on-premises/controller-edge-node-setup.md b/doc/getting-started/on-premises/controller-edge-node-setup.md
index 8a229da9..eff72bb8 100644
--- a/doc/getting-started/on-premises/controller-edge-node-setup.md
+++ b/doc/getting-started/on-premises/controller-edge-node-setup.md
@@ -1,6 +1,6 @@
```text
SPDX-License-Identifier: Apache-2.0
-Copyright (c) 2019 Intel Corporation
+Copyright (c) 2019-2020 Intel Corporation
```
# OpenNESS OnPremises: Controller and Edge node setup
@@ -11,18 +11,24 @@ Copyright (c) 2019 Intel Corporation
- [Running playbooks](#running-playbooks)
- [On Premise Playbooks](#on-premise-playbooks)
- [Cleanup playbooks](#cleanup-playbooks)
+ - [Dataplanes](#dataplanes)
- [Manual steps](#manual-steps)
- [Enrolling Nodes with Controller](#enrolling-nodes-with-controller)
- [First Login](#first-login)
- - [Enrollment](#enrollment)
+ - [Manual enrollment](#manual-enrollment)
- [NTS Configuration](#nts-configuration)
- [Displaying Edge Node's Interfaces](#displaying-edge-nodes-interfaces)
- - [Creating Traffic Policy](#creating-traffic-policy)
- - [Adding Traffic Policy to Interface](#adding-traffic-policy-to-interface)
- [Configuring Interface](#configuring-interface)
- [Starting NTS](#starting-nts)
-- [Q&A](#qampa)
+ - [Preparing set-up for Local Breakout Point (LBP)](#preparing-set-up-for-local-breakout-point-lbp)
+ - [Controller and Edge Node deployment](#controller-and-edge-node-deployment)
+ - [Network configuration](#network-configuration)
+ - [Configuration in Controller](#configuration-in-controller)
+ - [Verification](#verification)
+ - [Configuring DNS](#configuring-dns)
+- [Q&A](#qa)
- [Configuring time](#configuring-time)
+ - [Setup static hostname](#setup-static-hostname)
- [Configuring inventory](#configuring-inventory)
- [Exchanging SSH keys with hosts](#exchanging-ssh-keys-with-hosts)
- [Setting proxy](#setting-proxy)
@@ -30,6 +36,7 @@ Copyright (c) 2019 Intel Corporation
- [GitHub Token](#github-token)
- [Customize tag/commit/sha to checkout](#customize-tagcommitsha-to-checkout)
- [Obtaining Edge Node's serial with command](#obtaining-edge-nodes-serial-with-command)
+ - [Customization of kernel, grub parameters and tuned profile](#customization-of-kernel-grub-parameters-and-tuned-profile)
# Purpose
@@ -41,6 +48,8 @@ In order to use the playbooks several preconditions must be fulfilled:
- Time must be configured on all hosts (refer to [Configuring time](#configuring-time))
+- Hosts for the Edge Controller and Edge Nodes must have a proper and unique hostname (not `localhost`). This hostname must be specified in `/etc/hosts` (refer to [Setup static hostname](#Setup-static-hostname)).
+
- Inventory must be configured (refer to [Configuring inventory](#configuring-inventory))
- SSH keys must be exchanged with hosts (refer to [Exchanging SSH keys with hosts](#Exchanging-SSH-keys-with-hosts))
@@ -52,10 +61,11 @@ In order to use the playbooks several preconditions must be fulfilled:
# Running playbooks
For convenience, playbooks can be executed by running helper deploy scripts.
-Convention for the scripts is: `action_mode[_group].sh`. Following scripts are available for On Premise mode:
- - `cleanup_onprem.sh`
- - `deploy_onprem_controller.sh`
- - `deploy_onprem_node.sh`
+The convention for the scripts is `action_mode.sh [group]`. The following scripts are available for On Premise mode:
+ - `cleanup_onprem.sh [ controller | nodes ]`
+ - `deploy_onprem.sh [ controller | nodes ]`
+
+To deploy only the Edge Nodes or only the Edge Controller, use `deploy_onprem.sh nodes` or `deploy_onprem.sh controller` respectively.
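+
+For example:
+```shell
+./deploy_onprem.sh controller   # Edge Controller only
+./deploy_onprem.sh nodes        # Edge Nodes only
+```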
> NOTE: All nodes provided in the inventory might get rebooted during the installation.
@@ -65,7 +75,7 @@ Convention for the scripts is: `action_mode[_group].sh`. Following scripts are a
## On Premise Playbooks
-`onprem_controller.yml`, `onprem_node.yml` and `onprem_cleanup.yml` contain playbooks for On Premise mode. Playbooks can be customized by (un)commenting roles that are optional and by customizing variables where needed.
+`on_premises.yml` and `on_premises_cleanup.yml` contain playbooks for On Premise mode. Playbooks can be customized by (un)commenting roles that are optional and by customizing variables where needed.
### Cleanup playbooks
@@ -76,6 +86,17 @@ For example, when installing Docker - RPM repository is added and Docker install
Note that there might be some leftovers created by installed software.
+### Dataplanes
+OpenNESS On Premises delivers two dataplanes to be used:
+* NTS (default)
+* OVS/OVN
+
+In order to use OVS/OVN instead of NTS, the `onprem_dataplane` variable must be edited in the `group_vars/all.yml` file before running the deployment scripts:
+```yaml
+onprem_dataplane: "ovncni"
+```
+> NOTE: When deploying a virtual machine with the OVNCNI dataplane, `/etc/resolv.conf` must be edited to use the `192.168.122.1` nameserver.
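+
+A minimal sketch of the resulting `/etc/resolv.conf` inside the VM:
+```
+nameserver 192.168.122.1
+```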
+
## Manual steps
> *Ansible Controller* is a machine with [openness-experience-kits](https://github.com/open-ness/openness-experience-kits) repo and it's used to configure *Edge Controller* and *Edge Nodes*. Please be careful not to confuse them.
@@ -95,7 +116,7 @@ Prerequisites (*Ansible Controller*):
The following steps need to be done for successful login:
1. Open internet browser on *Ansible Controller*.
-2. Type in `http://:3000` in address bar.
+2. Type in `http://<LANDING_UI_URL>/` in the address bar. `LANDING_UI_URL` can be retrieved from the `.env` file.
3. Click on "INFRASTRUCTURE MANAGER" button.
![Landing page](controller-edge-node-setup-images/controller_ui_landing.png)
@@ -103,9 +124,11 @@ The following steps need to be done for successful login:
4. Enter your username and password (default username: admin) (the password to be used is the password provided during Controller bring-up with the **cce_admin_password** in *openness-experience-kits/group_vars/all.yml*).
5. Click on "SIGN IN" button.
-![Login screen](../../applications-onboard/howto-images/login.png)
+![Login screen](controller-edge-node-setup-images/login.png)
+
+#### Manual enrollment
-#### Enrollment
+> NOTE: The following steps are now part of the Ansible-automated platform setup. The manual steps are left here for reference.
In order for the Controller and Edge Node to work together, the Edge Node needs to enroll with the Controller. The Edge Node will continuously try to connect to the Controller until its serial key is recognized by the Controller.
@@ -119,17 +142,17 @@ In order to enroll and add new Edge Node to be managed by the Controller the fol
2. Navigate to 'NODES' tab.
3. Click on 'ADD EDGE NODE' button.
-![Add Edge Node 1](../../applications-onboard/howto-images/Enroll1.png)
+![Add Edge Node 1](controller-edge-node-setup-images/Enroll1.png)
4. Enter previously obtained Edge Node Serial Key into 'Serial*' field (Step 1).
5. Enter the name and location of Edge Node.
6. Press 'ADD EDGE NODE'.
-![Add Edge Node 2](../../applications-onboard/howto-images/Enroll2.png)
+![Add Edge Node 2](controller-edge-node-setup-images/Enroll2.png)
7. Check that your Edge Node is visible under 'List of Edge Nodes'.
-![Add Edge Node 3](../../applications-onboard/howto-images/Enroll3.png)
+![Add Edge Node 3](controller-edge-node-setup-images/Enroll3.png)
### NTS Configuration
OpenNESS data-plane interface configuration.
@@ -144,72 +167,12 @@ To check the interfaces available on the Edge Node execute following steps:
2. Find your Edge Node on the list.
3. Click 'EDIT'.
-![Check Edge Node Interfaces 1](../../applications-onboard/howto-images/CheckingNodeInterfaces.png)
+![Check Edge Node Interfaces 1](controller-edge-node-setup-images/CheckingNodeInterfaces.png)
4. Navigate to 'INTERFACES' tab.
5. Available interfaces are listed.
-![Check Edge Node Interfaces 2](../../applications-onboard/howto-images/CheckingNodeInterfaces1.png)
-
-#### Creating Traffic Policy
-Prerequisites:
-- Enrollment phase completed successfully.
-- User is logged in to UI.
-
-The steps to create a sample traffic policy are as follows:
-1. From UI navigate to 'TRAFFIC POLICIES' tab.
-2. Click 'ADD POLICY'.
-
-> Note: This specific traffic policy is only an example.
-
-![Creating Traffic Policy 1](../../applications-onboard/howto-images/CreatingTrafficPolicy.png)
-
-3. Give policy a name.
-4. Click 'ADD' next to 'Traffic Rules*' field.
-5. Fill in following fields:
- - Description: "Sample Description"
- - Priority: 99
- - Source -> IP Filter -> IP Address: 1.1.1.1
- - Source -> IP Filter -> Mask: 24
- - Source -> IP Filter -> Begin Port: 10
- - Source -> IP Filter -> End Port: 20
- - Source -> IP Filter -> Protocol: all
- - Target -> Description: "Sample Description"
- - Target -> Action: accept
-6. Click on "CREATE".
-
-![Creating Traffic Policy 2](../../applications-onboard/howto-images/CreatingTrafficPolicy2.png)
-
-After creating Traffic Policy it will be visible under 'List of Traffic Policies' in 'TRAFFIC POLICIES' tab.
-
-![Creating Traffic Policy 3](../../applications-onboard/howto-images/CreatingTrafficPolicy3.png)
-
-#### Adding Traffic Policy to Interface
-Prerequisites:
-- Enrollment phase completed successfully.
-- User is logged in to UI.
-- Traffic Policy Created.
-
-To add a previously created traffic policy to an interface available on Edge Node the following steps need to be completed:
-1. From UI navigate to "NODES" tab.
-2. Find Edge Node on the 'List Of Edge Nodes'.
-3. Click "EDIT".
-
-> Note: This step is instructional only, users can decide if they need/want a traffic policy designated for their interface, or if they desire traffic policy designated per application instead.
-
-![Adding Traffic Policy To Interface 1](../../applications-onboard/howto-images/AddingTrafficPolicyToInterface1.png)
-
-4. Navigate to "INTERFACES" tab.
-5. Find desired interface which will be used to add traffic policy.
-6. Click 'ADD' under 'Traffic Policy' column for that interface.
-7. A window titled 'Assign Traffic Policy to interface' will pop-up. Select a previously created traffic policy.
-8. Click on 'ASSIGN'.
-
-![Adding Traffic Policy To Interface 2](../../applications-onboard/howto-images/AddingTrafficPolicyToInterface2.png)
-
-On success the user is able to see 'EDIT' and 'REMOVE POLICY' buttons under 'Traffic Policy' column for desired interface. These buttons can be respectively used for editing and removing traffic rule policy on that interface.
-
-![Adding Traffic Policy To Interface 3](../../applications-onboard/howto-images/AddingTrafficPolicyToInterface3.png)
+![Check Edge Node Interfaces 2](controller-edge-node-setup-images/CheckingNodeInterfaces1.png)
#### Configuring Interface
Prerequisites:
@@ -223,7 +186,9 @@ In order to configure interface available on the Edge Node for the NTS the follo
| WARNING: do not modify a NIC which is used for Internet connection! |
| --- |
-![Configuring Interface 1](../../applications-onboard/howto-images/AddingInterfaceToNTS.png)
+> Note: For adding a traffic policy to an interface, refer to the following section in on-premises-applications-onboarding.md: [Instruction to create Traffic Policy and assign it to Interface](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/on-premises-applications-onboarding.md#instruction-to-create-traffic-policy-and-assign-it-to-interface)
+
+![Configuring Interface 1](controller-edge-node-setup-images/AddingInterfaceToNTS.png)
1. A window titled "Edit Interface" will pop up. The following fields need to be set:
- Driver: userspace
@@ -232,11 +197,11 @@ In order to configure interface available on the Edge Node for the NTS the follo
- In case of two interfaces being configured, one for 'Upstream' another for 'Downstream', the fallback interface for 'Upstream' is the 'Downstream' interface and vice versa.
2. Click 'SAVE'.
-![Configuring Interface 2](../../applications-onboard/howto-images/AddingInterfaceToNTS1.png)
+![Configuring Interface 2](controller-edge-node-setup-images/AddingInterfaceToNTS1.png)
3. The interface's 'Driver' and 'Type' columns will reflect changes made.
-![Configuring Interface 3](../../applications-onboard/howto-images/AddingInterfaceToNTS2.png)
+![Configuring Interface 3](controller-edge-node-setup-images/AddingInterfaceToNTS2.png)
#### Starting NTS
Prerequisite:
@@ -253,16 +218,162 @@ Once the interfaces are configured accordingly the following steps need to be do
1. From UI navigate to 'INTERFACES' tab of the Edge Node.
2. Click 'COMMIT CHANGES'
-![Starting NTS 1](../../applications-onboard/howto-images/StartingNTS.png)
+![Starting NTS 1](controller-edge-node-setup-images/StartingNTS.png)
3. NTS will start
-![Starting NTS 2](../../applications-onboard/howto-images/StartingNTS2.png)
+![Starting NTS 2](controller-edge-node-setup-images/StartingNTS2.png)
4. Make sure that the **nts** and **edgednssvr** containers are running on an *Edge Node* machine:
![Starting NTS 3](controller-edge-node-setup-images/StartingNTS3.png)
+#### Preparing set-up for Local Breakout Point (LBP)
+
+In a set-up with NTS used as the dataplane, it is possible to prepare the following LBP configuration:
+- LBP set-up requirements: five machines are used as the following set-up elements
+ - Controller
+ - Edge Node
+ - UE
+ - LBP
+ - EPC
+- Edge Node is connected via 10Gb NICs to the UE, LBP, and EPC
+- network configuration of all elements is shown in the diagram:
+
+ ![LBP set-up ](controller-edge-node-setup-images/LBP_set_up.png "LBP set-up")
+
+- configuration of interfaces for each server is done in the Controller
+- ARP configuration is done on the servers
+- IP addresses 10.103.104.X are addresses of machines from the local subnet used for building the set-up
+- IP addresses 192.168.100.X are addresses assigned for LBP test purposes
+
+##### Controller and Edge Node deployment
+
+Build and deploy the Controller and Edge Node using the Ansible scripts and the instructions in this document.
+
+##### Network configuration
+
+Find the interface using one of the following commands:
+- `ifconfig`
+or
+- `ip a`
+
+The command `ethtool -p <interface>` can be used to identify a port (the port on the physical machine will start to blink, making it possible to verify that it is the correct port).
+
+Use the following commands to configure the network on the servers in the set-up:
+- UE
+ - `ifconfig <interface> 192.168.100.1/24 up`
+ - `arp -s 192.168.100.2 <MAC address>` (e.g. `arp -s 192.168.100.2 3c:fd:fe:a7:c0:eb`)
+- LBP
+ - `ifconfig <interface> 192.168.100.2/24 up`
+ - `arp -s 192.168.100.1 <MAC address>` (e.g. `arp -s 192.168.100.1 90:e2:ba:ac:6a:d5`)
+- EPC
+ - `ifconfig <interface> 192.168.100.3/24 up`
+
+
+Alternatively to `ifconfig`, the configuration can be done with the `ip` command:
+`ip address add <address>/<mask> dev <interface>` (e.g. `ip address add 192.168.100.1/24 dev enp23s0f0`)
+
+##### Configuration in Controller
+
+Add traffic policy with rule for LBP:
+
+- Name: LBP rule
+- Priority: 99
+- IP filter:
+ - IP address: 192.168.100.2
+ - Mask: 32
+ - Protocol: all
+- Target:
+ - Action: accept
+- MAC Modifier
+ - MAC address: 3c:fd:fe:a7:c0:eb
+
+![LBP rule adding](controller-edge-node-setup-images/LBP_rule.png)
+
+Update interfaces:
+- edit the interfaces to the UE, LBP, and EPC as shown in the diagram (Interface set-up)
+- add the Traffic policy (LBP rule) to the LBP interface (0000:88:00.2)
+
+After configuring NTS, send a ping (it is needed by NTS) from the UE to the EPC (`ping 192.168.100.3`).
+
+##### Verification
+
+1. NES client
+ - SSH to the UE machine and ping the LBP (`ping 192.168.100.2`)
+ - SSH to the Edge Node server
+ - Set the following environment variable: `export NES_SERVER_CONF=/var/lib/appliance/nts/nts.cfg`
+ - Run the NES client: `/internal/nts/client/build/nes_client`
+ - connect to NTS using the command `connect`
+ - use the command `route list` to verify the traffic rule for LBP
+ - use the command `show all` to verify the packet flow (received and sent packet counts should increase)
+ - use the command `quit` to exit (use `help` for information on available commands)
+
+ ```shell
+ # connect
+ Connection is established.
+ # route list
+ +-------+------------+--------------------+--------------------+--------------------+--------------------+-------------+-------------+--------+----------------------+
+ | ID | PRIO | ENB IP | EPC IP | UE IP | SRV IP | UE PORT | SRV PORT | ENCAP | Destination |
+ +-------+------------+--------------------+--------------------+--------------------+--------------------+-------------+-------------+--------+----------------------+
+ | 0 | 99 | n/a | n/a | 192.168.100.2/32 | * | * | * | IP | 3c:fd:fe:a7:c0:eb |
+ | 1 | 99 | n/a | n/a | * | 192.168.100.2/32 | * | * | IP | 3c:fd:fe:a7:c0:eb |
+ | 2 | 5 | n/a | n/a | * | 53.53.53.53/32 | * | * | IP | 8a:68:41:df:fa:d5 |
+ | 3 | 5 | n/a | n/a | 53.53.53.53/32 | * | * | * | IP | 8a:68:41:df:fa:d5 |
+ | 4 | 5 | * | * | * | 53.53.53.53/32 | * | * | GTPU | 8a:68:41:df:fa:d5 |
+ | 5 | 5 | * | * | 53.53.53.53/32 | * | * | * | GTPU | 8a:68:41:df:fa:d5 |
+ +-------+------------+--------------------+--------------------+--------------------+--------------------+-------------+-------------+--------+----------------------+
+ # show all
+ ID: Name: Received: Sent: Dropped(TX full): Dropped(HW): IP Fragmented(Forwarded):
+ 0 0000:88:00.1 1303 pkts 776 pkts 0 pkts 0 pkts 0 pkts
+ (3c:fd:fe:b2:44:b1) 127432 bytes 75820 bytes 0 bytes
+ 1 0000:88:00.2 1261 pkts 1261 pkts 0 pkts 0 pkts 0 pkts
+ (3c:fd:fe:b2:44:b2) 123578 bytes 123578 bytes 0 bytes
+ 2 0000:88:00.3 40 pkts 42 pkts 0 pkts 0 pkts 0 pkts
+ (3c:fd:fe:b2:44:b3) 3692 bytes 3854 bytes 0 bytes
+ 3 KNI 0 pkts 0 pkts 0 pkts 0 pkts 0 pkts
+ (not registered) 0 bytes 0 bytes 0 bytes
+ # show all
+ ID: Name: Received: Sent: Dropped(TX full): Dropped(HW): IP Fragmented(Forwarded):
+ 0 0000:88:00.1 1304 pkts 777 pkts 0 pkts 0 pkts 0 pkts
+ (3c:fd:fe:b2:44:b1) 127530 bytes 75918 bytes 0 bytes
+ 1 0000:88:00.2 1262 pkts 1262 pkts 0 pkts 0 pkts 0 pkts
+ (3c:fd:fe:b2:44:b2) 123676 bytes 123676 bytes 0 bytes
+ 2 0000:88:00.3 40 pkts 42 pkts 0 pkts 0 pkts 0 pkts
+ (3c:fd:fe:b2:44:b3) 3692 bytes 3854 bytes 0 bytes
+ 3 KNI 0 pkts 0 pkts 0 pkts 0 pkts 0 pkts
+ (not registered) 0 bytes 0 bytes 0 bytes
+ ```
+
+2. Tcpdump
+
+- SSH to the UE machine and ping the LBP (`ping 192.168.100.2`)
+- SSH to the LBP server.
+ - Run tcpdump with the name of the interface connected to the Edge Node, verify the data flow, and use Ctrl+C to stop.
+
+ ```shell
+ # tcpdump -i enp23s0f3
+ tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
+ listening on enp23s0f3, link-type EN10MB (Ethernet), capture size 262144 bytes
+ 10:29:14.678250 IP 192.168.100.1 > twesolox-mobl.ger.corp.intel.com: ICMP echo request, id 9249, seq 320, length 64
+ 10:29:14.678296 IP twesolox-mobl.ger.corp.intel.com > 192.168.100.1: ICMP echo reply, id 9249, seq 320, length 64
+ 10:29:15.678240 IP 192.168.100.1 > twesolox-mobl.ger.corp.intel.com: ICMP echo request, id 9249, seq 321, length 64
+ 10:29:15.678283 IP twesolox-mobl.ger.corp.intel.com > 192.168.100.1: ICMP echo reply, id 9249, seq 321, length 64
+ 10:29:16.678269 IP 192.168.100.1 > twesolox-mobl.ger.corp.intel.com: ICMP echo request, id 9249, seq 322, length 64
+ 10:29:16.678312 IP twesolox-mobl.ger.corp.intel.com > 192.168.100.1: ICMP echo reply, id 9249, seq 322, length 64
+ 10:29:17.678241 IP 192.168.100.1 > twesolox-mobl.ger.corp.intel.com: ICMP echo request, id 9249, seq 323, length 64
+ 10:29:17.678285 IP twesolox-mobl.ger.corp.intel.com > 192.168.100.1: ICMP echo reply, id 9249, seq 323, length 64
+ 10:29:18.678215 IP 192.168.100.1 > twesolox-mobl.ger.corp.intel.com: ICMP echo request, id 9249, seq 324, length 64
+ 10:29:18.678258 IP twesolox-mobl.ger.corp.intel.com > 192.168.100.1: ICMP echo reply, id 9249, seq 324, length 64
+ ^C
+ 10 packets captured
+ 10 packets received by filter
+ 0 packets dropped by kernel
+ ```
+
+### Configuring DNS
+* [Instructions for configuring DNS](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/openness-edgedns.md)
+
# Q&A
## Configuring time
@@ -314,6 +425,22 @@ Update interval : 130.2 seconds
Leap status : Normal
```
+## Setup static hostname
+
+In order to set a custom static hostname, the following command can be used:
+
+```
+hostnamectl set-hostname <host-name>
+```
+
+Make sure that the static hostname provided is valid and unique.
+The hostname provided needs to be defined in /etc/hosts as well:
+
+```
+127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 <host-name>
+::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 <host-name>
+```
+
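+The result can be verified with standard tools, e.g.:
+
+```shell
+hostnamectl status          # shows the configured static hostname
+getent hosts "$(hostname)"  # confirms the hostname resolves via /etc/hosts
+```
+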
## Configuring inventory
In order to execute playbooks, `inventory.ini` must be configured to include the specific hosts to run the playbooks on.
@@ -444,3 +571,7 @@ Alternatively to reading from /opt/edgenode/verification_key.txt Edge Node's ser
```bash
openssl pkey -pubout -in /var/lib/appliance/certs/key.pem -inform pem -outform der | md5sum | xxd -r -p | openssl enc -a | tr -d '=' | tr '/+' '_-'
```
+
+## Customization of kernel, grub parameters and tuned profile
+
+OpenNESS Experience Kits provides an easy way to customize the kernel version, grub parameters, and tuned profile - for more information refer to [the OpenNESS Experience Kits guide](https://github.com/open-ness/specs/blob/master/doc/getting-started/openness-experience-kits.md).
diff --git a/doc/getting-started/on-premises/offline-deployment.md b/doc/getting-started/on-premises/offline-deployment.md
index 9cdaf89c..d7802fe4 100644
--- a/doc/getting-started/on-premises/offline-deployment.md
+++ b/doc/getting-started/on-premises/offline-deployment.md
@@ -147,7 +147,7 @@ In extracted offline package, in `openness-experience-kits` folder, you will fin
9. Update the `inventory.ini` file and enter the IP address of this controller machine in the `[all]` section. Do not use localhost or 127.0.0.1.
10. Run deploy script:
```
- ./deploy_onprem_controller.sh
+ ./deploy_onprem.sh controller
```
This operation may take 40 minutes or more.
Controller functionality will be installed on this server as defined in `group_vars/all.yml` using its IP address obtained from `[all]` section.
@@ -178,12 +178,12 @@ Steps to follow on each node from `[edgenode_group]`:
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 YOUR_NEW_HOSTNAME
```
-And finally, run `deploy_onprem_node.sh` script from controller:
+And finally, run the `deploy_onprem.sh nodes` script from the controller:
1. Log into controller as root user.
2. Go to extracted `openness_experience_kits` folder:
3. Run deploy script for nodes:
```
- ./deploy_onprem_node.sh
+ ./deploy_onprem.sh nodes
```
Note: This operation may take one hour or more, depending on the number of hosts chosen in the inventory.
Node functionality will be installed on chosen list of hosts.
@@ -219,7 +219,7 @@ Offline prepare and restore of the HDDL image is not enabled by default due to i
In order to prepare and later restore the HDDL image, `- role: offline/prepare/hddl` line must be uncommented in `offline_prepare.yml` playbook before running `prepare_offline_package.sh` script. This will result in OpenVINO (tm) toolkit being downloaded and the intermediate HDDL Docker image being built.
-During offline package restoration HDDL role must be enabled in order to finish the building. It is done by uncommenting `- role: hddl` line in `onprem_node.yml` before `deploy_onprem_node.sh` is executed.
+During offline package restoration, the HDDL role must be enabled in order to finish the build. This is done by uncommenting the `- role: hddl` line in `on_premises.yml` before `deploy_onprem.sh nodes` is executed, as sketched below.
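+
+A hypothetical one-liner for that edit (the exact indentation of the commented role in `on_premises.yml` may differ):
+```shell
+sed -i 's|^\( *\)# *\(- role: hddl\)|\1\2|' on_premises.yml   # uncomment the HDDL role
+./deploy_onprem.sh nodes                                      # then run the node deployment
+```
+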
# Troubleshooting
Q:
diff --git a/doc/getting-started/on-premises/supported-epa.md b/doc/getting-started/on-premises/supported-epa.md
index e8f31823..7a0613ec 100644
--- a/doc/getting-started/on-premises/supported-epa.md
+++ b/doc/getting-started/on-premises/supported-epa.md
@@ -1,6 +1,6 @@
```text
SPDX-License-Identifier: Apache-2.0
-Copyright (c) 2019 Intel Corporation
+Copyright (c) 2019-2020 Intel Corporation
```
# OpenNESS OnPremises - Enhanced Platform Awareness Features supported
@@ -19,4 +19,8 @@ Enhanced Platform Awareness features are supported in OnPremises using EVA APIs.
## Features
Following are the EPA features supported in OpenNESS OnPremises Edge
1. [openness_hddl.md: Using Intel® Movidius™ Myriad™ X High Density Deep Learning (HDDL) solution in OpenNESS](https://github.com/open-ness/specs/blob/master/doc/enhanced-platform-awareness/openness_hddl.md)
-
+2. [openness-environment-variables.md: Support for setting Environment Variables in OpenNESS](https://github.com/open-ness/specs/blob/master/doc/enhanced-platform-awareness/openness-environment-variables.md)
+3. [openness-dedicated-core.md: Dedicated CPU core allocation support for Edge Applications and Network Functions](https://github.com/open-ness/specs/blob/master/doc/enhanced-platform-awareness/openness-dedicated-core.md)
+4. [openness-tunable-exec.md: Tunable executable command in OpenNESS On-Prem mode](https://github.com/open-ness/specs/blob/master/doc/enhanced-platform-awareness/openness-tunable-exec.md)
+5. [openness-sriov-mulitple-interfaces.md: Multiple Interface and PCIe SRIOV support in OpenNESS](https://github.com/open-ness/specs/blob/master/doc/enhanced-platform-awareness/openness-sriov-multiple-interfaces.md)
+6. [openness-port-forward.md: Support for setting up port forwarding of a container in OpenNESS On-Prem mode](https://github.com/open-ness/specs/blob/master/doc/enhanced-platform-awareness/openness-port-forward.md)
diff --git a/doc/getting-started/openness-experience-kits.md b/doc/getting-started/openness-experience-kits.md
index 20829f38..ec1ff0a0 100644
--- a/doc/getting-started/openness-experience-kits.md
+++ b/doc/getting-started/openness-experience-kits.md
@@ -7,8 +7,21 @@ Copyright (c) 2019 Intel Corporation
- [OpenNESS Experience Kits](#openness-experience-kits)
- [Purpose](#purpose)
- - [OpenNess setup playbooks](#openness-setup-playbooks)
+ - [OpenNESS setup playbooks](#openness-setup-playbooks)
- [Playbooks for OpenNESS offline deployment](#playbooks-for-openness-offline-deployment)
+ - [Customizing kernel, grub parameters, and tuned profile & variables per host.](#customizing-kernel-grub-parameters-and-tuned-profile--variables-per-host)
+ - [Default values](#default-values)
+ - [Use newer realtime kernel (3.10.0-1062)](#use-newer-realtime-kernel-3100-1062)
+ - [Use newer non-rt kernel (3.10.0-1062)](#use-newer-non-rt-kernel-3100-1062)
+ - [Use tuned 2.9](#use-tuned-29)
+ - [Default kernel and configure tuned](#default-kernel-and-configure-tuned)
+ - [Change amount of hugepages](#change-amount-of-hugepages)
+ - [Change the size of hugepages](#change-the-size-of-hugepages)
+ - [Change amount & size of hugepages](#change-amount--size-of-hugepages)
+ - [Remove Intel IOMMU from grub params](#remove-intel-iommu-from-grub-params)
+ - [Add custom GRUB parameter](#add-custom-grub-parameter)
+ - [Configure OVS-DPDK in kube-ovn](#configure-ovs-dpdk-in-kube-ovn)
+ - [Adding new CNI plugins for Kubernetes (Network Edge)](#adding-new-cni-plugins-for-kubernetes-network-edge)
## Purpose
@@ -17,7 +30,7 @@ OpenNESS Experience Kits repository contains set of Ansible playbooks for:
- easy setup of OpenNESS in **Network Edge** and **On-Premise** modes
- preparation and deployment of the **offline package** (i.e. package for OpenNESS offline deployment in On-Premise mode)
-## OpenNess setup playbooks
+## OpenNESS setup playbooks
@@ -27,3 +40,174 @@ When Edge Controller and Edge Node machines have no internet access and the netw
- playbooks that download all the packages and dependencies to the local folder and create offline package archive file;
- playbooks that unpack the archive file and install packages.
+
+## Customizing kernel, grub parameters, and tuned profile & variables per host.
+
+>NOTE: Following per-host customizations in host_vars files are not currently supported in Offline On-Premises mode.
+
+OpenNESS Experience Kits allows the user to customize the kernel, grub parameters, and tuned profile by leveraging Ansible's host_vars feature.
+
+OpenNESS Experience Kits contains a `host_vars/` directory in which a YAML file named after the node's inventory name (`nodes-inventory-name.yml`, e.g. `node01.yml`) can be placed. The file contains variables that override the roles' default values.
+
+To override a default value, place the variable's name and new value in the host's vars file, e.g. the following content of `host_vars/node01.yml` would result in skipping kernel customization on that node:
+
+```yaml
+kernel_skip: true
+```
+
+Below are several common customization scenarios.
+
+
+### Default values
+Here are several default values:
+
+```yaml
+# --- machine_setup/custom_kernel
+kernel_skip: false # use this variable to disable custom kernel installation for host
+
+kernel_repo_url: http://linuxsoft.cern.ch/cern/centos/7/rt/CentOS-RT.repo
+kernel_repo_key: http://linuxsoft.cern.ch/cern/centos/7/os/x86_64/RPM-GPG-KEY-cern
+kernel_package: kernel-rt-kvm
+kernel_devel_package: kernel-rt-devel
+kernel_version: 3.10.0-957.21.3.rt56.935.el7.x86_64
+
+kernel_dependencies_urls: []
+kernel_dependencies_packages: []
+
+
+# --- machine_setup/grub
+hugepage_size: "2M" # Or 1G
+hugepage_amount: "5000"
+
+default_grub_params: "hugepagesz={{ hugepage_size }} hugepages={{ hugepage_amount }} intel_iommu=on iommu=pt"
+additional_grub_params: ""
+
+
+# --- machine_setup/configure_tuned
+tuned_skip: false # use this variable to skip tuned profile configuration for host
+tuned_packages:
+- http://linuxsoft.cern.ch/cern/centos/7/updates/x86_64/Packages/tuned-2.11.0-5.el7_7.1.noarch.rpm
+- http://linuxsoft.cern.ch/scientific/7x/x86_64/updates/fastbugs/tuned-profiles-realtime-2.11.0-5.el7_7.1.noarch.rpm
+tuned_profile: realtime
+tuned_vars: |
+ isolated_cores=2-3
+ nohz=on
+ nohz_full=2-3
+```
+
+### Use newer realtime kernel (3.10.0-1062)
+By default, `kernel-rt-kvm-3.10.0-957.21.3.rt56.935.el7.x86_64` from the `http://linuxsoft.cern.ch/cern/centos/$releasever/rt/$basearch/` repository is installed.
+
+In order to use another version, e.g. `kernel-rt-kvm-3.10.0-1062.9.1.rt56.1033.el7.x86_64`, just create a host_vars file for the host with the following content:
+```yaml
+kernel_version: 3.10.0-1062.9.1.rt56.1033.el7.x86_64
+```
+
+### Use newer non-rt kernel (3.10.0-1062)
+The OEK installs a realtime kernel by default from a specific repository. However, non-rt kernels are present in the official CentOS repository.
+Therefore, in order to use a newer non-rt kernel, the following overrides must be applied:
+```yaml
+kernel_repo_url: "" # package is in default repository, no need to add new repository
+kernel_package: kernel # instead of kernel-rt-kvm
+kernel_devel_package: kernel-devel # instead of kernel-rt-devel
+kernel_version: 3.10.0-1062.el7.x86_64
+
+dpdk_kernel_devel: "" # kernel-devel is in the repository, no need for url with RPM
+
+# Since we're not using an rt kernel, we don't need tuned-profiles-realtime but want to keep tuned 2.11
+tuned_packages:
+- http://linuxsoft.cern.ch/cern/centos/7/updates/x86_64/Packages/tuned-2.11.0-5.el7_7.1.noarch.rpm
+tuned_profile: balanced
+tuned_vars: ""
+```
+
+### Use tuned 2.9
+```yaml
+tuned_packages:
+- tuned-2.9.0-1.el7fdp
+- tuned-profiles-realtime-2.9.0-1.el7fdp
+```
+
+### Default kernel and configure tuned
+```yaml
+kernel_skip: true # skip kernel customization altogether
+
+# update tuned to 2.11, but don't install tuned-profiles-realtime since we're not using rt kernel
+tuned_packages:
+- http://linuxsoft.cern.ch/cern/centos/7/updates/x86_64/Packages/tuned-2.11.0-5.el7_7.1.noarch.rpm
+tuned_profile: balanced
+tuned_vars: ""
+```
+
+### Change amount of hugepages
+```yaml
+hugepage_amount: "1000" # default is 5000
+```
+
+### Change the size of hugepages
+```yaml
+hugepage_size: "1G" # default is 2M
+```
+
+### Change amount & size of hugepages
+```yaml
+hugepage_amount: "10" # default is 5000
+hugepage_size: "1G" # default is 2M
+```
+
+### Remove Intel IOMMU from grub params
+```yaml
+default_grub_params: "hugepagesz={{ hugepage_size }} hugepages={{ hugepage_amount }}"
+```
+
+### Add custom GRUB parameter
+```yaml
+additional_grub_params: "debug"
+```
+
+### Configure OVS-DPDK in kube-ovn
+By default OVS-DPDK is enabled. To disable it, set the following flag:
+```yaml
+ovs_dpdk: false
+```
+
+>NOTE: This flag should be set in `roles/kubernetes/cni/kubeovn/common/defaults/main.yml` or added to `group_vars/all.yml`.
+
+Additionally, hugepages in the OVS pod can be adjusted once the default hugepage settings are changed.
+```yaml
+ovs_dpdk_hugepage_size: "2Mi"
+ovs_dpdk_hugepages: "1Gi"
+```
+OVS pod limits are configured by:
+```yaml
+ovs_dpdk_resources_requests: "1Gi"
+ovs_dpdk_resources_limits: "1Gi"
+```
+CPU settings can be configured using:
+```yaml
+ovs_dpdk_pmd_cpu_mask: "0x4"
+ovs_dpdk_lcore_mask: "0x2"
+```
+
+## Adding new CNI plugins for Kubernetes (Network Edge)
+
+* Role that handles CNI deployment must be placed in `roles/kubernetes/cni/` directory, e.g. `roles/kubernetes/cni/kube-ovn/`.
+* Subroles for master and worker (if needed) should be placed in `master/` and `worker/` dirs, e.g `roles/kubernetes/cni/kube-ovn/{master,worker}`.
+* If part of the setup is common to both master and worker, an additional `common` subrole can be created (e.g. `roles/kubernetes/cni/sriov/common`).
+Note that automatic inclusion of the `common` role should be handled by Ansible mechanisms (e.g. usage of meta's `dependencies` or the `include_role` module)
+* Name of the main role must be added to `available_kubernetes_cnis` variable in `roles/kubernetes/cni/defaults/main.yml`.
+* If there are some additional requirements that should be checked before running the playbook (to not have an error in the middle of execution), they can be placed in the `roles/kubernetes/cni/tasks/precheck.yml` file which is included as a pre_task in plays for both Edge Controller and Edge Node.
+Currently executed basic prechecks are:
+ * Check if any CNI is requested (i.e. `kubernetes_cni` is not empty),
+ * Check if `sriov` is not requested as primary (first on the list) or standalone (only on the list),
+ * Check if `kubeovn` is requested as a primary (first on the list),
+ * Check if requested CNI is available (check if some CNI is requested that isn't present in the `available_kubernetes_cnis` list).
+* CNI roles should be as self-contained as possible (CNI-specific tasks should not be present in `kubernetes/{master,worker,common}` or `openness/network_edge/{master,worker}` if not absolutely necessary).
+* If CNI needs custom OpenNESS service (like Interface Service in case of `kube-ovn`), then it can be added to the `openness/network_edge/{master,worker}`.
+ It is best if such tasks are contained in a separate task file (like `roles/openness/network_edge/master/tasks/kube-ovn.yml`) and executed only if the CNI is requested, for example:
+ ```yaml
+ - name: deploy interface service for kube-ovn
+ include_tasks: kube-ovn.yml
+ when: "'kubeovn' in kubernetes_cnis"
+ ```
+* If CNI is to be used as an additional CNI (with Multus), Network Attachment Definition must be supplied ([refer to Multus docs for more info](https://github.com/intel/multus-cni/blob/master/doc/quickstart.md#storing-a-configuration-as-a-custom-resource)).
diff --git a/doc/ran/openness-ran.png b/doc/ran/openness-ran.png
new file mode 100644
index 00000000..1f46c47e
Binary files /dev/null and b/doc/ran/openness-ran.png differ
diff --git a/doc/ran/openness_ran.md b/doc/ran/openness_ran.md
new file mode 100644
index 00000000..304130bb
--- /dev/null
+++ b/doc/ran/openness_ran.md
@@ -0,0 +1,312 @@
+SPDX-License-Identifier: Apache-2.0
+Copyright © 2020 Intel Corporation
+
+- [Introduction](#introduction)
+- [Building the FlexRAN image](#building-the-flexran-image)
+- [FlexRAN hardware platform configuration](#flexran-hardware-platform-configuration)
+ - [BIOS](#bios)
+ - [Host kernel command line](#host-kernel-command-line)
+- [Deploying and Running the FlexRAN pod](#deploying-and-running-the-flexran-pod)
+- [Setting up 1588 - PTP based Time synchronization](#setting-up-1588---ptp-based-time-synchronization)
+ - [Setting up PTP](#setting-up-ptp)
+ - [Grandmaster clock](#grandmaster-clock)
+ - [Slave clock](#slave-clock)
+- [BIOS configuration](#bios-configuration)
+- [References](#references)
+
+# Introduction
+
+The Radio Access Network (RAN) is the edge of the wireless network. 4G and 5G base stations form the key network function for the edge deployment. In OpenNESS, Intel FlexRAN is used as a reference 4G and 5G base station for 4G and 5G end-to-end testing.
+
+FlexRAN offers high-density baseband pooling that could run on a distributed Telco Cloud to provide a smart indoor coverage solution and next-generation front haul architecture. This flexible, 4G and 5G platform provides the open platform ‘smarts’ for both connectivity and new applications at the edge of the network, along with the developer tools to create these new services. FlexRAN running on Telco Cloud provides low latency compute, storage, and network offload from the edge, thus saving network bandwidth.
+
+Intel FlexRAN 5GNR Reference PHY is a baseband PHY Reference Design for a 4G and 5G base station, using Xeon® series Processor with Intel Architecture. This 5GNR Reference PHY consists of a library of c-callable functions which are validated on Intel® Xeon® Broadwell / Skylake / Cascade Lake / Ice Lake platforms and demonstrates the capabilities of the software running different 5GNR L1 features. Functionality of these library functions is defined by the relevant sections in [3GPP TS 38.211, 212, 213, 214 and 215]. Performance of the Intel 5GNR Reference PHY meets the requirements defined by the base station conformance tests in [3GPP TS 38.141]. This library of Intel functions will be used by Intel partners and end-customers as a foundation for their own product development. Reference PHY is integrated with third party L2 and L3 to complete the base station pipeline.
+
+The diagram below shows FlexRAN DU (Real-time L1 and L2) deployed on the OpenNESS platform with the necessary microservice and Kubernetes enhancements required for real-time workload deployment.
+
+![FlexRAN DU deployed on OpenNESS](openness-ran.png)
+
+This document aims to provide the steps involved in deploying FlexRAN 5G (gNb) on the OpenNESS platform.
+
+> Note: This document covers both FlexRAN 4G and 5G. All the steps mentioned in this document use 5G for reference. Please refer to the FlexRAN 4G document for the minor updates needed in order to build, deploy, and test FlexRAN 4G.
+
+# Building the FlexRAN image
+
+This section will explain the steps involved in building the FlexRAN image. Only L1 and L2-stub will be part of these steps. Real-time L2 (MAC and RLC) and non-real-time L2 and L3 are out of scope, as they are part of the third-party component.
+
+1. Please contact your Intel representative to obtain the package
+2. Untar the FlexRAN package.
+3. Set the required environmental variables:
+ ```
+ export RTE_SDK=$localPath/dpdk-19.11
+ export RTE_TARGET=x86_64-native-linuxapp-icc
+ export WIRELESS_SDK_TARGET_ISA=avx512
+ export RPE_DIR=${flexranPath}/libs/ferrybridge
+ export ROE_DIR=${flexranPath}/libs/roe
+ export XRAN_DIR=${localPath}/flexran_xran
+ export WIRELESS_SDK_TOOLCHAIN=icc
+ export DIR_WIRELESS_SDK_ROOT=${localPath}/wireless_sdk
+ export DIR_WIRELESS_FW=${localPath}/wireless_convergence_l1/framework
+ export DIR_WIRELESS_TEST_4G=${localPath}/flexran_l1_4g_test
+ export DIR_WIRELESS_TEST_5G=${localPath}/flexran_l1_5g_test
+ export SDK_BUILD=build-${WIRELESS_SDK_TARGET_ISA}-icc
+ export DIR_WIRELESS_SDK=${DIR_WIRELESS_SDK_ROOT}/${SDK_BUILD}
+ export FLEXRAN_SDK=${DIR_WIRELESS_SDK}/install
+ export DIR_WIRELESS_TABLE_5G=${flexranPath}/bin/nr5g/gnb/l1/table
+ ```
+ > Note: these environment variable paths have to be updated according to your installation and file/directory names
+4. Build L1, WLS interface between L1 and L2 and L2-Stub (testmac)
+ `./flexran_build.sh -r 5gnr_sub6 -m testmac -m wls -m l1app -b -c`
+5. Once the build has successfully completed, copy the required binary files to the folder where the docker image is built. The list of binary files used is documented in the [dockerfile](https://github.com/open-ness/edgeapps/blob/master/network-functions/ran/5G/flexRAN-gnb/Dockerfile)
+ - ICC, IPP mpi and mkl Runtime
+ - DPDK build target directory
+ - FlexRAN test vectors (optional)
+ - FlexRAN L1 and testmac (L2-stub) binary
+ - FlexRAN SDK modules
+ - FlexRAN WLS share library
+ - FlexRAN CPA libraries
+6. `cd` to the folder where the docker image is built and start the docker build: `docker build -t flexran-va:1.0 .`
+
+By the end of step 6 the FlexRAN docker image is created. This image is copied to the edge node where FlexRAN will be deployed, which is installed with OpenNESS Network Edge and all the required EPA features, including the Intel PAC N3000 FPGA. Please refer to the [Using FPGA in OpenNESS: Programming, Resource Allocation and Configuration](https://github.com/open-ness/specs/blob/master/doc/enhanced-platform-awareness/openness-fpga.md) document for further details on setting up the Intel PAC N3000 vRAN FPGA.
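+
+One possible way to transfer the image to the edge node (any container registry would work equally well; `<edge-node>` is a placeholder):
+```shell
+docker save flexran-va:1.0 | ssh <edge-node> docker load
+```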
+
+# FlexRAN hardware platform configuration
+## BIOS
+FlexRAN on Skylake and Cascade Lake requires a special BIOS configuration, which involves disabling C-states and enabling Config TDP Level 2. Please refer to the [BIOS configuration](#bios-configuration) section in this document.
+
+## Host kernel command line
+
+```
+usbcore.autosuspend=-1 selinux=0 enforcing=0 nmi_watchdog=0 softlockup_panic=0 audit=0 intel_pstate=disable cgroup_memory=1 cgroup_enable=memory mce=off idle=poll isolcpus=1-23,25-47 rcu_nocbs=1-23,25-47 kthread_cpus=0,24 irqaffinity=0,24 nohz_full=1-23,25-47 hugepagesz=1G hugepages=50 default_hugepagesz=1G intel_iommu=on iommu=pt pci=realloc pci=assign-busses
+```
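+
+After reboot, the parameters actually applied can be checked on the host:
+```shell
+cat /proc/cmdline
+```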
+
+Host kernel version - 3.10.0-1062.12.1.rt56.1042.el7.x86_64
+
+Instructions on how to configure kernel command line in OpenNESS can be found in [OpenNESS getting started documentation](https://github.com/open-ness/specs/blob/master/doc/getting-started/openness-experience-kits.md#customizing-kernel-grub-parameters-and-tuned-profile--variables-per-host)
+
+# Deploying and Running the FlexRAN pod
+
+1. Deploy the OpenNESS cluster with [SRIOV for FPGA enabled](https://github.com/open-ness/specs/blob/master/doc/enhanced-platform-awareness/openness-fpga.md#fpga-fec-ansible-installation-for-openness-network-edge).
+2. Ensure that no FlexRAN pods or FPGA configuration pods are already deployed, using `kubectl get pods`
+3. Ensure all the EPA microservices and enhancements (part of the OpenNESS playbook) are deployed: `kubectl get po --all-namespaces`
+ ```shell
+ NAMESPACE NAME READY STATUS RESTARTS AGE
+ kube-ovn kube-ovn-cni-8x5hc 1/1 Running 17 7d19h
+ kube-ovn kube-ovn-cni-p6v6s 1/1 Running 1 7d19h
+ kube-ovn kube-ovn-controller-578786b499-28lvh 1/1 Running 1 7d19h
+ kube-ovn kube-ovn-controller-578786b499-d8d2t 1/1 Running 3 5d19h
+ kube-ovn ovn-central-5f456db89f-l2gps 1/1 Running 0 7d19h
+ kube-ovn ovs-ovn-56c4c 1/1 Running 17 7d19h
+ kube-ovn ovs-ovn-fm279 1/1 Running 5 7d19h
+ kube-system coredns-6955765f44-2lqm7 1/1 Running 0 7d19h
+ kube-system coredns-6955765f44-bpk8q 1/1 Running 0 7d19h
+ kube-system etcd-silpixa00394960 1/1 Running 0 7d19h
+ kube-system kube-apiserver-silpixa00394960 1/1 Running 0 7d19h
+ kube-system kube-controller-manager-silpixa00394960 1/1 Running 0 7d19h
+ kube-system kube-multus-ds-amd64-bpq6s 1/1 Running 17 7d18h
+ kube-system kube-multus-ds-amd64-jf8ft 1/1 Running 0 7d19h
+ kube-system kube-proxy-2rh9c 1/1 Running 0 7d19h
+ kube-system kube-proxy-7jvqg 1/1 Running 17 7d19h
+ kube-system kube-scheduler-silpixa00394960 1/1 Running 0 7d19h
+ kube-system kube-sriov-cni-ds-amd64-crn2h 1/1 Running 17 7d19h
+ kube-system kube-sriov-cni-ds-amd64-j4jnt 1/1 Running 0 7d19h
+ kube-system kube-sriov-device-plugin-amd64-vtghv 1/1 Running 0 7d19h
+ kube-system kube-sriov-device-plugin-amd64-w4px7 1/1 Running 0 4d21h
+ openness eaa-78b89b4757-7phb8 1/1 Running 3 5d19h
+ openness edgedns-mdvds 1/1 Running 16 7d18h
+ openness interfaceservice-tkn6s 1/1 Running 16 7d18h
+ openness nfd-master-82dhc 1/1 Running 0 7d19h
+ openness nfd-worker-h4jlt 1/1 Running 37 7d19h
+ openness syslog-master-894hs 1/1 Running 0 7d19h
+ openness syslog-ng-n7zfm 1/1 Running 16 7d19h
+ ```
+4. Deploy the Kubernetes job to program the [FPGA](https://github.com/open-ness/specs/blob/master/doc/enhanced-platform-awareness/openness-fpga.md)
+5. Deploy the Kubernetes job to configure the [BIOS](https://github.com/open-ness/specs/blob/master/doc/enhanced-platform-awareness/openness-bios.md) (note: only works on select Intel development platforms)
+6. Deploy the Kubernetes job to configure the Intel PAC N3000 FPGA `kubectl create -f /opt/edgecontroller/fpga/fpga-config-job.yaml`
+7. Deploy the FlexRAN Kubernetes pod `kubectl create -f flexran-va.yaml` - more info [here](https://github.com/open-ness/edgeapps/blob/master/network-functions/ran/5G/flexRAN-gnb/flexran-va.yaml)
+8. `exec` into FlexRAN pod `kubectl exec -it flexran -- /bin/bash`
+9. Find the PCI Bus function device ID of the FPGA VF assigned to the pod:
+
+ ```shell
+ printenv | grep FEC
+ ```
+
+10. Edit `phycfg_timer.xml`, used for the configuration of the L1 application, with the PCI Bus function device ID from the previous step in order to offload FEC to this device:
+
+ ```xml
+  <!-- set the FEC mode and the FEC device PCI address; tag names below follow the FlexRAN L1 XML configuration but may differ between FlexRAN releases -->
+  <dpdkBasebandFecMode>1</dpdkBasebandFecMode>
+  <dpdkBasebandDevice>0000:1d:00.1</dpdkBasebandDevice>
+ ```
+11. Once in the FlexRAN pod, L1 and test-L2 (testmac) can be started.
+
+# Setting up 1588 - PTP based Time synchronization
+This section provides an overview of setting up PTP based Time synchronization in a cloud-native Kubernetes/docker environment. For FlexRAN-specific xRAN fronthaul tests and configuration, please refer to the xRAN-specific document in the reference section.
+
+> Note: The PTP based Time synchronization method described here is applicable only to containers. For VMs, methods based on Virtual PTP need to be applied; these are not covered in this document.
+
+## Setting up PTP
+In the environment that needs to be synchronized, install the linuxptp package. It provides the ptp4l and phc2sys applications. The PTP setup needs a Grandmaster clock and a Slave clock; the Slave clock will be synchronized to the Grandmaster clock. The Grandmaster clock is configured first. To use Hardware Time Stamps, a supported NIC is required. To check whether the NIC supports Hardware Time Stamps, run ethtool; similar output should appear:
+
+```shell
+# ethtool -T eno4
+Time stamping parameters for eno4:
+Capabilities:
+ hardware-transmit (SOF_TIMESTAMPING_TX_HARDWARE)
+ software-transmit (SOF_TIMESTAMPING_TX_SOFTWARE)
+ hardware-receive (SOF_TIMESTAMPING_RX_HARDWARE)
+ software-receive (SOF_TIMESTAMPING_RX_SOFTWARE)
+ software-system-clock (SOF_TIMESTAMPING_SOFTWARE)
+ hardware-raw-clock (SOF_TIMESTAMPING_RAW_HARDWARE)
+PTP Hardware Clock: 3
+Hardware Transmit Timestamp Modes:
+ off (HWTSTAMP_TX_OFF)
+ on (HWTSTAMP_TX_ON)
+Hardware Receive Filter Modes:
+ none (HWTSTAMP_FILTER_NONE)
+ ptpv1-l4-sync (HWTSTAMP_FILTER_PTP_V1_L4_SYNC)
+ ptpv1-l4-delay-req (HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ)
+ ptpv2-event (HWTSTAMP_FILTER_PTP_V2_EVENT)
+```
+
+Time in containers is the same as on the host machine so it is enough to synchronize the host to Grandmaster clock.
+
+PTP requires a few Kernel configuration options to be enabled:
+- CONFIG_PPS
+- CONFIG_NETWORK_PHY_TIMESTAMPING
+- CONFIG_PTP_1588_CLOCK
+
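+These options can be checked against the running kernel's configuration, which on CentOS is typically available under `/boot`:
+```shell
+grep -E 'CONFIG_PPS|CONFIG_NETWORK_PHY_TIMESTAMPING|CONFIG_PTP_1588_CLOCK' /boot/config-$(uname -r)
+```
+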
+## Grandmaster clock
+This step is optional if you already have a grandmaster. The steps below explain how to set up a Linux system to behave as a PTP GM.
+
+On the Grandmaster clock side, take a look at the `/etc/sysconfig/ptp4l` file. It is the `ptp4l` daemon configuration file where the starting options are provided. Its content should look like this:
+```shell
+OPTIONS="-f /etc/ptp4l.conf -i <interface>"
+```
+Where `<interface>` is the interface name that will be used for time stamping and `/etc/ptp4l.conf` is the configuration file for the `ptp4l` instance.
+
+To determine the Grandmaster clock, the PTP protocol uses the BMC (Best Master Clock) algorithm, and it is not obvious which clock will be chosen as master. However, the user can mark a preferred clock as the master candidate. This can be done in `/etc/ptp4l.conf` by setting the `priority1` property to `127`.
+
+After that, start the ptp4l service.
+
+```shell
+service ptp4l start
+```
+
+Output from the service can be checked in `/var/log/messages`; for the master clock it should look like:
+
+```shell
+Mar 16 17:08:57 localhost ptp4l: ptp4l[23627.304]: selected /dev/ptp2 as PTP clock
+Mar 16 17:08:57 localhost ptp4l: [23627.304] selected /dev/ptp2 as PTP clock
+Mar 16 17:08:57 localhost ptp4l: [23627.306] port 1: INITIALIZING to LISTENING on INITIALIZE
+Mar 16 17:08:57 localhost ptp4l: ptp4l[23627.306]: port 1: INITIALIZING to LISTENING on INITIALIZE
+Mar 16 17:08:57 localhost ptp4l: [23627.307] port 0: INITIALIZING to LISTENING on INITIALIZE
+Mar 16 17:08:57 localhost ptp4l: ptp4l[23627.307]: port 0: INITIALIZING to LISTENING on INITIALIZE
+Mar 16 17:08:57 localhost ptp4l: [23627.308] port 1: link up
+Mar 16 17:08:57 localhost ptp4l: ptp4l[23627.308]: port 1: link up
+Mar 16 17:09:03 localhost ptp4l: [23633.664] port 1: LISTENING to MASTER on ANNOUNCE_RECEIPT_TIMEOUT_EXPIRES
+Mar 16 17:09:03 localhost ptp4l: ptp4l[23633.664]: port 1: LISTENING to MASTER on ANNOUNCE_RECEIPT_TIMEOUT_EXPIRES
+Mar 16 17:09:03 localhost ptp4l: ptp4l[23633.664]: selected best master clock 001e67.fffe.d2f206
+Mar 16 17:09:03 localhost ptp4l: ptp4l[23633.665]: assuming the grand master role
+Mar 16 17:09:03 localhost ptp4l: [23633.664] selected best master clock 001e67.fffe.d2f206
+Mar 16 17:09:03 localhost ptp4l: [23633.665] assuming the grand master role
+```
+
+The next step is to synchronize the PHC timer to the system time. To do that, the `phc2sys` daemon is used. First, edit the configuration file at `/etc/sysconfig/phc2sys`.
+
+```shell
+OPTIONS="-c -s CLOCK_REALTIME -w"
+```
+
+Replace `<interface>` with the interface name. Start the phc2sys service.
+```shell
+service phc2sys start
+```
+Logs can be viewed at `/var/log/messages` and look like:
+
+```shell
+phc2sys[3656456.969]: Waiting for ptp4l...
+phc2sys[3656457.970]: sys offset -6875996252 s0 freq -22725 delay 1555
+phc2sys[3656458.970]: sys offset -6875996391 s1 freq -22864 delay 1542
+phc2sys[3656459.970]: sys offset -52 s2 freq -22916 delay 1536
+phc2sys[3656460.970]: sys offset -29 s2 freq -22909 delay 1548
+phc2sys[3656461.971]: sys offset -25 s2 freq -22913 delay 1549
+```
+
+## Slave clock
+The Slave clock configuration is the same as for the Grandmaster clock except for the `phc2sys` options and the priority1 property for `ptp4l`. For the slave clock, the priority1 property in `/etc/ptp4l.conf` should stay at the default value (128). Run the `ptp4l` service. To keep the system time synchronized to the PHC time, change the `phc2sys` options in `/etc/sysconfig/phc2sys` to:
+
+```shell
+OPTIONS="phc2sys -s <interface> -w"
+```
+Replace `<interface>` with the interface name. Logs will be available in `/var/log/messages`.
+
+```shell
+phc2sys[28917.406]: Waiting for ptp4l...
+phc2sys[28918.406]: phc offset -42928591735 s0 freq +24545 delay 1046
+phc2sys[28919.407]: phc offset -42928611122 s1 freq +5162 delay 955
+phc2sys[28920.407]: phc offset 308 s2 freq +5470 delay 947
+phc2sys[28921.407]: phc offset 408 s2 freq +5662 delay 947
+phc2sys[28922.407]: phc offset 394 s2 freq +5771 delay 947
+```
+From this moment on, both clocks should be synchronized. Any docker container running in a pod uses the same clock as the host, so its clock will be synchronized as well.
+
+
+# BIOS configuration
+
+Below is a subset of the BIOS configuration. It contains the list of BIOS features that are recommended for a FlexRAN DU deployment.
+
+```shell
+[BIOS::Advanced]
+
+[BIOS::Advanced::Processor Configuration]
+Intel(R) Hyper-Threading Tech=Enabled
+Active Processor Cores=All
+Intel(R) Virtualization Technology=Enabled
+MLC Streamer=Enabled
+MLC Spatial Prefetcher=Enabled
+DCU Data Prefetcher=Enabled
+DCU Instruction Prefetcher=Enabled
+LLC Prefetch=Enabled
+
+[BIOS::Advanced::Power & Performance]
+CPU Power and Performance Policy=Performance
+Workload Configuration=I/O Sensitive
+
+[BIOS::Advanced::Power & Performance::CPU C State Control]
+Package C-State=C0/C1 state
+C1E=Disabled ; Can be enabled Power savings
+Processor C6=Disabled
+
+[BIOS::Advanced::Power & Performance::Hardware P States]
+Hardware P-States=Disabled
+
+[BIOS::Advanced::Power & Performance::CPU P State Control]
+Enhanced Intel SpeedStep(R) Tech=Enabled
+Intel Configurable TDP=Enabled
+Configurable TDP Level=Level 2
+Intel(R) Turbo Boost Technology=Enabled
+Energy Efficient Turbo=Disabled
+
+[BIOS::Advanced::Power & Performance::Uncore Power Management]
+Uncore Frequency Scaling=Enabled
+Performance P-limit=Enabled
+
+[BIOS::Advanced::Memory Configuration::Memory RAS and Performance Configuration]
+NUMA Optimized=Enabled
+Sub_NUMA Cluster=Disabled
+
+[BIOS::Advanced::PCI Configuration]
+Memory Mapped I/O above 4 GB=Enabled
+SR-IOV Support=Enabled
+```
+
+# References
+- FlexRAN Reference Solution Software Release Notes - Document ID:575822
+- FlexRAN Reference Solution LTE eNB L2-L1 API Specification - Document ID:571742
+- FlexRAN 5G New Radio Reference Solution L2-L1 API Specification - Document ID:603575
+- FlexRAN 4G Reference Solution L1 User Guide - Document ID:570228
+- FlexRAN 5G NR Reference Solution L1 User Guide - Document ID:603576
+- FlexRAN Reference Solution L1 XML Configuration User Guide - Document ID:571741
+- FlexRAN 5G New Radio FPGA User Guide - Document ID:603578
+- FlexRAN Reference Solution xRAN FrontHaul SAS - Document ID:611268
\ No newline at end of file
diff --git a/openness_releasenotes.md b/openness_releasenotes.md
index 4fe66008..89d97c87 100644
--- a/openness_releasenotes.md
+++ b/openness_releasenotes.md
@@ -23,6 +23,7 @@ This document provides high level system features, issues and limitations inform
2. OpenNESS - 19.06.01
3. OpenNESS - 19.09
4. OpenNESS - 19.12
+5. OpenNESS - 20.03
# Features for Release
1. OpenNESS - 19.06
@@ -82,7 +83,7 @@ This document provides high level system features, issues and limitations inform
- Open Visual Cloud Smart City Application on OpenNESS - Solution Overview
- Using Intel® Movidius™ Myriad™ X High Density Deep Learning (HDDL) solution in OpenNESS
- OpenNESS How-to Guide (update)
-3. OpenNESS – 19.12
+3. OpenNESS – 19.12
- Hardware
- Support for Cascade lake 6252N
- Support for Intel FPGA PAC N3000
@@ -118,15 +119,50 @@ This document provides high level system features, issues and limitations inform
- Completely reorganized documentation structure for ease of navigation
- 5G NR Edge Cloud deployment Whitepaper
- EPA application note for each of the features
+4. OpenNESS – 20.03
+ - OVN/OVS-DPDK support for dataplane
+ - Network Edge: Support for kube-ovn CNI with OVS or OVS-DPDK as dataplane. Support for Calico as CNI.
+ - OnPremises Edge: Support for OVS-DPDK CNI with OVS-DPDK as dataplane supporting application deployed in containers or VMs
+ - Support for VM deployments on Kubernetes mode
+ - Kubevirt based VM deployment support
+ - EPA Support for SRIOV Virtual function allocation to the VMs deployed using K8s
+ - EPA support - OnPremises
+ - Support for dedicated core allocation to application running as VMs or Containers
+ - Support for dedicated SRIOV VF allocation to application running in VM or containers
+ - Support for system resource allocation into the application running as container
+ - Mount point for shared storage
+ - Pass environment variables
+ - Configure the port rules
+ - 5G Components
+ - PFD Management API support (3GPP 23.502 Sec. 52.6.3 PFD Management service)
+ - AF: Added support for PFD Northbound API
+ - NEF: Added support for PFD southbound API, and Stubs to loopback the PCF calls.
+ - kubectl: Enhanced CNCA kubectl plugin to configure PFD parameters
+ - WEB UI: Enhanced CNCA WEB UI to configure PFD params in OnPrem mode
+ - oAuth2 based authentication between 5G Network functions (as per 3GPP Standard)
+ - Implemented oAuth2 based authentication and validation
+ - AF and NEF communication channel is updated to authenticated based on oAuth2 JWT token in addition to HTTP2.
+ - HTTPS support
+ - Enhanced the 5G OAM, CNCA (web-ui and kube-ctl) to HTTPS interface
+ - Modular Playbook
+ - Support for customers to choose real-time or non-realtime kernel for an edge node
+ - Support for customers to choose CNIs - Validated with Kube-OVN and Calico
+ - Edge Apps
+ - FlexRAN: dockerfile and pod specification for deployment of 4G or 5G FlexRAN
+ - AF: dockerfile and pod specification
+ - NEF: dockerfile and pod specification
+ - UPF: dockerfile and pod specification
# Changes to Existing Features
- **OpenNESS 19.06** There are no unsupported or discontinued features relevant to this release.
- **OpenNESS 19.06.01** There are no unsupported or discontinued features relevant to this release.
- **OpenNESS 19.09** There are no unsupported or discontinued features relevant to this release.
- - **OpenNESS 19.12** :
+ - **OpenNESS 19.12**
- NTS Dataplane support for Network edge is discontinued.
- Controller UI for Network edge has been discontinued except for the CNCA configuration. Customers can optionally leverage the Kubernetes dashboard to onboard applications.
- Edge node only supports non-realtime kernel.
+ - **OpenNESS 20.03**
+ - Support for HDDL-R is restricted to the non-real-time, non-customized CentOS 7.6 default kernel.
# Fixed Issues
- **OpenNESS 19.06** There are no non-Intel issues relevant to this release.
@@ -142,6 +178,9 @@ This document provides high level system features, issues and limitations inform
- Application memory field is in MB
- **OpenNESS 19.12**
- Improved usability/automation in Ansible scripts
+- **OpenNESS 20.03**
+ - Realtime Kernel support for network edge with K8s.
+ - Modular playbooks
# Known Issues and Limitations
- **OpenNESS 19.06** There are no issues relevant to this release.
@@ -157,12 +196,19 @@ This document provides high level system features, issues and limitations inform
- OpenNESS OnPremises: Can not remove a failed/disconnected the edge node information/state from the controller
- The CNCA APIs (4G & 5G) supported in this release is an early access reference implementation and does not support authentication
- Realtime kernel support has been temporarily disabled to address the Kubernetes 1.16.2 and Realtime kernel instability.
-
+- **OpenNESS 20.03**
+ - On-Premises edge installation takes more than 1.5 hrs because of the docker image build for OVS-DPDK
+ - Network edge installation takes more than 1.5 hrs because of the docker image build for OVS-DPDK
+ - The OpenNESS controller allows management NICs to be in the pool of configurable interfaces, which might allow them to be configured by mistake, thereby disconnecting the node from the master
+ - When using the SRIOV EPA feature added in 20.03 with OVNCNI, the container cannot access the CNI port. This is because the SRIOV port is set by changing the network used by the container from the default to a custom network, which overwrites the OVNCNI network setting configured earlier to enable the container to work with OVNCNI. Another issue with SRIOV is that this also overwrites the network configuration of the EAA and edgedns agents, which prevents the SRIOV-enabled container from communicating with those agents.
+ - An Edge Node cannot be removed from the Controller when it is offline and a traffic policy is configured or an app is deployed.
+
# Release Content
- **OpenNESS 19.06** OpenNESS Edge node, OpenNESS Controller, Common, Spec and OpenNESS Applications.
- **OpenNESS 19.06.01** OpenNESS Edge node, OpenNESS Controller, Common, Spec and OpenNESS Applications.
- **OpenNESS 19.09** OpenNESS Edge node, OpenNESS Controller, Common, Spec and OpenNESS Applications.
- **OpenNESS 19.12** OpenNESS Edge node, OpenNESS Controller, Common, Spec, OpenNESS Applications and Experience kit.
+- **OpenNESS 20.03** OpenNESS Edge node, OpenNESS Controller, Common, Spec, OpenNESS Applications and Experience kit.
# Hardware and Software Compatibility
OpenNESS Edge Node has been tested using the following hardware specification:
@@ -205,5 +251,5 @@ OpenNESS Edge Node has been tested using the following hardware specification:
| HDDL-R | [Mouser Mustang-V100](https://www.mouser.ie/datasheet/2/763/Mustang-V100_brochure-1526472.pdf) |
# Supported Operating Systems
-> OpenNESS was tested on CentOS Linux release 7.6.1810 (Core) : Note: OpenNESS is tested with CentOS 7.6 Pre-empt RT kernel to make sure VNFs and Applications can co-exist. There is not requirement from OpenNESS software to run on a Pre-empt RT kernel.
+> OpenNESS was tested on CentOS Linux release 7.6.1810 (Core) : Note: OpenNESS is tested with CentOS 7.6 Pre-empt RT kernel to ensure VNFs and Applications can co-exist. There is not a requirement from OpenNESS software to run on a Pre-empt RT kernel.
diff --git a/schema/5goam/5goam.swagger.json b/schema/5goam/5goam.swagger.json
index c8f76f6d..c0278b08 100644
--- a/schema/5goam/5goam.swagger.json
+++ b/schema/5goam/5goam.swagger.json
@@ -1,17 +1,3 @@
-# Copyright 2019 Intel Corporation and Smart-Edge.com, Inc. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
{
"swagger": "2.0",
"info": {
@@ -293,4 +279,5 @@
"description": "The afService post response body info"
}
}
-}
\ No newline at end of file
+}
+
diff --git a/schema/af/af.openapi.json b/schema/af/af.openapi.json
index 27843ce1..c749a633 100644
--- a/schema/af/af.openapi.json
+++ b/schema/af/af.openapi.json
@@ -28,7 +28,7 @@
"get": {
"summary": "read all of the active subscriptions for this AF",
"tags": [
- "Traffic Influence API, AF level GET operation"
+ "Traffic Influence API AF level GET operation"
],
"responses": {
"200": {
@@ -132,7 +132,7 @@
"get": {
"summary": "Reads an active subscriptions for the AF and for the subscription ID",
"tags": [
- "Traffic Influence API, subscription level GET operation"
+ "Traffic Influence API Subscription level GET operation"
],
"responses": {
"200": {
@@ -168,7 +168,7 @@
"put": {
"summary": "Replaces an existing subscription resource based on subscription ID",
"tags": [
- "Traffic Influence API, subscription level PUT operation"
+ "Traffic Influence API Subscription level PUT operation"
],
"requestBody": {
"description": "Parameters to replace the existing subscription",
@@ -218,7 +218,7 @@
"patch": {
"summary": "Updates an existing subscription resource based on subscription ID",
"tags": [
- "Traffic Influence API, subscription level PATCH operation"
+ "Traffic Influence API Subscription level PATCH operation"
],
"requestBody": {
"required": true,
@@ -267,7 +267,7 @@
"delete": {
"summary": "Deletes an already existing subscription based on subscription ID",
"tags": [
- "Traffic Influence API, subscription level DELETE operation"
+ "Traffic Influence API Subscription level DELETE operation"
],
"responses": {
"204": {
diff --git a/schema/af/af.openapi.yaml b/schema/af/af.openapi.yaml
index 4d379fa2..8e2b2697 100644
--- a/schema/af/af.openapi.yaml
+++ b/schema/af/af.openapi.yaml
@@ -26,7 +26,7 @@ paths:
get:
summary: read all of the active subscriptions for this AF
tags:
- - Traffic Influence API, AF level GET operation
+ - Traffic Influence API AF level GET operation
responses:
'200':
description: OK.
@@ -93,7 +93,7 @@ paths:
get:
summary: Reads an active subscriptions for the AF and for the subscription ID
tags:
- - Traffic Influence API, subscription level GET operation
+ - Traffic Influence API Subscription level GET operation
responses:
'200':
description: OK (Successful get the active subscription)
@@ -116,7 +116,7 @@ paths:
put:
summary: Replaces an existing subscription resource based on subscription ID
tags:
- - Traffic Influence API, subscription level PUT operation
+ - Traffic Influence API Subscription level PUT operation
requestBody:
description: Parameters to replace the existing subscription
required: true
@@ -148,7 +148,7 @@ paths:
patch:
summary: Updates an existing subscription resource based on subscription ID
tags:
- - Traffic Influence API, subscription level PATCH operation
+ - Traffic Influence API Subscription level PATCH operation
requestBody:
required: true
content:
@@ -179,7 +179,7 @@ paths:
delete:
summary: Deletes an already existing subscription based on subscription ID
tags:
- - Traffic Influence API, subscription level DELETE operation
+ - Traffic Influence API Subscription level DELETE operation
responses:
'204':
description: No Content (Successful deletion of the existing subscription)
diff --git a/schema/af/af_pfd.openapi.json b/schema/af/af_pfd.openapi.json
new file mode 100644
index 00000000..91428fcd
--- /dev/null
+++ b/schema/af/af_pfd.openapi.json
@@ -0,0 +1,1003 @@
+{
+ "openapi": "3.0.0",
+ "info": {
+ "title": "Application Function PFD APIs",
+ "version": "1.0.0"
+ },
+ "externalDocs": {
+ "description": "3GPP TS 29.122 V15.3.0 T8 reference point for Northbound APIs",
+ "url": "http://www.3gpp.org/ftp/Specs/archive/29_series/29.122/"
+ },
+ "servers": [
+ {
+ "url": "{apiRoot}/af/v1/pfd",
+ "variables": {
+ "apiRoot": {
+ "default": "https://example.com",
+ "description": "apiRoot as defined in subclause 5.2.4 of 3GPP TS 29.122."
+ }
+ }
+ }
+ ],
+ "paths": {
+ "/transactions": {
+ "get": {
+ "summary": "read all the PFD transactions for this AF",
+ "tags": [
+ "PFD Management API AF level GET operation"
+ ],
+ "responses": {
+ "200": {
+ "description": "OK. All transactions related to the request URI are returned.",
+ "content": {
+ "application/json": {
+ "schema": {
+ "type": "array",
+ "items": {
+ "$ref": "#/components/schemas/PfdManagement"
+ }
+ }
+ }
+ }
+ },
+ "400": {
+ "$ref": "#/components/responses/400"
+ },
+ "401": {
+ "$ref": "#/components/responses/401"
+ },
+ "403": {
+ "$ref": "#/components/responses/403"
+ },
+ "404": {
+ "$ref": "#/components/responses/404"
+ },
+ "406": {
+ "$ref": "#/components/responses/406"
+ },
+ "429": {
+ "$ref": "#/components/responses/429"
+ },
+ "500": {
+ "$ref": "#/components/responses/500"
+ },
+ "503": {
+ "$ref": "#/components/responses/503"
+ },
+ "default": {
+ "$ref": "#/components/responses/default"
+ }
+ }
+ },
+ "post": {
+ "summary": "Creates a new PFD Management resource",
+ "tags": [
+ "PFD Management API Transaction level POST Operation"
+ ],
+ "requestBody": {
+ "required": true,
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/PfdManagement"
+ }
+ }
+ },
+ "description": "Create a new transaction for PFD management."
+ },
+ "responses": {
+ "201": {
+ "description": "Created. The transaction was created successfully. The SCEF shall return the created transaction in the response payload body. PfdReport may be included to provide detailed failure information for some applications.",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/PfdManagement"
+ }
+ }
+ },
+ "headers": {
+ "Location": {
+ "description": "Contains the URI of the newly created resource",
+ "required": true,
+ "schema": {
+ "type": "string"
+ }
+ }
+ }
+ },
+ "400": {
+ "$ref": "#/components/responses/400"
+ },
+ "401": {
+ "$ref": "#/components/responses/401"
+ },
+ "403": {
+ "$ref": "#/components/responses/403"
+ },
+ "404": {
+ "$ref": "#/components/responses/404"
+ },
+ "411": {
+ "$ref": "#/components/responses/411"
+ },
+ "413": {
+ "$ref": "#/components/responses/413"
+ },
+ "415": {
+ "$ref": "#/components/responses/415"
+ },
+ "429": {
+ "$ref": "#/components/responses/429"
+ },
+ "500": {
+ "description": "The PFDs for all applications were not created successfully. PfdReport is included with detailed information.",
+ "content": {
+ "application/json": {
+ "schema": {
+ "type": "array",
+ "items": {
+ "$ref": "#/components/schemas/PfdReport"
+ },
+ "minItems": 1
+ }
+ },
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "503": {
+ "$ref": "#/components/responses/503"
+ },
+ "default": {
+ "$ref": "#/components/responses/default"
+ }
+ }
+ }
+ },
+ "/transactions/{transactionId}": {
+ "parameters": [
+ {
+ "name": "transactionId",
+ "in": "path",
+ "description": "Transaction ID",
+ "required": true,
+ "schema": {
+ "type": "string"
+ }
+ }
+ ],
+ "get": {
+ "summary": "Reads an active transaction for the AF based on the transaction ID",
+ "tags": [
+ "PFD Management API Transaction level GET Operation"
+ ],
+ "responses": {
+ "200": {
+ "description": "OK. The transaction information related to the request URI is returned.",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/PfdManagement"
+ }
+ }
+ }
+ },
+ "400": {
+ "$ref": "#/components/responses/400"
+ },
+ "401": {
+ "$ref": "#/components/responses/401"
+ },
+ "403": {
+ "$ref": "#/components/responses/403"
+ },
+ "404": {
+ "$ref": "#/components/responses/404"
+ },
+ "406": {
+ "$ref": "#/components/responses/406"
+ },
+ "429": {
+ "$ref": "#/components/responses/429"
+ },
+ "500": {
+ "$ref": "#/components/responses/500"
+ },
+ "503": {
+ "$ref": "#/components/responses/503"
+ },
+ "default": {
+ "$ref": "#/components/responses/default"
+ }
+ }
+ },
+ "put": {
+ "summary": "Replaces an active transaction based on the transaction ID",
+ "tags": [
+ "PFD Management API Transaction level PUT Operation"
+ ],
+ "requestBody": {
+ "required": true,
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/PfdManagement"
+ }
+ }
+ },
+ "description": "Change information in PFD management transaction."
+ },
+ "responses": {
+ "200": {
+ "description": "OK. The transaction was modified successfully. The SCEF shall return an updated transaction in the response payload body.",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/PfdManagement"
+ }
+ }
+ }
+ },
+ "400": {
+ "$ref": "#/components/responses/400"
+ },
+ "401": {
+ "$ref": "#/components/responses/401"
+ },
+ "403": {
+ "$ref": "#/components/responses/403"
+ },
+ "404": {
+ "$ref": "#/components/responses/404"
+ },
+ "411": {
+ "$ref": "#/components/responses/411"
+ },
+ "413": {
+ "$ref": "#/components/responses/413"
+ },
+ "415": {
+ "$ref": "#/components/responses/415"
+ },
+ "429": {
+ "$ref": "#/components/responses/429"
+ },
+ "500": {
+ "description": "The PFDs for all applications were not updated successfully. PfdReport is included with detailed information.",
+ "content": {
+ "application/json": {
+ "schema": {
+ "type": "array",
+ "items": {
+ "$ref": "#/components/schemas/PfdReport"
+ },
+ "minItems": 1
+ }
+ },
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "503": {
+ "$ref": "#/components/responses/503"
+ },
+ "default": {
+ "$ref": "#/components/responses/default"
+ }
+ }
+ },
+ "delete": {
+ "summary": "Deletes an already existing transaction based on transaction ID",
+ "tags": [
+ "PFD Management API Transaction level DELETE Operation"
+ ],
+ "responses": {
+ "204": {
+ "description": "No Content. The transaction was deleted successfully. The payload body shall be empty."
+ },
+ "400": {
+ "$ref": "#/components/responses/400"
+ },
+ "401": {
+ "$ref": "#/components/responses/401"
+ },
+ "403": {
+ "$ref": "#/components/responses/403"
+ },
+ "404": {
+ "$ref": "#/components/responses/404"
+ },
+ "429": {
+ "$ref": "#/components/responses/429"
+ },
+ "500": {
+ "$ref": "#/components/responses/500"
+ },
+ "503": {
+ "$ref": "#/components/responses/503"
+ },
+ "default": {
+ "$ref": "#/components/responses/default"
+ }
+ }
+ }
+ },
+ "/transactions/{transactionId}/applications/{appId}": {
+ "parameters": [
+ {
+ "name": "transactionId",
+ "in": "path",
+ "description": "Transaction ID",
+ "required": true,
+ "schema": {
+ "type": "string"
+ }
+ },
+ {
+ "name": "appId",
+ "in": "path",
+ "description": "Identifier of the application",
+ "required": true,
+ "schema": {
+ "type": "string"
+ }
+ }
+ ],
+ "get": {
+ "summary": "Reads PFD data for an application based on transaction ID and application ID",
+ "tags": [
+ "PFD Management API Application level GET Operation"
+ ],
+ "responses": {
+ "200": {
+ "description": "OK. The application information related to the request URI is returned.",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/PfdData"
+ }
+ }
+ }
+ },
+ "400": {
+ "$ref": "#/components/responses/400"
+ },
+ "401": {
+ "$ref": "#/components/responses/401"
+ },
+ "403": {
+ "$ref": "#/components/responses/403"
+ },
+ "404": {
+ "$ref": "#/components/responses/404"
+ },
+ "406": {
+ "$ref": "#/components/responses/406"
+ },
+ "429": {
+ "$ref": "#/components/responses/429"
+ },
+ "500": {
+ "$ref": "#/components/responses/500"
+ },
+ "503": {
+ "$ref": "#/components/responses/503"
+ },
+ "default": {
+ "$ref": "#/components/responses/default"
+ }
+ }
+ },
+ "put": {
+ "summary": "Replaces PFD data for an application based on transaction ID and application ID",
+ "tags": [
+ "PFD Management API Application level PUT Operation"
+ ],
+ "requestBody": {
+ "required": true,
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/PfdData"
+ }
+ }
+ },
+ "description": "Change information in application."
+ },
+ "responses": {
+ "200": {
+ "description": "OK. The application resource was modified successfully. The SCEF shall return an updated application resource in the response payload body.",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/PfdData"
+ }
+ }
+ }
+ },
+ "400": {
+ "$ref": "#/components/responses/400"
+ },
+ "401": {
+ "$ref": "#/components/responses/401"
+ },
+ "403": {
+ "description": "The PFDs for the application were not updated successfully.",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/PfdReport"
+ }
+ },
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "404": {
+ "$ref": "#/components/responses/404"
+ },
+ "409": {
+ "description": "The PFDs for the application were not updated successfully.",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/PfdReport"
+ }
+ },
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "411": {
+ "$ref": "#/components/responses/411"
+ },
+ "413": {
+ "$ref": "#/components/responses/413"
+ },
+ "415": {
+ "$ref": "#/components/responses/415"
+ },
+ "429": {
+ "$ref": "#/components/responses/429"
+ },
+ "500": {
+ "description": "The PFDs for the application were not updated successfully.",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/PfdReport"
+ }
+ },
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "503": {
+ "$ref": "#/components/responses/503"
+ },
+ "default": {
+ "$ref": "#/components/responses/default"
+ }
+ }
+ },
+ "patch": {
+ "summary": "Updates PFD data for an application based on transaction ID and application ID",
+ "tags": [
+ "PFD Management API Application level PATCH Operation"
+ ],
+ "requestBody": {
+ "required": true,
+ "content": {
+ "application/merge-patch+json": {
+ "schema": {
+ "$ref": "#/components/schemas/PfdData"
+ }
+ }
+ },
+ "description": "Change information in PFD management transaction."
+ },
+ "responses": {
+ "200": {
+ "description": "OK. The transaction was modified successfully. The SCEF shall return an updated transaction in the response payload body.",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/PfdData"
+ }
+ }
+ }
+ },
+ "400": {
+ "$ref": "#/components/responses/400"
+ },
+ "401": {
+ "$ref": "#/components/responses/401"
+ },
+ "403": {
+ "description": "The PFDs for the application were not updated successfully.",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/PfdReport"
+ }
+ },
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "404": {
+ "$ref": "#/components/responses/404"
+ },
+ "409": {
+ "description": "The PFDs for the application were not updated successfully.",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/PfdReport"
+ }
+ },
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "411": {
+ "$ref": "#/components/responses/411"
+ },
+ "413": {
+ "$ref": "#/components/responses/413"
+ },
+ "415": {
+ "$ref": "#/components/responses/415"
+ },
+ "429": {
+ "$ref": "#/components/responses/429"
+ },
+ "500": {
+ "description": "The PFDs for the application were not updated successfully.",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/PfdReport"
+ }
+ },
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "503": {
+ "$ref": "#/components/responses/503"
+ },
+ "default": {
+ "$ref": "#/components/responses/default"
+ }
+ }
+ },
+ "delete": {
+ "summary": "Deletes PFD data for an application based on transaction ID and application ID",
+ "tags": [
+ "PFD Management API Application level DELETE Operation"
+ ],
+ "responses": {
+ "204": {
+ "description": "No Content. The application was deleted successfully. The payload body shall be empty."
+ },
+ "400": {
+ "$ref": "#/components/responses/400"
+ },
+ "401": {
+ "$ref": "#/components/responses/401"
+ },
+ "403": {
+ "$ref": "#/components/responses/403"
+ },
+ "404": {
+ "$ref": "#/components/responses/404"
+ },
+ "429": {
+ "$ref": "#/components/responses/429"
+ },
+ "500": {
+ "$ref": "#/components/responses/500"
+ },
+ "503": {
+ "$ref": "#/components/responses/503"
+ },
+ "default": {
+ "$ref": "#/components/responses/default"
+ }
+ }
+ }
+ }
+ },
+ "components": {
+ "responses": {
+ "400": {
+ "description": "Bad request",
+ "content": {
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "401": {
+ "description": "Unauthorized",
+ "content": {
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "403": {
+ "description": "Forbidden",
+ "content": {
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "404": {
+ "description": "Not Found",
+ "content": {
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "406": {
+ "description": "Not Acceptable",
+ "content": {
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "409": {
+ "description": "Conflict",
+ "content": {
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "411": {
+ "description": "Length Required",
+ "content": {
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "412": {
+ "description": "Precondition Failed",
+ "content": {
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "413": {
+ "description": "Payload Too Large",
+ "content": {
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "414": {
+ "description": "URI Too Long",
+ "content": {
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "415": {
+ "description": "Unsupported Media Type",
+ "content": {
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "429": {
+ "description": "Too Many Requests",
+ "content": {
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "500": {
+ "description": "Internal Server Error",
+ "content": {
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "503": {
+ "description": "Service Unavailable",
+ "content": {
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "default": {
+ "description": "Generic Error"
+ }
+ },
+ "schemas": {
+ "DurationSec": {
+ "type": "integer",
+ "minimum": 0,
+ "description": "Unsigned integer identifying a period of time in units of seconds."
+ },
+ "DurationSecRm": {
+ "type": "integer",
+ "minimum": 0,
+ "description": "Unsigned integer identifying a period of time in units of seconds with \"nullable=true\" property.",
+ "nullable": true
+ },
+ "DurationSecRo": {
+ "type": "integer",
+ "minimum": 0,
+ "description": "Unsigned integer identifying a period of time in units of seconds with \"readOnly=true\" property.",
+ "readOnly": true
+ },
+ "SupportedFeatures": {
+ "type": "string",
+ "pattern": "^[A-Fa-f0-9]*$"
+ },
+ "Link": {
+ "type": "string",
+ "description": "string formatted according to IETF RFC 3986 identifying a referenced resource."
+ },
+ "Uri": {
+ "type": "string",
+ "description": "string providing an URI formatted according to IETF RFC 3986."
+ },
+ "ProblemDetails": {
+ "type": "object",
+ "properties": {
+ "type": {
+ "$ref": "#/components/schemas/Uri"
+ },
+ "title": {
+ "type": "string",
+ "description": "A short, human-readable summary of the problem type. It should not change from occurrence to occurrence of the problem."
+ },
+ "status": {
+ "type": "integer",
+ "description": "The HTTP status code for this occurrence of the problem."
+ },
+ "detail": {
+ "type": "string",
+ "description": "A human-readable explanation specific to this occurrence of the problem."
+ },
+ "instance": {
+ "$ref": "#/components/schemas/Uri"
+ },
+ "cause": {
+ "type": "string",
+ "description": "A machine-readable application error cause specific to this occurrence of the problem. This IE should be present and provide application-related error information, if available."
+ },
+ "invalidParams": {
+ "type": "array",
+ "items": {
+ "$ref": "#/components/schemas/InvalidParam"
+ },
+ "minItems": 1,
+ "description": "Description of invalid parameters, for a request rejected due to invalid parameters."
+ }
+ }
+ },
+ "InvalidParam": {
+ "type": "object",
+ "properties": {
+ "param": {
+ "type": "string",
+ "description": "Attribute's name encoded as a JSON Pointer, or header's name."
+ },
+ "reason": {
+ "type": "string",
+ "description": "A human-readable reason, e.g. \"must be a positive integer\"."
+ }
+ },
+ "required": [
+ "param"
+ ]
+ },
+ "PfdManagement": {
+ "type": "object",
+ "properties": {
+ "self": {
+ "$ref": "#/components/schemas/Link"
+ },
+ "supportedFeatures": {
+ "$ref": "#/components/schemas/SupportedFeatures"
+ },
+ "pfdDatas": {
+ "type": "object",
+ "additionalProperties": {
+ "$ref": "#/components/schemas/PfdData"
+ },
+ "minProperties": 1,
+ "description": "Each element uniquely identifies the PFDs for an external application identifier. Each element is identified in the map via an external application identifier as key. The response shall include successfully provisioned PFD data of application(s)."
+ },
+ "pfdReports": {
+ "type": "object",
+ "additionalProperties": {
+ "$ref": "#/components/schemas/PfdReport"
+ },
+ "minProperties": 1,
+ "description": "Supplied by the SCEF and contains the external application identifiers for which PFD(s) are not added or modified successfully. The failure reason is also included. Each element provides the related information for one or more external application identifier(s) and is identified in the map via the failure identifier as key.",
+ "readOnly": true
+ }
+ },
+ "required": [
+ "pfdDatas"
+ ]
+ },
+ "PfdData": {
+ "type": "object",
+ "properties": {
+ "externalAppId": {
+ "type": "string",
+ "description": "Each element uniquely external application identifier"
+ },
+ "self": {
+ "$ref": "#/components/schemas/Link"
+ },
+ "pfds": {
+ "type": "object",
+ "additionalProperties": {
+ "$ref": "#/components/schemas/Pfd"
+ },
+ "description": "Contains the PFDs of the external application identifier. Each PFD is identified in the map via a key containing the PFD identifier."
+ },
+ "allowedDelay": {
+ "$ref": "#/components/schemas/DurationSecRm"
+ },
+ "cachingTime": {
+ "$ref": "#/components/schemas/DurationSecRo"
+ }
+ },
+ "required": [
+ "externalAppId",
+ "pfds"
+ ]
+ },
+ "Pfd": {
+ "type": "object",
+ "properties": {
+ "pfdId": {
+ "type": "string",
+ "description": "Identifies a PDF of an application identifier."
+ },
+ "flowDescriptions": {
+ "type": "array",
+ "items": {
+ "type": "string"
+ },
+ "minItems": 1,
+ "description": "Represents a 3-tuple with protocol, server ip and server port for UL/DL application traffic. The content of the string has the same encoding as the IPFilterRule AVP value as defined in IETF RFC 6733."
+ },
+ "urls": {
+ "type": "array",
+ "items": {
+ "type": "string"
+ },
+ "minItems": 1,
+ "description": "Indicates a URL or a regular expression which is used to match the significant parts of the URL."
+ },
+ "domainNames": {
+ "type": "array",
+ "items": {
+ "type": "string"
+ },
+ "minItems": 1,
+ "description": "Indicates an FQDN or a regular expression as a domain name matching criteria."
+ }
+ },
+ "required": [
+ "pfdId"
+ ]
+ },
+ "PfdReport": {
+ "type": "object",
+ "properties": {
+ "externalAppIds": {
+ "type": "array",
+ "items": {
+ "type": "string"
+ },
+ "minItems": 1,
+ "description": "Identifies the external application identifier(s) which PFD(s) are not added or modified successfully"
+ },
+ "failureCode": {
+ "$ref": "#/components/schemas/FailureCode"
+ },
+ "cachingTime": {
+ "$ref": "#/components/schemas/DurationSec"
+ }
+ },
+ "required": [
+ "externalAppIds",
+ "failureCode"
+ ]
+ },
+ "FailureCode": {
+ "anyOf": [
+ {
+ "type": "string",
+ "enum": [
+ "MALFUNCTION",
+ "RESOURCE_LIMITATION",
+ "SHORT_DELAY",
+ "APP_ID_DUPLICATED",
+ "OTHER_REASON"
+ ]
+ },
+ {
+ "type": "string",
+ "description": "This string provides forward-compatibility with future extensions to the enumeration but is not used to encode content defined in the present version of this API.\n"
+ }
+ ],
+ "description": "Possible values are - MALFUNCTION: This value indicates that something functions wrongly in PFD provisioning or the PFD provisioning does not function at all. - RESOURCE_LIMITATION: This value indicates there is resource limitation for PFD storage. - SHORT_DELAY: This value indicates that the allowed delay is too short and PFD(s) are not stored. - APP_ID_DUPLICATED: The received external application identifier(s) are already provisioned. - OTHER_REASON: Other reason unspecified.\n"
+ }
+ }
+ }
+}
\ No newline at end of file
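For illustration, here is a minimal sketch of creating a PFD transaction through the AF PFD API defined above. The host, the external application identifier `app1`, and the PFD contents are assumptions for this example only; the request body follows the `PfdManagement` schema (a `pfdDatas` map keyed by the external application identifier):

```shell
# Hypothetical request: create a PFD transaction (POST /transactions).
# The flow description uses the IPFilterRule encoding referenced by the Pfd schema;
# all identifiers and addresses below are illustrative.
$ curl -X POST https://example.com/af/v1/pfd/transactions \
  -H "Content-Type: application/json" \
  -d '{
        "pfdDatas": {
          "app1": {
            "externalAppId": "app1",
            "pfds": {
              "pfd1": {
                "pfdId": "pfd1",
                "flowDescriptions": ["permit in ip from 10.100.0.4 80 to any"]
              }
            }
          }
        }
      }'
```

On success, the API returns `201 Created` with a `Location` header containing the URI of the new `/transactions/{transactionId}` resource.
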
diff --git a/schema/af/af_pfd.openapi.yaml b/schema/af/af_pfd.openapi.yaml
new file mode 100644
index 00000000..fb486afb
--- /dev/null
+++ b/schema/af/af_pfd.openapi.yaml
@@ -0,0 +1,666 @@
+# SPDX-License-Identifier: Apache-2.0
+# Copyright (c) 2020 Intel Corporation
+
+# The source of this file is from 3GPP 29.522 Release 15 version 3
+# taken from http://www.3gpp.org/ftp/Specs/archive/29_series/29.522/
+
+openapi: 3.0.0
+info:
+ title: Application Function PFD APIs
+ version: "1.0.0"
+externalDocs:
+ description: 3GPP TS 29.122 V15.3.0 T8 reference point for Northbound APIs
+ url: 'http://www.3gpp.org/ftp/Specs/archive/29_series/29.122/'
+servers:
+ - url: '{apiRoot}/af/v1/pfd'
+ variables:
+ apiRoot:
+ default: https://example.com
+ description: apiRoot as defined in subclause 5.2.4 of 3GPP TS 29.122.
+paths:
+ '/transactions':
+ get:
+ summary: read all the PFD transactions for this AF
+ tags:
+ - PFD Management API AF level GET operation
+ responses:
+ '200':
+ description: OK. All transactions related to the request URI are returned.
+ content:
+ application/json:
+ schema:
+ type: array
+ items:
+ $ref: '#/components/schemas/PfdManagement'
+ '400':
+ $ref: '#/components/responses/400'
+ '401':
+ $ref: '#/components/responses/401'
+ '403':
+ $ref: '#/components/responses/403'
+ '404':
+ $ref: '#/components/responses/404'
+ '406':
+ $ref: '#/components/responses/406'
+ '429':
+ $ref: '#/components/responses/429'
+ '500':
+ $ref: '#/components/responses/500'
+ '503':
+ $ref: '#/components/responses/503'
+ default:
+ $ref: '#/components/responses/default'
+ post:
+ summary: Creates a new PFD Management resource
+ tags:
+ - PFD Management API Transaction level POST Operation
+ requestBody:
+ required: true
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/PfdManagement'
+ description: Create a new transaction for PFD management.
+ responses:
+ '201':
+ description: Created. The transaction was created successfully. The SCEF shall return the created transaction in the response payload body. PfdReport may be included to provide detailed failure information for some applications.
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/PfdManagement'
+ headers:
+ Location:
+ description: 'Contains the URI of the newly created resource'
+ required: true
+ schema:
+ type: string
+ '400':
+ $ref: '#/components/responses/400'
+ '401':
+ $ref: '#/components/responses/401'
+ '403':
+ $ref: '#/components/responses/403'
+ '404':
+ $ref: '#/components/responses/404'
+ '411':
+ $ref: '#/components/responses/411'
+ '413':
+ $ref: '#/components/responses/413'
+ '415':
+ $ref: '#/components/responses/415'
+ '429':
+ $ref: '#/components/responses/429'
+ '500':
+ description: The PFDs for all applications were not created successfully. PfdReport is included with detailed information.
+ content:
+ application/json:
+ schema:
+ type: array
+ items:
+ $ref: '#/components/schemas/PfdReport'
+ minItems: 1
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ '503':
+ $ref: '#/components/responses/503'
+ default:
+ $ref: '#/components/responses/default'
+ '/transactions/{transactionId}':
+ parameters:
+ - name: transactionId
+ in: path
+ description: Transaction ID
+ required: true
+ schema:
+ type: string
+ get:
+ summary: Reads an active transaction for the AF based on the transaction ID
+ tags:
+ - PFD Management API Transaction level GET Operation
+ responses:
+ '200':
+ description: OK. The transaction information related to the request URI is returned.
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/PfdManagement'
+ '400':
+ $ref: '#/components/responses/400'
+ '401':
+ $ref: '#/components/responses/401'
+ '403':
+ $ref: '#/components/responses/403'
+ '404':
+ $ref: '#/components/responses/404'
+ '406':
+ $ref: '#/components/responses/406'
+ '429':
+ $ref: '#/components/responses/429'
+ '500':
+ $ref: '#/components/responses/500'
+ '503':
+ $ref: '#/components/responses/503'
+ default:
+ $ref: '#/components/responses/default'
+ put:
+ summary: Replaces an active transaction based on the transaction ID
+ tags:
+ - PFD Management API Transaction level PUT Operation
+ requestBody:
+ required: true
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/PfdManagement'
+ description: Change information in PFD management transaction.
+ responses:
+ '200':
+ description: OK. The transaction was modified successfully. The SCEF shall return an updated transaction in the response payload body.
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/PfdManagement'
+ '400':
+ $ref: '#/components/responses/400'
+ '401':
+ $ref: '#/components/responses/401'
+ '403':
+ $ref: '#/components/responses/403'
+ '404':
+ $ref: '#/components/responses/404'
+ '411':
+ $ref: '#/components/responses/411'
+ '413':
+ $ref: '#/components/responses/413'
+ '415':
+ $ref: '#/components/responses/415'
+ '429':
+ $ref: '#/components/responses/429'
+ '500':
+ description: The PFDs for all applications were not updated successfully. PfdReport is included with detailed information.
+ content:
+ application/json:
+ schema:
+ type: array
+ items:
+ $ref: '#/components/schemas/PfdReport'
+ minItems: 1
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ '503':
+ $ref: '#/components/responses/503'
+ default:
+ $ref: '#/components/responses/default'
+ delete:
+ summary: Deletes an already existing transaction based on transaction ID
+ tags:
+ - PFD Management API Transaction level DELETE Operation
+ responses:
+ '204':
+ description: No Content. The transaction was deleted successfully. The payload body shall be empty.
+ '400':
+ $ref: '#/components/responses/400'
+ '401':
+ $ref: '#/components/responses/401'
+ '403':
+ $ref: '#/components/responses/403'
+ '404':
+ $ref: '#/components/responses/404'
+ '429':
+ $ref: '#/components/responses/429'
+ '500':
+ $ref: '#/components/responses/500'
+ '503':
+ $ref: '#/components/responses/503'
+ default:
+ $ref: '#/components/responses/default'
+ '/transactions/{transactionId}/applications/{appId}':
+ parameters:
+ - name: transactionId
+ in: path
+ description: Transaction ID
+ required: true
+ schema:
+ type: string
+ - name: appId
+ in: path
+ description: Identifier of the application
+ required: true
+ schema:
+ type: string
+ get:
+ summary: Reads PFD data for an application based on transaction ID and application ID
+ tags:
+ - PFD Management API Application level GET Operation
+ responses:
+ '200':
+ description: OK. The application information related to the request URI is returned.
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/PfdData'
+ '400':
+ $ref: '#/components/responses/400'
+ '401':
+ $ref: '#/components/responses/401'
+ '403':
+ $ref: '#/components/responses/403'
+ '404':
+ $ref: '#/components/responses/404'
+ '406':
+ $ref: '#/components/responses/406'
+ '429':
+ $ref: '#/components/responses/429'
+ '500':
+ $ref: '#/components/responses/500'
+ '503':
+ $ref: '#/components/responses/503'
+ default:
+ $ref: '#/components/responses/default'
+ put:
+ summary: Replaces PFD data for an application based on transaction ID and application ID
+ tags:
+ - PFD Management API Application level PUT Operation
+ requestBody:
+ required: true
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/PfdData'
+ description: Change information in application.
+ responses:
+ '200':
+ description: OK. The application resource was modified successfully. The SCEF shall return an updated application resource in the response payload body.
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/PfdData'
+ '400':
+ $ref: '#/components/responses/400'
+ '401':
+ $ref: '#/components/responses/401'
+ '403':
+ description: The PFDs for the application were not updated successfully.
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/PfdReport'
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ '404':
+ $ref: '#/components/responses/404'
+ '409':
+ description: The PFDs for the application were not updated successfully.
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/PfdReport'
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ '411':
+ $ref: '#/components/responses/411'
+ '413':
+ $ref: '#/components/responses/413'
+ '415':
+ $ref: '#/components/responses/415'
+ '429':
+ $ref: '#/components/responses/429'
+ '500':
+ description: The PFDs for the application were not updated successfully.
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/PfdReport'
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ '503':
+ $ref: '#/components/responses/503'
+ default:
+ $ref: '#/components/responses/default'
+ patch:
+ summary: Updates PFD data for an application based on transaction ID and application ID
+ tags:
+ - PFD Management API Application level PATCH Operation
+ requestBody:
+ required: true
+ content:
+ application/merge-patch+json:
+ schema:
+ $ref: '#/components/schemas/PfdData'
+ description: Change information in PFD management transaction.
+ responses:
+ '200':
+ description: OK. The transaction was modified successfully. The SCEF shall return an updated transaction in the response payload body.
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/PfdData'
+ '400':
+ $ref: '#/components/responses/400'
+ '401':
+ $ref: '#/components/responses/401'
+ '403':
+ description: The PFDs for the application were not updated successfully.
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/PfdReport'
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ '404':
+ $ref: '#/components/responses/404'
+ '409':
+ description: The PFDs for the application were not updated successfully.
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/PfdReport'
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ '411':
+ $ref: '#/components/responses/411'
+ '413':
+ $ref: '#/components/responses/413'
+ '415':
+ $ref: '#/components/responses/415'
+ '429':
+ $ref: '#/components/responses/429'
+ '500':
+ description: The PFDs for the application were not updated successfully.
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/PfdReport'
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ '503':
+ $ref: '#/components/responses/503'
+ default:
+ $ref: '#/components/responses/default'
+ delete:
+ summary: Deletes PFD data for an application based on transaction ID and application ID
+ tags:
+ - PFD Management API Application level DELETE Operation
+ responses:
+ '204':
+ description: No Content. The application was deleted successfully. The payload body shall be empty.
+ '400':
+ $ref: '#/components/responses/400'
+ '401':
+ $ref: '#/components/responses/401'
+ '403':
+ $ref: '#/components/responses/403'
+ '404':
+ $ref: '#/components/responses/404'
+ '429':
+ $ref: '#/components/responses/429'
+ '500':
+ $ref: '#/components/responses/500'
+ '503':
+ $ref: '#/components/responses/503'
+ default:
+ $ref: '#/components/responses/default'
+components:
+ responses:
+ '400':
+ description: Bad request
+ content:
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ '401':
+ description: Unauthorized
+ content:
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ '403':
+ description: Forbidden
+ content:
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ '404':
+ description: Not Found
+ content:
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ '406':
+ description: Not Acceptable
+ content:
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ '409':
+ description: Conflict
+ content:
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ '411':
+ description: Length Required
+ content:
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ '412':
+ description: Precondition Failed
+ content:
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ '413':
+ description: Payload Too Large
+ content:
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ '414':
+ description: URI Too Long
+ content:
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ '415':
+ description: Unsupported Media Type
+ content:
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ '429':
+ description: Too Many Requests
+ content:
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ '500':
+ description: Internal Server Error
+ content:
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ '503':
+ description: Service Unavailable
+ content:
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ default:
+ description: Generic Error
+
+ schemas:
+ DurationSec:
+ type: integer
+ minimum: 0
+ description: Unsigned integer identifying a period of time in units of seconds.
+ DurationSecRm:
+ type: integer
+ minimum: 0
+ description: Unsigned integer identifying a period of time in units of seconds with "nullable=true" property.
+ nullable: true
+ DurationSecRo:
+ type: integer
+ minimum: 0
+ description: Unsigned integer identifying a period of time in units of seconds with "readOnly=true" property.
+ readOnly: true
+ SupportedFeatures:
+ type: string
+ pattern: '^[A-Fa-f0-9]*$'
+ Link:
+ type: string
+ description: string formatted according to IETF RFC 3986 identifying a referenced resource.
+ Uri:
+ type: string
+ description: string providing a URI formatted according to IETF RFC 3986.
+ ProblemDetails:
+ type: object
+ properties:
+ type:
+ $ref: '#/components/schemas/Uri'
+ title:
+ type: string
+ description: A short, human-readable summary of the problem type. It should not change from occurrence to occurrence of the problem.
+ status:
+ type: integer
+ description: The HTTP status code for this occurrence of the problem.
+ detail:
+ type: string
+ description: A human-readable explanation specific to this occurrence of the problem.
+ instance:
+ $ref: '#/components/schemas/Uri'
+ cause:
+ type: string
+ description: A machine-readable application error cause specific to this occurrence of the problem. This IE should be present and provide application-related error information, if available.
+ invalidParams:
+ type: array
+ items:
+ $ref: '#/components/schemas/InvalidParam'
+ minItems: 1
+ description: Description of invalid parameters, for a request rejected due to invalid parameters.
+ InvalidParam:
+ type: object
+ properties:
+ param:
+ type: string
+ description: Attribute's name encoded as a JSON Pointer, or header's name.
+ reason:
+ type: string
+ description: A human-readable reason, e.g. "must be a positive integer".
+ required:
+ - param
+
+ PfdManagement:
+ type: object
+ properties:
+ self:
+ $ref: '#/components/schemas/Link'
+ supportedFeatures:
+ $ref: '#/components/schemas/SupportedFeatures'
+ pfdDatas:
+ type: object
+ additionalProperties:
+ $ref: '#/components/schemas/PfdData'
+ minProperties: 1
+ description: Each element uniquely identifies the PFDs for an external application identifier. Each element is identified in the map via an external application identifier as key. The response shall include successfully provisioned PFD data of application(s).
+ pfdReports:
+ type: object
+ additionalProperties:
+ $ref: '#/components/schemas/PfdReport'
+ minProperties: 1
+ description: Supplied by the SCEF and contains the external application identifiers for which PFD(s) are not added or modified successfully. The failure reason is also included. Each element provides the related information for one or more external application identifier(s) and is identified in the map via the failure identifier as key.
+ readOnly: true
+ required:
+ - pfdDatas
+ PfdData:
+ type: object
+ properties:
+ externalAppId:
+ type: string
+ description: Each element uniquely identifies an external application identifier
+ self:
+ $ref: '#/components/schemas/Link'
+ pfds:
+ type: object
+ additionalProperties:
+ $ref: '#/components/schemas/Pfd'
+ description: Contains the PFDs of the external application identifier. Each PFD is identified in the map via a key containing the PFD identifier.
+ allowedDelay:
+ $ref: '#/components/schemas/DurationSecRm'
+ cachingTime:
+ $ref: '#/components/schemas/DurationSecRo'
+ required:
+ - externalAppId
+ - pfds
+ Pfd:
+ type: object
+ properties:
+ pfdId:
+ type: string
+ description: Identifies a PFD of an application identifier.
+ flowDescriptions:
+ type: array
+ items:
+ type: string
+ minItems: 1
+ description: Represents a 3-tuple with protocol, server IP and server port for UL/DL application traffic. The content of the string has the same encoding as the IPFilterRule AVP value as defined in IETF RFC 6733.
+ urls:
+ type: array
+ items:
+ type: string
+ minItems: 1
+ description: Indicates a URL or a regular expression which is used to match the significant parts of the URL.
+ domainNames:
+ type: array
+ items:
+ type: string
+ minItems: 1
+ description: Indicates an FQDN or a regular expression as a domain name matching criteria.
+ required:
+ - pfdId
+ PfdReport:
+ type: object
+ properties:
+ externalAppIds:
+ type: array
+ items:
+ type: string
+ minItems: 1
+ description: Identifies the external application identifier(s) for which PFD(s) are not added or modified successfully
+ failureCode:
+ $ref: '#/components/schemas/FailureCode'
+ cachingTime:
+ $ref: '#/components/schemas/DurationSec'
+ required:
+ - externalAppIds
+ - failureCode
+ FailureCode:
+ anyOf:
+ - type: string
+ enum:
+ - MALFUNCTION
+ - RESOURCE_LIMITATION
+ - SHORT_DELAY
+ - APP_ID_DUPLICATED
+ - OTHER_REASON
+ - type: string
+ description: >
+ This string provides forward-compatibility with future
+ extensions to the enumeration but is not used to encode
+ content defined in the present version of this API.
+ description: >
+ Possible values are
+ - MALFUNCTION: This value indicates that something functions wrongly in PFD provisioning or the PFD provisioning does not function at all.
+ - RESOURCE_LIMITATION: This value indicates there is resource limitation for PFD storage.
+ - SHORT_DELAY: This value indicates that the allowed delay is too short and PFD(s) are not stored.
+ - APP_ID_DUPLICATED: The received external application identifier(s) are already provisioned.
+ - OTHER_REASON: Other reason unspecified.
+
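Similarly, a hedged sketch of the application-level PATCH operation from the YAML above. The transaction ID `1`, application ID `app1`, and host are again illustrative; note the `application/merge-patch+json` content type the operation requires:

```shell
# Hypothetical request: merge-patch one application's PFD data
# (PATCH /transactions/{transactionId}/applications/{appId}).
$ curl -X PATCH https://example.com/af/v1/pfd/transactions/1/applications/app1 \
  -H "Content-Type: application/merge-patch+json" \
  -d '{
        "externalAppId": "app1",
        "pfds": {
          "pfd1": {
            "pfdId": "pfd1",
            "domainNames": ["example.org"]
          }
        }
      }'
```

A `200 OK` response carries the updated `PfdData`; failures at the application level return a `PfdReport` with a `failureCode` such as `APP_ID_DUPLICATED` or `SHORT_DELAY`.
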
diff --git a/schema/controller/api.swagger.yml b/schema/controller/api.swagger.yml
index bf524164..f49278f7 100644
--- a/schema/controller/api.swagger.yml
+++ b/schema/controller/api.swagger.yml
@@ -1,5 +1,5 @@
# SPDX-License-Identifier: Apache-2.0
-# Copyright (c) 2019 Intel Corporation
+# Copyright (c) 2019-2020 Intel Corporation
openapi: 3.0.0
info:
@@ -351,7 +351,7 @@ components:
ipModifier:
type: object
properties:
- address:
+ address:
oneOf:
- $ref: '#/components/schemas/ipv4Address'
- $ref: '#/components/schemas/ipv6Address'
diff --git a/schema/eaa/README.md b/schema/eaa/README.md
index 0eb0ea81..2944110e 100644
--- a/schema/eaa/README.md
+++ b/schema/eaa/README.md
@@ -180,18 +180,3 @@ return ""HTTP 204: Deactivated""
@enduml
-### License
-
-Copyright 2019 Smart-Edge.com, Inc. All rights reserved.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
diff --git a/schema/nef/nef_pfd_management_openapi.json b/schema/nef/nef_pfd_management_openapi.json
new file mode 100644
index 00000000..9a8fb8a3
--- /dev/null
+++ b/schema/nef/nef_pfd_management_openapi.json
@@ -0,0 +1,1049 @@
+{
+ "openapi": "3.0.0",
+ "info": {
+ "title": "3gpp-pfd-management",
+ "version": "1.0.0"
+ },
+ "externalDocs": {
+ "description": "3GPP TS 29.122 V15.3.0 T8 reference point for Northbound APIs",
+ "url": "http://www.3gpp.org/ftp/Specs/archive/29_series/29.122/"
+ },
+ "security": [
+ {},
+ {
+ "oAuth2ClientCredentials": []
+ }
+ ],
+ "servers": [
+ {
+ "url": "{apiRoot}/3gpp-pfd-management/v1",
+ "variables": {
+ "apiRoot": {
+ "default": "https://example.com",
+ "description": "apiRoot as defined in subclause 5.2.4 of 3GPP TS 29.122."
+ }
+ }
+ }
+ ],
+ "paths": {
+ "/{scsAsId}/transactions": {
+ "parameters": [
+ {
+ "name": "scsAsId",
+ "in": "path",
+ "description": "Identifier of the SCS/AS as defined in subclause subclause 5.2.4 of 3GPP TS 29.122.",
+ "required": true,
+ "schema": {
+ "type": "string"
+ }
+ }
+ ],
+ "get": {
+ "summary": "read all the PFD transactions for SCS/AS",
+ "tags": [
+ "PFD Management API SCS/AS level GET operation"
+ ],
+ "responses": {
+ "200": {
+ "description": "OK. All transactions related to the request URI are returned.",
+ "content": {
+ "application/json": {
+ "schema": {
+ "type": "array",
+ "items": {
+ "$ref": "#/components/schemas/PfdManagement"
+ }
+ }
+ }
+ }
+ },
+ "400": {
+ "$ref": "#/components/responses/400"
+ },
+ "401": {
+ "$ref": "#/components/responses/401"
+ },
+ "403": {
+ "$ref": "#/components/responses/403"
+ },
+ "404": {
+ "$ref": "#/components/responses/404"
+ },
+ "406": {
+ "$ref": "#/components/responses/406"
+ },
+ "429": {
+ "$ref": "#/components/responses/429"
+ },
+ "500": {
+ "$ref": "#/components/responses/500"
+ },
+ "503": {
+ "$ref": "#/components/responses/503"
+ },
+ "default": {
+ "$ref": "#/components/responses/default"
+ }
+ }
+ },
+ "post": {
+ "summary": "Creates a new PFD Management resource",
+ "tags": [
+ "PFD Management API Transaction level POST Operation"
+ ],
+ "requestBody": {
+ "required": true,
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/PfdManagement"
+ }
+ }
+ },
+ "description": "Create a new transaction for PFD management."
+ },
+ "responses": {
+ "201": {
+ "description": "Created. The transaction was created successfully. The SCEF shall return the created transaction in the response payload body. PfdReport may be included to provide detailed failure information for some applications.",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/PfdManagement"
+ }
+ }
+ },
+ "headers": {
+ "Location": {
+ "description": "Contains the URI of the newly created resource",
+ "required": true,
+ "schema": {
+ "type": "string"
+ }
+ }
+ }
+ },
+ "400": {
+ "$ref": "#/components/responses/400"
+ },
+ "401": {
+ "$ref": "#/components/responses/401"
+ },
+ "403": {
+ "$ref": "#/components/responses/403"
+ },
+ "404": {
+ "$ref": "#/components/responses/404"
+ },
+ "411": {
+ "$ref": "#/components/responses/411"
+ },
+ "413": {
+ "$ref": "#/components/responses/413"
+ },
+ "415": {
+ "$ref": "#/components/responses/415"
+ },
+ "429": {
+ "$ref": "#/components/responses/429"
+ },
+ "500": {
+ "description": "The PFDs for all applications were not created successfully. PfdReport is included with detailed information.",
+ "content": {
+ "application/json": {
+ "schema": {
+ "type": "array",
+ "items": {
+ "$ref": "#/components/schemas/PfdReport"
+ },
+ "minItems": 1
+ }
+ },
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "503": {
+ "$ref": "#/components/responses/503"
+ },
+ "default": {
+ "$ref": "#/components/responses/default"
+ }
+ }
+ }
+ },
+ "/{scsAsId}/transactions/{transactionId}": {
+ "parameters": [
+ {
+ "name": "scsAsId",
+ "in": "path",
+ "description": "Identifier of the SCS/AS as defined in subclause subclause 5.2.4 of 3GPP TS 29.122.",
+ "required": true,
+ "schema": {
+ "type": "string"
+ }
+ },
+ {
+ "name": "transactionId",
+ "in": "path",
+ "description": "Transaction ID",
+ "required": true,
+ "schema": {
+ "type": "string"
+ }
+ }
+ ],
+ "get": {
+ "summary": "Reads an active transaction based on the transaction ID",
+ "tags": [
+ "PFD Management API Transaction level GET Operation"
+ ],
+ "responses": {
+ "200": {
+ "description": "OK. The transaction information related to the request URI is returned.",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/PfdManagement"
+ }
+ }
+ }
+ },
+ "400": {
+ "$ref": "#/components/responses/400"
+ },
+ "401": {
+ "$ref": "#/components/responses/401"
+ },
+ "403": {
+ "$ref": "#/components/responses/403"
+ },
+ "404": {
+ "$ref": "#/components/responses/404"
+ },
+ "406": {
+ "$ref": "#/components/responses/406"
+ },
+ "429": {
+ "$ref": "#/components/responses/429"
+ },
+ "500": {
+ "$ref": "#/components/responses/500"
+ },
+ "503": {
+ "$ref": "#/components/responses/503"
+ },
+ "default": {
+ "$ref": "#/components/responses/default"
+ }
+ }
+ },
+ "put": {
+ "summary": "Replaces an active transaction based on the transaction ID",
+ "tags": [
+ "PFD Management API Transaction level PUT Operation"
+ ],
+ "requestBody": {
+ "required": true,
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/PfdManagement"
+ }
+ }
+ },
+ "description": "Change information in PFD management transaction."
+ },
+ "responses": {
+ "200": {
+ "description": "OK. The transaction was modified successfully. The SCEF shall return an updated transaction in the response payload body.",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/PfdManagement"
+ }
+ }
+ }
+ },
+ "400": {
+ "$ref": "#/components/responses/400"
+ },
+ "401": {
+ "$ref": "#/components/responses/401"
+ },
+ "403": {
+ "$ref": "#/components/responses/403"
+ },
+ "404": {
+ "$ref": "#/components/responses/404"
+ },
+ "411": {
+ "$ref": "#/components/responses/411"
+ },
+ "413": {
+ "$ref": "#/components/responses/413"
+ },
+ "415": {
+ "$ref": "#/components/responses/415"
+ },
+ "429": {
+ "$ref": "#/components/responses/429"
+ },
+ "500": {
+ "description": "The PFDs for all applications were not updated successfully. PfdReport is included with detailed information.",
+ "content": {
+ "application/json": {
+ "schema": {
+ "type": "array",
+ "items": {
+ "$ref": "#/components/schemas/PfdReport"
+ },
+ "minItems": 1
+ }
+ },
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "503": {
+ "$ref": "#/components/responses/503"
+ },
+ "default": {
+ "$ref": "#/components/responses/default"
+ }
+ }
+ },
+ "delete": {
+ "summary": "Deletes an already existing transaction based on transaction ID",
+ "tags": [
+ "PFD Management API Transaction level DELETE Operation"
+ ],
+ "responses": {
+ "204": {
+ "description": "No Content. The transaction was deleted successfully. The payload body shall be empty."
+ },
+ "400": {
+ "$ref": "#/components/responses/400"
+ },
+ "401": {
+ "$ref": "#/components/responses/401"
+ },
+ "403": {
+ "$ref": "#/components/responses/403"
+ },
+ "404": {
+ "$ref": "#/components/responses/404"
+ },
+ "429": {
+ "$ref": "#/components/responses/429"
+ },
+ "500": {
+ "$ref": "#/components/responses/500"
+ },
+ "503": {
+ "$ref": "#/components/responses/503"
+ },
+ "default": {
+ "$ref": "#/components/responses/default"
+ }
+ }
+ }
+ },
+ "/{scsAsId}/transactions/{transactionId}/applications/{appId}": {
+ "parameters": [
+ {
+ "name": "scsAsId",
+ "in": "path",
+ "description": "Identifier of the SCS/AS as defined in subclause subclause 5.2.4 of 3GPP TS 29.122.",
+ "required": true,
+ "schema": {
+ "type": "string"
+ }
+ },
+ {
+ "name": "transactionId",
+ "in": "path",
+ "description": "Transaction ID",
+ "required": true,
+ "schema": {
+ "type": "string"
+ }
+ },
+ {
+ "name": "appId",
+ "in": "path",
+ "description": "Identifier of the application",
+ "required": true,
+ "schema": {
+ "type": "string"
+ }
+ }
+ ],
+ "get": {
+ "summary": "Reads PFD data for an application based on transaction ID and application ID",
+ "tags": [
+ "PFD Management API Application level GET Operation"
+ ],
+ "responses": {
+ "200": {
+ "description": "OK. The application information related to the request URI is returned.",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/PfdData"
+ }
+ }
+ }
+ },
+ "400": {
+ "$ref": "#/components/responses/400"
+ },
+ "401": {
+ "$ref": "#/components/responses/401"
+ },
+ "403": {
+ "$ref": "#/components/responses/403"
+ },
+ "404": {
+ "$ref": "#/components/responses/404"
+ },
+ "406": {
+ "$ref": "#/components/responses/406"
+ },
+ "429": {
+ "$ref": "#/components/responses/429"
+ },
+ "500": {
+ "$ref": "#/components/responses/500"
+ },
+ "503": {
+ "$ref": "#/components/responses/503"
+ },
+ "default": {
+ "$ref": "#/components/responses/default"
+ }
+ }
+ },
+ "put": {
+ "summary": "Replaces PFD data for an application based on transaction ID and application ID",
+ "tags": [
+ "PFD Management API Application level PUT Operation"
+ ],
+ "requestBody": {
+ "required": true,
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/PfdData"
+ }
+ }
+ },
+ "description": "Change information in application."
+ },
+ "responses": {
+ "200": {
+ "description": "OK. The application resource was modified successfully. The SCEF shall return an updated application resource in the response payload body.",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/PfdData"
+ }
+ }
+ }
+ },
+ "400": {
+ "$ref": "#/components/responses/400"
+ },
+ "401": {
+ "$ref": "#/components/responses/401"
+ },
+ "403": {
+ "description": "The PFDs for the application were not updated successfully.",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/PfdReport"
+ }
+ },
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "404": {
+ "$ref": "#/components/responses/404"
+ },
+ "409": {
+ "description": "The PFDs for the application were not updated successfully.",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/PfdReport"
+ }
+ },
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "411": {
+ "$ref": "#/components/responses/411"
+ },
+ "413": {
+ "$ref": "#/components/responses/413"
+ },
+ "415": {
+ "$ref": "#/components/responses/415"
+ },
+ "429": {
+ "$ref": "#/components/responses/429"
+ },
+ "500": {
+ "description": "The PFDs for the application were not updated successfully.",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/PfdReport"
+ }
+ },
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "503": {
+ "$ref": "#/components/responses/503"
+ },
+ "default": {
+ "$ref": "#/components/responses/default"
+ }
+ }
+ },
+ "patch": {
+ "summary": "Updates PFD data for an application based on transaction ID and application ID",
+ "tags": [
+ "PFD Management API Application level PATCH Operation"
+ ],
+ "requestBody": {
+ "required": true,
+ "content": {
+ "application/merge-patch+json": {
+ "schema": {
+ "$ref": "#/components/schemas/PfdData"
+ }
+ }
+ },
+ "description": "Change information in PFD management transaction."
+ },
+ "responses": {
+ "200": {
+ "description": "OK. The transaction was modified successfully. The SCEF shall return an updated transaction in the response payload body.",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/PfdData"
+ }
+ }
+ }
+ },
+ "400": {
+ "$ref": "#/components/responses/400"
+ },
+ "401": {
+ "$ref": "#/components/responses/401"
+ },
+ "403": {
+ "description": "The PFDs for the application were not updated successfully.",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/PfdReport"
+ }
+ },
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "404": {
+ "$ref": "#/components/responses/404"
+ },
+ "409": {
+ "description": "The PFDs for the application were not updated successfully.",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/PfdReport"
+ }
+ },
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "411": {
+ "$ref": "#/components/responses/411"
+ },
+ "413": {
+ "$ref": "#/components/responses/413"
+ },
+ "415": {
+ "$ref": "#/components/responses/415"
+ },
+ "429": {
+ "$ref": "#/components/responses/429"
+ },
+ "500": {
+ "description": "The PFDs for the application were not updated successfully.",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/PfdReport"
+ }
+ },
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "503": {
+ "$ref": "#/components/responses/503"
+ },
+ "default": {
+ "$ref": "#/components/responses/default"
+ }
+ }
+ },
+ "delete": {
+ "summary": "Deletes PFD data for an application based on transaction ID and application ID",
+ "tags": [
+ "PFD Management API Application level DELETE Operation"
+ ],
+ "responses": {
+ "204": {
+ "description": "No Content. The application was deleted successfully. The payload body shall be empty."
+ },
+ "400": {
+ "$ref": "#/components/responses/400"
+ },
+ "401": {
+ "$ref": "#/components/responses/401"
+ },
+ "403": {
+ "$ref": "#/components/responses/403"
+ },
+ "404": {
+ "$ref": "#/components/responses/404"
+ },
+ "429": {
+ "$ref": "#/components/responses/429"
+ },
+ "500": {
+ "$ref": "#/components/responses/500"
+ },
+ "503": {
+ "$ref": "#/components/responses/503"
+ },
+ "default": {
+ "$ref": "#/components/responses/default"
+ }
+ }
+ }
+ }
+ },
+ "components": {
+ "securitySchemes": {
+ "oAuth2ClientCredentials": {
+ "type": "oauth2",
+ "flows": {
+ "clientCredentials": {
+ "tokenUrl": "{tokenUrl}",
+ "scopes": {}
+ }
+ }
+ }
+ },
+ "responses": {
+ "400": {
+ "description": "Bad request",
+ "content": {
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "401": {
+ "description": "Unauthorized",
+ "content": {
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "403": {
+ "description": "Forbidden",
+ "content": {
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "404": {
+ "description": "Not Found",
+ "content": {
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "406": {
+ "description": "Not Acceptable",
+ "content": {
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "409": {
+ "description": "Conflict",
+ "content": {
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "411": {
+ "description": "Length Required",
+ "content": {
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "412": {
+ "description": "Precondition Failed",
+ "content": {
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "413": {
+ "description": "Payload Too Large",
+ "content": {
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "414": {
+ "description": "URI Too Long",
+ "content": {
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "415": {
+ "description": "Unsupported Media Type",
+ "content": {
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "429": {
+ "description": "Too Many Requests",
+ "content": {
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "500": {
+ "description": "Internal Server Error",
+ "content": {
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "503": {
+ "description": "Service Unavailable",
+ "content": {
+ "application/problem+json": {
+ "schema": {
+ "$ref": "#/components/schemas/ProblemDetails"
+ }
+ }
+ }
+ },
+ "default": {
+ "description": "Generic Error"
+ }
+ },
+ "schemas": {
+ "DurationSec": {
+ "type": "integer",
+ "minimum": 0,
+ "description": "Unsigned integer identifying a period of time in units of seconds."
+ },
+ "DurationSecRm": {
+ "type": "integer",
+ "minimum": 0,
+ "description": "Unsigned integer identifying a period of time in units of seconds with \"nullable=true\" property.",
+ "nullable": true
+ },
+ "DurationSecRo": {
+ "type": "integer",
+ "minimum": 0,
+ "description": "Unsigned integer identifying a period of time in units of seconds with \"readOnly=true\" property.",
+ "readOnly": true
+ },
+ "SupportedFeatures": {
+ "type": "string",
+ "pattern": "^[A-Fa-f0-9]*$"
+ },
+ "Link": {
+ "type": "string",
+ "description": "string formatted according to IETF RFC 3986 identifying a referenced resource."
+ },
+ "Uri": {
+ "type": "string",
+ "description": "string providing an URI formatted according to IETF RFC 3986."
+ },
+ "ProblemDetails": {
+ "type": "object",
+ "properties": {
+ "type": {
+ "$ref": "#/components/schemas/Uri"
+ },
+ "title": {
+ "type": "string",
+ "description": "A short, human-readable summary of the problem type. It should not change from occurrence to occurrence of the problem."
+ },
+ "status": {
+ "type": "integer",
+ "description": "The HTTP status code for this occurrence of the problem."
+ },
+ "detail": {
+ "type": "string",
+ "description": "A human-readable explanation specific to this occurrence of the problem."
+ },
+ "instance": {
+ "$ref": "#/components/schemas/Uri"
+ },
+ "cause": {
+ "type": "string",
+ "description": "A machine-readable application error cause specific to this occurrence of the problem. This IE should be present and provide application-related error information, if available."
+ },
+ "invalidParams": {
+ "type": "array",
+ "items": {
+ "$ref": "#/components/schemas/InvalidParam"
+ },
+ "minItems": 1,
+ "description": "Description of invalid parameters, for a request rejected due to invalid parameters."
+ }
+ }
+ },
+ "InvalidParam": {
+ "type": "object",
+ "properties": {
+ "param": {
+ "type": "string",
+ "description": "Attribute's name encoded as a JSON Pointer, or header's name."
+ },
+ "reason": {
+ "type": "string",
+ "description": "A human-readable reason, e.g. \"must be a positive integer\"."
+ }
+ },
+ "required": [
+ "param"
+ ]
+ },
+ "PfdManagement": {
+ "type": "object",
+ "properties": {
+ "self": {
+ "$ref": "#/components/schemas/Link"
+ },
+ "supportedFeatures": {
+ "$ref": "#/components/schemas/SupportedFeatures"
+ },
+ "pfdDatas": {
+ "type": "object",
+ "additionalProperties": {
+ "$ref": "#/components/schemas/PfdData"
+ },
+ "minProperties": 1,
+ "description": "Each element uniquely identifies the PFDs for an external application identifier. Each element is identified in the map via an external application identifier as key. The response shall include successfully provisioned PFD data of application(s)."
+ },
+ "pfdReports": {
+ "type": "object",
+ "additionalProperties": {
+ "$ref": "#/components/schemas/PfdReport"
+ },
+ "minProperties": 1,
+ "description": "Supplied by the SCEF and contains the external application identifiers for which PFD(s) are not added or modified successfully. The failure reason is also included. Each element provides the related information for one or more external application identifier(s) and is identified in the map via the failure identifier as key.",
+ "readOnly": true
+ }
+ },
+ "required": [
+ "pfdDatas"
+ ]
+ },
+ "PfdData": {
+ "type": "object",
+ "properties": {
+ "externalAppId": {
+ "type": "string",
+ "description": "Each element uniquely external application identifier"
+ },
+ "self": {
+ "$ref": "#/components/schemas/Link"
+ },
+ "pfds": {
+ "type": "object",
+ "additionalProperties": {
+ "$ref": "#/components/schemas/Pfd"
+ },
+ "description": "Contains the PFDs of the external application identifier. Each PFD is identified in the map via a key containing the PFD identifier."
+ },
+ "allowedDelay": {
+ "$ref": "#/components/schemas/DurationSecRm"
+ },
+ "cachingTime": {
+ "$ref": "#/components/schemas/DurationSecRo"
+ }
+ },
+ "required": [
+ "externalAppId",
+ "pfds"
+ ]
+ },
+ "Pfd": {
+ "type": "object",
+ "properties": {
+ "pfdId": {
+ "type": "string",
+ "description": "Identifies a PDF of an application identifier."
+ },
+ "flowDescriptions": {
+ "type": "array",
+ "items": {
+ "type": "string"
+ },
+ "minItems": 1,
+ "description": "Represents a 3-tuple with protocol, server ip and server port for UL/DL application traffic. The content of the string has the same encoding as the IPFilterRule AVP value as defined in IETF RFC 6733."
+ },
+ "urls": {
+ "type": "array",
+ "items": {
+ "type": "string"
+ },
+ "minItems": 1,
+ "description": "Indicates a URL or a regular expression which is used to match the significant parts of the URL."
+ },
+ "domainNames": {
+ "type": "array",
+ "items": {
+ "type": "string"
+ },
+ "minItems": 1,
+ "description": "Indicates an FQDN or a regular expression as a domain name matching criteria."
+ }
+ },
+ "required": [
+ "pfdId"
+ ]
+ },
+ "PfdReport": {
+ "type": "object",
+ "properties": {
+ "externalAppIds": {
+ "type": "array",
+ "items": {
+ "type": "string"
+ },
+ "minItems": 1,
+ "description": "Identifies the external application identifier(s) which PFD(s) are not added or modified successfully"
+ },
+ "failureCode": {
+ "$ref": "#/components/schemas/FailureCode"
+ },
+ "cachingTime": {
+ "$ref": "#/components/schemas/DurationSec"
+ }
+ },
+ "required": [
+ "externalAppIds",
+ "failureCode"
+ ]
+ },
+ "FailureCode": {
+ "anyOf": [
+ {
+ "type": "string",
+ "enum": [
+ "MALFUNCTION",
+ "RESOURCE_LIMITATION",
+ "SHORT_DELAY",
+ "APP_ID_DUPLICATED",
+ "OTHER_REASON"
+ ]
+ },
+ {
+ "type": "string",
+ "description": "This string provides forward-compatibility with future extensions to the enumeration but is not used to encode content defined in the present version of this API.\n"
+ }
+ ],
+ "description": "Possible values are - MALFUNCTION: This value indicates that something functions wrongly in PFD provisioning or the PFD provisioning does not function at all. - RESOURCE_LIMITATION: This value indicates there is resource limitation for PFD storage. - SHORT_DELAY: This value indicates that the allowed delay is too short and PFD(s) are not stored. - APP_ID_DUPLICATED: The received external application identifier(s) are already provisioned. - OTHER_REASON: Other reason unspecified.\n"
+ }
+ }
+ }
+}
\ No newline at end of file
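For orientation, a request satisfying the `PfdManagement` schema above could be exercised as follows. This is a minimal sketch, assuming a hypothetical SCEF reachable at `https://scef.example.com`; the identifiers `myScsAs`, `app1`, and `pfd1` are illustrative placeholders, not values defined by the spec:

```shell
# Create a PFD management transaction (POST /{scsAsId}/transactions).
# The body carries the required pfdDatas map; each PfdData needs
# externalAppId and at least one Pfd keyed by its pfdId.
curl -X POST 'https://scef.example.com/3gpp-pfd-management/v1/myScsAs/transactions' \
  -H 'Content-Type: application/json' \
  -d '{
        "pfdDatas": {
          "app1": {
            "externalAppId": "app1",
            "pfds": {
              "pfd1": {
                "pfdId": "pfd1",
                "domainNames": ["example.com"]
              }
            }
          }
        }
      }'
```

On success, the 201 response defined above returns the created `PfdManagement` resource along with a `Location` header containing the URI of the new transaction.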
diff --git a/schema/nef/nef_pfd_management_openapi.yaml b/schema/nef/nef_pfd_management_openapi.yaml
new file mode 100644
index 00000000..a1edd704
--- /dev/null
+++ b/schema/nef/nef_pfd_management_openapi.yaml
@@ -0,0 +1,693 @@
+# SPDX-License-Identifier: Apache-2.0
+# Copyright (c) 2020 Intel Corporation
+# The source of this file is from 3GPP 29.122 Release 15 version 3
+# taken from http://www.3gpp.org/ftp/Specs/archive/29_series/29.122/
+openapi: 3.0.0
+info:
+ title: 3gpp-pfd-management
+ version: "1.0.0"
+externalDocs:
+ description: 3GPP TS 29.122 V15.3.0 T8 reference point for Northbound APIs
+ url: 'http://www.3gpp.org/ftp/Specs/archive/29_series/29.122/'
+security:
+ - {}
+ - oAuth2ClientCredentials: []
+servers:
+ - url: '{apiRoot}/3gpp-pfd-management/v1'
+ variables:
+ apiRoot:
+ default: https://example.com
+ description: apiRoot as defined in subclause 5.2.4 of 3GPP TS 29.122.
+paths:
+ /{scsAsId}/transactions:
+ parameters:
+ - name: scsAsId
+ in: path
+ description: Identifier of the SCS/AS as defined in subclause 5.2.4 of 3GPP TS 29.122.
+ required: true
+ schema:
+ type: string
+ get:
+ summary: Reads all the PFD transactions for the SCS/AS
+ tags:
+ - PFD Management API SCS/AS level GET operation
+ responses:
+ '200':
+ description: OK. All transactions related to the request URI are returned.
+ content:
+ application/json:
+ schema:
+ type: array
+ items:
+ $ref: '#/components/schemas/PfdManagement'
+ '400':
+ $ref: '#/components/responses/400'
+ '401':
+ $ref: '#/components/responses/401'
+ '403':
+ $ref: '#/components/responses/403'
+ '404':
+ $ref: '#/components/responses/404'
+ '406':
+ $ref: '#/components/responses/406'
+ '429':
+ $ref: '#/components/responses/429'
+ '500':
+ $ref: '#/components/responses/500'
+ '503':
+ $ref: '#/components/responses/503'
+ default:
+ $ref: '#/components/responses/default'
+ post:
+ summary: Creates a new PFD Management resource
+ tags:
+ - PFD Management API Transaction level POST Operation
+ requestBody:
+ required: true
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/PfdManagement'
+ description: Create a new transaction for PFD management.
+ responses:
+ '201':
+ description: Created. The transaction was created successfully. The SCEF shall return the created transaction in the response payload body. PfdReport may be included to provide detailed failure information for some applications.
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/PfdManagement'
+ headers:
+ Location:
+ description: 'Contains the URI of the newly created resource'
+ required: true
+ schema:
+ type: string
+ '400':
+ $ref: '#/components/responses/400'
+ '401':
+ $ref: '#/components/responses/401'
+ '403':
+ $ref: '#/components/responses/403'
+ '404':
+ $ref: '#/components/responses/404'
+ '411':
+ $ref: '#/components/responses/411'
+ '413':
+ $ref: '#/components/responses/413'
+ '415':
+ $ref: '#/components/responses/415'
+ '429':
+ $ref: '#/components/responses/429'
+ '500':
+ description: The PFDs for all applications were not created successfully. PfdReport is included with detailed information.
+ content:
+ application/json:
+ schema:
+ type: array
+ items:
+ $ref: '#/components/schemas/PfdReport'
+ minItems: 1
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ '503':
+ $ref: '#/components/responses/503'
+ default:
+ $ref: '#/components/responses/default'
+ /{scsAsId}/transactions/{transactionId}:
+ parameters:
+ - name: scsAsId
+ in: path
+ description: Identifier of the SCS/AS as defined in subclause 5.2.4 of 3GPP TS 29.122.
+ required: true
+ schema:
+ type: string
+ - name: transactionId
+ in: path
+ description: Transaction ID
+ required: true
+ schema:
+ type: string
+ get:
+ summary: Reads an active transaction based on the transaction ID
+ tags:
+ - PFD Management API Transaction level GET Operation
+ responses:
+ '200':
+ description: OK. The transaction information related to the request URI is returned.
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/PfdManagement'
+ '400':
+ $ref: '#/components/responses/400'
+ '401':
+ $ref: '#/components/responses/401'
+ '403':
+ $ref: '#/components/responses/403'
+ '404':
+ $ref: '#/components/responses/404'
+ '406':
+ $ref: '#/components/responses/406'
+ '429':
+ $ref: '#/components/responses/429'
+ '500':
+ $ref: '#/components/responses/500'
+ '503':
+ $ref: '#/components/responses/503'
+ default:
+ $ref: '#/components/responses/default'
+ put:
+ summary: Replaces an active transaction based on the transaction ID
+ tags:
+ - PFD Management API Transaction level PUT Operation
+ requestBody:
+ required: true
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/PfdManagement'
+ description: Change information in the PFD management transaction.
+ responses:
+ '200':
+ description: OK. The transaction was modified successfully. The SCEF shall return an updated transaction in the response payload body.
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/PfdManagement'
+ '400':
+ $ref: '#/components/responses/400'
+ '401':
+ $ref: '#/components/responses/401'
+ '403':
+ $ref: '#/components/responses/403'
+ '404':
+ $ref: '#/components/responses/404'
+ '411':
+ $ref: '#/components/responses/411'
+ '413':
+ $ref: '#/components/responses/413'
+ '415':
+ $ref: '#/components/responses/415'
+ '429':
+ $ref: '#/components/responses/429'
+ '500':
+ description: The PFDs for all applications were not updated successfully. PfdReport is included with detailed information.
+ content:
+ application/json:
+ schema:
+ type: array
+ items:
+ $ref: '#/components/schemas/PfdReport'
+ minItems: 1
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ '503':
+ $ref: '#/components/responses/503'
+ default:
+ $ref: '#/components/responses/default'
+ delete:
+ summary: Deletes an already existing transaction based on transaction ID
+ tags:
+ - PFD Management API Transaction level DELETE Operation
+ responses:
+ '204':
+ description: No Content. The transaction was deleted successfully. The payload body shall be empty.
+ '400':
+ $ref: '#/components/responses/400'
+ '401':
+ $ref: '#/components/responses/401'
+ '403':
+ $ref: '#/components/responses/403'
+ '404':
+ $ref: '#/components/responses/404'
+ '429':
+ $ref: '#/components/responses/429'
+ '500':
+ $ref: '#/components/responses/500'
+ '503':
+ $ref: '#/components/responses/503'
+ default:
+ $ref: '#/components/responses/default'
+ /{scsAsId}/transactions/{transactionId}/applications/{appId}:
+ parameters:
+ - name: scsAsId
+ in: path
+ description: Identifier of the SCS/AS as defined in subclause 5.2.4 of 3GPP TS 29.122.
+ required: true
+ schema:
+ type: string
+ - name: transactionId
+ in: path
+ description: Transaction ID
+ required: true
+ schema:
+ type: string
+ - name: appId
+ in: path
+ description: Identifier of the application
+ required: true
+ schema:
+ type: string
+ get:
+ summary: Reads PFD data for an application based on transaction ID and application ID
+ tags:
+ - PFD Management API Application level GET Operation
+ responses:
+ '200':
+ description: OK. The application information related to the request URI is returned.
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/PfdData'
+ '400':
+ $ref: '#/components/responses/400'
+ '401':
+ $ref: '#/components/responses/401'
+ '403':
+ $ref: '#/components/responses/403'
+ '404':
+ $ref: '#/components/responses/404'
+ '406':
+ $ref: '#/components/responses/406'
+ '429':
+ $ref: '#/components/responses/429'
+ '500':
+ $ref: '#/components/responses/500'
+ '503':
+ $ref: '#/components/responses/503'
+ default:
+ $ref: '#/components/responses/default'
+ put:
+ summary: Replaces PFD data for an application based on transaction ID and application ID
+ tags:
+ - PFD Management API Application level PUT Operation
+ requestBody:
+ required: true
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/PfdData'
+ description: Change information in the application.
+ responses:
+ '200':
+ description: OK. The application resource was modified successfully. The SCEF shall return an updated application resource in the response payload body.
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/PfdData'
+ '400':
+ $ref: '#/components/responses/400'
+ '401':
+ $ref: '#/components/responses/401'
+ '403':
+ description: The PFDs for the application were not updated successfully.
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/PfdReport'
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ '404':
+ $ref: '#/components/responses/404'
+ '409':
+ description: The PFDs for the application were not updated successfully.
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/PfdReport'
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ '411':
+ $ref: '#/components/responses/411'
+ '413':
+ $ref: '#/components/responses/413'
+ '415':
+ $ref: '#/components/responses/415'
+ '429':
+ $ref: '#/components/responses/429'
+ '500':
+ description: The PFDs for the application were not updated successfully.
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/PfdReport'
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ '503':
+ $ref: '#/components/responses/503'
+ default:
+ $ref: '#/components/responses/default'
+ patch:
+ summary: Updates PFD data for an application based on transaction ID and application ID
+ tags:
+ - PFD Management API Application level PATCH Operation
+ requestBody:
+ required: true
+ content:
+ application/merge-patch+json:
+ schema:
+ $ref: '#/components/schemas/PfdData'
+ description: Change information in the application.
+ responses:
+ '200':
+ description: OK. The application resource was modified successfully. The SCEF shall return an updated application resource in the response payload body.
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/PfdData'
+ '400':
+ $ref: '#/components/responses/400'
+ '401':
+ $ref: '#/components/responses/401'
+ '403':
+ description: The PFDs for the application were not updated successfully.
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/PfdReport'
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ '404':
+ $ref: '#/components/responses/404'
+ '409':
+ description: The PFDs for the application were not updated successfully.
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/PfdReport'
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ '411':
+ $ref: '#/components/responses/411'
+ '413':
+ $ref: '#/components/responses/413'
+ '415':
+ $ref: '#/components/responses/415'
+ '429':
+ $ref: '#/components/responses/429'
+ '500':
+ description: The PFDs for the application were not updated successfully.
+ content:
+ application/json:
+ schema:
+ $ref: '#/components/schemas/PfdReport'
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ '503':
+ $ref: '#/components/responses/503'
+ default:
+ $ref: '#/components/responses/default'
+ delete:
+ summary: Deletes PFD data for an application based on transaction ID and application ID
+ tags:
+ - PFD Management API Application level DELETE Operation
+ responses:
+ '204':
+ description: No Content. The application was deleted successfully. The payload body shall be empty.
+ '400':
+ $ref: '#/components/responses/400'
+ '401':
+ $ref: '#/components/responses/401'
+ '403':
+ $ref: '#/components/responses/403'
+ '404':
+ $ref: '#/components/responses/404'
+ '429':
+ $ref: '#/components/responses/429'
+ '500':
+ $ref: '#/components/responses/500'
+ '503':
+ $ref: '#/components/responses/503'
+ default:
+ $ref: '#/components/responses/default'
+components:
+ securitySchemes:
+ oAuth2ClientCredentials:
+ type: oauth2
+ flows:
+ clientCredentials:
+ tokenUrl: '{tokenUrl}'
+ scopes: {}
+
+ responses:
+ '400':
+ description: Bad request
+ content:
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ '401':
+ description: Unauthorized
+ content:
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ '403':
+ description: Forbidden
+ content:
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ '404':
+ description: Not Found
+ content:
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ '406':
+ description: Not Acceptable
+ content:
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ '409':
+ description: Conflict
+ content:
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ '411':
+ description: Length Required
+ content:
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ '412':
+ description: Precondition Failed
+ content:
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ '413':
+ description: Payload Too Large
+ content:
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ '414':
+ description: URI Too Long
+ content:
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ '415':
+ description: Unsupported Media Type
+ content:
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ '429':
+ description: Too Many Requests
+ content:
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ '500':
+ description: Internal Server Error
+ content:
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ '503':
+ description: Service Unavailable
+ content:
+ application/problem+json:
+ schema:
+ $ref: '#/components/schemas/ProblemDetails'
+ default:
+ description: Generic Error
+
+ schemas:
+ DurationSec:
+ type: integer
+ minimum: 0
+ description: Unsigned integer identifying a period of time in units of seconds.
+ DurationSecRm:
+ type: integer
+ minimum: 0
+ description: Unsigned integer identifying a period of time in units of seconds with "nullable=true" property.
+ nullable: true
+ DurationSecRo:
+ type: integer
+ minimum: 0
+ description: Unsigned integer identifying a period of time in units of seconds with "readOnly=true" property.
+ readOnly: true
+ SupportedFeatures:
+ type: string
+ pattern: '^[A-Fa-f0-9]*$'
+ Link:
+ type: string
+ description: string formatted according to IETF RFC 3986 identifying a referenced resource.
+ Uri:
+ type: string
+ description: string providing a URI formatted according to IETF RFC 3986.
+ ProblemDetails:
+ type: object
+ properties:
+ type:
+ $ref: '#/components/schemas/Uri'
+ title:
+ type: string
+ description: A short, human-readable summary of the problem type. It should not change from occurrence to occurrence of the problem.
+ status:
+ type: integer
+ description: The HTTP status code for this occurrence of the problem.
+ detail:
+ type: string
+ description: A human-readable explanation specific to this occurrence of the problem.
+ instance:
+ $ref: '#/components/schemas/Uri'
+ cause:
+ type: string
+ description: A machine-readable application error cause specific to this occurrence of the problem. This IE should be present and provide application-related error information, if available.
+ invalidParams:
+ type: array
+ items:
+ $ref: '#/components/schemas/InvalidParam'
+ minItems: 1
+ description: Description of invalid parameters, for a request rejected due to invalid parameters.
+ InvalidParam:
+ type: object
+ properties:
+ param:
+ type: string
+ description: Attribute's name encoded as a JSON Pointer, or header's name.
+ reason:
+ type: string
+ description: A human-readable reason, e.g. "must be a positive integer".
+ required:
+ - param
+
+ PfdManagement:
+ type: object
+ properties:
+ self:
+ $ref: '#/components/schemas/Link'
+ supportedFeatures:
+ $ref: '#/components/schemas/SupportedFeatures'
+ pfdDatas:
+ type: object
+ additionalProperties:
+ $ref: '#/components/schemas/PfdData'
+ minProperties: 1
+ description: Each element uniquely identifies the PFDs for an external application identifier. Each element is identified in the map via an external application identifier as key. The response shall include successfully provisioned PFD data of application(s).
+ pfdReports:
+ type: object
+ additionalProperties:
+ $ref: '#/components/schemas/PfdReport'
+ minProperties: 1
+ description: Supplied by the SCEF and contains the external application identifiers for which PFD(s) were not added or modified successfully. The failure reason is also included. Each element provides the related information for one or more external application identifier(s) and is identified in the map via the failure identifier as key.
+ readOnly: true
+ required:
+ - pfdDatas
+ PfdData:
+ type: object
+ properties:
+ externalAppId:
+ type: string
+ description: Identifies the external application identifier.
+ self:
+ $ref: '#/components/schemas/Link'
+ pfds:
+ type: object
+ additionalProperties:
+ $ref: '#/components/schemas/Pfd'
+ description: Contains the PFDs of the external application identifier. Each PFD is identified in the map via a key containing the PFD identifier.
+ allowedDelay:
+ $ref: '#/components/schemas/DurationSecRm'
+ cachingTime:
+ $ref: '#/components/schemas/DurationSecRo'
+ required:
+ - externalAppId
+ - pfds
+ Pfd:
+ type: object
+ properties:
+ pfdId:
+ type: string
+ description: Identifies a PFD of an application identifier.
+ flowDescriptions:
+ type: array
+ items:
+ type: string
+ minItems: 1
+ description: Represents a 3-tuple with protocol, server IP and server port for UL/DL application traffic. The content of the string has the same encoding as the IPFilterRule AVP value as defined in IETF RFC 6733.
+ urls:
+ type: array
+ items:
+ type: string
+ minItems: 1
+ description: Indicates a URL or a regular expression which is used to match the significant parts of the URL.
+ domainNames:
+ type: array
+ items:
+ type: string
+ minItems: 1
+ description: Indicates an FQDN or a regular expression as a domain name matching criteria.
+ required:
+ - pfdId
+ PfdReport:
+ type: object
+ properties:
+ externalAppIds:
+ type: array
+ items:
+ type: string
+ minItems: 1
+ description: Identifies the external application identifier(s) for which the PFD(s) were not added or modified successfully
+ failureCode:
+ $ref: '#/components/schemas/FailureCode'
+ cachingTime:
+ $ref: '#/components/schemas/DurationSec'
+ required:
+ - externalAppIds
+ - failureCode
+ FailureCode:
+ anyOf:
+ - type: string
+ enum:
+ - MALFUNCTION
+ - RESOURCE_LIMITATION
+ - SHORT_DELAY
+ - APP_ID_DUPLICATED
+ - OTHER_REASON
+ - type: string
+ description: >
+ This string provides forward-compatibility with future
+ extensions to the enumeration but is not used to encode
+ content defined in the present version of this API.
+ description: >
+ Possible values are
+ - MALFUNCTION: This value indicates that something functions wrongly in PFD provisioning or the PFD provisioning does not function at all.
+ - RESOURCE_LIMITATION: This value indicates there is resource limitation for PFD storage.
+ - SHORT_DELAY: This value indicates that the allowed delay is too short and PFD(s) are not stored.
+ - APP_ID_DUPLICATED: The received external application identifier(s) are already provisioned.
+ - OTHER_REASON: Other reason unspecified.
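Analogous to the JSON variant above, the application-level PATCH in this YAML takes an `application/merge-patch+json` body. A hedged sketch, with the host and the identifiers `myScsAs`, `1`, `app1`, and `pfd1` as placeholders:

```shell
# Merge-patch a single application's PFD data
# (PATCH /{scsAsId}/transactions/{transactionId}/applications/{appId}).
# Merge-patch semantics: only the members present in the body are replaced.
curl -X PATCH 'https://scef.example.com/3gpp-pfd-management/v1/myScsAs/transactions/1/applications/app1' \
  -H 'Content-Type: application/merge-patch+json' \
  -d '{
        "pfds": {
          "pfd1": {
            "pfdId": "pfd1",
            "domainNames": ["updated.example.com"]
          }
        }
      }'
```

A 200 response returns the updated `PfdData`; the 403/409/500 variants may instead carry a `PfdReport` naming the failure code.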
diff --git a/schema/nef/nef_traffic_influence_openapi.json b/schema/nef/nef_traffic_influence_openapi.json
index 3bc38cbc..c2ad4df3 100644
--- a/schema/nef/nef_traffic_influence_openapi.json
+++ b/schema/nef/nef_traffic_influence_openapi.json
@@ -9,7 +9,10 @@
"url": "http://www.3gpp.org/ftp/Specs/archive/29_series/29.522/"
},
"security": [
- {}
+ {},
+ {
+ "oAuth2ClientCredentials": []
+ }
],
"servers": [
{
@@ -238,7 +241,7 @@
"put": {
"summary": "Updates/replaces an existing subscription resource",
"tags": [
- "TrafficInfluence API subscription level PUT Operation"
+ "TrafficInfluence API Subscription level PUT Operation"
],
"requestBody": {
"description": "Parameters to update/replace the existing subscription",
@@ -288,7 +291,7 @@
"patch": {
"summary": "Updates/replaces an existing subscription resource",
"tags": [
- "TrafficInfluence API subscription level PATCH Operation"
+ "TrafficInfluence API Subscription level PATCH Operation"
],
"requestBody": {
"required": true,
@@ -363,6 +366,17 @@
}
},
"components": {
+ "securitySchemes": {
+ "oAuth2ClientCredentials": {
+ "type": "oauth2",
+ "flows": {
+ "clientCredentials": {
+ "tokenUrl": "{tokenUrl}",
+ "scopes": {}
+ }
+ }
+ }
+ },
"schemas": {
"TrafficInfluSub": {
"type": "object",
@@ -1085,4 +1099,4 @@
}
}
}
-}
+}
\ No newline at end of file
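The `oAuth2ClientCredentials` scheme added to this file (and to the YAML below) is a standard OAuth2 client-credentials flow; `tokenUrl` is deliberately left as a template variable in the spec. A sketch of how a client might use it, with a purely hypothetical token endpoint, credentials, and NEF host:

```shell
# Fetch an access token via the client-credentials grant (requires jq).
TOKEN=$(curl -s -X POST 'https://auth.example.com/token' \
  -d 'grant_type=client_credentials&client_id=my-client&client_secret=my-secret' \
  | jq -r '.access_token')

# Call the TrafficInfluence API with the bearer token.
curl -H "Authorization: Bearer ${TOKEN}" \
  'https://nef.example.com/3gpp-traffic-influence/v1/myAfId/subscriptions'
```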
diff --git a/schema/nef/nef_traffic_influence_openapi.yaml b/schema/nef/nef_traffic_influence_openapi.yaml
index 766a2dbc..225e1ecb 100644
--- a/schema/nef/nef_traffic_influence_openapi.yaml
+++ b/schema/nef/nef_traffic_influence_openapi.yaml
@@ -11,6 +11,7 @@ externalDocs:
url: 'http://www.3gpp.org/ftp/Specs/archive/29_series/29.522/'
security:
- {}
+ - oAuth2ClientCredentials: []
servers:
- url: '{apiRoot}/3gpp-traffic-influence/v1'
variables:
@@ -117,7 +118,7 @@ paths:
$ref: '#/components/responses/503'
default:
$ref: '#/components/responses/default'
-
+
/{afId}/subscriptions/{subscriptionId}:
parameters:
- name: afId
@@ -158,7 +159,7 @@ paths:
put:
summary: Updates/replaces an existing subscription resource
tags:
- - TrafficInfluence API subscription level PUT Operation
+ - TrafficInfluence API Subscription level PUT Operation
requestBody:
description: Parameters to update/replace the existing subscription
required: true
@@ -190,7 +191,7 @@ paths:
patch:
summary: Updates/replaces an existing subscription resource
tags:
- - TrafficInfluence API subscription level PATCH Operation
+ - TrafficInfluence API Subscription level PATCH Operation
requestBody:
required: true
content:
@@ -236,6 +237,14 @@ paths:
default:
$ref: '#/components/responses/default'
components:
+ securitySchemes:
+ oAuth2ClientCredentials:
+ type: oauth2
+ flows:
+ clientCredentials:
+ tokenUrl: '{tokenUrl}'
+ scopes: {}
+
schemas:
TrafficInfluSub:
type: object
diff --git a/schema/pb/eva.proto b/schema/pb/eva.proto
index 19e9647e..58a2c749 100644
--- a/schema/pb/eva.proto
+++ b/schema/pb/eva.proto
@@ -80,6 +80,18 @@ message Application {
// (Enhanced App Configuration). This is in Json format - but is at top level
// an array of string key-value pairs. Specific keys are defined by their respective features.
string EACJsonBlob = 11;
+
+ // CNI configuration for the application
+ CNIConfiguration cniConf = 12;
+}
+
+// CNIConfiguration stores CNI configuration data.
+// CNI specification is available at https://github.com/containernetworking/cni/blob/master/SPEC.md
+message CNIConfiguration {
+ string cniConfig = 1; // CNI configuration in the form of a JSON document
+ string interfaceName = 2; // Name of the interface
+ string path = 3; // CNI plugin search path (CNI_PATH)
+ string args = 4; // Extra args passed via the CNI_ARGS environment variable
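+
+ // Illustrative values only (not defined by this schema): cniConfig could
+ // carry a standard CNI network configuration such as
+ // {"cniVersion": "0.4.0", "name": "ovn-net", "type": "bridge"};
+ // interfaceName e.g. "eth0", path e.g. "/opt/cni/bin", and args a
+ // semicolon-separated list like "K8S_POD_NAMESPACE=default;K8S_POD_NAME=mypod".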
}
message ApplicationID {