This repository has been archived by the owner on May 27, 2022. It is now read-only.

Commit

Update paths

cjnolan committed Apr 28, 2021
1 parent b1306d3 commit 07db152
Showing 28 changed files with 125 additions and 125 deletions.
98 changes: 49 additions & 49 deletions README.md

Large diffs are not rendered by default.

12 changes: 6 additions & 6 deletions doc/applications-onboard/network-edge-applications-onboarding.md
@@ -42,14 +42,14 @@ Users must provide the application to be deployed on the OpenNESS platform for N

> **Note**: The Harbor registry setup is out of scope for this document. If users already have a Docker container image file and would like to copy it to the node manually, they can use the `docker load` command to add the image. Successful use of a pre-built Docker image depends on its application dependencies, which users must know.
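A minimal sketch of that manual path (the archive name is hypothetical, not from this repository):
```shell
# Load a pre-built image archive directly into the node's Docker daemon.
docker load -i sample-app.tar.gz   # archive name is hypothetical
# Confirm the image is now available locally.
docker images | grep sample-app
```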
-The OpenNESS [edgeapps](https://github.com/otcshare/edgeapps) repository provides images for OpenNESS supported applications. Pull the repository to your Edge Node to build the images.
+The OpenNESS [edgeapps](https://github.com/open-ness/edgeapps) repository provides images for OpenNESS supported applications. Pull the repository to your Edge Node to build the images.

This document explains the build and deployment of two applications:
1. Sample application: a simple “Hello, World!” reference application for OpenNESS
2. OpenVINO™ application: a close-to-real-world inference application

## Building sample application images
-The sample application is available in [the edgeapps repository](https://github.com/otcshare/edgeapps/tree/master/applications/sample-app); further information about the application is contained within the `Readme.md` file.
+The sample application is available in [the edgeapps repository](https://github.com/open-ness/edgeapps/tree/master/applications/sample-app); further information about the application is contained within the `Readme.md` file.

The following steps are required to build the sample application Docker images for testing the OpenNESS Edge Application Agent (EAA) with consumer and producer applications:

@@ -64,7 +64,7 @@ The following steps are required to build the sample application Docker images f
```
docker images | grep consumer
```
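The build steps themselves are collapsed in this view; a hedged sketch of the usual flow, assuming Dockerfiles in the sample-app directory (the clone URL follows this commit's updated paths; Dockerfile names and tags are assumptions, so consult the application's `Readme.md`):
```shell
git clone https://github.com/open-ness/edgeapps.git
cd edgeapps/applications/sample-app
# Build the EAA producer and consumer images (Dockerfile names are assumptions).
docker build -t producer:1.0 -f Dockerfile-producer .
docker build -t consumer:1.0 -f Dockerfile-consumer .
```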
## Building the OpenVINO application images
-The OpenVINO application is available in [the EdgeApps repository](https://github.com/otcshare/edgeapps/tree/master/applications/openvino); further information about the application is contained within the `Readme.md` file.
+The OpenVINO application is available in [the EdgeApps repository](https://github.com/open-ness/edgeapps/tree/master/applications/openvino); further information about the application is contained within the `Readme.md` file.

The following steps are required to build the sample application Docker images for testing OpenVINO consumer and producer applications:
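The individual steps are collapsed in this view; a sketch under the layout shown later in this commit, where each application directory ships a `build-image.sh`:
```shell
# Build the OpenVINO producer and consumer images from an edgeapps checkout.
cd edgeapps/applications/openvino/producer && ./build-image.sh
cd ../consumer && ./build-image.sh
docker images | grep openvino   # verify both images built
```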

@@ -491,12 +491,12 @@ This section guides users through the complete process of onboarding the OpenVIN
## Deploying the Application
-1. An application `yaml` specification file for the OpenVINO producer that is used to deploy the K8s pod can be found in the Edge Apps repository at [./applications/openvino/producer/openvino-prod-app.yaml](https://github.com/otcshare/edgeapps/blob/master/applications/openvino/producer/openvino-prod-app.yaml). The pod will use the Docker image, which must be [built](#building-openvino-application-images) and available on the platform. Deploy the producer application by running:
+1. An application `yaml` specification file for the OpenVINO producer that is used to deploy the K8s pod can be found in the Edge Apps repository at [./applications/openvino/producer/openvino-prod-app.yaml](https://github.com/open-ness/edgeapps/blob/master/applications/openvino/producer/openvino-prod-app.yaml). The pod will use the Docker image, which must be [built](#building-openvino-application-images) and available on the platform. Deploy the producer application by running:
```
kubectl apply -f openvino-prod-app.yaml
kubectl certificate approve openvino-prod-app
```
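A hedged check that the producer pod came up before continuing (the pod/deployment name prefix is an assumption based on the app name):
```shell
kubectl get pods | grep openvino-prod-app
kubectl logs deploy/openvino-prod-app   # resource type/name are assumptions
```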
-2. An application `yaml` specification file for the OpenVINO consumer that is used to deploy the K8s pod can be found in the Edge Apps repository at [./applications/openvino/consumer/openvino-cons-app.yaml](https://github.com/otcshare/edgeapps/blob/master/applications/openvino/consumer/openvino-cons-app.yaml). The pod will use the Docker image, which must be [built](#building-openvino-application-images) and available on the platform. Deploy the consumer application by running:
+2. An application `yaml` specification file for the OpenVINO consumer that is used to deploy the K8s pod can be found in the Edge Apps repository at [./applications/openvino/consumer/openvino-cons-app.yaml](https://github.com/open-ness/edgeapps/blob/master/applications/openvino/consumer/openvino-cons-app.yaml). The pod will use the Docker image, which must be [built](#building-openvino-application-images) and available on the platform. Deploy the consumer application by running:
```
kubectl apply -f openvino-cons-app.yaml
kubectl certificate approve openvino-cons-app
```
@@ -593,7 +593,7 @@ The following is an example of how to set up DNS resolution for OpenVINO consume
```
dig openvino.openness
```
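For the `dig` query above to resolve, the client host must use the edge DNS server as its resolver; a sketch (the resolver IP is illustrative, use the edge node's interface address):
```shell
# Point name resolution at the OpenNESS edge DNS service.
echo "nameserver 192.168.1.1" | sudo tee -a /etc/resolv.conf   # IP is illustrative
dig openvino.openness
```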
3. On the traffic-generating host, build the image for the [Client Simulator](#building-openvino-application-images)
-4. Run the following from [edgeapps/applications/openvino/clientsim](https://github.com/otcshare/edgeapps/blob/master/applications/openvino/clientsim/run-docker.sh) to start the video traffic via the containerized Client Simulator. A graphical user environment is required to view the results of the returning augmented video stream.
+4. Run the following from [edgeapps/applications/openvino/clientsim](https://github.com/open-ness/edgeapps/blob/master/applications/openvino/clientsim/run-docker.sh) to start the video traffic via the containerized Client Simulator. A graphical user environment is required to view the results of the returning augmented video stream.
```
./run_docker.sh
```
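Because the annotated stream is rendered locally, the container typically needs access to the host's X display; a common sketch (an assumption, not part of the original instructions):
```shell
xhost +local:root   # allow local containers to open windows on the host X server
./run_docker.sh
```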
4 changes: 2 additions & 2 deletions doc/applications-onboard/openness-interface-service.md
@@ -35,7 +35,7 @@ Update the physical Ethernet interface with an IP from the `192.168.1.0/24` subn
route add -net 10.16.0.0/16 gw 192.168.1.1 dev eth1
```
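For completeness, a sketch of first assigning the interface an address from that subnet (the host address `192.168.1.10` is illustrative):
```shell
ip addr add 192.168.1.10/24 dev eth1   # any free host address in the subnet
ip link set eth1 up
route add -net 10.16.0.0/16 gw 192.168.1.1 dev eth1
```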

-> **NOTE**: The default OpenNESS network policy applies to pods in the `default` namespace and blocks all ingress traffic. Refer to [Kubernetes NetworkPolicies](https://github.com/otcshare/specs/blob/master/doc/applications-onboard/network-edge-applications-onboarding.md#applying-kubernetes-network-policies) for an example policy allowing ingress traffic from the `192.168.1.0/24` subnet on a specific port.
+> **NOTE**: The default OpenNESS network policy applies to pods in the `default` namespace and blocks all ingress traffic. Refer to [Kubernetes NetworkPolicies](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/network-edge-applications-onboarding.md#applying-kubernetes-network-policies) for an example policy allowing ingress traffic from the `192.168.1.0/24` subnet on a specific port.
> **NOTE**: The subnet `192.168.1.0/24` is allocated by the Ansible\* playbook to the physical interface, which is attached to the first edge node. The second edge node joined to the cluster is allocated the next subnet, `192.168.2.0/24`, and so on.
@@ -78,7 +78,7 @@ Currently, interface service supports the following values of the `driver` param
## Userspace (DPDK) bridge
-The default DPDK-enabled bridge `br-userspace` is only available if OpenNESS is deployed with support for [Userspace CNI](https://github.com/otcshare/specs/blob/master/doc/building-blocks/dataplane/openness-userspace-cni.md) and at least one pod was deployed using the Userspace CNI. You can check if the `br-userspace` bridge exists by running the following command on your node:
+The default DPDK-enabled bridge `br-userspace` is only available if OpenNESS is deployed with support for [Userspace CNI](https://github.com/open-ness/specs/blob/master/doc/building-blocks/dataplane/openness-userspace-cni.md) and at least one pod was deployed using the Userspace CNI. You can check if the `br-userspace` bridge exists by running the following command on your node:
```shell
ovs-vsctl list-br
```
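A quick scripted check (sketch) for the bridge's presence:
```shell
ovs-vsctl list-br | grep -qx br-userspace && echo "br-userspace present" || echo "br-userspace missing"
```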
@@ -133,7 +133,7 @@ The KubeVirt role responsible for bringing up KubeVirt components is enabled by

## VM deployment
Provided below are sample deployment instructions for different types of VMs.
-Please use sample `.yaml` specification files provided in the OpenNESS Edge Controller directory, [edgeservices/edgecontroller/kubevirt/examples/](https://github.com/otcshare/edgeservices/tree/master/edgecontroller/kubevirt/examples), to deploy the workloads. Some of the files require modification to suit the environment they will be deployed in. Specific instructions on modifications are provided in the following steps:
+Please use sample `.yaml` specification files provided in the OpenNESS Edge Controller directory, [edgeservices/edgecontroller/kubevirt/examples/](https://github.com/open-ness/edgeservices/tree/master/edgecontroller/kubevirt/examples), to deploy the workloads. Some of the files require modification to suit the environment they will be deployed in. Specific instructions on modifications are provided in the following steps:

### Stateless VM deployment
To deploy a sample stateless VM with containerDisk storage:
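The deployment steps are collapsed in this view; a sketch, assuming a sample spec file name from the examples directory above (the file name, and KubeVirt being installed, are assumptions):
```shell
kubectl apply -f statelessVM.yaml   # sample spec name is an assumption
kubectl get vmis                    # the VirtualMachineInstance should reach Running
```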
2 changes: 1 addition & 1 deletion doc/applications-onboard/using-openness-cnca.md
@@ -165,7 +165,7 @@ OpenNESS provides ansible scripts for setting up NGC components for two scenario

For Network Edge mode, the CNCA provides a kubectl plugin to configure the 5G Core network. Kubernetes has adopted a plugin concept to extend its functionality. The `kube-cnca` plugin executes CNCA-related functions within the Kubernetes ecosystem. The plugin performs remote callouts against the NGC OAM and AF microservices on the controller itself.

-The `kube-cnca` plugin is installed automatically on the control plane during the installation phase of the [Converged Edge Experience Kits](https://github.com/otcshare/specs/blob/master/doc/getting-started/openness-cluster-setup.md).
+The `kube-cnca` plugin is installed automatically on the control plane during the installation phase of the [Converged Edge Experience Kits](https://github.com/open-ness/specs/blob/master/doc/getting-started/openness-cluster-setup.md).
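Since kubectl discovers plugins by executable name, `kube-cnca` surfaces as a `kubectl` subcommand; a hedged smoke test (available subcommands vary by release):
```shell
kubectl plugin list | grep cnca   # confirm the plugin binary is on PATH
kubectl cnca --help               # list the supported CNCA operations
```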

#### Edge Node services operations with 5G Core (through OAM interface)

2 changes: 1 addition & 1 deletion doc/applications/openness_appguide.md
@@ -35,7 +35,7 @@ This guide is targeted at the <b><i>Cloud Application developers who want to</b>
- Develop applications for Edge computing that take advantage of all the capabilities exposed through Edge Compute APIs in OpenNESS.
- Port existing applications and services from the public/private cloud to the edge unmodified.

-The document will describe how to develop applications from scratch using the template/example applications/services provided in the OpenNESS software release. All the OpenNESS Applications and services can be found in the [edgeapps repo](https://github.com/otcshare/edgeapps).
+The document will describe how to develop applications from scratch using the template/example applications/services provided in the OpenNESS software release. All the OpenNESS Applications and services can be found in the [edgeapps repo](https://github.com/open-ness/edgeapps).

## OpenNESS Edge Node Applications
OpenNESS Applications can be onboarded and provisioned on the edge nodes only through the OpenNESS Controller. The first step in onboarding involves uploading the application image to the controller through the web interface. Both VM and container images are supported.
4 changes: 2 additions & 2 deletions doc/applications/openness_openvino.md
@@ -100,14 +100,14 @@ For more information about CSR, refer to [OpenNESS CertSigner](../applications-o

Applications are deployed on the OpenNESS Edge Node as Docker containers. Three Docker containers must be built to get the OpenVINO pipeline working: `clientsim`, `producer`, and `consumer`. The `clientsim` Docker image must be built and executed on the client simulator machine, while the `producer` and `consumer` containers/pods should be onboarded on the OpenNESS Edge Node.

-On the client simulator, clone the [OpenNESS edgeapps](https://github.com/otcshare/edgeapps) and execute the following command to build the `client-sim` container:
+On the client simulator, clone the [OpenNESS edgeapps](https://github.com/open-ness/edgeapps) and execute the following command to build the `client-sim` container:

```shell
cd <edgeapps-repo>/openvino/clientsim
./build-image.sh
```

-On the OpenNESS Edge Node, clone the [OpenNESS edgeapps](https://github.com/otcshare/edgeapps) and execute the following command to build the `producer` and `consumer` containers:
+On the OpenNESS Edge Node, clone the [OpenNESS edgeapps](https://github.com/open-ness/edgeapps) and execute the following command to build the `producer` and `consumer` containers:
```shell
cd <edgeapps-repo>/openvino/producer
./build-image.sh
cd <edgeapps-repo>/openvino/consumer
./build-image.sh
```
4 changes: 2 additions & 2 deletions doc/applications/openness_service_mesh.md
@@ -144,7 +144,7 @@ spec:
```
        methods: ["GET", "POST", "DELETE"]
```

-In this `AuthorizationPolicy`, the Istio service mesh will allow "GET", "POST", and "DELETE" requests only from authenticated applications in the `default` namespace to be passed to the service. For example, if using the [Video Analytics sample application](https://github.com/otcshare/edgeapps/tree/master/applications/vas-sample-app), the policy will allow requests from the sample application to be received by the service, as it is deployed in the `default` namespace. However, if the application is deployed in a different namespace (for example, the `openness` namespace mentioned above in the output from the Kubernetes cluster), the policy will deny access to the service because the request comes from an application in a different namespace.
+In this `AuthorizationPolicy`, the Istio service mesh will allow "GET", "POST", and "DELETE" requests only from authenticated applications in the `default` namespace to be passed to the service. For example, if using the [Video Analytics sample application](https://github.com/open-ness/edgeapps/tree/master/applications/vas-sample-app), the policy will allow requests from the sample application to be received by the service, as it is deployed in the `default` namespace. However, if the application is deployed in a different namespace (for example, the `openness` namespace mentioned above in the output from the Kubernetes cluster), the policy will deny access to the service because the request comes from an application in a different namespace.
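Reassembling the truncated policy above into a self-contained sketch (the resource name is an assumption; the `spec` fields follow the fragment and prose in this hunk):
```shell
kubectl apply -f - <<'EOF'
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-default-ns    # name is an assumption
  namespace: default
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        namespaces: ["default"]
    to:
    - operation:
        methods: ["GET", "POST", "DELETE"]
EOF
```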

> **NOTE**: The above `AuthorizationPolicy` can be tailored so that the OpenNESS service mesh *selectively* authorizes particular applications to consume premium video analytics services such as those accelerated using HDDL or VCAC-A cards.

@@ -484,7 +484,7 @@ Users can change the namespace labeled with istio label using the parameter `ist
* in `flavors/media-analytics/all.yml` for deployment with media-analytics flavor
* in `inventory/default/group_vars/all/10-default.yml` for deployment with any flavor (and istio role enabled)

-> **NOTE**: The default OpenNESS network policy applies to pods in the `default` namespace and blocks all ingress traffic. Users must remove the default policy and apply a custom network policy when deploying applications in the `default` namespace. Refer to the [Kubernetes NetworkPolicies](https://github.com/otcshare/specs/blob/master/doc/applications-onboard/network-edge-applications-onboarding.md#applying-kubernetes-network-policies) for an example policy allowing ingress traffic from the `192.168.1.0/24` subnet on a specific port.
+> **NOTE**: The default OpenNESS network policy applies to pods in the `default` namespace and blocks all ingress traffic. Users must remove the default policy and apply a custom network policy when deploying applications in the `default` namespace. Refer to the [Kubernetes NetworkPolicies](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/network-edge-applications-onboarding.md#applying-kubernetes-network-policies) for an example policy allowing ingress traffic from the `192.168.1.0/24` subnet on a specific port.

The Kiali console is accessible from a browser using `http://<CONTROLLER_IP>:30001`, with credentials defined in the Converged Edge Experience Kits:

10 changes: 5 additions & 5 deletions doc/architecture.md
@@ -112,7 +112,7 @@ Edge Multi-Cluster Orchestration (EMCO) is a Geo-distributed application orchest

![](arch-images/openness-emco.png)

-Link: [EMCO](https://github.com/otcshare/specs/blob/master/doc/building-blocks/emco/openness-emco.md)
+Link: [EMCO](https://github.com/open-ness/specs/blob/master/doc/building-blocks/emco/openness-emco.md)
### Resource Management

Resource Management represents a methodology that involves identifying the hardware and software resources on the edge cluster, configuring and allocating those resources, and continuously monitoring them for any changes.
@@ -137,7 +137,7 @@ Resource Allocation involves configuration of certain hardware resources lik

Resource monitoring involves tracking the usage of resources allocated to applications and services, as well as tracking the remaining allocatable resources. OpenNESS provides collectors and node exporters using collectd, Telegraf, and custom exporters as part of telemetry and monitoring of current resource usage. Resource monitoring support is provided for CPU, VPU, FPGA, and memory.

-Link: [Enhanced Platform Awareness: Documents covering Accelerators and Resource Management](https://github.com/otcshare/specs/tree/master/doc/building-blocks/enhanced-platform-awareness)
+Link: [Enhanced Platform Awareness: Documents covering Accelerators and Resource Management](https://github.com/open-ness/specs/tree/master/doc/building-blocks/enhanced-platform-awareness)

### Accelerators

@@ -177,13 +177,13 @@ OpenNESS supports the following CNIs:
- <b>Kube-OVN CNI</b>: integrates the OVN-based network virtualization with Kubernetes. It offers an advanced container network fabric for enterprises with the most functions and the easiest operation.
- <b>Calico CNI/eBPF</b>: supports applications with higher performance using eBPF and IPv4/IPv6 dual-stack

-Link: [Dataplane and CNI](https://github.com/otcshare/specs/tree/master/doc/building-blocks/dataplane)
+Link: [Dataplane and CNI](https://github.com/open-ness/specs/tree/master/doc/building-blocks/dataplane)

### Edge Aware Service Mesh

Istio is a feature-rich, cloud-native service mesh platform that provides a collection of key capabilities, such as traffic management, security, and observability, uniformly across a network of services. OpenNESS integrates natively with the Istio service mesh to help reduce the complexity of large-scale edge applications, services, and network functions.

-Link: [Service Mesh](https://github.com/otcshare/specs/blob/master/doc/applications/openness_service_mesh.md)
+Link: [Service Mesh](https://github.com/open-ness/specs/blob/master/doc/applications/openness_service_mesh.md)

### Telemetry and Monitoring

@@ -293,7 +293,7 @@ The following is a subset of supported reference network functions:

- <b>gNodeB or eNodeB</b>: 5G or 4G base station implementation on Intel architecture based on Intel’s FlexRAN.

-Link: [Documents covering OpenNESS supported Reference Architectures](https://github.com/otcshare/specs/tree/master/doc/reference-architectures)
+Link: [Documents covering OpenNESS supported Reference Architectures](https://github.com/open-ness/specs/tree/master/doc/reference-architectures)
## OpenNESS Optimized Commercial Applications

OpenNESS Optimized Commercial applications are available at [Intel® Network Builders](https://networkbuilders.intel.com/commercial-applications)
2 changes: 1 addition & 1 deletion doc/building-blocks/dataplane/openness-interapp.md
@@ -15,7 +15,7 @@ Multi-core edge cloud platforms typically host multiple containers or virtual ma
## InterApp Communication support in OpenNESS Network Edge

-InterApp communication on the OpenNESS Network Edge is supported using Open Virtual Network for Open vSwitch [OVN/OVS](https://github.com/otcshare/specs/blob/master/doc/building-blocks/dataplane/openness-ovn.md) as the infrastructure. OVN/OVS in the network edge is supported through the Kubernetes kube-OVN Container Network Interface (CNI).
+InterApp communication on the OpenNESS Network Edge is supported using Open Virtual Network for Open vSwitch [OVN/OVS](https://github.com/open-ness/specs/blob/master/doc/building-blocks/dataplane/openness-ovn.md) as the infrastructure. OVN/OVS in the network edge is supported through the Kubernetes kube-OVN Container Network Interface (CNI).

>**NOTE**: InterApp communication also works with the Calico CNI. Calico is supported as the default CNI in OpenNESS from the 21.03 release.
