diff --git a/README.md b/README.md
index 6d56681d..f683ee4f 100644
--- a/README.md
+++ b/README.md
@@ -6,7 +6,7 @@ Copyright (c) 2019-2020 Intel Corporation
# OpenNESS Quick Start
## Network Edge
- ### Step 1. Get Hardware ► Step 2. [Getting started](https://github.com/otcshare/specs/blob/master/doc/getting-started/openness-cluster-setup.md) ► Step 3. [Applications Onboarding](https://github.com/otcshare/specs/blob/master/doc/applications-onboard/network-edge-applications-onboarding.md)
+ ### Step 1. Get Hardware ► Step 2. [Getting started](https://github.com/open-ness/specs/blob/master/doc/getting-started/openness-cluster-setup.md) ► Step 3. [Applications Onboarding](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/network-edge-applications-onboarding.md)
# OpenNESS solution documentation index
@@ -15,86 +15,86 @@ Below is the complete list of OpenNESS solution documentation
## Architecture
-* [architecture.md: OpenNESS Architecture overview](https://github.com/otcshare/specs/blob/master/doc/architecture.md)
-* [flavors.md: OpenNESS Deployment Flavors](https://github.com/otcshare/specs/blob/master/doc/flavors.md)
+* [architecture.md: OpenNESS Architecture overview](https://github.com/open-ness/specs/blob/master/doc/architecture.md)
+* [flavors.md: OpenNESS Deployment Flavors](https://github.com/open-ness/specs/blob/master/doc/flavors.md)
## Getting Started - Setup
-* [getting-started: Folder containing how to get started with installing and trying OpenNESS Network Edge solutions](https://github.com/otcshare/specs/blob/master/doc/getting-started)
- * [openness-cluster-setup.md: Getting started here for installing and trying OpenNESS Network Edge](https://github.com/otcshare/specs/blob/master/doc/getting-started/openness-cluster-setup.md)
- * [converged-edge-experience-kits.md: Overview of the Converged Edge Experience Kits that are used to install the Network Edge solutions](https://github.com/otcshare/specs/blob/master/doc/getting-started/converged-edge-experience-kits.md)
- * [non-root-user.md: Using the non-root user on the OpenNESS Platform](https://github.com/otcshare/specs/blob/master/doc/getting-started/non-root-user.md)
- * [harbor-registry.md: Enabling Harbor Registry service in OpenNESS](https://github.com/otcshare/specs/blob/master/doc/getting-started/harbor-registry.md)
- * [kubernetes-dashboard.md: Installing Kubernetes Dashboard for OpenNESS Network Edge cluster](https://github.com/otcshare/specs/blob/master/doc/getting-started/kubernetes-dashboard.md)
+* [getting-started: Folder describing how to get started with installing and trying OpenNESS Network Edge solutions](https://github.com/open-ness/specs/blob/master/doc/getting-started)
+ * [openness-cluster-setup.md: Start here for installing and trying OpenNESS Network Edge](https://github.com/open-ness/specs/blob/master/doc/getting-started/openness-cluster-setup.md)
+ * [converged-edge-experience-kits.md: Overview of the Converged Edge Experience Kits that are used to install the Network Edge solutions](https://github.com/open-ness/specs/blob/master/doc/getting-started/converged-edge-experience-kits.md)
+ * [non-root-user.md: Using the non-root user on the OpenNESS Platform](https://github.com/open-ness/specs/blob/master/doc/getting-started/non-root-user.md)
+ * [harbor-registry.md: Enabling Harbor Registry service in OpenNESS](https://github.com/open-ness/specs/blob/master/doc/getting-started/harbor-registry.md)
+ * [kubernetes-dashboard.md: Installing Kubernetes Dashboard for OpenNESS Network Edge cluster](https://github.com/open-ness/specs/blob/master/doc/getting-started/kubernetes-dashboard.md)
## Application onboarding - Deployment
-* [applications-onboard: Now that you have installed OpenNESS platform start in this folder to onboard sample application on OpenNESS Network Edge](https://github.com/otcshare/specs/blob/master/doc/applications-onboard)
- * [network-edge-applications-onboarding.md: Steps for onboarding sample application on OpenNESS Network Edge](https://github.com/otcshare/specs/blob/master/doc/applications-onboard/network-edge-applications-onboarding.md)
- * [openness-edgedns.md: Using edge DNS service](https://github.com/otcshare/specs/blob/master/doc/applications-onboard/openness-edgedns.md)
- * [openness-interface-service.md: Using network interfaces management service](https://github.com/otcshare/specs/blob/master/doc/applications-onboard/openness-interface-service.md)
- * [using-openness-cnca.md: Steps for configuring 4G CUPS or 5G Application Function for Edge deployment for Network Edge](https://github.com/otcshare/specs/blob/master/doc/applications-onboard/using-openness-cnca.md)
- * [openness-eaa.md: Edge Application Agent: Description of Edge Application APIs and Edge Application Authentication APIs](https://github.com/otcshare/specs/blob/master/doc/applications-onboard/openness-eaa.md)
- * [openness-certsigner.md: Steps for issuing platform certificates](https://github.com/otcshare/specs/blob/master/doc/applications-onboard/openness-certsigner.md)
+* [applications-onboard: Now that you have installed the OpenNESS platform, start in this folder to onboard sample applications on OpenNESS Network Edge](https://github.com/open-ness/specs/blob/master/doc/applications-onboard)
+ * [network-edge-applications-onboarding.md: Steps for onboarding a sample application on OpenNESS Network Edge](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/network-edge-applications-onboarding.md)
+ * [openness-edgedns.md: Using edge DNS service](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/openness-edgedns.md)
+ * [openness-interface-service.md: Using network interfaces management service](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/openness-interface-service.md)
+ * [using-openness-cnca.md: Steps for configuring 4G CUPS or 5G Application Function for Edge deployment for Network Edge](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/using-openness-cnca.md)
+ * [openness-eaa.md: Edge Application Agent: Description of Edge Application APIs and Edge Application Authentication APIs](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/openness-eaa.md)
+ * [openness-certsigner.md: Steps for issuing platform certificates](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/openness-certsigner.md)
## Core Network - 4G and 5G
-* [core-network: Folder containing details of 4G CUPS and 5G edge cloud deployment support](https://github.com/otcshare/specs/tree/master/doc/reference-architectures/core-network)
- * [openness_epc.md: Whitepaper detailing the 4G CUPS support for Edge cloud deployment in OpenNESS for Network Edge](https://github.com/otcshare/specs/blob/master/doc/reference-architectures/core-network/openness_epc.md)
- * [openness_ngc.md: Whitepaper detailing the 5G Edge Cloud deployment support in OpenNESS for Network Edge](https://github.com/otcshare/specs/blob/master/doc/reference-architectures/core-network/openness_ngc.md)
- * [openness_upf.md: Whitepaper detailing the UPF, AF, NEF deployment support on OpenNESS for Network Edge](https://github.com/otcshare/specs/blob/master/doc/reference-architectures/core-network/openness_upf.md)
+* [core-network: Folder containing details of 4G CUPS and 5G edge cloud deployment support](https://github.com/open-ness/specs/tree/master/doc/reference-architectures/core-network)
+ * [openness_epc.md: Whitepaper detailing the 4G CUPS support for Edge cloud deployment in OpenNESS for Network Edge](https://github.com/open-ness/specs/blob/master/doc/reference-architectures/core-network/openness_epc.md)
+ * [openness_ngc.md: Whitepaper detailing the 5G Edge Cloud deployment support in OpenNESS for Network Edge](https://github.com/open-ness/specs/blob/master/doc/reference-architectures/core-network/openness_ngc.md)
+ * [openness_upf.md: Whitepaper detailing the UPF, AF, NEF deployment support on OpenNESS for Network Edge](https://github.com/open-ness/specs/blob/master/doc/reference-architectures/core-network/openness_upf.md)
## Enhanced Platform Awareness
-* [enhanced-platform-awareness: Folder containing individual Silicon and Software EPA that are features that are supported in OpenNESS and Network Edge](https://github.com/otcshare/specs/tree/master/doc/building-blocks/enhanced-platform-awareness)
- * [openness-hugepage.md: Hugepages support for Edge Applications and Network Functions](https://github.com/otcshare/specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-hugepage.md)
- * [openness-node-feature-discovery.md: Edge Node hardware and software feature discovery support in OpenNESS](https://github.com/otcshare/specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-node-feature-discovery.md)
- * [openness-sriov-multiple-interfaces.md: Dedicated Physical Network interface allocation support for Edge Applications and Network Functions](https://github.com/otcshare/specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-sriov-multiple-interfaces.md)
- * [openness-dedicated-core.md: Dedicated CPU core allocation support for Edge Applications and Network Functions](https://github.com/otcshare/specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-dedicated-core.md)
- * [openness-bios.md: Edge platform BIOS and Firmware and configuration support in OpenNESS](https://github.com/otcshare/specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-bios.md)
- * [openness-qat.md: Resource allocation & configuration of Intel® QuickAssist Adapter](https://github.com/otcshare/specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-qat.md)
- * [openness-fpga.md: Dedicated FPGA IP resource allocation support for Edge Applications and Network Functions](https://github.com/otcshare/specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-fpga.md)
- * [openness_hddl.md: Using Intel® Movidius™ Myriad™ X High Density Deep Learning (HDDL) solution in OpenNESS](https://github.com/otcshare/specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness_hddl.md)
- * [openness-topology-manager.md: Resource Locality awareness support through Topology manager in OpenNESS](https://github.com/otcshare/specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-topology-manager.md)
- * [openness-vca.md: Visual Compute Accelerator Card - Analytics (VCAC-A)](https://github.com/otcshare/specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-vcac-a.md)
- * [openness-rmd.md: Cache Allocation using Resource Management Daemon(RMD) in OpenNESS](https://github.com/otcshare/specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-rmd.md)
- * [openness-telemetry: Telemetry Support in OpenNESS](https://github.com/otcshare/specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-telemetry.md)
+* [enhanced-platform-awareness: Folder containing the individual silicon and software EPA features supported in OpenNESS Network Edge](https://github.com/open-ness/specs/tree/master/doc/building-blocks/enhanced-platform-awareness)
+ * [openness-hugepage.md: Hugepages support for Edge Applications and Network Functions](https://github.com/open-ness/specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-hugepage.md)
+ * [openness-node-feature-discovery.md: Edge Node hardware and software feature discovery support in OpenNESS](https://github.com/open-ness/specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-node-feature-discovery.md)
+ * [openness-sriov-multiple-interfaces.md: Dedicated Physical Network interface allocation support for Edge Applications and Network Functions](https://github.com/open-ness/specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-sriov-multiple-interfaces.md)
+ * [openness-dedicated-core.md: Dedicated CPU core allocation support for Edge Applications and Network Functions](https://github.com/open-ness/specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-dedicated-core.md)
+ * [openness-bios.md: Edge platform BIOS and Firmware and configuration support in OpenNESS](https://github.com/open-ness/specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-bios.md)
+ * [openness-qat.md: Resource allocation & configuration of Intel® QuickAssist Adapter](https://github.com/open-ness/specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-qat.md)
+ * [openness-fpga.md: Dedicated FPGA IP resource allocation support for Edge Applications and Network Functions](https://github.com/open-ness/specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-fpga.md)
+ * [openness_hddl.md: Using Intel® Movidius™ Myriad™ X High Density Deep Learning (HDDL) solution in OpenNESS](https://github.com/open-ness/specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness_hddl.md)
+ * [openness-topology-manager.md: Resource Locality awareness support through Topology manager in OpenNESS](https://github.com/open-ness/specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-topology-manager.md)
+ * [openness-vcac-a.md: Visual Compute Accelerator Card - Analytics (VCAC-A)](https://github.com/open-ness/specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-vcac-a.md)
+ * [openness-rmd.md: Cache Allocation using Resource Management Daemon (RMD) in OpenNESS](https://github.com/open-ness/specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-rmd.md)
+ * [openness-telemetry.md: Telemetry Support in OpenNESS](https://github.com/open-ness/specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-telemetry.md)
## Dataplane
-* [dataplane: Folder containing Dataplane and inter-app infrastructure support in OpenNESS](https://github.com/otcshare/specs/tree/master/doc/building-blocks/dataplane)
- * [openness-interapp.md: InterApp Communication support in OpenNESS](https://github.com/otcshare/specs/blob/master/doc/building-blocks/dataplane/openness-interapp.md)
- * [openness-ovn.md: OpenNESS Support for OVS as dataplane with OVN](https://github.com/otcshare/specs/blob/master/doc/building-blocks/dataplane/openness-ovn.md)
- * [openness-userspace-cni.md: Userspace CNI - Container Network Interface Kubernetes plugin](https://github.com/otcshare/specs/blob/master/doc/building-blocks/dataplane/openness-userspace-cni.md)
+* [dataplane: Folder containing Dataplane and inter-app infrastructure support in OpenNESS](https://github.com/open-ness/specs/tree/master/doc/building-blocks/dataplane)
+ * [openness-interapp.md: InterApp Communication support in OpenNESS](https://github.com/open-ness/specs/blob/master/doc/building-blocks/dataplane/openness-interapp.md)
+ * [openness-ovn.md: OpenNESS Support for OVS as dataplane with OVN](https://github.com/open-ness/specs/blob/master/doc/building-blocks/dataplane/openness-ovn.md)
+ * [openness-userspace-cni.md: Userspace CNI - Container Network Interface Kubernetes plugin](https://github.com/open-ness/specs/blob/master/doc/building-blocks/dataplane/openness-userspace-cni.md)
## Edge Applications
-* [applications: Folder Containing resource material for Edge Application developers](https://github.com/otcshare/specs/blob/master/doc/applications)
- * [openness_appguide.md: How to develop or Port existing cloud application to the Edge cloud based on OpenNESS](https://github.com/otcshare/specs/blob/master/doc/applications/openness_appguide.md)
- * [openness_ovc.md: Open Visual Cloud Smart City reference Application for OpenNESS](https://github.com/otcshare/specs/blob/master/doc/applications/openness_ovc.md)
- * [openness_openvino.md: AI inference reference Edge application for OpenNESS](https://github.com/otcshare/specs/blob/master/doc/applications/openness_openvino.md)
- * [openness_va_services.md: Video Analytics Services for OpenNESS](https://github.com/otcshare/specs/blob/master/doc/applications/openness_va_services.md)
- * [openness_service_mesh.md: Service Mesh support in OpenNESS](https://github.com/otcshare/specs/blob/master/doc/applications/openness_service_mesh.md)
+* [applications: Folder containing resource material for Edge Application developers](https://github.com/open-ness/specs/blob/master/doc/applications)
+ * [openness_appguide.md: How to develop new or port existing cloud applications to the Edge cloud based on OpenNESS](https://github.com/open-ness/specs/blob/master/doc/applications/openness_appguide.md)
+ * [openness_ovc.md: Open Visual Cloud Smart City reference Application for OpenNESS](https://github.com/open-ness/specs/blob/master/doc/applications/openness_ovc.md)
+ * [openness_openvino.md: AI inference reference Edge application for OpenNESS](https://github.com/open-ness/specs/blob/master/doc/applications/openness_openvino.md)
+ * [openness_va_services.md: Video Analytics Services for OpenNESS](https://github.com/open-ness/specs/blob/master/doc/applications/openness_va_services.md)
+ * [openness_service_mesh.md: Service Mesh support in OpenNESS](https://github.com/open-ness/specs/blob/master/doc/applications/openness_service_mesh.md)
## Cloud Adapters
-* [cloud-adapters: How to deploy public cloud IoT gateways on OpenNESS Edge Cloud](https://github.com/otcshare/specs/blob/master/doc/cloud-adapters)
- * [openness_awsgreengrass.md: Deploying single or multiple instance of Amazon Greengrass IoT gateway on OpenNESS edge cloud as an edge application](https://github.com/otcshare/specs/blob/master/doc/cloud-adapters/openness_awsgreengrass.md)
- * [openness_baiducloud.md: Deploying single or multiple instance of Baidu IoT gateway on OpenNESS edge cloud as an edge application](https://github.com/otcshare/specs/blob/master/doc/cloud-adapters/openness_baiducloud.md)
+* [cloud-adapters: How to deploy public cloud IoT gateways on OpenNESS Edge Cloud](https://github.com/open-ness/specs/blob/master/doc/cloud-adapters)
+ * [openness_awsgreengrass.md: Deploying single or multiple instances of the Amazon Greengrass IoT gateway on the OpenNESS edge cloud as an edge application](https://github.com/open-ness/specs/blob/master/doc/cloud-adapters/openness_awsgreengrass.md)
+ * [openness_baiducloud.md: Deploying single or multiple instances of the Baidu IoT gateway on the OpenNESS edge cloud as an edge application](https://github.com/open-ness/specs/blob/master/doc/cloud-adapters/openness_baiducloud.md)
## API and Schema
* [Edge Application API: EAA](https://www.openness.org/api-documentation/?api=eaa)
* [Edge Application Authentication API](https://www.openness.org/api-documentation/?api=auth)
* [Core Network Configuration API](https://www.openness.org/api-documentation/?api=cups)
-* [schema: Folder containing APIs protobuf or schema for varios endpoints in OpenNESS solution](https://github.com/otcshare/specs/tree/master/schema)
+* [schema: Folder containing protobuf definitions and schemas for various endpoints in the OpenNESS solution](https://github.com/open-ness/specs/tree/master/schema)
## Orchestration
-* [openness-helm.md: Helm support in OpenNESS](https://github.com/otcshare/specs/blob/master/doc/orchestration/openness-helm.md)
+* [openness-helm.md: Helm support in OpenNESS](https://github.com/open-ness/specs/blob/master/doc/orchestration/openness-helm.md)
## Release history
-* [openness_releasenotes.md: This document provides high level system features, issues and limitations information for OpenNESS](https://github.com/otcshare/specs/blob/master/openness_releasenotes.md)
+* [openness_releasenotes.md: High-level information on system features, known issues, and limitations for OpenNESS](https://github.com/open-ness/specs/blob/master/openness_releasenotes.md)
## Related resources
diff --git a/doc/applications-onboard/network-edge-applications-onboarding.md b/doc/applications-onboard/network-edge-applications-onboarding.md
index 97ce6747..6b516bb9 100644
--- a/doc/applications-onboard/network-edge-applications-onboarding.md
+++ b/doc/applications-onboard/network-edge-applications-onboarding.md
@@ -42,14 +42,14 @@ Users must provide the application to be deployed on the OpenNESS platform for N
> **Note**: The Harbor registry setup is out of scope for this document. If users already have a docker container image file and would like to copy it to the node manually, they can use the `docker load` command to add the image. The success of using a pre-built Docker image depends on the application dependencies that users must know.
-The OpenNESS [edgeapps](https://github.com/otcshare/edgeapps) repository provides images for OpenNESS supported applications. Pull the repository to your Edge Node to build the images.
+The OpenNESS [edgeapps](https://github.com/open-ness/edgeapps) repository provides images for OpenNESS supported applications. Pull the repository to your Edge Node to build the images.
This document explains the build and deployment of two applications:
1. Sample application: a simple “Hello, World!” reference application for OpenNESS
2. OpenVINO™ application: A close to real-world inference application
## Building sample application images
-The sample application is available in [the edgeapps repository](https://github.com/otcshare/edgeapps/tree/master/applications/sample-app); further information about the application is contained within the `Readme.md` file.
+The sample application is available in [the edgeapps repository](https://github.com/open-ness/edgeapps/tree/master/applications/sample-app); further information about the application is contained within the `Readme.md` file.
The following steps are required to build the sample application Docker images for testing the OpenNESS Edge Application Agent (EAA) with consumer and producer applications:
@@ -64,7 +64,7 @@ The following steps are required to build the sample application Docker images f
docker images | grep consumer
```
## Building the OpenVINO application images
-The OpenVINO application is available in [the EdgeApps repository](https://github.com/otcshare/edgeapps/tree/master/applications/openvino); further information about the application is contained within `Readme.md` file.
+The OpenVINO application is available in [the EdgeApps repository](https://github.com/open-ness/edgeapps/tree/master/applications/openvino); further information about the application is contained within the `Readme.md` file.
The following steps are required to build the sample application Docker images for testing OpenVINO consumer and producer applications:
@@ -491,12 +491,12 @@ This section guides users through the complete process of onboarding the OpenVIN
## Deploying the Application
-1. An application `yaml` specification file for the OpenVINO producer that is used to deploy the K8s pod can be found in the Edge Apps repository at [./applications/openvino/producer/openvino-prod-app.yaml](https://github.com/otcshare/edgeapps/blob/master/applications/openvino/producer/openvino-prod-app.yaml). The pod will use the Docker image, which must be [built](#building-openvino-application-images) and available on the platform. Deploy the producer application by running:
+1. An application `yaml` specification file for the OpenVINO producer that is used to deploy the K8s pod can be found in the Edge Apps repository at [./applications/openvino/producer/openvino-prod-app.yaml](https://github.com/open-ness/edgeapps/blob/master/applications/openvino/producer/openvino-prod-app.yaml). The pod will use the Docker image, which must be [built](#building-openvino-application-images) and available on the platform. Deploy the producer application by running:
```
kubectl apply -f openvino-prod-app.yaml
kubectl certificate approve openvino-prod-app
```
-2. An application `yaml` specification file for the OpenVINO consumer that is used to deploy K8s pod can be found in the Edge Apps repository at [./applications/openvino/consumer/openvino-cons-app.yaml](https://github.com/otcshare/edgeapps/blob/master/applications/openvino/consumer/openvino-cons-app.yaml). The pod will use the Docker image, which must be [built](#building-openvino-application-images) and available on the platform. Deploy the consumer application by running:
+2. An application `yaml` specification file for the OpenVINO consumer that is used to deploy the K8s pod can be found in the Edge Apps repository at [./applications/openvino/consumer/openvino-cons-app.yaml](https://github.com/open-ness/edgeapps/blob/master/applications/openvino/consumer/openvino-cons-app.yaml). The pod will use the Docker image, which must be [built](#building-openvino-application-images) and available on the platform. Deploy the consumer application by running:
```
kubectl apply -f openvino-cons-app.yaml
kubectl certificate approve openvino-cons-app
@@ -593,7 +593,7 @@ The following is an example of how to set up DNS resolution for OpenVINO consume
dig openvino.openness
```
3. On the traffic generating host build the image for the [Client Simulator](#building-openvino-application-images)
-4. Run the following from [edgeapps/applications/openvino/clientsim](https://github.com/otcshare/edgeapps/blob/master/applications/openvino/clientsim/run-docker.sh) to start the video traffic via the containerized Client Simulator. A graphical user environment is required to view the results of the returning augmented videos stream.
+4. Run the following from [edgeapps/applications/openvino/clientsim](https://github.com/open-ness/edgeapps/blob/master/applications/openvino/clientsim/run-docker.sh) to start the video traffic via the containerized Client Simulator. A graphical user environment is required to view the results of the returning augmented video stream.
```
./run_docker.sh
```
diff --git a/doc/applications-onboard/openness-interface-service.md b/doc/applications-onboard/openness-interface-service.md
index ca6f1c58..f0a2b1e1 100644
--- a/doc/applications-onboard/openness-interface-service.md
+++ b/doc/applications-onboard/openness-interface-service.md
@@ -35,7 +35,7 @@ Update the physical Ethernet interface with an IP from the `192.168.1.0/24` subn
route add -net 10.16.0.0/16 gw 192.168.1.1 dev eth1
```
-> **NOTE**: The default OpenNESS network policy applies to pods in a `default` namespace and blocks all ingress traffic. Refer to [Kubernetes NetworkPolicies](https://github.com/otcshare/specs/blob/master/doc/applications-onboard/network-edge-applications-onboarding.md#applying-kubernetes-network-policies) for an example policy allowing ingress traffic from the `192.168.1.0/24` subnet on a specific port.
+> **NOTE**: The default OpenNESS network policy applies to pods in a `default` namespace and blocks all ingress traffic. Refer to [Kubernetes NetworkPolicies](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/network-edge-applications-onboarding.md#applying-kubernetes-network-policies) for an example policy allowing ingress traffic from the `192.168.1.0/24` subnet on a specific port.
> **NOTE**: The subnet `192.168.1.0/24` is allocated by the Ansible\* playbook to the physical interface, which is attached to the first edge node. The second edge node joined to the cluster is allocated to the next subnet `192.168.2.0/24` and so on.
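For reference, a minimal sketch of such an ingress-allowing policy. The policy name and the port number below are assumptions for illustration only; the linked onboarding guide contains the canonical example:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-from-subnet   # hypothetical name
  namespace: default                # the namespace the default OpenNESS policy applies to
spec:
  podSelector: {}                   # selects all pods in the namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 192.168.1.0/24        # subnet allocated to the first edge node
    ports:
    - protocol: TCP
      port: 5000                    # assumed application port; substitute your own
```

Apply it with `kubectl apply -f <file>.yaml`; it then permits ingress from the node's subnet on the listed port while the default deny-all policy continues to block everything else.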
@@ -78,7 +78,7 @@ Currently, interface service supports the following values of the `driver` param
## Userspace (DPDK) bridge
-The default DPDK-enabled bridge `br-userspace` is only available if OpenNESS is deployed with support for [Userspace CNI](https://github.com/otcshare/specs/blob/master/doc/building-blocks/dataplane/openness-userspace-cni.md) and at least one pod was deployed using the Userspace CNI. You can check if the `br-userspace` bridge exists by running the following command on your node:
+The default DPDK-enabled bridge `br-userspace` is only available if OpenNESS is deployed with support for [Userspace CNI](https://github.com/open-ness/specs/blob/master/doc/building-blocks/dataplane/openness-userspace-cni.md) and at least one pod was deployed using the Userspace CNI. You can check if the `br-userspace` bridge exists by running the following command on your node:
```shell
ovs-vsctl list-br
diff --git a/doc/applications-onboard/openness-network-edge-vm-support.md b/doc/applications-onboard/openness-network-edge-vm-support.md
index 202fdd1e..b02194a4 100644
--- a/doc/applications-onboard/openness-network-edge-vm-support.md
+++ b/doc/applications-onboard/openness-network-edge-vm-support.md
@@ -133,7 +133,7 @@ The KubeVirt role responsible for bringing up KubeVirt components is enabled by
## VM deployment
Provided below are sample deployment instructions for different types of VMs.
-Please use sample `.yaml` specification files provided in the OpenNESS Edge Controller directory, [edgeservices/edgecontroller/kubevirt/examples/](https://github.com/otcshare/edgeservices/tree/master/edgecontroller/kubevirt/examples), to deploy the workloads. Some of the files require modification to suit the environment they will be deployed in. Specific instructions on modifications are provided in the following steps:
+Please use sample `.yaml` specification files provided in the OpenNESS Edge Controller directory, [edgeservices/edgecontroller/kubevirt/examples/](https://github.com/open-ness/edgeservices/tree/master/edgecontroller/kubevirt/examples), to deploy the workloads. Some of the files require modification to suit the environment they will be deployed in. Specific instructions on modifications are provided in the following steps:
### Stateless VM deployment
To deploy a sample stateless VM with containerDisk storage:
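As a rough illustration of what such a specification contains (the VM name, image, and memory request below are placeholders, and the `apiVersion` depends on the installed KubeVirt release; use the `.yaml` files from the examples directory above for actual deployment), a containerDisk-backed stateless VM spec has this shape:

```yaml
apiVersion: kubevirt.io/v1alpha3     # or kubevirt.io/v1 on newer KubeVirt releases
kind: VirtualMachine
metadata:
  name: stateless-vm-sample          # placeholder name
spec:
  running: true                      # start the VM as soon as it is created
  template:
    spec:
      domain:
        devices:
          disks:
          - name: containerdisk
            disk:
              bus: virtio
        resources:
          requests:
            memory: 64Mi             # assumed minimal footprint for a demo image
      volumes:
      - name: containerdisk
        containerDisk:
          image: kubevirt/cirros-container-disk-demo   # public demo image; ephemeral storage
```

Because a containerDisk is backed by a container image, any data written inside the VM is lost when the pod restarts, which is what makes this deployment stateless.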
diff --git a/doc/applications-onboard/using-openness-cnca.md b/doc/applications-onboard/using-openness-cnca.md
index 7c250f3f..0ece630d 100644
--- a/doc/applications-onboard/using-openness-cnca.md
+++ b/doc/applications-onboard/using-openness-cnca.md
@@ -165,7 +165,7 @@ OpenNESS provides ansible scripts for setting up NGC components for two scenario
For Network Edge mode, the CNCA provides a kubectl plugin to configure the 5G Core network. Kubernetes adopted plugin concepts to extend its functionality. The `kube-cnca` plugin executes CNCA related functions within the Kubernetes ecosystem. The plugin performs remote callouts against NGC OAM and AF microservice on the controller itself.
-The `kube-cnca` plugin is installed automatically on the control plane during the installation phase of the [Converged Edge Experience Kits](https://github.com/otcshare/specs/blob/master/doc/getting-started/openness-cluster-setup.md)
+The `kube-cnca` plugin is installed automatically on the control plane during the installation phase of the [Converged Edge Experience Kits](https://github.com/open-ness/specs/blob/master/doc/getting-started/openness-cluster-setup.md).
#### Edge Node services operations with 5G Core (through OAM interface)
diff --git a/doc/applications/openness_appguide.md b/doc/applications/openness_appguide.md
index e472b57e..66b14e04 100644
--- a/doc/applications/openness_appguide.md
+++ b/doc/applications/openness_appguide.md
@@ -35,7 +35,7 @@ This guide is targeted at the Cloud Application developers who want to
- Develop applications for Edge computing that take advantage of all the capabilities exposed through Edge Compute APIs in OpenNESS.
- Port the existing applications and services in the public/private cloud to the edge unmodified.
-The document will describe how to develop applications from scratch using the template/example applications/services provided in the OpenNESS software release. All the OpenNESS Applications and services can be found in the [edgeapps repo](https://github.com/otcshare/edgeapps).
+This document describes how to develop applications from scratch using the template/example applications and services provided in the OpenNESS software release. All OpenNESS applications and services can be found in the [edgeapps repo](https://github.com/open-ness/edgeapps).
## OpenNESS Edge Node Applications
OpenNESS Applications can be onboarded and provisioned on the edge nodes only through the OpenNESS Controller. The first step in onboarding involves uploading the application image to the controller through the web interface. Both VM and Container images are supported.
diff --git a/doc/applications/openness_openvino.md b/doc/applications/openness_openvino.md
index a6fcfc43..6d2ee25e 100644
--- a/doc/applications/openness_openvino.md
+++ b/doc/applications/openness_openvino.md
@@ -100,14 +100,14 @@ For more information about CSR, refer to [OpenNESS CertSigner](../applications-o
Applications are deployed on the OpenNESS Edge Node as Docker containers. Three docker containers need to be built to get the OpenVINO pipeline working: `clientsim`, `producer`, and `consumer`. The `clientsim` Docker image must be built and executed on the client simulator machine while the `producer` and `consumer` containers/pods should be onboarded on the OpenNESS Edge Node.
-On the client simulator, clone the [OpenNESS edgeapps](https://github.com/otcshare/edgeapps) and execute the following command to build the `client-sim` container:
+On the client simulator, clone the [OpenNESS edgeapps](https://github.com/open-ness/edgeapps) and execute the following command to build the `client-sim` container:
```shell
cd /openvino/clientsim
./build-image.sh
```
-On the OpenNESS Edge Node, clone the [OpenNESS edgeapps](https://github.com/otcshare/edgeapps) and execute the following command to build the `producer` and `consumer` containers:
+On the OpenNESS Edge Node, clone the [OpenNESS edgeapps](https://github.com/open-ness/edgeapps) and execute the following command to build the `producer` and `consumer` containers:
```shell
cd /openvino/producer
./build-image.sh
diff --git a/doc/applications/openness_service_mesh.md b/doc/applications/openness_service_mesh.md
index 52bf355f..e96ef0d9 100644
--- a/doc/applications/openness_service_mesh.md
+++ b/doc/applications/openness_service_mesh.md
@@ -144,7 +144,7 @@ spec:
methods: ["GET", "POST", "DELETE"]
```
-In this `AuthorizationPolicy`, the Istio service mesh will allow "GET", "POST", and "DELETE" requests from any authenticated applications from the `default` namespace only to be passed to the service. For example, if using the [Video Analytics sample application](https://github.com/otcshare/edgeapps/tree/master/applications/vas-sample-app), the policy will allow requests from the sample application to be received by the service as it is deployed in the `default` namespace. However, if the application is deployed in a different namespace (for example, the `openness` namespace mentioned above in the output from the Kubernetes cluster), then the policy will deny access to the service as the request is coming from an application on a different namespace.
+In this `AuthorizationPolicy`, the Istio service mesh allows "GET", "POST", and "DELETE" requests to reach the service only from authenticated applications in the `default` namespace. For example, if using the [Video Analytics sample application](https://github.com/open-ness/edgeapps/tree/master/applications/vas-sample-app), the policy allows requests from the sample application to be received by the service because it is deployed in the `default` namespace. However, if the application is deployed in a different namespace (for example, the `openness` namespace mentioned above in the output from the Kubernetes cluster), the policy denies access to the service because the request comes from an application in a different namespace.
> **NOTE**: The above `AuthorizationPolicy` can be tailored so that the OpenNESS service mesh *selectively* authorizes particular applications to consume premium video analytics services such as those accelerated using HDDL or VCAC-A cards.
@@ -484,7 +484,7 @@ Users can change the namespace labeled with istio label using the parameter `ist
* in `flavors/media-analytics/all.yml` for deployment with media-analytics flavor
* in `inventory/default/group_vars/all/10-default.yml` for deployment with any flavor (and istio role enabled)
-> **NOTE**: The default OpenNESS network policy applies to pods in the `default` namespace and blocks all ingress traffic. Users must remove the default policy and apply custom network policy when deploying applications in the `default` namespace. Refer to the [Kubernetes NetworkPolicies](https://github.com/otcshare/specs/blob/master/doc/applications-onboard/network-edge-applications-onboarding.md#applying-kubernetes-network-policies) for an example policy allowing ingress traffic from `192.168.1.0/24` subnet on a specific port.
+> **NOTE**: The default OpenNESS network policy applies to pods in the `default` namespace and blocks all ingress traffic. Users must remove the default policy and apply custom network policy when deploying applications in the `default` namespace. Refer to the [Kubernetes NetworkPolicies](https://github.com/open-ness/specs/blob/master/doc/applications-onboard/network-edge-applications-onboarding.md#applying-kubernetes-network-policies) for an example policy allowing ingress traffic from `192.168.1.0/24` subnet on a specific port.
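The custom policy referenced in the note above can be sketched as a standard Kubernetes `NetworkPolicy`. The policy name, pod selector, and port number below are illustrative assumptions, not taken from the linked document; only the `192.168.1.0/24` subnet comes from the note itself:

```yaml
# Illustrative sketch; the name, selector, and port are assumptions.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-from-subnet   # hypothetical policy name
  namespace: default
spec:
  podSelector: {}                   # applies to all pods in the namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 192.168.1.0/24        # subnet named in the note above
    ports:
    - protocol: TCP
      port: 5000                    # illustrative port number
```

Applying such a policy (after removing the default one) re-opens ingress to application pods in the `default` namespace from that subnet only.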
Kiali console is accessible from a browser using `http://:30001` and credentials defined in Converged Edge Experience Kits:
diff --git a/doc/architecture.md b/doc/architecture.md
index dc6ce867..21405bc9 100644
--- a/doc/architecture.md
+++ b/doc/architecture.md
@@ -112,7 +112,7 @@ Edge Multi-Cluster Orchestration(EMCO), is a Geo-distributed application orchest
![](arch-images/openness-emco.png)
-Link: [EMCO](https://github.com/otcshare/specs/blob/master/doc/building-blocks/emco/openness-emco.md)
+Link: [EMCO](https://github.com/open-ness/specs/blob/master/doc/building-blocks/emco/openness-emco.md)
### Resource Management
Resource Management represents a methodology which involves identification of the hardware and software resources on the edge cluster, Configuration and allocation of the resources and continuous monitoring of the resources for any changes.
@@ -137,7 +137,7 @@ Resource Allocation involves configuration of the certain hardware resources lik
Resource monitoring involves tracking the usage of resources allocated to applications and services, as well as the remaining allocatable resources. OpenNESS provides collectors, node exporters using collectd, telegraf, and custom exporters as part of telemetry and monitoring of current resource usage. Resource monitoring support is provided for CPU, VPU, FPGA, and memory.
-Link: [Enhanced Platform Awareness: Documents covering Accelerators and Resource Management](https://github.com/otcshare/specs/tree/master/doc/building-blocks/enhanced-platform-awareness)
+Link: [Enhanced Platform Awareness: Documents covering Accelerators and Resource Management](https://github.com/open-ness/specs/tree/master/doc/building-blocks/enhanced-platform-awareness)
### Accelerators
@@ -177,13 +177,13 @@ OpenNESS supports the following CNIs:
- Kube-OVN CNI: integrates the OVN-based network virtualization with Kubernetes. It offers an advanced container network fabric for enterprises with the most functions and the easiest operation.
- Calico CNI/eBPF: supports applications with higher performance using eBPF and IPv4/IPv6 dual-stack
-Link: [Dataplane and CNI](https://github.com/otcshare/specs/tree/master/doc/building-blocks/dataplane)
+Link: [Dataplane and CNI](https://github.com/open-ness/specs/tree/master/doc/building-blocks/dataplane)
### Edge Aware Service Mesh
Istio is a feature-rich, cloud-native service mesh platform that provides a collection of key capabilities such as: Traffic Management, Security and Observability uniformly across a network of services. OpenNESS integrates natively with the Istio service mesh to help reduce the complexity of large scale edge applications, services, and network functions.
-Link: [Service Mesh](https://github.com/otcshare/specs/blob/master/doc/applications/openness_service_mesh.md)
+Link: [Service Mesh](https://github.com/open-ness/specs/blob/master/doc/applications/openness_service_mesh.md)
### Telemetry and Monitoring
@@ -293,7 +293,7 @@ The following is a subset of supported reference network functions:
- gNodeB or eNodeB: 5G or 4G base station implementation on Intel architecture based on Intel’s FlexRAN.
-Link: [Documents covering OpenNESS supported Reference Architectures](https://github.com/otcshare/specs/tree/master/doc/reference-architectures)
+Link: [Documents covering OpenNESS supported Reference Architectures](https://github.com/open-ness/specs/tree/master/doc/reference-architectures)
## OpenNESS Optimized Commercial Applications
OpenNESS Optimized Commercial applications are available at [Intel® Network Builders](https://networkbuilders.intel.com/commercial-applications)
diff --git a/doc/building-blocks/dataplane/openness-interapp.md b/doc/building-blocks/dataplane/openness-interapp.md
index fd41a407..4f16371a 100644
--- a/doc/building-blocks/dataplane/openness-interapp.md
+++ b/doc/building-blocks/dataplane/openness-interapp.md
@@ -15,7 +15,7 @@ Multi-core edge cloud platforms typically host multiple containers or virtual ma
## InterApp Communication support in OpenNESS Network Edge
-InterApp communication on the OpenNESS Network Edge is supported using Open Virtual Network for Open vSwitch [OVN/OVS](https://github.com/otcshare/specs/blob/master/doc/building-blocks/dataplane/openness-ovn.md) as the infrastructure. OVN/OVS in the network edge is supported through the Kubernetes kube-OVN Container Network Interface (CNI).
+InterApp communication on the OpenNESS Network Edge is supported using Open Virtual Network for Open vSwitch [OVN/OVS](https://github.com/open-ness/specs/blob/master/doc/building-blocks/dataplane/openness-ovn.md) as the infrastructure. OVN/OVS in the network edge is supported through the Kubernetes kube-OVN Container Network Interface (CNI).
>**NOTE**: InterApp communication also works with the Calico CNI. Calico has been supported as the default CNI in OpenNESS since the 21.03 release.
diff --git a/doc/building-blocks/emco/openness-emco.md b/doc/building-blocks/emco/openness-emco.md
index f248b11f..6ae27580 100644
--- a/doc/building-blocks/emco/openness-emco.md
+++ b/doc/building-blocks/emco/openness-emco.md
@@ -133,7 +133,7 @@ The Distributed Application Scheduler supports operations on a deployment intent
- status: (may be invoked at any step) provides information on the status of the deployment intent group.
- terminate: terminates the application resources of an instantiated application from all of the clusters to which it was deployed. In some cases, if a remote cluster is intermittently unreachable, the instantiate operation may still retry the instantiate operation for that cluster. The terminate operation will cause the instantiate operation to complete (i.e. fail), before the termination operation is performed.
- stop: In some cases, if the remote cluster is intermittently unreachable, the Resource Synchronizer will continue retrying an instantiate or terminate operation. The stop operation can be used to force the retry operation to stop, and the instantiate or terminate operation will complete (with a failed status). In the case of terminate, this allows the deployment intent group resource to be deleted via the API, since deletion is prevented until a deployment intent group resource has reached a completed terminate operation status.
-Refer to [EMCO Resource Lifecycle Operations](https://github.com/otcshare/EMCO/tree/main/docs/user/Resource_Lifecycle.md) for more details.
+Refer to [EMCO Resource Lifecycle Operations](https://github.com/open-ness/EMCO/tree/main/docs/user/Resource_Lifecycle.md) for more details.
#### Network Configuration Management
The network configuration management (NCM) microservice:
@@ -237,7 +237,7 @@ _Figure 7 - Instantiate a Deployment Intent Group_
In this initial release of EMCO, a built-in generic placement controller is provided in the `orchestrator`. As described above, the three provided action controllers are the OVN Action, Traffic and Generic Action controllers.
#### Status Monitoring and Queries in EMCO
-When a resource like a Deployment Intent Group is instantiated, status information about both the deployment and the deployed resources in the cluster are collected and made available for query by the API. The following diagram illustrates the key components involved. For more information about status queries see [EMCO Resource Lifecycle Operations](https://github.com/otcshare/EMCO/tree/main/docs/user/Resource_Lifecycle.md).
+When a resource like a Deployment Intent Group is instantiated, status information about both the deployment and the deployed resources in the cluster are collected and made available for query by the API. The following diagram illustrates the key components involved. For more information about status queries see [EMCO Resource Lifecycle Operations](https://github.com/open-ness/EMCO/tree/main/docs/user/Resource_Lifecycle.md).
![OpenNESS EMCO](openness-emco-images/emco-status-monitoring.png)
@@ -262,7 +262,7 @@ _Figure 8 - Status Monitoring and Query Sequence_
### EMCO API
-For user interaction, EMCO provides [RESTful API](https://github.com/otcshare/EMCO/blob/main/docs/emco_apis.yaml). Apart from that, EMCO also provides CLI. For the detailed usage, refer to [EMCO CLI](https://github.com/otcshare/EMCO/tree/main/src/tools/emcoctl)
+For user interaction, EMCO provides a [RESTful API](https://github.com/open-ness/EMCO/blob/main/docs/emco_apis.yaml). EMCO also provides a CLI; for detailed usage, refer to [EMCO CLI](https://github.com/open-ness/EMCO/tree/main/src/tools/emcoctl)
> **NOTE**: The EMCO RESTful API is the foundation for the other interaction facilities like the EMCO CLI, EMCO GUI (available in the future) and other orchestrators.
### EMCO Authentication and Authorization
@@ -287,7 +287,7 @@ The following figure shows the authentication flow with EMCO, Istio and Authserv
_Figure 10 - EMCO Authenication with external OATH2 Server_
-Detailed steps for configuring EMCO with Istio can be found in [EMCO Integrity and Access Management](https://github.com/otcshare/EMCO/tree/main/docs/user/Emco_Integrity_Access_Management.md) document.
+Detailed steps for configuring EMCO with Istio can be found in [EMCO Integrity and Access Management](https://github.com/open-ness/EMCO/tree/main/docs/user/Emco_Integrity_Access_Management.md) document.
Steps for EMCO Authentication and Authorization Setup:
- Install and Configure Keycloak Server to be used in the setup. This server runs outside the EMCO cluster
@@ -301,7 +301,7 @@ Steps for EMCO Authentication and Authorization Setup:
- Apply Authentication and Authorization Policies
### EMCO Installation With OpenNESS Flavor
-EMCO supports [multiple deployment options](https://github.com/otcshare/EMCO/tree/main/deployments). [Converged Edge Experience Kits](../../getting-started/converged-edge-experience-kits.md) offers the `central_orchestrator` flavor to automate EMCO build and deployment as mentioned below.
+EMCO supports [multiple deployment options](https://github.com/open-ness/EMCO/tree/main/deployments). [Converged Edge Experience Kits](../../getting-started/converged-edge-experience-kits.md) offers the `central_orchestrator` flavor to automate EMCO build and deployment as mentioned below.
- The first step is to prepare one server environment which needs to fulfill the [preconditions](../../getting-started/openness-cluster-setup.md#preconditions).
- Place the EMCO server hostname in `controller_group/hosts/ctrl.openness.org:` dictionary in `inventory.yml` file of converged-edge-experience-kit.
- Update the `inventory.yaml` file by setting the deployment flavor as `central_orchestrator`
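The two inventory edits above might look like the following sketch. The hostname, IP address, and key layout are assumptions about the converged-edge-experience-kits inventory format, and the values are placeholders:

```yaml
# inventory.yml (sketch; values are placeholders, layout is an assumption)
all:
  vars:
    flavor: central_orchestrator          # deployment flavor for EMCO
controller_group:
  hosts:
    ctrl.openness.org:
      ansible_host: 10.0.0.10             # EMCO server IP (placeholder)
      ansible_user: openness
edgenode_group:
  hosts: {}                               # no edge nodes for an EMCO-only deployment
```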
@@ -348,7 +348,7 @@ emco ovnaction-5d8d4447f9-nn7l6 1/1 Running 0 14m
emco rsync-99b85b4x88-ashmc 1/1 Running 0 14m
```
-Besides that, OpenNESS EMCO also provides Azure templates and supports deployment automation for EMCO cluster on Azure public cloud. More details refer to [OpenNESS Development Kit for Microsoft Azure](https://github.com/otcshare/ido-specs/blob/master/doc/devkits/openness-azure-devkit.md).
+Besides that, OpenNESS EMCO also provides Azure templates and supports deployment automation for an EMCO cluster on the Azure public cloud. For more details, refer to [OpenNESS Development Kit for Microsoft Azure](https://github.com/open-ness/ido-specs/blob/master/doc/devkits/openness-azure-devkit.md).
## EMCO Example: SmartCity Deployment
- The [SmartCity application](https://github.com/OpenVisualCloud/Smart-City-Sample) is a sample application that is built on top of the OpenVINO™ and Open Visual Cloud software stacks for media processing and analytics. The composite application is composed of two parts: EdgeApp + WebApp (cloud application for additional post-processing such as calculating statistics and display/visualization)
@@ -375,9 +375,9 @@ Follow the guidance as [EMCO Installation With OpenNESS Flavor](#emco-installati
### Cluster Setup
The step includes:
- Prepare edge and cloud clusters kubeconfig files, SmartCity helm charts and relevant artifacts.
-- Register clusters provider by [EMCO CLI](https://github.com/otcshare/EMCO/tree/main/src/tools/emcoctl).
-- Register provider's clusters by [EMCO CLI](https://github.com/otcshare/EMCO/tree/main/src/tools/emcoctl).
-- Register EMCO controllers and resource synchroizer by [EMCO CLI](https://github.com/otcshare/EMCO/tree/main/src/tools/emcoctl).
+- Register the cluster provider using the [EMCO CLI](https://github.com/open-ness/EMCO/tree/main/src/tools/emcoctl).
+- Register the provider's clusters using the [EMCO CLI](https://github.com/open-ness/EMCO/tree/main/src/tools/emcoctl).
+- Register the EMCO controllers and resource synchronizer using the [EMCO CLI](https://github.com/open-ness/EMCO/tree/main/src/tools/emcoctl).
1. On the edge and cloud cluster, run the following command to make Docker log in to the Harbor registry deployed on the EMCO server, so that the clusters can pull SmartCity images from the Harbor:
```shell
@@ -393,7 +393,7 @@ The step includes:
> **NOTE**: should be `:30003`.
-2. On the EMCO server, download the [scripts,profiles and configmap JSON files](https://github.com/otcshare/edgeapps/tree/master/applications/smart-city-app/emco).
+2. On the EMCO server, download the [scripts, profiles, and configmap JSON files](https://github.com/open-ness/edgeapps/tree/master/applications/smart-city-app/emco).
3. Prepare the artifacts: the clusters' kubeconfig files, the SmartCity Helm charts, and other relevant artifacts.
Run the command for the environment setup; on success it returns as shown below:
@@ -470,7 +470,7 @@ The setup includes:
URL: projects/project_smtc/composite-apps/composite_smtc/v1/deployment-intent-groups/smtc-deployment-intent-group/instantiate Response Code: 202 Response:
```
- > **NOTE**: EMCO supports generic K8S resource configuration including configmap, secret,etc. The example offers the usage about [configmap configuration](https://github.com/otcshare/edgeapps/blob/master/applications/smart-city-app/emco/cli-scripts/04_apps_template.yaml) to the clusters.
+ > **NOTE**: EMCO supports generic Kubernetes resource configuration, including ConfigMaps, Secrets, etc. The example shows how a [configmap configuration](https://github.com/open-ness/edgeapps/blob/master/applications/smart-city-app/emco/cli-scripts/04_apps_template.yaml) is applied to the clusters.
> **NOTE**: The `04_apply.sh` script invokes EMCO CLI tool - `emcoctl` and applies resource template file - `04_apps_template.yaml` which contains the application related resources to create in EMCO, for example deployment-intent, application helm chart entries, override profiles, configmap...etc. The placement intent for the use case is cluster label name and provider name.
diff --git a/doc/building-blocks/enhanced-platform-awareness/openness-acc100.md b/doc/building-blocks/enhanced-platform-awareness/openness-acc100.md
index 19fc4689..5506fce4 100644
--- a/doc/building-blocks/enhanced-platform-awareness/openness-acc100.md
+++ b/doc/building-blocks/enhanced-platform-awareness/openness-acc100.md
@@ -179,7 +179,7 @@ kubectl get node -o json | jq '.status.allocatable'
```
To request the device as a resource in the pod, add the request for the resource into the pod specification file by specifying its name and the amount of resources required. If the resource is not available or the amount of resources requested is greater than the number of resources available, the pod status will be “Pending” until the resource is available.
-**NOTE**: The name of the resource must match the name specified in the configMap for the K8s devices plugin [configMap.yml](https://github.com/otcshare/converged-edge-experience-kits/blob/master/roles/kubernetes/cni/sriov/controlplane/templates/configMap.yml.j2).
+**NOTE**: The name of the resource must match the name specified in the configMap for the K8s devices plugin [configMap.yml](https://github.com/open-ness/converged-edge-experience-kits/blob/master/roles/kubernetes/cni/sriov/controlplane/templates/configMap.yml.j2).
A sample pod requesting the ACC100 (FEC) VF may look like this:
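A minimal sketch of such a pod spec follows; the resource name `intel.com/intel_fec_acc100` and the container image are assumptions for illustration, and the resource name must match whatever is configured in the device plugin configMap, as noted above:

```yaml
# Sketch only; the resource name and image are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: test-acc100
spec:
  containers:
  - name: test
    image: centos:7                       # illustrative image
    command: ["sleep", "infinity"]
    resources:
      requests:
        intel.com/intel_fec_acc100: "1"   # hypothetical resource name from configMap
      limits:
        intel.com/intel_fec_acc100: "1"
```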
diff --git a/doc/building-blocks/enhanced-platform-awareness/openness-dedicated-core-cmk-deprecated.md b/doc/building-blocks/enhanced-platform-awareness/openness-dedicated-core-cmk-deprecated.md
index bfa98552..cf12441a 100644
--- a/doc/building-blocks/enhanced-platform-awareness/openness-dedicated-core-cmk-deprecated.md
+++ b/doc/building-blocks/enhanced-platform-awareness/openness-dedicated-core-cmk-deprecated.md
@@ -79,7 +79,7 @@ CMK can be deployed using a [Helm chart](https://helm.sh/). The CMK Helm chart u
The environment setup can be validated using steps from the [CMK operator manual](https://github.com/intel/CPU-Manager-for-Kubernetes/blob/master/docs/operator.md#validating-the-environment).
**Note:**
-Up to version 20.12 choosing flavor was optional. Since version 21.03 and moving forward this parameter is no longer optional. To learn more about [flavors go to this page](https://github.com/otcshare/specs/blob/master/doc/flavors.md).
+Up to version 20.12, choosing a flavor was optional. From version 21.03 onward, this parameter is mandatory. To learn more, see the [flavors documentation](https://github.com/open-ness/specs/blob/master/doc/flavors.md).
### Usage
diff --git a/doc/building-blocks/enhanced-platform-awareness/openness-fpga.md b/doc/building-blocks/enhanced-platform-awareness/openness-fpga.md
index a0696a5f..3e071e93 100644
--- a/doc/building-blocks/enhanced-platform-awareness/openness-fpga.md
+++ b/doc/building-blocks/enhanced-platform-awareness/openness-fpga.md
@@ -263,7 +263,7 @@ kubectl get node -o json | jq '.status.allocatable'
```
To request the device as a resource in the pod, add the request for the resource into the pod specification file by specifying its name and amount of resources required. If the resource is not available or the amount of resources requested is greater than the number of resources available, the pod status will be “Pending” until the resource is available.
-**NOTE**: The name of the resource must match the name specified in the configMap for the K8s devices plugin [configMap.yml](https://github.com/otcshare/converged-edge-experience-kits/blob/master/roles/kubernetes/cni/sriov/controlplane/templates/configMap.yml.j2).
+**NOTE**: The name of the resource must match the name specified in the configMap for the K8s devices plugin [configMap.yml](https://github.com/open-ness/converged-edge-experience-kits/blob/master/roles/kubernetes/cni/sriov/controlplane/templates/configMap.yml.j2).
A sample pod requesting the FPGA (FEC) VF may look like this:
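As a rough illustration, such a request is expressed in the pod's `resources` section. The resource name `intel.com/intel_fec_5g` and the image below are assumptions and must be replaced with the name actually advertised via the device plugin configMap:

```yaml
# Sketch only; the resource name and image are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: test-fpga-fec
spec:
  containers:
  - name: test
    image: centos:7                   # illustrative image
    command: ["sleep", "infinity"]
    resources:
      requests:
        intel.com/intel_fec_5g: "1"   # hypothetical resource name from configMap
      limits:
        intel.com/intel_fec_5g: "1"
```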
diff --git a/doc/building-blocks/enhanced-platform-awareness/openness-sriov-multiple-interfaces.md b/doc/building-blocks/enhanced-platform-awareness/openness-sriov-multiple-interfaces.md
index 3bc059fb..01454063 100644
--- a/doc/building-blocks/enhanced-platform-awareness/openness-sriov-multiple-interfaces.md
+++ b/doc/building-blocks/enhanced-platform-awareness/openness-sriov-multiple-interfaces.md
@@ -61,7 +61,7 @@ kubernetes_cnis:
### Multus usage
-Multus CNI is deployed in OpenNESS using a Helm chart. The Helm chart is available in [converged-edge-experience-kits](https://github.com/otcshare/converged-edge-experience-kits/tree/master/roles/kubernetes/cni/multus/controlplane/files/multus-cni). The Multus image is pulled by Ansible\* Multus role and pushed to a local Docker\* registry on Edge Controller.
+Multus CNI is deployed in OpenNESS using a Helm chart. The Helm chart is available in [converged-edge-experience-kits](https://github.com/open-ness/converged-edge-experience-kits/tree/master/roles/kubernetes/cni/multus/controlplane/files/multus-cni). The Multus image is pulled by Ansible\* Multus role and pushed to a local Docker\* registry on Edge Controller.
[Custom resource definition](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#custom-resources) (CRD) is used to define an additional network that can be used by Multus.
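Such an additional network is declared with a `NetworkAttachmentDefinition` custom resource. The macvlan configuration below is an illustrative sketch; the network name, master interface, and subnet are assumptions:

```yaml
# Sketch; the name, master interface, and subnet are assumptions.
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-net                 # hypothetical network name
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "macvlan",
    "master": "eth0",
    "mode": "bridge",
    "ipam": {
      "type": "host-local",
      "subnet": "192.168.2.0/24"
    }
  }'
```

A pod then attaches to this network by adding the annotation `k8s.v1.cni.cncf.io/networks: macvlan-net` to its metadata.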
@@ -130,7 +130,7 @@ kubernetes_cnis:
- sriov
```
-SR-IOV CNI and device plugin are deployed in OpenNESS using Helm chart. The Helm chart is available in [converged-edge-experience-kits](https://github.com/otcshare/converged-edge-experience-kits/tree/master/roles/kubernetes/cni/sriov/controlplane/files/sriov). Additional chart templates for SR-IOV device plugin can be downloaded from [container-experience-kits repository](https://github.com/intel/container-experience-kits/tree/master/roles/sriov_dp_install/charts/sriov-net-dp/templates). SR-IOV images are built from source by the Ansible SR-IOV role and pushed to a local Harbor registry on Edge Controller.
+SR-IOV CNI and device plugin are deployed in OpenNESS using Helm chart. The Helm chart is available in [converged-edge-experience-kits](https://github.com/open-ness/converged-edge-experience-kits/tree/master/roles/kubernetes/cni/sriov/controlplane/files/sriov). Additional chart templates for SR-IOV device plugin can be downloaded from [container-experience-kits repository](https://github.com/intel/container-experience-kits/tree/master/roles/sriov_dp_install/charts/sriov-net-dp/templates). SR-IOV images are built from source by the Ansible SR-IOV role and pushed to a local Harbor registry on Edge Controller.
#### Edge Node SR-IOV interfaces configuration
diff --git a/doc/building-blocks/enhanced-platform-awareness/openness-telemetry.md b/doc/building-blocks/enhanced-platform-awareness/openness-telemetry.md
index 94b42086..80368545 100644
--- a/doc/building-blocks/enhanced-platform-awareness/openness-telemetry.md
+++ b/doc/building-blocks/enhanced-platform-awareness/openness-telemetry.md
@@ -149,7 +149,7 @@ Node Exporter is a Prometheus exporter that exposes hardware and OS metrics of *
#### VCAC-A
-Node Exporter also enables exposure of telemetry from Intel's VCAC-A card to Prometheus. The telemetry from the VCAC-A card is saved into a text file; this text file is used as an input to the Node Exporter. More information on VCAC-A usage in OpenNESS is available [here](https://github.com/otcshare/specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-vcac-a.md).
+Node Exporter also enables exposure of telemetry from Intel's VCAC-A card to Prometheus. The telemetry from the VCAC-A card is saved into a text file; this text file is used as an input to the Node Exporter. More information on VCAC-A usage in OpenNESS is available [here](https://github.com/open-ness/specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-vcac-a.md).
### cAdvisor
@@ -189,7 +189,7 @@ The various CEEK flavors are enabled for CollectD deployment as follows:
1. Select the flavor for the deployment of CollectD from the CEEK during OpenNESS deployment; the flavor is to be selected with `telemetry_flavor: `.
- In the event of using the `flexran` profile, `OPAE_SDK_1.3.7-5_el7.zip` needs to be available in `./converged-edge-experience-kits/opae_fpga` directory; for details about the packages, see [FPGA support in OpenNESS](https://github.com/otcshare/specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-fpga.md#edge-controller)
+ In the event of using the `flexran` profile, `OPAE_SDK_1.3.7-5_el7.zip` needs to be available in `./converged-edge-experience-kits/opae_fpga` directory; for details about the packages, see [FPGA support in OpenNESS](https://github.com/open-ness/specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-fpga.md#edge-controller)
2. To access metrics available from CollectD, connect to the Prometheus [dashboard](#prometheus).
3. Look up an example the CollectD metric by specifying the metric name (ie. `collectd_cpufreq`) and pressing `execute` under the `graph` tab.
![CollectD Metric](telemetry-images/collectd_metric.png)
@@ -266,7 +266,7 @@ Processor Counter Monitor (PCM) is an application programming interface (API) an
- Prometheus: responsible for collecting and providing metrics.
- Prometheus Adapter: exposes the metrics from Prometheus to a K8s API and is configured to provide metrics from Node Exporter and CollectD collectors.
-TAS is enabled by default in CEEK, a sample scheduling policy for TAS is provided for [VCAC-A node deployment](https://github.com/otcshare/specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-vcac-a.md#telemetry-support).
+TAS is enabled by default in CEEK; a sample scheduling policy for TAS is provided for [VCAC-A node deployment](https://github.com/open-ness/specs/blob/master/doc/building-blocks/enhanced-platform-awareness/openness-vcac-a.md#telemetry-support).
#### Usage
diff --git a/doc/building-blocks/enhanced-platform-awareness/openness-vcac-a.md b/doc/building-blocks/enhanced-platform-awareness/openness-vcac-a.md
index a4f55dc7..9b4be173 100644
--- a/doc/building-blocks/enhanced-platform-awareness/openness-vcac-a.md
+++ b/doc/building-blocks/enhanced-platform-awareness/openness-vcac-a.md
@@ -32,7 +32,7 @@ The VCAC-A installation involves a [two-stage build](https://github.com/OpenVisu
The CEEK automates the overall build and installation process of the VCAC-A card by joining it as a standalone logical node to the OpenNESS cluster. The CEEK supports force-building the VCAC-A system image (VCAD) via the flag `force_build_enable: true` (the default value); the customer can also disable the flag to reuse the last system image built. When successful, the OpenNESS controller is capable of selectively scheduling workloads on the "VCA node" for proximity to the hardware acceleration.
-When onboarding applications such as [Open Visual Cloud Smart City Sample](https://github.com/otcshare/edgeapps/tree/master/applications/smart-city-app) with the existence of VCAC-A, the OpenNESS controller schedules all the application pods onto the edge node except the *video analytics* processing that is scheduled on the VCA node as shown in the figure below.
+When onboarding applications such as the [Open Visual Cloud Smart City Sample](https://github.com/open-ness/edgeapps/tree/master/applications/smart-city-app) in the presence of a VCAC-A card, the OpenNESS controller schedules all the application pods onto the edge node except the *video analytics* processing, which is scheduled on the VCA node as shown in the figure below.
![Smart City Setup](vcaca-images/smart-city-app-vcac-a.png)
diff --git a/doc/devkits/openness-azure-devkit.md b/doc/devkits/openness-azure-devkit.md
index b2f0786c..213667c0 100644
--- a/doc/devkits/openness-azure-devkit.md
+++ b/doc/devkits/openness-azure-devkit.md
@@ -14,4 +14,4 @@ for automated deployment, and supports deployment using Porter. It enables cloud
## Getting Started
The following document contains steps for quick deployment on Azure:
-* [converged-edge-experience-kits/cloud/README.md: Deployment and setup guide](https://github.com/otcshare/converged-edge-experience-kits/blob/master/cloud/README.md)
+* [converged-edge-experience-kits/cloud/README.md: Deployment and setup guide](https://github.com/open-ness/converged-edge-experience-kits/blob/master/cloud/README.md)
diff --git a/doc/getting-started/openness-cluster-setup.md b/doc/getting-started/openness-cluster-setup.md
index a5db2eda..20d29dc3 100644
--- a/doc/getting-started/openness-cluster-setup.md
+++ b/doc/getting-started/openness-cluster-setup.md
@@ -34,7 +34,7 @@ The following set of actions must be completed to set up the Open Network Edge S
1. Fulfill the [Preconditions](#preconditions).
2. Become familiar with [supported features](#supported-epa-features) and enable them if desired.
-3. Clone [Converged Edge Experience Kits](https://github.com/otcshare/converged-edge-experience-kits)
+3. Clone [Converged Edge Experience Kits](https://github.com/open-ness/converged-edge-experience-kits)
4. Install deployment helper script pre-requisites (first time only)
```shell
diff --git a/doc/orchestration/openness-helm.md b/doc/orchestration/openness-helm.md
index ac0d6aaa..fff0e018 100644
--- a/doc/orchestration/openness-helm.md
+++ b/doc/orchestration/openness-helm.md
@@ -12,7 +12,7 @@ Copyright (c) 2020 Intel Corporation
- [References](#references)
## Introduction
-Helm is a package manager for Kubernetes\*. It allows developers and operators to easily package, configure, and deploy applications and services onto Kubernetes clusters. For details refer to the [Helm Website](https://helm.sh). With OpenNESS, Helm is used to extend the [Converged Edge Experience Kits](https://github.com/otcshare/converged-edge-experience-kits) Ansible\* playbooks to deploy Kubernetes packages. Helm adds considerable flexibility. It enables users to upgrade an existing installation without requiring a re-install. It provides the option to selectively deploy individual microservices if a full installation of OpenNESS is not needed. And it provides a standard process to deploy different applications or network functions. This document aims to familiarize the user with Helm and provide instructions on how to use the specific Helm charts available for OpenNESS.
+Helm is a package manager for Kubernetes\*. It allows developers and operators to easily package, configure, and deploy applications and services onto Kubernetes clusters. For details refer to the [Helm Website](https://helm.sh). With OpenNESS, Helm is used to extend the [Converged Edge Experience Kits](https://github.com/open-ness/converged-edge-experience-kits) Ansible\* playbooks to deploy Kubernetes packages. Helm adds considerable flexibility. It enables users to upgrade an existing installation without requiring a re-install. It provides the option to selectively deploy individual microservices if a full installation of OpenNESS is not needed. And it provides a standard process to deploy different applications or network functions. This document aims to familiarize the user with Helm and provide instructions on how to use the specific Helm charts available for OpenNESS.
## Architecture
The below figure shows the architecture for the OpenNESS Helm in this document.
@@ -22,7 +22,7 @@ _Figure - Helm Architecture in OpenNESS_
## Helm Installation
-Helm 3 is used for OpenNESS. The installation is automatically conducted by the [Converged Edge Experience Kits](https://github.com/otcshare/converged-edge-experience-kits) Ansible playbooks as below:
+Helm 3 is used for OpenNESS. The installation is automatically conducted by the [Converged Edge Experience Kits](https://github.com/open-ness/converged-edge-experience-kits) Ansible playbooks as below:
```yaml
- role: kubernetes/helm
```
@@ -39,19 +39,19 @@ OpenNESS provides the following helm charts:
- CNI plugins including Multus\* and SRIOV CNI
- Video analytics service
- 5G control plane pods including AF, NEF, OAM, and CNTF
-> **NOTE**: NFD, CMK, Prometheus, NodeExporter, and Grafana leverage existing third-party helm charts: [Container Experience Kits](https://github.com/intel/container-experience-kits) and [Helm GitHub\* Repo](https://github.com/helm/charts). For other helm charts, [Converged Edge Experience Kits](https://github.com/otcshare/converged-edge-experience-kits) Ansible playbooks perform automatic charts generation and deployment.
+> **NOTE**: NFD, CMK, Prometheus, NodeExporter, and Grafana leverage existing third-party helm charts: [Container Experience Kits](https://github.com/intel/container-experience-kits) and [Helm GitHub\* Repo](https://github.com/helm/charts). For other helm charts, [Converged Edge Experience Kits](https://github.com/open-ness/converged-edge-experience-kits) Ansible playbooks perform automatic charts generation and deployment.
- Sample applications, network functions, and services that can be deployed and verified on the OpenNESS platform:
- Applications
- - [CDN Caching Application Helm Charts](https://github.com/otcshare/edgeapps/tree/master/applications/cdn-caching)
+ - [CDN Caching Application Helm Charts](https://github.com/open-ness/edgeapps/tree/master/applications/cdn-caching)
- [CDN Transcode Application Helm Charts](https://github.com/OpenVisualCloud/CDN-Transcode-Sample/tree/master/deployment/kubernetes/helm) (Leverage OpenVisualCloud)
- [Smart City Application Helm Charts](https://github.com/OpenVisualCloud/Smart-City-Sample/tree/master/deployment/kubernetes/helm) (Leverage OpenVisualCloud)
- - [Telemetry Sample Application Helm Charts](https://github.com/otcshare/edgeapps/tree/master/applications/telemetry-sample-app)
- - [EIS Sample Application Helm Charts](https://github.com/otcshare/edgeapps/tree/master/applications/eis-experience-kit)
+ - [Telemetry Sample Application Helm Charts](https://github.com/open-ness/edgeapps/tree/master/applications/telemetry-sample-app)
+ - [EIS Sample Application Helm Charts](https://github.com/open-ness/edgeapps/tree/master/applications/eis-experience-kit)
- Network Functions
- - [FlexRAN Helm Charts](https://github.com/otcshare/edgeapps/tree/master/network-functions/ran/charts/du-dev)
- - [xRAN Helm Charts](https://github.com/otcshare/edgeapps/tree/master/network-functions/xran/helmcharts/xranchart)
- - [UPF Helm Charts](https://github.com/otcshare/edgeapps/tree/master/network-functions/core-network/charts/upf)
+ - [FlexRAN Helm Charts](https://github.com/open-ness/edgeapps/tree/master/network-functions/ran/charts/du-dev)
+ - [xRAN Helm Charts](https://github.com/open-ness/edgeapps/tree/master/network-functions/xran/helmcharts/xranchart)
+ - [UPF Helm Charts](https://github.com/open-ness/edgeapps/tree/master/network-functions/core-network/charts/upf)
The EPA, Telemetry, and k8s plugins helm chart files will be saved in a specific directory on the OpenNESS controller. To modify the directory, change the following variable `ne_helm_charts_default_dir` in the `inventory/default/group_vars/all/10-default.yml` file:
```yaml
diff --git a/doc/reference-architectures/core-network/openness_upf.md b/doc/reference-architectures/core-network/openness_upf.md
index 3877a5eb..d5bd7076 100644
--- a/doc/reference-architectures/core-network/openness_upf.md
+++ b/doc/reference-architectures/core-network/openness_upf.md
@@ -45,21 +45,21 @@ As part of the end-to-end integration of the Edge cloud deployment using OpenNES
# Purpose
-This document provides the required steps to deploy UPF on the OpenNESS platform. 4G/(Long Term Evolution network)LTE or 5G UPF can run as network functions on the Edge node in a virtualized environment. The reference [Dockerfile](https://github.com/otcshare/edgeapps/blob/master/network-functions/core-network/5G/UPF/Dockerfile) and [5g-upf.yaml](https://github.com/otcshare/edgeapps/blob/master/network-functions/core-network/5G/UPF/5g-upf.yaml) provide details on how to deploy UPF as a Cloud-native network functions (CNF) in a K8s pod on OpenNESS edge node using OpenNESS Enhanced Platform Awareness (EPA) features.
+This document provides the required steps to deploy UPF on the OpenNESS platform. A 4G/LTE (Long Term Evolution) or 5G UPF can run as a network function on the edge node in a virtualized environment. The reference [Dockerfile](https://github.com/open-ness/edgeapps/blob/master/network-functions/core-network/5G/UPF/Dockerfile) and [5g-upf.yaml](https://github.com/open-ness/edgeapps/blob/master/network-functions/core-network/5G/UPF/5g-upf.yaml) provide details on how to deploy UPF as a cloud-native network function (CNF) in a K8s pod on the OpenNESS edge node using OpenNESS Enhanced Platform Awareness (EPA) features.
These scripts are validated through a reference UPF solution (implementation is based on Vector Packet Processing (VPP)) that is not part of the OpenNESS release.
>**NOTE**: The AF and NEF Dockerfile and pod specification can be found here:
>
-> - AF - [dockerfile](https://github.com/otcshare/epcforedge/blob/master/ngc/build/networkedge/af/Dockerfile). [Pod Specification](https://github.com/otcshare/epcforedge/blob/master/ngc/scripts/networkedge/ngctest/podAF.yaml)
-> - NEF - [dockerfile](https://github.com/otcshare/epcforedge/blob/master/ngc/build/networkedge/nef/Dockerfile). [Pod Specification](https://github.com/otcshare/epcforedge/blob/master/ngc/scripts/networkedge/ngctest/podNEF.yaml)
-> - OAM - [dockerfile](https://github.com/otcshare/epcforedge/blob/master/ngc/build/networkedge/oam/Dockerfile). [Pod Specification](https://github.com/otcshare/epcforedge/blob/master/ngc/scripts/networkedge/ngctest/podOAM.yaml)
+> - AF - [dockerfile](https://github.com/open-ness/epcforedge/blob/master/ngc/build/networkedge/af/Dockerfile). [Pod Specification](https://github.com/open-ness/epcforedge/blob/master/ngc/scripts/networkedge/ngctest/podAF.yaml)
+> - NEF - [dockerfile](https://github.com/open-ness/epcforedge/blob/master/ngc/build/networkedge/nef/Dockerfile). [Pod Specification](https://github.com/open-ness/epcforedge/blob/master/ngc/scripts/networkedge/ngctest/podNEF.yaml)
+> - OAM - [dockerfile](https://github.com/open-ness/epcforedge/blob/master/ngc/build/networkedge/oam/Dockerfile). [Pod Specification](https://github.com/open-ness/epcforedge/blob/master/ngc/scripts/networkedge/ngctest/podOAM.yaml)
# How to build
1. To keep the build and deploy process straightforward, the Docker\* build and image are stored on the Edge node.
-2. Copy the upf binary package to the Docker build folder. Reference Docker files and the Helm chart for deploying the UPF is available at [edgeapps_upf_docker](https://github.com/otcshare/edgeapps/tree/master/network-functions/core-network/5G/UPF) and [edgeapps_upf_helmchart](https://github.com/otcshare/edgeapps/tree/master/network-functions/core-network/charts/upf) respectively
+2. Copy the UPF binary package to the Docker build folder. The reference Docker files and the Helm chart for deploying the UPF are available at [edgeapps_upf_docker](https://github.com/open-ness/edgeapps/tree/master/network-functions/core-network/5G/UPF) and [edgeapps_upf_helmchart](https://github.com/open-ness/edgeapps/tree/master/network-functions/core-network/charts/upf), respectively.
```bash
ne-node# cp -rf <5g-upf-binary-package> edgeapps/network-functions/core-network/5G/UPF/upf
@@ -77,7 +77,7 @@ These scripts are validated through a reference UPF solution (implementation is
# UPF configuration
-To keep the bring up setup simple, the UPF configuration can be provided through the Helm charts. A reference Helm chart is available at [edgeapps_upf_helmchart](https://github.com/otcshare/edgeapps/tree/master/network-functions/core-network/charts/upf)
+To keep the bring-up setup simple, the UPF configuration can be provided through the Helm charts. A reference Helm chart is available at [edgeapps_upf_helmchart](https://github.com/open-ness/edgeapps/tree/master/network-functions/core-network/charts/upf).
Below is a list of minimal configuration parameters for VPP-based applications such as UPF.
@@ -280,7 +280,7 @@ Below is a list of minimal configuration parameters for VPP-based applications s
## Deploy UPF pod from OpenNESS controller
-In this reference validation, UPF will be deployed using Helm charts. The reference Helm chart for UPF is available at [edgeapps_upf_helmchart](https://github.com/otcshare/edgeapps/tree/master/network-functions/core-network/charts/upf)
+In this reference validation, UPF will be deployed using Helm charts. The reference Helm chart for UPF is available at [edgeapps_upf_helmchart](https://github.com/open-ness/edgeapps/tree/master/network-functions/core-network/charts/upf)
helm install \ \ \
diff --git a/openness_releasenotes.md b/openness_releasenotes.md
index 96c66ef2..bc66194b 100644
--- a/openness_releasenotes.md
+++ b/openness_releasenotes.md
@@ -271,7 +271,7 @@ This document provides high-level system features, issues, and limitations infor
- Experience Kit now supports multiple detection videos – safety equipment detection and PCB defect detection – and also supports external video streams.
## OpenNESS - 20.12
-- Early access release of Edge Multi-Cluster Orchestration(EMCO), a Geo-distributed application orchestrator for Kubernetes. This release supports EMCO deploying and managing the life cycle of the Smart City Application pipeline on the edge cluster. More details in the [EMCO Release Notes](https://github.com/otcshare/EMCO/blob/main/ReleaseNotes.md).
+- Early access release of Edge Multi-Cluster Orchestration (EMCO), a geo-distributed application orchestrator for Kubernetes. This release supports EMCO deploying and managing the life cycle of the Smart City Application pipeline on the edge cluster. More details are in the [EMCO Release Notes](https://github.com/open-ness/EMCO/blob/main/ReleaseNotes.md).
- Reference implementation of the offline installation package for the Converged Edge Reference Architecture (CERA) Access Edge flavor enabling installation of Kubernetes and related enhancements for Access edge deployments.
- Azure Development kit (Devkit) supporting the installation of an OpenNESS Kubernetes cluster on the Microsoft* Azure* cloud. This is typically used by a customer who wants to develop applications and services for the edge using OpenNESS building blocks.
- Support for the Intel® vRAN Dedicated Accelerator ACC100: a Kubernetes cloud-native deployment supporting higher-capacity 4G/LTE and 5G vRAN cells/carriers for FEC offload.
@@ -331,11 +331,11 @@ There are no unsupported or discontinued features relevant to this release.
## OpenNESS - 21.03
- FlexRAN/Access Edge CERA Flavor is only available in Intel Distribution of OpenNESS
- OpenNESS repositories have been consolidated to the following
- - https://github.com/otcshare/converged-edge-experience-kits
- - https://github.com/otcshare/specs
- - https://github.com/otcshare/edgeapps
- - https://github.com/otcshare/edgeservices
- - https://github.com/otcshare/openshift-operator
+ - https://github.com/open-ness/converged-edge-experience-kits
+ - https://github.com/open-ness/specs
+ - https://github.com/open-ness/edgeapps
+ - https://github.com/open-ness/edgeservices
+ - https://github.com/open-ness/openshift-operator
# Fixed Issues
@@ -496,11 +496,11 @@ OpenNESS Edge node, OpenNESS Controller, Common, Spec, OpenNESS Applications, an
- Open Source: Edge node, Controller, Epcforedge, Common, Spec, Applications and Experience kit.
- IDO: IDO Edge node, IDO Controller, IDO Epcforedge, IDO Spec and IDO Experience kit.
## OpenNESS - 21.03
- - https://github.com/otcshare/converged-edge-experience-kits
- - https://github.com/otcshare/specs
- - https://github.com/otcshare/edgeapps
- - https://github.com/otcshare/edgeservices
- - https://github.com/otcshare/openshift-operator
+ - https://github.com/open-ness/converged-edge-experience-kits
+ - https://github.com/open-ness/specs
+ - https://github.com/open-ness/edgeapps
+ - https://github.com/open-ness/edgeservices
+ - https://github.com/open-ness/openshift-operator
# Hardware and Software Compatibility
OpenNESS Edge Node has been tested using the following hardware specification:
diff --git a/schema/5goam/5goam.swagger.json b/schema/5goam/5goam.swagger.json
index 4d050a0e..c0278b08 100644
--- a/schema/5goam/5goam.swagger.json
+++ b/schema/5goam/5goam.swagger.json
@@ -5,7 +5,7 @@
"title": "5G OAM Northbound API",
"contact": {
"name": "intel",
- "url": "github.com/otcshare/epcedge",
+ "url": "github.com/open-ness/epcedge",
"email": "support@intel.com"
},
"license": {
diff --git a/schema/5goam/5goam.swagger.yaml b/schema/5goam/5goam.swagger.yaml
index 5314c386..8da72063 100644
--- a/schema/5goam/5goam.swagger.yaml
+++ b/schema/5goam/5goam.swagger.yaml
@@ -7,7 +7,7 @@ info:
title: 5G OAM Northbound API
contact:
name: intel
- url: github.com/otcshare/epcedge
+ url: github.com/open-ness/epcedge
email: support@intel.com
license:
name: Apache 2.0 License
diff --git a/schema/pb/auth.proto b/schema/pb/auth.proto
index 69bac0d6..b60d28fe 100644
--- a/schema/pb/auth.proto
+++ b/schema/pb/auth.proto
@@ -4,7 +4,7 @@
syntax = "proto3";
package openness.auth;
-option go_package = "github.com/otcshare/schema;auth";
+option go_package = "github.com/open-ness/schema;auth";
import "google/api/annotations.proto";
import "protoc-gen-swagger/options/annotations.proto";
diff --git a/schema/pb/cups.proto b/schema/pb/cups.proto
index b9e86441..bb8ea2e6 100644
--- a/schema/pb/cups.proto
+++ b/schema/pb/cups.proto
@@ -4,7 +4,7 @@
syntax = "proto3";
package openness.cups;
-option go_package = "github.com/otcshare/cups;cups";
+option go_package = "github.com/open-ness/cups;cups";
import "google/protobuf/empty.proto";
import "google/api/annotations.proto";
diff --git a/schema/pb/eaa.proto b/schema/pb/eaa.proto
index c731a0f7..edaad841 100644
--- a/schema/pb/eaa.proto
+++ b/schema/pb/eaa.proto
@@ -4,7 +4,7 @@
syntax = "proto3";
package openness.eaa;
-option go_package = "github.com/otcshare/eaa;eaa";
+option go_package = "github.com/open-ness/eaa;eaa";
import "google/protobuf/empty.proto";
import "google/api/annotations.proto";
diff --git a/schema/pb/eva.proto b/schema/pb/eva.proto
index e1972b95..7dd8eee3 100644
--- a/schema/pb/eva.proto
+++ b/schema/pb/eva.proto
@@ -4,7 +4,7 @@
syntax = "proto3";
package openness.eva;
-option go_package = "github.com/otcshare/eva";
+option go_package = "github.com/open-ness/eva";
import "google/protobuf/empty.proto";