Repurposed for flow reconciliation library (#21)
* Preparing to repurpose this repo for data plane reconciliation library

* Defined a high-level sketch of the library interfaces

* Added a few more details to provide P4Info, etc.

* Added a sketch of the store tier

* Added minor clarification comment
tomikazi authored Jan 20, 2023
1 parent f6fb6aa commit b6ba90b
Showing 27 changed files with 537 additions and 1,337 deletions.
25 changes: 22 additions & 3 deletions .golangci.yml
@@ -1,18 +1,37 @@
# SPDX-FileCopyrightText: 2020-present Open Networking Foundation <[email protected]>
# SPDX-FileCopyrightText: 2019-present Open Networking Foundation <[email protected]>
#
# SPDX-License-Identifier: Apache-2.0

run:
# Autogenerated files take too much time and memory to load,
# even if we skip them with -skip-dirs or -skip-dirs;
# or mark them as generated; or use nolint annotations.
# So we define this tag and use it in the autogenerated files.
build-tags:
- codeanalysis

linters:
enable:
- gofmt
- gocyclo
- golint
- revive
- misspell
- typecheck
- errcheck
- dogsled
- unconvert
- nakedret
- exportloopref

issues:
exclude-use-default: false
exclude:
- Error return value of `.*Close` is not checked
- Error return value of `.*Flush` is not checked
- Error return value of `.*Write` is not checked
- Error return value of `.*Stop` is not checked
exclude-rules:
- path: pkg
linters:
- staticcheck
text: "SA1019:"

72 changes: 24 additions & 48 deletions Makefile
@@ -2,65 +2,41 @@
#
# SPDX-License-Identifier: Apache-2.0

export CGO_ENABLED=0
export CGO_ENABLED=1
export GO111MODULE=on

.PHONY: build

ONOS_CONTROL_VERSION := latest
ONOS_BUILD_VERSION := stable

build-tools:=$(shell if [ ! -d "./build/build-tools" ]; then cd build && git clone https://github.com/onosproject/build-tools.git; fi)
build-tools:=$(shell if [ ! -d "./build/build-tools" ]; then mkdir -p build && cd build && git clone https://github.com/onosproject/build-tools.git; fi)
include ./build/build-tools/make/onf-common.mk

build: # @HELP build the Go binaries and run all validations (default)
build:
CGO_ENABLED=1 go build -o build/_output/onos-control ./cmd/onos
go build -o build/_output/onos ./cmd/onos

test: # @HELP run the unit tests and source code validation
test: build deps linters license
go test github.com/onosproject/onos-control/pkg/...
go test github.com/onosproject/onos-control/cmd/...
.PHONY: build

protos: # @HELP compile the protobuf files (using protoc-go Docker)
docker run -it -v `pwd`:/go/src/github.com/onosproject/onos-control \
-w /go/src/github.com/onosproject/onos-control \
--entrypoint pkg/northbound/proto/compile-protos.sh \
onosproject/protoc-go:stable
build: # @HELP build the Go binaries (default)
build:
go build github.com/onosproject/onos-control/pkg/...

onos-control-base-docker: # @HELP build onos-control base Docker image
@go mod vendor
docker build . -f build/base/Dockerfile \
--build-arg ONOS_BUILD_VERSION=${ONOS_BUILD_VERSION} \
-t onosproject/onos-control-base:${ONOS_CONTROL_VERSION}
@rm -rf vendor
mod-update: # @HELP Download the dependencies to the vendor folder
go mod tidy
go mod vendor
mod-lint: mod-update # @HELP ensure that the required dependencies are in place
# dependencies are vendored, but not committed, go.sum is the only thing we need to check
bash -c "diff -u <(echo -n) <(git diff go.sum)"

onos-control-docker: onos-control-base-docker # @HELP build onos-control Docker image
docker build . -f build/onos-control/Dockerfile \
--build-arg ONOS_CONTROL_BASE_VERSION=${ONOS_CONTROL_VERSION} \
-t onosproject/onos-control:${ONOS_CONTROL_VERSION}

onos-cli-docker: onos-control-base-docker # @HELP build onos-cli Docker image
docker build . -f build/onos-cli/Dockerfile \
--build-arg ONOS_CONTROL_BASE_VERSION=${ONOS_CONTROL_VERSION} \
-t onosproject/onos-cli:${ONOS_CONTROL_VERSION}
test: # @HELP run the unit tests and source code validation producing a golang style report
test: mod-lint build linters license
go test -race github.com/onosproject/onos-control/pkg/...

onos-control-it-docker: onos-control-base-docker # @HELP build onos-control-integration-tests Docker image
docker build . -f build/onos-it/Dockerfile \
--build-arg ONOS_CONTROL_BASE_VERSION=${ONOS_CONTROL_VERSION} \
-t onosproject/onos-control-integration-tests:${ONOS_CONTROL_VERSION}
jenkins-test: # @HELP run the unit tests and source code validation producing a junit style report for Jenkins
jenkins-test: mod-lint build linters license jenkins-tools
TEST_PACKAGES=github.com/onosproject/onos-control/pkg/... ./build/build-tools/build/jenkins/make-unit

images: # @HELP build all Docker images
images: build onos-control-docker
publish: # @HELP publish version on github and dockerhub
./build/build-tools/publish-version ${VERSION}

kind: # @HELP build Docker images and add them to the currently configured kind cluster
kind: images
@if [ "`kind get clusters`" = '' ]; then echo "no kind cluster found" && exit 1; fi
kind load docker-image onosproject/onos-config:${ONOS_CONTROL_VERSION}
jenkins-publish: jenkins-tools # @HELP Jenkins calls this to publish artifacts
./build/build-tools/release-merge-commit

all: build images
all: test

clean:: # @HELP remove all the build artifacts
rm -rf ./build/_output ./vendor ./cmd/onos-control/onos-control ./cmd/dummy/dummy

go clean -testcache github.com/onosproject/onos-control/...
117 changes: 109 additions & 8 deletions README.md
@@ -4,19 +4,120 @@ SPDX-FileCopyrightText: 2020-present Open Networking Foundation <info@opennetwor
SPDX-License-Identifier: Apache-2.0
-->

# onos-control
# Data Plane Reconciliation Library
[![Build Status](https://travis-ci.com/onosproject/onos-control.svg?branch=master)](https://travis-ci.com/onosproject/onos-control)
[![Go Report Card](https://goreportcard.com/badge/github.com/onosproject/onos-control)](https://goreportcard.com/report/github.com/onosproject/onos-control)
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/gojp/goreportcard/blob/master/LICENSE)
[![GoDoc](https://godoc.org/github.com/onosproject/onos-control?status.svg)](https://godoc.org/github.com/onosproject/onos-control)

ONOS Control subsystem built using the µONOS architecture
This piece of the µONOS architecture is provided as a library, rather than a separate component.
The reasons for this design choice are listed at the end of this section.

**Note**
This is work in progress and will be updated regularly.
Please join the effort through our [Contacts and Meetings](https://github.com/onosproject/onos-config/blob/master/docs/community-info.md).
The principal aim of the library is to provide µONOS components and applications with common and uniform means to
program the behavior of the data plane using P4Runtime constructs, while remaining reasonably insulated from
the details of a particular P4 program on the networking device, i.e. physical pipeline.

## Design Documentation
This insulation will occur by exposing a logical P4 pipeline to the applications and internally
mapping it onto the specific networking device pipeline. The logical P4 pipeline will be formally defined
as a working P4 program, which can be tested via PTF. The act of mapping the logical pipeline onto the
physical one will be the responsibility of a translation layer (see below) and translator “plugins”.

The library interface will be generalizable to any arbitrary logical pipeline and will support arbitrary roles.
Updates and entries exchanged via this interface will be “transformed” between the logical and physical pipelines.
Consequently, each transform will either have an inverse transform or the system will have means of tracking
which logical pipeline entry originated any physical pipeline entry.

To capture the original intent, the library will persist the logical (high-level) constructs specified over
its interface rather than persisting the derived physical pipeline primitives; the latter will be (re)derived
as necessary. This will be necessary to allow physical pipeline/translator upgrades at run-time.

The following is the structure proposed for the library:

![Library Structure](docs/structure.png)

## Overview
The basic idea for the library is to expose a logical pipeline that can be programmed with the usual
P4Runtime constructs, albeit via an API that is not P4Runtime itself, but rather a subset or a
look-alike of a subset of the P4Runtime RPC calls. For example, there is no need to set the forwarding
pipeline configuration, as that will be done by the device provisioner. Similarly, applications will
not need to directly negotiate mastership for their roles, as this will be done by the library
on behalf of (and perhaps with the participation of) the application.
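To make this shape concrete, here is a hypothetical Go sketch of such a device-scoped surface (none of these names come from the actual library; the in-memory implementation is a toy stand-in for the real device-backed one). Note what is deliberately absent: no `SetForwardingPipelineConfig` and no explicit mastership arbitration calls.

```go
package main

import "fmt"

// Update is a placeholder for a P4Runtime-style update (INSERT/MODIFY/DELETE
// of a table entry); the real library would carry P4Runtime protobuf messages.
type Update struct {
	Op    string // "INSERT", "MODIFY", or "DELETE"
	Table string
	Match string
}

// Pipeline is a hypothetical, device-scoped handle to the logical pipeline.
// Forwarding-pipeline-config and mastership calls are intentionally missing:
// the device provisioner owns the former, the library drives the latter.
type Pipeline interface {
	// Write applies a batch of logical updates as a single transaction.
	Write(updates []Update) error
	// Read returns the logical entries currently recorded for a table.
	Read(table string) ([]Update, error)
}

// memPipeline is a toy in-memory implementation used only to make the
// sketch executable.
type memPipeline struct {
	entries map[string][]Update
}

func newMemPipeline() *memPipeline {
	return &memPipeline{entries: map[string][]Update{}}
}

func (p *memPipeline) Write(updates []Update) error {
	// Validate the whole batch before mutating anything (all-or-nothing).
	for _, u := range updates {
		if u.Op != "INSERT" {
			return fmt.Errorf("toy pipeline supports INSERT only, got %s", u.Op)
		}
	}
	for _, u := range updates {
		p.entries[u.Table] = append(p.entries[u.Table], u)
	}
	return nil
}

func (p *memPipeline) Read(table string) ([]Update, error) {
	return p.entries[table], nil
}

func main() {
	var p Pipeline = newMemPipeline()
	_ = p.Write([]Update{{Op: "INSERT", Table: "acl", Match: "10.0.0.0/8"}})
	rows, _ := p.Read("acl")
	fmt.Println(len(rows)) // prints 1
}
```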

## API and its Subjects
Most likely, the exposed operations will be Read, Write and Stream. The subjects of these operations
should be newly defined entities that transparently carry various P4Runtime Entries/Updates and
carry additional status that lets the application know where those updates/entities are in the
process of being applied to the data plane, e.g., pending, reconciling, applied.
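A minimal Go sketch of such a status-carrying subject follows; the three states come straight from the text above, but the type and method names are hypothetical, and the payload is elided to a string where the real entity would carry a P4Runtime message:

```go
package main

import "fmt"

// State tracks where an update sits in the process of being applied to the
// data plane. The values mirror the states named in the text.
type State int

const (
	Pending State = iota
	Reconciling
	Applied
)

func (s State) String() string {
	return [...]string{"PENDING", "RECONCILING", "APPLIED"}[s]
}

// TrackedUpdate wraps a P4Runtime-style update (reduced here to a string
// payload) together with its application status, so the application can
// observe progress without speaking P4Runtime directly.
type TrackedUpdate struct {
	Payload string
	State   State
}

// Advance moves an update one step toward Applied; in the real library this
// would be driven by the reconciliation machinery, not by the caller.
func (u *TrackedUpdate) Advance() {
	if u.State < Applied {
		u.State++
	}
}

func main() {
	u := TrackedUpdate{Payload: "table_add acl permit 10.0.0.0/8"}
	fmt.Println(u.State) // prints PENDING
	u.Advance()
	u.Advance()
	fmt.Println(u.State) // prints APPLIED
}
```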

Portions of the stream functionality for packet-out and packet-in are expected to remain without any augmentation.
It might also be possible to bypass any intervening layers - to be confirmed.

The API would continue to be device-centric, i.e., each Read/Write operation would span only a single device.
Updates given in a single Write should be treated as a transaction, meaning that the reconciler should apply all
derived entities to the device or none of them. There would be no provisions for multi-device operations and transactions.
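The all-or-nothing semantics can be sketched as a two-phase apply, shown below under toy assumptions (the types and the validity flag are hypothetical stand-ins for device acceptance checks): validate the entire batch first, and only then mutate device state, so a failing entity leaves the device untouched.

```go
package main

import (
	"errors"
	"fmt"
)

// Entry stands in for a derived, device-level pipeline entity.
type Entry struct {
	Key   string
	Valid bool // toy stand-in for "the device would accept this entry"
}

// applyAll applies a Write batch transactionally: either every derived
// entry lands in the device table, or none of them do.
func applyAll(device map[string]Entry, batch []Entry) error {
	// Phase 1: check the whole batch before touching device state.
	for _, e := range batch {
		if !e.Valid {
			return errors.New("rejecting batch: entry " + e.Key + " is invalid")
		}
	}
	// Phase 2: commit; no failure is possible in this toy model.
	for _, e := range batch {
		device[e.Key] = e
	}
	return nil
}

func main() {
	device := map[string]Entry{}
	bad := []Entry{{Key: "a", Valid: true}, {Key: "b", Valid: false}}
	fmt.Println(applyAll(device, bad) != nil, len(device)) // prints true 0 (nothing applied)
	good := []Entry{{Key: "a", Valid: true}, {Key: "b", Valid: true}}
	fmt.Println(applyAll(device, good) == nil, len(device)) // prints true 2 (all applied)
}
```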

The API will simplify the mastership arbitration process as much as possible and, at the very least,
will allow the application to learn of mastership changes so that it can tailor its own activities in response.
It remains to be seen if the application needs to participate in the mastership selection.
The library will drive the arbitration process itself.

## Store Controller
The library should persist these logical pipeline updates/entities to capture the application’s intent
and to allow retrieving it later, either via the API or by the lower levels of the library.
This functionality is denoted as the Store Controller.
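A hypothetical Go sketch of the store tier follows (the interface and names are illustrative, not the library's actual API). Per the design above, only the high-level logical form is persisted; derived physical entries are recomputed as needed, never stored.

```go
package main

import "fmt"

// Intent is a placeholder for a logical-pipeline update as submitted by
// the application; this is the only form that gets persisted.
type Intent struct {
	ID      string
	Payload string
}

// Store captures the Store Controller role sketched in the text.
type Store interface {
	Put(i Intent) error
	Get(id string) (Intent, bool)
	List() []Intent
}

// memStore is a toy in-memory Store; the real tier would presumably sit on
// a distributed store (the go.mod above pulls in the Atomix Go SDK).
type memStore struct {
	byID  map[string]Intent
	order []string
}

func newMemStore() *memStore { return &memStore{byID: map[string]Intent{}} }

func (s *memStore) Put(i Intent) error {
	if _, seen := s.byID[i.ID]; !seen {
		s.order = append(s.order, i.ID)
	}
	s.byID[i.ID] = i // re-Put with the same ID overwrites, preserving order
	return nil
}

func (s *memStore) Get(id string) (Intent, bool) {
	i, ok := s.byID[id]
	return i, ok
}

func (s *memStore) List() []Intent {
	out := make([]Intent, 0, len(s.order))
	for _, id := range s.order {
		out = append(out, s.byID[id])
	}
	return out
}

func main() {
	var s Store = newMemStore()
	_ = s.Put(Intent{ID: "fwd-1", Payload: "logical table entry"})
	i, ok := s.Get("fwd-1")
	fmt.Println(ok, i.Payload) // prints true logical table entry
}
```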

## Translator(s)
To transform a set of logical pipeline constructs into a set of device pipeline-specific constructs,
they need to be subjected to pipeline-specific translation. This layer of the library will manage such
translation and will offer a pipeline-agnostic API to accomplish it. The translator should provide sufficient
information to allow low-level constructs to be associated with their originating logical pipeline constructs,
so that the state of the operation can be appropriately reflected.

To attain run-time extensibility, the translation activities may need to be provided via side-car proxies.

## Reconciliation Controller
Once a set of pipeline-specific constructs is available, it must be applied to the device.
This will be the primary objective of the reconciliation controller. In addition to applying
newly arrived operations to the data plane, the reconciler must also make sure that the device in fact has
all the expected entries in its tables. Any departures from the expected state should be addressed
as soon as possible. The reconciliation approach should take performance-at-scale characteristics into
account and avoid continuously polling device state. Instead, it should kick in only when there is a reasonable
chance that a departure may have occurred, e.g., extended connection loss, controller outage, or a port/link outage.
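The core of a single reconciliation pass can be sketched as a diff-and-repair step, shown below under toy assumptions (flat string maps stand in for device tables, and an assignment stands in for a southbound Write); the key design point is that this pass runs from event handlers such as a restored connection, not from a polling loop:

```go
package main

import "fmt"

// diff computes which expected entries are missing from (or wrong on) the
// device — the core check the reconciler runs when it suspects a departure.
func diff(expected, actual map[string]string) []string {
	var missing []string
	for k, v := range expected {
		if actual[k] != v {
			missing = append(missing, k)
		}
	}
	return missing
}

// reconcile re-applies missing entries and reports how many it fixed.
// It is meant to be invoked on events (connection re-established, port/link
// state change), never from a continuous polling loop.
func reconcile(expected, actual map[string]string) int {
	missing := diff(expected, actual)
	for _, k := range missing {
		actual[k] = expected[k] // toy stand-in for a southbound Write
	}
	return len(missing)
}

func main() {
	expected := map[string]string{"acl/1": "permit", "fwd/2": "port3"}
	actual := map[string]string{"acl/1": "permit"} // device lost fwd/2, e.g. after a reboot
	// Event: connection to the device restored — run one reconciliation pass.
	fixed := reconcile(expected, actual)
	fmt.Println(fixed, len(actual)) // prints 1 2
}
```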

## SB P4Runtime Client
This layer may be merely the protoc-generated bindings, or it may be a thin layer built atop them.
This client would be created and operated using the underlying connection established by the reconciler.

The low-level data plane entries are reconciled on behalf of the application using the same role and role configuration.

## Benefits of the approach
There are several benefits to the above structure. They are listed below in no particular order:
* Using a logical P4 pipeline allows us to rely on an existing formal definition of a working P4 program that can be tested
* The same approach can be applied to different logical P4 pipelines in deployments where the pipeline used for
SD Fabric is not adequate/appropriate; this allows us to tailor functionality to SD Fabric while keeping the
approach applicable outside the SD Fabric use-case
* Building this functionality as a library (as opposed to a separate component) will avoid an intervening
network hop for packet-out and packet-in. It also allows use of the same message-stream for mastership
arbitration and provides connection fate-sharing
* By partitioning the pipeline resources, each application can maintain its own store of pipeline entries,
thus reducing the amount of data that needs to be reconciled and spreading the load between apps
(and instances of apps via horizontal scaling). This is in contrast with ONOS classic today,
which aggregates all flows for all devices in a single store, increasing the pressure on the store unnecessarily.
* Development of the key portions of the library can commence immediately, using an “identity” translator,
while the logical P4 pipeline is being defined

## Caveats
For accessing shared resources, e.g., ACL tables, it may be necessary to carve off a separate component
responsible for programming and reconciling such resources:
* Such a component would export its own API, and applications would access shared resources indirectly
to avoid collisions and/or complicated inter-app synchronization schemes
* Such a component would still use the reconciliation library internally

## Questions to address
* Does the library need to support multi-replica deployments for scaling applications?
* Affects how/where objects need to be stored
* Affects southbound communication/master arbitration
* As a future priority, this library (or a similar one) should also provide means to reconcile dynamic
configuration via gNMI (path/type/value) operations along the same mastership/role assignments;
admittedly, this is in contrast with/parallel to onos-config.

The [overall design document](https://docs.google.com/document/d/1IZz_8EG1AII3JYmTYla585Gbpe9dfSwChO8lEkehp4A/edit?usp=sharing) for µONOS
provides information and an overview of the complete effort.
84 changes: 0 additions & 84 deletions cmd/onos-control/onos-control.go

This file was deleted.

13 changes: 0 additions & 13 deletions cmd/onos/main.go

This file was deleted.

Binary file added docs/structure.png
26 changes: 18 additions & 8 deletions go.mod
@@ -1,13 +1,23 @@
module github.com/onosproject/onos-control

go 1.13
go 1.19

require (
github.com/golang/protobuf v1.3.1
github.com/mitchellh/go-homedir v1.1.0
github.com/onosproject/onos-config v0.0.0-20190715180819-079d3a8dc433
github.com/spf13/cobra v0.0.5
github.com/spf13/viper v1.4.0
google.golang.org/grpc v1.22.0
k8s.io/klog v0.3.3
github.com/atomix/go-sdk v0.10.0
github.com/onosproject/onos-api/go v0.10.21
github.com/p4lang/p4runtime v1.3.0
)

require (
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/mock v1.6.0 // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/google/go-cmp v0.5.7 // indirect
golang.org/x/net v0.0.0-20220412020605-290c469a71a5 // indirect
golang.org/x/sys v0.0.0-20220503163025-988cb79eb6c6 // indirect
golang.org/x/text v0.3.7 // indirect
golang.org/x/xerrors v0.0.0-20220411194840-2f41105eb62f // indirect
google.golang.org/genproto v0.0.0-20220407144326-9054f6ed7bac // indirect
google.golang.org/grpc v1.46.0 // indirect
google.golang.org/protobuf v1.28.0 // indirect
)
