The commands in this write-up were run on an OpenShift v4.10.16 cluster.
- Logged in as cluster admin and using `tasty`, we will deploy the Serverless operator into our OpenShift cluster:

```sh
tasty install serverless-operator
```
- We will get three pods running in the `openshift-operators` namespace:

```
oc -n openshift-operators get pods

NAME                                         READY   STATUS    RESTARTS   AGE
knative-openshift-5c68df4fb6-l46mg           1/1     Running   0          52s
knative-openshift-ingress-7947985ccb-47f2r   1/1     Running   0          52s
knative-operator-58c4bcf7f7-mxpl4            1/1     Running   0          52s
```
- Now we need to deploy the Knative components we want to use; in our case that's `Knative Serving` and `Knative Eventing`. Let's get them deployed:

```sh
cat <<EOF | oc apply -f -
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec: {}
---
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeEventing
metadata:
  name: knative-eventing
  namespace: knative-eventing
spec: {}
EOF
```
- Several pods will be running in both namespaces:

```
oc -n knative-serving get pods

NAME                                                 READY   STATUS      RESTARTS   AGE
activator-548d6fd8f6-4t6lg                           2/2     Running     0          99s
activator-548d6fd8f6-d96dh                           2/2     Running     0          99s
autoscaler-7577cc875c-7g94j                          2/2     Running     0          99s
autoscaler-7577cc875c-khql9                          2/2     Running     0          99s
autoscaler-hpa-559ff79c67-2k9p4                      2/2     Running     0          96s
autoscaler-hpa-559ff79c67-czs7n                      2/2     Running     0          96s
controller-5f95b77fd6-4lsvg                          2/2     Running     0          48s
controller-5f95b77fd6-rsjch                          2/2     Running     0          89s
domain-mapping-65645f4697-4d7hx                      2/2     Running     0          98s
domain-mapping-65645f4697-rgmdf                      2/2     Running     0          98s
domainmapping-webhook-5c99d58dd7-fdlnb               2/2     Running     0          98s
domainmapping-webhook-5c99d58dd7-w295l               2/2     Running     0          98s
strg-version-migration-serving-serving-1.1.2-dpfjv   0/1     Completed   0          96s
webhook-6c8d66489-4xqgq                              2/2     Running     0          97s
webhook-6c8d66489-xw6hd                              2/2     Running     0          97s
```

```
oc -n knative-eventing get pods

NAME                                                   READY   STATUS      RESTARTS   AGE
eventing-controller-7c9d7879f-g8c5s                    2/2     Running     0          119s
eventing-controller-7c9d7879f-qsb7s                    2/2     Running     0          119s
eventing-webhook-748d66ffd-66c5f                       2/2     Running     0          119s
eventing-webhook-748d66ffd-jql8n                       2/2     Running     0          119s
imc-controller-66bc766f97-dllw9                        2/2     Running     0          115s
imc-controller-66bc766f97-kkrvj                        2/2     Running     0          115s
imc-dispatcher-6f7484c56b-cvn4v                        2/2     Running     0          114s
imc-dispatcher-6f7484c56b-wx7zq                        2/2     Running     0          114s
mt-broker-controller-5cf8668bd8-dqrdh                  2/2     Running     0          112s
mt-broker-controller-5cf8668bd8-h7f4d                  2/2     Running     0          112s
mt-broker-filter-67b6b9d445-4fl65                      2/2     Running     0          113s
mt-broker-filter-67b6b9d445-9lgkk                      2/2     Running     0          113s
mt-broker-ingress-55984d4d57-9hs7j                     2/2     Running     0          113s
mt-broker-ingress-55984d4d57-dlktx                     2/2     Running     0          113s
strg-version-migration-eventing-eventing-1.1.0-m96pb   0/1     Completed   0          112s
sugar-controller-6769b49ff4-n44sp                      2/2     Running     0          112s
sugar-controller-6769b49ff4-tjdb8                      2/2     Running     0          112s
```
For this first application we will use a basic REST app you can find here. We already have a container image for this application published on Quay.io as `quay.io/mavazque/reversewords:latest`, so we can go ahead and create a Serverless service by running the following commands:
- First, we will create a Namespace for our application:

```sh
cat <<EOF | oc apply -f -
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: null
  name: reversewords-serverless
EOF
```
- Now we can create the Serverless service inside the namespace we just created:

```sh
cat <<EOF | oc apply -f -
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: reversewords
  namespace: reversewords-serverless
spec:
  template:
    spec:
      containers:
      - image: quay.io/mavazque/reversewords:latest
        ports:
        - containerPort: 8080
        env:
        - name: RELEASE
          value: "Serverless"
EOF
```
- We can list the Knative service:

```
oc -n reversewords-serverless get ksvc

NAME           URL                                                                               LATESTCREATED        LATESTREADY          READY   REASON
reversewords   https://reversewords-reversewords-serverless.apps.mario-ipi.e2e.bos.redhat.com   reversewords-00001   reversewords-00001   True
```
Once the Knative service is deployed and ready, it can start receiving requests. If we check the `reversewords-serverless` namespace we will see that no pods are running:

```
oc -n reversewords-serverless get pods

No resources found in reversewords-serverless namespace.
```

This is expected: Knative services are scaled to zero and only spawn application pods when a request reaches the application route. The first request needs to wait for the app pods to start up, so it will take longer. That's why it's important to keep the startup time of apps backing serverless services short.
Let's do the first request:

```
time curl -k $(oc -n reversewords-serverless get ksvc reversewords -o jsonpath='{.status.url}')

Reverse Words Release: Serverless. App version: v0.0.25

real    0m3.531s
user    0m0.160s
sys     0m0.101s
```
It took 3.5s for the first request to get answered, and we can see that an application pod was spawned to handle it:

```
oc -n reversewords-serverless get pods

NAME                                             READY   STATUS    RESTARTS   AGE
reversewords-00001-deployment-7bf48c6fd5-gfsz2   2/2     Running   0          14s
```
If we run a second request quickly enough, it will be answered by the same app pod that was spawned for the first one:

```
time curl -k $(oc -n reversewords-serverless get ksvc reversewords -o jsonpath='{.status.url}')

Reverse Words Release: Serverless. App version: v0.0.25

real    0m0.091s
user    0m0.129s
sys     0m0.060s
```
When the application stops receiving traffic, Knative will scale the service back to zero:

ℹ️ It can take up to 2 minutes for Knative to scale down our application once it stops receiving traffic.

```
oc -n reversewords-serverless get pods

NAME                                             READY   STATUS        RESTARTS   AGE
reversewords-00001-deployment-7bf48c6fd5-gfsz2   2/2     Terminating   0          64s
```
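The scale-down timing mentioned in the note above is governed by the autoscaler configuration, which can be tuned through the KnativeServing custom resource. Below is a minimal sketch, assuming the stock `config-autoscaler` keys; the values shown are the upstream defaults, so applying it as-is changes nothing:

```sh
cat <<EOF | oc apply -f -
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  config:
    autoscaler:
      # How long the last pod is kept around after traffic drops to zero
      scale-to-zero-grace-period: "30s"
      # Window over which request metrics are averaged for scaling decisions
      stable-window: "60s"
EOF
```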
We can do traffic splitting with Knative, which is useful for scenarios like blue/green and canary deployments. We can think of a revision as a snapshot-in-time of application code and configuration. Let's see how we can create a revision.
- We will be using the same namespace and Knative service we created in the previous steps; we just need to edit the configuration we want for this new revision. For example, let's change the `RELEASE` config of our application:

```sh
cat <<EOF | oc apply -f -
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: reversewords
  namespace: reversewords-serverless
spec:
  template:
    spec:
      containers:
      - image: quay.io/mavazque/reversewords:latest
        ports:
        - containerPort: 8080
        env:
        - name: RELEASE
          value: "Revision2"
EOF
```
- We should now have a new revision in the existing Knative service:

ℹ️ The revision `00002` was created.

```
oc -n reversewords-serverless get ksvc reversewords

NAME           URL                                                                               LATESTCREATED        LATESTREADY          READY   REASON
reversewords   https://reversewords-reversewords-serverless.apps.mario-ipi.e2e.bos.redhat.com   reversewords-00002   reversewords-00002   True
```
- We can reach the new revision through the service route:

```
curl -k $(oc -n reversewords-serverless get ksvc reversewords -o jsonpath='{.status.url}')

Reverse Words Release: Revision2. App version: v0.0.25
```
- We now have the new revision running, but we can still access the `00001` revision, since revisions are snapshots-in-time as mentioned earlier. In the next section we will see how we can split traffic between revisions.
- Before we start splitting traffic we need to know which revisions are available. Right now we only have two, but let's get them listed anyway:

```
oc -n reversewords-serverless get revisions

NAME                 CONFIG NAME    K8S SERVICE NAME   GENERATION   READY   REASON   ACTUAL REPLICAS   DESIRED REPLICAS
reversewords-00001   reversewords                      1            True             0                 0
reversewords-00002   reversewords                      2            True             0                 0
```
- By default, when a new revision of a Knative Service is created, Knative directs 100% of the traffic to this latest revision. We can change this default behavior by specifying the amount of traffic we want each of our revisions to receive. Let's configure our revisions `00001` and `00002` to handle 50% of the traffic each:

```sh
cat <<EOF | oc apply -f -
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: reversewords
  namespace: reversewords-serverless
spec:
  template:
    spec:
      containers:
      - image: quay.io/mavazque/reversewords:latest
        ports:
        - containerPort: 8080
        env:
        - name: RELEASE
          value: "Revision2"
  traffic:
  - latestRevision: true
    percent: 50
  - latestRevision: false
    percent: 50
    revisionName: reversewords-00001
EOF
```
- We can now verify that the traffic is being split between the two revisions:

```
for i in $(seq 1 3); do
  curl -k $(oc -n reversewords-serverless get ksvc reversewords -o jsonpath='{.status.url}')
done

Reverse Words Release: Revision2. App version: v0.0.25
Reverse Words Release: Serverless. App version: v0.0.25
Reverse Words Release: Revision2. App version: v0.0.25
```
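As a side note, if you have the `kn` CLI installed, the same split can be applied without editing YAML. A sketch of the equivalent command:

```sh
# Send 50% of traffic to revision 00001 and 50% to the latest revision
kn service update reversewords -n reversewords-serverless \
  --traffic reversewords-00001=50 \
  --traffic @latest=50
```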
In the previous sections we have seen how to create serverless services and work with their revisions. In this next section we are going to focus on the tooling Knative provides for event-driven applications.
Event-driven applications are designed to detect events as they occur and to process them using user-defined event-handling procedures.
| Component | Definition |
|---|---|
| Source | A Kubernetes custom resource that emits events to the Broker. |
| Broker | A "hub" for events in your infrastructure; a central location to send events for delivery. |
| Trigger | Acts as a filter for events entering the broker; it can be configured with the desired event attributes. |
| Sink | A destination for events. |
ℹ️ A Knative service can act as both a Source and a Sink for events. It can consume events from the Broker and send modified events back to the Broker.
ℹ️ Knative Eventing uses CloudEvents to send information back and forth between your services and these components.
Knative Eventing has many other components; you can check the full list here.
If we were using the kn CLI with the kn quickstart plugin, we would get a default broker out of the box. Since we're not making use of it, we will label the namespace to get the broker deployed, as described in the official docs.
There are two types of channel-based brokers:

- InMemoryChannel
- KafkaChannel

The first one is useful for development and testing purposes, but it doesn't provide adequate event delivery guarantees for production environments. The latter provides the event delivery guarantees required for production. In this write-up we will use `InMemoryChannel`.
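For reference, the channel implementation backing brokers is driven by configuration in the `knative-eventing` namespace. The sketch below assumes the stock `default-ch-webhook` ConfigMap keys, propagated through the KnativeEventing CR; since `InMemoryChannel` is already the default, nothing actually needs to change for this write-up:

```sh
cat <<EOF | oc apply -f -
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeEventing
metadata:
  name: knative-eventing
  namespace: knative-eventing
spec:
  config:
    # Keys under spec.config are propagated to the corresponding ConfigMaps
    default-ch-webhook:
      default-ch-config: |
        clusterDefault:
          apiVersion: messaging.knative.dev/v1
          kind: InMemoryChannel
EOF
```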
- In order to get a default broker deployed, we will label the namespace where we want the broker to exist, in our case `knative-eventing`:

```sh
oc label namespace knative-eventing eventing.knative.dev/injection=enabled
```
- We can check the broker was deployed:

```
oc -n knative-eventing get broker

NAME      URL                                                                                 AGE   READY   REASON
default   http://broker-ingress.knative-eventing.svc.cluster.local/knative-eventing/default   21s   True
```
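Besides the CloudEvents Player application we will deploy next, events can be posted to the broker directly over HTTP using the CloudEvents binary binding. This is a minimal sketch run from a throwaway pod inside the cluster (the broker URL is cluster-local); the `Ce-Type` and `Ce-Source` values are arbitrary examples:

```sh
# Run curl in-cluster and POST a CloudEvent to the default broker;
# the broker should answer with a 202 Accepted
oc run curl-event -i --rm --restart=Never --image=curlimages/curl -- \
  -X POST "http://broker-ingress.knative-eventing.svc.cluster.local/knative-eventing/default" \
  -H "Ce-Id: test-1" \
  -H "Ce-Specversion: 1.0" \
  -H "Ce-Type: dev.example.test" \
  -H "Ce-Source: curl-pod" \
  -H "Content-Type: application/json" \
  -d '{"msg": "Hello Knative!"}'
```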
- Now, we're ready to deploy a simple implementation of an event-driven application.
In the previous section we created a default broker; in this section we will deploy an application that makes use of it and lets us see how the different components interact with each other.
- First, we need to create a new Namespace with a Knative Service for the application:

ℹ️ Take a look at the `min-scale` annotation; it makes sure that the application always has at least 1 replica running.

```sh
cat <<EOF | oc apply -f -
---
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: null
  name: cloudevents-app
---
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: cloudevents-player
  namespace: cloudevents-app
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "1"
    spec:
      containers:
      - image: quay.io/ruben/cloudevents-player:v1.1
        ports:
        - containerPort: 8080
        env:
        - name: BROKER_NAME
          value: "default"
        - name: BROKER_NAMESPACE
          value: "knative-eventing"
        - name: PLAYER_MODE
          value: KNATIVE
EOF
```
- We can access the CloudEvents application from our browser using the Knative service URL:

```
oc -n cloudevents-app get ksvc cloudevents-player -o jsonpath='{.status.url}'

https://cloudevents-player-cloudevents-app.apps.mario-ipi.e2e.bos.redhat.com/
```
- In this application we can create events to verify that the eventing components are working. Let's create an event by filling in the form with some data and pressing the `Send Event` button.
- By clicking the 📧 icon you will see the message as the broker sees it. You can also see that the event was sent to the broker, but this time the event has gone nowhere: the broker is simply a receptacle for events, and in order to get events somewhere else we need to create a trigger that listens for events and places them somewhere. In the next section we will create that trigger.
In the previous section we managed to send an event to the broker, but that event was not received anywhere else. We now want the event to go from the broker to an event sink. We will use the CloudEvents application both as the sink and as the source, which means it will both send and receive events. We do this by creating a trigger that listens for events in the broker and sends them to the sink.
- Let's create our trigger, which listens for CloudEvents from the broker and places them into the sink (which is also the CloudEvents application):

ℹ️ The `knative-eventing-injection: enabled` annotation will create a broker if the one specified does not exist in the namespace where the trigger is created.

ℹ️ The trigger has to be created in the namespace where the broker was created.

```sh
cat <<EOF | oc apply -f -
---
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: cloudevents-player
  namespace: knative-eventing
  annotations:
    knative-eventing-injection: enabled
spec:
  broker: default
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: cloudevents-player
      namespace: cloudevents-app
EOF
```
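The trigger above forwards every event that enters the broker. As mentioned in the components table, triggers can also filter on CloudEvent attributes. Below is a minimal sketch for illustration only (applying it is not required for this write-up); the trigger name and event type are hypothetical:

```sh
cat <<EOF | oc apply -f -
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: cloudevents-player-filtered
  namespace: knative-eventing
spec:
  broker: default
  filter:
    attributes:
      # Only events with this exact CloudEvent type are delivered to the sink
      type: dev.example.test
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: cloudevents-player
      namespace: cloudevents-app
EOF
```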
- Now, if we create an event in the CloudEvents application, we will see how the event gets sent and received:

The ▶️ icon shows the message was sent, and the ✅ icon shows that it was received.
With Knative Eventing we can also consume events emitted by the Kubernetes API server (e.g. pod creations, deployment updates, etc.) and forward them as CloudEvents to a sink. ApiServerSource is a core Knative Eventing component, and users can create multiple instances of an ApiServerSource object. In the example below we will create a new `Guitar` CRD in our cluster and consume the events emitted when interacting with this CRD.
- Let's create the `Guitar` CRD:

```sh
cat <<EOF | oc apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: guitars.kool.karmalabs.local
spec:
  group: kool.karmalabs.local
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              brand:
                type: string
              review:
                type: boolean
  scope: Namespaced
  names:
    plural: guitars
    singular: guitar
    kind: Guitar
    shortNames:
    - guitar
EOF
```
- In order to keep everything well organized, a namespace will be created:

ℹ️ Creating a namespace for your ApiServerSource and related components allows you to view changes and events for this workflow more easily.

```sh
cat <<EOF | oc apply -f -
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: null
  name: guitar-eventing
EOF
```
- Next, we will create the required RBAC for the ApiServerSource to work:

```sh
cat <<EOF | oc apply -f -
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: guitar-sa
  namespace: guitar-eventing
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: guitar-get
  namespace: guitar-eventing
rules:
- apiGroups:
  - "kool.karmalabs.local"
  resources:
  - guitars
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: guitar-get-binding
  namespace: guitar-eventing
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: guitar-get
subjects:
- kind: ServiceAccount
  name: guitar-sa
  namespace: guitar-eventing
EOF
```
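Before moving on, the RBAC can be sanity-checked by impersonating the service account. A quick sketch, which should print `yes` if the Role and RoleBinding were created correctly:

```sh
# Verify guitar-sa is allowed to list guitars in the guitar-eventing namespace
oc auth can-i list guitars.kool.karmalabs.local -n guitar-eventing \
  --as=system:serviceaccount:guitar-eventing:guitar-sa
```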
- We need a sink for the events; we will re-use the CloudEvents Player app:

```sh
cat <<EOF | oc apply -f -
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: cloudevents-player
  namespace: guitar-eventing
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "1"
    spec:
      containers:
      - image: quay.io/ruben/cloudevents-player:v1.1
        ports:
        - containerPort: 8080
        env:
        - name: PLAYER_MODE
          value: KNATIVE
EOF
```
- We can now create our ApiServerSource object targeting guitars:

```sh
cat <<EOF | oc apply -f -
apiVersion: sources.knative.dev/v1
kind: ApiServerSource
metadata:
  name: guitars-apisource
  namespace: guitar-eventing
spec:
  serviceAccountName: guitar-sa
  mode: Resource
  resources:
  - apiVersion: kool.karmalabs.local/v1
    kind: Guitar
  sink:
    ref:
      apiVersion: v1
      kind: Service
      name: cloudevents-player
EOF
```
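Note that `mode: Resource` sends the full object in each event, whereas `mode: Reference` would send only a reference to the object. Before creating guitars, we can check that the source became ready; a quick sketch:

```sh
# The READY column should report True once the source is wired to the sink
oc -n guitar-eventing get apiserversource guitars-apisource
```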
- At this point, let's access the CloudEvents Player UI:

```
oc -n guitar-eventing get ksvc cloudevents-player -o jsonpath='{.status.url}'

https://cloudevents-player-guitar-eventing.apps.mario-ipi.e2e.bos.redhat.com
```
- If we create a guitar, this is what we will get:

```sh
cat <<EOF | oc apply -f -
apiVersion: "kool.karmalabs.local/v1"
kind: Guitar
metadata:
  name: d15
  namespace: guitar-eventing
spec:
  brand: martin
  review: false
EOF
```
- As you can see, we got the event. If we modify the object we will get more events; let's add a label to our Guitar object, for example:

```sh
oc -n guitar-eventing label guitar d15 helloknative=""
```
- If we check the UI we will see a new event, and from the event JSON we can see what changed:

ℹ️ We can see that a new label, `helloknative`, was added to our Guitar object.