
Kogito Operator


The Kogito Operator deploys Kogito Runtimes services from source and every piece of infrastructure that the services might need, such as SSO (Keycloak) and Persistence (Infinispan).


Kogito Operator requirements

  • Go v1.13 is installed.
  • The operator-sdk v0.15.1 is installed.
  • OpenShift 3.11 or 4.x is installed. (You can use CRC for local deployment.)

Kogito Operator installation

Deploying to OpenShift 4.x

The Kogito Operator is a namespaced operator, so you must install it into the namespace where you want your Kogito application to run.

(Optional) You can import the Kogito image stream using the oc client manually with the following command:

$ oc apply -f https://raw.githubusercontent.com/kiegroup/kogito-images/<version>/kogito-imagestream.yaml -n openshift

Replace <version> with the appropriate Kogito Images version.

This step is optional because the Kogito Operator creates the required ImageStreams when it installs a new application.

Automatically in OperatorHub

The Kogito Operator is available in the OperatorHub as a community operator. To find the Operator, search by the Kogito name.

You can also verify the Operator's availability in the catalog by running the following command:

$ oc describe operatorsource.operators.coreos.com/kogito-operator -n openshift-marketplace

Follow the OpenShift Web Console instructions in the Catalog -> OperatorHub section in the left menu to install it in any namespace in the cluster.

Kogito Operator in the Catalog

Find more information about installing an Operator on OpenShift in the official documentation.

Manually in OperatorHub

If you cannot find the Kogito Operator in OperatorHub, you can install it manually by creating an entry in the OperatorHub Catalog:

$ oc create -f deploy/olm-catalog/kogito-operator/kogito-operator-operatorsource.yaml

After several minutes, the Operator appears under the Catalog -> OperatorHub section in the OpenShift Web Console. To find the Operator, search by the Kogito name. You can then install the Operator as described in the Automatically in OperatorHub section.

Use this option only if you're building the operator yourself and need to add a custom version of the Kogito Operator in your cluster catalog.

Locally on your system

You can also run the Kogito Operator locally if you have the requirements configured on your local system.

Deploying to OpenShift 3.11 or Kubernetes 1.14+

The OperatorHub catalog is not available by default for OpenShift 3.11 and Kubernetes, so you must manually install the Kogito Operator on these platforms.

$ oc new-project <project-name>
$ ./hack/3.11deploy.sh

Please also note that if you need persistence and messaging and are relying on the Kogito Operator to deploy Infinispan and/or Kafka for you, the respective operators (Infinispan Operator and Strimzi) also have to be installed manually. Please refer to their documentation for how to properly install them on OpenShift 3.11 or Kubernetes.

For reference, we have an example of how to manually install the Kogito Operator and the dependent operators.

Kogito Runtimes service deployment

Deploying a new service

Use the OLM console to subscribe to the Kogito Operator Catalog Source within your namespace. After you subscribe, use the console to Create KogitoApp, or create one manually as shown in the following example:

$ oc create -f deploy/crds/app.kiegroup.org_v1alpha1_kogitoapp_cr.yaml
kogitoapp.app.kiegroup.org/example-quarkus created
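For reference, the custom resource in that file looks roughly like the following. This is a minimal sketch based on the build examples later in this document; the Git URL and context directory are illustrative:

apiVersion: app.kiegroup.org/v1alpha1
kind: KogitoApp
metadata:
  name: example-quarkus
spec:
  runtime: quarkus
  build:
    gitSource:
      uri: 'https://github.com/kiegroup/kogito-examples'
      contextDir: rules-quarkus-helloworld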

Alternatively, you can use the Kogito CLI to deploy your services:

$ kogito deploy-service example-quarkus https://github.com/kiegroup/kogito-examples/ --context-dir=rules-quarkus-helloworld

In both approaches, the Kogito Operator sets up a build configuration for you in the OpenShift cluster to build the service from source, from a file, or from a binary. From version 0.11.0 onward, there is also a new way of deploying a Kogito service from an already built image. See the Kubernetes Support section for more details.

Binary Builds

Kogito Operator can configure a build for you that will pull and build your application through the source to image (s2i) feature (available only on OpenShift).

Instead of using the cluster to build your entire application, you can upload your Kogito Service application binaries to the cluster using binary builds. To create a new service without a specific Git URL, run the following command:

$ kogito deploy-service example-quarkus

Access your application directory and run the following:

$ mvn clean package

This command produces a target directory that contains the application's binary files:

$ ls -l target/

total 372
drwxrwxr-x. 4 ricferna ricferna   4096 Mar 10 16:01 classes
-rw-rw-r--. 1 ricferna ricferna  19667 Mar 10 16:01 drools-quarkus-example-8.0.0-SNAPSHOT.jar
-rw-r--r--. 1 ricferna ricferna 311190 Mar 10 16:01 drools-quarkus-example-8.0.0-SNAPSHOT-runner.jar
-rw-rw-r--. 1 ricferna ricferna  14168 Mar 10 16:01 drools-quarkus-example-8.0.0-SNAPSHOT-sources.jar
drwxrwxr-x. 3 ricferna ricferna   4096 Mar 10 16:01 generated-sources
-rw-rw-r--. 1 ricferna ricferna     24 Mar 10 16:01 image_metadata.json
drwxrwxr-x. 2 ricferna ricferna  12288 Mar 10 16:01 lib
drwxrwxr-x. 2 ricferna ricferna   4096 Mar 10 16:01 maven-archiver
drwxrwxr-x. 3 ricferna ricferna   4096 Mar 10 16:01 maven-status

You can upload the entire directory to the cluster, or you might select only the relevant files:

  1. The runner JAR and the lib directory for the Quarkus runtime, or just the uber JAR if you're using Spring Boot.
  2. The classes/persistence directory, where the generated protobuf files reside.
  3. The image_metadata.json file, which contains the information about the image that will be built by the s2i feature.

Use the oc client to upload and start the image build:

$ oc start-build example-quarkus-binary --from-dir=target

That's it. In a couple of minutes you should have your Kogito Service application up and running, using the same binaries you built locally.

Build From File

The Kogito Operator can configure a build for you that builds your application through the source to image (s2i) feature (available only on OpenShift). This kind of build consists of uploading a Kogito service file, e.g. a DMN or a BPMN file, to the OpenShift cluster and automatically triggering a new source-to-image build.

You can provide a single DMN, DRL, BPMN, BPMN2, or properties file. The standalone file can be on the local filesystem or on a web site, such as GitHub:

# from filesystem
$ kogito deploy-service example-dmn-quarkus /tmp/kogito-examples/dmn-quarkus-example/src/main/resources/"Traffic Violation.dmn"
File found /tmp/kogito-examples/dmn-quarkus-example/src/main/resources/Traffic Violation.dmn.
...
The requested file(s) was successfully uploaded to OpenShift, a build with this file(s) should now be running. To see the logs, run 'oc logs -f bc/example-dmn-quarkus-builder -n kogito'

While checking the OpenShift build logs with the command oc logs -f bc/example-dmn-quarkus-builder -n kogito, you can see at the beginning of the build log:

Receiving source from STDIN as file TrafficViolation.dmn
Using docker-registry.default.svc:5000/openshift/kogito-quarkus-ubi8-s2i@sha256:729e158710dedba50a49943ba188d8f31d09568634896de9b903838fc4e34e94 as the s2i builder image
# from a git repository
$ kogito deploy-service example-dmn-quarkus https://raw.githubusercontent.com/kiegroup/kogito-examples/master/dmn-quarkus-example/src/main/resources/Traffic%20Violation.dmn
Asset found: TrafficViolation.dmn.
...
The requested file(s) was successfully uploaded to OpenShift, a build with this file(s) should now be running. To see the logs, run 'oc logs -f bc/example-dmn-quarkus-builder -n kogito'

If you check the logs, you can also see the DMN file being used to build the Kogito service.

Sometimes we also need to provide more than one Kogito resource file, e.g. a DRL and a BPMN. In that case, all we need to do is specify a whole directory. If any valid files are found in the given directory, the CLI compresses them and uploads them to the OpenShift cluster.

Note that any unsupported files in this directory are not copied. For more complex cases, use either a build from source or a binary build.

After a build is created, if for some reason you need to update the Kogito assets, you can do so with the oc start-build command. Keep in mind that an s2i build is not able to identify which files have changed and update only those; all files must be provided again. Suppose we have created a Kogito application with the following command:

$ kogito deploy-service example-dmn-quarkus-builder /tmp/kogito-examples/dmn-quarkus-example/src/main/resources/"Traffic Violation.dmn"
Uploading file "/tmp/kogito-examples/dmn-quarkus-example/src/main/resources/Traffic Violation.dmn" as binary input for the build ...
.
Uploading finished
build.build.openshift.io/example-dmn-quarkus-builder-2 started

Once you're done updating the files, provide them again with the following command:

$ oc start-build example-dmn-quarkus-builder --from-file /tmp/kogito-examples/dmn-quarkus-example/src/main/resources/"Traffic Violation.dmn"

If a directory was provided, just update the --from-file flag to --from-dir.

For example, deploying a whole directory with the Kogito CLI:

kogito deploy-service test /home/user/development/kogito-test/

The output of the CLI command above will be similar to the following:

The provided source is a dir, packing files.
File(s) found: [/home/user/development/kogito-test/Traffic Violation.dmn /home/user/development/kogito-test/application.properties].
...
The requested file(s) was successfully uploaded to OpenShift, a build with this file(s) should now be running. To see the logs, run 'oc logs -f bc/test-builder -n kogito'

If for some reason the build fails, pass the BUILD_LOGLEVEL environment variable with the desired verbosity level as a --build-env parameter, for example:

kogito --verbose deploy-service test /home/user/development/kogito-test/  --build-env BUILD_LOGLEVEL=5

Cleaning up a Kogito service deployment

$ kogito delete-service example-quarkus

Native vs. JVM builds

By default, the Kogito services are built with traditional Java compilers to save time and resources. This means that the final generated artifact is a JAR file for the chosen runtime (defaults to Quarkus), with its dependencies in the image user's home directory: /home/kogito/bin/lib.

Kogito services implemented with Quarkus can be built to a native binary. This results in a very low runtime footprint (see performance examples), but a lot of resources are consumed during build time. For more information about AOT compilation, see GraalVM Native Image.

In Kogito Operator tests, native builds take approximately 10 minutes and the build pod can consume up to 10GB of RAM and 1.5 CPU cores.

⚠️ By default, a Kogito application doesn't contain resource requests or limits. This may lead to a situation where a native build is terminated due to insufficient memory. To prevent this behaviour, you can specify a minimum memory request for the Kogito application build, making sure that the build pod is allocated on an OpenShift node with enough free memory. A side effect of this configuration is that OpenShift will prioritize the build pod. More information about pod prioritization based on pod requests and limits can be found on the Quality of Service Tiers page.

Example of memory request configuration:

apiVersion: app.kiegroup.org/v1alpha1
kind: KogitoApp
metadata:
  name: process-quarkus-example
  namespace: kogito
spec:
  build:
    gitSource:
      contextDir: process-quarkus-example
      uri: 'https://github.com/kiegroup/kogito-examples'
    native: true
    resources:
      requests:
        memory: "4Gi"
  runtime: quarkus

⚠️ Ensure that you have these resources available on your OpenShift nodes when running native builds. Otherwise the S2I build will fail. You can check currently allocated and total resources of your nodes using the command oc describe nodes invoked by a user with admin rights.

You can also limit the maximum heap space of the JVM used for a native build by setting the quarkus.native.native-image-xmx property in the application.properties file. In that case the build pod requires roughly xmx + 2 GB of memory. The xmx value depends on the complexity of the application; for example, for process-quarkus-example an xmx value of 2g is enough, resulting in the builder pod consuming just up to 4.2 GB of memory.
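For example, the 2g limit mentioned above for process-quarkus-example is a single entry in application.properties:

quarkus.native.native-image-xmx=2g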

The user can also set resource limits for a native build pod. In that case 80% of the memory limit is used for heap space in the JVM responsible for native build. If the computed heap space limit for the JVM is less than 1024 MB then all the memory from resource limits is used.

Example of memory limit configuration:

apiVersion: app.kiegroup.org/v1alpha1
kind: KogitoApp
metadata:
  name: process-quarkus-example
  namespace: kogito
spec:
  build:
    gitSource:
      contextDir: process-quarkus-example
      uri: 'https://github.com/kiegroup/kogito-examples'
    native: true
    resources:
      limits:
        memory: "4Gi"
  runtime: quarkus

To deploy a service using native builds, run the deploy-service command with the --native flag:

$ kogito deploy-service example-quarkus https://github.com/kiegroup/kogito-examples/ --context-dir=drools-quarkus-example --native

Kogito Runtimes properties configuration

When a Kogito service is deployed, a configMap will be created for the application.properties configuration of the Kogito service.

The name of the configMap consists of the name of the Kogito service and the suffix -properties. For example:

kind: ConfigMap
apiVersion: v1
metadata:
  name: process-quarkus-example-properties
data:
  application.properties : |-
    dummy1=dummy1
    dummy2=dummy2

The data application.properties of the configMap will be mounted in a volume to the container of the Kogito service. Any runtime properties added to application.properties will override the default application configuration properties of the Kogito service.

When there are changes to application.properties of the configMap, a rolling update will take place to update the deployment and configuration of the Kogito service.
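For example, you can change the properties with the oc client and let the operator roll out the update. The configMap name follows the <service-name>-properties convention described above; the property values are illustrative:

# Open the properties configMap of the service in an editor
$ oc edit configmap process-quarkus-example-properties

# Or replace the application.properties content with a merge patch
$ oc patch configmap process-quarkus-example-properties --type merge \
    -p '{"data":{"application.properties":"dummy1=new-value\ndummy2=dummy2"}}'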

Troubleshooting Kogito Runtimes service deployment

No builds are running

If you do not see any builds running nor any resources created in the namespace, review the Kogito Operator log.

To view the Operator logs, first identify where the operator is deployed:

$ oc get pods

NAME                                     READY   STATUS      RESTARTS   AGE
kogito-operator-6d7b6d4466-9ng8t   1/1     Running     0          26m

Use the pod name as input for the following command:

$ oc logs -f kogito-operator-6d7b6d4466-9ng8t

Kubernetes Support

From version 0.11.0 onward, we provide a new CR (Custom Resource) called KogitoRuntime that enables Kubernetes deployment with the Kogito Operator.

This new resource does not require building the images in the cluster. Instead, you just pass the Kogito service image you wish to deploy, and the Kogito Operator does the heavy lifting for you.

For this to work, we assume that you have built your own Kogito service image and pushed it to an internal or third party registry such as Quay.io.

Building a Kogito Runtime Service Image

Assuming you already have a Quarkus Kogito project in place (see the examples) and are ready to publish it to a Kubernetes or OpenShift cluster, follow these steps (a concrete example follows the list):

  1. Copy the example file quarkus-jvm.Dockerfile to your project's root
  2. Build your project as you normally would, for example: mvn clean package
  3. Build the image: podman build --tag quay.io/<yournamespace>/<project-name>:latest -f quarkus-jvm.Dockerfile .
  4. Test the image locally with: podman run --rm -it -p 8080:8080 quay.io/<yournamespace>/<project-name>:latest
  5. Push it to the registry: podman push quay.io/<yournamespace>/<project-name>:latest
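For example, with the process-quarkus-example project and an illustrative Quay namespace named mynamespace, the flow above looks like this:

$ mvn clean package
$ podman build --tag quay.io/mynamespace/process-quarkus-example:latest -f quarkus-jvm.Dockerfile .
$ podman run --rm -it -p 8080:8080 quay.io/mynamespace/process-quarkus-example:latest
$ podman push quay.io/mynamespace/process-quarkus-example:latest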

Deploying a Kogito Runtime Service in Kubernetes

Once you have the image ready, just create a CR like the example below:

cat << EOF > kogito-service.yaml
apiVersion: app.kiegroup.org/v1alpha1
kind: KogitoRuntime
metadata:
  name: process-quarkus-example
spec:
  replicas: 1
  image:
    domain: quay.io
    namespace: your-namespace
    name: your-image-name
EOF

Assuming you already have the Kogito Operator installed, now you can create the resource in the cluster:

kubectl apply -f kogito-service.yaml

In a few moments, you will see your service successfully deployed. All the configuration, persistence and messaging integration referenced in this documentation can also be applied to the KogitoRuntime resource.

For a more complex deployment that enables persistence and messaging to the Kogito service, please refer to the travel-agency directory in the examples section.

Kogito Data Index Service deployment

The Kogito Operator can deploy the Data Index Service as a Custom Resource (KogitoDataIndex).

The Data Index Service depends on Kafka. Starting with version 0.6.0, the Kogito Operator deploys an Apache Kafka Cluster (based on Strimzi operator) in the same namespace.

The Data Index Service also depends on Infinispan, but starting with version 0.6.0 of the Kogito Operator, an Infinispan Server is automatically deployed for you.

Deploying Infinispan

If you plan to use the Data Index Service to connect to an Infinispan Server instance deployed within the same namespace, the Kogito Operator can handle this deployment for you.

When you install the Kogito Operator from OperatorHub, the Infinispan Operator is installed in the same namespace. If you do not have access to OperatorHub or OLM in your cluster, you can manually deploy the Infinispan Operator.

After you deploy the Infinispan Operator, see Deploying Strimzi for next steps.

Deploying Strimzi

When you install the Kogito Operator from OperatorHub, the Strimzi Operator is installed in the same namespace. You can also manually deploy the Strimzi Operator.

Now that you have the required infrastructure, you can deploy the Kogito Data Index Service.

Kogito Data Index Service installation

Installing the Kogito Data Index Service with the Kogito CLI

The Kogito Operator can deploy a Kafka instance for you; please refer to Kafka For Data Index for details.

If you have installed the Kogito CLI, run the following command to create the Kogito Data Index resource:

$ kogito install data-index -p my-project

You can also manually deploy a Kafka instance via Strimzi and use the Kafka service URL or Kafka instance name to install the Data Index Service.

Run the following command to create the Kogito Data Index resource, replacing the URL with the Kafka URL you retrieved for your environment:

$ kogito install data-index -p my-project --kafka-url my-cluster-kafka-bootstrap:9092

Or run the following command to create the Kogito Data Index resource with the Kafka instance name:

$ kogito install data-index -p my-project --kafka-instance my-cluster

Infinispan is deployed for you using the Infinispan Operator. Ensure that the Infinispan deployment is running in your project. If the deployment fails, the following error message appears:

Infinispan Operator is not available in the Project: my-project. Please make sure to install it before deploying Data Index without infinispan-url provided

To resolve the error, review the deployment procedure to this point to ensure that all steps have been successful.

Installing the Kogito Data Index Service with the Operator Catalog (OLM)

If you are running on OpenShift 4.x, you can use the OperatorHub user interface to create the Kogito Data Index resource. In the OpenShift Web Console, go to Installed Operators -> Kogito Operator -> Kogito Data Index. Click Create Kogito Data Index and create a new resource that uses the Infinispan and Kafka services, as shown in the following example:

apiVersion: app.kiegroup.org/v1alpha1
kind: KogitoDataIndex
metadata:
  name: data-index
spec:
  # Number of pods to be deployed
  replicas: 1
  # Image to use for this deployment
  image: "quay.io/kiegroup/kogito-data-index:latest"

Installing the Kogito Data Index Service with the oc client

To create the Kogito Data Index resource using the oc client, you can use the CR file from the previous example as a reference and create the custom resource from the command line as shown in the following example:

# Clone this repository
$ git clone https://github.com/kiegroup/kogito-cloud-operator.git
$ cd kogito-cloud-operator
# Make your changes
$ vi deploy/crds/app.kiegroup.org_v1alpha1_kogitodataindex_cr.yaml
# Deploy to the cluster
$ oc create -f deploy/crds/app.kiegroup.org_v1alpha1_kogitodataindex_cr.yaml -n my-project

You can access the GraphQL interface through the route that was created for you:

$ oc get routes -l app=data-index

NAME         HOST/PORT                                        PATH   SERVICES     PORT   TERMINATION   WILDCARD
data-index   data-index-kogito.apps.mycluster.example.com            data-index   8080   None
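The Data Index typically serves its GraphQL endpoint at the /graphql path of this route. As a quick check, you can post a query with curl; the host below comes from the example output above and the query fields are illustrative:

$ curl -s -X POST \
    -H 'Content-Type: application/json' \
    -d '{"query": "{ ProcessInstances { id processId state } }"}' \
    http://data-index-kogito.apps.mycluster.example.com/graphql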

Kogito Data Index Integration with persistent Kogito Services

If your Kogito Service has persistence enabled, Data Index will mount a volume based on a configMap created for you during the deployment of the service.

This configMap has the -protobuf-files suffix and inside it you'll see the protobuf files that your service generated during build time. For example:

kind: ConfigMap
apiVersion: v1
metadata:
  name: example-quarkus-protobuf-files
  labels:
    kogito-protobuf: "true"
data:
  visaApplications.proto: |-
    syntax = "proto2"; 
    package org.acme.travels.visaApplications; 
    import "kogito-index.proto";
    import "kogito-types.proto";
    option kogito_model = "VisaApplications";
    option kogito_id = "visaApplications";
    // data suppressed for brevity

During the deployment of a new set of protobuf files (for example, when a new persistent Kogito Service is deployed), Data Index will spin up a new pod that mounts the new volume.

Updated protobuf files are automatically refreshed by Kubernetes volumes after some time. This means that if you add a new property to your domain data, it is reflected automatically in the Data Index without restarts.

Please note that removing a Kogito Service also removes the protobuf files associated with it. This means that you won't be able to see the data through the Data Index anymore, although the data is still persisted in Infinispan.

Kogito Data Index Service properties configuration

When Data Index is deployed, a configMap will be created for the application.properties configuration of Data Index.

The name of the configMap consists of the name of the Data Index and the suffix -properties. For example:

kind: ConfigMap
apiVersion: v1
metadata:
  name: data-index-properties
data:
  application.properties : |-
    dummy1=dummy1
    dummy2=dummy2

The data application.properties of the configMap will be mounted in a volume to the container of the Data Index. Any runtime properties added to application.properties will override the default application configuration properties of Data Index.

When there are changes to application.properties of the configMap, a rolling update will take place to update the deployment and configuration of Data Index.

Kogito Jobs Service deployment

Like Data Index, Jobs Service can be deployed via Operator or CLI. If persistence is required, the operator will also deploy an Infinispan server using Infinispan Operator.

Kogito Jobs Service installation

There are a couple of ways to install the Jobs Service into your namespace using the Kogito Operator.

Installing the Kogito Jobs Service with the Kogito CLI

If you have installed the Kogito CLI, run the following command to create the Kogito Jobs Service resource:

$ kogito install jobs-service -p my-project

There are some options to customize the Jobs Service deployment with CLI. Run kogito install jobs-service --help to understand and set them according to your requirements.

Installing the Kogito Jobs Service with the Operator Catalog (OLM)

If you are running on OpenShift 4.x, you can use the OperatorHub user interface to create the Kogito Jobs Service resource. In the OpenShift Web Console, go to Installed Operators -> Kogito Operator -> Kogito Jobs Service. Click Create Kogito Jobs Service and create a new resource as shown in the following example:

apiVersion: app.kiegroup.org/v1alpha1
kind: KogitoJobsService
metadata:
  name: jobs-service
spec:
  replicas: 1

Installing the Kogito Jobs Service with the oc client

To create the Kogito Jobs Service resource using the oc client, you can use the CR file from the previous example as a reference and create the custom resource from the command line as shown in the following example:

 # Clone this repository
 $ git clone https://github.com/kiegroup/kogito-cloud-operator.git
 $ cd kogito-cloud-operator
 # Make your changes
 $ vi deploy/crds/app.kiegroup.org_v1alpha1_kogitojobsservice_cr.yaml
 # Deploy to the cluster
 $ oc create -f deploy/crds/app.kiegroup.org_v1alpha1_kogitojobsservice_cr.yaml -n my-project

Enabling Persistence with Infinispan

Jobs Service supports persistence with Infinispan by setting the property spec.infinispan.useKogitoInfra to true in the CR or the flag --enable-persistence in the CLI.

When you do this, the Kogito Operator deploys a new Infinispan server via the Infinispan Operator in the same namespace as the Jobs Service, and sets all the information regarding server authentication.

For this to work, bear in mind that the Infinispan Operator must be installed in the namespace. If the Kogito Operator was installed with OLM, the Infinispan Operator was installed alongside it. If the Kogito Operator was installed manually, you must also install the Infinispan Operator manually.

It's also possible to fine-tune the Infinispan integration by setting spec.infinispan.useKogitoInfra to false and providing spec.infinispan.credentials and spec.infinispan.uri in the CR. This way the Infinispan server won't be deployed, and the Jobs Service will try to connect to the given URI. Just make sure that your cluster has access to this URI.
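For example, a CR pointing the Jobs Service at an externally managed Infinispan server might look like the following sketch, using the fields named above; the URI is illustrative:

apiVersion: app.kiegroup.org/v1alpha1
kind: KogitoJobsService
metadata:
  name: jobs-service
spec:
  replicas: 1
  infinispan:
    useKogitoInfra: false
    uri: external-infinispan-server:11222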

This process behaves similarly to the one defined by the Data Index Service.

Kogito Jobs Service properties configuration

When Jobs Service is deployed, a configMap will be created for the application.properties configuration of Jobs Service.

The name of the configMap consists of the name of the Jobs Service and the suffix -properties. For example:

kind: ConfigMap
apiVersion: v1
metadata:
  name: jobs-service-properties
data:
  application.properties : |-
    dummy1=dummy1
    dummy2=dummy2

The data application.properties of the configMap will be mounted in a volume to the container of the Jobs Service. Any runtime properties added to application.properties will override the default application configuration properties of Jobs Service.

When there are changes to application.properties of the configMap, a rolling update will take place to update the deployment and configuration of Jobs Service.

Kogito Management Console Install

⚠️ Management Console only works with Data Index. Make sure to deploy the Data Index before trying to deploy this service.

Like Data Index and Jobs Service, the Management Console can also be installed via CLI or Operator.

Installing the Management Console with the Kogito CLI

If you have installed the Kogito CLI, run the following command to create the Kogito Management Console resource:

$ kogito install mgmt-console -p my-project

There are some options to customize the Management Console deployment with CLI. Run kogito install mgmt-console --help to understand and set them according to your requirements.

Installing the Management Console with the Operator Catalog (OLM)

If you are running on OpenShift 4.x, you can use the OperatorHub user interface to create the Kogito Management Console resource. In the OpenShift Web Console, go to Installed Operators -> Kogito Operator -> Kogito Management Console. Click Create Kogito Management Console and create a new resource as shown in the following example:

apiVersion: app.kiegroup.org/v1alpha1
kind: KogitoMgmtConsole
metadata:
  name: management-console
spec:
  replicas: 1

You should be able to see the Management Console pod up and running in a couple of minutes. To see its deployed URL, run:

$ oc get kogitomgmtconsole

NAME                 REPLICAS   IMAGE                                                                      ENDPOINT
management-console   1          quay.io/kiegroup/kogito-management-console:0.11.0-rc1 (Internal Registry)   http://management-console-kogito-1445.apps-crc.testing

The ENDPOINT column contains the URL that you need to access the application.

Kogito CLI

The Kogito CLI tool enables you to deploy new Kogito services from source instead of relying on CRs and YAML files.

Kogito CLI requirements

  • The oc client is installed.
  • You are an authenticated OpenShift user with permissions to create resources in a given namespace.

Kogito CLI installation

For Linux and macOS

  1. Download the correct Kogito distribution for your machine.

  2. Unpack the binary: tar -xvf release.tar.gz

    You should see an executable named kogito.

  3. Move the binary to a pre-existing directory in your PATH, for example, # cp /path/to/kogito /usr/local/bin.

For Windows

  1. Download the latest 64-bit Windows release of the Kogito distribution.

  2. Extract the zip file through a file browser.

  3. Add the extracted directory to your PATH. You can now use kogito from the command line.

Building the Kogito CLI from source

⚠️ To build the Kogito CLI from source, ensure that Go is installed and available in your PATH.

Run the following command to build and install the Kogito CLI:

$ git clone https://github.com/kiegroup/kogito-cloud-operator
$ cd kogito-cloud-operator
$ make install-cli

This installs the CLI in GOPATH/bin by default. The Go documentation recommends including this directory in your PATH. If you have done that, the kogito CLI can be executed directly, as shown below:
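If GOPATH/bin is not yet on your PATH, you can add it as follows (a typical setup; adjust the path for your shell and system):

$ export PATH="$PATH:$(go env GOPATH)/bin"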

$ kogito
Kogito CLI deploys your Kogito Services into an OpenShift cluster

Usage:
  kogito [command]

Available Commands:
  completion     Generates a completion script for the given shell (bash or zsh)
  delete-project Deletes a Kogito Project - i.e., the Kubernetes/OpenShift project
  delete-service Deletes a Kogito Runtime Service deployed in the namespace/project
  deploy-service Deploys a new Kogito Runtime Service into the given Project
  help           Help about any command
  install        Install all sorts of infrastructure components to your Kogito project
  new-project    Creates a new Kogito Project for your Kogito Services
  project        Display the current used project
  remove         remove all sorts of infrastructure components from your Kogito project
  use-project    Sets the Kogito Project where your Kogito Service will be deployed

Flags:
      --config string   config file (default is $HOME/.kogito/config.yaml)
  -h, --help            help for kogito
  -o, --output string   output format (when defined, 'json' is supported)
  -v, --verbose         verbose output
      --version         display version

Use "kogito [command] --help" for more information about a command.

Kogito CLI output format and environment variables

When the output format is undefined, messages are outputted in simple, human-readable form.

$ kogito project
Using project 'testns1'

When the output format is defined as 'json', messages are outputted for the purpose of parsing by external programs.

$ kogito project -o json
{"level":"INFO","time":"2020-02-27T01:37:40.935-0500","name":"kogito-cli","message":"Using project 'testns1'"}

Environment variables can be used to change the keys inside the json message. Setting a key to an empty string will remove the key/value pair from the json message entirely.

$ KOGITO_LOGGER_LEVEL_KEY=Severity KOGITO_LOGGER_TIME_KEY= KOGITO_LOGGER_NAME_KEY= KOGITO_LOGGER_MESSAGE_KEY=Text kogito project -o json
{"Severity":"INFO","Text":"Using project 'testns1'"}

When the output format is undefined, setting an environment variable to a non-empty string will include its value in the human-readable message.

$ KOGITO_LOGGER_LEVEL_KEY=L kogito project
INFO    Using project 'testns1'

Deploying a Kogito service from source with the Kogito CLI

After you complete the Kogito Operator installation, you can deploy a new Kogito service by using the Kogito CLI:

# creates a new namespace in your cluster
$ kogito new-project kogito-cli

# deploys a new Kogito Runtime Service from source
$ kogito deploy-service example-drools https://github.com/kiegroup/kogito-examples --context-dir drools-quarkus-example

If you are using OpenShift 3.11 as described in Deploying to OpenShift 3.11, use the existing namespace that you created during the manual deployment, as shown in the following example:

# Use the provisioned namespace in your OpenShift 3.11 cluster
$ kogito use-project <project-name>

# Deploys new Kogito service from source
$ kogito deploy-service example-drools https://github.com/kiegroup/kogito-examples --context-dir drools-quarkus-example

You can shorten the previous command as shown in the following example:

$ kogito deploy-service example-drools https://github.com/kiegroup/kogito-examples --context-dir drools-quarkus-example --project <project-name>

Prometheus integration with the Kogito Operator

Prometheus annotations

By default, if your Kogito Runtimes service contains the monitoring-prometheus-addon dependency, metrics for the Kogito service are enabled. For more information about Prometheus metrics in Kogito services, see Enabling metrics.

The Kogito Operator adds Prometheus annotations to the pod and service of the deployed application, as shown in the following example:

apiVersion: v1
kind: Service
metadata:
  annotations:
    org.kie.kogito/managed-by: Kogito Operator
    org.kie.kogito/operator-crd: KogitoApp
    prometheus.io/path: /metrics
    prometheus.io/port: "8080"
    prometheus.io/scheme: http
    prometheus.io/scrape: "true"
  labels:
    app: onboarding-service
    onboarding: process
  name: onboarding-service
  namespace: kogito
  ownerReferences:
  - apiVersion: app.kiegroup.org/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: KogitoApp
    name: onboarding-service
spec:
  clusterIP: 172.30.173.165
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: onboarding-service
    onboarding: process
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

Prometheus Operator

The Prometheus Operator does not directly support the Prometheus annotations that the Kogito Operator adds to your Kogito services. If you are deploying the Kogito Operator on OpenShift 4.x, then you are likely using the Prometheus Operator.

Therefore, in a scenario where Prometheus is deployed and managed by the Prometheus Operator, and if metrics for the Kogito service are enabled, a new ServiceMonitor resource is deployed by the Kogito Operator to expose the metrics for Prometheus to scrape:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    app: onboarding-service
  name: onboarding-service
  namespace: kogito
spec:
  endpoints:
  - path: /metrics
    targetPort: 8080
    scheme: http
  namespaceSelector:
    matchNames:
    - kogito
  selector:
    matchLabels:
      app: onboarding-service

You must manually configure your Prometheus resource that is managed by the Prometheus Operator to select the ServiceMonitor resource:

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
spec:
  serviceAccountName: prometheus
  serviceMonitorSelector:
    matchLabels:
      app: onboarding-service

After you configure your Prometheus resource with the ServiceMonitor resource, you can see the endpoint being scraped by Prometheus in the Targets page of the Prometheus web console.

The metrics exposed by the Kogito service appear in the Graph view.

For more information about the Prometheus Operator, see the Prometheus Operator documentation.

Infinispan integration

To help you start and run an Infinispan Server instance in your project, the Kogito Operator has a resource called KogitoInfra to handle Infinispan deployment for you.

The KogitoInfra resource uses the Infinispan Operator to deploy new Infinispan server instances if needed.

You can freely edit and manage the Infinispan instance. The Kogito Operator does not manage or handle the Infinispan instances. For example, if you have plans to scale the Infinispan cluster, you can edit the replicas field in the Infinispan CR to meet your requirements.

By default, the KogitoInfra resource creates a secret that holds the user name and password for Infinispan authentication. To view the credentials, run the following command:

$ oc get secret/kogito-infinispan-credential -o yaml

apiVersion: v1
data:
  password: VzNCcW9DeXdpMVdXdlZJZQ==
  username: ZGV2ZWxvcGVy
kind: Secret
(...)

The key values are encoded in Base64. To view the password from the previous example output in your terminal, run the following command:

$ echo VzNCcW9DeXdpMVdXdlZJZQ== | base64 -d

W3BqoCywi1WWvVIe

For more information about Infinispan Operator, please see their official documentation.

Note: Sometimes the OperatorHub installs the DataGrid operator instead of Infinispan when installing the Kogito Operator. If this happens, please uninstall DataGrid and install Infinispan manually, since they are not compatible.

Infinispan for Kogito Services

If your Kogito Service depends on the persistence add-on, the Kogito Operator installs Infinispan and injects the connection properties as environment variables into the service. Depending on the runtime, these variable names differ. See the table below:

| Quarkus Runtime | Springboot Runtime | Description | Example |
| --- | --- | --- | --- |
| QUARKUS_INFINISPAN_CLIENT_SERVER_LIST | INFINISPAN_REMOTE_SERVER_LIST | Service URI from deployed Infinispan | kogito-infinispan:11222 |
| QUARKUS_INFINISPAN_CLIENT_AUTH_USERNAME | INFINISPAN_REMOTE_AUTH_USER_NAME | Default username generated by Infinispan Operator | developer |
| QUARKUS_INFINISPAN_CLIENT_AUTH_PASSWORD | INFINISPAN_REMOTE_AUTH_PASSWORD | Random password generated by Infinispan Operator | Z1Nz34JpuVdzMQKi |
| QUARKUS_INFINISPAN_CLIENT_SASL_MECHANISM | INFINISPAN_REMOTE_SASL_MECHANISM | Default to PLAIN | PLAIN |

Just make sure that your Kogito Service can read these properties at runtime. These variable names are the same as those used by the Infinispan clients for Quarkus and Spring Boot.

On Quarkus versions below 1.1.0 (Kogito 0.6.0), make sure that your application.properties file lists the properties as in the example below:

quarkus.infinispan-client.server-list=
quarkus.infinispan-client.auth-username=
quarkus.infinispan-client.auth-password=
quarkus.infinispan-client.sasl-mechanism=

These properties are replaced by the environment variables at runtime.

You can control whether Infinispan is installed by using the --enable-persistence flag in the Kogito CLI or by editing spec.enablePersistence in the KogitoApp custom resource (see the example after the list below):

  • true - Infinispan is installed in the namespace and the connection properties environment variables are injected into the service
  • false - Infinispan is not installed. Use this option only if you don't need persistence or intend to deploy your own persistence mechanism and you know how to configure your service to access it
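For example, in the CR (a sketch that combines the spec.enablePersistence field above with the build examples from earlier sections):

apiVersion: app.kiegroup.org/v1alpha1
kind: KogitoApp
metadata:
  name: process-quarkus-example
spec:
  enablePersistence: true
  build:
    gitSource:
      uri: 'https://github.com/kiegroup/kogito-examples'
      contextDir: process-quarkus-example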

Infinispan for Data Index Service

For the Data Index Service, if you do not provide a service URL to connect to Infinispan, a new server is deployed via KogitoInfra.

A random password for the developer user is created and injected into the Data Index automatically. You do not need to do anything for both services to work together.

Kafka integration

Like Infinispan, Kogito Operator can deploy a Kafka cluster for your Kogito services via KogitoInfra custom resource.

To deploy a Kafka cluster with Zookeeper to support sending and receiving messages within a process, Kogito Operator relies on the Strimzi Operator.

You can freely edit the Kafka instance deployed by the operator to fulfill any requirement that you need. The Kafka instance is not managed by Kogito; instead it's managed by Strimzi. That's why the Kogito Operator depends on the Strimzi Operator, which is installed once you install the Kogito Operator using OLM.

Note: Sometimes the OperatorHub installs the AMQ Streams operator instead of Strimzi when installing the Kogito Operator. If this happens, please uninstall AMQ Streams and install Strimzi manually, since they are not compatible.

Kafka for Kogito Services

To enable Kafka installation during deployment of your service, use the following Kogito CLI command:

$ kogito deploy-service process-quarkus-example https://github.com/kiegroup/kogito-examples --context-dir=process-quarkus-example --enable-events

Or using the custom resource (CR) yaml file:

apiVersion: app.kiegroup.org/v1alpha1
kind: KogitoApp
metadata:
  name: process-quarkus-example
spec:
  enableEvents: true
  build:
    envs:
    - name: MAVEN_ARGS_APPEND
      value: -Pevents
    gitSource:
      uri: https://github.com/mswiderski/kogito-quickstarts
      contextDir: process-quarkus-example

The flag --enable-events in the CLI and the attribute spec.enableEvents: true in the CR tell the operator to deploy a Kafka cluster in the namespace if no Kafka cluster owned by the Kogito Operator is found.

A variable named KAFKA_BOOTSTRAP_SERVERS is injected into the service container. For Quarkus runtimes, this works out of the box when using Kafka Client version 1.x or greater. For Spring Boot, you might need to rely on property substitution in application.properties, for example:

spring.kafka.bootstrap.servers=${KAFKA_BOOTSTRAP_SERVERS}

Also, if the container has any environment variables with the suffix _BOOTSTRAP_SERVERS, they are also set to the value of the KAFKA_BOOTSTRAP_SERVERS variable. For example, by running:

$ kogito deploy-service process-quarkus-example https://github.com/kiegroup/kogito-examples --context-dir=process-quarkus-example --enable-events \
--build-env MAVEN_ARGS_APPEND="-Pevents" \
-e MP_MESSAGING_INCOMING_TRAVELLERS_BOOTSTRAP_SERVERS -e MP_MESSAGING_OUTGOING_PROCESSEDTRAVELLERS_BOOTSTRAP_SERVERS

The variables MP_MESSAGING_INCOMING_TRAVELLERS_BOOTSTRAP_SERVERS and MP_MESSAGING_OUTGOING_PROCESSEDTRAVELLERS_BOOTSTRAP_SERVERS will have the deployed Kafka service URL injected into them.

Please note that for services using a Quarkus version below 1.1.0 (Kogito Runtimes 0.6.0), you must add these Kafka properties to application.properties; otherwise they won't be replaced at runtime by the environment variables injected by the operator.

Kafka For Data Index

If you do not provide a service URL to connect to Kafka or the name of a Kafka instance manually deployed via Strimzi, a new Kafka instance will be deployed with KogitoInfra via Strimzi.

The information required to connect to Kafka is automatically set on the Data Index service by the operator.

Kogito Operator development

Before you begin fixing issues or adding new features to the Kogito Operator, see Contributing to the Kogito Operator and Kogito Operator architecture.

Requirements

Building the Kogito Operator

To build the Kogito Operator, use the following command:

$ make

The output of this command is a ready-to-use Kogito Operator image that you can deploy in any namespace.

Deploying to OpenShift 4.x for development purposes

To install the Kogito Operator on OpenShift 4.x for end-to-end (E2E) testing, ensure that you have access to a quay.io account to create an application repository.

Follow the steps below:

  1. Run make prepare-olm version=0.11.0. Bear in mind that if there are different versions in the deploy/olm-catalog/kogito-operator/kogito-operator.package.yaml file, every CSV must be included in the output folder. At this time, the script does not copy previous CSV versions to the output folder, so they must be copied manually.

  2. Grab Quay credentials with:

$ export QUAY_USERNAME=youruser
$ export QUAY_PASSWORD=yourpass

$ AUTH_TOKEN=$(curl -sH "Content-Type: application/json" -XPOST https://quay.io/cnr/api/v1/users/login -d '
{
    "user": {
        "username": "'"${QUAY_USERNAME}"'",
        "password": "'"${QUAY_PASSWORD}"'"
    }
}' | jq -r '.token')
  3. Set courier variables:
$ export OPERATOR_DIR=build/_output/operatorhub/
$ export QUAY_NAMESPACE=kiegroup # should be different in your environment
$ export PACKAGE_NAME=kogito-operator
$ export PACKAGE_VERSION=0.11.0
$ export TOKEN=$AUTH_TOKEN

If you push to another quay repository, replace QUAY_NAMESPACE with your user name or the other namespace. The push command does not overwrite an existing repository, so you must delete the bundle before you can build and upload a new version. After you upload the bundle, create an Operator Source to load your operator bundle in OpenShift.

  4. Run operator-courier to publish the operator application to Quay:
operator-courier push "$OPERATOR_DIR" "$QUAY_NAMESPACE" "$PACKAGE_NAME" "$PACKAGE_VERSION" "$TOKEN"
  5. Check that the application was pushed successfully to Quay.io. The OpenShift cluster needs access to the created application. Ensure that the application is public or that you have configured the private repository credentials in the cluster. To make the application public, go to your quay.io account, and in the Applications tab look for the kogito-operator application. Under the settings section, click make public.

  6. Publish the operator source to your OpenShift cluster:

$ oc create -f deploy/olm-catalog/kogito-operator/kogito-operator-operatorsource.yaml

Replace registryNamespace in the kogito-operator-operatorsource.yaml file with your quay namespace. The name, display name, and publisher of the Operator are the only other attributes that you can modify.

After several minutes, the Operator appears under Catalog -> OperatorHub in the OpenShift Web Console. To find the Operator, filter the provider type by Custom.

To verify the operator status, run the following command:

$ oc describe operatorsource.operators.coreos.com/kogito-operator -n openshift-marketplace

Running BDD Tests

REQUIREMENTS:

  • You need to be authenticated to the cluster before running the tests.
  • Native tests need a node with at least 4 GiB of memory available (build resource request).

If you have an OpenShift cluster and admin privileges, you can run BDD tests with the following command:

$ make run-tests [key=value]*

You can set the following optional keys (a combined example follows the list):

  • feature is a specific feature you want to run.
    If you define a relative path, it must be relative to the "test" folder, as the run happens there. The default is all enabled features from the 'test/features' folder.
    Example: feature=features/operator/deploy_quarkus_service.feature

  • tags to run only specific scenarios. It is using tags filtering.
    Scenarios with '@disabled' tag are always ignored.
    Expression can be:

    • "@wip": run all scenarios with wip tag
    • "~@wip": exclude all scenarios with wip tag
    • "@wip && ~@new": run wip scenarios, but exclude new
    • "@wip,@undone": run wip or undone scenarios

    Complete list of supported tags and descriptions can be found in List of test tags

  • concurrent is the number of concurrent tests to be run.
    Default is 1.

  • timeout sets the timeout in minutes for the overall run.
    Default is 240 minutes.

  • debug to be set to true to activate debug mode.
    Default is false.

  • load_factor sets the tests load factor. Useful for the tests to take into account that the cluster can be overloaded, for example for the calculation of timeouts.
    Default is 1.

  • local to be set to true if running tests locally.
    Default is false.

  • ci to be set if running tests with CI. Give CI name.

  • cr_deployment_only to be set if you don't have a CLI built. Default will deploy applications via the CLI.

  • load_default_config sets to true if you want to directly use the default test config (from test/.default_config)

  • operator_image is the Operator image full name.
    Default: operator_image=quay.io/kiegroup/kogito-cloud-operator.
  • operator_tag is the Operator image tag.
    Default is the current version.
  • deploy_uri set operator deploy folder.
    Default is ./deploy.
  • cli_path set the built CLI path.
    Default is ./build/_output/bin/kogito.
  • services_image_version sets the services (jobs-service, data-index, ...) image version.
  • services_image_namespace sets the services (jobs-service, data-index, ...) image namespace.
  • services_image_registry sets the services (jobs-service, data-index, ...) image registry.
  • data_index_image_tag sets the Kogito Data Index image tag ('services_image_version' is ignored)
  • jobs_service_image_tag sets the Kogito Jobs Service image tag ('services_image_version' is ignored)
  • management_console_image_tag sets the Kogito Management Console image tag ('services_image_version' is ignored)
  • maven_mirror is the Maven mirror URL.
    This is helpful when you need to speed up the build time by referring to a closer Maven repository.
  • build_image_registry sets the build image registry.
  • build_image_namespace sets the build image namespace.
  • build_image_name_suffix sets the build image name suffix to append to usual image names.
  • build_image_version sets the build image version
  • build_s2i_image_tag sets the build S2I image full tag.
  • build_runtime_image_tag sets the build Runtime image full tag.
  • show_scenarios sets to true to display scenarios which will be executed.
    Default is false.
  • show_steps sets to true to display scenarios and their steps which will be executed.
    Default is false.
  • dry_run sets to true to execute a dry run of the tests, disable crds updates and display the scenarios which will be executed.
    Default is false.
  • keep_namespace sets to true to not delete namespace(s) after a scenario run (WARNING: can consume a lot of resources ...).
    Default is false.
  • disabled_crds_update sets to true to disable the update of CRDs.
    Default is false.
  • namespace_name to specify name of the namespace which will be used for scenario execution (intended for development purposes).
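For example, to run a single feature with two concurrent tests and a two-hour timeout (values illustrative):

$ make run-tests feature=features/operator/deploy_quarkus_service.feature concurrent=2 timeout=120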

Logs will be shown on the Terminal.

To save the test output in a local file for future reference, run the following command:

make run-tests 2>&1 | tee log.out

Running BDD tests with current branch

$ make
$ docker tag quay.io/kiegroup/kogito-cloud-operator:0.11.0 quay.io/{USERNAME}/kogito-cloud-operator:0.11.0 
$ docker push quay.io/{USERNAME}/kogito-cloud-operator:0.11.0
$ make run-tests operator_image=quay.io/{USERNAME}/kogito-cloud-operator

NOTE: Replace {USERNAME} with the username/group you want to push to. Docker needs to be logged in to quay.io and be able to push to your username/group.

Running BDD tests with custom Kogito Build images' version

$ make run-tests build_image_version=<kogito_version> 

Running smoke tests

The BDD tests provide some smoke tests for quick feedback on basic functionality:

$ make run-smoke-tests [key=value]*

It runs only the tests tagged with @smoke. All options from the BDD tests also apply here.

Running performance tests

The BDD tests also provide performance tests. These tests are ignored unless you specifically provide the @performance tag or run:

$ make run-performance-tests [key=value]*

It runs only the tests tagged with @performance. All options from the BDD tests also apply here.

NOTE: Performance tests should be run without concurrency.

List of test tags

Tag name Tag meaning
@smoke Smoke tests verifying basic functionality
@performance Performance tests
@olm OLM integration tests
@travelagency Travel agency tests
@disabled Disabled tests, usually with comment describing reasons
@cli Tests to be executed only using Kogito CLI
@springboot SpringBoot tests
@quarkus Quarkus tests
@dataindex Tests including DataIndex
@jobsservice Tests including Jobs service
@managementconsole Tests including Management console
@infra Tests checking KogitoInfra functionality
@binary Tests using Kogito applications built locally and uploaded to OCP as a binary file
@native Tests using native build
@persistence Tests verifying persistence capabilities
@events Tests verifying eventing capabilities
@discovery Tests checking service discovery functionality
@usertasks Tests interacting with user tasks to check authentication/authorization
@resources Tests checking resource requests and limits
@infinispan Tests using the infinispan operator
@kafka Tests using the kafka operator

Running the Kogito Operator locally

To run the Kogito Operator locally, change the log level at runtime with the DEBUG environment variable, as shown in the following example:

$ make mod
$ make clean
$ DEBUG=true operator-sdk run --local --namespace=<namespace>

You can use the following command to vet, format, lint, and test your code:

$ make test

Contributing to the Kogito Operator

For information about submitting bug fixes or proposed new features for the Kogito Operator, see Contributing to the Kogito Operator.
