Issue-596: Holistic review of non-quarkus cloud chapter #625

Merged · 8 commits · May 23, 2024
2 changes: 1 addition & 1 deletion serverlessworkflow/modules/ROOT/nav.adoc
@@ -84,7 +84,7 @@
*** xref:cloud/operator/using-persistence.adoc[Using Persistence]
*** xref:cloud/operator/configuring-knative-eventing-resources.adoc[Knative Eventing]
*** xref:cloud/operator/known-issues.adoc[Roadmap and Known Issues]
*** xref:cloud/operator/add-custom-ca-to-a-workflow-pod.adoc[Add Custom CA to Workflow Pod]
* Integrations
** xref:integrations/core-concepts.adoc[]
* Job Service
@@ -3,7 +3,7 @@
:keywords: kogito, sonataflow, workflow, serverless, operator, kubernetes, minikube, openshift, containers
:keytool-docs: https://docs.oracle.com/en/java/javase/21/docs/specs/man/keytool.html

{product_name} applications are containers running Java. If you need to add a CA (Certificate Authority) certificate for secure communication, this guide explains the necessary steps to set up the CA for your workflow application. The guide assumes you are familiar with containers and have basic knowledge of working with YAML files.

:toc:

Expand All @@ -19,11 +19,11 @@ The containerized application may not know the CA certificate in build time, so

Before proceeding, ensure you have the CA certificate file (in PEM format) that you want to add to the Java container. If you don't have it, you may need to obtain it from your system administrator or certificate provider.

For this guide, we are using the Kubernetes cluster root CA that is automatically mounted into every container at `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`.
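
If you want to confirm what that certificate contains, you can inspect it from inside a running pod. A minimal sketch, assuming `openssl` is available in the container image:

[source,shell]
----
# Print the subject and validity window of the cluster root CA
openssl x509 -in /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -noout -subject -dates
----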

=== Step 2: Prepare a trust store in an init-container

Add or amend these `volumes` and init-container snippets in your pod spec or `podTemplate` in a deployment:

[source,yaml]
----
@@ -51,8 +51,7 @@
The default keystore under `$JAVA_HOME` is part of the container image and is no
=== Step 3: Configure Java to load the new keystore

Here you can mount the new, modified `cacerts` into the default location where the JVM looks.
The `Main.java` example uses the standard HTTP client, so alternatively you could mount the `cacerts` to a different location and configure the Java runtime to load the new keystore with a `-Djavax.net.ssl.trustStore` system property.
Note that libraries like RESTEasy don't respect that flag, so you may need to set the trust store location programmatically.
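
As a hedged illustration of that alternative, the JVM could be launched as follows; the mount path, password, and application jar are placeholders, not values defined in this guide:

[source,shell]
----
# Load the trust store from a custom mount location instead of $JAVA_HOME/lib/security/cacerts
java -Djavax.net.ssl.trustStore=/mnt/truststore/cacerts \
     -Djavax.net.ssl.trustStorePassword=changeit \
     -jar /deployments/app.jar
----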

[source,yaml]
@@ -185,7 +184,7 @@

== Additional Resources

* link:{keytool-docs}[Keytool documentation]
* link:https://developers.redhat.com/blog/2017/11/22/dynamically-creating-java-keystores-openshift#end_to_end_springboot_demo[Dynamically Creating Java Keystores in OpenShift]


@@ -16,25 +16,30 @@
:docker_doc_arg_url: https://docs.docker.com/engine/reference/builder/#arg
:quarkus_extensions_url: https://quarkus.io/extensions/

This document describes how to build and deploy your workflow on a cluster using the link:{kogito_serverless_operator_url}[{operator_name}].

Every time you change the workflow definition, the system (re)builds a new immutable version of the workflow. If you're still in the development phase, see the xref:cloud/operator/developing-workflows.adoc[] guide.

[IMPORTANT]
====
The build system implemented by the {operator_name} is not suitable for complex production use cases. Consider using an external tool to build your application, such as Tekton or ArgoCD. The resulting image can then be deployed with the `SonataFlow` custom resource. More details are available in the xref:cloud/operator/customize-podspec.adoc#custom-image-default-container[Setting a custom image in the default container] section of the xref:cloud/operator/customize-podspec.adoc[] guide.
====

Follow the <<building-and-deploying-on-kubernetes, Kubernetes>> or <<building-openshift, OpenShift>> sections of this document based on the cluster you wish to build your workflows on.

.Prerequisites
* A Workflow definition.
* The {operator_name} installed. See the xref:cloud/operator/install-serverless-operator.adoc[] guide.

[[configure-workflow-build-system]]
== Configuring the build system

The operator can build workflows on Kubernetes or OpenShift. On Kubernetes, it uses link:{kaniko_url}[Kaniko], and on OpenShift, a link:{openshift_build_url}[standard BuildConfig].

[IMPORTANT]
====
The operator build system is not tailored for advanced production use cases and allows only a few customizations.
====

=== Using another Workflow base builder image

@@ -52,7 +57,7 @@
kubectl patch sonataflowplatform <name> --patch 'spec:\n build:\n config:
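
For orientation, a hedged sketch of what the complete patch might look like; the `baseImage` field and the image coordinates are assumptions for illustration, so verify them against the `SonataFlowPlatform` CRD of your installed version:

[source,shell]
----
# Replace the default workflow builder image on the platform (sketch; field name assumed)
kubectl patch sonataflowplatform <name> --patch 'spec:
  build:
    config:
      baseImage: <your-registry>/<your-builder-image>:<tag>'
----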
[#customize-base-build]
=== Customize the base build Dockerfile

The operator uses the `ConfigMap` named `sonataflow-operator-builder-config` in the operator's installation namespace ({operator_installation_namespace}) to configure and run the workflow build process.
You can change the `Dockerfile` entry in this `ConfigMap` to tailor the Dockerfile to your needs. Just be aware that this can break the build process.
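
A quick way to open that `ConfigMap` for editing is standard `kubectl`; the namespace placeholder below is an assumption, use your actual operator installation namespace:

[source,shell]
----
# Edit the builder configuration in place; changes apply to subsequent builds
kubectl edit configmap sonataflow-operator-builder-config -n <operator_installation_namespace>
----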

.Example of the sonataflow-operator-builder-config `ConfigMap`
@@ -87,6 +92,7 @@
metadata:
The excerpt above is just an example. The current version might differ slightly. Don't use this example in your installation.
====

[[changing-sfplatform-resource-requirements]]
=== Changing resource requirements

You can create or edit a `SonataFlowPlatform` in the workflow namespace specifying the link:{kubernetes_resource_management_url}[resource requirements] for the internal builder pods:
@@ -138,6 +144,7 @@
spec:
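
For orientation, a hedged sketch of such a `SonataFlowPlatform`; the `spec.build.template.resources` path follows the collapsed example above but is an assumption, so check it against your installed CRD:

[source,yaml]
----
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform
spec:
  build:
    template:
      resources:          # standard Kubernetes resource requirements for the builder pods
        requests:
          memory: "64Mi"
          cpu: "250m"
        limits:
          memory: "128Mi"
          cpu: "500m"
----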

These parameters only apply to new build instances.

[[passing-build-arguments-to-internal-workflow-builder]]
=== Passing arguments to the internal builder

You can pass build arguments (see link:{docker_doc_arg_url}[Dockerfile ARG]) to the `SonataFlowBuild` instance.
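
A hedged sketch of passing one such argument; the `spec.buildArgs` attribute name and the `QUARKUS_EXTENSIONS` value shown here are assumptions for illustration (see the table below for the arguments available in your version):

[source,yaml]
----
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowBuild
metadata:
  name: greeting
spec:
  buildArgs:
    - name: QUARKUS_EXTENSIONS          # extra Quarkus extensions added at build time
      value: io.quarkus:quarkus-jdbc-postgresql
----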
@@ -210,9 +217,10 @@
The table below lists the Dockerfile arguments available in the default {operato
|MAVEN_ARGS_APPEND | Arguments passed to the maven build when the workflow build is produced. | -Dkogito.persistence.type=jdbc -Dquarkus.datasource.db-kind=postgresql
|===

[[setting-env-variables-for-internal-workflow-builder]]
=== Setting environment variables in the internal builder

You can set environment variables in the `SonataFlowBuild` internal builder pod. This is useful when you want to influence only the build of the workflow.
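
Since the `envs` attribute is an array of Kubernetes `EnvVar` entries (as noted below), a minimal sketch with a placeholder variable might look like this:

[source,yaml]
----
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowBuild
metadata:
  name: greeting
spec:
  envs:
    - name: MY_BUILD_FLAG     # plain-value EnvVar visible only to the internal builder pod
      value: "true"
----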

[IMPORTANT]
====
@@ -275,7 +283,7 @@
Since the `envs` attribute is an array of link:{kubernetes_envvar_url}[Kubernete
On Minikube and Kubernetes, only plain values, `ConfigMap`, and `Secret` are supported due to a restriction of the build system provided by these platforms.
====

[[building-and-deploying-on-kubernetes]]
== Building on Kubernetes

[TIP]
@@ -9,7 +9,7 @@
// NOTE: this guide can be expanded in the future to include prod images, hence the file name
// please change the title section and rearrange the others once it's done

This document describes how to build a custom development image to use in {product_name}.

== The development mode image structure

@@ -95,7 +95,7 @@
The container exposes port 8080 by default. When running the container locally,

Next, we mount a local volume to the container's application path. Any local workflow definitions, specification files, or properties should be mounted to `src/main/resources`. Alternatively, you can also mount custom Java files to `src/main/java`.
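
Putting the port and the volume mount together, a hedged sketch of a local run; the image name and the project path inside the container are assumptions for illustration:

[source,shell]
----
# Serve the dev image on localhost:8080 with local workflow files mounted into the project
docker run --rm -p 8080:8080 \
  -v "$(pwd)/workflows:/home/kogito/serverless-workflow-project/src/main/resources" \
  quay.io/acme/sonataflow-dev:latest
----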

Finally, to use the newly generated image with the dev profile, follow the procedure in the xref:cloud/operator/developing-workflows.adoc#_using_another_workflow_base_image[Using another Workflow base image] guide.

== Additional resources

@@ -6,11 +6,17 @@

This document describes how you can configure your workflows to let the operator create the Knative Eventing resources on Kubernetes.

{operator_name} can analyze the event definitions from the `spec.flow` and create `SinkBinding`/`Trigger` resources based on the type of the event. The workflow service can then utilize them for event communication.

[NOTE]
====
Alternatively, you can follow our xref:use-cases/advanced-developer-use-cases/event-orchestration/consume-produce-events-with-knative-eventing.adoc#ref-example-sw-event-definition-knative[advanced guide], which introduces this feature using Java and Quarkus.
====

== Prerequisites
1. The {operator_name} installed. See the xref:cloud/operator/install-serverless-operator.adoc[] guide.
2. Knative is installed on the cluster and Knative Eventing is initiated with a `KnativeEventing` CR.
3. A broker named `default` is created, for example as shown below. Currently, all Triggers created by the {operator_name} read events from `default`.
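
A minimal sketch for the third prerequisite, creating the `default` broker with the `kn` CLI (the namespace placeholder is an assumption):

[source,shell]
----
kn broker create default -n <your_namespace>
----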

== Configuring the workflow

@@ -52,7 +58,7 @@
Knative resources are not watched by the operator, indicating they will not unde
== Additional resources

* https://knative.dev/docs/eventing/[Knative Eventing official site]
* xref:use-cases/advanced-developer-use-cases/event-orchestration/consume-produce-events-with-knative-eventing.adoc[Quarkus extension for Knative eventing]
* xref:job-services/core-concepts.adoc#knative-eventing-supporting-resources[Knative eventing for Job service]
* xref:data-index/data-index-core-concepts.adoc#_knative_eventing[Knative eventing for data index]

@@ -85,10 +85,10 @@
If you try to change any of them, the operator will override them with the defau

== Additional resources

* link:https://quarkus.io/guides/config-reference#profile-aware-files[Quarkus Configuration Reference Guide - Profile aware files]
* xref:core/configuration-properties.adoc[]
* xref:cloud/operator/developing-workflows.adoc[]
* xref:cloud/operator/build-and-deploy-workflows.adoc[]
* xref:cloud/operator/known-issues.adoc[]

include::../../../pages/_common-content/report-issue.adoc[]
@@ -161,7 +161,7 @@
In this scenario, the `.spec.resources` attribute is ignored since it's only use
xref:cloud/operator/known-issues.adoc[In the roadmap] you will find that we plan to consider the `.spec.resources` attribute when the image is specified in the default container.
====

It's advised that the SonataFlow `.spec.flow` definition and the workflow built within the image correspond to the same workflow. If these definitions don't match, you may experience management and configuration issues. The {operator_name} uses the `.spec.flow` attribute to configure the application, service discovery, and service binding with other deployments within the topology.
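
For illustration, a hedged sketch of a `SonataFlow` resource that pairs `.spec.flow` with an externally built image; the `spec.podTemplate.container.image` path follows the customize-podspec guide, and the image coordinates and flow are placeholders:

[source,yaml]
----
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
  name: greeting
  annotations:
    sonataflow.org/profile: prod
spec:
  podTemplate:
    container:
      image: quay.io/acme/greeting:1.0   # built externally from the same workflow definition as .spec.flow
  flow:
    start: Greet
    states:
      - name: Greet
        type: inject
        data:
          message: Hello
        end: true
----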

[IMPORTANT]
====
@@ -16,6 +16,11 @@
Workflows in the development profile are not tailored for production environment
{operator_name} is under active development with features yet to be implemented. Please see xref:cloud/operator/known-issues.adoc[].
====

.Prerequisites
* You have set up your environment according to the xref:getting-started/preparing-environment.adoc#proc-minimal-local-environment-setup[minimal environment setup] guide.
* You have the cluster instance up and running. See the xref:getting-started/preparing-environment.adoc#proc-starting-cluster-fo-local-development[starting the cluster for local development] guide.

[[proc-introduction-to-development-profile]]
== Introduction to the Development Profile

The development profile is the easiest way to start playing around with Workflows and the operator.
@@ -74,13 +79,13 @@
spec:

<2> The `flow` attribute holds the Workflow definition as described by the xref:core/cncf-serverless-workflow-specification-support.adoc[CNCF Serverless Workflow specification]. If you already have a workflow definition, you can use it there. Alternatively, you can use the xref:tooling/serverless-workflow-editor/swf-editor-overview.adoc[editors to create your workflow definition].

[[proc-deploying-new-workflow]]
== Deploying a New Workflow

.Prerequisites
* You have xref:cloud/operator/install-serverless-operator.adoc[installed the {operator_name}]
* You have a new {product_name} Kubernetes Workflow definition in a YAML file. You can use the Greeting example in the <<proc-introduction-to-development-profile,introduction to the development profile>> section.

Having a Kubernetes Workflow definition in a YAML file, you can deploy it in your cluster with the following command:

.Deploying a new SonataFlow Custom Resource in Kubernetes
[source,bash,subs="attributes+"]
@@ -134,7 +139,7 @@
and changing the Workflow definition inside the Custom Resource Spec section.

Alternatively, you can save the Custom Resource definition file and edit it with your desired editor and re-apply it.

For example, using VS Code, these are the commands needed:

[source,bash,subs="attributes+"]
----
@@ -146,22 +151,58 @@
kubectl apply -f workflow_devmode.yaml -n <your_namespace>
The operator ensures that the latest Workflow definition is running and ready.
This way, you can include the Workflow in your development scenario and start making requests to it.

[[proc-check-if-workflow-is-running]]
== Check if the Workflow is running

.Prerequisites
* You have deployed a workflow to your cluster following the example in the <<proc-deploying-new-workflow,deploying a new workflow>> section.

To check that the {product_name} Greeting workflow is up and running, you can perform a test HTTP call. First, get the service URL:

.Exposing the workflow
[tabs]
====
Minikube::
+
--
.Expose the workflow on Minikube
[source,shell]
----
# Input
minikube service greeting -n <your_namespace> --url

# Example output; use the URL as a base to access the workflow
http://127.0.0.1:57053

# Use the output above as the base URL in your environment.
# The greeting workflow is then accessible at http://127.0.0.1:57053/greeting
----
--
Kind::
+
--
.Expose the workflow on Kind
[source,shell]
----
# Find the service of your workflow
kubectl get service -n <your_namespace>

# Example output
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
greeting   ClusterIP   10.96.0.1    <none>        80/TCP    21h

# Now forward the port and keep the terminal window open
kubectl port-forward service/greeting 31852:80 -n <your_namespace>

# Your workflow is accessible at http://localhost:31852/greeting
----
--
====

[TIP]
====
* When running on Minikube, the service is already exposed for you via `NodePort`.
* On OpenShift, link:{openshift_route_url}[a Route is automatically created in devmode].
* If you're running on Kubernetes, you can link:{kubernetes_url}[expose your service using an Ingress].
====

You can now point your browser to the Swagger UI and start making requests with the REST interface.
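
Alternatively, a quick smoke test with `curl`; this sketch assumes the Kind port-forward above, and the request payload is an assumption based on the Greeting example:

[source,shell]
----
curl -s -X POST http://localhost:31852/greeting \
  -H 'Content-Type: application/json' \
  -d '{"name": "John", "language": "English"}'
----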
@@ -259,7 +300,7 @@
It can give you a clue about what might be happening. See xref:cloud/operator/wo
.Watch the workflow logs
[source,shell,subs="attributes+"]
----
kubectl logs deployment/<workflow-name> -f -n <your_namespace>
----
+
If you decide to open an issue or ask for help in {product_name} communication channels, this logging information is always useful for the person trying to help you.