NO-ISSUE: Fix recurring minor typos #607

Merged 1 commit on Mar 26, 2024
10 changes: 5 additions & 5 deletions CONTRIBUTING.md
@@ -6,15 +6,15 @@ We accept all kinds of contributions:
2. Opening [an issue](https://github.com/apache/incubator-kie-kogito-docs/issues/new) by describing what problem you see that we need to fix
3. Opening [a PR](https://github.com/apache/incubator-kie-kogito-docs/compare) if you see a typo, broken link, or any other minor changes.

- > To include a new guide or documentation content, please **open an issue first** so we can discuss in more detail what needs to be done. We use [Issues](https://github.com/apache/incubator-kie-kogito-docs/issues) to track our tasks. Please include a good title and thorought description.
+ > To include a new guide or documentation content, please **open an issue first** so we can discuss in more detail what needs to be done. We use [Issues](https://github.com/apache/incubator-kie-kogito-docs/issues) to track our tasks. Please include a good title and thorough description.

## Including a new guide

- 1. Open a [an issue](https://github.com/apache/incubator-kie-kogito-docs/issues/new) provide a description and link any pull-requests realted to the guide.
+ 1. Open [an issue](https://github.com/apache/incubator-kie-kogito-docs/issues/new) provide a description and link any pull-requests related to the guide.
2. Write the guide.
3. Add a link to the guide in [serverlessworkflow/modules/ROOT/nav.adoc](serverlessworkflow/modules/ROOT/nav.adoc)
4. Add a card for the guide in [serverlessworkflow/modules/ROOT/pages/index.adoc](serverlessworkflow/modules/ROOT/pages/index.adoc)
- 5. Submit a [a PR](https://github.com/apache/incubator-kie-kogito-docs/compare)
+ 5. Submit [a PR](https://github.com/apache/incubator-kie-kogito-docs/compare)

## Opening an Issue

@@ -84,7 +84,7 @@ Use active voice.
:x: _Passive:_ The Limits window is used to specify the minimum and maximum values.
:white_check_mark: _Active:_ In the Limits window, specify the minimum and maximum values.

- Use second person (you). Avoid first person (I, we, us). Be gender neutral. Use the appropriate tone. Write for a global audience.
+ Use second person (you). Avoid first person (I, we, us). Be gender-neutral. Use the appropriate tone. Write for a global audience.

:x: We can add a model to the project that we created in the previous step.
:white_check_mark: You can add a model to the project that you created in the previous step.
@@ -203,7 +203,7 @@ Content
====
```

- Similarly you can have other admonitions:
+ Similarly, you can have other admonitions:

- `TIP`
- `IMPORTANT`
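
For instance, a `TIP` admonition follows the same shape as the `NOTE` example above (a sketch, not taken from the changed file):

```
[TIP]
====
Content
====
```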
4 changes: 2 additions & 2 deletions serverlessworkflow/modules/ROOT/pages/cloud/index.adoc
@@ -13,7 +13,7 @@ The cards below list all features included in the platform to deploy workflow ap

[NOTE]
====
- Eventually, these two options will converge, and the {operator_name} will also be able to handle full Quarkus projects. So if you opt-in to use Quarkus now and manually deploy your workflows, bear in mind that it's on the project's roadmap to integrate the Quarkus experience with the Operator.
+ Eventually, these two options will converge, and the {operator_name} will also be able to handle full Quarkus projects. So if you opt in to use Quarkus now and manually deploy your workflows, bear in mind that it's on the project's roadmap to integrate the Quarkus experience with the Operator.
====

[.card-section]
@@ -128,7 +128,7 @@ Learn about the known issues and feature Roadmap of the {operator_name}
[.card-section]
== Kubernetes with Quarkus

- For Java developers, you can use Quarkus and a few add-ons to help you build and deploy the application in a Kubernetes cluster. {product_name} also generates basic Kubernetes objects YAML files to help you getting started. The application should be managed by a Kubernetes administrator.
+ For Java developers, you can use Quarkus and a few add-ons to help you build and deploy the application in a Kubernetes cluster. {product_name} also generates basic Kubernetes objects YAML files to help you to get started. The application should be managed by a Kubernetes administrator.

[.card]
--
@@ -481,7 +481,7 @@ spec:
end: true
----

- Save a file in your local file system with this contents named `greetings-workflow.yaml` then run:
+ Save a file in your local file system with this content named `greetings-workflow.yaml` then run:

[source,bash,subs="attributes+"]
----
@@ -562,7 +562,7 @@ metadata:

After editing the resource, the operator will start a new build of the workflow. Once this is finished, the workflow will be notified and updated accordingly.

- If the build fails, but the workflow has a working deployment, the operator won't rollout a new deployment.
+ If the build fails, but the workflow has a working deployment, the operator won't roll out a new deployment.

Ideally you should use this feature if there's a problem with your workflow or the initial build revision.

@@ -100,7 +100,7 @@ CMD ["/home/kogito/launch/run-app-devmode.sh"] <8>
----

<1> The dev mode image as the base image
- <2> Change to super user to run privileged actions
+ <2> Change to superuser to run privileged actions
<3> Install additional packages
<4> Change back to the default user without admin privileges
<5> Add a new binary path to the `PATH`
@@ -10,7 +10,7 @@ This document describes how you can configure the workflows to let operator crea

== Prerequisite
1. Knative is installed on the cluster and Knative Eventing is initiated with a `KnativeEventing` CR.
- 2. A broker named `default` is created. Currently all Triggers created by the {operator_name} will read events from `default`
+ 2. A broker named `default` is created. Currently, all Triggers created by the {operator_name} will read events from `default`
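
For reference, prerequisite 2 amounts to having a Knative Broker such as the following in place; this is a sketch, and the namespace is an assumption (the `name` must be `default`):

```yaml
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default
  namespace: workflow-namespace
```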

== Configuring the workflow

@@ -45,7 +45,7 @@ sonataflow-platform-jobs-service-cdf85d969-sbwkj 1/1 Running 0
Keep in mind that this setup is not recommended for production environments, especially because the data does not persist when the pod restarts.

=== Using an existing PostgreSQL service
- For robust environments it is recommened to use an dedicated database service and configure Jobs Service to make use of it. Currently, the Jobs Service
+ For robust environments it is recommened to use a dedicated database service and configure Jobs Service to make use of it. Currently, the Jobs Service
only supports PostgreSQL database.
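
As an illustration only, a platform-level persistence block pointing the Jobs Service at an existing PostgreSQL service might look roughly like this sketch; the field layout follows the operator's persistence API as far as known, and the secret and service names are assumptions:

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform
spec:
  persistence:
    postgresql:
      secretRef:
        name: postgres-secrets        # assumed Secret holding the DB credentials
        userKey: POSTGRES_USER
        passwordKey: POSTGRES_PASSWORD
      serviceRef:
        name: postgres                # assumed PostgreSQL Service in the same namespace
        port: 5432
        databaseName: sonataflow
```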

Configuring Jobs Service to communicate with an existing PostgreSQL instance is supported in two ways. In both cases it requires providing the persistence
@@ -57,7 +57,7 @@ By default, the persistence specification defined in the `SonataFlow` workflow's
==== Using the persistence field defined in the `SonataFlowPlatform` CR
Using the persistence configuration in the `SonataFlowPlatform` CR located in the same namespace requires to have the `SonataFlow` CR persistence field configured
to have an empty `{}` value, signaling the Operator to derive the persistence from the active `SonataFlowPlatform`, when available. If no persistence is defined
- the operator will fallback to the ephemeral persistence previously described.
+ the operator will fall back to the ephemeral persistence previously described.

[source,yaml,subs="attributes+"]
---
@@ -11,7 +11,7 @@
:kubernetes_operator_uninstall_url: https://olm.operatorframework.io/docs/tasks/uninstall-operator/
:operatorhub_url: https://operatorhub.io/

- This guide describes how to install the {operator_name} in a Kubernetes or OpenShift cluster. The operator is in an xref:/cloud/operator/known-issues.adoc[early development stage] (community only) and has been tested on OpenShift {openshift_version_min}+, Kubernetes {kubernetes_version}+, and link:{minikube_url}[Minikube].
+ This guide describes how to install the {operator_name} in a Kubernetes or OpenShift cluster. The operator is in an xref:cloud/operator/known-issues.adoc[early development stage] (community only) and has been tested on OpenShift {openshift_version_min}+, Kubernetes {kubernetes_version}+, and link:{minikube_url}[Minikube].

.Prerequisites
* A Kubernetes or OpenShift cluster with admin privileges. Alternatively, you can use Minikube or KIND.
@@ -6,7 +6,7 @@
// links
:kogito_serverless_operator_url: https://github.com/apache/incubator-kie-kogito-serverless-operator/

- By default, workflows use an embedded version of xref:../../data-index/data-index-core-concepts.adoc[Data Index]. This document describes how to deploy supporting services, like Data Index, on a cluster using the link:{kogito_serverless_operator_url}[{operator_name}].
+ By default, workflows use an embedded version of xref:data-index/data-index-core-concepts.adoc[Data Index]. This document describes how to deploy supporting services, like Data Index, on a cluster using the link:{kogito_serverless_operator_url}[{operator_name}].

[IMPORTANT]
====
@@ -125,7 +125,7 @@ These cluster-wide services can be overridden in any namespace, by configuring t

== Additional resources

- * xref:../../data-index/data-index-service.adoc[]
+ * xref:data-index/data-index-service.adoc[]
* xref:cloud/operator/enabling-jobs-service.adoc[]
* xref:cloud/operator/known-issues.adoc[]

@@ -11,7 +11,7 @@ This document describes how to configure a SonataFlow instance to use persistenc

Kubernetes's pods are stateless by definition. In some scenarios, this can be a challenge for workloads that require maintaining the status of
the application regardless of the pod's lifecycle. In the case of {product_name}, the context of the workflow is lost when the pod restarts.
- If your workflow requires recovery from such scenarios, you must to make these additions to your workflow CR:
+ If your workflow requires recovery from such scenarios, you have to make these additions to your workflow CR:
Use the `persistence` field in the `SonataFlow` workflow spec to define the database service located in the same cluster.
There are 2 ways to accomplish this:

@@ -80,7 +80,7 @@ The following table lists the possible Conditions.
| Running
| False
| AttemptToRedeployFailed
- | If the Workflow Deployment is not available, the operator will try to rollout the Deployment three times before entering this stage. Check the message in this Condition and the Workflow Pod logs for more info
+ | If the Workflow Deployment is not available, the operator will try to roll out the Deployment three times before entering this stage. Check the message in this Condition and the Workflow Pod logs for more info

|===
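
To inspect these Conditions on a running workflow, a command along these lines should work (the resource short name and the workflow and namespace names are assumptions):

```bash
# Summarized status, including Conditions
kubectl describe sonataflow greeting -n my-namespace

# Raw Conditions array from the status subresource
kubectl get sonataflow greeting -n my-namespace -o jsonpath='{.status.conditions}'
```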

@@ -586,7 +586,7 @@ This particular endpoint expects as body a JSON object whose field `numbers` is

If `inputNumbers` contains `1`, `2`, and `3`, the output of the call will be `1*3+2*3+3*3=18.

- In case you want to specify headers in your HTTP request, you might do it by adding arguments starting with the `HEADER_` prefix. Therefore if you add `"HEADER_ce_id": "123"` to the previous argument set, you will be adding a header named `ce_id` with the value `123` to your request. A similar approach might be used to add query params to a GET request, in that case, you must add arguments starting with the `QUERY_` prefix. Note that you can also use {} notation for replacements of query parameters included directly in the `operation` string.
+ In case you want to specify headers in your HTTP request, you might do it by adding arguments starting with the `HEADER_` prefix. Therefore, if you add `"HEADER_ce_id": "123"` to the previous argument set, you will be adding a header named `ce_id` with the value `123` to your request. A similar approach might be used to add query params to a GET request, in that case, you must add arguments starting with the `QUERY_` prefix. Note that you can also use {} notation for replacements of query parameters included directly in the `operation` string.
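
As an invented illustration, a GET-style invocation using both prefixes could look like this (the function and argument names are made up):

```json
{
  "functionRef": {
    "refName": "getUser",
    "arguments": {
      "HEADER_ce_id": "123",
      "QUERY_limit": "10"
    }
  }
}
```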

For example, given the following function definition that performs a `get` request

@@ -639,7 +639,7 @@ It must contain a Java class that inherits from `WorkItemTypeHandler`. Its respo
+
The runtime project consists of a `WorkflowWorkItemHandler` implementation, which name must match with the one provided to `WorkItemNodeFactory` during the deployment phase, and a `WorkItemHandlerConfig` bean that registers that handler with that name.
+
- When a Serverless Workflow function is called, Kogito identifies the proper `WorkflowWorkItemHandler` instance to be used for that function type (using the handler name associated with that type by the deployment project) and then invokes the `internalExecute` method. The `Map` parameter contains the function arguments defined in the workflow, and the `WorkItem` parameter contains the metadata information added to the handler by the deployment project. Hence, the `executeWorkItem` implementation has an access to all the information needed to perform the computational logic intended for that custom type.
+ When a Serverless Workflow function is called, Kogito identifies the proper `WorkflowWorkItemHandler` instance to be used for that function type (using the handler name associated with that type by the deployment project) and then invokes the `internalExecute` method. The `Map` parameter contains the function arguments defined in the workflow, and the `WorkItem` parameter contains the metadata information added to the handler by the deployment project. Hence, the `executeWorkItem` implementation has access to all the information needed to perform the computational logic intended for that custom type.
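
A rough sketch of such a handler, based only on the description above (package locations, class names, and the metadata key are assumptions):

```java
import java.util.Map;

import org.kie.kogito.internal.process.runtime.KogitoWorkItem;      // package location assumed
import org.kie.kogito.serverless.workflow.WorkflowWorkItemHandler;  // package location assumed

public class RpcWorkItemHandler extends WorkflowWorkItemHandler {

    @Override
    protected Object internalExecute(KogitoWorkItem workItem, Map<String, Object> parameters) {
        // "parameters" carries the function arguments defined in the workflow;
        // "workItem" exposes the metadata added by the deployment project.
        String endpoint = (String) workItem.getParameter("endpoint"); // hypothetical metadata key
        return callService(endpoint, parameters);
    }

    private Object callService(String endpoint, Map<String, Object> args) {
        // hypothetical business logic for the custom function type
        return args;
    }
}
```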

=== Custom function type example

@@ -666,7 +666,7 @@ The `operation` starts with `rpc`, which is the custom type identifier, and cont

A Kogito addon that defines the `rpc` custom type must be developed for this function definition to be identified. It is consist of a link:{kogito_sw_examples_url}/serverless-workflow-custom-type/serverless-workflow-custom-rpc-deployment[deployment project] and a link:{kogito_sw_examples_url}/serverless-workflow-custom-type/serverless-workflow-custom-rpc[runtime project].

- The deployment project is responsible for extending the link:{kogito_sw_examples_url}/serverless-workflow-custom-type/serverless-workflow-custom-rpc-deployment/src/main/java/org/kie/kogito/examples/sw/services/RPCCustomTypeHandler.java[`WorkItemTypeHandler`] and setup the `WorkItemNodeFactory` as follows:
+ The deployment project is responsible for extending the link:{kogito_sw_examples_url}/serverless-workflow-custom-type/serverless-workflow-custom-rpc-deployment/src/main/java/org/kie/kogito/examples/sw/services/RPCCustomTypeHandler.java[`WorkItemTypeHandler`] and setup of the `WorkItemNodeFactory` as follows:

.Example of the RPC function Java implementation

@@ -25,7 +25,7 @@ In the previous definition, the `schema` property is a URI, which holds the path

== Output schema

- Serverless Workflow specification does not support JSON output schema until version 0.9. Therefore {product_name} is implementing it as a link:{spec_doc_url}#extensions[Serverless Workflow specification extension]. Output schema is applied after workflow execution to verify that the output model has the expected format. It is also useful for Swagger generation purposes.
+ Serverless Workflow specification does not support JSON output schema until version 0.9. Therefore, {product_name} is implementing it as a link:{spec_doc_url}#extensions[Serverless Workflow specification extension]. Output schema is applied after workflow execution to verify that the output model has the expected format. It is also useful for Swagger generation purposes.

Similar to Input schema, you must specify the URL to the JSON schema, using `outputSchema` as follows:

@@ -15,7 +15,7 @@ The workflow expressions in the link:{spec_doc_url}#workflow-expressions[Serverl

This document describes the usage of jq expressions in functions, switch state conditions, action function arguments, data filtering, and event publishing.

- JQ expression might be tricky to master, for non trivial cases, it is recommended to use helper tools like link:{jq_play}[JQ Play] to validate the expression before including it in the workflow file.
+ JQ expression might be tricky to master, for non-trivial cases, it is recommended to use helper tools like link:{jq_play}[JQ Play] to validate the expression before including it in the workflow file.
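
For example, an expression like the following invented one is easy to get subtly wrong, and pasting it into JQ Play with a sample payload catches mistakes early:

```
.orders | map(select(.total > 100) | .id)
```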

[[ref-example-jq-expression-function]]
== Example of jq expression in functions
@@ -1,7 +1,7 @@
= Workflow embedded execution in Java

This guide uses a standard Java virtual machine and a small set of Maven dependencies to execute a link:{spec_doc_url}[CNCF Serverless Workflow] definition. Therefore, it is assumed you are fluent both in Java and Maven.
- The workflow definition to be executed can be read from a `.json` or `.yaml` file or programmatically defined using the {product_name} fluent API.
+ The workflow definition to be executed can be read from a `.json` or `.yaml` file or programmatically defined using the {product_name} fluent API.
.Prerequisites
. Install https://openjdk.org/[OpenJDK] {java_min_version}
. Install https://maven.apache.org/index.html[Apache Maven] {maven_min_version}.
@@ -42,7 +42,7 @@ public class DefinitionFileExecutor {
<1> Reads the workflow file definition from the project root directory
<2> Creates a static workflow application object. It is done within the try block since the instance is `Closeable`. This is the reference that allow you to execute workflow definitions.
<3> Reads the Serverless Workflow Java SDK `Workflow` object from the file.
- <4> Execute the workflow, passing `Workflow` reference and no parameters (an empty Map). The result of the workflow execution: process instance id and workflow output model, can accessed using `result` variable.
+ <4> Execute the workflow, passing `Workflow` reference and no parameters (an empty Map). The result of the workflow execution: process instance id and workflow output model, can be accessed using `result` variable.
<5> Prints the workflow model in the configured standard output.
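
Pieced together from the callouts, the class presumably looks roughly like the following sketch (the import paths and the definition file name are assumptions):

```java
import java.io.FileReader;
import java.io.Reader;
import java.util.Collections;

import io.serverlessworkflow.api.Workflow;
import org.kie.kogito.serverless.workflow.executor.StaticWorkflowApplication;
import org.kie.kogito.serverless.workflow.models.JsonNodeModel;
import org.kie.kogito.serverless.workflow.utils.ServerlessWorkflowUtils;
import org.kie.kogito.serverless.workflow.utils.WorkflowFormat;

public class DefinitionFileExecutor {
    public static void main(String[] args) throws Exception {
        try (Reader reader = new FileReader("definition.json");                            // <1>
             StaticWorkflowApplication application = StaticWorkflowApplication.create()) { // <2>
            Workflow workflow =
                ServerlessWorkflowUtils.getWorkflow(reader, WorkflowFormat.JSON);          // <3>
            JsonNodeModel result = application.execute(workflow, Collections.emptyMap());  // <4>
            System.out.println(result.getWorkflowdata());                                  // <5>
        }
    }
}
```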

If you compile and execute this Java class, you will see the following log in your configured standard output:
@@ -6,7 +6,7 @@ If you are new, start with the minimal one.
[[proc-minimal-local-environment-setup]]
== Minimal local environment setup

- Recommended steps to setup your local development environment. By completing these steps you are able to
+ Recommended steps to set up your local development environment. By completing these steps you are able to
start the development on your local machine using our guides.

.Procedure
@@ -26,7 +26,7 @@ If you have used https://knative.dev/docs/install/quickstart-install/[Knative us
Please note, that if the knative quickstart procedure is not used, you need to install Knative Serving and Eventing manually. See <<proc-additional-options-for-local-environment>>.


- .To startup the selected cluster without quickstart, use the following command:
+ .To start up the selected cluster without quickstart, use the following command:
[tabs]
====
Minikube with Docker::
@@ -85,7 +85,7 @@ If you are interested in our Java and Quarkus development path, consider complet
.Procedure
. Install https://openjdk.org/[OpenJDK] {java_min_version} and configure `JAVA_HOME` appropriately by adding it to the `PATH`.
. Install https://maven.apache.org/index.html[Apache Maven] {maven_min_version}.
- . Install https://quarkus.io/guides/cli-tooling[Quarkus CLI] corresponding to the currently supported version by {product_name}. Currently it is {quarkus_version}.
+ . Install https://quarkus.io/guides/cli-tooling[Quarkus CLI] corresponding to the currently supported version by {product_name}. Currently, it is {quarkus_version}.

[[proc-additional-options-for-local-environment]]
== Additional options for local environment setup