1.8.0 documentation and unit test #73

Merged
merged 5 commits into from
Dec 10, 2024
61 changes: 41 additions & 20 deletions README.md
What Process-Automator does not do:

* Execute service tasks in a unit scenario
* Catch a BPMN message in the flow: Process-Automator pilots a real system

## Requirement

A scenario can be executed on a Camunda 7 or a Camunda 8 server.

## Different usages


### Unit test and regression, unit performance test (unit-scenario)

The unit scenario describes one process instance execution. Creation and user tasks are described.
Visit [Load Test Scenario](doc/loadtestscenario/README.md) and the Load test Tutorial.

## Scenario

Process-Execution-Automator executes a scenario.

A scenario defines:
* a list of robots. A robot can:
  * create process instances,
  * execute service tasks,
  * execute user tasks
* some objectives:
  * how many process instances must be created
  * how many service tasks must be executed
  * the time to execute a section of the process
* a warm-up section
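As an illustrative sketch only (the field names below are assumptions, not the exact schema; see the [Scenario reference](doc/scenarioreference/README.md) for the real format), a load-test scenario along these lines could look like:

```json
{
  "scenarioName": "C8CrawlUrl-load",
  "processId": "C8CrawlUrl",
  "robots": [
    { "type": "STARTEVENT", "numberOfExecutions": 100 },
    { "type": "SERVICETASK", "taskId": "crawl-url" }
  ],
  "objectives": [
    { "type": "CREATEDPROCESSINSTANCES", "value": 100 }
  ],
  "warmingUp": { "duration": "PT30S" }
}
```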

The warm-up section is needed because of the workers' "backoff strategy": workers have to be woken up before the measurement starts.

All robots can be registered in the same pod:
![C8CrawlUrl.png](doc/scenarioreference/C8CrawlUrl.png)

Alternatively, multiple pods can be defined. Each pod runs process-execution-automator with the same scenario, but limits which robots it executes:
![C8CrawlUrl-multiple-pods.png](doc/scenarioreference/C8CrawlUrl-multiple-pods.png)


For unit testing, the execution is different: the pod is started, but it does not immediately run the scenario.



This section references all the information to build a scenario.
Visit [Scenario reference](doc/scenarioreference/README.md)

automator.servers:
Rebuild the image via
````
mvn clean install
mvn spring-boot:build-image
````

# Push the docker image
The Docker image is built using the Dockerfile present at the root level.


Push the image to
```
ghcr.io/camunda-community-hub/process-execution-automator:
```

## Detail

Run command
````
mvn clean install
````
Now, create a docker image
````
docker build -t pierre-yves-monnet/process-execution-automator:1.8.1 .
````


Push the image to the Camunda hub (you must be logged in to the Docker registry first)

````
docker tag pierre-yves-monnet/process-execution-automator:1.8.0 ghcr.io/camunda-community-hub/process-execution-automator:1.8.0
docker push ghcr.io/camunda-community-hub/process-execution-automator:1.8.0
````


Temporary:
````
docker build -t pierre-yves-monnet/process-execution-automator:1.8.3 .
docker tag pierre-yves-monnet/process-execution-automator:1.8.3 pycamunda/camunda-hub:process-execution-automator-1.8.3
docker push pycamunda/camunda-hub:process-execution-automator-1.8.3
````



Tag as the latest:
````
docker tag pierre-yves-monnet/process-execution-automator:1.8.0 ghcr.io/camunda-community-hub/process-execution-automator:latest
docker push ghcr.io/camunda-community-hub/process-execution-automator:latest
````

Check on
https://github.com/camunda-community-hub/process-execution-automator/pkgs/container/process-execution-automator

570 changes: 570 additions & 0 deletions doc/scenarioreference/C8CrawlUrl.bpmn


9 changes: 8 additions & 1 deletion doc/scenarioreference/README.md
the variable `loopcrawl` will be a list of 500 random strings.

**generateuniqueid(<Prefix>)**
Generate a unique sequential number.
The prefix is used to allow multiple counters.
Example:
````
"tidblue": "generateuniqueid(blue)"
"tidred": "generateuniqueid(red)"
````
Variables `tidblue` and `tidred` get a unique id, each following a different counter.

**now(LOCALDATETIME|DATE|ZONEDATETIME|LOCALDATE)**
Generates a String containing the current date and time, in the requested format.

**stringToDate(LOCALDATETIME|DATE|ZONEDATETIME|LOCALDATE, dateSt)**
Transforms a String into a date object (LocalDateTime, Date, ZonedDateTime or LocalDate).
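For illustration (the variable names are arbitrary, and the exact date-string format expected by `stringToDate` is an assumption), these functions can be used in scenario variables the same way as the `generateuniqueid` example:

````
"creationDate": "now(LOCALDATETIME)",
"dueDate": "stringToDate(LOCALDATE, 2024-12-10)"
````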


## Verification

Expand Down
89 changes: 62 additions & 27 deletions doc/unittestscenario/README.md
There are multiple use cases:

### Verification (path and performance)

![ScoreAcceptance.png](resources/ScoreAcceptance.png)

In a CD/CI pipeline, you want to verify that a process keeps the same behavior and the same performance over time. Running a scenario every day (or every hour), or replaying it via an API call, is useful to verify there is no difference. If the score is 200, do we still move the process instance to Send Acceptation?

### Coverage report

Execute multiple scenarios to make sure the whole process is covered correctly. An "Execution
round" is a set of scenarios executed at the same time. At the end of the execution, a coverage test
can be performed. A CD/CI verification may check the scenario execution, the target time, and
the coverage.

### Advance process instances to a step for development

During the development, you debug the task "Send rejection". To test it in this situation, you
must have a process instance in the process and pass all the user tasks. Each test takes time: when you
deploy a new process or want a new process instance, you need to execute the different user
tasks again. Using Automator with the correct scenario solves this issue. Deploy a new process, but instead
of starting from the beginning of a new process instance, advance it via Automator.
In the unit scenario, you should place some events (for example, the end event):

This verification requires giving access to Operate.

The scenario contains:

* The name, the process ID
* A list of flows to execute, under the attribute `executions`:
  * a STARTEVENT, to start one process instance
  * the list of all SERVICETASKs
* A list of verifications, under the attribute `verifications`
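A minimal sketch of such a file (the attribute names `executions` and `verifications` come from the list above; all other names and values are assumptions, so check [ScoreAcceptanceScn.json](resources/ScoreAcceptanceScn.json) for the real structure):

```json
{
  "name": "ScoreAcceptanceScn",
  "processId": "ScoreAcceptance",
  "executions": [
    { "type": "STARTEVENT", "variables": { "score": 200 } },
    { "type": "SERVICETASK", "taskId": "getScore" }
  ],
  "verifications": [
    { "type": "ACTIVITY", "activityId": "SendAcceptation" }
  ]
}
```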

## Scenario definition

## Generate from a real execution

Automator can generate a scenario from a real execution. The user creates a process instance and
executes it, running user tasks until the end of the process instance or up to a certain point. Via
the UI (or the API), the user provides the process instance. Automator queries the Camunda engine to
collect the history of the process and, for each user task, which variables were provided. A new
scenario is created from this example.

Note: this function is not yet available.

Check the scenario:

[ScoreAcceptanceScn.json](resources/ScoreAcceptanceScn.json)

## Execute

1. Deploy the scenario on the cluster, via the Modeler

2. Create the pod process-execution-automator

```
kubectl create -f doc/unittestscenario/resources/UnittestAutomator.yaml -n camunda
```

3. Port forward

```
kubectl port-forward svc/process-execution-automator 8381:8381 -n camunda
```

4. Upload the scenario

```
curl -X POST http://localhost:8381/api/files/upload \
  -H "Content-Type: multipart/form-data" \
  -F "file=@doc/unittestscenario/resources/ScoreAcceptanceScn.json"
```



5. Check the scenario is uploaded

```
curl -X GET "http://localhost:8381/api/content/list" -H "Content-Type: application/json"
```

6. Run the scenario, then get the result

```
curl -X POST "http://localhost:8381/api/unittest/run?name=ScoreAcceptanceScn&server=Camunda8Ruby&wait=true" -H "Content-Type: application/json"

curl -X GET "http://localhost:8381/api/unittest/get?id=1732767184446" -H "Content-Type: application/json"
```

Option: give the scenario via a configMap

a. Create the configMap
````
kubectl create configmap scoreacceptancescn --from-file=doc/unittestscenario/resources/scoreacceptancescn.json -n camunda
````

b. Check the configuration
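To make the file visible to the pod, the configMap can be mounted as a volume. A minimal sketch, where the container name, image tag, and mount path are assumptions to adapt to `doc/unittestscenario/resources/UnittestAutomator.yaml`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: process-execution-automator
  namespace: camunda
spec:
  containers:
    - name: process-execution-automator          # container name assumed
      image: ghcr.io/camunda-community-hub/process-execution-automator:1.8.0
      volumeMounts:
        - name: scenario
          mountPath: /scenarios                  # mount path assumed
  volumes:
    - name: scenario
      configMap:
        name: scoreacceptancescn                 # configMap created in step a
```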
39 changes: 39 additions & 0 deletions doc/unittestscenario/SendUnitTestCommand.rest
### POST Request
POST http://localhost:8381/api/unittest/run?name=ScoreAcceptanceScn&server=Camunda8Ruby&wait=true
Content-Type: application/json

{
}


### GET Request
GET http://localhost:8381/api/unittest/get?id=1732767184446
Content-Type: application/json

{
}

### GET LIST
GET http://localhost:8381/api/unittest/list
Content-Type: application/json

{
}

### Content Manager
GET http://localhost:8381/api/content/list
Content-Type: application/json

{
}

### Upload file
POST http://localhost:8381/api/content/add
Content-Type: multipart/form-data; boundary=boundary

--boundary
Content-Disposition: form-data; name="FileToUpload"; filename="ScoreAcceptanceScn.json"

< ./resources/ScoreAcceptanceScn.json

--boundary--