The cmt quickstart demonstrates Container-Managed Transactions (CMT), showing how to use transactions managed by the container.
The cmt quickstart demonstrates how to use container-managed transactions (CMT), which are transactions managed by the container in WildFly Application Server. It is a fairly typical scenario of updating a database and sending a JMS message in the same transaction. A simple MDB is provided that prints out the message sent, but it is not a transactional MDB and is provided purely for debugging purposes.
Aspects touched upon in the code:
- XA transaction control using the container-managed transaction annotations
- XA access to the standard default datasource using the JPA API
- XA access to a JMS queue
Prior to EJB, getting the right incantation to ensure sound transactional operation of the business logic was a highly specialized skill. Although this still holds true to a great extent, EJB has provided a series of improvements to allow simplified transaction demarcation notation that is therefore easier to read and test.
With CMT, the EJB container sets the boundaries of a transaction. This differs from BMT (bean-managed transactions), where the developer is responsible for initiating and completing a transaction using the begin, commit, and rollback methods on a jakarta.transaction.UserTransaction.
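For contrast, the following is a minimal sketch of a hypothetical BMT bean; it is not part of this quickstart, and the class and method names are illustrative only.
// A hypothetical BMT bean (not part of this quickstart): the developer
// demarcates the transaction explicitly with UserTransaction.
import jakarta.annotation.Resource;
import jakarta.ejb.Stateless;
import jakarta.ejb.TransactionManagement;
import jakarta.ejb.TransactionManagementType;
import jakarta.transaction.UserTransaction;

@Stateless
@TransactionManagement(TransactionManagementType.BEAN)
public class BmtExampleEJB {

    @Resource
    private UserTransaction userTransaction;

    public void doWork() throws Exception {
        userTransaction.begin();
        try {
            // ... business logic: database updates, JMS sends, and so on ...
            userTransaction.commit();
        } catch (Exception e) {
            userTransaction.rollback();
            throw e;
        }
    }
}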
Take a look at org.jboss.as.quickstarts.cmt.ejb.CustomerManagerEJB. You can see that this stateless session bean has been marked up with the @jakarta.ejb.TransactionAttribute annotation.
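A stateless session bean annotated in this way might look roughly like the sketch below. This is only an illustration of the idea, not the quickstart's actual code; the Customer entity and the queue JNDI name are assumptions made for the example.
// A minimal CMT sketch: persist an entity and send a JMS message in the same
// container-managed transaction. Names are illustrative, not the quickstart's code.
import jakarta.annotation.Resource;
import jakarta.ejb.Stateless;
import jakarta.ejb.TransactionAttribute;
import jakarta.ejb.TransactionAttributeType;
import jakarta.inject.Inject;
import jakarta.jms.JMSContext;
import jakarta.jms.Queue;
import jakarta.persistence.EntityManager;
import jakarta.persistence.PersistenceContext;

@Stateless
public class CustomerManagerSketchEJB {

    @PersistenceContext
    private EntityManager em;

    @Inject
    private JMSContext jmsContext;

    // Assumed queue JNDI name, for illustration only.
    @Resource(lookup = "java:/queue/CMTQueue")
    private Queue queue;

    // The container begins a transaction (if none is active) before the method
    // runs and commits it, or rolls it back, when the method returns.
    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void createCustomer(String name) {
        Customer customer = new Customer(); // Customer is an assumed JPA entity
        customer.setName(name);
        em.persist(customer);               // XA access to the datasource via JPA

        // The JMS send enlists in the same XA transaction as the JPA update.
        jmsContext.createProducer().send(queue, "Created invoice for customer named: " + name);
    }
}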
The following options are available for this annotation.
- Required: As demonstrated in the quickstart. If a transaction does not already exist, this will initiate a transaction and complete it for you; otherwise, the business logic will be integrated into the existing transaction.
- RequiresNew: If there is already a transaction running, it will be suspended, the work will be performed within a new transaction that is completed when the method exits, and then the original transaction will be resumed.
- Mandatory: If there is no transaction running, calling a business method with this annotation results in an error.
- NotSupported: If there is a transaction running, it will be suspended, and no transaction will be initiated for this business method.
- Supports: This will run the method within a transaction if one exists. If there is no transaction running, the method will not be executed within the scope of a transaction.
- Never: If the client has a transaction running, does not suspend it, and calls a method annotated with Never, an EJB exception will be raised.
Note: This quickstart uses the H2 database included with WildFly Application Server 34. It is a lightweight, relational example datasource that is used for examples only. It is not robust or scalable, is not supported, and should NOT be used in a production environment.
The application this project produces is designed to be run on WildFly Application Server 34 or later.
All you need to build this project is Java SE 17.0 or later, and Maven 3.6.0 or later. See Configure Maven to Build and Deploy the Quickstarts to make sure you are configured correctly for testing the quickstarts.
In the following instructions, replace WILDFLY_HOME with the actual path to your WildFly installation. The installation path is described in detail here: Use of WILDFLY_HOME and JBOSS_HOME Variables.
When you see the replaceable variable QUICKSTART_HOME, replace it with the path to the root directory of all of the quickstarts.
- Open a terminal and navigate to the root of the WildFly directory.
- Start the WildFly server with the full profile by typing the following command.
$ WILDFLY_HOME/bin/standalone.sh -c standalone-full.xml
Note: For Windows, use the WILDFLY_HOME\bin\standalone.bat script.
- Make sure the WildFly server is started.
- Open a terminal and navigate to the root directory of this quickstart.
- Type the following command to build the quickstart.
$ mvn clean package
- Type the following command to deploy the quickstart.
$ mvn wildfly:deploy
This deploys the cmt/target/cmt.war to the running instance of the server.
You should see a message in the server log indicating that the archive deployed successfully.
The application will be running at the following URL: http://localhost:8080/cmt/
You are presented with a simple form for adding customers to a database.
After a customer is successfully added to the database, a message is produced containing the details of the customer. An example MDB dequeues this message and prints the following contents.
Received Message: Created invoice for customer named: Jack
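A message-driven bean along the following lines could produce that output. This is a minimal, non-transactional sketch rather than the quickstart's actual class, and the queue name is assumed.
// A hypothetical MDB that prints each text message it receives; for debugging only.
import jakarta.ejb.ActivationConfigProperty;
import jakarta.ejb.MessageDriven;
import jakarta.jms.JMSException;
import jakarta.jms.Message;
import jakarta.jms.MessageListener;
import jakarta.jms.TextMessage;

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationLookup", propertyValue = "queue/CMTQueue"),
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "jakarta.jms.Queue")
})
public class InvoiceLoggingMDBSketch implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                System.out.println("Received Message: " + ((TextMessage) message).getText());
            }
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }
}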
If an existing customer name is provided, no JMS message is sent. Instead of the above message, a duplicate warning is displayed.
The customer name should contain only letters and the '-' character; otherwise, an error is given and the transaction is rolled back. Even so, a LogMessage entity is still stored in the database. That is because the logCreateCustomer method in the LogMessageManagerEJB EJB is decorated with the @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW) annotation.
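In rough terms, such a logging bean could look like the sketch below: the container suspends the caller's transaction and runs the method in a new one, so the log entry is committed even if the outer transaction later rolls back. The LogMessage entity shown here is assumed for illustration and is not copied from the quickstart.
// A minimal REQUIRES_NEW sketch: the log entry is written in its own transaction.
import jakarta.ejb.Stateless;
import jakarta.ejb.TransactionAttribute;
import jakarta.ejb.TransactionAttributeType;
import jakarta.persistence.EntityManager;
import jakarta.persistence.PersistenceContext;

@Stateless
public class LogMessageManagerSketchEJB {

    @PersistenceContext
    private EntityManager em;

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void logCreateCustomer(String name) {
        LogMessage log = new LogMessage();  // LogMessage is an assumed JPA entity
        log.setMessage("Attempt to create customer: " + name);
        em.persist(log);
    }
}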
This quickstart includes integration tests, which are located under the src/test/ directory. The integration tests verify that the quickstart runs correctly when deployed on the server.
Follow these steps to run the integration tests.
- Make sure the WildFly server is started.
- Make sure the quickstart is deployed.
- Type the following command to run the verify goal with the integration-testing profile activated.
$ mvn verify -Pintegration-testing
Instead of using a standard WildFly server distribution, you can alternatively provision a WildFly server to deploy and run the quickstart. The functionality is provided by the WildFly Maven Plugin, and you may find its configuration in the quickstart pom.xml:
<profile>
    <id>provisioned-server</id>
    <activation>
        <activeByDefault>true</activeByDefault>
    </activation>
    <build>
        <plugins>
            <plugin>
                <groupId>org.wildfly.plugins</groupId>
                <artifactId>wildfly-maven-plugin</artifactId>
                <configuration>
                    <discover-provisioning-info>
                        <version>${version.server}</version>
                    </discover-provisioning-info>
                    <add-ons>...</add-ons>
                </configuration>
                <executions>
                    <execution>
                        <goals>
                            <goal>package</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            ...
        </plugins>
    </build>
</profile>
When built, the provisioned WildFly server can be found in the target/server directory, and its usage is similar to a standard server distribution, with the simplification that you never need to specify the server configuration to be started.
Follow these steps to run the quickstart using the provisioned server.
- Make sure the server is provisioned.
$ mvn clean package
- Start the WildFly provisioned server, using the WildFly Maven Plugin start goal.
$ mvn wildfly:start
- Type the following command to run the integration tests.
$ mvn verify -Pintegration-testing
- Shut down the WildFly provisioned server.
$ mvn wildfly:shutdown
On OpenShift, the S2I build with Apache Maven uses an openshift Maven profile to provision a WildFly server, then deploy and run the quickstart in the OpenShift environment.
The server provisioning functionality is provided by the WildFly Maven Plugin, and you may find its configuration in the quickstart pom.xml:
<profile>
    <id>openshift</id>
    <build>
        <plugins>
            <plugin>
                <groupId>org.wildfly.plugins</groupId>
                <artifactId>wildfly-maven-plugin</artifactId>
                <configuration>
                    <discover-provisioning-info>
                        <version>${version.server}</version>
                        <context>cloud</context>
                    </discover-provisioning-info>
                    <add-ons>...</add-ons>
                </configuration>
                <executions>
                    <execution>
                        <goals>
                            <goal>package</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            ...
        </plugins>
    </build>
</profile>
You may note that unlike the provisioned-server profile, it uses the cloud context, which enables a configuration tuned for the OpenShift environment.
The plugin uses WildFly Glow to discover the feature packs and layers required to run the application, and provisions a server containing those layers.
If you get an error or the server is missing some functionality which cannot be auto-discovered, you can download the WildFly Glow CLI and run the following command to see more information about what add-ons are available:
wildfly-glow show-add-ons
This section contains the basic instructions to build and deploy this quickstart to WildFly for OpenShift or WildFly for OpenShift Online using Helm Charts.
- You must be logged in to OpenShift and have an oc client to connect to OpenShift.
- Helm must be installed to deploy the backend on OpenShift.
Once you have installed Helm, you need to add the repository that provides Helm Charts for WildFly.
$ helm repo add wildfly https://docs.wildfly.org/wildfly-charts/
"wildfly" has been added to your repositories
$ helm search repo wildfly
NAME CHART VERSION APP VERSION DESCRIPTION
wildfly/wildfly ... ... Build and Deploy WildFly applications on OpenShift
wildfly/wildfly-common ... ... A library chart for WildFly-based applications
Log in to your OpenShift instance using the oc login command.
The backend will be built and deployed on OpenShift with a Helm Chart for WildFly.
Navigate to the root directory of this quickstart and run the following command:
$ helm install cmt -f charts/helm.yaml wildfly/wildfly --wait --timeout=10m0s
NAME: cmt
...
STATUS: deployed
REVISION: 1
This command will return once the application has successfully deployed. In case of a timeout, you can check the status of the application with the following command in another terminal:
oc get deployment cmt
The Helm Chart for this quickstart contains all the information to build an image from the source code using S2I on Java 17:
build:
  uri: https://github.com/wildfly/quickstart.git
  ref: main
  contextDir: cmt
deploy:
  replicas: 1
This will create a new deployment on OpenShift and deploy the application.
If you want to see all the configuration elements to customize your deployment you can use the following command:
$ helm show readme wildfly/wildfly
Get the URL of the route to the deployment.
$ oc get route cmt -o jsonpath="{.spec.host}"
Access the application in your web browser using the displayed URL.
The integration tests included with this quickstart, which verify that the quickstart runs correctly, may also be run with the quickstart running on OpenShift.
Note: The integration tests expect a deployed application, so make sure you have deployed the quickstart on OpenShift before you begin.
Run the integration tests using the following command to run the verify goal with the integration-testing profile activated and the proper URL:
$ mvn verify -Pintegration-testing -Dserver.host=https://$(oc get route cmt --template='{{ .spec.host }}')
Note: The tests use SSL to connect to the quickstart running on OpenShift, so you need the certificates to be trusted by the machine the tests are run from.
For Kubernetes, the build with Apache Maven uses an openshift Maven profile to provision a WildFly server suitable for running on Kubernetes.
The server provisioning functionality is provided by the WildFly Maven Plugin, and you may find its configuration in the quickstart pom.xml:
<profile>
    <id>openshift</id>
    <build>
        <plugins>
            <plugin>
                <groupId>org.wildfly.plugins</groupId>
                <artifactId>wildfly-maven-plugin</artifactId>
                <configuration>
                    <discover-provisioning-info>
                        <version>${version.server}</version>
                        <context>cloud</context>
                    </discover-provisioning-info>
                    <add-ons>...</add-ons>
                </configuration>
                <executions>
                    <execution>
                        <goals>
                            <goal>package</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            ...
        </plugins>
    </build>
</profile>
You may note that unlike the provisioned-server profile, it uses the cloud context, which enables a configuration tuned for a Kubernetes environment.
The plugin uses WildFly Glow to discover the feature packs and layers required to run the application, and provisions a server containing those layers.
If you get an error or the server is missing some functionality which cannot be auto-discovered, you can download the WildFly Glow CLI and run the following command to see more information about what add-ons are available:
wildfly-glow show-add-ons
This section contains the basic instructions to build and deploy this quickstart to Kubernetes using Helm Charts.
In this example we are using Minikube as our Kubernetes provider. See the Minikube Getting Started guide for how to install it. After installing it, we start it with 4GB of memory.
minikube start --memory='4gb'
The above command should work if you have Docker installed on your machine. If you are using Podman instead of Docker, you will also need to pass in --driver=podman, as covered in the Minikube documentation.
Once Minikube has started, we need to enable its registry since that is where we will push the image needed to deploy the quickstart, and where we will tell the Helm charts to download it from.
minikube addons enable registry
In order to be able to push images to the registry, we need to make it accessible from outside Kubernetes. How we do this depends on your operating system. All the examples below will expose it at localhost:5000.
# On Mac:
docker run --rm -it --network=host alpine ash -c "apk add socat && socat TCP-LISTEN:5000,reuseaddr,fork TCP:$(minikube ip):5000"
# On Linux:
kubectl port-forward --namespace kube-system service/registry 5000:80 &
# On Windows:
kubectl port-forward --namespace kube-system service/registry 5000:80
docker run --rm -it --network=host alpine ash -c "apk add socat && socat TCP-LISTEN:5000,reuseaddr,fork TCP:host.docker.internal:5000"
- Helm must be installed to deploy the backend on Kubernetes.
Once you have installed Helm, you need to add the repository that provides Helm Charts for WildFly.
$ helm repo add wildfly https://docs.wildfly.org/wildfly-charts/
"wildfly" has been added to your repositories
$ helm search repo wildfly
NAME CHART VERSION APP VERSION DESCRIPTION
wildfly/wildfly ... ... Build and Deploy WildFly applications on OpenShift
wildfly/wildfly-common ... ... A library chart for WildFly-based applications
The backend will be built and deployed on Kubernetes with a Helm Chart for WildFly.
Navigate to the root directory of this quickstart and run the following commands:
mvn -Popenshift package wildfly:image
This will use the openshift Maven profile we saw earlier to build the application, and create a Docker image containing the WildFly server with the application deployed. The name of the image will be cmt.
Next we need to tag the image and make it available to Kubernetes. You can push it to a registry like quay.io. In this case we tag it as localhost:5000/cmt:latest and push it to the internal registry in our Kubernetes instance:
# Tag the image
docker tag cmt localhost:5000/cmt:latest
# Push the image to the registry
docker push localhost:5000/cmt:latest
In the below call to helm install, which deploys our application to Kubernetes, we are passing in some extra arguments to tweak the Helm build:
- --set build.enabled=false - This turns off the s2i build for the Helm chart since Kubernetes, unlike OpenShift, does not have s2i. Instead, we are providing the image to use.
- --set deploy.route.enabled=false - This disables route creation normally performed by the Helm chart. On Kubernetes we will use port-forwards instead to access our application, since routes are an OpenShift-specific concept and thus not available on Kubernetes.
- --set image.name="localhost:5000/cmt" - This tells the Helm chart to use the image we built, tagged, and pushed to the Kubernetes internal registry above.
$ helm install cmt -f charts/helm.yaml wildfly/wildfly --wait --timeout=10m0s --set build.enabled=false --set deploy.route.enabled=false --set image.name="localhost:5000/cmt"
NAME: cmt
...
STATUS: deployed
REVISION: 1
This command will return once the application has successfully deployed. In case of a timeout, you can check the status of the application with the following command in another terminal:
kubectl get deployment cmt
The Helm Chart for this quickstart contains all the information to build an image from the source code using S2I on Java 17:
build:
  uri: https://github.com/wildfly/quickstart.git
  ref: main
  contextDir: cmt
deploy:
  replicas: 1
This will create a new deployment on Kubernetes and deploy the application.
If you want to see all the configuration elements to customize your deployment you can use the following command:
$ helm show readme wildfly/wildfly
To be able to connect to our application running in Kubernetes from outside, we need to set up a port-forward to the cmt service created for us by the Helm chart.
This service will run on port 8080, and we set up the port forward to also run on port 8080:
kubectl port-forward service/cmt 8080:8080
The server can now be accessed via http://localhost:8080 from outside Kubernetes. Note that the command to create the port-forward will not return, so it is easiest to run this in a separate terminal.
The integration tests included with this quickstart, which verify that the quickstart runs correctly, may also be run with the quickstart running on Kubernetes.
Note: The integration tests expect a deployed application, so make sure you have deployed the quickstart on Kubernetes before you begin.
Run the integration tests using the following command to run the verify goal with the integration-testing profile activated and the proper URL:
$ mvn verify -Pintegration-testing -Dserver.host=http://localhost:8080