The IBMStockTrader application rebuilt with Quarkus.
If you’re running on Fedora you may also require:
- buildah
- editorconfig maven plugin
Ensure the following operators are installed in the cluster:
- Kafka operator (e.g. AMQ Streams / Strimzi)
  NOTE: Only version 1.6.x is supported at present
- Keycloak or Red Hat SSO Operator
NOTE: In OpenShift the operators can be installed via the OperatorHub integration in the Administrator console.
NOTE: Make sure you install Keycloak into the keycloak namespace.
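If you prefer the CLI to the console, a minimal sketch for pre-creating the namespace and subscribing to the Keycloak operator is below. The package name, channel, and catalog source are assumptions — confirm them with oc get packagemanifests -n openshift-marketplace | grep -i keycloak before applying.
# Pre-create the namespace Keycloak will be installed into
oc new-project keycloak
# Hedged sketch: subscribe to the operator via OLM (channel/source are assumptions)
cat <<'EOF' | oc apply -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: keycloak-og
  namespace: keycloak
spec:
  targetNamespaces:
    - keycloak
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: keycloak-operator
  namespace: keycloak
spec:
  name: keycloak-operator
  channel: alpha
  source: community-operators
  sourceNamespace: openshift-marketplace
EOF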
git clone https://github.com/steven-ellis/quarkus-stocktrader
We will refer to the cloned project sources folder as $PROJECT_HOME.
cd $PROJECT_HOME
Make sure your oc client is logged into your target OpenShift cluster. Useful checks are:
oc get clusterversion
# or
oc whoami
oc whoami --show-server
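If you’re not logged in yet, a typical login looks like this (the token and API URL are placeholders for your cluster):
# Log in with a token copied from the OpenShift web console
oc login --token=<your-token> --server=https://api.<cluster-domain>:6443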
We now have a scripted deployment approach. This normally takes two steps:
- deployment
- status check
There is still a risk that parts of the deployment time out - so please check the log output.
Assumptions
- Required Operators have been deployed
- The Tradr application has been built and updated to match your current OpenShift instance, or you used the build_tradr option below
- The trader-orders service will use the version from quay.io/sellisnz
- You have pre-populated your IEX_API_KEY (a quick pre-flight check follows this list)
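As a quick pre-flight for the last assumption, something like this checks that the key file exists and is populated (the path matches the Stock Quote section below):
# Verify the IEX API key file is present and non-empty
if grep -q '^IEX_API_KEY=..*' "$PROJECT_HOME/k8s/stock-quote/base/api-keys.env" 2>/dev/null; then
  echo "IEX_API_KEY is set"
else
  echo "Populate $PROJECT_HOME/k8s/stock-quote/base/api-keys.env first (see the Stock Quote section)"
fi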
# Make sure you're logged into the correct cluster
oc whoami --show-server
# Then deploy the service and configure identities in Keycloak
./deploy.sh deploy
# To confirm the environment is up correctly
./deploy.sh status
The status step should return the URL required to access the application frontend, the details for monitoring the Kafka data sync via trader-orders, and the login details for Keycloak if you need to debug anything.
If this is a new OpenShift environment, build and re-deploy the Tradr WebUI with the correct configuration.
Note this assumes you’re already logged into quay.io.
./deploy.sh build_tradr
If you have any major issues you can force a cleanup and then re-deploy the environment:
./deploy.sh delete
oc get all -n daytrader
./deploy.sh deploy
If the OpenShift environment has been restarted, some of the pods can get into an unstable state. This can usually be resolved by running:
./deploy.sh reset
For more details on using Keycloak with OpenShift, see https://www.keycloak.org/getting-started/getting-started-openshift
oc apply -k k8s/keycloak
This might take several minutes to deploy - which you can monitor via
watch oc get pods -n keycloak
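If you prefer a blocking check to watch, something like the following works; the app=keycloak label is an assumption, so verify it with oc get pods -n keycloak --show-labels:
# Block until the Keycloak pods report Ready (label selector is an assumption)
oc wait --for=condition=Ready pod -l app=keycloak -n keycloak --timeout=600s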
Because of limitations in the Keycloak Operator, the roles need to be created manually by logging into the Keycloak console.
The following roles need to be created:
- api-admins
- api-users
- admins
Add the user1 user to the api-admins role, to enable the user to perform API operations.
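This can also be scripted with Keycloak’s bundled kcadm.sh CLI. The sketch below is a minimal one, assuming the app=keycloak pod label, the /opt/jboss/keycloak/bin/kcadm.sh path, and the stocktrader realm name — verify all three against your image and realm before using it. The admin credentials come from the secret described in the next step.
# Hedged sketch: create the three roles and add user1 to api-admins via kcadm.sh
ADMIN_USERNAME=$(oc get secrets credential-stocktrader-keycloak -n keycloak -ojson | jq -r '.data.ADMIN_USERNAME' | base64 -d)
ADMIN_PASSWORD=$(oc get secrets credential-stocktrader-keycloak -n keycloak -ojson | jq -r '.data.ADMIN_PASSWORD' | base64 -d)
KC_POD=$(oc get pods -n keycloak -l app=keycloak -o jsonpath='{.items[0].metadata.name}')
oc rsh -n keycloak ${KC_POD} bash -c "
  KCADM=/opt/jboss/keycloak/bin/kcadm.sh   # path varies by image, e.g. /opt/eap/bin on RH SSO
  \$KCADM config credentials --server http://localhost:8080/auth --realm master \
    --user '${ADMIN_USERNAME}' --password '${ADMIN_PASSWORD}'
  for role in api-admins api-users admins; do \$KCADM create roles -r stocktrader -s name=\$role; done
  \$KCADM add-roles -r stocktrader --uusername user1 --rolename api-admins
"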
To retrieve the Keycloak ADMIN_USERNAME and ADMIN_PASSWORD run the following commands:
export ADMIN_USERNAME=$(oc get secrets credential-stocktrader-keycloak -n keycloak -ojson | jq -r '.data.ADMIN_USERNAME'| base64 -d)
export ADMIN_PASSWORD=$(oc get secrets credential-stocktrader-keycloak -n keycloak -ojson | jq -r '.data.ADMIN_PASSWORD' | base64 -d)
You can find the Keycloak web console URL using the command oc get -n keycloak routes.
KEYCLOAK_URL=https://$(oc get route keycloak -n keycloak --template='{{ .spec.host }}')/auth &&
echo "" &&
echo "Keycloak: $KEYCLOAK_URL" &&
echo "Keycloak Admin Console: $KEYCLOAK_URL/admin" &&
echo "Keycloak Account Console: $KEYCLOAK_URL/realms/myrealm/account" &&
echo "" &&
echo "Credentials ${ADMIN_USERNAME} : ${ADMIN_PASSWORD}" &&
echo ""
oc apply -k k8s/kafka/prod
Important: This should be done only on the target clusters, i.e. the clusters where the topics/data need to be mirrored from the Kafka cluster where sampledaytrader8 is deployed.
Copy the $PROJECT_HOME/k8s/kafka-mirrormaker/base/daytrader-mirrormaker-example.yaml to $PROJECT_HOME/k8s/kafka-mirrormaker/base/daytrader-mirrormaker.yaml:
cp $PROJECT_HOME/k8s/kafka-mirrormaker/base/daytrader-mirrormaker-example.yaml $PROJECT_HOME/k8s/kafka-mirrormaker/base/daytrader-mirrormaker.yaml
Edit and update $PROJECT_HOME/k8s/kafka-mirrormaker/base/daytrader-mirrormaker.yaml with the external bootstrap server LoadBalancer IP address of each Kafka cluster. The bootstrap servers can be retrieved using the commands below.
The legacy cluster is identified by the alias daytrader-kafka-legacy in the file $PROJECT_HOME/k8s/kafka-mirrormaker/base/daytrader-mirrormaker.yaml; the bootstrap address returned by the following command needs to be set as the value of daytrader-kafka-legacy.bootstrapServers:
oc get svc -n daytrader daytrader-kafka-external-bootstrap \
-ojsonpath='{.status.loadBalancer.ingress[0].ip}'
Important: If any of your clusters is on AWS, use the hostname instead:
oc get svc -n daytrader daytrader-kafka-external-bootstrap \
-ojsonpath='{.status.loadBalancer.ingress[0].hostname}'
The modern cluster is identified by the alias daytrader-kafka-modern in the file $PROJECT_HOME/k8s/kafka-mirrormaker/base/daytrader-mirrormaker.yaml; the bootstrap address returned by the following command needs to be set as the value of daytrader-kafka-modern.bootstrapServers:
oc get svc -n daytrader daytrader-kafka-external-bootstrap \
-ojsonpath='{.status.loadBalancer.ingress[0].ip}'
Important: If any of your clusters is on AWS, use the hostname instead:
oc get svc -n daytrader daytrader-kafka-external-bootstrap \
-ojsonpath='{.status.loadBalancer.ingress[0].hostname}'
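To reduce copy/paste mistakes, here is a small sketch that captures both addresses into variables and prints the values to paste into the file. Run the first lookup while logged into the legacy cluster and the second while logged into the modern cluster, swap .ip for .hostname on AWS as noted above, and note that the :9094 external listener port is an assumption — check the example file for the port it expects.
# Capture the legacy bootstrap address (run against the legacy cluster)
LEGACY_BOOTSTRAP=$(oc get svc -n daytrader daytrader-kafka-external-bootstrap \
  -ojsonpath='{.status.loadBalancer.ingress[0].ip}')
# Capture the modern bootstrap address (run against the modern cluster)
MODERN_BOOTSTRAP=$(oc get svc -n daytrader daytrader-kafka-external-bootstrap \
  -ojsonpath='{.status.loadBalancer.ingress[0].ip}')
# Values to paste into daytrader-mirrormaker.yaml (:9094 is an assumed port)
echo "daytrader-kafka-legacy.bootstrapServers: ${LEGACY_BOOTSTRAP}:9094"
echo "daytrader-kafka-modern.bootstrapServers: ${MODERN_BOOTSTRAP}:9094"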
kustomize build $PROJECT_HOME/k8s/kafka-mirrormaker/prod | oc apply -f -
kustomize build $PROJECT_HOME/k8s/db/prod | oc apply -f -
Log in to the database admin console using user traderdb and password traderdb, and import the schema.
To get the route to the console, type:
echo "https://$(oc get route -n daytrader db-adminer -o jsonpath='{.spec.host}')"
Or import the schema directly as follows:
PSQL_POD=$(oc get pods -n daytrader -l "app=postgresql" -o jsonpath='{.items[0].metadata.name}')
oc -n daytrader rsh ${PSQL_POD} psql --user tradedb -d tradedb < db/schema.sql
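Either way, you can confirm the import worked by listing the tables in the same pod:
# List the tables to confirm the schema import succeeded
oc -n daytrader rsh ${PSQL_POD} psql --user tradedb -d tradedb -c '\dt'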
Obtain an API Key from IEXCloud, then copy the file $PROJECT_HOME/k8s/stock-quote/base/api-keys.env.example to $PROJECT_HOME/k8s/stock-quote/base/api-keys.env:
cp $PROJECT_HOME/k8s/stock-quote/base/api-keys.env.example $PROJECT_HOME/k8s/stock-quote/base/api-keys.env
Edit and update the IEX_API_KEY key in the file $PROJECT_HOME/k8s/stock-quote/base/api-keys.env to match your API Key.
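A one-liner alternative to hand-editing, assuming the file uses plain KEY=value lines:
# Substitute your real key for the placeholder (assumes KEY=value format)
sed -i "s|^IEX_API_KEY=.*|IEX_API_KEY=<your-iex-api-key>|" $PROJECT_HOME/k8s/stock-quote/base/api-keys.env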
kustomize build $PROJECT_HOME/k8s/stock-quote/prod | oc apply -f -
kustomize build $PROJECT_HOME/k8s/portfolio/prod | oc apply -f -
The portfolio deployment will fail to resolve the Keycloak URL and hence will fail to start.
oc get pods -n daytrader -lapp=quarkus-portfolio
The output of the above command should look like:
NAME READY STATUS RESTARTS AGE
quarkus-portfolio-7d744cf954-kjf4r 0/1 CrashLoopBackOff 5 5m28s
Run the following command to update the deployment:
KEYCLOAK_ROUTE=$(oc get route -n keycloak keycloak -o=jsonpath='{.spec.host}')
oc set env -n daytrader deploy/quarkus-portfolio \
QUARKUS_OIDC_AUTH_SERVER_URL="https://$KEYCLOAK_ROUTE/auth/realms/stocktrader"
Now watch the pod restart:
oc get pods -n daytrader -lapp=quarkus-portfolio -w
We also need to make sure we’ve got the correct endpoint for the Tradr app to communicate with the quarkus-portfolio web service:
echo "https://$(oc get route -n daytrader portfolio -o jsonpath='{.spec.host}')/api/portfolios"
PORTFOLIO_ROUTE="$(oc get route -n daytrader portfolio -o jsonpath='{.spec.host}')"
kustomize build $PROJECT_HOME/k8s/trade-orders-service/prod | oc apply -f -
Note: The default image registry is quay.io/sellisnz (per the deployment assumptions above).
As tradr is a static Single Page Application, you need to update the environment and rebuild it:
Building Using Docker / Maven
export KEYCLOAK_ROUTE
export PORTFOLIO_ROUTE
cd ${PROJECT_HOME}/tradr
envsubst < ${PROJECT_HOME}/tradr/.env.example > ${PROJECT_HOME}/tradr/.env
cd ..
make tradr_image_build_push
Building Using Buildah
# Same initial Steps
export KEYCLOAK_ROUTE
export PORTFOLIO_ROUTE
cd tradr
envsubst < ${PROJECT_HOME}/tradr/.env.example > ${PROJECT_HOME}/tradr/.env
# Assumes we're already logged into quay.io via
# buildah login -u="sellisnz" -p="<my token>" quay.io
#
# run the build
buildah build-using-dockerfile --no-cache -t quay.io/sellisnz/tradr:latest .
buildah push quay.io/sellisnz/tradr:latest
cd ..
Now update the $PROJECT_HOME/k8s/tradr/base/deployment.yaml image to match the tradr image that you rebuilt:
make update_tradr_deployment_image
kustomize build $PROJECT_HOME/k8s/tradr/prod | oc apply -f -
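To confirm the deployment picked up your rebuilt image; the tradr deployment name and single-container layout are assumptions based on this repo’s manifests:
# Check which image the tradr deployment is actually running
oc get deploy -n daytrader tradr \
  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'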
With all applications successfully deployed, your daytrader namespace should look like this:
oc get pods -n daytrader
Which should show an output like:
NAME READY STATUS RESTARTS AGE
daytrader-entity-operator-84687c54c6-5hjnn 3/3 Running 0 67m
daytrader-kafka-0 1/1 Running 0 67m
daytrader-kafka-1 1/1 Running 0 67m
daytrader-kafka-2 1/1 Running 0 67m
daytrader-mirror-maker2-mirrormaker2-5dd869f49-7hhx7 1/1 Running 0 25m
daytrader-zookeeper-0 1/1 Running 0 73m
daytrader-zookeeper-1 1/1 Running 0 73m
daytrader-zookeeper-2 1/1 Running 0 73m
db-adminer-7cfc4bb868-fw9qk 1/1 Running 0 25m
postgresql-756679bdd5-8xblx 1/1 Running 0 25m
quarkus-portfolio-7f58764ccf-lblhz 1/1 Running 0 3m28s
quarkus-stock-quote-86f86bc4d5-wvbrd 1/1 Running 0 21m
trade-orders-service-64fcb6dd98-27nk6 1/1 Running 0 17m
tradr-b55bd7dd-n7r5k 1/1 Running 0 17m
Note: The application domain may vary according to your deployment.
oc get route trader-orders -n daytrader
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
trader-orders trader-orders-daytrader.apps.gcp.kameshs.dev trade-orders-service 8080 edge None
oc get route tradr -n daytrader
Should show an output like:
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
tradr tradr-daytrader.apps.gcp.kameshs.dev tradr 8080 edge None
To be able to log into the application you might need to create the Keycloak client called tradr: log into the Keycloak console as you did earlier and add a new client called tradr under the realm stocktrader, with the root URL set to the value of the tradr OpenShift route.
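This can also be scripted with kcadm.sh in the same hedged style as the role setup earlier; the pod label, kcadm.sh path, and publicClient setting are assumptions, and kcadm credentials must already be configured as shown in that sketch.
# Hedged sketch: create the tradr client pointing at the tradr route
KC_POD=$(oc get pods -n keycloak -l app=keycloak -o jsonpath='{.items[0].metadata.name}')
TRADR_ROUTE=$(oc get route -n daytrader tradr -o jsonpath='{.spec.host}')
oc rsh -n keycloak ${KC_POD} bash -c "
  # publicClient=true is an assumption for a browser-based SPA
  /opt/jboss/keycloak/bin/kcadm.sh create clients -r stocktrader \
    -s clientId=tradr -s rootUrl=https://${TRADR_ROUTE} -s publicClient=true
"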
If the PostgreSQL container isn’t running correctly there might be a permission issue with the storage.
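A couple of standard checks to confirm whether storage permissions are the cause:
# Inspect the pod events and logs for permission-denied errors on the data directory
PSQL_POD=$(oc get pods -n daytrader -l "app=postgresql" -o jsonpath='{.items[0].metadata.name}')
oc describe pod -n daytrader ${PSQL_POD} | tail -n 20
oc logs -n daytrader ${PSQL_POD} | grep -i -e permission -e denied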
This usually occurs if the trader-orders app isn’t polling the correct endpoints. You can tail the pod logs while generating fresh orders from the legacy daytrader app.
TRADE_ORDERS_POD=$(oc get pods -n daytrader -l "app=trade-orders-service" \
-o jsonpath='{.items[0].metadata.name}')
oc logs -n daytrader -f ${TRADE_ORDERS_POD}
If you’re still having issues, make sure the queues on the legacy app are working correctly. Log into the OpenShift environment for your legacy services, then try:
oc project daytrader
oc rsh pod/daytrader-kafka-0
# you're now inside a kafka pod
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic daytrader-kafka-legacy.openshift.traderdb.accountprofileejb --from-beginning --max-messages 10
# If this still doesn't work check the consumer groups
./bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --list
# And list the topics
bin/kafka-topics.sh --bootstrap-server localhost:9092 --list
Sometimes the kafka-console-consumer.sh command is enough to get the queue working again.
NOTE Make sure you log back into the modern OpenShift cluster once you’ve finished troubleshooting.
This is usually because you don’t have the correct Keycloak endpoint in the Tradr build environment file, or you haven’t uploaded a fresh build of tradr for the deployment.
Refer to the Tradr build documentation above.
If you can log in but the UI isn’t rendering correctly, check the logs of the portfolio service to see if there are JVM errors.
PORTFOLIO_POD=$(oc get pods -n daytrader -l "app=quarkus-portfolio" \
-o jsonpath='{.items[0].metadata.name}')
oc logs -n daytrader -f ${PORTFOLIO_POD}
# If the JVM is showing issues force the pod to be re-deployed
oc delete pod -n daytrader ${PORTFOLIO_POD}
In addition, you can query the portfolio service to see if there are valid entries via:
PORTFOLIO_ROUTE="$(oc get route -n daytrader portfolio -o jsonpath='{.spec.host}')"
curl "https://${PORTFOLIO_ROUTE}/api/portfolios/all"