- Determine which message passing strategies would integrate well when refactoring the starter code into a microservice architecture.
- Using the design decisions from the previous step, create an architecture diagram of your microservice architecture showing the services and message passing techniques between them.
- Flask - API webserver
- SQLAlchemy - Database ORM
- PostgreSQL - Relational database
- PostGIS - Spatial plug-in for PostgreSQL enabling geographic queries
- Vagrant - Tool for managing virtual deployed environments
- VirtualBox - Hypervisor allowing you to run multiple operating systems
- K3s - Lightweight distribution of K8s to easily develop against a local cluster
- Apache Kafka - Open-source distributed event streaming platform, used here for message passing between services
- gRPC - Modern open-source, high-performance Remote Procedure Call (RPC) framework that can run in any environment
The project has been set up so that you can get it up and running with Kubernetes. First, install the tools needed to set up the environment properly:
- Install Docker
- Set up a DockerHub account
- Set up `kubectl`
- Install VirtualBox (version 6.0 or higher)
- Install Vagrant (version 2.0 or higher)
To run the application, you will need a K8s cluster running locally and to interface with it via `kubectl`. We will be using Vagrant with VirtualBox to run K3s.

In this project's root, run `vagrant up`:

```bash
$ vagrant up
```
The command will take a while and will leverage VirtualBox to load an openSUSE OS and automatically install K3s. When we are taking a break from development, we can run `vagrant suspend` to conserve some of our system's resources and `vagrant resume` when we want to bring our resources back up. Some useful Vagrant commands can be found in this cheatsheet.
After `vagrant up` is done, you will SSH into the Vagrant environment and retrieve the Kubernetes config file used by `kubectl`. We want to copy the contents of this file into our local environment so that `kubectl` knows how to communicate with the K3s cluster.

```bash
$ vagrant ssh
```
You will now be connected inside the virtual OS. Run `sudo cat /etc/rancher/k3s/k3s.yaml` to print out the contents of the file. You should see output similar to the example below. Note that the output below is just for your reference: every configuration is unique, and you should NOT copy the output shown here. Copy the contents from your own command's output into your clipboard -- we will be pasting it somewhere soon!
```bash
$ sudo cat /etc/rancher/k3s/k3s.yaml
```

```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJWekNCL3FBREFnRUNBZ0VBTUFvR0NDcUdTTTQ5QkFNQ01DTXhJVEFmQmdOVkJBTU1HR3N6Y3kxelpYSjIKWlhJdFkyRkFNVFU1T1RrNE9EYzFNekFlRncweU1EQTVNVE13T1RFNU1UTmFGdzB6TURBNU1URXdPVEU1TVROYQpNQ014SVRBZkJnTlZCQU1NR0dzemN5MXpaWEoyWlhJdFkyRkFNVFU1T1RrNE9EYzFNekJaTUJNR0J5cUdTTTQ5CkFnRUdDQ3FHU000OUF3RUhBMElBQk9rc2IvV1FEVVVXczJacUlJWlF4alN2MHFseE9rZXdvRWdBMGtSN2gzZHEKUzFhRjN3L3pnZ0FNNEZNOU1jbFBSMW1sNXZINUVsZUFOV0VTQWRZUnhJeWpJekFoTUE0R0ExVWREd0VCL3dRRQpBd0lDcERBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUFvR0NDcUdTTTQ5QkFNQ0EwZ0FNRVVDSVFERjczbWZ4YXBwCmZNS2RnMTF1dCswd3BXcWQvMk5pWE9HL0RvZUo0SnpOYlFJZ1JPcnlvRXMrMnFKUkZ5WC8xQmIydnoyZXpwOHkKZ1dKMkxNYUxrMGJzNXcwPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    password: 485084ed2cc05d84494d5893160836c9
    username: admin
```
Type `exit` to leave the virtual OS and you will find yourself back in your computer's session. Create the file `~/.kube/config` (or replace it if it already exists) and paste the contents of the `k3s.yaml` output into it.

Afterwards, you can test that `kubectl` works by running a command like `kubectl describe services`. It should not return any errors.
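If you prefer not to copy and paste by hand, the same result can be reached in one step. This is a sketch, assuming `vagrant up` has finished and the config lives at the path shown above:

```shell
# Sketch: pull the K3s kubeconfig out of the Vagrant VM in one command.
# `vagrant ssh -c` runs a single command inside the VM and prints its output.
mkdir -p ~/.kube
vagrant ssh -c 'sudo cat /etc/rancher/k3s/k3s.yaml' > ~/.kube/config
```

Note that this overwrites any existing `~/.kube/config`, so back it up first if you use other clusters.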
- Run the shell script `./deploy-pod.sh` to deploy the udaconnect pod
- Run the shell script `./delete-pod.sh` to delete the udaconnect pod
Note: The first time you run this project, you will need to seed the database with dummy data. Use the command `sh scripts/run_db_command.sh $(kubectl get pods | grep -i "postgres" | awk '{print $1}')`. Subsequent runs of `kubectl apply` for making changes to deployments or services shouldn't require you to seed the database again!
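The `$(...)` part of that seed command simply extracts the Postgres pod's name from `kubectl get pods` output. Here is what that pipeline does, sketched against a fake output line (the pod name shown is made up for illustration):

```shell
# Simulate one line of `kubectl get pods` output and extract the first column
# (the pod name), exactly as the command substitution above does.
pods_output='postgres-6fd7c5f8d9-abcde   1/1   Running   0   5m'
pod_name=$(echo "$pods_output" | grep -i "postgres" | awk '{print $1}')
echo "$pod_name"   # prints the pod name, e.g. postgres-6fd7c5f8d9-abcde
```

The extracted name is then passed as an argument to `scripts/run_db_command.sh`, which runs the seed against that pod.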
Once the project is up and running, you should be able to see deployments and services in Kubernetes: `kubectl get pods` and `kubectl get services` should both return results without errors.
- http://localhost:30000/ - Frontend ReactJS Application
- A producer was implemented in the code of the locations API to produce and save messages to the Kafka broker whenever you call the `/locations` API endpoint.
- A consumer was also implemented to consume data from Kafka and save it to the location DB. Hint: errors can happen due to the dummy data's primary keys; keep requesting until you hit the right auto-generated `location_id` primary key.
- First you need to run kafka-zookeeper: `kubectl port-forward kafka-zookeeper-0 2181:2181`. Start port-forwarding `kafka-zookeeper` first, then run Kafka by port-forwarding the broker: `kubectl port-forward kafka-0 9092:9092`.
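Put together, the local Kafka bring-up looks like the following sketch. Each `port-forward` blocks, so run each in its own terminal; the pod names `kafka-zookeeper-0` and `kafka-0` are the ones given above, and the `nc` sanity check is an optional extra (assumes netcat is installed):

```shell
# Terminal 1: forward ZooKeeper first (Kafka depends on it)
kubectl port-forward kafka-zookeeper-0 2181:2181

# Terminal 2: once ZooKeeper is forwarded, forward the Kafka broker
kubectl port-forward kafka-0 9092:9092

# Terminal 3 (optional): verify the broker port is accepting connections
nc -z localhost 9092 && echo "kafka reachable"
```

With both forwards in place, the locations API producer and consumer can reach the broker at `localhost:9092`.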