This repository includes a simple Python Flask API with a single route that returns JSON. You can use this project as a starting point for your own microservice APIs.

The repository is designed for use with Docker containers and Big Bang's Istio package, both for local development and deployment, and includes infrastructure files for deployment to Azure Kubernetes Service, using Application Insights for microservice observability and Key Vault's CSI provider for secret mounting. 🐳

The code is organized using Flask Blueprints, tested with pytest, linted with ruff, and formatted with black. Code quality issues are all checked with both pre-commit and GitHub Actions.
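The Blueprint layout might look like this minimal sketch (the blueprint name `api` and the `create_app` factory are illustrative, not necessarily the repository's actual names):

```python
from flask import Blueprint, Flask, jsonify

# A blueprint groups related routes so they can be registered on any app.
api = Blueprint("api", __name__)


@api.route("/")
def index():
    return jsonify({"message": "Hello, World!"})


def create_app():
    # An app factory keeps tests simple: each test can build a fresh app.
    app = Flask(__name__)
    app.register_blueprint(api)
    return app
```

With a factory like this, pytest tests can exercise routes through `app.test_client()` without starting a real server.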
This project has Dev Container support, so it will be set up automatically if you open it in GitHub Codespaces or in local VS Code with the Dev Containers extension.
If you're not using one of those options for opening the project, then you'll need to:

- Create a Python virtual environment and activate it.

- Install the requirements:

  ```shell
  python3 -m pip install -r src/requirements-dev.txt
  ```

- Install the pre-commit hooks:

  ```shell
  pre-commit install
  ```

- Run the local server:

  ```shell
  python3 -m flask --debug --app src/app:app run --port 5000
  ```

- Click 'http://127.0.0.1:5000' in the terminal, which should open a new tab in the browser.
You can also run this app with Docker, thanks to the Dockerfile. You need to either have Docker Desktop installed or have this open in GitHub Codespaces for these commands to work.
- Build the image:

  ```shell
  docker build --tag flask-app src/
  ```

- Run the image:

  ```shell
  docker run --publish 5000:5000 -e APPLICATIONINSIGHTS_CONNECTION_STRING=<replace-with-the-provisioned-app-insight-conn-string> flask-app
  ```
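The repository's actual Dockerfile may differ; as a rough sketch, a container image for a Flask app like this one could look as follows (the base image, paths, and port are assumptions):

```Dockerfile
# Minimal sketch, not the repository's actual Dockerfile.
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so this layer is cached between code changes.
COPY requirements.txt .
RUN python -m pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 5000
CMD ["python", "-m", "flask", "--app", "app:app", "run", "--host", "0.0.0.0", "--port", "5000"]
```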
- Sign up for a free Azure account and create an Azure Subscription.
- Install the Azure Developer CLI. (If you open this repository in Codespaces or with the VS Code Dev Containers extension, that part will be done for you.)
- Provision a Platform One Cluster using the AZD Platform One / Big Bang (AKS) Software Factory Template.
This repo is set up for deployment on Azure Kubernetes Service using the configuration files in the `infra` folder.
Steps for deployment:
- Login to Azure:

  ```shell
  azd auth login --tenant-id <replace-with-your-tenant-id>
  az login -t <replace-with-your-tenant-id>
  ```
- Provision and deploy all the resources:

  ```shell
  azd up
  ```

  It will prompt you to provide an `azd` environment name (like "flask-app"), select a subscription from your Azure account, and select a location (like "eastus"). The setup wizard will also prompt you for the AKS cluster to target for your service deployment; all AKS clusters provisioned using the AKS Platform One template will appear in the drop-down menu. Then it will provision the resources in your account and deploy the latest code. If you get an error with deployment, changing the location can help, as there may be availability constraints for some of the resources.

- When `azd` has finished deploying, you'll see an endpoint URI in the command output. Visit that URI, and you should see the API output! 🎉
This template integrates Application Insights for distributed tracing and log reporting. An Azure Monitor custom dashboard is also provisioned to summarize all traces, logs, and errors across your service mesh fleet. You can reference this tutorial to create your own custom dashboard and export the Bicep file into the `infra` folder. To view the distributed tracing and log reporting, open the provisioned Application Insights resource in the Azure portal.
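The `APPLICATIONINSIGHTS_CONNECTION_STRING` environment variable passed at `docker run` time is typically consumed by the `azure-monitor-opentelemetry` distro. A hedged sketch of how such instrumentation is commonly wired up (the repository's actual code may differ):

```python
import os

from flask import Flask

app = Flask(__name__)

# Only configure Application Insights export when a connection string is
# present, so the app still runs locally without any Azure resources.
conn = os.environ.get("APPLICATIONINSIGHTS_CONNECTION_STRING")
if conn:
    # azure-monitor-opentelemetry is assumed to be in requirements.txt.
    from azure.monitor.opentelemetry import configure_azure_monitor

    configure_azure_monitor(connection_string=conn)


@app.route("/")
def index():
    return {"status": "ok"}
```

Guarding the import this way keeps local development working even when the telemetry package or connection string is absent.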
The workflow `azure-dev.yaml` uses the Azure Developer CLI container image, which has the CLI installed, to log in to the Azure environment with `azd login` and provision the infrastructure with `azd provision`.

To configure the GitHub repository with the secrets needed to run the pipeline, you'll need to run `azd pipeline config`.
Since the infrastructure template requires setting up some role assignments, the created service principal will need to have `Owner` permissions on the resource group:

```shell
azd pipeline config --principal-role Owner
```
Once you've done so and committed your changes, you should see the pipeline run to build and deploy your application.
This solution provides a post-pipeline setup script that uploads the required variables and secrets to the GitHub repository. You'll be able to run the GitHub Actions workflow once you run the script below:

```shell
chmod -R +x ./scripts/postpipelineconfig.sh && ./scripts/postpipelineconfig.sh
```
Pricing varies per region and usage, so it isn't possible to predict exact costs for your usage. The majority of the Azure resources used in this infrastructure are on usage-based pricing tiers. You can try the Azure pricing calculator for the resources:

- Log Analytics: Pay-as-you-go tier. Costs based on data ingested. Pricing
To avoid unnecessary charges when you're done, take down the provisioned resources by running:

```shell
azd down
```