
Guide to Deployment


This document

This document presents two different ways of deploying the Epimetheus platform to AWS.

These deployments were done as a study to find out which factors could affect the usability of the application.

This document aims to highlight some necessary features that must be kept in mind during deployment. It also aims to provide template infrastructure as code for further development.

Architecture of Epimetheus

Epimetheus is a tool used to display data stored in a database by TestArchiver.

Two containers are used to deploy the Epimetheus application: one for the React frontend and one for the Python Tornado backend.

The resources that need to be created for the deployment of Epimetheus can be seen in the Epimetheus system in the diagram below.

AWS Lightsail

The first deployment type we will cover is AWS Lightsail, a low-barrier-to-entry platform on AWS. For this type of deployment we first need to create a PostgreSQL database for TestArchiver. This can be created in AWS using RDS; a CLI sketch follows the requirements below.

Requirements for the database:

  • Needs to be accessible for the server used to execute tests and TestArchiver.
  • Needs to be accessible for Epimetheus Backend.
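
A minimal sketch of creating such a database with the AWS CLI; the identifier, instance class, username, and storage size below are placeholders, not values required by Epimetheus:

  # Hypothetical identifier and credentials; choose your own values.
  aws rds create-db-instance \
    --db-instance-identifier testarchiver-db \
    --engine postgres \
    --db-instance-class db.t3.micro \
    --allocated-storage 20 \
    --master-username epimetheus \
    --master-user-password '<password_of_rds_db>' \
    --publicly-accessible

Remember to allow inbound traffic on the PostgreSQL port (5432 by default) in the instance's security group for the hosts listed above.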

After this we must head to AWS Lightsail.

AWS Lightsail is a platform used to create complete service deployments, similar to services such as Platform.sh and Heroku. Since Epimetheus is built as two container images, we choose to create a new container service.

Here you may input whatever you like for service location and container capacity. The Nano capacity provides good-enough functionality, and the Lightsail platform displays monitoring logs for the containers, so it is possible to verify whether the chosen capacity is enough.
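
If you prefer the AWS CLI over the console, creating the container service itself looks roughly like the sketch below; the service name is a placeholder:

  # Nano capacity with a single node; adjust power and scale as needed.
  aws lightsail create-container-service \
    --service-name epimetheus \
    --power nano \
    --scale 1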

Choose "Specify a custom deployment" under "Set up your first deployment".

Frontend container

First, let's create the definition for the frontend container.

Environment variables for the frontend:

  • Key: BACKEND_URL, Value: http://localhost:5000

Note about frontend variables:

  • If you want to run nginx on a port other than 8080, define the NGINX_PORT environment variable. The Open ports parameter at the bottom of the container definition needs to match the new port. A JSON sketch of the whole frontend definition follows below.
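
For reference, the same frontend definition expressed in the JSON format accepted by the Lightsail CLI; the image name is a placeholder for your published frontend image:

  {
    "frontend": {
      "image": "<your-frontend-image>",
      "environment": {
        "BACKEND_URL": "http://localhost:5000"
      },
      "ports": {
        "8080": "HTTP"
      }
    }
  }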

Backend container

Let's add a second container to the same deployment. When there are two containers in the same deployment, they can communicate with each other through localhost; see the BACKEND_URL of the frontend.

Environment variables for the backend:

  1. Key: HOST, Value: host_of_rds_db
  2. Key: DATABASE, Value: postgres
  3. Key: USER, Value: user_name_of_rds_db
  4. Key: PASSWORD, Value: password_of_rds_db
  5. Key: PORT, Value: 5000

Notes about backend variables:

  • AWS RDS creates a database called postgres by default, hence the DATABASE value of postgres. This is not your RDS resource name.
  • If you create another database in your RDS instance for your TestArchiver data, that name should be used here.
  • You can see the host of your database when viewing your RDS instance on the RDS console page.
  • The PORT defined for the backend container is the port the service will run on, meaning this port should match the one in your frontend's BACKEND_URL definition.

Notes for both container definitions:

  • Note that you need to define an open port for each container. This should match the port your nginx is running on for the frontend and the port you have defined for the backend; by default 8080 and 5000 respectively. A JSON sketch of the backend definition follows below.
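
The backend entry in the same JSON format; the image name and database values are placeholders:

  {
    "backend": {
      "image": "<your-backend-image>",
      "environment": {
        "HOST": "<host_of_rds_db>",
        "DATABASE": "postgres",
        "USER": "<user_name_of_rds_db>",
        "PASSWORD": "<password_of_rds_db>",
        "PORT": "5000"
      },
      "ports": {
        "5000": "HTTP"
      }
    }
  }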

Public endpoint and service name

To finish the configuration, set the public endpoint to point to the frontend container and give the service a name of your choice.

With this the service should be ready for creation.
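
For CLI users, a deployment combining the two container definitions above (merged into a single containers.json) with the public endpoint on the frontend could be created roughly as follows:

  aws lightsail create-container-service-deployment \
    --service-name epimetheus \
    --containers file://containers.json \
    --public-endpoint '{"containerName": "frontend", "containerPort": 8080}'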

Fault scenarios:

  • The frontend does not reach the backend: if you have deployed the containers in the same service, check that BACKEND_URL points to localhost plus the backend port.
  • The backend crashes because the database is empty: the Epimetheus backend is designed to break if the database does not contain the required schema, and a broken backend will also break the frontend container. After data has been fed to the database with TestArchiver (see the sketch below), Lightsail will try to boot the containers again after a while, causing them to start up.
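
A sketch of feeding results to the database with TestArchiver; the flags follow the TestArchiver documentation, and the output file name and credentials are placeholders:

  testarchiver --dbengine postgresql \
    --host <host_of_rds_db> \
    --database postgres \
    --user <user_name_of_rds_db> \
    --pw <password_of_rds_db> \
    output.xml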

AWS Fargate

The second deployment is created with AWS Fargate, a container runtime provided by AWS. Fargate abstracts much less than Lightsail in terms of networking and security, but in doing so it gives the user more control over the environment the containers are executed in. A Terraform file contains the required infrastructure as code.

The Terraform file used is deploy_epimetheus_fargate.tf, which can be found under deployment-templates.

More on terraform here: https://www.terraform.io/intro/index.html

The infrastructure generated by the Terraform file is outlined below.

Acronyms:

  • ECS: Elastic Container Service, the AWS service used to deploy containers.
  • VPC: Virtual Private Cloud, a virtual network for your private resources.
  • AZ: Availability Zone; these are used to separate resources onto different hosts to allow for high availability.
  • RDS: Relational Database Service, the AWS service providing relational databases.

The Terraform file generates (an abridged sketch in HCL follows the list):

  • One VPC, a virtual network.
  • Two subnets for the VPC in different availability zones (AZs).
  • An Internet Gateway for the virtual network.
  • Route tables and security groups for the virtual network and subnets.
  • One PostgreSQL instance in RDS, which requires at least two AZs to build. The PostgreSQL instance gains a public IP through an elastic network interface.
  • A Task Execution Role for ECS Fargate; this allows ECS to control the containers.
  • A Task Role for ECS Fargate; this allows the containers themselves to access RDS.
  • An ECS Task Definition describing the two containers with the defined environment variables; notice that here we also use localhost for BACKEND_URL.
  • An ECS Cluster Definition defining the cluster used to execute the containers.
  • An ECS Service which specifies that the defined task definition should run on the defined cluster.
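
An abridged sketch of the kind of declarations involved; the names, CIDR blocks, and availability zone here are illustrative and not copied from deploy_epimetheus_fargate.tf:

  resource "aws_vpc" "epimetheus" {
    cidr_block = "10.0.0.0/16"
  }

  resource "aws_subnet" "az_a" {
    vpc_id            = aws_vpc.epimetheus.id
    cidr_block        = "10.0.1.0/24"
    availability_zone = "eu-west-1a"
  }

  resource "aws_ecs_cluster" "epimetheus" {
    name = "epimetheus"
  }

  # Task definition, roles, RDS, and network configuration omitted for brevity.
  resource "aws_ecs_service" "epimetheus" {
    name            = "epimetheus"
    cluster         = aws_ecs_cluster.epimetheus.id
    task_definition = aws_ecs_task_definition.epimetheus.arn
    desired_count   = 1
    launch_type     = "FARGATE"
  }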

Both the ECS service and RDS are connected to the other availability zone, but the installation done through Terraform only creates a single-AZ setup. The additional availability zone should not add any costs to your environment.

To deploy the Terraform configuration you need to have your AWS credentials in an .env file or in the local environment of your terminal, as sketched below.
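
A minimal example of exporting the credentials in the terminal; the values and region are placeholders:

  export AWS_ACCESS_KEY_ID=<your-access-key-id>
  export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
  export AWS_DEFAULT_REGION=eu-west-1

With the credentials in place, execute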

terraform plan

This command should pass. After it, run

terraform apply

which will generate the environment in around 10 minutes; RDS takes a while to create.

After terraform apply has completed you need to add data to the TestArchiver database; until the first execution of TestArchiver against the database, the containers will be broken. This is caused by the backend verifying the schema of the database and the frontend being dependent on the backend.

The containers will start up after a while once the schema is present.

If you wish to destroy your installation, execute

terraform destroy

This might give you an error on destroying the subnets, as they are used by the RDS instance, which takes a long while to destroy. Just run the same command again and the subnets should be destroyed.

Notes about Fargate Installation

  • The containers of the ECS Service will fail until the created database has been given data. Read the fault scenarios of the Lightsail installation.
  • Currently the username and password are given as clear text in the Terraform file. They are defined as local variables at the beginning of the file, and both the RDS definition and the backend container definition use these locals (a sketch of using an input variable instead follows this list).
  • The RDS instance is created with a public interface to allow easy access with TestArchiver from resources outside the private subnet. For a more complete deployment the database should not be public.
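
As a sketch of removing the clear-text credentials, the password could be moved from a local into a Terraform input variable and supplied through the environment; the variable name db_password is an assumption, not the name used in the file:

  variable "db_password" {
    type = string
  }

  # Used in place of the current local, e.g.:
  #   password = var.db_password

The value can then be supplied without storing it in the file, for example through the TF_VAR_db_password environment variable or with terraform apply -var.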

Errors

  • If the containers fail to run on ECS, verify that the testarchiver schema has been generated in the "postgres" database of the RDS instance. Refer to the fault scenarios of the Lightsail installation.