
Monolith to Microservices with Docker and AWS Fargate

Welcome to the Mythical Mysfits team!

In this lab, you'll build the monolithic Mythical Mysfits adoption platform with Docker, deploy it on Amazon ECS, and then break it down into a couple of more manageable microservices. Let's get started!

Requirements:

  • AWS account - if you don't have one, it's easy and free to create one.
  • AWS IAM account with elevated privileges allowing you to interact with CloudFormation, IAM, EC2, ECS, ECR, ELB/ALB, VPC, SNS, CloudWatch, Cloud9. Learn how.
  • Familiarity with Python, Docker, and AWS - not required but a bonus.

What you'll do:

These labs are designed to be completed in sequence, and the full set of instructions is documented below. Read and follow along to complete the labs. If you're at a live AWS event, the workshop staff will give you a high-level overview of the labs and help answer any questions. Don't worry if you get stuck; we provide hints along the way.

Conventions:

Throughout this workshop, we will provide commands for you to run in the terminal. These commands will look like this:

$ ssh -i PRIVATE_KEY.PEM ec2-user@EC2_PUBLIC_DNS_NAME

The command starts after the $. Text that is UPPER_ITALIC_BOLD indicates a value that is unique to your environment. For example, PRIVATE_KEY.PEM refers to the private key of an SSH key pair that you've created in your account, and EC2_PUBLIC_DNS_NAME is a value that is specific to an EC2 instance launched in your account. You can find these unique values either in the CloudFormation outputs or by navigating to the specific service dashboard in the AWS management console.

Hints are also provided along the way and will look like this:

HINT

Nice work, you just revealed a hint!

Click on the arrow to show the contents of the hint.

IMPORTANT: Workshop Cleanup

You will be deploying infrastructure on AWS which will have an associated cost. If you're attending an AWS event, credits will be provided. When you're done with the workshop, follow the steps at the very end of the instructions to make sure everything is cleaned up and avoid unnecessary charges.

Let's Begin!

Workshop Setup:

  1. Open the CloudFormation launch template link below in a new tab. The link will load the CloudFormation Dashboard and start the stack creation process in the chosen region:

    Click on one of the Deploy to AWS links below to stand up the core workshop infrastructure in that region.

  • Oregon (us-west-2) - Launch Mythical Mysfits Stack into Oregon with CloudFormation
  • Ohio (us-east-2) - Launch Mythical Mysfits Stack into Ohio with CloudFormation
  • Ireland (eu-west-1) - Launch Mythical Mysfits Stack into Ireland with CloudFormation
  • Singapore (ap-southeast-1) - Launch Mythical Mysfits Stack into Singapore with CloudFormation

  2. The template will automatically bring you to the CloudFormation Dashboard and start the stack creation process in the specified region. Give the stack a name that is unique within your account, and proceed through the wizard to launch the stack. Leave all options at their default values, but make sure to check the box to allow CloudFormation to create IAM roles on your behalf:

    IAM resources acknowledgement

    See the Events tab for progress on the stack launch. You can also see details of any problems here if the launch fails. Proceed to the next step once the stack status advances to "CREATE_COMPLETE".

  3. Access the AWS Cloud9 Environment created by CloudFormation:

    On the AWS Console home page, type Cloud9 into the service search bar and select it. Find the environment named like "Project-STACK_NAME":

    Cloud9 project selection

    When you open the IDE, you'll be presented with a welcome screen that looks like this: cloud9-welcome

    On the left (blue) pane, files downloaded to your environment appear in the file tree. The middle (red) pane shows any documents you open. Test this out by double-clicking README.md in the left pane and editing the file by adding some arbitrary text, then save it by clicking File and Save. Keyboard shortcuts work as well. On the bottom is a bash shell (yellow); use this shell to enter all commands for the remainder of the lab. You can also customize your Cloud9 environment by changing themes, moving panes around, etc. (if you like the dark theme, you can select it by clicking the gear icon in the upper right, then "Themes", and choosing the dark theme).

  4. Clone the Mythical Mysfits Workshop Repository:

    In the bottom panel of your new Cloud9 IDE, you will see a command line terminal open and ready to use. Run the following git command in the terminal to clone the necessary code to complete this tutorial:

    $ git clone https://github.com/aws-samples/amazon-ecs-mythicalmysfits-workshop.git
    

    After cloning the repository, you'll see that your project explorer now includes the files cloned.

    In the terminal, change directory to the subdirectory for this workshop in the repo:

    $ cd amazon-ecs-mythicalmysfits-workshop/workshop-1
    
  5. Run some additional automated setup steps with the setup script:

    $ script/setup
    

    This script will delete some unneeded Docker images to free up disk space, populate a DynamoDB table with some seed data, upload site assets to S3, and install some Docker-related authentication mechanisms that will be discussed later. Make sure you see the "Success!" message when the script completes.

Checkpoint:

At this point, the Mythical Mysfits website should be available at the static site endpoint for the S3 bucket created by CloudFormation. You can visit the site at http://BUCKET_NAME.s3-website.REGION.amazonaws.com/. For your convenience, we've created a link in the CloudFormation outputs tab in the console. Alternatively, you can find the BUCKET_NAME in the CloudFormation outputs saved in the file workshop-1/cfn-output.json. REGION should be the code for the region that you deployed your CloudFormation stack in (e.g. us-west-2 for the Oregon region). Check that you can view the site, but there won't be much content visible yet until we launch the Mythical Mysfits monolith service:
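
If you prefer the terminal, you can also confirm the endpoint responds with a quick request like the one below (substituting your own bucket name and region); you should get an HTTP 200 response back:

$ curl -I http://BUCKET_NAME.s3-website.REGION.amazonaws.com/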

initial website

^ back to top

Lab 1 - Containerize the Mythical Mysfits adoption agency platform

The Mythical Mysfits adoption agency infrastructure has always been running directly on EC2 VMs. Our first step will be to modernize how our code is packaged by containerizing the current Mythical Mysfits adoption platform, which we'll also refer to as the monolith application. To do this, you will create a Dockerfile, which is essentially a recipe for Docker to build a container image. You'll use your AWS Cloud9 development environment to author the Dockerfile, build the container image, and run it to confirm it's able to process adoptions.

Containers are a way to package software (e.g. web server, proxy, batch process worker) so that you can run your code and all of its dependencies in a resource isolated process. You might be thinking, "Wait, isn't that a virtual machine (VM)?" Containers virtualize the operating system, while VMs virtualize the hardware. Containers provide isolation, portability and repeatability, so your developers can easily spin up an environment and start building without the heavy lifting. More importantly, containers ensure your code runs in the same way anywhere, so if it works on your laptop, it will also work in production.

Here's what you're going to work on in lab 1:

Lab 1 Architecture

  1. Review the draft Dockerfile and add the missing instructions indicated by comments in the file:

    Note: If you're already familiar with how Dockerfiles work and want to focus on breaking the monolith apart into microservices, skip down to "HINT: Final Dockerfile" near the end of step 2, create a Dockerfile in the monolith directory with the hint contents, build the "monolith" image, and continue to step 3. Otherwise continue on...

    One of the Mythical Mysfits' developers started working on a Dockerfile in her free time, but she was pulled to a high priority project before she finished.

    In the Cloud9 file tree, navigate to workshop-1/app/monolith-service, and double-click on Dockerfile.draft to open the file for editing.

    Note: If you would prefer to use the bash shell and a text editor like vi or emacs instead, you're welcome to do so.

    Review the contents, and you'll see a few comments at the end of the file noting what still needs to be done. Comments are denoted by a "#".

    Docker builds container images by stepping through the instructions listed in the Dockerfile. Docker is built on this idea of layers starting with a base and executing each instruction that introduces change as a new layer. It caches each layer, so as you develop and rebuild the image, Docker will reuse layers (often referred to as intermediate layers) from cache if no modifications were made. Once it reaches the layer where edits are introduced, it will build a new intermediate layer and associate it with this particular build. This makes tasks like image rebuild very efficient, and you can easily maintain multiple build versions.

    Docker Container Image

    For example, in the draft file, the first line - FROM ubuntu:latest - specifies a base image as a starting point. The next instruction - RUN apt-get -y update - creates a new layer where Docker updates package lists from the Ubuntu repositories. This continues until you reach the last instruction which in most cases is an ENTRYPOINT (hint hint) or executable being run.

    Add the remaining instructions to Dockerfile.draft.

    HINT: Helpful links for completing Dockerfile.draft
     Here are links to external documentation to give you some ideas:
    

    #[TODO]: Copy the "service" directory into container image

    • Consider the COPY command
    • You're copying both the python source files and requirements.txt from the "monolith-service/service" directory on your EC2 instance into a working directory within the container, which can be something like "/MythicalMysfitsService"
    • Consider the WORKDIR command as a way to navigate within the context of the container's directory structure

    #[TODO]: Install dependencies listed in the requirements.txt file using pip3

    #[TODO]: Specify a listening port for the container

    • Consider the EXPOSE command
    • The app's listening port can be found in the app source - mythicalMysfitsService.py

    #[TODO]: Run "mythicalMysfitsService.py" as the final step. We want this container to run as an executable. Looking at ENTRYPOINT and CMD for this?

    • Consider the ENTRYPOINT and CMD commands
    • ENTRYPOINT and CMD can be used together
    • Our ops team typically runs 'python3 mythicalMysfitsService.py' to launch the application on our servers.

    Once you're happy with your additions OR if you get stuck, you can check your work by comparing it with the hint below.

    HINT: Completed Dockerfile
     FROM ubuntu:latest
     RUN apt-get update -y
     RUN apt-get install -y python3-pip python-dev build-essential
     RUN pip3 install --upgrade pip
     #[TODO]: Copy python source files and requirements file into container image
     COPY ./service /MythicalMysfitsService
     WORKDIR /MythicalMysfitsService
     #[TODO]: Install dependencies listed in the requirements.txt file using pip3
     RUN pip3 install -r ./requirements.txt
     #[TODO]: Specify a listening port for the container
     EXPOSE 80
     #[TODO]: Run mythicalMysfitsService.py as the final step
     ENTRYPOINT ["python3"]
     CMD ["mythicalMysfitsService.py"]
     

    If your Dockerfile looks good, rename your file from "Dockerfile.draft" to "Dockerfile" and continue to the next step.

     $ mv Dockerfile.draft Dockerfile
     
  2. Build the image using the Docker build command.

    This command needs to be run in the same directory where your Dockerfile is. Note the trailing period which tells the build command to look in the current directory for the Dockerfile.

     $ docker build -t monolith-service .
     

    You'll see a bunch of output as Docker builds all layers of the image. If there is a problem along the way, the build process will fail and stop (red text and warnings along the way are fine as long as the build process does not fail). Otherwise, you'll see a success message at the end of the build output like this:

     Step 9/10 : ENTRYPOINT ["python3"]
      ---> Running in 7abf5edefb36
     Removing intermediate container 7abf5edefb36
      ---> 653ccee71620
     Step 10/10 : CMD ["mythicalMysfitsService.py"]
      ---> Running in 291edf3d5a6f
     Removing intermediate container 291edf3d5a6f
      ---> a8d2aabc6a7b
     Successfully built a8d2aabc6a7b
     Successfully tagged monolith-service:latest
     

    Note: Your output will not be exactly like this, but it will be similar.

    Awesome, your Dockerfile built successfully, but our developer didn't optimize the Dockerfile for the microservices effort later. Since you'll be breaking apart the monolith codebase into microservices, you will be editing the source code (e.g. mythicalMysfitsService.py) often and rebuilding this image a few times. Looking at your existing Dockerfile, what is one thing you can do to improve build times?

    HINT Remember that Docker tries to be efficient by caching layers that have not changed. Once a change is introduced, Docker will rebuild that layer and all layers after it.

    Edit mythicalMysfitsService.py by adding an arbitrary comment somewhere in the file. If you're not familiar with Python, comments start with the hash character, '#', and are essentially ignored when the code is interpreted.

    For example, here a comment (# Author: Mr Bean) was added before importing the time module:

     # Author: Mr Bean
    
     import time
     from flask import Flask
     from flask import request
     import json
     import requests
     ....
     

    Rebuild the image using the 'docker build' command from above and notice that Docker references layers from cache and starts rebuilding from Step 5, where mythicalMysfitsService.py is copied over, since that is where the change is first introduced:

     Step 5/10 : COPY ./service /MythicalMysfitsService
      ---> 9ec17281c6f9
     Step 6/10 : WORKDIR /MythicalMysfitsService
      ---> Running in 585701ed4a39
     Removing intermediate container 585701ed4a39
      ---> f24fe4e69d88
     Step 7/10 : RUN pip3 install -r ./requirements.txt
      ---> Running in 1c878073d631
     Collecting Flask==0.12.5 (from -r ./requirements.txt (line 1))
     

    Try reordering the instructions in your Dockerfile to copy the monolith code over after the requirements are installed. The thinking here is that the Python source will see more changes than the dependencies noted in requirements.txt, so why rebuild the requirements every time when they can just be another cached layer?

    Edit your Dockerfile with what you think will improve build times and compare it with the Final Dockerfile hint below.

    Final Dockerfile

    HINT: Final Dockerfile
     FROM ubuntu:latest
     RUN apt-get update -y
     RUN apt-get install -y python3-pip python-dev build-essential
     RUN pip3 install --upgrade pip
     COPY ./service/requirements.txt .
     RUN pip3 install -r ./requirements.txt
     COPY ./service /MythicalMysfitsService
     WORKDIR /MythicalMysfitsService
     EXPOSE 80
     ENTRYPOINT ["python3"]
     CMD ["mythicalMysfitsService.py"]
     

    To see the benefit of your optimizations, you'll need to first rebuild the monolith image using your new Dockerfile (use the same build command from step 2). Then, introduce a change in mythicalMysfitsService.py (e.g. add another arbitrary comment) and rebuild the monolith image again. Docker cached the requirements during the first rebuild after the re-ordering and references the cache during this second rebuild. You should see something similar to below:

     Step 6/11 : RUN pip3 install -r ./requirements.txt
      ---> Using cache
      ---> 612509a7a675
     Step 7/11 : COPY ./service /MythicalMysfitsService
      ---> c44c0cf7e04f
     Step 8/11 : WORKDIR /MythicalMysfitsService
      ---> Running in 8f634cb16820
     Removing intermediate container 8f634cb16820
      ---> 31541db77ed1
     Step 9/11 : EXPOSE 80
      ---> Running in 02a15348cd83
     Removing intermediate container 02a15348cd83
      ---> 6fd52da27f84
     

    You now have a Docker image built. The -t flag names the resulting container image. List your docker images and you'll see the "monolith-service" image in the list. Here's a sample output, note the monolith image in the list:

     $ docker images
     REPOSITORY                                                              TAG                 IMAGE ID            CREATED              SIZE
     monolith-service                                                        latest              29f339b7d63f        About a minute ago   506MB
     ubuntu                                                                  latest              ea4c82dcd15a        4 weeks ago          85.8MB
     golang                                                                  1.9                 ef89ef5c42a9        4 months ago         750MB
     

    Note: Your output will not be exactly like this, but it will be similar.

    Notice the image is also tagged as "latest". This is the default behavior if you do not specify a tag of your own, but you can use this as a freeform way to identify an image, e.g. monolith-service:1.2 or monolith-service:experimental. This is very convenient for identifying your images and correlating an image with a branch/version of code as well.
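
    For example, you could give the same image an additional, more descriptive tag and then confirm it with docker images (the version tag here is just an illustration, not something the workshop requires):

     $ docker tag monolith-service:latest monolith-service:1.0.0
     $ docker images monolith-service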

  3. Run the docker container and test the adoption agency platform running as a container:

    Use the docker run command to run your image; the -p flag is used to map the host listening port to the container listening port.

     $ docker run -p 8000:80 -e AWS_DEFAULT_REGION=REGION -e DDB_TABLE_NAME=TABLE_NAME monolith-service
     

    Note: You can find your DynamoDB table name in the file workshop-1/cfn-output.json derived from the outputs of the CloudFormation stack.

    Here's sample output as the application starts:

    * Running on http://0.0.0.0:80/ (Press CTRL+C to quit)
    

    Note: Your output will not be exactly like this, but it will be similar.

    To test the basic functionality of the monolith service, query the service using a utility like cURL, which is bundled with Cloud9.

    Click on the plus sign next to your tabs and choose New Terminal or click Window -> New Terminal from the Cloud9 menu to open a new shell session to run the following curl command.

     $ curl http://localhost:8000/mysfits
     

    You should see a JSON array with data about a number of Mythical Mysfits.

    Note: Processes running inside of the Docker container are able to authenticate with DynamoDB because they can access the EC2 metadata API endpoint running at 169.254.169.254 to retrieve credentials for the instance profile that was attached to our Cloud9 environment in the initial setup script. Processes in containers cannot access the ~/.aws/credentials file in the host filesystem (unless it is explicitly mounted into the container).
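
    If you're curious, you can see this credential lookup yourself from the Cloud9 terminal by querying the instance metadata service; the first command lists the role name attached to the instance (specific to your environment), and the second returns temporary credentials for it:

     $ curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
     $ curl http://169.254.169.254/latest/meta-data/iam/security-credentials/ROLE_NAME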

    Switch back to the original shell tab where you're running the monolith container to check the output from the monolith.

    The monolith container runs in the foreground with stdout/stderr printing to the screen, so when the request is received, you should see a 200 "OK".

    Here is sample output:

     INFO:werkzeug:172.17.0.1 - - [16/Nov/2018 22:24:18] "GET /mysfits HTTP/1.1" 200 -
     

    In the tab you have the running container, type Ctrl-C to stop the running container. Notice, the container ran in the foreground with stdout/stderr printing to the console. In a production environment, you would run your containers in the background and configure some logging destination. We'll worry about logging later, but you can try running the container in the background using the -d flag.

     $ docker run -d -p 8000:80 -e AWS_DEFAULT_REGION=REGION -e DDB_TABLE_NAME=TABLE_NAME monolith-service
     

    List running docker containers with the docker ps command to make sure the monolith is running.

     $ docker ps
     

    You should see monolith running in the list. Now repeat the same curl command as before, ensuring you see the same list of Mysfits. You can check the logs again by running docker logs (it takes a container name or id fragment as an argument).

     $ docker logs CONTAINER_ID
     

    Here's sample output from the above commands:

     $ docker run -d -p 8000:80 -e AWS_DEFAULT_REGION=REGION -e DDB_TABLE_NAME=TABLE_NAME monolith-service
     51aba5103ab9df25c08c18e9cecf540592dcc67d3393ad192ebeda6e872f8e7a
     $ docker ps
     CONTAINER ID        IMAGE                           COMMAND                  CREATED             STATUS              PORTS                  NAMES
     51aba5103ab9        monolith-service:latest         "python mythicalMysf…"   24 seconds ago      Up 23 seconds       0.0.0.0:8000->80/tcp   awesome_varahamihira
     $ curl localhost:8000/mysfits
     {"mysfits": [...]}
     $ docker logs 51a
      * Running on http://0.0.0.0:80/ (Press CTRL+C to quit)
     172.17.0.1 - - [16/Nov/2018 22:56:03] "GET /mysfits HTTP/1.1" 200 -
     INFO:werkzeug:172.17.0.1 - - [16/Nov/2018 22:56:03] "GET /mysfits HTTP/1.1" 200 -
     

    In the sample output above, the container was assigned the name "awesome_varahamihira". Names are arbitrarily assigned. You can also pass the docker run command a name option if you want to specify the running name. You can read more about it in the Docker run reference. Kill the container using docker kill now that we know it's working properly.
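
    For example, if you wanted a predictable name, you could run and later stop the container with something like the following (the name "monolith" is just an example):

     $ docker run -d --name monolith -p 8000:80 -e AWS_DEFAULT_REGION=REGION -e DDB_TABLE_NAME=TABLE_NAME monolith-service
     $ docker kill monolith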

  4. Now that you have a working Docker image, tag and push the image to Elastic Container Registry (ECR). ECR is a fully-managed Docker container registry that makes it easy to store, manage, and deploy Docker container images. In the next lab, we'll use ECS to pull your image from ECR.

    In the AWS Management Console, navigate to Repositories in the ECS dashboard. You should see repositories for the monolith service and the like service. These were created by CloudFormation and are named like STACK_NAME-mono-xxx and STACK_NAME-like-xxx, where STACK_NAME is the name of the CloudFormation stack (the stack name may be truncated).

    ECR repositories

    Click on the repository name for the monolith, and note down the Repository URI (you will use this value again in the next lab):

    ECR monolith repo

    Note: Your repository URI will be unique.

    Tag and push your container image to the monolith repository.

     $ docker tag monolith-service:latest ECR_REPOSITORY_URI:latest
     $ docker push ECR_REPOSITORY_URI:latest
     

    When you issue the push command, Docker pushes the layers up to ECR.

    Here's sample output from these commands:

     $ docker tag monolith-service:latest 873896820536.dkr.ecr.us-east-2.amazonaws.com/mysfit-mono-oa55rnsdnaud:latest
     $ docker push 873896820536.dkr.ecr.us-east-2.amazonaws.com/mysfit-mono-oa55rnsdnaud:latest
     The push refers to a repository [873896820536.dkr.ecr.us-east-2.amazonaws.com/mysfit-mono-oa55rnsdnaud:latest]
     0f03d692d842: Pushed
     ddca409d6822: Pushed
     d779004749f3: Pushed
     4008f6d92478: Pushed
     e0c4f058a955: Pushed
     7e33b38be0e9: Pushed
     b9c7536f9dd8: Pushed
     43a02097083b: Pushed
     59e73cf39f38: Pushed
     31df331e1f23: Pushed
     630730f8c75d: Pushed
     827cd1db9e95: Pushed
     e6e107f1da2f: Pushed
     c41b9462ea4b: Pushed
     latest: digest: sha256:a27cb7c6ad7a62fccc3d56dfe037581d314bd8bd0d73a9a8106d979ac54b76ca size: 3252
     

    Note: Typically, you'd have to log into your ECR repo. However, you did not need to authenticate docker with ECR because the Amazon ECR Credential Helper has been installed and configured for you on the Cloud9 Environment. This was done earlier when you ran the setup script. You can read more about the credentials helper in this article.
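
    For reference, if the credential helper were not configured, you would typically authenticate Docker with ECR manually using something like the following (AWS CLI v2 syntax; the account ID and region are placeholders for your own values):

     $ aws ecr get-login-password --region REGION | docker login --username AWS --password-stdin AWS_ACCOUNT_ID.dkr.ecr.REGION.amazonaws.com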

    If you refresh the ECR repository page in the console, you'll see a new image uploaded and tagged as latest.

    ECR push complete

Checkpoint:

At this point, you should have a working container for the monolith codebase stored in an ECR repository and ready to deploy with ECS in the next lab.

^ back to the top

Lab 2 - Deploy your container using ECR/ECS

Deploying individual containers is not difficult. However, when you need to coordinate many container deployments, a container management tool like ECS can greatly simplify the task (no pun intended).

ECS refers to a JSON formatted template called a Task Definition that describes one or more containers making up your application or service. The task definition is the recipe that ECS uses to run your containers as a task on your EC2 instances or AWS Fargate.

INFO: What is a task? A task is a running set of containers on a single host. You may hear or see 'task' and 'container' used interchangeably. Often, we refer to tasks instead of containers because a task is the unit of work that ECS launches and manages on your cluster. A task can be a single container, or multiple containers that run together.

Fun fact: a task is very similar to a Kubernetes 'pod'.

Most task definition parameters map to options and arguments passed to the docker run command which means you can describe configurations like which container image(s) you want to use, host:container port mappings, cpu and memory allocations, logging, and more.
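
To make that mapping concrete, here is a heavily trimmed sketch of what a Fargate task definition can look like in JSON (illustrative only; the workshop's actual task definitions are created for you by CloudFormation and the console, and names here are placeholders):

{
  "family": "mysfits-monolith",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "monolith-service",
      "image": "ECR_REPOSITORY_URI:latest",
      "portMappings": [{ "containerPort": 80, "protocol": "tcp" }],
      "environment": [{ "name": "DDB_TABLE_NAME", "value": "TABLE_NAME" }],
      "essential": true
    }
  ]
}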

In this lab, you will create a task definition to serve as a foundation for deploying the containerized adoption platform stored in ECR with ECS. You will be using the Fargate launch type, which lets you run containers without having to manage servers or other infrastructure. Fargate containers launch with a networking mode called awsvpc, which gives ECS tasks the same networking properties as EC2 instances. Tasks essentially receive their own elastic network interface. This offers benefits like task-specific security groups. Let's get started!

Lab 2 Architecture

Note: You will use the AWS Management Console for this lab, but remember that you can programmatically accomplish the same thing using the AWS CLI, SDKs, or CloudFormation.
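
For example, the Run Task step you'll perform in the console below maps to a CLI call roughly like this (a sketch; the UPPERCASE values are placeholders for resources in your own environment):

$ aws ecs run-task \
    --cluster CLUSTER_NAME \
    --launch-type FARGATE \
    --task-definition TASK_DEFINITION_NAME \
    --network-configuration "awsvpcConfiguration={subnets=[SUBNET_ID],securityGroups=[SECURITY_GROUP_ID],assignPublicIp=ENABLED}"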

Instructions:

  1. Create an ECS task definition that describes what is needed to run the monolith.

    The CloudFormation template you ran at the beginning of the workshop created some placeholder ECS resources running a simple "Hello World" NGINX container. (You can see this running now at the public endpoint for the ALB also created by CloudFormation available in cfn-output.json.) We'll begin to adapt this placeholder infrastructure to run the monolith by creating a new "Task Definition" referencing the container built in the previous lab.

    In the AWS Management Console, navigate to Task Definitions in the ECS dashboard. Find the Task Definition named Monolith-Definition-STACK_NAME, select it, and click "Create new revision". Select the "monolith-service" container under "Container Definitions", and update "Image" to point to the Image URI of the monolith container that you just pushed to ECR (something like 018782361163.dkr.ecr.us-east-1.amazonaws.com/mysfit-mono-oa55rnsdnaud:latest).

    Edit container example

  2. Check the CloudWatch logging settings in the container definition.

    In the previous lab, you attached to the running container to get stdout, but no one should be doing that in production and it's good operational practice to implement a centralized logging solution. ECS offers integration with CloudWatch logs through an awslogs driver that can be enabled in the container definition.
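
    In the task definition JSON, this corresponds to a logConfiguration block on the container definition, roughly like the following (a sketch; the log group name, region, and stream prefix will be specific to your stack):

     "logConfiguration": {
       "logDriver": "awslogs",
       "options": {
         "awslogs-group": "LOG_GROUP_NAME",
         "awslogs-region": "REGION",
         "awslogs-stream-prefix": "awslogs-mysfits"
       }
     }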

    Verify that under "Storage and Logging", the "log driver" is set to "awslogs".

    The Log configuration should look something like this:

    CloudWatch Logs integration

    Click "Update" to save the container settings and then "Create" to create the Task Definition revision.

  3. Run the task definition using the Run Task method.

    You should be at the task definition view where you can do things like create a new revision or invoke certain actions. In the Actions dropdown, select Run Task to launch your container.

    Run Task

    Configure the following fields:

    • Launch Type - select Fargate
    • Cluster - select your workshop cluster from the dropdown menu
    • Task Definition - select the task definition you created from the dropdown menu

    In the "VPC and security groups" section, enter the following:

    • Cluster VPC - Your workshop VPC, named like Mysfits-VPC-STACK_NAME
    • Subnets - Select a public subnet, such as Mysfits-PublicOne-STACK_NAME
    • Security groups - The default is fine, but you can confirm that it allows inbound traffic on port 80
    • Auto-assign public IP - "ENABLED"

    Leave all remaining fields as their defaults and click Run Task.

    You'll see the task start in the PENDING state (the placeholder NGINX task is still running as well).

    Task state

    After a few seconds, click on the refresh button until the task changes to a RUNNING state.

    Task state

  4. Test the running task by using cURL from your Cloud9 environment to send a simple GET request.

    First we need to determine the IP of your task. When using the "Fargate" launch type, each task gets its own ENI and Public/Private IP address. Click on the ID of the task you just launched to go to the detail page for the task. Note down the Public IP address to use with your curl command.

    Container Instance IP

    Run the same curl command as before (or view the endpoint in your browser) and ensure that you get a list of Mysfits in the response.

    HINT: curl refresher
     $ curl http://TASK_PUBLIC_IP_ADDRESS/mysfits
     

    Navigate to the CloudWatch Logs dashboard, and click on the monolith log group (e.g.: mysfits-MythicalMonolithLogGroup-LVZJ0H2I2N4). Logging statements are written to log streams within the log group. Click on the most recent log stream to view the logs. The output should look very familiar from your testing in Lab 1.

    CloudWatch Log Entries

    If the curl command was successful, stop the task by going to your cluster, select the Tasks tab, select the running monolith task, and click Stop.

Checkpoint:

Nice work! You've created a task definition and are able to deploy the monolith container using ECS. You've also enabled logging to CloudWatch Logs, so you can verify your container works as expected.

^ back to the top

Lab 3 - Scale the adoption platform monolith with an ALB

The Run Task method you used in the last lab is good for testing, but we need to run the adoption platform as a long running process.

In this lab, you will use an Elastic Load Balancing Application Load Balancer (ALB) to distribute incoming requests to your running containers. In addition to simple load balancing, this provides capabilities like path-based routing to different services.

What ties this all together is an ECS Service, which maintains a desired task count (i.e. n number of containers as long running processes) and integrates with the ALB (i.e. handles registration/deregistration of containers to the ALB). An initial ECS service and ALB were created for you by CloudFormation at the beginning of the workshop. In this lab, you'll update those resources to host the containerized monolith service. Later, you'll make a new service from scratch once we break apart the monolith.

Lab 3 Architecture

Instructions:

  1. Test the placeholder service:

    The CloudFormation stack you launched at the beginning of the workshop included an ALB in front of a placeholder ECS service running a simple container with the NGINX web server. Find the hostname for this ALB in the "LoadBalancerDNS" output variable in the cfn-output.json file, and verify that you can load the NGINX default page:

    NGINX default page

  2. Update the service to use your task definition:

    Find the ECS cluster named Cluster-STACK_NAME, then select the service named STACK_NAME-MythicalMonolithService-XXX and click "Update" in the upper right:

    update service

    Update the Task Definition to the revision you created in the previous lab, then click through the rest of the screens and update the service.

  3. Test the functionality of the website:

    You can monitor the progress of the deployment on the "Tasks" tab of the service page:

    monitoring the update

    The update is fully deployed once there is just one instance of the Task running the latest revision:

    fully deployed

    Visit the S3 static site for the Mythical Mysfits (which was empty earlier) and you should now see the page filled with Mysfits once your update is fully deployed. Remember you can access the website at http://BUCKET_NAME.s3-website.REGION.amazonaws.com/ where the bucket name can be found in the workshop-1/cfn-output.json file:

    the functional website

    Click the heart icon to like a Mysfit, then click the Mysfit to see a detailed profile, and ensure that the like count has incremented:

    like functionality

    This ensures that the monolith can read from and write to DynamoDB, and that it can process likes. Check the CloudWatch logs from ECS and ensure that you can see the "Like processed." message in the logs:

    like logs
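
    If you'd rather check from the terminal, you can also search the log group for that message with something like the following (substitute your own monolith log group name):

     $ aws logs filter-log-events --log-group-name LOG_GROUP_NAME --filter-pattern '"Like processed."'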

INFO: What is a service and how does it differ from a task??

An ECS service allows you to run and maintain a specified number (the "desired count") of instances of a task definition simultaneously in an ECS cluster.

tl;dr a Service is composed of multiple tasks and will keep them up and running. See the link above for more detail.
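
For example, scaling a service out is just a matter of changing its desired count, which you could also do from the CLI with something like this (a sketch; the cluster and service names are placeholders for your environment):

$ aws ecs update-service --cluster CLUSTER_NAME --service SERVICE_NAME --desired-count 2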

Checkpoint:

Sweet! Now you have a load-balanced ECS service managing your containerized Mythical Mysfits application. It's still a single monolith container, but we'll work on breaking it down next.

^ back to the top

Lab 4: Incrementally build and deploy each microservice using Fargate

It's time to break apart the monolithic adoption platform into microservices. To help with this, let's look at how the monolith works in more detail.

The monolith serves up several different API resources on different routes to fetch info about Mysfits, "like" them, or adopt them.

The logic for these resources generally consists of some "processing" (like ensuring that the user is allowed to take a particular action, that a Mysfit is eligible for adoption, etc) and some interaction with the persistence layer, which in this case is DynamoDB.

It is often a bad idea to have many different services talking directly to a single database (adding indexes and doing data migrations is hard enough with just one application), so rather than split off all of the logic of a given resource into a separate service, we'll start by moving only the "processing" business logic into a separate service and continue to use the monolith as a facade in front of the database. This is sometimes described as the Strangler Application pattern, as we're "strangling" the monolith out of the picture and only continuing to use it for the parts that are toughest to move out until it can be fully replaced.

The ALB has another feature called path-based routing, which routes traffic based on URL path to particular target groups. This means you will only need a single instance of the ALB to host your microservices. The monolith service will receive all traffic to the default path, '/'. Adoption and like services will be '/adopt' and '/like', respectively.
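
You'll configure this routing through the console later in this lab, but for reference, a path-based ALB rule boils down to something like the following CLI call, shown here with the like service's path pattern used later in this lab (a sketch; the listener and target group ARNs are specific to your environment):

$ aws elbv2 create-rule \
    --listener-arn LISTENER_ARN \
    --priority 1 \
    --conditions Field=path-pattern,Values='/mysfits/*/like' \
    --actions Type=forward,TargetGroupArn=TARGET_GROUP_ARN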

Here's what you will be implementing:

Lab 4

Note: The green tasks denote the monolith and the orange tasks denote the "like" microservice

As with the monolith, you'll be using Fargate to deploy these microservices, but this time we'll walk through all the deployment steps for a fresh service.

Instructions:

  1. First, we need to add some glue code in the monolith to support moving the "like" function into a separate service. You'll use your Cloud9 environment to do this. If you've closed the tab, go to the Cloud9 Dashboard and find your environment. Click "Open IDE". Find the app/monolith-service/service/mythicalMysfitsService.py source file, and uncomment the following section:

    # @app.route("/mysfits/<mysfit_id>/fulfill-like", methods=['POST'])
    # def fulfillLikeMysfit(mysfit_id):
    #     serviceResponse = mysfitsTableClient.likeMysfit(mysfit_id)
    #     flaskResponse = Response(serviceResponse)
    #     flaskResponse.headers["Content-Type"] = "application/json"
    #     return flaskResponse
    

    This provides an endpoint that can still manage persistence to DynamoDB, but omits the "business logic" (okay, in this case it's just a print statement, but in real life it could involve permissions checks or other nontrivial processing) handled by the process_like_request function.

  2. With this new functionality added to the monolith, rebuild the monolith docker image with a new tag, such as nolike, and push it to ECR just as before (it is a best practice to avoid the latest tag, which can be ambiguous; instead choose a unique, descriptive name, or, even better, use a Git SHA and/or build ID):

     $ cd app/monolith-service
     $ docker build -t monolith-service:nolike .
     $ docker tag monolith-service:nolike ECR_REPOSITORY_URI:nolike
     $ docker push ECR_REPOSITORY_URI:nolike
     
  3. Now, just as in Lab 2, create a new revision of the monolith Task Definition (this time pointing to the "nolike" version of the container image), AND update the monolith service to use this revision as you did in Lab 3.

  4. Now, build the like service and push it to ECR.

    To find the like-service ECR repo URI, navigate to Repositories in the ECS dashboard, and find the repo named like STACK_NAME-like-XXX. Click on the like-service repository and copy the repository URI.

    Getting Like Service Repo

    Note: Your URI will be unique.

     $ cd app/like-service
     $ docker build -t like-service .
     $ docker tag like-service:latest ECR_REPOSITORY_URI:latest
     $ docker push ECR_REPOSITORY_URI:latest
     
  5. Create a new Task Definition for the like service using the image pushed to ECR.

    Navigate to Task Definitions in the ECS dashboard. Click on Create New Task Definition.

    Select Fargate launch type, and click Next step.

    Enter a name for your Task Definition, e.g. mysfits-like.

    In the "Task execution IAM role" section, Fargate needs an IAM role to be able to pull container images and log to CloudWatch. Select the role named like STACK_NAME-EcsServiceRole-XXXXX that was already created for the monolith service.

    The "Task size" section lets you specify the total cpu and memory used for the task. This is different from the container-specific cpu and memory values, which you will also configure when adding the container definition.

    Select 0.5GB for Task memory (GB) and select 0.25vCPU for Task CPU (vCPU).

    Your progress should look similar to this:

    Fargate Task Definition

    Click Add container to associate the like service container with the task.

    Enter values for the following fields:

    • Container name - this is a logical identifier, not the name of the container image (e.g. mysfits-like).
    • Image - this is a reference to the container image stored in ECR. The format should be the same value you used to push the like service container to ECR -
      ECR_REPOSITORY_URI:latest
    • Port mapping - set the container port to be 80.

    Here's an example:

    Fargate like service container definition

    Note: Notice you didn't have to specify the host port because Fargate uses the awsvpc network mode. Depending on the launch type (EC2 or Fargate), some task definition parameters are required and some are optional. You can learn more from our task definition documentation.

    The like service code is designed to call an endpoint on the monolith to persist data to DynamoDB. It references an environment variable called MONOLITH_URL to know where to send fulfillment.
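
    To give a sense of what that looks like, here is a simplified illustration (not the workshop's actual source) of the kind of call the like service makes after doing its own processing:

     import os
     import requests

     monolith_url = os.environ['MONOLITH_URL']

     def fulfill_like(mysfit_id):
         # Hand persistence off to the monolith's fulfill-like endpoint
         return requests.post(
             "http://{}/mysfits/{}/fulfill-like".format(monolith_url, mysfit_id)
         )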

    Scroll down to the "Advanced container configuration" section, and in the "Environment" section, create an environment variable using MONOLITH_URL for the key. For the value, enter the ALB DNS name that currently fronts the monolith.

    Here's an example (make sure you enter just the hostname like alb-mysfits-1892029901.eu-west-1.elb.amazonaws.com without any "http" or slashes):

    monolith env var

    Fargate conveniently enables logging to CloudWatch for you. Keep the default log settings and take note of the awslogs-group and the awslogs-stream-prefix, so you can find the logs for this task later.

    Here's an example:

    Fargate logging

    Click Add to associate the container definition, and click Create to create the task definition.

  6. Create an ECS service to run the Like Service task definition you just created and associate it with the existing ALB.

    Navigate to the new revision of the Like task definition you just created. Under the Actions drop down, choose Create Service.

    Configure the following fields:

    • Launch type - select Fargate
    • Cluster - select your workshop ECS cluster
    • Service name - enter a name for the service (e.g. mysfits-like-service)
    • Number of tasks - enter 1.

    Here's an example:

    ECS Service

    Leave other settings as defaults and click Next Step

    Since the task definition uses awsvpc network mode, you can choose which VPC and subnet(s) to host your tasks.

    For Cluster VPC, select your workshop VPC. And for Subnets, select the private subnets; you can identify these based on the tags.

    Leave the default security group which allows inbound port 80. If you had your own security groups defined in the VPC, you could assign them here.

    Here's an example:

    ECS Service VPC

    Scroll down to "Load balancing" and select Application Load Balancer for Load balancer type.

    You'll see a Load balancer name drop-down menu appear. Select the same Mythical Mysfits ALB used for the monolith ECS service.

    In the "Container to load balance" section, select the Container name : port combo from the drop-down menu that corresponds to the like service task definition.

    Your progress should look similar to this:

    ECS Load Balancing

    Click Add to load balancer to reveal more settings.

    For the Production listener Port, select 80:HTTP from the drop-down.

    For the Target Group Name, you'll need to create a new group for the Like containers, so leave it as "create new" and replace the auto-generated value with mysfits-like. This is a friendly name to identify the target group, so any value that relates to the Like microservice will do.

    Change the path pattern to /mysfits/*/like. The ALB uses this path to route traffic to the like service target group. This is how multiple services are being served from the same ALB listener. Note the existing default path routes to the monolith target group.

    For Evaluation order enter 1. Edit the Health check path to be /.

    And finally, uncheck Enable service discovery integration. While public namespaces are supported, a public zone needs to be configured in Route53 first. Consider this convenient feature for your own services, and you can read more about service discovery in our documentation.

    Your configuration should look similar to this:

    Like Service

    Leave the other fields as defaults and click Next Step.

    Skip the Auto Scaling configuration by clicking Next Step.

    Click Create Service on the Review page.

    Once the Service is created, click View Service and you'll see your task definition has been deployed as a service. It starts out in the PROVISIONING state, progresses to the PENDING state, and if your configuration is successful, the service will finally enter the RUNNING state. You can see these state changes by periodically clicking the refresh button.

  7. Once the new like service is deployed, test liking a Mysfit again by visiting the website. Check the CloudWatch logs again and make sure that the like service now shows a "Like processed." message. If you see this, you have successfully factored out like functionality into the new microservice!

  8. If you have time, you can now remove the old like endpoint from the monolith now that it is no longer seeing production use.

    Go back to your Cloud9 environment where you built the monolith and like service container images.

    In the monolith folder, open mythicalMysfitsService.py in the Cloud9 editor and find the code that reads:

    # increment the number of likes for the provided mysfit.
    @app.route("/mysfits/<mysfit_id>/like", methods=['POST'])
    def likeMysfit(mysfit_id):
        serviceResponse = mysfitsTableClient.likeMysfit(mysfit_id)
        process_like_request()
        flaskResponse = Response(serviceResponse)
        flaskResponse.headers["Content-Type"] = "application/json"
        return flaskResponse
    

    Once you find that code, you can delete it or comment it out.

    Tip: if you're not familiar with Python, you can comment out a line by adding a hash character, "#", at the beginning of the line.

  9. Build, tag and push the monolith image to the monolith ECR repository.

    Use the tag nolike2 now instead of nolike.

     $ docker build -t monolith-service:nolike2 .
     $ docker tag monolith-service:nolike2 ECR_REPOSITORY_URI:nolike2
     $ docker push ECR_REPOSITORY_URI:nolike2
     

    If you look at the monolith repository in ECR, you'll see the pushed image tagged as nolike2:

    ECR nolike image

  10. Now make one last Task Definition for the monolith to refer to this new container image URI (this process should be familiar now, and you can probably see that it makes sense to leave this drudgery to a CI/CD service in production), update the monolith service to use the new Task Definition, and make sure the app still functions as before.
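
    For reference, a CI/CD pipeline would typically script that drudgery with commands along these lines (a sketch; the task definition JSON file and the cluster/service names are placeholders for values from your environment):

     $ aws ecs register-task-definition --cli-input-json file://monolith-taskdef.json
     $ aws ecs update-service --cluster CLUSTER_NAME --service MONOLITH_SERVICE_NAME --task-definition TASK_DEFINITION_FAMILY:NEW_REVISION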

Checkpoint:

Congratulations, you've successfully rolled out the like microservice from the monolith. If you have time, try repeating this lab to break out the adoption microservice. Otherwise, please remember to follow the steps below in the Workshop Cleanup to make sure all assets created during the workshop are removed so you do not see unexpected charges after today.

Workshop Cleanup

This is really important because if you leave stuff running in your account, it will continue to generate charges. Certain things were created by CloudFormation and certain things were created manually throughout the workshop. Follow the steps below to make sure you clean up properly.

Delete manually created resources throughout the labs:

  • ECS service(s) - first update the desired task count to be 0. Then delete the ECS service itself.
  • ECR - delete any Docker images pushed to your ECR repository.
  • CloudWatch logs groups
  • ALBs and associated target groups

Finally, delete the CloudFormation stack launched at the beginning of the workshop to clean up the rest. If the stack deletion process encounters errors, look at the Events tab in the CloudFormation dashboard, and you'll see what steps failed. It might just be a case where you need to clean up a manually created asset that is tied to a resource governed by CloudFormation.
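
If you prefer to script parts of the cleanup, the manual steps above map to CLI commands roughly like these (a sketch; names and ARNs are placeholders for your environment, and anything created by CloudFormation is removed by the stack deletion itself):

$ aws ecs update-service --cluster CLUSTER_NAME --service SERVICE_NAME --desired-count 0
$ aws ecs delete-service --cluster CLUSTER_NAME --service SERVICE_NAME
$ aws ecr batch-delete-image --repository-name REPOSITORY_NAME --image-ids imageTag=latest
$ aws logs delete-log-group --log-group-name LOG_GROUP_NAME
$ aws elbv2 delete-target-group --target-group-arn TARGET_GROUP_ARN
$ aws cloudformation delete-stack --stack-name STACK_NAME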