This example demonstrates how HashiCorp tools run on AWS, including:
- Boundary
- HashiCorp Cloud Platform Vault
- HashiCorp Cloud Platform Consul
- Terraform Cloud
It uses the following AWS services:
- Amazon ECS
- AWS KMS
To run this example, you need a Terraform Cloud organization with a series of workspaces. Set up each workspace as follows, with the appropriate working directory, variables, and remote state sharing.
| Workspace Name | Working Directory for VCS | Variables | Remote State Sharing |
|---|---|---|---|
| hcp | `hcp/` | name, trusted_role_arn, bootstrap AWS access keys, HCP credentials | infrastructure, consul, boundary, vault-aws |
| vault-aws | `vault/aws/` | name, AWS access keys (for AWS secrets engine) | |
| infrastructure | `infrastructure/` | name, client_cidr_block, HCP service principal credentials, database_password, boundary_database_password, key_pair_name. [FROM VAULT] AWS access keys | boundary, apps, vault-products |
| vault-products | `vault/products/` | name, HCP service principal credentials | boundary |
| boundary | `boundary/` | name. [FROM VAULT] db_password, db_username, AWS access keys | |
| apps | `apps/` | name, client_cidr_block. [FROM VAULT] db_password, db_username, AWS access keys | |
You need to run plan and apply for each workspace in the order indicated.
Imagine you want to issue AWS access keys to each group that runs Terraform. You can use Vault's AWS secrets engine to generate access keys for each group.
For example, you set up an initial AWS access key and secret key for Vault to use when issuing new credentials. The Vault-issued credentials then assume a role with sufficient permissions for Terraform to configure infrastructure on AWS.
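The role-assumption pattern described above can be sketched in Terraform. This is a minimal illustration, not the repository's actual configuration: the role name and the attached policy are assumptions.

```hcl
# Illustrative only: an IAM role that Vault-issued credentials assume.
# Role name and managed policy are placeholders.
data "aws_caller_identity" "current" {}

resource "aws_iam_role" "terraform" {
  name = "terraform-infrastructure"

  # Allow principals in this account (e.g., Vault-issued IAM users)
  # to assume the role.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { AWS = data.aws_caller_identity.current.account_id }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "terraform" {
  role       = aws_iam_role.terraform.name
  policy_arn = "arn:aws:iam::aws:policy/PowerUserAccess"
}
```

The role's ARN would then be passed to the `hcp` workspace as `trusted_role_arn`.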
1. Run `terraform apply` for the `hcp` workspace. It creates:
   - HCP network
   - HCP Vault cluster
   - HCP Consul cluster
   - AWS IAM role for Terraform
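The HCP side of this workspace can be sketched with the `hashicorp/hcp` provider. The cluster IDs, CIDR, and tiers below are placeholders, not the repository's actual values.

```hcl
# Illustrative sketch of the hcp workspace resources.
resource "hcp_hvn" "main" {
  hvn_id         = "hashicups-hvn"
  cloud_provider = "aws"
  region         = "us-east-1"
  cidr_block     = "172.25.16.0/20"
}

resource "hcp_vault_cluster" "main" {
  cluster_id = "hashicups-vault"
  hvn_id     = hcp_hvn.main.hvn_id
  tier       = "dev"
}

resource "hcp_consul_cluster" "main" {
  cluster_id = "hashicups-consul"
  hvn_id     = hcp_hvn.main.hvn_id
  tier       = "development"
}
```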
2. Run `source set.sh`. This sets the Vault address, token, and namespace so you can get a new set of AWS access keys from Vault in your CLI.
3. Generate a new set of AWS access keys for the Vault AWS secrets engine. These should be different from the ones you used to bootstrap HCP and the AWS IAM role!

4. Add the new AWS access keys to the `vault-aws` workspace.
5. Run `terraform apply` for the `vault-aws` workspace. It creates:
   - Path for AWS secrets engine in Vault at `terraform/aws`
   - Role for your team (e.g., `hashicups`)
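The secrets engine mount and role can be sketched with the Vault provider. This is an assumption-laden sketch: the variable names and the role's policy (which only allows assuming the bootstrap role) are illustrative.

```hcl
# Illustrative sketch: AWS secrets engine mounted at terraform/aws,
# with a role that generates IAM users for the hashicups team.
resource "vault_aws_secret_backend" "terraform" {
  path       = "terraform/aws"
  access_key = var.aws_access_key # dedicated bootstrap key for Vault
  secret_key = var.aws_secret_key
}

resource "vault_aws_secret_backend_role" "hashicups" {
  backend         = vault_aws_secret_backend.terraform.path
  name            = "hashicups"
  credential_type = "iam_user"

  # Placeholder policy: only allow assuming the Terraform role.
  policy_document = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = "sts:AssumeRole"
      Resource = var.trusted_role_arn
    }]
  })
}
```

With this in place, Vault issues fresh credentials from `terraform/aws/creds/hashicups`.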
6. Run `make vault-aws`. This retrieves a new set of AWS access keys from Vault via the secrets engine and saves it to the `secrets/` directory locally.
7. Use the AWS access and secret keys from `secrets/aws.json` and add them to the `infrastructure`, `boundary`, and `apps` workspaces.
8. Run `terraform apply` for the `infrastructure` workspace. It creates:
   - AWS VPC, peered to the HCP network
   - HashiCups database (PostgreSQL)
   - Boundary cluster (1 worker, 1 controller, database)
   - Amazon ECS cluster (1 EC2 container instance)
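The VPC-to-HVN peering can be sketched as follows. This is an assumption-heavy illustration: the HVN ID would come from the `hcp` workspace's shared remote state, and `aws_vpc.main` stands in for the VPC defined elsewhere in the workspace.

```hcl
# Illustrative sketch of peering the AWS VPC to the HVN.
data "aws_caller_identity" "current" {}

resource "hcp_aws_network_peering" "main" {
  hvn_id          = var.hvn_id # from the hcp workspace's remote state
  peering_id      = "hashicups"
  peer_vpc_id     = aws_vpc.main.id
  peer_account_id = data.aws_caller_identity.current.account_id
  peer_vpc_region = "us-east-1"
}

# Accept the peering request on the AWS side.
resource "aws_vpc_peering_connection_accepter" "hcp" {
  vpc_peering_connection_id = hcp_aws_network_peering.main.provider_peering_id
  auto_accept               = true
}
```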
We need to generate a few things for the products API (and Boundary):

- Database secrets engine for HashiCups data (used by `product-api` and Boundary)
- AWS IAM auth method for `vault-agent` in the HashiCups `product-api`

To configure this, you need to add HCP Vault credentials (the Vault address, token, and namespace) to the `vault-products` workspace.
You have two identities that need to access the application's database:

- Application (`product-api`) to read from the database
- Human user (`ops` or `dev` team) to update the database using Boundary
Configure the following:

1. Run `terraform apply` for the `vault-products` workspace. It creates:
   - Path for database credentials in Vault at `hashicups/database`
   - Role for the application that will access it (e.g., `product`)
   - Role for the Boundary user to access it (e.g., `boundary`)
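The database secrets engine can be sketched as follows. Connection details, database name, and the exact creation statements are placeholders, not the repository's actual configuration; the `boundary` role would look similar with broader grants.

```hcl
# Illustrative sketch: database secrets engine at hashicups/database.
resource "vault_mount" "database" {
  path = "hashicups/database"
  type = "database"
}

resource "vault_database_secret_backend_connection" "hashicups" {
  backend       = vault_mount.database.path
  name          = "hashicups"
  allowed_roles = ["product", "boundary"]

  postgresql {
    # Placeholder endpoint; Vault templates in its own credentials.
    connection_url = "postgresql://{{username}}:{{password}}@${var.db_endpoint}:5432/products"
    username       = var.db_username
    password       = var.db_password
  }
}

resource "vault_database_secret_backend_role" "product" {
  backend = vault_mount.database.path
  name    = "product"
  db_name = vault_database_secret_backend_connection.hashicups.name

  creation_statements = [
    "CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';",
    "GRANT SELECT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";"
  ]
}
```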
Boundary needs a set of organizations and projects. You have two projects:

- `core_infra`: ECS container instance. Allow the `ops` team to SSH into it.
- `product_infra`: Application database. Allow the `ops` or `dev` team to configure it.
Configure the following:

1. Run `terraform apply` for the `boundary` workspace. It creates:
   - Two projects, one for `core_infra` and the other for `product_infra`
   - Three users: `jeff` for the `ops` team, `rosemary` for the `dev` team, and `taylor` for the `security` team
   - Two targets:
     - ECS container instance (not yet added)
     - Application database, brokered by Vault credentials
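The database target with Vault-brokered credentials can be sketched as follows. This is an illustration under assumptions: resource names are placeholders, host sources are attached separately, and older Boundary provider versions spell the last argument `application_credential_source_ids`.

```hcl
# Illustrative sketch: Vault as a credential store for the database target.
resource "boundary_credential_store_vault" "hashicups" {
  name     = "hashicups-vault"
  address  = var.vault_address
  token    = var.vault_token # token with access to hashicups/database
  scope_id = boundary_scope.product_infra.id
}

resource "boundary_credential_library_vault" "database" {
  name                = "database"
  credential_store_id = boundary_credential_store_vault.hashicups.id
  path                = "hashicups/database/creds/boundary"
}

resource "boundary_target" "database" {
  name         = "hashicups-database"
  type         = "tcp"
  scope_id     = boundary_scope.product_infra.id
  default_port = 5432

  brokered_credential_source_ids = [
    boundary_credential_library_vault.database.id
  ]
}
```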
2. Run `source set.sh` to set your Boundary address.
3. Run `make boundary-host-catalog` to configure the host catalog for the ECS container instances. This uses dynamic host catalog plugins in Boundary to auto-discover AWS EC2 instances with the cluster tag.
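A dynamic host catalog for this can be sketched with the Boundary AWS host plugin. The region, tag filter, and variable names below are assumptions, not the repository's actual values.

```hcl
# Illustrative sketch: AWS dynamic host catalog that discovers EC2
# container instances by tag.
resource "boundary_host_catalog_plugin" "ecs" {
  name        = "ecs-instances"
  scope_id    = boundary_scope.core_infra.id
  plugin_name = "aws"

  attributes_json = jsonencode({
    region                      = "us-east-1"
    disable_credential_rotation = true
  })

  secrets_json = jsonencode({
    access_key_id     = var.aws_access_key
    secret_access_key = var.aws_secret_key
  })
}

resource "boundary_host_set_plugin" "ecs" {
  name            = "ecs-cluster"
  host_catalog_id = boundary_host_catalog_plugin.ecs.id

  # Match instances tagged with the ECS cluster name (placeholder value).
  attributes_json = jsonencode({
    filters = ["tag:cluster=hashicups"]
  })
}
```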
4. You can also SSH into the ECS container instance as the `ops` team. Run `make ssh-ecs`.
5. Boundary uses Vault as a credential store to retrieve a new set of database credentials! Run `make configure-db` to log into Boundary as the `dev` team and configure the database, all without knowing the username or password!
You may need to control network policy between services on ECS and other services registered to Consul. You can use intentions to secure service-to-service communication.
1. Run `terraform apply` for the `apps` workspace. It creates three ECS services:
   - `frontend` (Fargate launch type)
   - `public-api` (Fargate launch type)
   - `product-api` (EC2 launch type)
2. Run `terraform apply` for the `vault-products` workspace. It adds:
   - AWS IAM auth method for the ECS task to authenticate to Vault
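The auth method can be sketched as follows. The role name, bound ARN variable, and token policy are placeholders: the real configuration binds the ECS task role used by `vault-agent`.

```hcl
# Illustrative sketch: AWS IAM auth method so the product-api ECS task
# (via vault-agent) can log in to Vault with its task role.
resource "vault_auth_backend" "aws" {
  type = "aws"
}

resource "vault_aws_auth_backend_role" "product_api" {
  backend                  = vault_auth_backend.aws.path
  role                     = "product-api"
  auth_type                = "iam"
  bound_iam_principal_arns = [var.ecs_task_role_arn]
  token_policies           = ["hashicups-database"] # placeholder policy
}
```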
3. Run `make products` to mark the `product-api` for recreation.
4. Run `terraform apply` for the `apps` workspace. It should redeploy the `product-api`.
5. Try to access the frontend via the ALB. You might get an error! You need to enable traffic between the services registered to Consul.
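Enabling that traffic amounts to Consul intentions along the service chain. A minimal sketch with the `consul_intention` resource, assuming the service names match the ECS services above:

```hcl
# Illustrative sketch: allow frontend -> public-api -> product-api traffic.
resource "consul_intention" "frontend_to_public_api" {
  source_name      = "frontend"
  destination_name = "public-api"
  action           = "allow"
}

resource "consul_intention" "public_api_to_product_api" {
  source_name      = "public-api"
  destination_name = "product-api"
  action           = "allow"
}
```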
6. Try to access the frontend via the ALB. You'll get a `Packer Spiced Latte`!