terraform-aws-bootstrapper

A Terraform module that configures a Terraform remote state for your AWS accounts.

Overview

Terraform uses state to keep track of your infrastructure and configuration. When working with Terraform in a team setting, it is recommended to use a remote state. This module creates all of the resources necessary to keep your state files in a centralized remote location using an S3 backend. It also creates DynamoDB table(s) that track deployment locks, preventing concurrent runs from accidentally overlapping.
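To make the idea concrete, the kind of resources involved can be sketched in plain Terraform as follows. This is a minimal illustration only, not the module's actual implementation; the resource names are hypothetical, and the bucket and lock-table names reuse the examples from later in this README.

# Sketch only: hypothetical resource names, not the module's real internals.
resource "aws_s3_bucket" "terraform_state" {
  # Central bucket that holds every project's remote state files
  bucket = "my-terraform-remote-state-bucket"
}

resource "aws_s3_bucket_versioning" "terraform_state" {
  # Versioning makes it possible to recover earlier state revisions
  bucket = aws_s3_bucket.terraform_state.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_dynamodb_table" "terraform_locks" {
  # The S3 backend acquires a lock item keyed by "LockID" before each run
  name         = "my-terraform-remote-state-locktable"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}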

Usage

This module serves as a blueprint to quickly create and configure all of the resources needed to use an S3 backend with your Terraform project.

Note: This module is designed as a once-and-done deployment; however, you may want to consider keeping the state file it generates in your code repository. If you choose to retain the state generated by this module, the module itself must be kept in complete isolation from the rest of your project.

Directory Structure

In the example below the infrastructure is split so that each subdirectory represents a separate, isolated state.

terraform-project/
├── application1/
│   └── application1.tf
├── application2/
│   └── application2.tf
├── network/
│   └── vpc.tf
└── bootstrap/
    └── bootstrap.tf

Module Variables

bucket_name

  • A globally unique S3 bucket name where Terraform state files will be managed.

bucket_source_account_id

  • The AWS account ID in which this S3 bucket will be provisioned.

account_ids

  • A list of AWS account IDs where you intend to use Terraform to deploy infrastructure.

lock_table_names

  • A list of lock-table names to be created; in most cases a single table is more than adequate.

tags

  • Optional; a map of (key, value) tag pairs applied to the S3 bucket and DynamoDB table(s).
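
principals

  • Optional; a list of IAM principals (for example, role/role-name) permitted to access the bucket, where each entry is paired with the corresponding entry in account_ids (see the second example below).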

Module Example

It is recommended to create a directory named bootstrap within your Terraform project and, within that directory, a new .tf file with a name of your choosing. Copy and paste the following code snippet, then customize the settings to your liking.

module "my-aws-terraform-remote-state" {
  source                   = "USSBA/bootstrapper/aws"
  bucket_name              = "my-terraform-remote-state-bucket"
  bucket_source_account_id = "000011112222"
  account_ids              = ["000011112222", "333344445555"]
  lock_table_names         = ["my-terraform-remote-state-locktable"]

  tags = {
    foo = "bar"
  }
}

If you want to limit access to the bucket to a specific set of AWS principals, provide the principals variable as well:

module "my-aws-terraform-remote-state" {
  source                   = "USSBA/bootstrapper/aws"
  bucket_name              = "my-bucket"
  bucket_source_account_id = "123412341234"
  lock_table_names         = ["my-locktable"]
  account_ids              = [
    "123412341234",
    "678967896789"
  ]
  principals               = [
    "role/role-name", # matches with account_ids[0] --> 123412341234
    "role/role-name", # matches with account_ids[1] --> 678967896789
  ]
}

Configuration

Once you have applied the bootstrap module to each of your AWS accounts, you can take advantage of those resources to configure the S3 backend.

Using the directory structure suggested above, let's create a new file in the application1 directory called backend.tf. Copy and paste the following code snippet, then customize the settings to match the resources created by the bootstrap module. Run terraform init and voilà: you are now free to create any number of Terraform workspaces (or not), and your state files are tracked remotely.

terraform {
  backend "s3" {
    bucket               = "my-terraform-remote-state-bucket"
    key                  = "application1.tfstate"
    region               = "us-east-1"
    dynamodb_table       = "my-terraform-remote-state-locktable"
    workspace_key_prefix = "applications"
    acl                  = "bucket-owner-full-control"
  }
}
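
If you do use workspaces, note that the S3 backend stores the default workspace's state at the key itself (application1.tfstate), while every other workspace is stored under the workspace_key_prefix; for example, a workspace named staging would be stored at applications/staging/application1.tfstate.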

Now let's configure the application2 backend; notice that the key changes so that the state file names do not conflict.

terraform {
  backend "s3" {
    bucket               = "my-terraform-remote-state-bucket"
    key                  = "application2.tfstate"
    region               = "us-east-1"
    dynamodb_table       = "my-terraform-remote-state-locktable"
    workspace_key_prefix = "applications"
    acl                  = "bucket-owner-full-control"
  }
}
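
The bucket-owner-full-control ACL matters when the state bucket lives in a different AWS account than the one running Terraform: it keeps state objects written from those accounts fully accessible to the account that owns the bucket.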

Contributing

We welcome contributions. To contribute, please read our CONTRIBUTING document.

All contributions are subject to the license and in no way imply compensation for contributions.

Code of Conduct

We strive for a welcoming and inclusive environment for all SBA projects.

Please follow these guidelines in all interactions:

  • Be Respectful: use welcoming and inclusive language.
  • Assume best intentions: seek to understand others' opinions.

Security Policy

Please do not submit an issue on GitHub for a security vulnerability. Instead, contact the development team through HQVulnerabilityManagement. Be sure to include all pertinent information.

The agency reserves the right to change this policy at any time.
