This repository has been archived by the owner on Jan 25, 2023. It is now read-only.

Merge pull request #35 from hashicorp/remove-s3
Remove S3
KFishner authored Jan 25, 2018
2 parents 40b8b8e + 037a717 commit a61761c
Showing 22 changed files with 237 additions and 303 deletions.
9 changes: 4 additions & 5 deletions README.md
@@ -2,9 +2,8 @@

This repo contains a Module for how to deploy a [Vault](https://www.vaultproject.io/) cluster on
[AWS](https://aws.amazon.com/) using [Terraform](https://www.terraform.io/). Vault is an open source tool for managing
-secrets. This Module uses [S3](https://aws.amazon.com/s3/) as a [storage
-backend](https://www.vaultproject.io/docs/configuration/storage/index.html) and a [Consul](https://www.consul.io)
-server cluster as a [high availability backend](https://www.vaultproject.io/docs/concepts/ha.html):
+secrets. By default, this Module uses [Consul](https://www.consul.io) as a [storage
+backend](https://www.vaultproject.io/docs/configuration/storage/index.html).

![Vault architecture](https://github.com/hashicorp/terraform-aws-vault/blob/master/_docs/architecture.png?raw=true)

@@ -73,8 +72,8 @@ Each Module has the following folder structure:
Click on each of the modules above for more details.

To deploy Vault with this Module, you will need to deploy two separate clusters: one to run
-[Consul](https://www.consul.io/) servers (which Vault uses as a [high availability
-backend](https://www.vaultproject.io/docs/concepts/ha.html)) and one to run Vault servers.
+[Consul](https://www.consul.io/) servers (which Vault uses as a [storage
+backend](https://www.vaultproject.io/docs/configuration/storage/index.html)) and one to run Vault servers.

To deploy the Consul server cluster, use the [Consul AWS Module](https://github.com/hashicorp/terraform-aws-consul).

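Taken together, the wiring for the two clusters looks roughly like the sketch below. This is not part of this commit: the `consul_cluster` argument names are assumptions based on the terraform-aws-consul module, and only the `vault_cluster` arguments also visible in main.tf later in this diff are taken from the changed files.

```
# Sketch only. Deploy the Consul servers first, then the Vault cluster.
# The consul-cluster argument names are assumptions; the vault_cluster
# arguments mirror the ones shown in main.tf in this diff.
module "consul_cluster" {
  source = "github.com/hashicorp/terraform-aws-consul//modules/consul-cluster"

  cluster_name = "consul-example"
  cluster_size = 3
  ami_id       = "${var.consul_ami_id}"

  # Tag the servers so the Consul agents on the Vault nodes can discover them
  cluster_tag_key   = "${var.consul_cluster_tag_key}"
  cluster_tag_value = "${var.consul_cluster_name}"
}

module "vault_cluster" {
  source = "github.com/hashicorp/terraform-aws-vault//modules/vault-cluster"

  cluster_name = "vault-example"
  cluster_size = 3
  ami_id       = "${var.vault_ami_id}"
  user_data    = "${data.template_file.user_data_vault_cluster.rendered}"
}
```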
Binary file modified _docs/architecture-elb.png
Binary file modified _docs/architecture.png
14 changes: 7 additions & 7 deletions circle.yml
@@ -1,21 +1,21 @@
machine:
  environment:
-    PATH: $PATH:$HOME/terraform:$HOME/packer:$HOME/glide/linux-amd64
+    PATH: $PATH:$HOME/terraform:$HOME/packer
    VAULT_HOSTED_ZONE_DOMAIN_NAME: gruntwork.in # Domain name of Route 53 hosted zone to use at test time

dependencies:
  override:
    # Install the gruntwork-module-circleci-helpers and use it to configure the build environment and run tests.
-    - curl -Ls https://raw.githubusercontent.com/gruntwork-io/gruntwork-installer/master/bootstrap-gruntwork-installer.sh | bash /dev/stdin --version v0.0.16
-    - gruntwork-install --module-name "gruntwork-module-circleci-helpers" --repo "https://github.com/gruntwork-io/module-ci" --tag "v0.3.17"
-    - gruntwork-install --module-name "build-helpers" --repo "https://github.com/gruntwork-io/module-ci" --tag "v0.3.17"
-    - gruntwork-install --module-name "aws-helpers" --repo "https://github.com/gruntwork-io/module-ci" --tag "v0.3.17"
-    - configure-environment-for-gruntwork-module --go-src-path test
+    - curl -Ls https://raw.githubusercontent.com/gruntwork-io/gruntwork-installer/master/bootstrap-gruntwork-installer.sh | bash /dev/stdin --version v0.0.20
+    - gruntwork-install --module-name "gruntwork-module-circleci-helpers" --repo "https://github.com/gruntwork-io/module-ci" --tag "v0.6.0"
+    - gruntwork-install --module-name "build-helpers" --repo "https://github.com/gruntwork-io/module-ci" --tag "v0.6.0"
+    - gruntwork-install --module-name "aws-helpers" --repo "https://github.com/gruntwork-io/module-ci" --tag "v0.6.0"
+    - configure-environment-for-gruntwork-module --go-src-path test --use-go-dep

  cache_directories:
    - ~/terraform
    - ~/packer
-    - ~/glide
+    - ~/dep

test:
  override:
2 changes: 1 addition & 1 deletion examples/root-example/user-data-vault.sh
@@ -16,4 +16,4 @@ readonly VAULT_TLS_KEY_FILE="/opt/vault/tls/vault.key.pem"

# The cluster_tag variables below are filled in via Terraform interpolation
/opt/consul/bin/run-consul --client --cluster-tag-key "${consul_cluster_tag_key}" --cluster-tag-value "${consul_cluster_tag_value}"
-/opt/vault/bin/run-vault --s3-bucket "${s3_bucket_name}" --s3-bucket-region "${aws_region}" --tls-cert-file "$VAULT_TLS_CERT_FILE" --tls-key-file "$VAULT_TLS_KEY_FILE"
+/opt/vault/bin/run-vault --tls-cert-file "$VAULT_TLS_CERT_FILE" --tls-key-file "$VAULT_TLS_KEY_FILE"
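The `consul_cluster_tag_key` and `consul_cluster_tag_value` variables above are filled in by a `template_file` data source. A sketch of that wiring, mirroring the `data "template_file"` blocks changed later in this diff (the `template` argument itself is an assumption):

```
# Sketch: filling in the user-data template variables. aws_region is still
# passed in the real blocks even though the script no longer needs an S3 region.
data "template_file" "user_data_vault_cluster" {
  template = "${file("${path.module}/user-data-vault.sh")}"

  vars {
    aws_region               = "${var.aws_region}"
    consul_cluster_tag_key   = "${var.consul_cluster_tag_key}"
    consul_cluster_tag_value = "${var.consul_cluster_name}"
  }
}
```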
4 changes: 0 additions & 4 deletions examples/vault-cluster-private/main.tf
@@ -29,9 +29,6 @@ module "vault_cluster" {
  ami_id    = "${var.ami_id}"
  user_data = "${data.template_file.user_data_vault_cluster.rendered}"

-  s3_bucket_name          = "${var.s3_bucket_name}"
-  force_destroy_s3_bucket = "${var.force_destroy_s3_bucket}"
-
  vpc_id     = "${data.aws_vpc.default.id}"
  subnet_ids = "${data.aws_subnet_ids.default.ids}"

@@ -66,7 +63,6 @@ data "template_file" "user_data_vault_cluster" {

  vars {
    aws_region               = "${var.aws_region}"
-    s3_bucket_name           = "${var.s3_bucket_name}"
    consul_cluster_tag_key   = "${var.consul_cluster_tag_key}"
    consul_cluster_tag_value = "${var.consul_cluster_name}"
  }
2 changes: 1 addition & 1 deletion examples/vault-cluster-private/user-data-vault.sh
@@ -16,4 +16,4 @@ readonly VAULT_TLS_KEY_FILE="/opt/vault/tls/vault.key.pem"

# The variables below are filled in via Terraform interpolation
/opt/consul/bin/run-consul --client --cluster-tag-key "${consul_cluster_tag_key}" --cluster-tag-value "${consul_cluster_tag_value}"
-/opt/vault/bin/run-vault --s3-bucket "${s3_bucket_name}" --s3-bucket-region "${aws_region}" --tls-cert-file "$VAULT_TLS_CERT_FILE" --tls-key-file "$VAULT_TLS_KEY_FILE"
+/opt/vault/bin/run-vault --tls-cert-file "$VAULT_TLS_CERT_FILE" --tls-key-file "$VAULT_TLS_KEY_FILE"
9 changes: 0 additions & 9 deletions examples/vault-cluster-private/variables.tf
@@ -15,10 +15,6 @@ variable "ami_id" {
description = "The ID of the AMI to run in the cluster. This should be an AMI built from the Packer template under examples/vault-consul-ami/vault-consul.json."
}

variable "s3_bucket_name" {
description = "The name of an S3 bucket to create and use as a storage backend. Note: S3 bucket names must be *globally* unique."
}

variable "ssh_key_name" {
description = "The name of an EC2 Key Pair that can be used to SSH to the EC2 Instances in this cluster. Set to an empty string to not associate a Key Pair."
}
@@ -68,11 +64,6 @@ variable "consul_cluster_tag_key" {
default = "consul-servers"
}

variable "force_destroy_s3_bucket" {
description = "If you set this to true, when you run terraform destroy, this tells Terraform to delete all the objects in the S3 bucket used for backend storage. You should NOT set this to true in production or you risk losing all your data! This property is only here so automated tests of this module can clean up after themselves."
default = false
}

variable "vpc_id" {
description = "The ID of the VPC to deploy into. Leave an empty string to use the Default VPC in this region."
default = ""
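With the two S3 variables removed, a minimal set of inputs for this example shrinks accordingly. A sketch (all values are placeholders):

```
# terraform.tfvars sketch; values are placeholders
ami_id       = "ami-abcd1234"  # built from examples/vault-consul-ami/vault-consul.json
ssh_key_name = "my-key-pair"

# Optional: these have defaults in variables.tf above
consul_cluster_tag_key = "consul-servers"
vpc_id                 = ""  # empty string means use the Default VPC
```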
4 changes: 0 additions & 4 deletions main.tf
@@ -68,9 +68,6 @@ module "vault_cluster" {
ami_id = "${var.ami_id == "" ? data.aws_ami.vault_consul.image_id : var.ami_id}"
user_data = "${data.template_file.user_data_vault_cluster.rendered}"

s3_bucket_name = "${var.s3_bucket_name}"
force_destroy_s3_bucket = "${var.force_destroy_s3_bucket}"

vpc_id = "${data.aws_vpc.default.id}"
subnet_ids = "${data.aws_subnet_ids.default.ids}"

@@ -112,7 +109,6 @@ data "template_file" "user_data_vault_cluster" {

  vars {
    aws_region               = "${var.aws_region}"
-    s3_bucket_name           = "${var.s3_bucket_name}"
    consul_cluster_tag_key   = "${var.consul_cluster_tag_key}"
    consul_cluster_tag_value = "${var.consul_cluster_name}"
  }
30 changes: 10 additions & 20 deletions modules/run-vault/README.md
@@ -18,7 +18,7 @@ module](https://github.com/hashicorp/terraform-aws-vault/tree/master/modules/ins
run:

```
-/opt/vault/bin/run-vault --s3-bucket my-vault-bucket --s3-bucket-region us-east-1 --tls-cert-file /opt/vault/tls/vault.crt.pem --tls-key-file /opt/vault/tls/vault.key.pem
+/opt/vault/bin/run-vault --tls-cert-file /opt/vault/tls/vault.crt.pem --tls-key-file /opt/vault/tls/vault.key.pem
```

This will:
@@ -48,8 +48,6 @@ See the [vault-cluster-public](https://github.com/hashicorp/terraform-aws-vault/

The `run-vault` script accepts the following arguments:

-* `--s3-bucket` (required): Specifies the S3 bucket to use to store Vault data.
-* `--s3-bucket-region` (required): Specifies the AWS region where `--s3-bucket` lives.
* `--tls-cert-file` (required): Specifies the path to the certificate for TLS. To configure the listener to use a CA
certificate, concatenate the primary certificate and the CA certificate together. The primary certificate should
appear first in the combined file. See [How do you handle encryption?](#how-do-you_handle-encryption) for more info.
@@ -62,13 +60,13 @@ The `run-vault` script accepts the following arguments:
* `config-dir` (optional): The path to the Vault config folder. Default is to take the absolute path of `../config`,
relative to the `run-vault` script itself.
* `user` (optional): The user to run Vault as. Default is to use the owner of `config-dir`.
-* `skip-vault-config`: If this flag is set, don't generate a Vault configuration file. This is useful if you have
-  a custom configuration file and don't want to use any of of the default settings from `run-vault`.
+* `skip-vault-config` (optional): If this flag is set, don't generate a Vault configuration file. This is useful if you
+  have a custom configuration file and don't want to use any of of the default settings from `run-vault`.

Example:

```
-/opt/vault/bin/run-vault --s3-bucket my-vault-bucket --s3-bucket-region us-east-1 --tls-cert-file /opt/vault/tls/vault.crt.pem --tls-key-file /opt/vault/tls/vault.key.pem
+/opt/vault/bin/run-vault --tls-cert-file /opt/vault/tls/vault.crt.pem --tls-key-file /opt/vault/tls/vault.key.pem
```


@@ -86,17 +84,9 @@ available.

`run-vault` sets the following configuration values by default:

-* [storage](https://www.vaultproject.io/docs/configuration/index.html#storage): Configure S3 as the storage backend
+* [storage](https://www.vaultproject.io/docs/configuration/index.html#storage): Configure Consul as the storage backend
  with the following settings:

-    * [bucket](https://www.vaultproject.io/docs/configuration/storage/s3.html#bucket): Set to the `--s3-bucket`
-      parameter.
-    * [region](https://www.vaultproject.io/docs/configuration/storage/s3.html#region): Set to the `--s3-bucket-region`
-      parameter.
-
-* [ha_storage](https://www.vaultproject.io/docs/configuration/index.html#ha_storage): Configure Consul as the [high
-  availability](https://www.vaultproject.io/docs/concepts/ha.html) storage backend with the following settings:
-
    * [address](https://www.vaultproject.io/docs/configuration/storage/consul.html#address): Set the address to
      `127.0.0.1:8500`. This is based on the assumption that the Consul agent is running on the same server.
    * [scheme](https://www.vaultproject.io/docs/configuration/storage/consul.html#scheme): Set to `http` since our
@@ -143,7 +133,7 @@ If you want to override *all* the default settings, you can tell `run-vault` not
at all using the `--skip-vault-config` flag:

```
-/opt/vault/bin/run-vault --s3-bucket my-vault-bucket --s3-bucket-region us-east-1 --tls-cert-file /opt/vault/tls/vault.crt.pem --tls-key-file /opt/vault/tls/vault.key.pem --skip-vault-config
+/opt/vault/bin/run-vault --tls-cert-file /opt/vault/tls/vault.crt.pem --tls-key-file /opt/vault/tls/vault.key.pem --skip-vault-config
```
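If you take this route, you must place your own configuration file in the Vault config folder. A minimal hand-written file might look like the following sketch, which mirrors the defaults described above; the file name and listener port are assumptions:

```
# Sketch: /opt/vault/config/default.hcl, used with --skip-vault-config.
# Mirrors the defaults described above; file name and port are assumptions.
listener "tcp" {
  address       = "0.0.0.0:8200"
  tls_cert_file = "/opt/vault/tls/vault.crt.pem"
  tls_key_file  = "/opt/vault/tls/vault.key.pem"
}

storage "consul" {
  address = "127.0.0.1:8500"
  path    = "vault/"
  scheme  = "http"
}
```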


@@ -163,17 +153,17 @@ When you execute the `run-vault` script, you need to provide the paths to the pu
certificate:

```
-/opt/vault/bin/run-vault --s3-bucket my-vault-bucket --s3-bucket-region us-east-1 --tls-cert-file /opt/vault/tls/vault.crt.pem --tls-key-file /opt/vault/tls/vault.key.pem
+/opt/vault/bin/run-vault --tls-cert-file /opt/vault/tls/vault.crt.pem --tls-key-file /opt/vault/tls/vault.key.pem
```

See the [private-tls-cert module](https://github.com/hashicorp/terraform-aws-vault/tree/master/modules/private-tls-cert) for information on how to generate a TLS certificate.


### Consul encryption

-Since this Vault Module uses Consul as a high availability storage backend, you may want to enable encryption for
-Consul too. Note that Vault encrypts any data *before* sending it to a storage backend, so this isn't strictly
-necessary, but may be a good extra layer of security.
+Since this Vault Module uses Consul as a storage backend, you may want to enable encryption for Consul too. Note that
+Vault encrypts any data *before* sending it to a storage backend, so this isn't strictly necessary, but may be a good
+extra layer of security.

By default, the Vault server nodes communicate with a local Consul agent running on the same server over (unencrypted)
HTTP. However, you can configure those agents to talk to the Consul servers using TLS. Check out the [official Consul
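For the Consul-side encryption mentioned above, the change lives in the Consul agent configuration rather than in Vault. A sketch of an agent config fragment (Consul accepts HCL as well as JSON; the file paths are assumptions):

```
# Sketch: extra Consul agent config enabling TLS to the Consul servers.
# File paths and placement are assumptions; see the Consul encryption docs.
verify_outgoing = true
ca_file         = "/opt/consul/tls/ca.crt.pem"
cert_file       = "/opt/consul/tls/consul.crt.pem"
key_file        = "/opt/consul/tls/consul.key.pem"
```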
27 changes: 3 additions & 24 deletions modules/run-vault/run-vault
@@ -22,8 +22,6 @@ function print_usage {
  echo
  echo "Options:"
  echo
-  echo -e "  --s3-bucket\tSpecifies the S3 bucket where Vault data should be stored. Required."
-  echo -e "  --s3-bucket-region\tSpecifies the AWS region where --s3-bucket lives. Required."
  echo -e "  --tls-cert-file\tSpecifies the path to the certificate for TLS. Required. To use a CA certificate, concatenate the primary certificate and the CA certificate together."
  echo -e "  --tls-key-file\tSpecifies the path to the private key for the certificate. Required."
  echo -e "  --port\t\tThe port for Vault to listen on. Optional. Default is $DEFAULT_PORT."
@@ -37,7 +35,7 @@ function print_usage {
  echo
  echo "Example:"
  echo
-  echo "  run-vault --s3-bucket my-vault-bucket --s3-bucket-region us-east-1 --tls-cert-file /opt/vault/tls/vault.crt.pem --tls-key-file /opt/vault/tls/vault.key.pem"
+  echo "  run-vault --tls-cert-file /opt/vault/tls/vault.crt.pem --tls-key-file /opt/vault/tls/vault.key.pem"
}

function log {
@@ -105,21 +103,14 @@ function generate_vault_config {
  local readonly cluster_port="$4"
  local readonly config_dir="$5"
  local readonly user="$6"
-  local readonly s3_bucket="$7"
-  local readonly s3_bucket_region="$8"
  local readonly config_path="$config_dir/$VAULT_CONFIG_FILE"

  local instance_ip_address
  instance_ip_address=$(get_instance_ip_address)

  log_info "Creating default Vault config file in $config_path"
  cat > "$config_path" <<EOF
-storage "s3" {
-  bucket = "$s3_bucket"
-  region = "$s3_bucket_region"
-}
-
-ha_storage "consul" {
+storage "consul" {
  address = "127.0.0.1:8500"
  path    = "vault/"
  scheme  = "http"
@@ -179,8 +170,6 @@ function run {
  local tls_key_file=""
  local port="$DEFAULT_PORT"
  local cluster_port=""
-  local s3_bucket=""
-  local s3_bucket_region=""
  local config_dir=""
  local bin_dir=""
  local log_dir=""
@@ -201,14 +190,6 @@
tls_key_file="$2"
shift
;;
--s3-bucket)
s3_bucket="$2"
shift
;;
--s3-bucket-region)
s3_bucket_region="$2"
shift
;;
--port)
assert_not_empty "$key" "$2"
port="$2"
@@ -263,8 +244,6 @@

assert_not_empty "--tls-cert-file" "$tls_cert_file"
assert_not_empty "--tls-key-file" "$tls_key_file"
assert_not_empty "--s3-bucket" "$s3_bucket"
assert_not_empty "--s3-bucket-region" "$s3_bucket_region"

assert_is_installed "supervisorctl"
assert_is_installed "aws"
@@ -294,7 +273,7 @@
if [[ "$skip_vault_config" == "true" ]]; then
log_info "The --skip-vault-config flag is set, so will not generate a default Vault config file."
else
generate_vault_config "$tls_cert_file" "$tls_key_file" "$port" "$cluster_port" "$config_dir" "$user" "$s3_bucket" "$s3_bucket_region"
generate_vault_config "$tls_cert_file" "$tls_key_file" "$port" "$cluster_port" "$config_dir" "$user"
fi

generate_supervisor_config "$SUPERVISOR_CONFIG_PATH" "$config_dir" "$bin_dir" "$log_dir" "$log_level" "$user"
32 changes: 8 additions & 24 deletions modules/vault-cluster/README.md
@@ -21,13 +21,10 @@ module "vault_cluster" {
  # Specify the ID of the Vault AMI. You should build this using the scripts in the install-vault module.
  ami_id = "ami-abcd1234"

-  # This module uses S3 as a storage backend
-  s3_bucket_name = "${var.vault_s3_bucket}"
-
  # Configure and start Vault during boot.
  user_data = <<-EOF
              #!/bin/bash
-              /opt/vault/bin/run-vault --s3-bucket ${var.vault_s3_bucket} --s3-bucket-region ${var.aws_region} --tls-cert-file /opt/vault/tls/vault.crt.pem --tls-key-file /opt/vault/tls/vault.key.pem
+              /opt/vault/bin/run-vault --tls-cert-file /opt/vault/tls/vault.crt.pem --tls-key-file /opt/vault/tls/vault.key.pem
              EOF

  # Add tag to each node in the cluster with value set to var.cluster_name
@@ -63,8 +60,6 @@ Note the following parameters:
(AMI)](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html) to deploy on each server in the cluster. You
should install Vault in this AMI using the scripts in the [install-vault](https://github.com/hashicorp/terraform-aws-vault/tree/master/modules/install-vault) module.

-* `s3_bucket_name`: This module creates an [S3](https://aws.amazon.com/s3/) to use as a storage backend for Vault.
-
* `user_data`: Use this parameter to specify a [User
Data](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html#user-data-shell-scripts) script that each
server will run during boot. This is where you can use the [run-vault script](https://github.com/hashicorp/terraform-aws-vault/tree/master/modules/run-vault) to configure and
@@ -277,7 +272,6 @@ This module creates the following architecture:
This architecture consists of the following resources:

* [Auto Scaling Group](#auto-scaling-group)
-* [S3 bucket](#s3-bucket)
* [Security Group](#security-group)
* [IAM Role and Permissions](#iam-role-and-permissions)

@@ -291,14 +285,6 @@ Instances should be running an AMI that has had Vault installed via the [install
module. You pass in the ID of the AMI to run using the `ami_id` input parameter.


-### S3 Bucket
-
-This module creates an [S3 bucket](https://aws.amazon.com/s3/) that Vault can use as a storage backend. S3 is a good
-choice for storage because it provides outstanding durability (99.999999999%) and availability (99.99%). Unfortunately,
-S3 cannot be used for Vault High Availability coordination, so this module expects a separate Consul server cluster to
-be deployed as a high availability backend.
-
-

### Security Group

Each EC2 Instance in the ASG has a Security Group that allows:
Expand All @@ -315,9 +301,8 @@ Check out the [Security section](#security) for more details.

### IAM Role and Permissions

-Each EC2 Instance in the ASG has an [IAM Role](http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) attached
-with permissions to access its S3 bucket. The IAM Role ARN is exported as an output variable so you can add custom
-permissions.
+Each EC2 Instance in the ASG has an [IAM Role](http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) attached.
+The IAM Role ARN is exported as an output variable so you can add custom permissions.



@@ -460,15 +445,14 @@ This module does NOT handle the following items, which you may want to provide o

### Consul

-This module configures Vault to use Consul as a high availability storage backend. This module assumes you already
-have Consul servers deployed in a separate cluster. We do not recommend co-locating Vault and Consul servers in the
-same cluster because:
+This module configures Vault to use Consul as a storage backend. This module assumes you already have Consul servers
+deployed in a separate cluster. We do not recommend co-locating Vault and Consul servers in the same cluster because:

1. Vault is a tool built specifically for security, and running any other software on the same server increases its
   surface area to attackers.
-1. This Vault Module uses Consul as a high availability storage backend and both Vault and Consul keep their working
-   set in memory. That means for every 1 byte of data in Vault, you'd also have 1 byte of data in Consul, doubling
-   your memory consumption on each server.
+1. This Vault Module uses Consul as a storage backend and both Vault and Consul keep their working set in memory. That
+   means for every 1 byte of data in Vault, you'd also have 1 byte of data in Consul, doubling your memory consumption
+   on each server.

Check out the [Consul AWS Module](https://github.com/hashicorp/terraform-aws-consul) for how to deploy a Consul
server cluster in AWS. See the [vault-cluster-public](https://github.com/hashicorp/terraform-aws-vault/tree/master/examples/vault-cluster-public) and