Copy bastion module from Softwire repo into this one. Also, add missing MIT license
hugh-emerson committed Dec 31, 2024
1 parent 066e7cc commit cf68c12
Showing 16 changed files with 828 additions and 3 deletions.
21 changes: 21 additions & 0 deletions LICENSE
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2024 Ministry of Housing, Communities & Local Government

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
59 changes: 59 additions & 0 deletions terraform/modules/bastion_host/README.md
@@ -0,0 +1,59 @@
# AWS Bastion Host module for Terraform

> This module creates a bastion host which manages user access via SSH public keys uploaded to an S3 bucket.
> AWS now offers tools that manage instance access using AWS credentials instead, namely EC2 Instance Connect and AWS Systems Manager Session Manager; consider those alternatives before adopting this module.

## Features

* Optional custom AMI
* Optional multiple instances
* Automatic scaling across multiple subnets
* Route53-based alias DNS record
* User public key management via S3

## User management

* To create a new user, upload their SSH public key to the S3 bucket referenced in the module's `ssh_keys_bucket` output; the object name, lowercased and stripped of any `.pub` suffix, becomes the username (see the sketch below). A README stored in the bucket gives further details.
* To delete a user, remove their key from the S3 bucket.

Any changes to the S3 bucket are synchronised to the bastion instances within five minutes.
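
For illustration, a sketch of both operations with the AWS CLI; the bucket name is a placeholder for the module's `ssh_keys_bucket` output:

```bash
# Placeholder bucket name: substitute the module's ssh_keys_bucket output
BUCKET="example-bastion-ssh-keys"

# Create the user "alice": the object name, minus .pub, becomes the username
aws s3 cp ~/.ssh/alice.pub "s3://$BUCKET/alice.pub"

# Delete the user "alice" on the next sync
aws s3 rm "s3://$BUCKET/alice.pub"
```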

## Input Variables

| Variable | Description | Type | Required | Default |
|----------|-------------|:----:|:--------:|:-------:|
| region | AWS region name | string | yes | |
| public_subnet_ids | List of public subnet IDs where the NLB listeners will be deployed. | list | yes | |
| instance_subnet_ids | List of subnet IDs where the instances will be deployed. | list | yes | |
| vpc_id | ID of the VPC where the bastion will be deployed | string | yes | |
| admin_ssh_key_pair_name | Name of the SSH key pair for the admin user account | string | yes | |
| name_prefix | Prefix to be applied to names of all resources, max 3 characters | string | no | `bst` |
| external_allowed_cidrs | List of CIDRs which can access the bastion | list | no | `["0.0.0.0/0"]` |
| external_ssh_port | Which port to use to SSH into the bastion | number | no | `22` |
| internal_ssh_port | Which port the bastion will use to SSH into other private instances | number | no | `22` |
| instance_count | Number of instances to deploy. Defaults to one per instance subnet. | number | no | `length(var.instance_subnet_ids)` |
| custom_ami | Provide your own AWS AMI to use - useful if you need specific tools on the bastion | string | no | |
| dns_config | Optional details of an alias DNS record for the bastion. [See below](#dns-config) for properties | object | no | |
| tags_default | Tags to apply to all resources | map | no | `{}` |
| tags_lb | Tags to apply to the bastion load balancer | map | no | `{}` |
| tags_asg | Tags to apply to the bastion autoscaling group | map | no | `{}` |
| tags_sg | Tags to apply to the bastion security groups | map | no | `{}` |
| tags_host_key | Tags to apply to the bastion host key secret and KMS key | map | no | `{}` |
| extra_userdata | Extra commands to append to the instance user data script | string | no | |
| log_group_name | The name of a CloudWatch log group to send logs of SSH logins and user/key changes to | string | no | |
| s3_access_log_expiration_days | Days to keep S3 access logs for the keys bucket, defaults to forever | number | no | |

### DNS Config

| Variable | Description | Type | Required | Default |
|----------|-------------|:----:|:--------:|:-------:|
| record_name | DNS alias record name for the bastion host | string | yes | |
| hosted_zone_name | Name of the Route53 hosted zone in which to register the record | string | yes | |
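
As a usage sketch, here is a call of the module with only the required variables; the source path, region, IDs, and key pair name are illustrative placeholders:

```hcl
module "bastion" {
  source = "./terraform/modules/bastion_host" # adjust to the module's location

  region                  = "eu-west-2"
  vpc_id                  = "vpc-0123456789abcdef0"
  public_subnet_ids       = ["subnet-0aa00aa00aa00aa00", "subnet-0bb00bb00bb00bb00"]
  instance_subnet_ids     = ["subnet-0cc00cc00cc00cc00", "subnet-0dd00dd00dd00dd00"]
  admin_ssh_key_pair_name = "bastion-admin"

  # Restrict access rather than relying on the 0.0.0.0/0 default
  external_allowed_cidrs = ["203.0.113.0/24"]
}
```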

## Outputs

| Variable | Description |
|----------|-------------|
| bastion_security_group_id | Security group of the bastion instances |
| bastion_dns_name | DNS name of the bastion |
| ssh_keys_bucket | Name of the S3 bucket used for user public key storage |
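
Once deployed, connecting through the bastion is an ordinary SSH jump. A hypothetical example, using the `bastion_dns_name` output and an illustrative private instance address:

```bash
# alice's public key must already be in the keys bucket; 10.0.1.23 is a stand-in
ssh -J alice@bastion.example.com ec2-user@10.0.1.23
```
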
13 changes: 13 additions & 0 deletions terraform/modules/bastion_host/dns.tf
@@ -0,0 +1,13 @@
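# One alias record per type in local.dns_record_types (defined elsewhere in the module); created only when dns_config is set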
resource "aws_route53_record" "dns_record" {
count = var.dns_config != null ? length(local.dns_record_types) : 0

name = var.dns_config.domain
zone_id = var.dns_config.zone_id
type = local.dns_record_types[count.index]

alias {
evaluate_target_health = true
name = aws_lb.bastion.dns_name
zone_id = aws_lb.bastion.zone_id
}
}
36 changes: 36 additions & 0 deletions terraform/modules/bastion_host/elb.tf
@@ -0,0 +1,36 @@
# tfsec:ignore:aws-elb-alb-not-public
resource "aws_lb" "bastion" {
name_prefix = "${var.name_prefix}lb-"
internal = false

subnets = var.public_subnet_ids

load_balancer_type = "network"
tags = merge({ "Name" = "${var.name_prefix}lb" }, var.tags_default, var.tags_lb)
}

resource "aws_lb_target_group" "bastion_default" {
vpc_id = var.vpc_id
port = var.external_ssh_port
protocol = "TCP"
target_type = "instance"
preserve_client_ip = true

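# Health checks target the netcat listener on port 2345 started in init.sh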
health_check {
port = 2345
protocol = "TCP"
}

tags = merge({ "Name" = "${var.name_prefix}lb" }, var.tags_default, var.tags_lb)
}

resource "aws_lb_listener" "bastion_ssh" {
load_balancer_arn = aws_lb.bastion.arn
port = var.external_ssh_port
protocol = "TCP"

default_action {
target_group_arn = aws_lb_target_group.bastion_default.arn
type = "forward"
}
}
21 changes: 21 additions & 0 deletions terraform/modules/bastion_host/host_key_secret.tf
@@ -0,0 +1,21 @@
resource "aws_kms_key" "bastion_host_key_encryption_key" {
description = "${var.name_prefix}bastion-host-key-kms-key"
enable_key_rotation = true
tags = merge(var.tags_default, var.tags_host_key)
}

resource "aws_secretsmanager_secret" "bastion_host_key" {
name_prefix = "${var.name_prefix}bastion-ssh-host-key-"
description = "SSH Host key for bastion"
kms_key_id = aws_kms_key.bastion_host_key_encryption_key.id
tags = merge(var.tags_default, var.tags_host_key)
}

resource "aws_secretsmanager_secret_version" "bastion_host_key" {
secret_id = aws_secretsmanager_secret.bastion_host_key.id
secret_string = tls_private_key.bastion_host_key.private_key_openssh
}

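# A single host key is generated once and distributed via Secrets Manager, so every
# instance behind the load balancer presents the same key to SSH clients.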
resource "tls_private_key" "bastion_host_key" {
algorithm = "ED25519"
}
67 changes: 67 additions & 0 deletions terraform/modules/bastion_host/iam.tf
@@ -0,0 +1,67 @@
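# Trust policy allowing EC2 instances to assume the bastion role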
data "aws_iam_policy_document" "bastion_assume_role" {
statement {
actions = ["sts:AssumeRole"]
principals {
identifiers = ["ec2.amazonaws.com"]
type = "Service"
}
effect = "Allow"
}
}

resource "aws_iam_role" "bastion" {
name_prefix = "${var.name_prefix}bastion"
assume_role_policy = data.aws_iam_policy_document.bastion_assume_role.json
}

data "aws_iam_policy_document" "bastion_policy" {
# Allow downloading of user SSH public keys
statement {
actions = ["s3:GetObject"]
resources = ["${aws_s3_bucket.ssh_keys.arn}/*"]
effect = "Allow"
}

# Allow listing SSH public keys
statement {
actions = ["s3:ListBucket"]
resources = [aws_s3_bucket.ssh_keys.arn]
}

# Allow reading the host key secret
statement {
actions = ["secretsmanager:GetSecretValue"]
resources = [aws_secretsmanager_secret.bastion_host_key.arn]
}

# Allow use of the KMS key used to encrypt the host key secret
statement {
actions = ["kms:Decrypt"]
resources = [aws_kms_key.bastion_host_key_encryption_key.arn]
}
}

resource "aws_iam_policy" "bastion" {
name_prefix = "${var.name_prefix}bastion"
policy = data.aws_iam_policy_document.bastion_policy.json
}

resource "aws_iam_role_policy_attachment" "bastion_policy" {
role = aws_iam_role.bastion.name
policy_arn = aws_iam_policy.bastion.arn
}

data "aws_iam_policy" "cloudwatch_agent" {
arn = "arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy"
}

resource "aws_iam_role_policy_attachment" "cloudwatch_agent" {
count = var.log_group_name == null ? 0 : 1
role = aws_iam_role.bastion.name
policy_arn = data.aws_iam_policy.cloudwatch_agent.arn
}

resource "aws_iam_instance_profile" "bastion_host_profile" {
name_prefix = "${var.name_prefix}bastion-profile"
role = aws_iam_role.bastion.name
}
143 changes: 143 additions & 0 deletions terraform/modules/bastion_host/init.sh
@@ -0,0 +1,143 @@
#!/bin/bash

set -xe

yum -y update --security
yum -y install jq nc amazon-cloudwatch-agent iptables-services

mkdir /usr/bin/bastion
mkdir /var/log/bastion

systemctl enable iptables
systemctl start iptables
# Block non-root users from accessing the instance metadata service
iptables -A OUTPUT -m owner ! --uid-owner root -d 169.254.169.254 -j DROP
# Allow port 2345 for health checks
iptables -I INPUT -p tcp -m state --state NEW -m tcp --dport 2345 -j ACCEPT
service iptables save

# Fetch the host key from AWS Secrets Manager
aws secretsmanager get-secret-value --region ${region} --secret-id ${host_key_secret_id} --query SecretString --output text > /etc/ssh/ssh_host_ed25519_key
ssh-keygen -y -f /etc/ssh/ssh_host_ed25519_key > /etc/ssh/ssh_host_ed25519_key.pub
chmod 600 /etc/ssh/ssh_host_ed25519_key

sed -i 's|HostKey /etc/ssh/ssh_host_ecdsa_key|#HostKey /etc/ssh/ssh_host_ecdsa_key|' /etc/ssh/sshd_config
sed -i 's|HostKey /etc/ssh/ssh_host_rsa_key|#HostKey /etc/ssh/ssh_host_rsa_key|' /etc/ssh/sshd_config
rm -f /etc/ssh/ssh_host_rsa_key /etc/ssh/ssh_host_rsa_key.pub
rm -f /etc/ssh/ssh_host_ecdsa_key /etc/ssh/ssh_host_ecdsa_key.pub


# Check the SSH config is valid, otherwise sshd will not come back up
/usr/sbin/sshd -t
systemctl restart sshd

if [ ! -z "${cloudwatch_config_ssm_parameter}" ]; then
amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -s -c "ssm:${cloudwatch_config_ssm_parameter}"
fi

cat > /usr/bin/bastion/sync_users_with_s3 <<'EOF'
#!/usr/bin/env bash
set -xe
LOG_FILE="/var/log/bastion/changelog.log"
# Where we store etags of public keys for registered users
ETAGS_DIR=~/etags
# This file keeps track of which keys we've registered as users. Note: there are other system users,
# so this is specifically the users installed via S3 sync
REGISTERED_KEYS_FILE=~/registered_keys
# Where to dump the list of files in S3
S3_DATA_FILE=~/s3_data
AWS_BUCKET="${bucket_name}"
AWS_REGION="${region}"
aws s3api list-objects \
  --bucket "$AWS_BUCKET" \
  --region "$AWS_REGION" \
  --output json \
  --query 'Contents[?Size>`0`].{Key: Key, ETag: ETag}' > "$S3_DATA_FILE"
# Convert to lowercase and strip out the .pub at the end, if any
parse_username() {
echo "$1" | tr '[:upper:]' '[:lower:]' | sed -e "s/\.pub//g"
}
# Add/Update users with a key in S3
# We're encoding each entry in array to base64 so it fits onto a single line. We decode when we read the line.
# .[]? tolerates an empty bucket, where the jq query above yields null
for row in $(cat "$S3_DATA_FILE" | jq -r '.[]? | @base64'); do
_jq() {
# Double dollar for Terraform escaping purposes
echo $${row} | base64 --decode | jq -r $${1}
}
# Cut the .pub from the end of the public key name
KEY=$(_jq '.Key')
USER_NAME=$(parse_username "$KEY")
ETAG=$(_jq '.ETag')
# Check the username starts with a letter and only contains letters, numbers, dashes and underscores afterwards
if [[ "$USER_NAME" =~ ^[a-z][-a-z0-9_]*$ ]]; then
# Check whether the user already exists (grep exits 1 when there is no match);
# reset error_code each iteration so a stale value from a previous key is not reused
error_code=0
cut -d: -f1 /etc/passwd | grep -qx "$USER_NAME" || error_code=$?
if [ $error_code -eq 1 ]; then
# See https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/managing-users.html#create-user-account
adduser $USER_NAME
[ -d /home/$USER_NAME/.ssh ] || mkdir -m 700 /home/$USER_NAME/.ssh
chown $USER_NAME:$USER_NAME /home/$USER_NAME/.ssh
echo "$KEY" >> "$REGISTERED_KEYS_FILE"
echo "$(date --iso-8601='seconds'): Created user $USER_NAME" >> $LOG_FILE
fi
ETAG_FILE="$ETAGS_DIR/$USER_NAME"
# If there is no etag, or key etag doesn't match, download from S3
if [ ! -f "$ETAG_FILE" ] || [ "$(cat "$ETAG_FILE")" != "$ETAG" ]; then
aws s3 cp s3://$AWS_BUCKET/$KEY /home/$USER_NAME/.ssh/authorized_keys --region $AWS_REGION
if [ ! -f "$ETAG_FILE" ]; then
mkdir -p "$ETAGS_DIR"
touch "$ETAG_FILE"
fi
chmod 600 /home/$USER_NAME/.ssh/authorized_keys
chown $USER_NAME:$USER_NAME /home/$USER_NAME/.ssh/authorized_keys
# Update the etag
echo $ETAG > "$ETAG_FILE"
echo "$(date --iso-8601='seconds'): Updated public key for $USER_NAME from file ($KEY)" >> $LOG_FILE
fi
fi
done
# Remove users which no longer have a public key in S3
if [ -f "$REGISTERED_KEYS_FILE" ]; then
# Convert JSON entries to simple list
cat "$S3_DATA_FILE" | jq -r '.[].Key' > ~/s3_keys
touch ~/tmp_registered_keys
while read KEY; do
if grep -Fxq "$KEY" ~/s3_keys; then
# The key exists, so keep it
echo "$KEY" >> ~/tmp_registered_keys
else
# The key is gone, so remove the user
USER_NAME="$(parse_username "$KEY")"
userdel -r -f $USER_NAME
echo "$(date --iso-8601='seconds'): Deleted user $USER_NAME with key $KEY" >> $LOG_FILE
fi
done < "$REGISTERED_KEYS_FILE"
# Replace the old list with the new list
mv ~/tmp_registered_keys "$REGISTERED_KEYS_FILE"
fi
EOF

chmod 700 /usr/bin/bastion/sync_users_with_s3
PATH=$PATH:/sbin /usr/bin/bastion/sync_users_with_s3

# Update users every 5 minutes, check for security updates at 3AM
cat > ~/crontab << EOF
*/5 * * * * PATH=$PATH:/sbin /usr/bin/bastion/sync_users_with_s3
0 3 * * * yum -y update --security
@reboot bash -c "cat /dev/null | nohup nc -kl 2345 >/dev/null 2>&1 &"
EOF
crontab ~/crontab
rm ~/crontab

# Listen on port 2345 for healthcheck pings from the load balancer
bash -c "cat /dev/null | nohup nc -kl 2345 >/dev/null 2>&1 &"