
Terraform Enterprise HVD on AWS EKS

Terraform module aligned with HashiCorp Validated Designs (HVD) to deploy Terraform Enterprise on AWS Elastic Kubernetes Service (EKS). This module supports either bringing your own EKS cluster or creating a new EKS cluster dedicated to running TFE. This module does not use the Kubernetes or Helm Terraform providers; instead, it includes Post Steps that complete the application layer portion of the deployment using the kubectl and helm CLIs.

Prerequisites

General

  • TFE license file (e.g. terraform.hclic)
  • Terraform CLI >= 1.9 installed on clients/workstations that will be used to deploy TFE
  • General understanding of how to use Terraform (Community Edition)
  • General understanding of how to use AWS
  • General understanding of how to use Kubernetes and Helm
  • git CLI and Visual Studio Code editor installed on workstations are strongly recommended
  • AWS account in which TFE will be deployed, with permissions to provision these resources via the Terraform CLI
  • (Optional) AWS S3 bucket to serve as the S3 remote state backend for managing the Terraform state of this TFE deployment (out-of-band from the TFE application) via the Terraform CLI (Community Edition)

Networking

  • AWS VPC ID and the following subnets:
    • Load balancer subnet IDs (can be the same as EKS subnets if desired)
    • EKS (compute) subnet IDs for TFE pods
    • RDS (database) subnet IDs
    • Redis subnet IDs (can be the same as RDS subnets if desired)
  • (Optional) S3 VPC Endpoint configured within VPC
  • (Optional) AWS Route53 Hosted Zone for TFE DNS record creation
  • Chosen fully qualified domain name (FQDN) for TFE (e.g. tfe.aws.example.com)

Security groups

  • This module will automatically create the necessary EKS-related security groups and attach them to the applicable resources when create_eks_cluster is true
  • Identify CIDR range(s) that will need to access the TFE application
  • (Optional) Identify CIDR range(s) of any monitoring/observability tools that will need to access (scrape) TFE metrics endpoints
  • Identify CIDR range(s) that will need to access the TFE EKS cluster
  • If your EKS cluster is private, your clients/workstations must have network access to the cluster's control plane (API server) in order to run kubectl and helm commands
  • Be familiar with the TFE ingress requirements
  • Be familiar with the TFE egress requirements
  • If you are bringing your own EKS cluster (create_eks_cluster is false), then you must account for the following (a CLI sketch of one such rule follows this list):
    • Allow TCP/8443 (HTTPS) and TCP/8080 (HTTP) ingress to EKS node group/TFE pods subnet from TFE load balancer subnet (for TFE application traffic)
    • Allow TCP/8201 ingress between nodes in EKS node group/TFE pods subnet (for TFE embedded Vault internal cluster traffic)
    • (Optional) Allow TCP/9091 (HTTPS) and/or TCP/9090 (HTTP) ingress to EKS node group/TFE pods subnet from metrics collection tool (for scraping TFE metrics endpoints)
    • Allow TCP/443 egress to Terraform endpoints listed here from EKS node group/TFE pods subnet
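
For example, a minimal sketch of the first rule above using the AWS CLI, assuming you manage a dedicated security group for your EKS node group (all values in <> are placeholders):

    # Allow TCP/8443 (HTTPS) ingress to the TFE pods/node group subnet from the load balancer subnet
    aws ec2 authorize-security-group-ingress \
      --group-id <nodegroup-security-group-id> \
      --protocol tcp \
      --port 8443 \
      --cidr <load-balancer-subnet-cidr>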

TLS certificates

  • TLS certificate (e.g. cert.pem) and private key (e.g. privkey.pem) that matches your chosen fully qualified domain name (FQDN) for TFE
    • TLS certificate and private key must be in PEM format
    • Private key must not be password protected
  • TLS certificate authority (CA) bundle (e.g. ca_bundle.pem) corresponding with the CA that issues your TFE TLS certificates
    • CA bundle must be in PEM format

📝 Note: The TLS certificate and private key will be created as Kubernetes secrets during the Post Steps.
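
Before proceeding, you can optionally sanity-check your certificate files with openssl (a hedged sketch; assumes an RSA private key and the example file names above):

    # Verify the certificate subject/SANs match your chosen TFE FQDN
    openssl x509 -in cert.pem -noout -subject -ext subjectAltName

    # Verify the private key is valid and not password protected
    openssl rsa -in privkey.pem -check -noout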

Secrets management

The following bootstrap secrets must be stored in AWS Secrets Manager in order to bootstrap the TFE deployment (an example of creating them follows this list):

  • RDS (PostgreSQL) database password - random characters stored as a plaintext secret; value must be between 8 and 128 characters long and must not contain '@', '"', or '/' characters
  • Redis password - random characters stored as a plaintext secret; value must be between 16 and 128 characters long and must not contain '@', '"', or '/' characters
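
For example, a minimal sketch of creating these secrets with the AWS CLI (the secret names here are illustrative; generate your own values and note the character restrictions above):

    # Generate a 32-character value without the disallowed '@', '"', or '/' characters
    openssl rand -base64 48 | tr -d '@"/+=' | cut -c1-32

    # Store the database and Redis passwords as plaintext secrets
    aws secretsmanager create-secret --name tfe-database-password --secret-string "<generated-password>"
    aws secretsmanager create-secret --name tfe-redis-password --secret-string "<generated-password>"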

Compute (optional)

If you plan to create a new EKS cluster using this module (create_eks_cluster is true), then you may skip this section. Otherwise, the following is required (a command for retrieving the OIDC provider values follows this list):

  • EKS cluster with the following configurations:
    • EKS node group
    • EKS OIDC provider URL (used by module to create TFE IRSA)
    • EKS OIDC provider ARN (used by module to create TFE IRSA)
    • (Optional) AWS load balancer controller installed within EKS cluster (unless you plan to use a custom Kubernetes ingress controller load balancer)
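
If you are bringing your own cluster, you can retrieve the OIDC provider values with the AWS CLI:

    # OIDC provider URL of the existing EKS cluster
    aws eks describe-cluster --name <eks-cluster-name> \
      --query "cluster.identity.oidc.issuer" --output text

    # List the OIDC provider ARNs in the account to find the matching one
    aws iam list-open-id-connect-providers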

Log Forwarding (optional)

One of the following logging destinations (an example creation command follows this list):

  • AWS CloudWatch log group
  • AWS S3 bucket
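
For example, a CloudWatch log group can be created with the AWS CLI (the log group name is a placeholder):

    aws logs create-log-group --log-group-name <tfe-log-group-name>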

Usage

  1. Create/configure/validate the applicable prerequisites.

  2. The examples directory contains subdirectories with ready-made Terraform configurations for common scenarios on how to call and deploy this module. To get started, choose the example scenario that most closely matches your requirements. You can customize your deployment later by adding additional module inputs as you see fit (see the Deployment-Customizations doc for more details).

  3. Copy all of the Terraform files from your example scenario of choice into a new destination directory to create your Terraform configuration that will manage your TFE deployment. This is a common directory structure for managing multiple TFE deployments:

    .
    └── environments
        ├── production
        │   ├── backend.tf
        │   ├── main.tf
        │   ├── outputs.tf
        │   ├── terraform.tfvars
        │   └── variables.tf
        └── sandbox
            ├── backend.tf
            ├── main.tf
            ├── outputs.tf
            ├── terraform.tfvars
            └── variables.tf
    

    📝 Note: In this example, the user will have two separate TFE deployments: one for their sandbox environment and one for their production environment. This is recommended, but not required.

  4. (Optional) Uncomment and update the S3 remote state backend configuration provided in the backend.tf file with your own custom values. While a remote backend config is highly recommended, it is not strictly required for your TFE deployment (if you are in a sandbox environment, for example).
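
    Alternatively, if you prefer to keep bucket names and regions out of the file, you can supply them at init time via partial backend configuration (a hedged sketch; assumes backend.tf declares an empty backend "s3" {} block, and all values in <> are placeholders):

    terraform init \
      -backend-config="bucket=<s3-state-bucket-name>" \
      -backend-config="key=tfe/terraform.tfstate" \
      -backend-config="region=<aws-region>"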

  5. Populate your own custom values into the terraform.tfvars.example file that was provided (in particular, values enclosed in the <> characters). Then, remove the .example file extension such that the file is now named terraform.tfvars.
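
    For example:

    mv terraform.tfvars.example terraform.tfvars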

  6. Navigate to the directory of your newly created Terraform configuration for your TFE deployment, and run terraform init, terraform plan, and terraform apply.
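
    For example, from the sandbox environment directory shown above:

    cd environments/sandbox
    terraform init
    terraform plan
    terraform apply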

The TFE infrastructure resources have now been created. Next comes the application layer portion of the deployment (which we refer to as the Post Steps), which will involve interacting with your EKS cluster via kubectl and installing the TFE application via helm.

Post Steps

  1. Authenticate to your EKS cluster:

    aws eks --region <aws-region> update-kubeconfig --name <eks-cluster-name>

    📝 Note: You can get the value of your EKS cluster name from the eks_cluster_name Terraform output if you created your EKS cluster via this module.

  2. AWS recommends installing the AWS load balancer controller for EKS. If it is not already installed in your EKS cluster, install the AWS load balancer controller within the kube-system namespace via the Helm chart:

    Add the AWS eks-charts Helm chart repository:

    helm repo add eks https://aws.github.io/eks-charts

    Update your local repo to make sure that you have the most recent charts:

    helm repo update eks

    Install the AWS load balancer controller:

    helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
     --namespace kube-system \
     --set clusterName=<eks-cluster-name> \
     --set serviceAccount.create=true \
     --set serviceAccount.name=aws-load-balancer-controller \
     --set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"=<aws-lb-controller-irsa-role-arn> \
     --set region=<aws-region> \
     --set vpcId=<vpc-id>

    📝 Note: You can get the value of your AWS load balancer controller IRSA role ARN from the aws_lb_controller_irsa_role_arn Terraform output (if create_aws_lb_controller_irsa was true).

  3. Create the Kubernetes namespace for TFE:

    kubectl create namespace tfe

    📝 Note: You can name your TFE namespace something different than tfe if you prefer. If you do name it differently, be sure to update your value of the tfe_kube_namespace input variable accordingly.

  4. Create the required secrets for your TFE deployment within your new Kubernetes namespace for TFE. There are several ways to do this, whether it be from the CLI via kubectl, or another method involving a third-party secrets helper/tool. See the kubernetes-secrets docs for details on the required secrets and how to create them.
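
    For example, a hedged sketch using kubectl (the secret and key names here are illustrative and must match what your Helm overrides reference; see the kubernetes-secrets docs for the authoritative names):

    # TLS certificate, private key, and CA bundle from the TLS prerequisites
    kubectl create secret tls tfe-certs --cert=cert.pem --key=privkey.pem --namespace tfe
    kubectl create secret generic tfe-ca-bundle --from-file=ca_bundle.pem --namespace tfe

    # TFE license from the file in the General prerequisites
    kubectl create secret generic tfe-license --from-file=license=terraform.hclic --namespace tfe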

  5. This Terraform module will automatically generate a Helm overrides file within your Terraform working directory named ./helm/module_generated_helm_overrides.yaml. This Helm overrides file contains values interpolated from some of the infrastructure resources that were created by Terraform in step 6 of the Usage section above. Within the Helm overrides file, update or validate the values for the remaining settings that are enclosed in the <> characters. You may also add any additional configuration settings to your Helm overrides file at this time (see the helm-overrides doc for more details).

  6. Now that you have customized your module_generated_helm_overrides.yaml file, rename it to something more applicable to your deployment, such as prod_tfe_overrides.yaml (or whatever you prefer). Then, within your terraform.tfvars file, set the value of create_helm_overrides_file to false, as we no longer want the Terraform module to manage this file or generate a new one on a subsequent Terraform run.

  7. Add the HashiCorp Helm chart repository:

    helm repo add hashicorp https://helm.releases.hashicorp.com

📝 Note: If you have already added the HashiCorp Helm registry, you should run helm repo update hashicorp to ensure you have the latest version.

  8. Install the TFE application via helm:

    helm install terraform-enterprise hashicorp/terraform-enterprise --namespace <TFE_NAMESPACE> --values <TFE_OVERRIDES_FILE>
  9. Verify the TFE pod(s) are starting successfully:

    View the events within the namespace:

    kubectl get events --namespace <TFE_NAMESPACE>

    View the pods within the namespace:

    kubectl get pods --namespace <TFE_NAMESPACE>

    View the logs from the pod:

    kubectl logs <TFE_POD_NAME> --namespace <TFE_NAMESPACE> -f
  10. Create a DNS record for your TFE FQDN. The DNS record should resolve to your TFE load balancer; the exact target depends on how the load balancer was configured during your TFE deployment:

    • If you configured a Kubernetes service of type LoadBalancer (the default in the module-generated Helm overrides), the DNS record should resolve to the DNS name of your AWS network load balancer (NLB).

      kubectl get services --namespace <TFE_NAMESPACE>
    • If you configured a custom Kubernetes ingress (meaning you customized your Helm overrides during Post Step 5), the DNS record should resolve to the IP address of your ingress controller load balancer.

      kubectl get ingress <INGRESS_NAME> --namespace <INGRESS_NAMESPACE>

    📝 Note: If you are creating your DNS record in Route53, AWS recommends creating an alias record (if your TFE load balancer is an AWS-managed load balancer resource).
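
    For example, a hedged sketch of creating an alias record with the AWS CLI (all values in <> are placeholders; the NLB DNS name and its zone ID are available via aws elbv2 describe-load-balancers):

    aws route53 change-resource-record-sets \
      --hosted-zone-id <route53-hosted-zone-id> \
      --change-batch '{
        "Changes": [{
          "Action": "UPSERT",
          "ResourceRecordSet": {
            "Name": "<tfe-fqdn>",
            "Type": "A",
            "AliasTarget": {
              "HostedZoneId": "<nlb-hosted-zone-id>",
              "DNSName": "<nlb-dns-name>",
              "EvaluateTargetHealth": false
            }
          }
        }]
      }'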

  11. Verify the TFE application is ready:

    curl https://<TFE_FQDN>/_health_check
  12. Follow the remaining steps here to finish the installation setup, which involves creating the initial admin user.


Docs

Below are links to various docs related to the customization and management of your TFE deployment:

  • Deployment Customizations
  • Helm Overrides
  • Kubernetes Secrets


Requirements

Name Version
terraform >= 1.9
aws ~> 5.63
local 2.5.1
tls 4.0.5

Providers

Name Version
aws ~> 5.63
local 2.5.1
tls 4.0.5

Resources

Name Type
aws_db_parameter_group.tfe resource
aws_db_subnet_group.tfe resource
aws_eks_access_entry.tfe_cluster_creator resource
aws_eks_access_policy_association.tfe_cluster_creator resource
aws_eks_cluster.tfe resource
aws_eks_node_group.tfe resource
aws_elasticache_replication_group.redis_cluster resource
aws_elasticache_subnet_group.tfe resource
aws_iam_openid_connect_provider.tfe_eks_irsa resource
aws_iam_policy.aws_load_balancer_controller_policy resource
aws_iam_policy.s3_crr resource
aws_iam_policy.tfe_eks_nodegroup_custom resource
aws_iam_policy.tfe_irsa resource
aws_iam_policy_attachment.s3_crr resource
aws_iam_role.aws_lb_controller_irsa resource
aws_iam_role.eks_cluster resource
aws_iam_role.s3_crr resource
aws_iam_role.tfe_eks_nodegroup resource
aws_iam_role.tfe_irsa resource
aws_iam_role_policy_attachment.aws_load_balancer_controller_policy resource
aws_iam_role_policy_attachment.eks_cluster_cluster_policy resource
aws_iam_role_policy_attachment.eks_cluster_service_policy resource
aws_iam_role_policy_attachment.eks_cluster_vpc_resource_controller_policy resource
aws_iam_role_policy_attachment.tfe_eks_nodegroup_cni_policy resource
aws_iam_role_policy_attachment.tfe_eks_nodegroup_container_registry_readonly resource
aws_iam_role_policy_attachment.tfe_eks_nodegroup_ebs_kms resource
aws_iam_role_policy_attachment.tfe_eks_nodegroup_worker_node_policy resource
aws_iam_role_policy_attachment.tfe_irsa resource
aws_launch_template.tfe_eks_nodegroup resource
aws_rds_cluster.tfe resource
aws_rds_cluster_instance.tfe resource
aws_rds_cluster_parameter_group.tfe resource
aws_rds_global_cluster.tfe resource
aws_s3_bucket.tfe resource
aws_s3_bucket_public_access_block.tfe resource
aws_s3_bucket_replication_configuration.tfe resource
aws_s3_bucket_server_side_encryption_configuration.tfe resource
aws_s3_bucket_versioning.tfe resource
aws_security_group.eks_cluster_allow resource
aws_security_group.rds_allow_ingress resource
aws_security_group.redis_allow_ingress resource
aws_security_group.tfe_eks_nodegroup_allow resource
aws_security_group.tfe_lb_allow resource
aws_security_group_rule.eks_cluster_allow_all_egress resource
aws_security_group_rule.eks_cluster_allow_ingress_nodegroup resource
aws_security_group_rule.rds_allow_ingress_from_cidr resource
aws_security_group_rule.rds_allow_ingress_from_nodegroup resource
aws_security_group_rule.rds_allow_ingress_from_sg resource
aws_security_group_rule.redis_allow_ingress_from_cidr resource
aws_security_group_rule.redis_allow_ingress_from_nodegroup resource
aws_security_group_rule.redis_allow_ingress_from_sg resource
aws_security_group_rule.tfe_eks_nodegroup_allow_10250_from_cluster resource
aws_security_group_rule.tfe_eks_nodegroup_allow_443_from_cluster resource
aws_security_group_rule.tfe_eks_nodegroup_allow_443_from_lb resource
aws_security_group_rule.tfe_eks_nodegroup_allow_4443_from_cluster resource
aws_security_group_rule.tfe_eks_nodegroup_allow_6443_from_cluster resource
aws_security_group_rule.tfe_eks_nodegroup_allow_8443_from_cluster resource
aws_security_group_rule.tfe_eks_nodegroup_allow_9443_from_cluster resource
aws_security_group_rule.tfe_eks_nodegroup_allow_all_egress resource
aws_security_group_rule.tfe_eks_nodegroup_allow_nodes_53_tcp resource
aws_security_group_rule.tfe_eks_nodegroup_allow_nodes_53_udp resource
aws_security_group_rule.tfe_eks_nodegroup_allow_nodes_ephemeral resource
aws_security_group_rule.tfe_eks_nodegroup_allow_tfe_http_from_lb resource
aws_security_group_rule.tfe_eks_nodegroup_allow_tfe_https_from_lb resource
aws_security_group_rule.tfe_eks_nodegroup_allow_tfe_metrics_http_from_cidr resource
aws_security_group_rule.tfe_eks_nodegroup_allow_tfe_metrics_https_from_cidr resource
aws_security_group_rule.tfe_lb_allow_all_egress_to_cidr resource
aws_security_group_rule.tfe_lb_allow_all_egress_to_nodegroup resource
aws_security_group_rule.tfe_lb_allow_all_egress_to_sg resource
aws_security_group_rule.tfe_lb_allow_ingress_443 resource
local_file.helm_overrides_values resource
aws_ami.tfe_eks_nodegroup_custom data source
aws_ami.tfe_eks_nodegroup_default data source
aws_availability_zones.available data source
aws_caller_identity.current data source
aws_iam_policy_document.aws_lb_controller_irsa_assume_role data source
aws_iam_policy_document.aws_load_balancer_controller_policy data source
aws_iam_policy_document.eks_cluster_assume_role data source
aws_iam_policy_document.s3_crr data source
aws_iam_policy_document.s3_crr_assume_role data source
aws_iam_policy_document.tfe_eks_nodegroup_assume_role data source
aws_iam_policy_document.tfe_eks_nodegroup_ebs_kms_cmk data source
aws_iam_policy_document.tfe_irsa_assume_role data source
aws_iam_policy_document.tfe_irsa_combined data source
aws_iam_policy_document.tfe_irsa_cost_estimation data source
aws_iam_policy_document.tfe_irsa_rds_kms_cmk data source
aws_iam_policy_document.tfe_irsa_redis_kms_cmk data source
aws_iam_policy_document.tfe_irsa_s3 data source
aws_iam_policy_document.tfe_irsa_s3_kms_cmk data source
aws_iam_session_context.current data source
aws_partition.current data source
aws_region.current data source
aws_secretsmanager_secret_version.tfe_database_password data source
aws_secretsmanager_secret_version.tfe_redis_password data source
tls_certificate.tfe_eks data source

Inputs

Name Description Type Default Required
friendly_name_prefix Friendly name prefix used for uniquely naming all AWS resources for this deployment. Most commonly set to either an environment (e.g. 'sandbox', 'prod'), a team name, or a project name. string n/a yes
rds_subnet_ids List of subnet IDs to use for RDS database subnet group. list(string) n/a yes
tfe_database_password_secret_arn ARN of AWS Secrets Manager secret for the TFE RDS Aurora (PostgreSQL) database password. string n/a yes
tfe_fqdn Fully qualified domain name (FQDN) of TFE instance. This name should eventually resolve to the TFE load balancer DNS name or IP address and will be what clients use to access TFE. string n/a yes
vpc_id ID of VPC where TFE will be deployed. string n/a yes
aws_lb_controller_kube_namespace Name of Kubernetes namespace for AWS Load Balancer Controller service account (to be created by Helm chart). Used to configure EKS IRSA. string "kube-system" no
aws_lb_controller_kube_svc_account Name of Kubernetes service account for AWS Load Balancer Controller (to be created by Helm chart). Used to configure EKS IRSA. string "aws-load-balancer-controller" no
cidr_allow_egress_from_tfe_lb List of CIDR ranges to allow all outbound traffic from TFE load balancer. Only set this to your TFE pod CIDR ranges when an EKS cluster already exists outside of this module. list(string) null no
cidr_allow_ingress_tfe_443 List of CIDR ranges to allow TCP/443 inbound to TFE load balancer (load balancer is managed by Helm/K8s). list(string) [] no
cidr_allow_ingress_tfe_metrics_http List of CIDR ranges to allow TCP/9090 (TFE HTTP metrics endpoint) inbound to TFE pods. list(string) [] no
cidr_allow_ingress_tfe_metrics_https List of CIDR ranges to allow TCP/9091 (TFE HTTPS metrics endpoint) inbound to TFE pods. list(string) [] no
cidr_allow_ingress_to_rds List of CIDR ranges to allow TCP/5432 (PostgreSQL) inbound to RDS cluster. list(string) null no
cidr_allow_ingress_to_redis List of CIDR ranges to allow TCP/6379 (Redis) inbound to Redis cluster. list(string) null no
common_tags Map of common tags for all taggable AWS resources. map(string) {} no
create_aws_lb_controller_irsa Boolean to create AWS Load Balancer Controller IAM role and policies to enable EKS IAM role for service accounts (IRSA). bool false no
create_eks_cluster Boolean to create new EKS cluster for TFE. bool false no
create_eks_oidc_provider Boolean to create OIDC provider used to configure AWS IRSA. bool false no
create_helm_overrides_file Boolean to generate a YAML file from template with Helm overrides values for TFE deployment. bool true no
create_tfe_eks_irsa Boolean to create TFE IAM role and policies to enable TFE EKS IAM role for service accounts (IRSA). bool false no
create_tfe_lb_security_group Boolean to create security group for TFE load balancer (load balancer is managed by Helm/K8s). bool true no
eks_cluster_authentication_mode Authentication mode for access config of EKS cluster. string "API_AND_CONFIG_MAP" no
eks_cluster_endpoint_public_access Boolean to enable public access to the EKS cluster endpoint. bool false no
eks_cluster_name Name of EKS cluster. string "tfe-eks-cluster" no
eks_cluster_public_access_cidrs List of CIDR blocks to allow public access to the EKS cluster endpoint. Only valid when eks_cluster_endpoint_public_access is true. list(string) null no
eks_cluster_service_ipv4_cidr CIDR block for the EKS cluster Kubernetes service network. Must be a valid /16 CIDR block. EKS will auto-assign from either 10.100.0.0/16 or 172.20.0.0/16 CIDR blocks when null. string null no
eks_nodegroup_ami_id ID of AMI to use for EKS node group. Required when eks_nodegroup_ami_type is CUSTOM. string null no
eks_nodegroup_ami_type Type of AMI to use for EKS node group. Must be set to CUSTOM when eks_nodegroup_ami_id is not null. string "AL2023_x86_64_STANDARD" no
eks_nodegroup_ebs_kms_key_arn ARN of KMS customer managed key (CMK) to encrypt EKS node group EBS volumes. string null no
eks_nodegroup_instance_type Instance type for worker nodes within EKS node group. string "m7i.xlarge" no
eks_nodegroup_name Name of EKS node group. string "tfe-eks-nodegroup" no
eks_nodegroup_scaling_config Scaling configuration for EKS node group. map(number) {"desired_size": 3, "max_size": 3, "min_size": 2} no
eks_oidc_provider_arn ARN of existing OIDC provider for EKS cluster. Required when create_eks_oidc_provider is false. string null no
eks_oidc_provider_url URL of existing OIDC provider for EKS cluster. Required when create_eks_oidc_provider is false. string null no
eks_subnet_ids List of subnet IDs to use for EKS cluster. list(string) null no
is_secondary_region Boolean indicating whether this TFE deployment is in the 'primary' region or 'secondary' region. bool false no
rds_apply_immediately Boolean to apply changes immediately to RDS cluster instance. bool true no
rds_aurora_engine_mode RDS Aurora database engine mode. string "provisioned" no
rds_aurora_engine_version Engine version of RDS Aurora PostgreSQL. number 16.2 no
rds_aurora_instance_class Instance class of Aurora PostgreSQL database. string "db.r6i.xlarge" no
rds_aurora_replica_count Number of replica (reader) cluster instances to create within the RDS Aurora database cluster (within the same region). number 1 no
rds_availability_zones List of AWS availability zones to spread Aurora database cluster instances across. Leave as null and RDS will automatically assign 3 availability zones. list(string) null no
rds_backup_retention_period The number of days to retain backups for. Must be between 0 and 35. Must be greater than 0 if the database cluster is used as a source of a read replica cluster. number 35 no
rds_deletion_protection Boolean to enable deletion protection for RDS global cluster. bool false no
rds_force_destroy Boolean to enable the removal of RDS database cluster members from RDS global cluster on destroy. bool false no
rds_global_cluster_id ID of RDS global cluster. Only required when is_secondary_region is true, otherwise leave as null. string null no
rds_kms_key_arn ARN of KMS customer managed key (CMK) to encrypt TFE RDS cluster. string null no
rds_parameter_group_family Family of Aurora PostgreSQL database parameter group. string "aurora-postgresql16" no
rds_performance_insights_enabled Boolean to enable performance insights for RDS cluster instance(s). bool true no
rds_performance_insights_retention_period Number of days to retain RDS performance insights data. Must be between 7 and 731. number 7 no
rds_preferred_backup_window Daily time range (UTC) for RDS backup to occur. Must not overlap with rds_preferred_maintenance_window. string "04:00-04:30" no
rds_preferred_maintenance_window Window (UTC) to perform RDS database maintenance. Must not overlap with rds_preferred_backup_window. string "Sun:08:00-Sun:09:00" no
rds_replication_source_identifier ARN of source RDS cluster or cluster instance if this cluster is to be created as a read replica. Only required when is_secondary_region is true, otherwise leave as null. string null no
rds_skip_final_snapshot Boolean to enable RDS to take a final database snapshot before destroying. bool false no
rds_source_region Source region for RDS cross-region replication. Only required when is_secondary_region is true, otherwise leave as null. string null no
rds_storage_encrypted Boolean to encrypt RDS storage. An AWS managed key will be used when true unless a value is also specified for rds_kms_key_arn. bool true no
redis_apply_immediately Boolean to apply changes immediately to Redis cluster. bool true no
redis_at_rest_encryption_enabled Boolean to enable encryption at rest on Redis cluster. An AWS managed key will be used when true unless a value is also specified for redis_kms_key_arn. bool true no
redis_auto_minor_version_upgrade Boolean to enable automatic minor version upgrades for Redis cluster. bool true no
redis_automatic_failover_enabled Boolean for deploying Redis nodes in multiple availability zones and enabling automatic failover. bool true no
redis_engine_version Redis version number. string "7.1" no
redis_kms_key_arn ARN of KMS customer managed key (CMK) to encrypt Redis cluster with. string null no
redis_multi_az_enabled Boolean to create Redis nodes across multiple availability zones. If true, redis_automatic_failover_enabled must also be true, and more than one subnet must be specified within redis_subnet_ids. bool true no
redis_node_type Type (size) of Redis node from a compute, memory, and network throughput standpoint. string "cache.m5.large" no
redis_parameter_group_name Name of parameter group to associate with Redis cluster. string "default.redis7" no
redis_port Port number the Redis nodes will accept connections on. number 6379 no
redis_subnet_ids List of subnet IDs to use for Redis cluster subnet group. list(string) null no
redis_transit_encryption_enabled Boolean to enable TLS encryption between TFE and the Redis cluster. bool true no
s3_destination_bucket_arn ARN of destination S3 bucket for cross-region replication configuration. Bucket should already exist in secondary region. Required when s3_enable_bucket_replication is true. string "" no
s3_destination_bucket_kms_key_arn ARN of KMS key of destination S3 bucket for cross-region replication configuration if it is encrypted with a customer managed key (CMK). string null no
s3_enable_bucket_replication Boolean to enable cross-region replication for TFE S3 bucket. Do not enable when is_secondary_region is true. An s3_destination_bucket_arn is also required when true. bool false no
s3_kms_key_arn ARN of KMS customer managed key (CMK) to encrypt TFE S3 bucket with. string null no
sg_allow_egress_from_tfe_lb Security group ID of EKS node group to allow all egress traffic from TFE load balancer. Only set this to your TFE pod security group ID when an EKS cluster already exists outside of this module. string null no
sg_allow_ingress_to_rds Security group ID to allow TCP/5432 (PostgreSQL) inbound to RDS cluster. string null no
sg_allow_ingress_to_redis Security group ID to allow TCP/6379 (Redis) inbound to Redis cluster. string null no
tfe_cost_estimation_iam_enabled Boolean to add AWS pricing actions to TFE IAM role for service account (IRSA). Only implemented when create_tfe_eks_irsa is true. bool true no
tfe_database_name Name of TFE database to create within RDS global cluster. string "tfe" no
tfe_database_parameters PostgreSQL server parameters for the connection URI. Used to configure the PostgreSQL connection. string "sslmode=require" no
tfe_database_user Username for TFE RDS database cluster. string "tfe" no
tfe_http_port HTTP port number that the TFE application will listen on within the TFE pods. It is recommended to leave this as the default value. number 8080 no
tfe_https_port HTTPS port number that the TFE application will listen on within the TFE pods. It is recommended to leave this as the default value. number 8443 no
tfe_kube_namespace Name of Kubernetes namespace for TFE service account (to be created by Helm chart). Used to configure EKS IRSA. string "tfe" no
tfe_kube_svc_account Name of Kubernetes service account for TFE (to be created by Helm chart). Used to configure EKS IRSA. string "tfe" no
tfe_metrics_http_port HTTP port number that the TFE metrics endpoint will listen on within the TFE pods. It is recommended to leave this as the default value. number 9090 no
tfe_metrics_https_port HTTPS port number that the TFE metrics endpoint will listen on within the TFE pods. It is recommended to leave this as the default value. number 9091 no
tfe_object_storage_s3_access_key_id Access key ID for S3 bucket. Required when tfe_object_storage_s3_use_instance_profile is false. string null no
tfe_object_storage_s3_secret_access_key Secret access key for S3 bucket. Required when tfe_object_storage_s3_use_instance_profile is false. string null no
tfe_object_storage_s3_use_instance_profile Boolean to use instance profile for S3 bucket access. If false, tfe_object_storage_s3_access_key_id and tfe_object_storage_s3_secret_access_key are required. bool true no
tfe_redis_password_secret_arn ARN of AWS Secrets Manager secret for the TFE Redis password. Value of secret must contain from 16 to 128 alphanumeric characters or symbols (excluding @, ", and /). string null no

Outputs

Name Description
aws_lb_controller_irsa_role_arn ARN of IAM role for AWS Load Balancer Controller IRSA.
eks_cluster_name Name of TFE EKS cluster.
elasticache_replication_group_arn ARN of ElastiCache Replication Group (Redis) cluster.
elasticache_replication_group_id ID of ElastiCache Replication Group (Redis) cluster.
elasticache_replication_group_primary_endpoint_address Primary endpoint address of ElastiCache Replication Group (Redis) cluster.
rds_aurora_cluster_arn ARN of RDS Aurora database cluster.
rds_aurora_cluster_endpoint RDS Aurora database cluster endpoint.
rds_aurora_cluster_members List of instances that are part of this RDS Aurora database cluster.
rds_aurora_global_cluster_id RDS Aurora global database cluster identifier.
s3_bucket_arn ARN of TFE S3 bucket.
s3_bucket_name Name of TFE S3 bucket.
s3_crr_iam_role_arn ARN of S3 cross-region replication IAM role.
tfe_database_host PostgreSQL server endpoint in the format that TFE will connect to.
tfe_database_password TFE PostgreSQL database password.
tfe_database_password_base64 Base64-encoded TFE PostgreSQL database password.
tfe_irsa_role_arn ARN of IAM role for TFE EKS IRSA.
tfe_lb_security_group_id ID of security group for TFE load balancer.
tfe_redis_password TFE Redis password.
tfe_redis_password_base64 Base64-encoded TFE Redis password.
tfe_url URL to access TFE application based on value of tfe_fqdn input.
