Terraform module aligned with HashiCorp Validated Designs (HVD) to deploy Terraform Enterprise (TFE) on AWS Elastic Kubernetes Service (EKS). This module supports bringing your own EKS cluster, or optionally creating a new EKS cluster dedicated to running TFE. This module does not use the Kubernetes or Helm Terraform providers; instead, it includes Post Steps for the application layer portion of the deployment leveraging the `kubectl` and `helm` CLIs.
- TFE license file (e.g. `terraform.hclic`)
- Terraform CLI `>= 1.9` installed on clients/workstations that will be used to deploy TFE
- General understanding of how to use Terraform (Community Edition)
- General understanding of how to use AWS
- General understanding of how to use Kubernetes and Helm
- `git` CLI and Visual Studio Code editor installed on workstations are strongly recommended
- AWS account that TFE will be deployed in, with permissions to provision these resources via Terraform CLI
- (Optional) AWS S3 bucket for S3 remote state backend that will be used to manage the Terraform state of this TFE deployment (out-of-band from the TFE application) via Terraform CLI (Community Edition)
- AWS VPC ID and the following subnets:
- Load balancer subnet IDs (can be the same as EKS subnets if desired)
- EKS (compute) subnet IDs for TFE pods
- RDS (database) subnet IDs
- Redis subnet IDs (can be the same as RDS subnets if desired)
- (Optional) S3 VPC Endpoint configured within VPC
- (Optional) AWS Route53 Hosted Zone for TFE DNS record creation
- Chosen fully qualified domain name (FQDN) for TFE (e.g. `tfe.aws.example.com`)
- This module will automatically create the necessary EKS-related security groups and attach them to the applicable resources when `create_eks_cluster` is `true`
- Identify CIDR range(s) that will need to access the TFE application
- (Optional) Identify CIDR range(s) of any monitoring/observability tools that will need to access (scrape) TFE metrics endpoints
- Identify CIDR range(s) that will need to access the TFE EKS cluster
- If your EKS cluster is private, your clients/workstations must be able to access the control plane via `kubectl` and `helm`
- Be familiar with the TFE ingress requirements
- Be familiar with the TFE egress requirements
- If you are bringing your own EKS cluster (`create_eks_cluster` is `false`), then you must account for the following:
  - Allow `TCP/8443` (HTTPS) and `TCP/8080` (HTTP) ingress to EKS node group/TFE pods subnet from TFE load balancer subnet (for TFE application traffic)
  - Allow `TCP/8201` ingress between nodes in EKS node group/TFE pods subnet (for TFE embedded Vault internal cluster traffic)
  - (Optional) Allow `TCP/9091` (HTTPS) and/or `TCP/9090` (HTTP) ingress to EKS node group/TFE pods subnet from metrics collection tool (for scraping TFE metrics endpoints)
  - Allow `TCP/443` egress to Terraform endpoints listed here from EKS node group/TFE pods subnet
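As a concrete illustration of one of the rules above, here is a minimal sketch using the AWS CLI (the security group IDs are placeholders; you may equally manage these rules in your own Terraform configuration):

```shell
# Allow TCP/8443 (HTTPS) from the TFE load balancer security group
# to the security group attached to the EKS node group/TFE pods
aws ec2 authorize-security-group-ingress \
  --group-id <tfe-nodes-security-group-id> \
  --protocol tcp \
  --port 8443 \
  --source-group <tfe-lb-security-group-id>
```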
- TLS certificate (e.g. `cert.pem`) and private key (e.g. `privkey.pem`) that match your chosen fully qualified domain name (FQDN) for TFE
  - TLS certificate and private key must be in PEM format
  - Private key must not be password protected
- TLS certificate authority (CA) bundle (e.g. `ca_bundle.pem`) corresponding with the CA that issues your TFE TLS certificates
  - CA bundle must be in PEM format
📝 Note: The TLS certificate and private key will be created as Kubernetes secrets during the Post Steps.
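Before proceeding, you can sanity-check that your certificate and private key are PEM-encoded and match each other (a minimal sketch; the filenames are the example names from above, and the `rsa` subcommand assumes an RSA private key):

```shell
# Both files should parse cleanly if they are valid PEM
openssl x509 -in cert.pem -noout -subject -enddate
openssl rsa -in privkey.pem -check -noout

# These two digests should be identical if the key matches the certificate
openssl x509 -in cert.pem -noout -pubkey | openssl sha256
openssl pkey -in privkey.pem -pubout | openssl sha256
```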
The following bootstrap secrets must be stored in AWS Secrets Manager in order to bootstrap the TFE deployment:
- RDS (PostgreSQL) database password - random characters stored as a plaintext secret; value must be between 8 and 128 characters long and must not contain '@', '"', or '/' characters
- Redis password - random characters stored as a plaintext secret; value must be between 16 and 128 characters long and must not contain '@', '"', or '/' characters
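If these secrets do not exist yet, a minimal sketch of creating them with the AWS CLI (the secret names and values here are illustrative; the resulting ARNs are what you will pass to the `tfe_database_password_secret_arn` and `tfe_redis_password_secret_arn` input variables):

```shell
# RDS (PostgreSQL) database password (8-128 characters; no '@', '"', or '/')
aws secretsmanager create-secret \
  --name tfe-database-password \
  --secret-string '<random-characters>'

# Redis password (16-128 characters; no '@', '"', or '/')
aws secretsmanager create-secret \
  --name tfe-redis-password \
  --secret-string '<random-characters>'
```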
If you plan to create a new EKS cluster using this module (`create_eks_cluster` is `true`), then you may skip this section. Otherwise:
- EKS cluster with the following configurations:
- EKS node group
- EKS OIDC provider URL (used by module to create TFE IRSA)
- EKS OIDC provider ARN (used by module to create TFE IRSA; see the lookup sketch after this list)
- (Optional) AWS load balancer controller installed within EKS cluster (unless you plan to use a custom Kubernetes ingress controller load balancer)
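If you need to look up the OIDC provider values for an existing cluster, a sketch using the AWS CLI:

```shell
# OIDC issuer URL of an existing EKS cluster
aws eks describe-cluster \
  --name <eks-cluster-name> \
  --query 'cluster.identity.oidc.issuer' \
  --output text

# List existing IAM OIDC provider ARNs (match on the issuer URL from above)
aws iam list-open-id-connect-providers
```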
One of the following logging destinations:
- AWS CloudWatch log group
- AWS S3 bucket
1. Create/configure/validate the applicable prerequisites.
2. Nested within the examples directory are subdirectories containing ready-made Terraform configurations for example scenarios on how to call and deploy this module. To get started, choose the example scenario that most closely matches your requirements. You can customize your deployment later by adding additional module inputs as you see fit (see the Deployment-Customizations doc for more details).
3. Copy all of the Terraform files from your example scenario of choice into a new destination directory to create the Terraform configuration that will manage your TFE deployment. This is a common directory structure for managing multiple TFE deployments:

   ```
   .
   └── environments
       ├── production
       │   ├── backend.tf
       │   ├── main.tf
       │   ├── outputs.tf
       │   ├── terraform.tfvars
       │   └── variables.tf
       └── sandbox
           ├── backend.tf
           ├── main.tf
           ├── outputs.tf
           ├── terraform.tfvars
           └── variables.tf
   ```
   📝 Note: In this example, the user will have two separate TFE deployments: one for their `sandbox` environment, and one for their `production` environment. This is recommended, but not required.
4. (Optional) Uncomment and update the S3 remote state backend configuration provided in the `backend.tf` file with your own custom values. While this step is highly recommended, it is technically not required to use a remote backend config for your TFE deployment (if you are in a sandbox environment, for example).
5. Populate your own custom values into the `terraform.tfvars.example` file that was provided (in particular, values enclosed in the `<>` characters). Then, remove the `.example` file extension such that the file is now named `terraform.tfvars`.
6. Navigate to the directory of your newly created Terraform configuration for your TFE deployment, and run `terraform init`, `terraform plan`, and `terraform apply`.
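   For example, with the directory structure shown in step 3 (a sketch; adjust the path to your own layout):

   ```shell
   cd environments/sandbox
   terraform init
   terraform plan
   terraform apply
   ```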
The TFE infrastructure resources have now been created. Next comes the application layer portion of the deployment (which we refer to as the Post Steps), which will involve interacting with your EKS cluster via `kubectl` and installing the TFE application via `helm`.
7. Authenticate to your EKS cluster:

   ```shell
   aws eks --region <aws-region> update-kubeconfig --name <eks-cluster-name>
   ```
   📝 Note: You can get the value of your EKS cluster name from the `eks_cluster_name` Terraform output if you created your EKS cluster via this module.
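   To confirm your workstation can reach the cluster (assuming your IAM principal has been granted access):

   ```shell
   kubectl get nodes
   ```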
8. AWS recommends installing the AWS load balancer controller for EKS. If it is not already installed in your EKS cluster, install the AWS load balancer controller within the `kube-system` namespace via the Helm chart:

   Add the AWS `eks-charts` Helm chart repository:

   ```shell
   helm repo add eks https://aws.github.io/eks-charts
   ```

   Update your local repo to make sure that you have the most recent charts:

   ```shell
   helm repo update eks
   ```

   Install the AWS load balancer controller:

   ```shell
   helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
     --namespace kube-system \
     --set clusterName=<eks-cluster-name> \
     --set serviceAccount.create=true \
     --set serviceAccount.name=aws-load-balancer-controller \
     --set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"=<aws-lb-controller-irsa-role-arn> \
     --set region=<aws-region> \
     --set vpcId=<vpc-id>
   ```
   📝 Note: You can get the value of your AWS load balancer controller IRSA role ARN from the `aws_lb_controller_irsa_role_arn` Terraform output (if `create_aws_lb_controller_irsa` was `true`).
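   To confirm the controller deployed successfully:

   ```shell
   kubectl get deployment --namespace kube-system aws-load-balancer-controller
   ```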
9. Create the Kubernetes namespace for TFE:

   ```shell
   kubectl create namespace tfe
   ```
   📝 Note: You can name your TFE namespace something different than `tfe` if you prefer. If you do name it differently, be sure to update your value of the `tfe_kube_namespace` input variable accordingly.
10. Create the required secrets for your TFE deployment within your new Kubernetes namespace for TFE. There are several ways to do this, whether it be from the CLI via `kubectl`, or another method involving a third-party secrets helper/tool. See the kubernetes-secrets docs for details on the required secrets and how to create them.
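    For illustration, the general pattern with `kubectl` looks like the following (a sketch only; the secret names and keys shown here are hypothetical, so consult the kubernetes-secrets docs for the exact names the Helm chart expects):

    ```shell
    # Generic secret created from a file (e.g. the TFE license)
    kubectl create secret generic tfe-license \
      --namespace <TFE_NAMESPACE> \
      --from-file=license=./terraform.hclic

    # TLS secret created from your certificate and private key
    kubectl create secret tls tfe-certs \
      --namespace <TFE_NAMESPACE> \
      --cert=./cert.pem \
      --key=./privkey.pem
    ```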
11. This Terraform module will automatically generate a Helm overrides file within your Terraform working directory named `./helm/module_generated_helm_overrides.yaml`. This Helm overrides file contains values interpolated from some of the infrastructure resources that were created by Terraform in step 6. Within the Helm overrides file, update or validate the values for the remaining settings that are enclosed in the `<>` characters. You may also add any additional configuration settings into your Helm overrides file at this time (see the helm-overrides doc for more details).
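    A quick way to find any placeholder values you still need to fill in (assuming the generated filename above):

    ```shell
    grep -n '<[A-Za-z_]*>' ./helm/module_generated_helm_overrides.yaml
    ```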
12. Now that you have customized your `module_generated_helm_overrides.yaml` file, rename it to something more applicable to your deployment, such as `prod_tfe_overrides.yaml` (or whatever you prefer). Then, within your `terraform.tfvars` file, set the value of `create_helm_overrides_file` to `false`, as we no longer want the Terraform module to manage this file or generate a new one on a subsequent Terraform run.
13. Add the HashiCorp Helm chart repository:

    ```shell
    helm repo add hashicorp https://helm.releases.hashicorp.com
    ```
    📝 Note: If you have already added the HashiCorp Helm registry, you should run `helm repo update hashicorp` to ensure you have the latest version.
14. Install the TFE application via `helm`:

    ```shell
    helm install terraform-enterprise hashicorp/terraform-enterprise --namespace <TFE_NAMESPACE> --values <TFE_OVERRIDES_FILE>
    ```
15. Verify the TFE pod(s) are starting successfully:

    View the events within the namespace:

    ```shell
    kubectl get events --namespace <TFE_NAMESPACE>
    ```

    View the pods within the namespace:

    ```shell
    kubectl get pods --namespace <TFE_NAMESPACE>
    ```

    View the logs from the pod:

    ```shell
    kubectl logs <TFE_POD_NAME> --namespace <TFE_NAMESPACE> -f
    ```
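    If a pod is stuck in a `Pending` or crash-looping state, `kubectl describe` will often surface the underlying event:

    ```shell
    kubectl describe pod <TFE_POD_NAME> --namespace <TFE_NAMESPACE>
    ```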
16. Create a DNS record for your TFE FQDN. The DNS record should resolve to your TFE load balancer, depending on how the load balancer was configured during your TFE deployment:

    - If you configured a Kubernetes service of type `LoadBalancer` (what the module-generated Helm overrides defaults to), the DNS record should resolve to the DNS name of your AWS network load balancer (NLB).

      ```shell
      kubectl get services --namespace <TFE_NAMESPACE>
      ```

    - If you configured a custom Kubernetes ingress (meaning you customized your Helm overrides during step 11), the DNS record should resolve to the IP address of your ingress controller load balancer.

      ```shell
      kubectl get ingress <INGRESS_NAME> --namespace <INGRESS_NAMESPACE>
      ```

    📝 Note: If you are creating your DNS record in Route53, AWS recommends creating an alias record (if your TFE load balancer is an AWS-managed load balancer resource).
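    For example, an alias record could be created with the AWS CLI as follows (illustrative only; `<NLB_HOSTED_ZONE_ID>` is the canonical hosted zone ID of the load balancer itself, not your Route53 zone ID):

    ```shell
    aws route53 change-resource-record-sets \
      --hosted-zone-id <ROUTE53_HOSTED_ZONE_ID> \
      --change-batch '{
        "Changes": [{
          "Action": "UPSERT",
          "ResourceRecordSet": {
            "Name": "<TFE_FQDN>",
            "Type": "A",
            "AliasTarget": {
              "HostedZoneId": "<NLB_HOSTED_ZONE_ID>",
              "DNSName": "<NLB_DNS_NAME>",
              "EvaluateTargetHealth": false
            }
          }
        }]
      }'
    ```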
17. Verify the TFE application is ready:

    ```shell
    curl https://<TFE_FQDN>/_health_check
    ```
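    TFE can take a few minutes to become ready after the pods start. A simple polling sketch (add `--cacert ./ca_bundle.pem` if your CA is not in your client's trust store):

    ```shell
    until curl -sfS "https://<TFE_FQDN>/_health_check"; do
      sleep 10
    done
    ```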
18. Follow the remaining steps here to finish the installation setup, which involves creating the initial admin user.
Below are links to various docs related to the customization and management of your TFE deployment:
- Deployment customizations
- Helm overrides
- TFE version upgrades
- TFE TLS certificate rotation
- TFE configuration settings
- TFE Kubernetes secrets
- TFE IAM role for service accounts
## Requirements

Name | Version |
---|---|
terraform | `>= 1.9` |
aws | `~> 5.63` |
local | `2.5.1` |
tls | `4.0.5` |
## Providers

Name | Version |
---|---|
aws | `~> 5.63` |
local | `2.5.1` |
tls | `4.0.5` |
## Inputs

Name | Description | Type | Default | Required |
---|---|---|---|---|
friendly_name_prefix | Friendly name prefix used for uniquely naming all AWS resources for this deployment. Most commonly set to either an environment (e.g. 'sandbox', 'prod'), a team name, or a project name. | `string` | n/a | yes |
rds_subnet_ids | List of subnet IDs to use for RDS database subnet group. | `list(string)` | n/a | yes |
tfe_database_password_secret_arn | ARN of AWS Secrets Manager secret for the TFE RDS Aurora (PostgreSQL) database password. | `string` | n/a | yes |
tfe_fqdn | Fully qualified domain name (FQDN) of TFE instance. This name should eventually resolve to the TFE load balancer DNS name or IP address and will be what clients use to access TFE. | `string` | n/a | yes |
vpc_id | ID of VPC where TFE will be deployed. | `string` | n/a | yes |
aws_lb_controller_kube_namespace | Name of Kubernetes namespace for AWS Load Balancer Controller service account (to be created by Helm chart). Used to configure EKS IRSA. | `string` | `"kube-system"` | no |
aws_lb_controller_kube_svc_account | Name of Kubernetes service account for AWS Load Balancer Controller (to be created by Helm chart). Used to configure EKS IRSA. | `string` | `"aws-load-balancer-controller"` | no |
cidr_allow_egress_from_tfe_lb | List of CIDR ranges to allow all outbound traffic from TFE load balancer. Only set this to your TFE pod CIDR ranges when an EKS cluster already exists outside of this module. | `list(string)` | `null` | no |
cidr_allow_ingress_tfe_443 | List of CIDR ranges to allow TCP/443 inbound to TFE load balancer (load balancer is managed by Helm/K8s). | `list(string)` | `[]` | no |
cidr_allow_ingress_tfe_metrics_http | List of CIDR ranges to allow TCP/9090 (TFE HTTP metrics endpoint) inbound to TFE pods. | `list(string)` | `[]` | no |
cidr_allow_ingress_tfe_metrics_https | List of CIDR ranges to allow TCP/9091 (TFE HTTPS metrics endpoint) inbound to TFE pods. | `list(string)` | `[]` | no |
cidr_allow_ingress_to_rds | List of CIDR ranges to allow TCP/5432 (PostgreSQL) inbound to RDS cluster. | `list(string)` | `null` | no |
cidr_allow_ingress_to_redis | List of CIDR ranges to allow TCP/6379 (Redis) inbound to Redis cluster. | `list(string)` | `null` | no |
common_tags | Map of common tags for all taggable AWS resources. | `map(string)` | `{}` | no |
create_aws_lb_controller_irsa | Boolean to create AWS Load Balancer Controller IAM role and policies to enable EKS IAM role for service accounts (IRSA). | `bool` | `false` | no |
create_eks_cluster | Boolean to create new EKS cluster for TFE. | `bool` | `false` | no |
create_eks_oidc_provider | Boolean to create OIDC provider used to configure AWS IRSA. | `bool` | `false` | no |
create_helm_overrides_file | Boolean to generate a YAML file from template with Helm overrides values for TFE deployment. | `bool` | `true` | no |
create_tfe_eks_irsa | Boolean to create TFE IAM role and policies to enable TFE EKS IAM role for service accounts (IRSA). | `bool` | `false` | no |
create_tfe_lb_security_group | Boolean to create security group for TFE load balancer (load balancer is managed by Helm/K8s). | `bool` | `true` | no |
eks_cluster_authentication_mode | Authentication mode for access config of EKS cluster. | `string` | `"API_AND_CONFIG_MAP"` | no |
eks_cluster_endpoint_public_access | Boolean to enable public access to the EKS cluster endpoint. | `bool` | `false` | no |
eks_cluster_name | Name of EKS cluster. | `string` | `"tfe-eks-cluster"` | no |
eks_cluster_public_access_cidrs | List of CIDR blocks to allow public access to the EKS cluster endpoint. Only valid when `eks_cluster_endpoint_public_access` is `true`. | `list(string)` | `null` | no |
eks_cluster_service_ipv4_cidr | CIDR block for the EKS cluster Kubernetes service network. Must be a valid /16 CIDR block. EKS will auto-assign from either 10.100.0.0/16 or 172.20.0.0/16 CIDR blocks when `null`. | `string` | `null` | no |
eks_nodegroup_ami_id | ID of AMI to use for EKS node group. Required when `eks_nodegroup_ami_type` is `CUSTOM`. | `string` | `null` | no |
eks_nodegroup_ami_type | Type of AMI to use for EKS node group. Must be set to `CUSTOM` when `eks_nodegroup_ami_id` is not `null`. | `string` | `"AL2023_x86_64_STANDARD"` | no |
eks_nodegroup_ebs_kms_key_arn | ARN of KMS customer managed key (CMK) to encrypt EKS node group EBS volumes. | `string` | `null` | no |
eks_nodegroup_instance_type | Instance type for worker nodes within EKS node group. | `string` | `"m7i.xlarge"` | no |
eks_nodegroup_name | Name of EKS node group. | `string` | `"tfe-eks-nodegroup"` | no |
eks_nodegroup_scaling_config | Scaling configuration for EKS node group. | `map(number)` | `{` | no |
eks_oidc_provider_arn | ARN of existing OIDC provider for EKS cluster. Required when `create_eks_oidc_provider` is `false`. | `string` | `null` | no |
eks_oidc_provider_url | URL of existing OIDC provider for EKS cluster. Required when `create_eks_oidc_provider` is `false`. | `string` | `null` | no |
eks_subnet_ids | List of subnet IDs to use for EKS cluster. | `list(string)` | `null` | no |
is_secondary_region | Boolean indicating whether this TFE deployment is in the 'primary' region or 'secondary' region. | `bool` | `false` | no |
rds_apply_immediately | Boolean to apply changes immediately to RDS cluster instance. | `bool` | `true` | no |
rds_aurora_engine_mode | RDS Aurora database engine mode. | `string` | `"provisioned"` | no |
rds_aurora_engine_version | Engine version of RDS Aurora PostgreSQL. | `number` | `16.2` | no |
rds_aurora_instance_class | Instance class of Aurora PostgreSQL database. | `string` | `"db.r6i.xlarge"` | no |
rds_aurora_replica_count | Number of replica (reader) cluster instances to create within the RDS Aurora database cluster (within the same region). | `number` | `1` | no |
rds_availability_zones | List of AWS availability zones to spread Aurora database cluster instances across. Leave as `null` and RDS will automatically assign 3 availability zones. | `list(string)` | `null` | no |
rds_backup_retention_period | The number of days to retain backups for. Must be between 0 and 35. Must be greater than 0 if the database cluster is used as a source of a read replica cluster. | `number` | `35` | no |
rds_deletion_protection | Boolean to enable deletion protection for RDS global cluster. | `bool` | `false` | no |
rds_force_destroy | Boolean to enable the removal of RDS database cluster members from RDS global cluster on destroy. | `bool` | `false` | no |
rds_global_cluster_id | ID of RDS global cluster. Only required when `is_secondary_region` is `true`, otherwise leave as `null`. | `string` | `null` | no |
rds_kms_key_arn | ARN of KMS customer managed key (CMK) to encrypt TFE RDS cluster. | `string` | `null` | no |
rds_parameter_group_family | Family of Aurora PostgreSQL database parameter group. | `string` | `"aurora-postgresql16"` | no |
rds_performance_insights_enabled | Boolean to enable performance insights for RDS cluster instance(s). | `bool` | `true` | no |
rds_performance_insights_retention_period | Number of days to retain RDS performance insights data. Must be between 7 and 731. | `number` | `7` | no |
rds_preferred_backup_window | Daily time range (UTC) for RDS backup to occur. Must not overlap with `rds_preferred_maintenance_window`. | `string` | `"04:00-04:30"` | no |
rds_preferred_maintenance_window | Window (UTC) to perform RDS database maintenance. Must not overlap with `rds_preferred_backup_window`. | `string` | `"Sun:08:00-Sun:09:00"` | no |
rds_replication_source_identifier | ARN of source RDS cluster or cluster instance if this cluster is to be created as a read replica. Only required when `is_secondary_region` is `true`, otherwise leave as `null`. | `string` | `null` | no |
rds_skip_final_snapshot | Boolean to enable RDS to take a final database snapshot before destroying. | `bool` | `false` | no |
rds_source_region | Source region for RDS cross-region replication. Only required when `is_secondary_region` is `true`, otherwise leave as `null`. | `string` | `null` | no |
rds_storage_encrypted | Boolean to encrypt RDS storage. An AWS managed key will be used when `true` unless a value is also specified for `rds_kms_key_arn`. | `bool` | `true` | no |
redis_apply_immediately | Boolean to apply changes immediately to Redis cluster. | `bool` | `true` | no |
redis_at_rest_encryption_enabled | Boolean to enable encryption at rest on Redis cluster. An AWS managed key will be used when `true` unless a value is also specified for `redis_kms_key_arn`. | `bool` | `true` | no |
redis_auto_minor_version_upgrade | Boolean to enable automatic minor version upgrades for Redis cluster. | `bool` | `true` | no |
redis_automatic_failover_enabled | Boolean for deploying Redis nodes in multiple availability zones and enabling automatic failover. | `bool` | `true` | no |
redis_engine_version | Redis version number. | `string` | `"7.1"` | no |
redis_kms_key_arn | ARN of KMS customer managed key (CMK) to encrypt Redis cluster with. | `string` | `null` | no |
redis_multi_az_enabled | Boolean to create Redis nodes across multiple availability zones. If `true`, `redis_automatic_failover_enabled` must also be `true`, and more than one subnet must be specified within `redis_subnet_ids`. | `bool` | `true` | no |
redis_node_type | Type (size) of Redis node from a compute, memory, and network throughput standpoint. | `string` | `"cache.m5.large"` | no |
redis_parameter_group_name | Name of parameter group to associate with Redis cluster. | `string` | `"default.redis7"` | no |
redis_port | Port number the Redis nodes will accept connections on. | `number` | `6379` | no |
redis_subnet_ids | List of subnet IDs to use for Redis cluster subnet group. | `list(string)` | `null` | no |
redis_transit_encryption_enabled | Boolean to enable TLS encryption between TFE and the Redis cluster. | `bool` | `true` | no |
s3_destination_bucket_arn | ARN of destination S3 bucket for cross-region replication configuration. Bucket should already exist in secondary region. Required when `s3_enable_bucket_replication` is `true`. | `string` | `""` | no |
s3_destination_bucket_kms_key_arn | ARN of KMS key of destination S3 bucket for cross-region replication configuration if it is encrypted with a customer managed key (CMK). | `string` | `null` | no |
s3_enable_bucket_replication | Boolean to enable cross-region replication for TFE S3 bucket. Do not enable when `is_secondary_region` is `true`. An `s3_destination_bucket_arn` is also required when `true`. | `bool` | `false` | no |
s3_kms_key_arn | ARN of KMS customer managed key (CMK) to encrypt TFE S3 bucket with. | `string` | `null` | no |
sg_allow_egress_from_tfe_lb | Security group ID of EKS node group to allow all egress traffic from TFE load balancer. Only set this to your TFE pod security group ID when an EKS cluster already exists outside of this module. | `string` | `null` | no |
sg_allow_ingress_to_rds | Security group ID to allow TCP/5432 (PostgreSQL) inbound to RDS cluster. | `string` | `null` | no |
sg_allow_ingress_to_redis | Security group ID to allow TCP/6379 (Redis) inbound to Redis cluster. | `string` | `null` | no |
tfe_cost_estimation_iam_enabled | Boolean to add AWS pricing actions to TFE IAM role for service account (IRSA). Only implemented when `create_tfe_eks_irsa` is `true`. | `string` | `true` | no |
tfe_database_name | Name of TFE database to create within RDS global cluster. | `string` | `"tfe"` | no |
tfe_database_parameters | PostgreSQL server parameters for the connection URI. Used to configure the PostgreSQL connection. | `string` | `"sslmode=require"` | no |
tfe_database_user | Username for TFE RDS database cluster. | `string` | `"tfe"` | no |
tfe_http_port | HTTP port number that the TFE application will listen on within the TFE pods. It is recommended to leave this as the default value. | `number` | `8080` | no |
tfe_https_port | HTTPS port number that the TFE application will listen on within the TFE pods. It is recommended to leave this as the default value. | `number` | `8443` | no |
tfe_kube_namespace | Name of Kubernetes namespace for TFE service account (to be created by Helm chart). Used to configure EKS IRSA. | `string` | `"tfe"` | no |
tfe_kube_svc_account | Name of Kubernetes service account for TFE (to be created by Helm chart). Used to configure EKS IRSA. | `string` | `"tfe"` | no |
tfe_metrics_http_port | HTTP port number that the TFE metrics endpoint will listen on within the TFE pods. It is recommended to leave this as the default value. | `number` | `9090` | no |
tfe_metrics_https_port | HTTPS port number that the TFE metrics endpoint will listen on within the TFE pods. It is recommended to leave this as the default value. | `number` | `9091` | no |
tfe_object_storage_s3_access_key_id | Access key ID for S3 bucket. Required when `tfe_object_storage_s3_use_instance_profile` is `false`. | `string` | `null` | no |
tfe_object_storage_s3_secret_access_key | Secret access key for S3 bucket. Required when `tfe_object_storage_s3_use_instance_profile` is `false`. | `string` | `null` | no |
tfe_object_storage_s3_use_instance_profile | Boolean to use instance profile for S3 bucket access. If `false`, `tfe_object_storage_s3_access_key_id` and `tfe_object_storage_s3_secret_access_key` are required. | `bool` | `true` | no |
tfe_redis_password_secret_arn | ARN of AWS Secrets Manager secret for the TFE Redis password. Value of secret must contain from 16 to 128 alphanumeric characters or symbols (excluding @, ", and /). | `string` | `null` | no |
## Outputs

Name | Description |
---|---|
aws_lb_controller_irsa_role_arn | ARN of IAM role for AWS Load Balancer Controller IRSA. |
eks_cluster_name | Name of TFE EKS cluster. |
elasticache_replication_group_arn | ARN of ElastiCache Replication Group (Redis) cluster. |
elasticache_replication_group_id | ID of ElastiCache Replication Group (Redis) cluster. |
elasticache_replication_group_primary_endpoint_address | Primary endpoint address of ElastiCache Replication Group (Redis) cluster. |
rds_aurora_cluster_arn | ARN of RDS Aurora database cluster. |
rds_aurora_cluster_endpoint | RDS Aurora database cluster endpoint. |
rds_aurora_cluster_members | List of instances that are part of this RDS Aurora database cluster. |
rds_aurora_global_cluster_id | RDS Aurora global database cluster identifier. |
s3_bucket_arn | ARN of TFE S3 bucket. |
s3_bucket_name | Name of TFE S3 bucket. |
s3_crr_iam_role_arn | ARN of S3 cross-region replication IAM role. |
tfe_database_host | PostgreSQL server endpoint in the format that TFE will connect to. |
tfe_database_password | TFE PostgreSQL database password. |
tfe_database_password_base64 | Base64-encoded TFE PostgreSQL database password. |
tfe_irsa_role_arn | ARN of IAM role for TFE EKS IRSA. |
tfe_lb_security_group_id | ID of security group for TFE load balancer. |
tfe_redis_password | TFE Redis password. |
tfe_redis_password_base64 | Base64-encoded TFE Redis password. |
tfe_url | URL to access TFE application based on value of tfe_fqdn input. |