refactor: Upgrade to v18 of EKS module #64
Conversation
eks-cluster.tf
workers_group_defaults = {
root_volume_type = "gp2"
# Extend cluster security group rules
cluster_security_group_additional_rules = {
The security group rules added here put the definition back in line with the v17 version. In v18, the security group rules were reduced to the bare minimum required for a cluster to provision successfully, while allowing users to open or extend access as they see fit for their workload.
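(For illustration only, a minimal sketch of how a user could extend access themselves, assuming the v18 module's input names; the rule key, ports, and description below are placeholders, not taken from this PR:)

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 18.0"

  # ... cluster_name, cluster_version, vpc_id, subnet_ids, etc. ...

  cluster_security_group_additional_rules = {
    # Placeholder rule: opens cluster-to-node egress on ephemeral ports
    egress_nodes_ephemeral_ports_tcp = {
      description                = "Cluster to node on ephemeral ports"
      protocol                   = "tcp"
      from_port                  = 1025
      to_port                    = 65535
      type                       = "egress"
      source_node_security_group = true
    }
  }
}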
host                   = module.eks.cluster_endpoint
cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

exec {
The exec plugin method is recommended for EKS by the provider to avoid token expiration: https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs#exec-plugins
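(For context, a minimal sketch of that exec-based configuration; the api_version and aws eks get-token arguments are the ones commonly shown in the provider docs, not copied from this diff:)

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    # A fresh token is fetched on every run instead of caching one that can expire
    args = ["eks", "get-token", "--cluster-name", module.eks.cluster_id]
  }
}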
@@ -1,12 +1,31 @@
# Kubernetes provider
It's common practice to have a generic main.tf for bits that are generic and not resource specific. This captures the providers, common locals/data sources, and the random string resource used in the naming scheme.
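(As an illustration only, such a main.tf might gather something like the following; the var.region variable and the random_string arguments are assumptions for the sketch, not necessarily what the PR uses:)

# Generic, non-resource-specific plumbing collected in one place
provider "aws" {
  region = var.region # hypothetical variable name
}

data "aws_availability_zones" "available" {}

# Random suffix used in the naming scheme; length/special are illustrative
resource "random_string" "suffix" {
  length  = 8
  special = false
}

locals {
  cluster_name = "education-eks-${random_string.suffix.result}"
}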
value = module.eks.cluster_security_group_id
}

output "kubectl_config" {
This output (or an equivalent) is no longer provided in v18; users can retrieve it through the AWS CLI:
aws eks update-kubeconfig --region <REGION> --name <CLUSTER_NAME>
@@ -28,20 +27,3 @@ resource "aws_security_group" "worker_group_mgmt_two" {
]
}
}

resource "aws_security_group" "all_worker_mgmt" {
This resource looks like it was never used anywhere, so it was removed.
resource "aws_security_group" "worker_group_mgmt_one" {
name_prefix = "worker_group_mgmt_one"
resource "aws_security_group" "node_group_one" {
Naming modified to reflect what is used by the module and AWS: instead of "workers" it's "self-managed node groups", and instead of "node groups" it's "EKS managed node groups". Here I have just used the generic "node group" for brevity.
}

random = {
source  = "hashicorp/random"
version = "3.1.0"
}

local = {
The local and null providers are not utilized here, so they were removed.
@@ -2,27 +2,17 @@ terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = ">= 3.20.0"
version = ">= 3.72"
Not really pertinent here, but set to match the module's version requirements.
cluster_name = "education-eks-${random_string.suffix.result}"
}
cidr = "10.0.0.0/16"
azs  = slice(data.aws_availability_zones.available.names, 0, 3)
Adding slice() since some regions have more than three AZs, and we want to ensure we only pull three to match the subnets prescribed.
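(A small illustration of what that does; the example AZ names in the comment are hypothetical:)

data "aws_availability_zones" "available" {}

locals {
  # Keep only the first three AZ names returned by the data source,
  # e.g. ["us-east-1a", "us-east-1b", "us-east-1c"] in a six-AZ region
  azs = slice(data.aws_availability_zones.available.names, 0, 3)
}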
enable_nat_gateway   = true
single_nat_gateway   = true
enable_dns_hostnames = true

tags = {
not required, removed
Thanks for doing this :) would have definitely helped me a few weeks ago, and I'm sure it'll help hundreds or thousands to come!
@bryantbiggs can you point me to the change where it's fixing terraform-aws-modules/terraform-aws-eks#978?
@someshkoli https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/main.tf#L354-L459
@im2nguyen any thoughts on this change?
Hi @bryantbiggs, thank you for updating this! @judithpatudith can you have the team look into updating the tutorial? This is a high-value update.
Thank you @bryantbiggs, this is a fantastic contribution! I'm on the education team at HashiCorp and am hoping to update our EKS tutorial to work with the changes you've made in this PR. Using version ... I can make other k8s API calls (like ...).

I have a hunch it has something to do with security groups, but I'm not familiar enough with EKS to know what to modify. Do you have any ideas? Thanks again for your help, we really appreciate it.
Ah yes, I think I know what it is - let me try on my end and update the PR shortly.
Validation steps: ...

Please note that the project now provisions the ...

cc @alanszlosek
Thanks, @bryantbiggs, this is great! Those changes fixed the proxy issue. Based on what I'm reading, ...

Switching gears ... I've been talking to my team about the k8s dashboard installation section of the tutorial. The goal of the section was to "verify" the cluster, but we think it would be sufficient and simpler to have the user run ... If you agree, this means we can remove the helm provider as well as the service account, role binding, metrics server, and dashboard resources from the PR entirely. I think you can revert some of your changes to the kubernetes provider as well.

Lastly, can you restore the terraform lock file? It's a recommendation within our docs, and it helps us ensure that learners don't encounter any version-related hiccups while going through a tutorial. Does that make sense? Again, thank you!
Close - the EKS module in v18 was designed to only contain the bare minimum security group requirements to bring up a cluster successfully. Users then have the option to start opening up additional access to support the workloads they run. This allows users to start from a secure, least-access perspective and only open up access for their specific workloads to maintain a stronger security posture. Using the cluster primary security group means that all traffic among the resources attached to that security group is permitted, and all egress to the public internet is allowed. This is the EKS "default behavior" when a security group isn't provided to nodes.
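(For readers who do want that default behavior back, a minimal sketch assuming the v18 module's node group inputs; whether this fits your security posture is a judgment call, and the rest of the module configuration is elided:)

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 18.0"

  # ... remaining cluster configuration ...

  eks_managed_node_group_defaults = {
    # Attach the cluster primary security group to nodes, reproducing the
    # permissive EKS "default behavior" described above
    attach_cluster_primary_security_group = true
  }
}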
Absolutely. I wasn't sure of the intentions, so I just maintained the status quo, but I have now removed those.
Done! @alanszlosek
@bryantbiggs, that looks great. I'm going to merge this into a testing branch, update the tutorial text, and test it all end-to-end before release. Thanks for helping us make things better!
Thank you @alanszlosek - let me know if I can be of any further assistance or if you have any questions on the module usage.

Also, I think you can probably safely close out most, if not all, of the open issues/PRs with this. Not sure why it didn't automatically close them on this PR.

Edit: Ah, nvm - I see it went into a temp branch and not ...
Thanks for your willingness, @bryantbiggs. I do have a question about the ...

Assuming I'm reading it correctly, it implies we shouldn't need Terraform to manage the ConfigMap, at least not for the tutorial. I'm hoping the mappings for each node group's IAM Role will be added automatically to the cluster when Terraform creates each group. I know it's just one param, but if we don't need it, then I believe I can also remove the ...

Thanks for the heads-up on the Issues that can be closed. I'll make a note to close them.
Correct - with EKS managed node groups and Fargate profiles, those are automatically added to the aws-auth ConfigMap. If you want to remove that line, you can also remove the Kubernetes provider block as well.
This updates the project to use the latest v18 EKS module in a similar manner to what was previously defined.
Motivation
We would like to move users away from v17, and any newcomers should start off using v18. I don't know where the associated article repo is located, but I suspect it will need to be updated to reflect these changes: https://learn.hashicorp.com/tutorials/terraform/eks
Closes #14
Closes #19
Closes #22
Closes #25
Closes #28
Closes #29
Closes #31
Closes #33
Closes #35
Closes #42
Closes #43
Closes #51
Closes #56
Closes #61
Closes #63
Please let me know if there are any questions or concerns.