feat/updates (#37)
* feat: monthly updates
* fix: remove deprecated resolve_conflicts and replace with new attributes for aws_eks_addon resource
* feat: update csi driver to 1.20.0
* feat: update k8s control plane to 1.27 and node pool to 1.27
* terraform-docs: automated action

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
venkatamutyala and github-actions[bot] authored Jul 4, 2023
1 parent 82a2ff4 commit c55625c
Showing 11 changed files with 85 additions and 46 deletions.
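The `resolve_conflicts` removal called out in the commit message reflects the AWS provider splitting that single argument into separate create-time and update-time settings on `aws_eks_addon`, as seen in the `main.tf` diff. A minimal before/after sketch (resource bodies abbreviated; the two blocks are alternatives, not valid side by side in one file):

```hcl
# Before: single deprecated attribute
resource "aws_eks_addon" "ebs_csi" {
  cluster_name      = "captain"
  addon_name        = "aws-ebs-csi-driver"
  resolve_conflicts = "OVERWRITE"
}

# After: split into create/update attributes
resource "aws_eks_addon" "ebs_csi" {
  cluster_name                = "captain"
  addon_name                  = "aws-ebs-csi-driver"
  resolve_conflicts_on_create = "OVERWRITE"
  resolve_conflicts_on_update = "OVERWRITE"
}
```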
4 changes: 2 additions & 2 deletions .github/workflows/aws-cloud-regression-suite.yml
@@ -23,11 +23,11 @@ jobs:
env:
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
run: |
docker run -v $(pwd):/app --workdir /app/tests --rm -e AWS_SECRET_ACCESS_KEY -e AWS_ACCESS_KEY_ID=AKIA3COQJC7C2PNUKZV4 -e AWS_DEFAULT_REGION=us-west-2 ghcr.io/glueops/codespaces:v0.23.0 ./run.sh
docker run -v $(pwd):/app --workdir /app/tests --rm -e AWS_SECRET_ACCESS_KEY -e AWS_ACCESS_KEY_ID=AKIA3COQJC7C2PNUKZV4 -e AWS_DEFAULT_REGION=us-west-2 ghcr.io/glueops/codespaces:v0.26.0 ./run.sh
- name: Run AWS Destroy Only (in case previous step failed)
env:
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
run: |
docker run -v $(pwd):/app --workdir /app/tests --rm -e AWS_SECRET_ACCESS_KEY -e AWS_ACCESS_KEY_ID=AKIA3COQJC7C2PNUKZV4 -e AWS_DEFAULT_REGION=us-west-2 ghcr.io/glueops/codespaces:v0.23.0 ./destroy-aws.sh
docker run -v $(pwd):/app --workdir /app/tests --rm -e AWS_SECRET_ACCESS_KEY -e AWS_ACCESS_KEY_ID=AKIA3COQJC7C2PNUKZV4 -e AWS_DEFAULT_REGION=us-west-2 ghcr.io/glueops/codespaces:v0.26.0 ./destroy-aws.sh
if: always()
14 changes: 7 additions & 7 deletions README.md
@@ -119,10 +119,10 @@ No requirements.
| Name | Source | Version |
|------|--------|---------|
| <a name="module_kubernetes"></a> [kubernetes](#module\_kubernetes) | cloudposse/eks-cluster/aws | 2.6.0 |
| <a name="module_node_pool"></a> [node\_pool](#module\_node\_pool) | cloudposse/eks-node-group/aws | 2.9.1 |
| <a name="module_subnets"></a> [subnets](#module\_subnets) | cloudposse/dynamic-subnets/aws | 2.0.4 |
| <a name="module_vpc"></a> [vpc](#module\_vpc) | cloudposse/vpc/aws | 2.0.0 |
| <a name="module_kubernetes"></a> [kubernetes](#module\_kubernetes) | cloudposse/eks-cluster/aws | 2.8.1 |
| <a name="module_node_pool"></a> [node\_pool](#module\_node\_pool) | cloudposse/eks-node-group/aws | 2.10.0 |
| <a name="module_subnets"></a> [subnets](#module\_subnets) | cloudposse/dynamic-subnets/aws | 2.4.1 |
| <a name="module_vpc"></a> [vpc](#module\_vpc) | cloudposse/vpc/aws | 2.1.0 |
| <a name="module_vpc_peering_accepter_with_routes"></a> [vpc\_peering\_accepter\_with\_routes](#module\_vpc\_peering\_accepter\_with\_routes) | ./modules/vpc_peering_accepter_with_routes | n/a |
## Resources
@@ -144,10 +144,10 @@ No requirements.
| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| <a name="input_availability_zones"></a> [availability\_zones](#input\_availability\_zones) | The availability zones to deploy into | `list(string)` | <pre>[<br> "us-west-2a",<br> "us-west-2b",<br> "us-west-2c"<br>]</pre> | no |
| <a name="input_csi_driver_version"></a> [csi\_driver\_version](#input\_csi\_driver\_version) | You should grab the appropriate version number from: https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/CHANGELOG.md | `string` | `"v1.19.0-eksbuild.1"` | no |
| <a name="input_eks_version"></a> [eks\_version](#input\_eks\_version) | The version of EKS to deploy | `string` | `"1.26"` | no |
| <a name="input_csi_driver_version"></a> [csi\_driver\_version](#input\_csi\_driver\_version) | You should grab the appropriate version number from: https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/CHANGELOG.md | `string` | `"v1.20.0-eksbuild.1"` | no |
| <a name="input_eks_version"></a> [eks\_version](#input\_eks\_version) | The version of EKS to deploy | `string` | `"1.27"` | no |
| <a name="input_iam_role_to_assume"></a> [iam\_role\_to\_assume](#input\_iam\_role\_to\_assume) | The full ARN of the IAM role to assume | `string` | n/a | yes |
| <a name="input_node_pools"></a> [node\_pools](#input\_node\_pools) | node pool configurations:<br> - name (string): Name of the node pool. MUST BE UNIQUE! Recommended to use YYYYMMDD in the name<br> - node\_count (number): number of nodes to create in the node pool.<br> - instance\_type (string): Instance type to use for the nodes. ref: https://instances.vantage.sh/<br> - ami\_image\_id (string): AMI to use for EKS worker nodes. ref: https://github.com/awslabs/amazon-eks-ami/releases<br> - spot (bool): Enable spot instances for the nodes. DO NOT ENABLE IN PROD!<br> - disk\_size\_gb (number): Disk size in GB for the nodes.<br> - max\_pods (number): max pods that can be scheduled per node. | <pre>list(object({<br> name = string<br> node_count = number<br> instance_type = string<br> ami_image_id = string<br> spot = bool<br> disk_size_gb = number<br> max_pods = number<br> }))</pre> | <pre>[<br> {<br> "ami_image_id": "amazon-eks-node-1.26-v20230607",<br> "disk_size_gb": 20,<br> "instance_type": "t3a.large",<br> "max_pods": 110,<br> "name": "default-pool",<br> "node_count": 1,<br> "spot": false<br> }<br>]</pre> | no |
| <a name="input_node_pools"></a> [node\_pools](#input\_node\_pools) | node pool configurations:<br> - name (string): Name of the node pool. MUST BE UNIQUE! Recommended to use YYYYMMDD in the name<br> - node\_count (number): number of nodes to create in the node pool.<br> - instance\_type (string): Instance type to use for the nodes. ref: https://instances.vantage.sh/<br> - ami\_image\_id (string): AMI to use for EKS worker nodes. ref: https://github.com/awslabs/amazon-eks-ami/releases<br> - spot (bool): Enable spot instances for the nodes. DO NOT ENABLE IN PROD!<br> - disk\_size\_gb (number): Disk size in GB for the nodes.<br> - max\_pods (number): max pods that can be scheduled per node. | <pre>list(object({<br> name = string<br> node_count = number<br> instance_type = string<br> ami_image_id = string<br> spot = bool<br> disk_size_gb = number<br> max_pods = number<br> }))</pre> | <pre>[<br> {<br> "ami_image_id": "amazon-eks-node-1.27-v20230607",<br> "disk_size_gb": 20,<br> "instance_type": "t3a.large",<br> "max_pods": 110,<br> "name": "default-pool",<br> "node_count": 1,<br> "spot": false<br> }<br>]</pre> | no |
| <a name="input_peering_configs"></a> [peering\_configs](#input\_peering\_configs) | A list of maps containing VPC peering configuration details | <pre>list(object({<br> vpc_peering_connection_id = string<br> destination_cidr_block = string<br> }))</pre> | `[]` | no |
| <a name="input_region"></a> [region](#input\_region) | The AWS region to deploy into | `string` | n/a | yes |
| <a name="input_vpc_cidr_block"></a> [vpc\_cidr\_block](#input\_vpc\_cidr\_block) | The CIDR block for the VPC | `string` | `"10.65.0.0/26"` | no |
22 changes: 11 additions & 11 deletions main.tf
@@ -12,11 +12,11 @@ provider "aws" {

module "kubernetes" {
source = "cloudposse/eks-cluster/aws"
version = "2.6.0"
version = "2.8.1"

region = var.region
vpc_id = module.vpc.vpc_id
subnet_ids = module.subnets.public_subnet_ids
region = var.region
vpc_id = module.vpc.vpc_id
subnet_ids = module.subnets.public_subnet_ids

oidc_provider_enabled = true
name = "captain"
@@ -29,7 +29,7 @@ module "node_pool" {
for_each = { for np in var.node_pools : np.name => np }
source = "cloudposse/eks-node-group/aws"
# Cloud Posse recommends pinning every module to a specific version
version = "2.9.1"
version = "2.10.0"

instance_types = [each.value.instance_type]
subnet_ids = module.subnets.public_subnet_ids
@@ -97,13 +97,13 @@ resource "aws_iam_role_policy_attachment" "ebs_csi" {
}

resource "aws_eks_addon" "ebs_csi" {
cluster_name = module.kubernetes.eks_cluster_id
addon_name = "aws-ebs-csi-driver"
addon_version = var.csi_driver_version
resolve_conflicts = "OVERWRITE"
cluster_name = module.kubernetes.eks_cluster_id
addon_name = "aws-ebs-csi-driver"
addon_version = var.csi_driver_version
resolve_conflicts_on_create = "OVERWRITE"
resolve_conflicts_on_update = "OVERWRITE"

service_account_role_arn = aws_iam_role.eks_addon_ebs_csi_role.arn
depends_on = [aws_iam_role_policy_attachment.ebs_csi, module.node_pool]
count = length(var.node_pools) > 0 ? 1 : 0
}


6 changes: 3 additions & 3 deletions network.tf
@@ -1,7 +1,7 @@
module "vpc" {
source = "cloudposse/vpc/aws"
# Cloud Posse recommends pinning every module to a specific version
version = "2.0.0"
version = "2.1.0"
ipv4_primary_cidr_block = local.vpc.cidr_block
name = "captain"
}
@@ -10,7 +10,7 @@ module "vpc" {
module "subnets" {
source = "cloudposse/dynamic-subnets/aws"
# Cloud Posse recommends pinning every module to a specific version
version = "2.0.4"
version = "2.4.1"

vpc_id = module.vpc.vpc_id
igw_id = [module.vpc.igw_id]
@@ -55,4 +55,4 @@ resource "aws_security_group_rule" "allow_all_within_group" {
to_port = 0
protocol = "-1" # All protocols
source_security_group_id = aws_security_group.captain.id
}
}
43 changes: 43 additions & 0 deletions tests/calico.yaml
@@ -0,0 +1,43 @@
installation:
enabled: true
kubernetesProvider: EKS
typhaMetricsPort: 9093
cni:
type: Calico
calicoNetwork:
bgp: Disabled
ipPools:
- cidr: 172.16.0.0/16
encapsulation: VXLAN

apiServer:
enabled: true

# Resource requests and limits for the tigera/operator pod.
resources: {}

# Tolerations for the tigera/operator pod.
tolerations:
- effect: NoExecute
operator: Exists
- effect: NoSchedule
operator: Exists

# NodeSelector for the tigera/operator pod.
nodeSelector:
kubernetes.io/os: linux

# Custom annotations for the tigera/operator pod.
podAnnotations: {}

# Custom labels for the tigera/operator pod.
podLabels: {}

# Image and registry configuration for the tigera/operator pod.
tigeraOperator:
image: tigera/operator
version: v1.30.4
registry: quay.io
calicoctl:
image: docker.io/calico/ctl
tag: v3.26.1
7 changes: 6 additions & 1 deletion tests/k8s-test.sh
@@ -1,5 +1,7 @@
#!/bin/bash

set -e

# Step 1: Verify storage driver installation (Amazon EBS CSI Driver)
echo "Checking if the storage driver is installed..."
kubectl get pods -n kube-system | grep "ebs-csi-"
@@ -65,6 +67,9 @@ echo "Waiting for the PVC to be bound and the pod to be running..."
sleep 30
kubectl get pvc
kubectl get pods
kubectl describe pods
kubectl describe pvc


# Step 5: Test the storage functionality
TEST_POD_NAME=$(kubectl get pods -l app=test-app -o jsonpath="{.items[0].metadata.name}")
@@ -76,4 +81,4 @@ kubectl exec -it $TEST_POD_NAME -- cat /data/test.txt
echo "Cleaning up test resources..."
kubectl delete deployment test-app
kubectl delete pvc test-pvc
kubectl delete storageclass ebs-sc
kubectl delete storageclass ebs-sc
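The `set -e` added at the top of `tests/k8s-test.sh` makes the script abort at the first failing command instead of running the remaining kubectl steps against a broken cluster. A minimal sketch of the behavior (hypothetical echo strings, not from the test script):

```shell
#!/bin/bash
# Run a short 'set -e' script in a subshell: execution stops at 'false',
# so only "before" is emitted and "after" never prints.
out=$(bash -c 'set -e; echo before; false; echo after')
echo "captured: $out"   # prints: captured: before
```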
16 changes: 8 additions & 8 deletions tests/main.tf
@@ -1,14 +1,14 @@
module "captain" {
iam_role_to_assume = "arn:aws:iam::761182885829:role/glueops-captain"
source = "../"
eks_version = "1.26"
csi_driver_version = "v1.18.0-eksbuild.1"
vpc_cidr_block = "10.65.0.0/26"
region = "us-west-2"
availability_zones = ["us-west-2a", "us-west-2b"]
iam_role_to_assume = "arn:aws:iam::761182885829:role/glueops-captain"
source = "../"
eks_version = "1.27"
csi_driver_version = "v1.20.0-eksbuild.1"
vpc_cidr_block = "10.65.0.0/26"
region = "us-west-2"
availability_zones = ["us-west-2a", "us-west-2b"]
node_pools = [
# {
# "ami_image_id" : "amazon-eks-node-1.26-v20230411",
# "ami_image_id" : "amazon-eks-node-1.27-v20230607",
# "instance_type" : "t3a.small",
# "name" : "clusterwide-node-pool-1",
# "node_count" : 2,
2 changes: 1 addition & 1 deletion tests/run.sh
@@ -17,7 +17,7 @@ kubectl delete daemonset -n kube-system aws-node
echo "Install Calico CNI"
helm repo add projectcalico https://docs.tigera.io/calico/charts
helm repo update
helm install calico projectcalico/tigera-operator --version v3.25.1 --namespace tigera-operator -f values.yaml --create-namespace
helm install calico projectcalico/tigera-operator --version v3.26.1 --namespace tigera-operator -f calico.yaml --create-namespace
echo "Deploy node pool"
sed -i 's/#//g' main.tf
terraform apply -auto-approve
9 changes: 0 additions & 9 deletions tests/values.yaml

This file was deleted.

6 changes: 3 additions & 3 deletions variables.tf
@@ -5,7 +5,7 @@ variable "region" {

variable "csi_driver_version" {
type = string
default = "v1.19.0-eksbuild.1"
default = "v1.20.0-eksbuild.1"
description = "You should grab the appropriate version number from: https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/CHANGELOG.md"
}

@@ -25,7 +25,7 @@ variable "availability_zones" {
variable "eks_version" {
type = string
description = "The version of EKS to deploy"
default = "1.26"
default = "1.27"
}

variable "node_pools" {
@@ -42,7 +42,7 @@ variable "node_pools" {
name = "default-pool"
node_count = 1
instance_type = "t3a.large"
ami_image_id = "amazon-eks-node-1.26-v20230607"
ami_image_id = "amazon-eks-node-1.27-v20230607"
spot = false
disk_size_gb = 20
max_pods = 110
2 changes: 1 addition & 1 deletion versions.tf
@@ -1,7 +1,7 @@
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
source = "hashicorp/aws"
}
}
}
