diff --git a/v2.7/deploy/configurations/index.html b/v2.7/deploy/configurations/index.html index c931f7e22..babc2a044 100644 --- a/v2.7/deploy/configurations/index.html +++ b/v2.7/deploy/configurations/index.html @@ -1279,6 +1279,12 @@

Controller command line flags
Traffic Routing
alb.ingress.kubernetes.io/subnets specifies the Availability Zones that the ALB will route traffic to. See Load Balancer subnets for more details.

-You must specify at least two subnets in different AZs. Either subnetID or subnetName (Name tag on subnets) can be used.

+You must specify at least two subnets in different AZs unless utilizing the outpost locale, in which case a single subnet suffices. Either subnetID or subnetName (Name tag on subnets) can be used.

+You must not mix subnets from different locales: availability-zone, local-zone, wavelength-zone, outpost.
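As a concrete sketch (the subnet IDs below are illustrative placeholders), the annotation takes a comma-separated list of subnet IDs or Name-tag values on the Ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echoserver                # illustrative name
  annotations:
    # two subnets in different AZs; IDs are placeholders
    alb.ingress.kubernetes.io/subnets: subnet-0aaa111122223333a, subnet-0bbb444455556666b
spec:
  ingressClassName: alb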

Tip

diff --git a/v2.7/guide/service/annotations/index.html b/v2.7/guide/service/annotations/index.html index d0f6a657a..ce5283318 100644 --- a/v2.7/guide/service/annotations/index.html +++ b/v2.7/guide/service/annotations/index.html @@ -1376,6 +1376,12 @@

Annotations
true
+service.beta.kubernetes.io/aws-load-balancer-inbound-sg-rules-on-private-link-traffic
+string

Traffic Routing

@@ -1601,6 +1607,9 @@

Resource attributes
client IP preservation
service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: preserve_client_ip.enabled=true
 
+• disable immediate connection termination for unhealthy targets and configure a 30s draining interval (available range is 0-360000 seconds)
+service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: target_health_state.unhealthy.connection_termination.enabled=false,target_health_state.unhealthy.draining_interval_seconds=30
@@ -1628,6 +1637,9 @@

    Resource attributes
    service.beta.kubernetes.io/aws-load-balancer-attributes: load_balancing.cross_zone.enabled=true
     
+• enable client availability zone affinity
+service.beta.kubernetes.io/aws-load-balancer-attributes: dns_record.client_routing_policy=availability_zone_affinity
@@ -1895,6 +1907,14 @@

Access control
service.beta.kubernetes.io/aws-load-balancer-inbound-sg-rules-on-private-link-traffic specifies whether to apply security group rules to traffic sent to the load balancer through AWS PrivateLink.

+Example
+service.beta.kubernetes.io/aws-load-balancer-inbound-sg-rules-on-private-link-traffic: "off"
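For orientation, a sketch combining the annotation examples above on a Service of type LoadBalancer; the service name and ports are illustrative, and the loadBalancerClass shown is the controller default from the command line flags table:

apiVersion: v1
kind: Service
metadata:
  name: echoserver                # illustrative name
  annotations:
    # cross-zone load balancing plus client availability zone affinity (see examples above)
    service.beta.kubernetes.io/aws-load-balancer-attributes: load_balancing.cross_zone.enabled=true,dns_record.client_routing_policy=availability_zone_affinity
    # do not apply security group rules to traffic arriving through AWS PrivateLink
    service.beta.kubernetes.io/aws-load-balancer-inbound-sg-rules-on-private-link-traffic: "off"
spec:
  type: LoadBalancer
  loadBalancerClass: service.k8s.aws/nlb
  ports:
    - port: 80
      targetPort: 8080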

    Legacy Cloud Provider

The AWS Load Balancer Controller manages Kubernetes Services in a way that is compatible with the AWS cloud provider's legacy service controller.

diff --git a/v2.7/guide/targetgroupbinding/targetgroupbinding/index.html b/v2.7/guide/targetgroupbinding/targetgroupbinding/index.html index a34d10736..ecfb11be1 100644 --- a/v2.7/guide/targetgroupbinding/targetgroupbinding/index.html +++ b/v2.7/guide/targetgroupbinding/targetgroupbinding/index.html
@@ -735,6 +735,20 @@
Sample YAML
+VpcID
+Sample YAML
@@ -1044,6 +1058,20 @@
Sample YAML
+VpcID
+Sample YAML
@@ -1103,7 +1131,7 @@

    TargetGroupBinding

    usage to support Ingress and Service

The AWS LoadBalancer controller internally uses TargetGroupBinding to support the functionality for Ingress and Service resources as well.
-It automatically creates TargetGroupBinding in the same namespace of the Service used.
+It automatically creates a TargetGroupBinding in the same namespace as the Service used.

You can view all TargetGroupBindings in a namespace by running kubectl get targetgroupbindings -n <your-namespace> -o wide

    TargetType

    @@ -1122,6 +1150,23 @@

Sample YAML
    port: 80
  targetGroupARN: <arn-to-targetGroup>
+

+VpcID
+The TargetGroupBinding CR supports the explicit definition of the Virtual Private Cloud (VPC) of your TargetGroup.
+If the VpcID is not explicitly specified, a mutating webhook will automatically call the AWS API to find the VpcID for your TargetGroup and set it to the correct value.
+Sample YAML
+apiVersion: elbv2.k8s.aws/v1beta1
    +kind: TargetGroupBinding
    +metadata:
    +  name: my-tgb
    +spec:
    +  serviceRef:
    +    name: awesome-service # route traffic to the awesome-service
    +    port: 80
    +  targetGroupARN: <arn-to-targetGroup>
    +  vpcID: <vpcID>
    +
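Once saved, the binding is created like any other custom resource; the file name below is illustrative, and the second command (from above) verifies the result:

kubectl apply -f my-tgb.yaml
kubectl get targetgroupbindings -n <your-namespace> -o wide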

    NodeSelector

    Default Node Selector

    For TargetType: instance, all nodes of a cluster that match the following diff --git a/v2.7/search/search_index.json b/v2.7/search/search_index.json index ae5719a35..31d88f14b 100644 --- a/v2.7/search/search_index.json +++ b/v2.7/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"A Kubernetes controller for Elastic Load Balancers AWS Load Balancer Controller \u00b6 AWS Load Balancer Controller is a controller to help manage Elastic Load Balancers for a Kubernetes cluster. It satisfies Kubernetes Ingress resources by provisioning Application Load Balancers . It satisfies Kubernetes Service resources by provisioning Network Load Balancers . This project was formerly known as \"AWS ALB Ingress Controller\", we rebranded it to be \"AWS Load Balancer Controller\". AWS ALB Ingress Controller was originated by Ticketmaster and CoreOS as part of Ticketmaster's move to AWS and CoreOS Tectonic. Learn more about Ticketmaster's Kubernetes initiative from Justin Dean's video at Tectonic Summit . AWS ALB Ingress Controller was donated to Kubernetes SIG-AWS to allow AWS, CoreOS, Ticketmaster and other SIG-AWS contributors to officially maintain the project. SIG-AWS reached this consensus on June 1, 2018. Support Policy \u00b6 Currently, AWS provides security updates and bug fixes to the latest available minor versions of AWS LBC. For other ad-hoc supports on older versions, please reach out through AWS support ticket.","title":"Welcome"},{"location":"#aws-load-balancer-controller","text":"AWS Load Balancer Controller is a controller to help manage Elastic Load Balancers for a Kubernetes cluster. It satisfies Kubernetes Ingress resources by provisioning Application Load Balancers . It satisfies Kubernetes Service resources by provisioning Network Load Balancers . This project was formerly known as \"AWS ALB Ingress Controller\", we rebranded it to be \"AWS Load Balancer Controller\". AWS ALB Ingress Controller was originated by Ticketmaster and CoreOS as part of Ticketmaster's move to AWS and CoreOS Tectonic. Learn more about Ticketmaster's Kubernetes initiative from Justin Dean's video at Tectonic Summit . AWS ALB Ingress Controller was donated to Kubernetes SIG-AWS to allow AWS, CoreOS, Ticketmaster and other SIG-AWS contributors to officially maintain the project. SIG-AWS reached this consensus on June 1, 2018.","title":"AWS Load Balancer Controller"},{"location":"#support-policy","text":"Currently, AWS provides security updates and bug fixes to the latest available minor versions of AWS LBC. For other ad-hoc supports on older versions, please reach out through AWS support ticket.","title":"Support Policy"},{"location":"CONTRIBUTING/","text":"Contributing Guidelines \u00b6 Welcome to Kubernetes. We are excited about the prospect of you joining our community ! The Kubernetes community abides by the CNCF code of conduct . Here is an excerpt: As contributors and maintainers of this project, and in the interest of fostering an open and welcoming community, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities. Getting Started \u00b6 Building the project \u00b6 Controller development documentation has instructions on how to build the project and project specific expectations. Contributing to docs \u00b6 The documentation is generated using Material for MkDocs . 
In order to generate and preview docs locally, use the steps below - Install pipenv run make docs-preview . This will generate and serve docs locally at http://127.0.0.1:8000 Contributing \u00b6 We also have more documentation on how to get started contributing here: Contributor License Agreement Kubernetes projects require that you sign a Contributor License Agreement (CLA) before we can accept your pull requests Kubernetes Contributor Guide - Main contributor documentation, or you can just jump directly to the contributing section Contributor Cheat Sheet - Common resources for existing developers Mentorship \u00b6 Mentoring Initiatives - We have a diverse set of mentorship programs available that are always looking for volunteers! Contact Information \u00b6 Slack channel Mailing list","title":"Contributing Guidelines"},{"location":"CONTRIBUTING/#contributing-guidelines","text":"Welcome to Kubernetes. We are excited about the prospect of you joining our community ! The Kubernetes community abides by the CNCF code of conduct . Here is an excerpt: As contributors and maintainers of this project, and in the interest of fostering an open and welcoming community, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities.","title":"Contributing Guidelines"},{"location":"CONTRIBUTING/#getting-started","text":"","title":"Getting Started"},{"location":"CONTRIBUTING/#building-the-project","text":"Controller development documentation has instructions on how to build the project and project specific expectations.","title":"Building the project"},{"location":"CONTRIBUTING/#contributing-to-docs","text":"The documentation is generated using Material for MkDocs . In order to generate and preview docs locally, use the steps below - Install pipenv run make docs-preview . This will generate and serve docs locally at http://127.0.0.1:8000","title":"Contributing to docs"},{"location":"CONTRIBUTING/#contributing","text":"We also have more documentation on how to get started contributing here: Contributor License Agreement Kubernetes projects require that you sign a Contributor License Agreement (CLA) before we can accept your pull requests Kubernetes Contributor Guide - Main contributor documentation, or you can just jump directly to the contributing section Contributor Cheat Sheet - Common resources for existing developers","title":"Contributing"},{"location":"CONTRIBUTING/#mentorship","text":"Mentoring Initiatives - We have a diverse set of mentorship programs available that are always looking for volunteers!","title":"Mentorship"},{"location":"CONTRIBUTING/#contact-information","text":"Slack channel Mailing list","title":"Contact Information"},{"location":"code-of-conduct/","text":"Kubernetes Community Code of Conduct \u00b6 Please refer to our Kubernetes Community Code of Conduct","title":"Kubernetes Community Code of Conduct"},{"location":"code-of-conduct/#kubernetes-community-code-of-conduct","text":"Please refer to our Kubernetes Community Code of Conduct","title":"Kubernetes Community Code of Conduct"},{"location":"controller-devel/","text":"AWS Load Balancer Controller Development Guide \u00b6 We'll walk you through the setup to start contributing to the AWS Load Balancer Controller project. No matter if you're contributing code or docs, follow the steps below to set up your development environment. 
Issue before PR Of course we're happy about code drops via PRs, however, in order to give us time to plan ahead and also to avoid disappointment, consider creating an issue first and submit a PR later. This also helps us to coordinate between different contributors and should in general help keeping everyone happy. Prerequisites \u00b6 Please ensure that you have properly installed Go . Go version We recommend to use a Go version of 1.14 or above for development. Fork upstream repository \u00b6 The first step in setting up your AWS Load Balancer controller development environment is to fork the upstream AWS Load Balancer controller repository to your personal Github account. Ensure source code organization directories exist \u00b6 Make sure in your $GOPATH/src that you have directories for the sigs.k8s.io organization: mkdir -p $GOPATH /src/github.com/sigs.k8s.io git clone forked repository and add upstream remote \u00b6 For the forked repository, you will git clone the repository into the appropriate folder in your $GOPATH . Once git clone 'd, you will want to set up a Git remote called \"upstream\" (remember that \"origin\" will be pointing at your forked repository location in your personal Github space). You can use this script to do this for you: GITHUB_ID = \"your GH username\" cd $GOPATH /src/github.com/sigs.k8s.io git clone git@github.com: $GITHUB_ID /aws-load-balancer-controller cd aws-load-balancer-controller/ git remote add upstream git@github.com:kubernetes-sigs/aws-load-balancer-controller git fetch --all Create your local branch \u00b6 Next, you create a local branch where you work on your feature or bug fix. Let's say you want to enhance the docs, so set BRANCH_NAME=docs-improve and then: git fetch --all && git checkout -b $BRANCH_NAME upstream/main Commit changes \u00b6 Make your changes locally, commit and push using: git commit -a -m \"improves the docs a lot\" git push origin $BRANCH_NAME Create a pull request \u00b6 Finally, submit a pull request against the upstream source repository. We monitor the GitHub repo and try to follow up with comments within a working day. Building the controller \u00b6 To build the controller binary, run the following command. make controller To install CRDs into a Kubernetes cluster, run the following command. make install To uninstall CRD from a Kubernetes cluster, run the following command. make uninstall To build the container image for the controller and push to a container registry, run the following command. make docker-push To deploy the CRDs and the container image to a Kubernetes cluster, run the following command. make deploy","title":"AWS Load Balancer Controller Development Guide"},{"location":"controller-devel/#aws-load-balancer-controller-development-guide","text":"We'll walk you through the setup to start contributing to the AWS Load Balancer Controller project. No matter if you're contributing code or docs, follow the steps below to set up your development environment. Issue before PR Of course we're happy about code drops via PRs, however, in order to give us time to plan ahead and also to avoid disappointment, consider creating an issue first and submit a PR later. This also helps us to coordinate between different contributors and should in general help keeping everyone happy.","title":"AWS Load Balancer Controller Development Guide"},{"location":"controller-devel/#prerequisites","text":"Please ensure that you have properly installed Go . 
Go version We recommend to use a Go version of 1.14 or above for development.","title":"Prerequisites"},{"location":"controller-devel/#fork-upstream-repository","text":"The first step in setting up your AWS Load Balancer controller development environment is to fork the upstream AWS Load Balancer controller repository to your personal Github account.","title":"Fork upstream repository"},{"location":"controller-devel/#ensure-source-code-organization-directories-exist","text":"Make sure in your $GOPATH/src that you have directories for the sigs.k8s.io organization: mkdir -p $GOPATH /src/github.com/sigs.k8s.io","title":"Ensure source code organization directories exist"},{"location":"controller-devel/#git-clone-forked-repository-and-add-upstream-remote","text":"For the forked repository, you will git clone the repository into the appropriate folder in your $GOPATH . Once git clone 'd, you will want to set up a Git remote called \"upstream\" (remember that \"origin\" will be pointing at your forked repository location in your personal Github space). You can use this script to do this for you: GITHUB_ID = \"your GH username\" cd $GOPATH /src/github.com/sigs.k8s.io git clone git@github.com: $GITHUB_ID /aws-load-balancer-controller cd aws-load-balancer-controller/ git remote add upstream git@github.com:kubernetes-sigs/aws-load-balancer-controller git fetch --all","title":"git clone forked repository and add upstream remote"},{"location":"controller-devel/#create-your-local-branch","text":"Next, you create a local branch where you work on your feature or bug fix. Let's say you want to enhance the docs, so set BRANCH_NAME=docs-improve and then: git fetch --all && git checkout -b $BRANCH_NAME upstream/main","title":"Create your local branch"},{"location":"controller-devel/#commit-changes","text":"Make your changes locally, commit and push using: git commit -a -m \"improves the docs a lot\" git push origin $BRANCH_NAME","title":"Commit changes"},{"location":"controller-devel/#create-a-pull-request","text":"Finally, submit a pull request against the upstream source repository. We monitor the GitHub repo and try to follow up with comments within a working day.","title":"Create a pull request"},{"location":"controller-devel/#building-the-controller","text":"To build the controller binary, run the following command. make controller To install CRDs into a Kubernetes cluster, run the following command. make install To uninstall CRD from a Kubernetes cluster, run the following command. make uninstall To build the container image for the controller and push to a container registry, run the following command. make docker-push To deploy the CRDs and the container image to a Kubernetes cluster, run the following command. make deploy","title":"Building the controller"},{"location":"how-it-works/","text":"How AWS Load Balancer controller works \u00b6 Design \u00b6 The following diagram details the AWS components this controller creates. It also demonstrates the route ingress traffic takes from the ALB to the Kubernetes cluster. Note The controller manages the configurations of the resources it creates, and we do not recommend out-of-band modifications to these resources because the controller may revert the manual changes during reconciliation. We recommend to use configuration options provided as best practice, such as ingress and service annotations, controller command line flags and IngressClassParams. Ingress Creation \u00b6 This section describes each step (circle) above. 
This example demonstrates satisfying 1 ingress resource. [1] : The controller watches for ingress events from the API server. When it finds ingress resources that satisfy its requirements, it begins the creation of AWS resources. [2] : An ALB (ELBv2) is created in AWS for the new ingress resource. This ALB can be internet-facing or internal. You can also specify the subnets it's created in using annotations. [3] : Target Groups are created in AWS for each unique Kubernetes service described in the ingress resource. [4] : Listeners are created for every port detailed in your ingress resource annotations. When no port is specified, sensible defaults ( 80 or 443 ) are used. Certificates may also be attached via annotations. [5] : Rules are created for each path specified in your ingress resource. This ensures traffic to a specific path is routed to the correct Kubernetes Service. Along with the above, the controller also... deletes AWS components when ingress resources are removed from k8s. modifies AWS components when ingress resources change in k8s. assembles a list of existing ingress-related AWS components on start-up, allowing you to recover if the controller were to be restarted. Ingress Traffic \u00b6 AWS Load Balancer controller supports two traffic modes: Instance mode IP mode By default, Instance mode is used, users can explicitly select the mode via alb.ingress.kubernetes.io/target-type annotation. Instance mode \u00b6 Ingress traffic starts at the ALB and reaches the Kubernetes nodes through each service's NodePort. This means that services referenced from ingress resources must be exposed by type:NodePort in order to be reached by the ALB. IP mode \u00b6 Ingress traffic starts at the ALB and reaches the Kubernetes pods directly. CNIs must support directly accessible POD ip via secondary IP addresses on ENI .","title":"How it works"},{"location":"how-it-works/#how-aws-load-balancer-controller-works","text":"","title":"How AWS Load Balancer controller works"},{"location":"how-it-works/#design","text":"The following diagram details the AWS components this controller creates. It also demonstrates the route ingress traffic takes from the ALB to the Kubernetes cluster. Note The controller manages the configurations of the resources it creates, and we do not recommend out-of-band modifications to these resources because the controller may revert the manual changes during reconciliation. We recommend to use configuration options provided as best practice, such as ingress and service annotations, controller command line flags and IngressClassParams.","title":"Design"},{"location":"how-it-works/#ingress-creation","text":"This section describes each step (circle) above. This example demonstrates satisfying 1 ingress resource. [1] : The controller watches for ingress events from the API server. When it finds ingress resources that satisfy its requirements, it begins the creation of AWS resources. [2] : An ALB (ELBv2) is created in AWS for the new ingress resource. This ALB can be internet-facing or internal. You can also specify the subnets it's created in using annotations. [3] : Target Groups are created in AWS for each unique Kubernetes service described in the ingress resource. [4] : Listeners are created for every port detailed in your ingress resource annotations. When no port is specified, sensible defaults ( 80 or 443 ) are used. Certificates may also be attached via annotations. [5] : Rules are created for each path specified in your ingress resource. 
This ensures traffic to a specific path is routed to the correct Kubernetes Service. Along with the above, the controller also... deletes AWS components when ingress resources are removed from k8s. modifies AWS components when ingress resources change in k8s. assembles a list of existing ingress-related AWS components on start-up, allowing you to recover if the controller were to be restarted.","title":"Ingress Creation"},{"location":"how-it-works/#ingress-traffic","text":"AWS Load Balancer controller supports two traffic modes: Instance mode IP mode By default, Instance mode is used, users can explicitly select the mode via alb.ingress.kubernetes.io/target-type annotation.","title":"Ingress Traffic"},{"location":"how-it-works/#instance-mode","text":"Ingress traffic starts at the ALB and reaches the Kubernetes nodes through each service's NodePort. This means that services referenced from ingress resources must be exposed by type:NodePort in order to be reached by the ALB.","title":"Instance mode"},{"location":"how-it-works/#ip-mode","text":"Ingress traffic starts at the ALB and reaches the Kubernetes pods directly. CNIs must support directly accessible POD ip via secondary IP addresses on ENI .","title":"IP mode"},{"location":"release/","text":"AWS Load Balancer Controller Release Process \u00b6 Create the Release Commit \u00b6 Run hack/set-version to set the new version number and commit the resulting changes. This is called the \"release commit\". Merge the Release Commit \u00b6 Create a pull request with the release commit. Get it reviewed and merged to main . Upon merge to main , GitHub Actions will create a release tag for the new release. If the release is a \".0-beta.1\" release, GitHub Actions will also create a release branch for the minor version. (Remaining steps in process yet to be documented.)","title":"AWS Load Balancer Controller Release Process"},{"location":"release/#aws-load-balancer-controller-release-process","text":"","title":"AWS Load Balancer Controller Release Process"},{"location":"release/#create-the-release-commit","text":"Run hack/set-version to set the new version number and commit the resulting changes. This is called the \"release commit\".","title":"Create the Release Commit"},{"location":"release/#merge-the-release-commit","text":"Create a pull request with the release commit. Get it reviewed and merged to main . Upon merge to main , GitHub Actions will create a release tag for the new release. If the release is a \".0-beta.1\" release, GitHub Actions will also create a release branch for the minor version. (Remaining steps in process yet to be documented.)","title":"Merge the Release Commit"},{"location":"deploy/configurations/","text":"Controller configuration options \u00b6 This document covers configuration of the AWS Load Balancer controller limitation The v2.0.0+ version of AWSLoadBalancerController currently only support one controller deployment(with one or multiple replicas) per cluster. The AWSLoadBalancerController assumes it's the solo owner of worker node security group rules with elbv2.k8s.aws/targetGroupBinding=shared description, running multiple controller deployment will cause these controllers compete with each other updating worker node security group rules. We will remove this limitation in future versions: tracking issue AWS API Access \u00b6 To perform operations, the controller must have required IAM role capabilities for accessing and provisioning ALB resources. 
There are many ways to achieve this, such as loading AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY as environment variables or using kube2iam . Refer to the installation guide for installing the controller in your kubernetes cluster and for the minimum required IAM permissions. Setting Ingress Resource Scope \u00b6 You can limit the ingresses ALB ingress controller controls by combining following two approaches: Limiting ingress class \u00b6 Setting the --ingress-class argument constrains the controller's scope to ingresses with matching ingressClassName field. An example of the container spec portion of the controller, only listening for resources with the class \"alb\", would be as follows. spec : containers : - args : - --ingress-class=alb Now, only ingress resources with the appropriate class are picked up, as seen below. apiVersion : networking.k8s.io/v1 kind : Ingress metadata : name : echoserver namespace : echoserver spec : ingressClassName : alb ... If the ingress class is not specified, the controller will reconcile Ingress objects without the ingress class specified or ingress class alb . Limiting Namespaces \u00b6 Setting the --watch-namespace argument constrains the controller's scope to a single namespace. Ingress events outside of the namespace specified are not be seen by the controller. An example of the container spec, for a controller watching only the default namespace, is as follows. spec : containers : - args : - --watch-namespace=default Currently, you can set only 1 namespace to watch in this flag. See this Kubernetes issue for more details. Controller command line flags \u00b6 The --cluster-name flag is mandatory and the value must match the name of the kubernetes cluster. If you specify an incorrect name, the subnet auto-discovery will not work. Flag Type Default Description aws-api-endpoints AWS API Endpoints Config AWS API endpoints mapping, format: serviceID1=URL1,serviceID2=URL2 aws-api-throttle AWS Throttle Config default value throttle settings for AWS APIs, format: serviceID1:operationRegex1=rate:burst,serviceID2:operationRegex2=rate:burst aws-max-retries int 10 Maximum retries for AWS APIs aws-region string instance metadata AWS Region for the kubernetes cluster aws-vpc-id string instance metadata AWS VPC ID for the Kubernetes cluster backend-security-group string Backend security group id to use for the ingress rules on the worker node SG cluster-name string Kubernetes cluster name default-ssl-policy string ELBSecurityPolicy-2016-08 Default SSL Policy that will be applied to all Ingresses or Services that do not have the SSL Policy annotation default-tags stringMap AWS Tags that will be applied to all AWS resources managed by this controller. Specified Tags takes highest priority default-target-type string instance Default target type for Ingresses and Services - ip, instance disable-ingress-class-annotation boolean false Disable new usage of the kubernetes.io/ingress.class annotation disable-ingress-group-name-annotation boolean false Disallow new use of the alb.ingress.kubernetes.io/group.name annotation disable-restricted-sg-rules boolean false Disable the usage of restricted security group rules enable-backend-security-group boolean true Enable sharing of security groups for backend traffic enable-endpoint-slices boolean false Use EndpointSlices instead of Endpoints for pod endpoint and TargetGroupBinding resolution for load balancers with IP targets. enable-leader-election boolean true Enable leader election for the load balancer controller manager. 
Enabling this will ensure there is only one active controller manager enable-pod-readiness-gate-inject boolean true If enabled, targetHealth readiness gate will get injected to the pod spec for the matching endpoint pods enable-shield boolean true Enable Shield addon for ALB enable-waf boolean true Enable WAF addon for ALB enable-wafv2 boolean true Enable WAF V2 addon for ALB external-managed-tags stringList AWS Tag keys that will be managed externally. Specified Tags are ignored during reconciliation feature-gates stringMap A set of key=value pairs to enable or disable features health-probe-bind-addr string :61779 The address the health probes binds to ingress-class string alb Name of the ingress class this controller satisfies ingress-max-concurrent-reconciles int 3 Maximum number of concurrently running reconcile loops for ingress kubeconfig string in-cluster config Path to the kubeconfig file containing authorization and API server information leader-election-id string aws-load-balancer-controller-leader Name of the leader election ID to use for this controller leader-election-namespace string Name of the leader election ID to use for this controller load-balancer-class string service.k8s.aws/nlb Name of the load balancer class specified in service spec.loadBalancerClass reconciled by this controller log-level string info Set the controller log level - info, debug metrics-bind-addr string :8080 The address the metric endpoint binds to service-max-concurrent-reconciles int 3 Maximum number of concurrently running reconcile loops for service sync-period duration 10h0m0s Period at which the controller forces the repopulation of its local object stores targetgroupbinding-max-concurrent-reconciles int 3 Maximum number of concurrently running reconcile loops for targetGroupBinding targetgroupbinding-max-exponential-backoff-delay duration 16m40s Maximum duration of exponential backoff for targetGroupBinding reconcile failures tolerate-non-existent-backend-service boolean true Whether to allow rules which refer to backend services that do not exist (When enabled, it will return 503 error if backend service not exist) tolerate-non-existent-backend-action boolean true Whether to allow rules which refer to backend actions that do not exist (When enabled, it will return 503 error if backend action not exist) watch-namespace string Namespace the controller watches for updates to Kubernetes objects, If empty, all namespaces are watched. webhook-bind-port int 9443 The TCP port the Webhook server binds to webhook-cert-dir string /tmp/k8s-webhook-server/serving-certs The directory that contains the server key and certificate webhook-cert-file string tls.crt The server certificate name webhook-key-file string tls.key The server key name disable-ingress-class-annotation \u00b6 --disable-ingress-class-annotation controls whether to disable new usage of the kubernetes.io/ingress.class annotation. Once disabled: you can no longer create Ingresses with the value of the kubernetes.io/ingress.class annotation equal to alb (can be overridden via --ingress-class flag of this controller). you can no longer update Ingresses to set the value of the kubernetes.io/ingress.class annotation equal to alb (can be overridden via --ingress-class flag of this controller). 
you can still create Ingresses with a kubernetes.io/ingress.class annotation that has other values (for example: \"nginx\") disable-ingress-group-name-annotation \u00b6 --disable-ingress-group-name-annotation controls whether to disable new usage of alb.ingress.kubernetes.io/group.name annotation. Once disabled: you can no longer create Ingresses with the alb.ingress.kubernetes.io/group.name annotation. you can no longer alter the value of an alb.ingress.kubernetes.io/group.name annotation on an existing Ingress. sync-period \u00b6 --sync-period defines a fixed interval for the controller to reconcile all resources even if there is no change, default to 10 hr. Please be mindful that frequent reconciliations may incur unnecessary AWS API usage. As best practice, we do not recommend users to manually modify the resources managed by the controller. And users should not depend on the controller auto-reconciliation to revert the manual modification, or to mitigate any security risks. waf-addons \u00b6 By default, the controller assumes sole ownership of the WAF addons associated to the provisioned ALBs, via the flag --enable-waf and --enable-wafv2 . And the users should disable them accordingly if they want a third party like AWS Firewall Manager to associate or remove the WAF-ACL of the ALBs. Once disabled, the controller shall not take any actions on the waf addons of the provisioned ALBs. throttle config \u00b6 Controller uses the following default throttle config: WAF Regional:^AssociateWebACL|DisassociateWebACL=0.5:1,WAF Regional:^GetWebACLForResource|ListResourcesForWebACL=1:1,WAFV2:^AssociateWebACL|DisassociateWebACL=0.5:1,WAFV2:^GetWebACLForResource|ListResourcesForWebACL=1:1,Elastic Load Balancing v2:^RegisterTargets|^DeregisterTargets=4:20,Elastic Load Balancing v2:.*=10:40 Client side throttling enables gradual scaling of the api calls. Additional throttle config can be specified via the --aws-api-throttle flag. You can get the ServiceID from the API definition in AWS SDK. For e.g, ELBv2 it is Elastic Load Balancing v2 . Here is an example of throttle config to specify client side throttling of ELBv2 calls. --aws-api-throttle=Elastic Load Balancing v2:RegisterTargets|DeregisterTargets=4:20,Elastic Load Balancing v2:.*=10:40 Instance metadata \u00b6 If running on EC2, the default values are obtained from the instance metadata service. Feature Gates \u00b6 They are a set of kye=value pairs that describe AWS load balance controller features. You can use it as flags --feature-gates=key1=value1,key2=value2 Features-gate Supported Key Type Default Value Description ListenerRulesTagging string true Enable or disable tagging AWS load balancer listeners and rules WeightedTargetGroups string true Enable or disable weighted target groups ServiceTypeLoadBalancerOnly string false If enabled, controller will be limited to reconciling service of type LoadBalancer EndpointsFailOpen string true Enable or disable allowing endpoints with ready:unknown state in the target groups. EnableServiceController string true Toggles support for Service type resources. EnableIPTargetType string true Used to toggle support for target-type ip across Ingress and Service type resources. EnableRGTAPI string false If enabled, the tagging manager will describe resource tags via RGT APIs, otherwise via ELB APIs. In order to enable RGT API, tag:GetResources is needed in controller IAM policy. 
SubnetsClusterTagCheck string true Enable or disable the check for kubernetes.io/cluster/${cluster-name} during subnet auto-discovery NLBHealthCheckAdvancedConfiguration string true Enable or disable advanced health check configuration for NLB, for example health check timeout ALBSingleSubnet string false If enabled, controller will allow using only 1 subnet for provisioning ALB, which need to get whitelisted by ELB in advance NLBSecurityGroup string true Enable or disable all NLB security groups actions including frontend sg creation, backend sg creation, and backend sg modifications","title":"Configurations"},{"location":"deploy/configurations/#controller-configuration-options","text":"This document covers configuration of the AWS Load Balancer controller limitation The v2.0.0+ version of AWSLoadBalancerController currently only support one controller deployment(with one or multiple replicas) per cluster. The AWSLoadBalancerController assumes it's the solo owner of worker node security group rules with elbv2.k8s.aws/targetGroupBinding=shared description, running multiple controller deployment will cause these controllers compete with each other updating worker node security group rules. We will remove this limitation in future versions: tracking issue","title":"Controller configuration options"},{"location":"deploy/configurations/#aws-api-access","text":"To perform operations, the controller must have required IAM role capabilities for accessing and provisioning ALB resources. There are many ways to achieve this, such as loading AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY as environment variables or using kube2iam . Refer to the installation guide for installing the controller in your kubernetes cluster and for the minimum required IAM permissions.","title":"AWS API Access"},{"location":"deploy/configurations/#setting-ingress-resource-scope","text":"You can limit the ingresses ALB ingress controller controls by combining following two approaches:","title":"Setting Ingress Resource Scope"},{"location":"deploy/configurations/#limiting-ingress-class","text":"Setting the --ingress-class argument constrains the controller's scope to ingresses with matching ingressClassName field. An example of the container spec portion of the controller, only listening for resources with the class \"alb\", would be as follows. spec : containers : - args : - --ingress-class=alb Now, only ingress resources with the appropriate class are picked up, as seen below. apiVersion : networking.k8s.io/v1 kind : Ingress metadata : name : echoserver namespace : echoserver spec : ingressClassName : alb ... If the ingress class is not specified, the controller will reconcile Ingress objects without the ingress class specified or ingress class alb .","title":"Limiting ingress class"},{"location":"deploy/configurations/#limiting-namespaces","text":"Setting the --watch-namespace argument constrains the controller's scope to a single namespace. Ingress events outside of the namespace specified are not be seen by the controller. An example of the container spec, for a controller watching only the default namespace, is as follows. spec : containers : - args : - --watch-namespace=default Currently, you can set only 1 namespace to watch in this flag. See this Kubernetes issue for more details.","title":"Limiting Namespaces"},{"location":"deploy/configurations/#controller-command-line-flags","text":"The --cluster-name flag is mandatory and the value must match the name of the kubernetes cluster. 
If you specify an incorrect name, the subnet auto-discovery will not work. Flag Type Default Description aws-api-endpoints AWS API Endpoints Config AWS API endpoints mapping, format: serviceID1=URL1,serviceID2=URL2 aws-api-throttle AWS Throttle Config default value throttle settings for AWS APIs, format: serviceID1:operationRegex1=rate:burst,serviceID2:operationRegex2=rate:burst aws-max-retries int 10 Maximum retries for AWS APIs aws-region string instance metadata AWS Region for the kubernetes cluster aws-vpc-id string instance metadata AWS VPC ID for the Kubernetes cluster backend-security-group string Backend security group id to use for the ingress rules on the worker node SG cluster-name string Kubernetes cluster name default-ssl-policy string ELBSecurityPolicy-2016-08 Default SSL Policy that will be applied to all Ingresses or Services that do not have the SSL Policy annotation default-tags stringMap AWS Tags that will be applied to all AWS resources managed by this controller. Specified Tags takes highest priority default-target-type string instance Default target type for Ingresses and Services - ip, instance disable-ingress-class-annotation boolean false Disable new usage of the kubernetes.io/ingress.class annotation disable-ingress-group-name-annotation boolean false Disallow new use of the alb.ingress.kubernetes.io/group.name annotation disable-restricted-sg-rules boolean false Disable the usage of restricted security group rules enable-backend-security-group boolean true Enable sharing of security groups for backend traffic enable-endpoint-slices boolean false Use EndpointSlices instead of Endpoints for pod endpoint and TargetGroupBinding resolution for load balancers with IP targets. enable-leader-election boolean true Enable leader election for the load balancer controller manager. Enabling this will ensure there is only one active controller manager enable-pod-readiness-gate-inject boolean true If enabled, targetHealth readiness gate will get injected to the pod spec for the matching endpoint pods enable-shield boolean true Enable Shield addon for ALB enable-waf boolean true Enable WAF addon for ALB enable-wafv2 boolean true Enable WAF V2 addon for ALB external-managed-tags stringList AWS Tag keys that will be managed externally. 
Specified Tags are ignored during reconciliation feature-gates stringMap A set of key=value pairs to enable or disable features health-probe-bind-addr string :61779 The address the health probes binds to ingress-class string alb Name of the ingress class this controller satisfies ingress-max-concurrent-reconciles int 3 Maximum number of concurrently running reconcile loops for ingress kubeconfig string in-cluster config Path to the kubeconfig file containing authorization and API server information leader-election-id string aws-load-balancer-controller-leader Name of the leader election ID to use for this controller leader-election-namespace string Name of the leader election ID to use for this controller load-balancer-class string service.k8s.aws/nlb Name of the load balancer class specified in service spec.loadBalancerClass reconciled by this controller log-level string info Set the controller log level - info, debug metrics-bind-addr string :8080 The address the metric endpoint binds to service-max-concurrent-reconciles int 3 Maximum number of concurrently running reconcile loops for service sync-period duration 10h0m0s Period at which the controller forces the repopulation of its local object stores targetgroupbinding-max-concurrent-reconciles int 3 Maximum number of concurrently running reconcile loops for targetGroupBinding targetgroupbinding-max-exponential-backoff-delay duration 16m40s Maximum duration of exponential backoff for targetGroupBinding reconcile failures tolerate-non-existent-backend-service boolean true Whether to allow rules which refer to backend services that do not exist (When enabled, it will return 503 error if backend service not exist) tolerate-non-existent-backend-action boolean true Whether to allow rules which refer to backend actions that do not exist (When enabled, it will return 503 error if backend action not exist) watch-namespace string Namespace the controller watches for updates to Kubernetes objects, If empty, all namespaces are watched. webhook-bind-port int 9443 The TCP port the Webhook server binds to webhook-cert-dir string /tmp/k8s-webhook-server/serving-certs The directory that contains the server key and certificate webhook-cert-file string tls.crt The server certificate name webhook-key-file string tls.key The server key name","title":"Controller command line flags"},{"location":"deploy/configurations/#disable-ingress-class-annotation","text":"--disable-ingress-class-annotation controls whether to disable new usage of the kubernetes.io/ingress.class annotation. Once disabled: you can no longer create Ingresses with the value of the kubernetes.io/ingress.class annotation equal to alb (can be overridden via --ingress-class flag of this controller). you can no longer update Ingresses to set the value of the kubernetes.io/ingress.class annotation equal to alb (can be overridden via --ingress-class flag of this controller). you can still create Ingresses with a kubernetes.io/ingress.class annotation that has other values (for example: \"nginx\")","title":"disable-ingress-class-annotation"},{"location":"deploy/configurations/#disable-ingress-group-name-annotation","text":"--disable-ingress-group-name-annotation controls whether to disable new usage of alb.ingress.kubernetes.io/group.name annotation. Once disabled: you can no longer create Ingresses with the alb.ingress.kubernetes.io/group.name annotation. 
you can no longer alter the value of an alb.ingress.kubernetes.io/group.name annotation on an existing Ingress.","title":"disable-ingress-group-name-annotation"},{"location":"deploy/configurations/#sync-period","text":"--sync-period defines a fixed interval for the controller to reconcile all resources even if there is no change, default to 10 hr. Please be mindful that frequent reconciliations may incur unnecessary AWS API usage. As best practice, we do not recommend users to manually modify the resources managed by the controller. And users should not depend on the controller auto-reconciliation to revert the manual modification, or to mitigate any security risks.","title":"sync-period"},{"location":"deploy/configurations/#waf-addons","text":"By default, the controller assumes sole ownership of the WAF addons associated to the provisioned ALBs, via the flag --enable-waf and --enable-wafv2 . And the users should disable them accordingly if they want a third party like AWS Firewall Manager to associate or remove the WAF-ACL of the ALBs. Once disabled, the controller shall not take any actions on the waf addons of the provisioned ALBs.","title":"waf-addons"},{"location":"deploy/configurations/#throttle-config","text":"Controller uses the following default throttle config: WAF Regional:^AssociateWebACL|DisassociateWebACL=0.5:1,WAF Regional:^GetWebACLForResource|ListResourcesForWebACL=1:1,WAFV2:^AssociateWebACL|DisassociateWebACL=0.5:1,WAFV2:^GetWebACLForResource|ListResourcesForWebACL=1:1,Elastic Load Balancing v2:^RegisterTargets|^DeregisterTargets=4:20,Elastic Load Balancing v2:.*=10:40 Client side throttling enables gradual scaling of the api calls. Additional throttle config can be specified via the --aws-api-throttle flag. You can get the ServiceID from the API definition in AWS SDK. For e.g, ELBv2 it is Elastic Load Balancing v2 . Here is an example of throttle config to specify client side throttling of ELBv2 calls. --aws-api-throttle=Elastic Load Balancing v2:RegisterTargets|DeregisterTargets=4:20,Elastic Load Balancing v2:.*=10:40","title":"throttle config"},{"location":"deploy/configurations/#instance-metadata","text":"If running on EC2, the default values are obtained from the instance metadata service.","title":"Instance metadata"},{"location":"deploy/configurations/#feature-gates","text":"They are a set of kye=value pairs that describe AWS load balance controller features. You can use it as flags --feature-gates=key1=value1,key2=value2 Features-gate Supported Key Type Default Value Description ListenerRulesTagging string true Enable or disable tagging AWS load balancer listeners and rules WeightedTargetGroups string true Enable or disable weighted target groups ServiceTypeLoadBalancerOnly string false If enabled, controller will be limited to reconciling service of type LoadBalancer EndpointsFailOpen string true Enable or disable allowing endpoints with ready:unknown state in the target groups. EnableServiceController string true Toggles support for Service type resources. EnableIPTargetType string true Used to toggle support for target-type ip across Ingress and Service type resources. EnableRGTAPI string false If enabled, the tagging manager will describe resource tags via RGT APIs, otherwise via ELB APIs. In order to enable RGT API, tag:GetResources is needed in controller IAM policy. 
SubnetsClusterTagCheck string true Enable or disable the check for kubernetes.io/cluster/${cluster-name} during subnet auto-discovery NLBHealthCheckAdvancedConfiguration string true Enable or disable advanced health check configuration for NLB, for example health check timeout ALBSingleSubnet string false If enabled, controller will allow using only 1 subnet for provisioning ALB, which need to get whitelisted by ELB in advance NLBSecurityGroup string true Enable or disable all NLB security groups actions including frontend sg creation, backend sg creation, and backend sg modifications","title":"Feature Gates"},{"location":"deploy/installation/","text":"AWS Load Balancer Controller installation \u00b6 The AWS Load Balancer controller (LBC) provisions AWS Network Load Balancer (NLB) and Application Load Balancer (ALB) resources. The LBC watches for new service or ingress Kubernetes resources and configures AWS resources. The LBC is supported by AWS. Some clusters may be using the legacy \"in-tree\" functionality to provision AWS load balancers. The AWS Load Balancer Controller should be installed instead. Existing AWS ALB Ingress Controller users The AWS ALB Ingress controller must be uninstalled before installing the AWS Load Balancer Controller. Please follow our migration guide to do a migration. When using AWS Load Balancer Controller v2.5+ The AWS LBC provides a mutating webhook for service resources to set the spec.loadBalancerClass field for service of type LoadBalancer on create. This makes the AWS LBC the default controller for service of type LoadBalancer. You can disable this feature and revert to set Cloud Controller Manager (in-tree controller) as the default by setting the helm chart value enableServiceMutatorWebhook to false with --set enableServiceMutatorWebhook=false . You will no longer be able to provision new Classic Load Balancer (CLB) from your kubernetes service unless you disable this feature. Existing CLB will continue to work fine. Supported Kubernetes versions \u00b6 AWS Load Balancer Controller v2.0.0~v2.1.3 requires Kubernetes 1.15+ AWS Load Balancer Controller v2.2.0~v2.3.1 requires Kubernetes 1.16-1.21 AWS Load Balancer Controller v2.4.0+ requires Kubernetes 1.19+ AWS Load Balancer Controller v2.5.0+ requires Kubernetes 1.22+ Deployment considerations \u00b6 Additional requirements for non-EKS clusters: \u00b6 Ensure subnets are tagged appropriately for auto-discovery to work For IP targets, pods must have IPs from the VPC subnets. You can configure the amazon-vpc-cni-k8s plugin for this purpose. Additional requirements for isolated cluster: \u00b6 Isolated clusters are clusters without internet access, and instead reply on VPC endpoints for all required connects. When installing the AWS LBC in isolated clusters, you need to disable shield, waf and wafv2 via controller flags --enable-shield=false, --enable-waf=false, --enable-wafv2=false Using the Amazon EC2 instance metadata server version 2 (IMDSv2) \u00b6 We recommend blocking the access to instance metadata by requiring the instance to use IMDSv2 only. For more information, please refer to the AWS guidance here . If you are using the IMDSv2, set the hop limit to 2 or higher in order to allow the LBC to perform the metadata introspection. 
You can set the IMDSv2 as follows: aws ec2 modify-instance-metadata-options --http-put-response-hop-limit 2 --http-tokens required --region --instance-id Instead of depending on IMDSv2, you can specify the AWS Region and the VPC via the controller flags --aws-region and --aws-vpc-id . Configure IAM \u00b6 The controller runs on the worker nodes, so it needs access to the AWS ALB/NLB APIs with IAM permissions. The IAM permissions can either be setup using IAM roles for service accounts (IRSA) or can be attached directly to the worker node IAM roles. The best practice is using IRSA if you're using Amazon EKS. If you're using kOps or self-hosted Kubernetes, you must manually attach polices to node instances. Option A: Recommended, IAM roles for service accounts (IRSA) \u00b6 The reference IAM policies contain the following permissive configuration: { \"Effect\": \"Allow\", \"Action\": [ \"ec2:AuthorizeSecurityGroupIngress\", \"ec2:RevokeSecurityGroupIngress\" ], \"Resource\": \"*\" }, We recommend further scoping down this configuration based on the VPC ID or cluster name resource tag. Example condition for VPC ID: \"Condition\": { \"ArnEquals\": { \"ec2:Vpc\": \"arn:aws:ec2:::vpc/\" } } Example condition for cluster name resource tag: \"Condition\": { \"Null\": { \"aws:ResourceTag/kubernetes.io/cluster/\": \"false\" } } Create an IAM OIDC provider. You can skip this step if you already have one for your cluster. eksctl utils associate-iam-oidc-provider \\ --region \\ --cluster \\ --approve Download an IAM policy for the LBC using one of the following commands: If your cluster is in a US Gov Cloud region: curl -o iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.0/docs/install/iam_policy_us-gov.json If your cluster is in a China region: curl -o iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.0/docs/install/iam_policy_cn.json If your cluster is in any other region: curl -o iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.0/docs/install/iam_policy.json Create an IAM policy named AWSLoadBalancerControllerIAMPolicy . If you downloaded a different policy, replace iam-policy with the name of the policy that you downloaded. aws iam create-policy \\ --policy-name AWSLoadBalancerControllerIAMPolicy \\ --policy-document file://iam-policy.json Take note of the policy ARN that's returned. Create an IAM role and Kubernetes ServiceAccount for the LBC. Use the ARN from the previous step. eksctl create iamserviceaccount \\ --cluster= \\ --namespace=kube-system \\ --name=aws-load-balancer-controller \\ --attach-policy-arn=arn:aws:iam:::policy/AWSLoadBalancerControllerIAMPolicy \\ --override-existing-serviceaccounts \\ --region \\ --approve Option B: Attach IAM policies to nodes \u00b6 If you're not setting up IAM roles for service accounts, apply the IAM policies from the following URL at a minimum. Please be aware of the possibility that the controller permissions may be assumed by other users in a pod after retrieving the node role credentials, so the best practice would be using IRSA instead of attaching IAM policy directly. 
curl -o iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.0/docs/install/iam_policy.json The following IAM permissions subset is for those using TargetGroupBinding only and don't plan to use the LBC to manage security group rules: { \"Statement\": [ { \"Action\": [ \"ec2:DescribeVpcs\", \"ec2:DescribeSecurityGroups\", \"ec2:DescribeInstances\", \"elasticloadbalancing:DescribeTargetGroups\", \"elasticloadbalancing:DescribeTargetHealth\", \"elasticloadbalancing:ModifyTargetGroup\", \"elasticloadbalancing:ModifyTargetGroupAttributes\", \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\" ], \"Effect\": \"Allow\", \"Resource\": \"*\" } ], \"Version\": \"2012-10-17\" } Network configuration \u00b6 Review the worker nodes security group docs. Your node security group must permit incoming traffic on TCP port 9443 from the Kubernetes control plane. This is needed for webhook access. If you use eksctl , this is the default configuration. If you use custom networking, please refer to the EKS Best Practices Guides for network configuration. Add controller to cluster \u00b6 We recommend using the Helm chart to install the controller. The chart supports Fargate and facilitates updating the controller. Helm If you want to run the controller on Fargate, use the Helm chart, since it doesn't depend on the cert-manager . Detailed instructions \u00b6 Follow the instructions in the aws-load-balancer-controller Helm chart. Summary \u00b6 Add the EKS chart repo to Helm helm repo add eks https://aws.github.io/eks-charts If upgrading the chart via helm upgrade , install the TargetGroupBinding CRDs. wget https://raw.githubusercontent.com/aws/eks-charts/master/stable/aws-load-balancer-controller/crds/crds.yaml kubectl apply -f crds.yaml Tip The helm install command automatically applies the CRDs, but helm upgrade doesn't. Helm install command for clusters with IRSA: helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName= --set serviceAccount.create=false --set serviceAccount.name=aws-load-balancer-controller Helm install command for clusters not using IRSA: helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName= YAML manifests Install cert-manager \u00b6 kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.12.3/cert-manager.yaml Apply YAML \u00b6 Download the spec for the LBC. wget https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases/download/v2.7.0/v2_7_0_full.yaml Edit the saved yaml file, go to the Deployment spec, and set the controller --cluster-name arg value to your EKS cluster name apiVersion: apps/v1 kind: Deployment . . . name: aws-load-balancer-controller namespace: kube-system spec: . . . template: spec: containers: - args: - --cluster-name= If you use IAM roles for service accounts, we recommend that you delete the ServiceAccount from the yaml spec. If you delete the installation section from the yaml spec, deleting the ServiceAccount preserves the eksctl created iamserviceaccount . 
YAML manifests Install cert-manager \u00b6 kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.12.3/cert-manager.yaml Apply YAML \u00b6 Download the spec for the LBC. wget https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases/download/v2.7.0/v2_7_0_full.yaml Edit the saved yaml file, go to the Deployment spec, and set the controller --cluster-name arg value to your EKS cluster name: apiVersion: apps/v1 kind: Deployment . . . name: aws-load-balancer-controller namespace: kube-system spec: . . . template: spec: containers: - args: - --cluster-name= If you use IAM roles for service accounts, we recommend that you delete the ServiceAccount from the yaml spec; deleting the ServiceAccount preserves the eksctl-created iamserviceaccount if you later delete the installation. apiVersion: v1 kind: ServiceAccount Apply the yaml file kubectl apply -f v2_7_0_full.yaml Optionally download the default ingressclass and ingressclass params wget https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases/download/v2.7.0/v2_7_0_ingclass.yaml Apply the ingressclass and params kubectl apply -f v2_7_0_ingclass.yaml Create Update Strategy \u00b6 The controller doesn't receive security updates automatically. You need to manually upgrade to a newer version when it becomes available. You can upgrade using helm upgrade or another strategy to manage the controller deployment.
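A sketch of a Helm-based upgrade (remember from the Tip above that helm upgrade doesn't apply the CRDs, so apply them first; --reuse-values keeps your existing settings): kubectl apply -f https://raw.githubusercontent.com/aws/eks-charts/master/stable/aws-load-balancer-controller/crds/crds.yaml helm repo update helm upgrade aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --reuse-values 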
","title":"Installation Guide"},{"location":"deploy/installation/#aws-load-balancer-controller-installation","text":"The AWS Load Balancer controller (LBC) provisions AWS Network Load Balancer (NLB) and Application Load Balancer (ALB) resources. The LBC watches for new service or ingress Kubernetes resources and configures AWS resources. The LBC is supported by AWS. Some clusters may be using the legacy \"in-tree\" functionality to provision AWS load balancers. The AWS Load Balancer Controller should be installed instead. Existing AWS ALB Ingress Controller users The AWS ALB Ingress controller must be uninstalled before installing the AWS Load Balancer Controller. Please follow our migration guide to perform the migration. When using AWS Load Balancer Controller v2.5+ The AWS LBC provides a mutating webhook for service resources to set the spec.loadBalancerClass field for service of type LoadBalancer on create. This makes the AWS LBC the default controller for service of type LoadBalancer. You can disable this feature and revert to the Cloud Controller Manager (in-tree controller) as the default by setting the helm chart value enableServiceMutatorWebhook to false with --set enableServiceMutatorWebhook=false . You will no longer be able to provision new Classic Load Balancer (CLB) from your kubernetes service unless you disable this feature. Existing CLB will continue to work fine.","title":"AWS Load Balancer Controller installation"},{"location":"deploy/installation/#supported-kubernetes-versions","text":"AWS Load Balancer Controller v2.0.0~v2.1.3 requires Kubernetes 1.15+ AWS Load Balancer Controller v2.2.0~v2.3.1 requires Kubernetes 1.16-1.21 AWS Load Balancer Controller v2.4.0+ requires Kubernetes 1.19+ AWS Load Balancer Controller v2.5.0+ requires Kubernetes 1.22+","title":"Supported Kubernetes versions"},{"location":"deploy/installation/#deployment-considerations","text":"","title":"Deployment considerations"},{"location":"deploy/installation/#additional-requirements-for-non-eks-clusters","text":"Ensure subnets are tagged appropriately for auto-discovery to work. For IP targets, pods must have IPs from the VPC subnets. You can configure the amazon-vpc-cni-k8s plugin for this purpose.","title":"Additional requirements for non-EKS clusters:"},{"location":"deploy/installation/#additional-requirements-for-isolated-cluster","text":"Isolated clusters are clusters without internet access that instead rely on VPC endpoints for all required connections. When installing the AWS LBC in isolated clusters, you need to disable shield, waf and wafv2 via controller flags --enable-shield=false, --enable-waf=false, --enable-wafv2=false","title":"Additional requirements for isolated cluster:"},{"location":"deploy/installation/#using-the-amazon-ec2-instance-metadata-server-version-2-imdsv2","text":"We recommend blocking access to the instance metadata by requiring the instance to use IMDSv2 only. For more information, please refer to the AWS guidance here . If you are using IMDSv2, set the hop limit to 2 or higher in order to allow the LBC to perform the metadata introspection. You can set the IMDSv2 as follows: aws ec2 modify-instance-metadata-options --http-put-response-hop-limit 2 --http-tokens required --region --instance-id Instead of depending on IMDSv2, you can specify the AWS Region and the VPC via the controller flags --aws-region and --aws-vpc-id .","title":"Using the Amazon EC2 instance metadata server version 2 (IMDSv2)"},{"location":"deploy/installation/#configure-iam","text":"The controller runs on the worker nodes, so it needs access to the AWS ALB/NLB APIs with IAM permissions. The IAM permissions can either be set up using IAM roles for service accounts (IRSA) or can be attached directly to the worker node IAM roles. The best practice is using IRSA if you're using Amazon EKS. If you're using kOps or self-hosted Kubernetes, you must manually attach policies to node instances.","title":"Configure IAM"},{"location":"deploy/installation/#option-a-recommended-iam-roles-for-service-accounts-irsa","text":"The reference IAM policies contain the following permissive configuration: { \"Effect\": \"Allow\", \"Action\": [ \"ec2:AuthorizeSecurityGroupIngress\", \"ec2:RevokeSecurityGroupIngress\" ], \"Resource\": \"*\" }, We recommend further scoping down this configuration based on the VPC ID or cluster name resource tag. Example condition for VPC ID: \"Condition\": { \"ArnEquals\": { \"ec2:Vpc\": \"arn:aws:ec2:::vpc/\" } } Example condition for cluster name resource tag: \"Condition\": { \"Null\": { \"aws:ResourceTag/kubernetes.io/cluster/\": \"false\" } } Create an IAM OIDC provider. You can skip this step if you already have one for your cluster. eksctl utils associate-iam-oidc-provider \ --region \ --cluster \ --approve Download an IAM policy for the LBC using one of the following commands: If your cluster is in a US Gov Cloud region: curl -o iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.0/docs/install/iam_policy_us-gov.json If your cluster is in a China region: curl -o iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.0/docs/install/iam_policy_cn.json If your cluster is in any other region: curl -o iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.0/docs/install/iam_policy.json Create an IAM policy named AWSLoadBalancerControllerIAMPolicy . If you downloaded a different policy, replace iam-policy with the name of the policy that you downloaded. aws iam create-policy \ --policy-name AWSLoadBalancerControllerIAMPolicy \ --policy-document file://iam-policy.json Take note of the policy ARN that's returned. Create an IAM role and Kubernetes ServiceAccount for the LBC. Use the ARN from the previous step. eksctl create iamserviceaccount \ --cluster= \ --namespace=kube-system \ --name=aws-load-balancer-controller \ --attach-policy-arn=arn:aws:iam:::policy/AWSLoadBalancerControllerIAMPolicy \ --override-existing-serviceaccounts \ --region \ --approve","title":"Option A: Recommended, IAM roles for service accounts (IRSA)"},{"location":"deploy/installation/#option-b-attach-iam-policies-to-nodes","text":"If you're not setting up IAM roles for service accounts, apply the IAM policies from the following URL at a minimum. Be aware that the controller permissions may be assumed by other users in a pod once they retrieve the node role credentials, so the best practice is using IRSA instead of attaching the IAM policy directly. curl -o iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.0/docs/install/iam_policy.json 
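Once downloaded, one way to attach the policy to the node role as an inline policy is sketched below; the role name here is hypothetical: aws iam put-role-policy --role-name my-k8s-node-role --policy-name AWSLoadBalancerControllerIAMPolicy --policy-document file://iam-policy.json 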
The following IAM permissions subset is for those who use TargetGroupBinding only and don't plan to use the LBC to manage security group rules: { \"Statement\": [ { \"Action\": [ \"ec2:DescribeVpcs\", \"ec2:DescribeSecurityGroups\", \"ec2:DescribeInstances\", \"elasticloadbalancing:DescribeTargetGroups\", \"elasticloadbalancing:DescribeTargetHealth\", \"elasticloadbalancing:ModifyTargetGroup\", \"elasticloadbalancing:ModifyTargetGroupAttributes\", \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\" ], \"Effect\": \"Allow\", \"Resource\": \"*\" } ], \"Version\": \"2012-10-17\" }","title":"Option B: Attach IAM policies to nodes"},{"location":"deploy/installation/#network-configuration","text":"Review the worker nodes security group docs. Your node security group must permit incoming traffic on TCP port 9443 from the Kubernetes control plane. This is needed for webhook access. If you use eksctl , this is the default configuration. If you use custom networking, please refer to the EKS Best Practices Guides for network configuration.
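As a sketch, a rule permitting TCP 9443 from the cluster (control plane) security group to the node security group might look like the following; both group IDs are placeholders: aws ec2 authorize-security-group-ingress --group-id <node-sg-id> --protocol tcp --port 9443 --source-group <control-plane-sg-id> 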
","title":"Network configuration"},{"location":"deploy/installation/#add-controller-to-cluster","text":"We recommend using the Helm chart to install the controller. The chart supports Fargate and facilitates updating the controller. Helm If you want to run the controller on Fargate, use the Helm chart, since it doesn't depend on cert-manager .","title":"Add controller to cluster"},{"location":"deploy/installation/#detailed-instructions","text":"Follow the instructions in the aws-load-balancer-controller Helm chart.","title":"Detailed instructions"},{"location":"deploy/installation/#summary","text":"Add the EKS chart repo to Helm helm repo add eks https://aws.github.io/eks-charts If upgrading the chart via helm upgrade , install the TargetGroupBinding CRDs. wget https://raw.githubusercontent.com/aws/eks-charts/master/stable/aws-load-balancer-controller/crds/crds.yaml kubectl apply -f crds.yaml Tip The helm install command automatically applies the CRDs, but helm upgrade doesn't. Helm install command for clusters with IRSA: helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName= --set serviceAccount.create=false --set serviceAccount.name=aws-load-balancer-controller Helm install command for clusters not using IRSA: helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName= YAML manifests","title":"Summary"},{"location":"deploy/installation/#install-cert-manager","text":"kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.12.3/cert-manager.yaml","title":"Install cert-manager"},{"location":"deploy/installation/#apply-yaml","text":"Download the spec for the LBC. wget https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases/download/v2.7.0/v2_7_0_full.yaml Edit the saved yaml file, go to the Deployment spec, and set the controller --cluster-name arg value to your EKS cluster name: apiVersion: apps/v1 kind: Deployment . . . name: aws-load-balancer-controller namespace: kube-system spec: . . . template: spec: containers: - args: - --cluster-name= If you use IAM roles for service accounts, we recommend that you delete the ServiceAccount from the yaml spec; deleting the ServiceAccount preserves the eksctl-created iamserviceaccount if you later delete the installation. apiVersion: v1 kind: ServiceAccount Apply the yaml file kubectl apply -f v2_7_0_full.yaml Optionally download the default ingressclass and ingressclass params wget https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases/download/v2.7.0/v2_7_0_ingclass.yaml Apply the ingressclass and params kubectl apply -f v2_7_0_ingclass.yaml","title":"Apply YAML"},{"location":"deploy/installation/#create-update-strategy","text":"The controller doesn't receive security updates automatically. You need to manually upgrade to a newer version when it becomes available. You can upgrade using helm upgrade or another strategy to manage the controller deployment.","title":"Create Update Strategy"},{"location":"deploy/pod_readiness_gate/","text":"Pod readiness gate \u00b6 The AWS Load Balancer controller supports \u00bbPod readiness gates\u00ab to indicate that a pod is registered to the ALB/NLB and healthy to receive traffic. The controller automatically injects the necessary readiness gate configuration to the pod spec via mutating webhook during pod creation. For the readiness gate configuration to be injected to the pod spec, you need to apply the label elbv2.k8s.aws/pod-readiness-gate-inject: enabled to the pod namespace. Note that this only works with target-type: ip ; when using target-type: instance , the node serves as the backend, so the ALB itself is not aware of the pod or its readiness. The pod readiness gate is needed under certain circumstances to achieve full zero downtime rolling deployments. Consider the following example: Low number of replicas in a deployment Start a rolling update of the deployment Rollout of new pods takes less time than it takes the AWS Load Balancer controller to register the new pods and for their health state to turn \u00bbHealthy\u00ab in the target group At some point during this rolling update, the target group might only have registered targets that are in \u00bbInitial\u00ab or \u00bbDraining\u00ab state; this results in a service outage In order to avoid this situation, the AWS Load Balancer controller can set the readiness condition on the pods that constitute your ingress or service backend. The condition status on a pod will be set to True only when the corresponding target in the ALB/NLB target group shows a health state of \u00bbHealthy\u00ab. This prevents the rolling update of a deployment from terminating old pods until the newly created pods are \u00bbHealthy\u00ab in the ALB/NLB target group and ready to take traffic. upgrading from AWS ALB ingress controller If you have a pod spec with legacy readiness gate configuration, ensure you label the namespace and create the Service/Ingress objects before applying the pod/deployment manifest. The load balancer controller will remove all legacy readiness-gate configuration and add new ones during pod creation. Configuration \u00b6 Pod readiness gate support is enabled by default on the AWS load balancer controller. You need to apply the readiness gate inject label to each namespace in which you would like to use this feature. 
You can create and label a namespace as follows - $ kubectl create namespace readiness namespace/readiness created $ kubectl label namespace readiness elbv2.k8s.aws/pod-readiness-gate-inject=enabled namespace/readiness labeled $ kubectl describe namespace readiness Name: readiness Labels: elbv2.k8s.aws/pod-readiness-gate-inject=enabled Annotations: Status: Active Once labelled, the controller will add the pod readiness gates config to all the pods created subsequently that meet all the following conditions: There exists a service matching the pod labels in the same namespace There exists at least one target group binding that refers to the matching service The target type is IP The readiness gates have the prefix target-health.elbv2.k8s.aws and the controller injects the config to the pod spec only during pod creation. create ingress or service before pod To ensure all of your pods in a namespace get the readiness gate config, you need to create your Ingress or Service and label the namespace before creating the pods Object Selector \u00b6 The default webhook configuration matches all pods in the namespaces containing the label elbv2.k8s.aws/pod-readiness-gate-inject=enabled . You can modify the webhook configuration further to select specific pods from the labeled namespace by specifying the objectSelector . For example, in order to select resources with the elbv2.k8s.aws/pod-readiness-gate-inject: enabled label, you can add the following objectSelector to the webhook: objectSelector: matchLabels: elbv2.k8s.aws/pod-readiness-gate-inject: enabled To edit, $ kubectl edit mutatingwebhookconfigurations aws-load-balancer-webhook ... name: mpod.elbv2.k8s.aws namespaceSelector: matchExpressions: - key: elbv2.k8s.aws/pod-readiness-gate-inject operator: In values: - enabled objectSelector: matchLabels: elbv2.k8s.aws/pod-readiness-gate-inject: enabled ... When you specify multiple selectors, pods matching all the conditions will get mutated. Upgrading from AWS ALB Ingress controller \u00b6 If you have a pod spec with the AWS ALB ingress controller (aka v1) style readiness-gate configuration, the controller will automatically remove the legacy readiness gates config and add new ones during pod creation if the pod namespace is labelled correctly. Other than the namespace labeling, no further configuration is necessary. The legacy readiness gates have the target-health.alb.ingress.k8s.aws prefix. Disabling the readiness gate inject \u00b6 You can specify the controller flag --enable-pod-readiness-gate-inject=false during controller startup to disable the controller from modifying the pod spec. 
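If you manage the controller manifests directly, one way to set this flag (a sketch) is to edit the deployment and append it to the container args: kubectl edit deployment -n kube-system aws-load-balancer-controller # then add - --enable-pod-readiness-gate-inject=false under the container args 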
Checking the pod condition status \u00b6 The status of the readiness gates can be verified with kubectl get pod -o wide : NAME READY STATUS RESTARTS AGE IP NODE READINESS GATES nginx-test-5744b9ff84-7ftl9 1/1 Running 0 81s 10.1.2.3 ip-10-1-2-3.ec2.internal 0/1 When the target is registered and healthy in the ALB/NLB, the output will look like: NAME READY STATUS RESTARTS AGE IP NODE READINESS GATES nginx-test-5744b9ff84-7ftl9 1/1 Running 0 81s 10.1.2.3 ip-10-1-2-3.ec2.internal 1/1 If a readiness gate doesn't get ready, you can check the reason via: $ kubectl get pod nginx-test-545d8f4d89-l7rcl -o yaml | grep -B7 'type: target-health' status: conditions: - lastProbeTime: null lastTransitionTime: null message: Initial health checks in progress reason: Elb.InitialHealthChecking status: \"True\" type: target-health.elbv2.k8s.aws/k8s-readines-perf1000-7848e5026b","title":"Pod Readiness Gate"},{"location":"deploy/pod_readiness_gate/#pod-readiness-gate","text":"The AWS Load Balancer controller supports \u00bbPod readiness gates\u00ab to indicate that a pod is registered to the ALB/NLB and healthy to receive traffic. The controller automatically injects the necessary readiness gate configuration to the pod spec via mutating webhook during pod creation. For the readiness gate configuration to be injected to the pod spec, you need to apply the label elbv2.k8s.aws/pod-readiness-gate-inject: enabled to the pod namespace. Note that this only works with target-type: ip ; when using target-type: instance , the node serves as the backend, so the ALB itself is not aware of the pod or its readiness. The pod readiness gate is needed under certain circumstances to achieve full zero downtime rolling deployments. Consider the following example: Low number of replicas in a deployment Start a rolling update of the deployment Rollout of new pods takes less time than it takes the AWS Load Balancer controller to register the new pods and for their health state to turn \u00bbHealthy\u00ab in the target group At some point during this rolling update, the target group might only have registered targets that are in \u00bbInitial\u00ab or \u00bbDraining\u00ab state; this results in a service outage In order to avoid this situation, the AWS Load Balancer controller can set the readiness condition on the pods that constitute your ingress or service backend. The condition status on a pod will be set to True only when the corresponding target in the ALB/NLB target group shows a health state of \u00bbHealthy\u00ab. This prevents the rolling update of a deployment from terminating old pods until the newly created pods are \u00bbHealthy\u00ab in the ALB/NLB target group and ready to take traffic. upgrading from AWS ALB ingress controller If you have a pod spec with legacy readiness gate configuration, ensure you label the namespace and create the Service/Ingress objects before applying the pod/deployment manifest. The load balancer controller will remove all legacy readiness-gate configuration and add new ones during pod creation.","title":"Pod readiness gate"},{"location":"deploy/pod_readiness_gate/#configuration","text":"Pod readiness gate support is enabled by default on the AWS load balancer controller. You need to apply the readiness gate inject label to each namespace in which you would like to use this feature. 
You can create and label a namespace as follows - $ kubectl create namespace readiness namespace/readiness created $ kubectl label namespace readiness elbv2.k8s.aws/pod-readiness-gate-inject=enabled namespace/readiness labeled $ kubectl describe namespace readiness Name: readiness Labels: elbv2.k8s.aws/pod-readiness-gate-inject=enabled Annotations: Status: Active Once labelled, the controller will add the pod readiness gates config to all the pods created subsequently that meet all the following conditions: There exists a service matching the pod labels in the same namespace There exists at least one target group binding that refers to the matching service The target type is IP The readiness gates have the prefix target-health.elbv2.k8s.aws and the controller injects the config to the pod spec only during pod creation. create ingress or service before pod To ensure all of your pods in a namespace get the readiness gate config, you need to create your Ingress or Service and label the namespace before creating the pods","title":"Configuration"},{"location":"deploy/pod_readiness_gate/#object-selector","text":"The default webhook configuration matches all pods in the namespaces containing the label elbv2.k8s.aws/pod-readiness-gate-inject=enabled . You can modify the webhook configuration further to select specific pods from the labeled namespace by specifying the objectSelector . For example, in order to select resources with the elbv2.k8s.aws/pod-readiness-gate-inject: enabled label, you can add the following objectSelector to the webhook: objectSelector: matchLabels: elbv2.k8s.aws/pod-readiness-gate-inject: enabled To edit, $ kubectl edit mutatingwebhookconfigurations aws-load-balancer-webhook ... name: mpod.elbv2.k8s.aws namespaceSelector: matchExpressions: - key: elbv2.k8s.aws/pod-readiness-gate-inject operator: In values: - enabled objectSelector: matchLabels: elbv2.k8s.aws/pod-readiness-gate-inject: enabled ... When you specify multiple selectors, pods matching all the conditions will get mutated.","title":"Object Selector"},{"location":"deploy/pod_readiness_gate/#upgrading-from-aws-alb-ingress-controller","text":"If you have a pod spec with the AWS ALB ingress controller (aka v1) style readiness-gate configuration, the controller will automatically remove the legacy readiness gates config and add new ones during pod creation if the pod namespace is labelled correctly. Other than the namespace labeling, no further configuration is necessary. The legacy readiness gates have the target-health.alb.ingress.k8s.aws prefix.
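To verify which gates ended up on a pod after the upgrade, you could inspect the pod spec directly (the pod name and namespace here are hypothetical): kubectl get pod my-pod -n readiness -o jsonpath='{.spec.readinessGates}' 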
","title":"Upgrading from AWS ALB Ingress controller"},{"location":"deploy/pod_readiness_gate/#disabling-the-readiness-gate-inject","text":"You can specify the controller flag --enable-pod-readiness-gate-inject=false during controller startup to disable the controller from modifying the pod spec.","title":"Disabling the readiness gate inject"},{"location":"deploy/pod_readiness_gate/#checking-the-pod-condition-status","text":"The status of the readiness gates can be verified with kubectl get pod -o wide : NAME READY STATUS RESTARTS AGE IP NODE READINESS GATES nginx-test-5744b9ff84-7ftl9 1/1 Running 0 81s 10.1.2.3 ip-10-1-2-3.ec2.internal 0/1 When the target is registered and healthy in the ALB/NLB, the output will look like: NAME READY STATUS RESTARTS AGE IP NODE READINESS GATES nginx-test-5744b9ff84-7ftl9 1/1 Running 0 81s 10.1.2.3 ip-10-1-2-3.ec2.internal 1/1 If a readiness gate doesn't get ready, you can check the reason via: $ kubectl get pod nginx-test-545d8f4d89-l7rcl -o yaml | grep -B7 'type: target-health' status: conditions: - lastProbeTime: null lastTransitionTime: null message: Initial health checks in progress reason: Elb.InitialHealthChecking status: \"True\" type: target-health.elbv2.k8s.aws/k8s-readines-perf1000-7848e5026b","title":"Checking the pod condition status"},{"location":"deploy/security_groups/","text":"Security Groups for Load Balancers \u00b6 Use security groups to limit client connections to your load balancers, and restrict connections with nodes. The AWS Load Balancer Controller (LBC) defines two classifications of security groups: frontend and backend . Frontend Security Groups: Determine the clients that can access the load balancers. Backend Security Groups: Permit the load balancer to connect to targets, such as EC2 instances or ENIs. Frontend Security Groups \u00b6 Frontend security groups control access to load balancers by specifying which clients can connect to them. Use cases for Frontend Security Groups include: Placing the load balancer behind another service, such as AWS Web Application Firewall or AWS CloudFront . Blocking the IP address range (CIDR) of a region. Configuring the Load Balancer for private or internal use, by specifying internal CIDRs and Security Groups. In the default configuration, the LBC automatically creates one security group per load balancer, allowing traffic from inbound-cidrs to listen-ports . Configuration \u00b6 Apply custom frontend security groups with an annotation. This disables automatic generation of frontend security groups. For Ingress resources, use the alb.ingress.kubernetes.io/security-groups annotation. For Service resources, use the service.beta.kubernetes.io/aws-load-balancer-security-groups annotation. The annotation must be set to one or more security group IDs or security group names. Backend Security Groups \u00b6 Backend Security Groups control traffic between AWS Load Balancers and their target EC2 instances or ENIs. For example, backend security groups can restrict the ports load balancers may access on nodes. Backend security groups permit traffic from AWS Load Balancers to their targets. The LBC uses a single, shared backend security group, attaching it to each load balancer and using it as the traffic source in the security group rules it adds to targets. When configuring security group rules at the ENI/Instance level, use the Security Group ID of the backend security group. Avoid using the IP addresses of a specific AWS Load Balancer; these IPs are dynamic, and the security group rules aren't updated automatically. 
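For instance, allowing the shared backend security group to reach a port on your nodes might look like the following sketch; the group IDs and port are placeholders: aws ec2 authorize-security-group-ingress --group-id <node-sg-id> --protocol tcp --port 80 --source-group <backend-sg-id> 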
Configuration \u00b6 Enable or Disable: Use --enable-backend-security-group (default true ) to enable/disable the shared backend security group. You can turn off the shared backend security group feature by setting it to false . However, if you have a high number of Ingress resources with frontend security groups auto-generated by the controller, you might run into security group rule limits on the instance/ENI security groups. Specification: Use --backend-security-group to pass in a security group ID to use as a custom shared backend security group. If --backend-security-group is left empty, a security group with the following attributes will be created: name : k8s-traffic-- tags : elbv2.k8s.aws/cluster : elbv2.k8s.aws/resource : backend-sg Coordination of Frontend and Backend Security Groups \u00b6 If the LBC auto-creates the frontend security group for a load balancer, it automatically adds the security group rules to allow traffic from the load balancer to the backend instances/ENIs. If the frontend security groups are manually specified, the LBC will not by default add any rules to the backend security group. Enable Autogeneration of Backend Security Group Rules \u00b6 If using custom frontend security groups, the LBC can be configured to automatically manage backend security group rules. To enable managing backend security group rules, apply an additional annotation to Ingress and Service resources. For Ingress resources, set the alb.ingress.kubernetes.io/manage-backend-security-group-rules annotation to true . For Service resources, set the service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules annotation to true . If management of backend security group rules is enabled with an annotation on a Service or Ingress, then --enable-backend-security-group must be set to true. These annotations are ignored when using auto-generated frontend security groups. Port Range Restrictions \u00b6 From version v2.3.0 onwards, the controller restricts port ranges in the backend security group rules by default. This improves the security of the default configuration. The LBC should generate the necessary rules to permit traffic, based on the Service and Ingress resources. If needed, set the controller flag --disable-restricted-sg-rules to true to permit traffic to all ports. This may be appropriate for backwards compatibility or troubleshooting.","title":"Security Group Management"},{"location":"deploy/security_groups/#security-groups-for-load-balancers","text":"Use security groups to limit client connections to your load balancers, and restrict connections with nodes. The AWS Load Balancer Controller (LBC) defines two classifications of security groups: frontend and backend . Frontend Security Groups: Determine the clients that can access the load balancers. Backend Security Groups: Permit the load balancer to connect to targets, such as EC2 instances or ENIs.","title":"Security Groups for Load Balancers"},{"location":"deploy/security_groups/#frontend-security-groups","text":"Frontend security groups control access to load balancers by specifying which clients can connect to them. Use cases for Frontend Security Groups include: Placing the load balancer behind another service, such as AWS Web Application Firewall or AWS CloudFront . Blocking the IP address range (CIDR) of a region. 
Configuring the Load Balancer for private or internal use, by specifying internal CIDRs and Security Groups. In the default configuration, the LBC automatically creates one security group per load balancer, allowing traffic from inbound-cidrs to listen-ports .","title":"Frontend Security Groups"},{"location":"deploy/security_groups/#configuration","text":"Apply custom frontend security groups with an annotation. This disables automatic generation of frontend security groups. For Ingress resources, use the alb.ingress.kubernetes.io/security-groups annotation. For Service resources, use the service.beta.kubernetes.io/aws-load-balancer-security-groups annotation. The annotation must be set to one or more security group IDs or security group names.","title":"Configuration"},{"location":"deploy/security_groups/#backend-security-groups","text":"Backend Security Groups control traffic between AWS Load Balancers and their target EC2 instances or ENIs. For example, backend security groups can restrict the ports load balancers may access on nodes. Backend security groups permit traffic from AWS Load Balancers to their targets. The LBC uses a single, shared backend security group, attaching it to each load balancer and using it as the traffic source in the security group rules it adds to targets. When configuring security group rules at the ENI/Instance level, use the Security Group ID of the backend security group. Avoid using the IP addresses of a specific AWS Load Balancer; these IPs are dynamic, and the security group rules aren't updated automatically.","title":"Backend Security Groups"},{"location":"deploy/security_groups/#configuration_1","text":"Enable or Disable: Use --enable-backend-security-group (default true ) to enable/disable the shared backend security group. You can turn off the shared backend security group feature by setting it to false . However, if you have a high number of Ingress resources with frontend security groups auto-generated by the controller, you might run into security group rule limits on the instance/ENI security groups. Specification: Use --backend-security-group to pass in a security group ID to use as a custom shared backend security group. If --backend-security-group is left empty, a security group with the following attributes will be created: name : k8s-traffic-- tags : elbv2.k8s.aws/cluster : elbv2.k8s.aws/resource : backend-sg","title":"Configuration"},{"location":"deploy/security_groups/#coordination-of-frontend-and-backend-security-groups","text":"If the LBC auto-creates the frontend security group for a load balancer, it automatically adds the security group rules to allow traffic from the load balancer to the backend instances/ENIs. If the frontend security groups are manually specified, the LBC will not by default add any rules to the backend security group.","title":"Coordination of Frontend and Backend Security Groups"},{"location":"deploy/security_groups/#enable-autogeneration-of-backend-security-group-rules","text":"If using custom frontend security groups, the LBC can be configured to automatically manage backend security group rules. To enable managing backend security group rules, apply an additional annotation to Ingress and Service resources. For Ingress resources, set the alb.ingress.kubernetes.io/manage-backend-security-group-rules annotation to true . For Service resources, set the service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules annotation to true . 
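For example, on a Service you might turn this on with kubectl (the service name is hypothetical): kubectl annotate service my-service service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules=true 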
If management of backend security group rules is enabled with an annotation on a Service or Ingress, then --enable-backend-security-group must be set to true. These annotations are ignored when using auto-generated frontend security groups.","title":"Enable Autogeneration of Backend Security Group Rules"},{"location":"deploy/security_groups/#port-range-restrictions","text":"From version v2.3.0 onwards, the controller restricts port ranges in the backend security group rules by default. This improves the security of the default configuration. The LBC should generate the necessary rules to permit traffic, based on the Service and Ingress resources. If needed, set the controller flag --disable-restricted-sg-rules to true to permit traffic to all ports. This may be appropriate for backwards compatibility or troubleshooting.","title":"Port Range Restrictions"},{"location":"deploy/subnet_discovery/","text":"Subnet auto-discovery \u00b6 By default, the AWS Load Balancer Controller (LBC) auto-discovers network subnets that it can create AWS Network Load Balancers (NLB) and AWS Application Load Balancers (ALB) in. ALBs require at least two subnets across Availability Zones by default; setting the feature gate ALBSingleSubnet to \"true\" allows provisioning an ALB with only one subnet. NLBs require one subnet. The subnets must be tagged appropriately for auto-discovery to work. 
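For example, tagging a subnet for internet-facing load balancers could be done like this; the subnet ID is a placeholder: aws ec2 create-tags --resources subnet-0123456789abcdef0 --tags Key=kubernetes.io/role/elb,Value=1 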
The controller chooses one subnet from each Availability Zone. During auto-discovery, the controller considers subnets with at least eight available IP addresses. In the case of multiple qualified tagged subnets in an Availability Zone, the controller chooses the first one in lexicographical order by the subnet IDs. For more information about the subnets for the LBC, see Application Load Balancers and Network Load Balancers . If you used eksctl or an Amazon EKS AWS CloudFormation template to create your VPC after March 26, 2020, then the subnets are tagged appropriately when they're created. For more information about the Amazon EKS AWS CloudFormation VPC templates, see Creating a VPC for your Amazon EKS cluster . Public subnets \u00b6 Public subnets are used for internet-facing load balancers. These subnets must have the following tags: Key Value kubernetes.io/role/elb 1 or `` Private subnets \u00b6 Private subnets are used for internal load balancers. These subnets must have the following tags: Key Value kubernetes.io/role/internal-elb 1 or `` Common tag \u00b6 In version v2.1.1 and older of the LBC, both the public and private subnets must be tagged with the cluster name as follows: Key Value kubernetes.io/cluster/${cluster-name} owned or shared ${cluster-name} is the name of the Kubernetes cluster. The cluster tag is not required in versions v2.1.2 to v2.4.1, unless a cluster tag for another cluster is present. With versions v2.4.2 and later, you can disable the cluster tag check completely by specifying the feature gate SubnetsClusterTagCheck=false","title":"Subnet Discovery"},{"location":"deploy/subnet_discovery/#subnet-auto-discovery","text":"By default, the AWS Load Balancer Controller (LBC) auto-discovers network subnets that it can create AWS Network Load Balancers (NLB) and AWS Application Load Balancers (ALB) in. ALBs require at least two subnets across Availability Zones by default; setting the feature gate ALBSingleSubnet to \"true\" allows provisioning an ALB with only one subnet. NLBs require one subnet. The subnets must be tagged appropriately for auto-discovery to work. The controller chooses one subnet from each Availability Zone. During auto-discovery, the controller considers subnets with at least eight available IP addresses. In the case of multiple qualified tagged subnets in an Availability Zone, the controller chooses the first one in lexicographical order by the subnet IDs. For more information about the subnets for the LBC, see Application Load Balancers and Network Load Balancers . If you used eksctl or an Amazon EKS AWS CloudFormation template to create your VPC after March 26, 2020, then the subnets are tagged appropriately when they're created. For more information about the Amazon EKS AWS CloudFormation VPC templates, see Creating a VPC for your Amazon EKS cluster .","title":"Subnet auto-discovery"},{"location":"deploy/subnet_discovery/#public-subnets","text":"Public subnets are used for internet-facing load balancers. These subnets must have the following tags: Key Value kubernetes.io/role/elb 1 or ``","title":"Public subnets"},{"location":"deploy/subnet_discovery/#private-subnets","text":"Private subnets are used for internal load balancers. These subnets must have the following tags: Key Value kubernetes.io/role/internal-elb 1 or ``","title":"Private subnets"},{"location":"deploy/subnet_discovery/#common-tag","text":"In version v2.1.1 and older of the LBC, both the public and private subnets must be tagged with the cluster name as follows: Key Value kubernetes.io/cluster/${cluster-name} owned or shared ${cluster-name} is the name of the Kubernetes cluster. The cluster tag is not required in versions v2.1.2 to v2.4.1, unless a cluster tag for another cluster is present. With versions v2.4.2 and later, you can disable the cluster tag check completely by specifying the feature gate SubnetsClusterTagCheck=false","title":"Common tag"},{"location":"deploy/upgrade/migrate_v1_v2/","text":"Migrate from v1 to v2 \u00b6 This document contains the information necessary to migrate from an existing installation of AWSALBIngressController(v1) to the new AWSLoadBalancerController(v2). Prerequisites \u00b6 AWSALBIngressController >=v1.1.3 If you have AWSALBIngressController(<1.1.3) installed, you need to upgrade to version >=v1.1.3 (e.g. v1.1.9) first. Backwards compatibility \u00b6 The AWSLoadBalancerController(v2.0.1) is backwards-compatible with AWSALBIngressController(>=v1.1.3). It supports existing AWS resources provisioned by AWSALBIngressController(>=v1.1.3) for Ingress resources with below caveats: The AWS LoadBalancer resource created for your Ingress will be preserved (when migrating from >=v1.1.3). Upgrade steps \u00b6 Existing AWSALBIngressController needs to be uninstalled first before installing the new AWSLoadBalancerController. Existing Ingress resources do not need to be deleted. Install the new AWSLoadBalancerController: install AWSLoadBalancerController(v2.5.0) by following the installation instructions, and grant the additional IAM policy needed for migration to the controller. Verify all Ingresses work as expected.","title":"Migrate v1 to v2"},{"location":"deploy/upgrade/migrate_v1_v2/#migrate-from-v1-to-v2","text":"This document contains the information necessary to migrate from an existing installation of AWSALBIngressController(v1) to the new AWSLoadBalancerController(v2).","title":"Migrate from v1 to v2"},{"location":"deploy/upgrade/migrate_v1_v2/#prerequisites","text":"AWSALBIngressController >=v1.1.3 If you have AWSALBIngressController(<1.1.3) installed, you need to upgrade to version >=v1.1.3 (e.g. v1.1.9) first.
","title":"Prerequisites"},{"location":"deploy/upgrade/migrate_v1_v2/#backwards-compatibility","text":"The AWSLoadBalancerController(v2.0.1) is backwards-compatible with AWSALBIngressController(>=v1.1.3). It supports existing AWS resources provisioned by AWSALBIngressController(>=v1.1.3) for Ingress resources with below caveats: The AWS LoadBalancer resource created for your Ingress will be preserved (when migrating from >=v1.1.3).","title":"Backwards compatibility"},{"location":"deploy/upgrade/migrate_v1_v2/#upgrade-steps","text":"Existing AWSALBIngressController needs to be uninstalled first before installing the new AWSLoadBalancerController. Existing Ingress resources do not need to be deleted. Install the new AWSLoadBalancerController: install AWSLoadBalancerController(v2.5.0) by following the installation instructions, and grant the additional IAM policy needed for migration to the controller. Verify all Ingresses work as expected.","title":"Upgrade steps"},{"location":"examples/echo_server/","text":"walkthrough: echoserver \u00b6 In this walkthrough, you'll Create a cluster with EKS Deploy an aws-load-balancer-controller Create deployments and ingress resources in the cluster Verify access to the service (Optional) Use external-dns to create a DNS record pointing to the load balancer created by the aws-load-balancer-controller. This assumes you have a route53 hosted zone available. Otherwise you can access the service using the load balancer DNS. Create the EKS cluster \u00b6 Install eksctl : https://eksctl.io Create EKS cluster via eksctl eksctl create cluster 2018-08-14T11:19:09-07:00 [\u2139] setting availability zones to [us-west-2c us-west-2a us-west-2b] 2018-08-14T11:19:09-07:00 [\u2139] importing SSH public key \"/Users/kamador/.ssh/id_rsa.pub\" as \"eksctl-exciting-gopher-1534270749-b7:71:da:f6:f3:63:7a:ee:ad:7a:10:37:28:ff:44:d1\" 2018-08-14T11:19:10-07:00 [\u2139] creating EKS cluster \"exciting-gopher-1534270749\" in \"us-west-2\" region 2018-08-14T11:19:10-07:00 [\u2139] creating ServiceRole stack \"EKS-exciting-gopher-1534270749-ServiceRole\" 2018-08-14T11:19:10-07:00 [\u2139] creating VPC stack \"EKS-exciting-gopher-1534270749-VPC\" 2018-08-14T11:19:50-07:00 [\u2714] created ServiceRole stack \"EKS-exciting-gopher-1534270749-ServiceRole\" 2018-08-14T11:20:30-07:00 [\u2714] created VPC stack \"EKS-exciting-gopher-1534270749-VPC\" 2018-08-14T11:20:30-07:00 [\u2139] creating control plane \"exciting-gopher-1534270749\" 2018-08-14T11:31:52-07:00 [\u2714] created control plane \"exciting-gopher-1534270749\" 2018-08-14T11:31:52-07:00 [\u2139] creating DefaultNodeGroup stack \"EKS-exciting-gopher-1534270749-DefaultNodeGroup\" 2018-08-14T11:35:33-07:00 [\u2714] created DefaultNodeGroup stack \"EKS-exciting-gopher-1534270749-DefaultNodeGroup\" 2018-08-14T11:35:33-07:00 [\u2714] all EKS cluster \"exciting-gopher-1534270749\" resources has been created 2018-08-14T11:35:33-07:00 [\u2714] saved kubeconfig as \"/Users/kamador/.kube/config\" 2018-08-14T11:35:34-07:00 [\u2139] the cluster has 0 nodes 2018-08-14T11:35:34-07:00 [\u2139] waiting for at least 2 nodes to become ready 2018-08-14T11:36:05-07:00 [\u2139] the cluster has 2 nodes 2018-08-14T11:36:05-07:00 [\u2139] node \"ip-192-168-139-176.us-west-2.compute.internal\" is ready 2018-08-14T11:36:05-07:00 [\u2139] node \"ip-192-168-214-126.us-west-2.compute.internal\" is ready 2018-08-14T11:36:05-07:00 [\u2714] EKS cluster \"exciting-gopher-1534270749\" in \"us-west-2\" region is ready Setup the AWS Load Balancer controller \u00b6 Refer to the installation instructions to setup the controller Verify the deployment was successful and the 
controller started. kubectl logs -n kube-system --tail -1 -l app.kubernetes.io/name = aws-load-balancer-controller Should display output similar to the following. {\"level\":\"info\",\"ts\":1602778062.2588625,\"logger\":\"setup\",\"msg\":\"version\",\"GitVersion\":\"v2.0.0-rc3-13-gcdc8f715-dirty\",\"GitCommit\":\"cdc8f715919cc65ca8161b6083c4091222632d6b\",\"BuildDate\":\"2020-10-15T15:58:31+0000\"} {\"level\":\"info\",\"ts\":1602778065.4515743,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\":8080\"} {\"level\":\"info\",\"ts\":1602778065.4536595,\"logger\":\"controller-runtime.webhook\",\"msg\":\"registering webhook\",\"path\":\"/mutate-v1-pod\"} {\"level\":\"info\",\"ts\":1602778065.4537156,\"logger\":\"controller-runtime.webhook\",\"msg\":\"registering webhook\",\"path\":\"/mutate-elbv2-k8s-aws-v1beta1-targetgroupbinding\"} {\"level\":\"info\",\"ts\":1602778065.4537542,\"logger\":\"controller-runtime.webhook\",\"msg\":\"registering webhook\",\"path\":\"/validate-elbv2-k8s-aws-v1beta1-targetgroupbinding\"} {\"level\":\"info\",\"ts\":1602778065.4537594,\"logger\":\"setup\",\"msg\":\"starting manager\"} I1015 16:07:45.453851 1 leaderelection.go:242] attempting to acquire leader lease kube-system/aws-load-balancer-controller-leader... {\"level\":\"info\",\"ts\":1602778065.5544264,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} {\"level\":\"info\",\"ts\":1602778065.5544496,\"logger\":\"controller-runtime.webhook.webhooks\",\"msg\":\"starting webhook server\"} {\"level\":\"info\",\"ts\":1602778065.5549548,\"logger\":\"controller-runtime.certwatcher\",\"msg\":\"Updated current TLS certificate\"} {\"level\":\"info\",\"ts\":1602778065.5550802,\"logger\":\"controller-runtime.webhook\",\"msg\":\"serving webhook server\",\"host\":\"\",\"port\":9443} {\"level\":\"info\",\"ts\":1602778065.5551715,\"logger\":\"controller-runtime.certwatcher\",\"msg\":\"Starting certificate watcher\"} I1015 16:08:03.662023 1 leaderelection.go:252] successfully acquired lease kube-system/aws-load-balancer-controller-leader {\"level\":\"info\",\"ts\":1602778083.663017,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting EventSource\",\"controller\":\"targetGroupBinding\",\"source\":\"kind source: /, Kind=\"} {\"level\":\"info\",\"ts\":1602778083.6631303,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting EventSource\",\"controller\":\"targetGroupBinding\",\"source\":\"kind source: /, Kind=\"} {\"level\":\"info\",\"ts\":1602778083.6633205,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting EventSource\",\"controller\":\"ingress\",\"source\":\"channel source: 0xc0007340f0\"} {\"level\":\"info\",\"ts\":1602778083.6633654,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting EventSource\",\"controller\":\"ingress\",\"source\":\"channel source: 0xc000734140\"} {\"level\":\"info\",\"ts\":1602778083.6633892,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting EventSource\",\"controller\":\"ingress\",\"source\":\"kind source: /, Kind=\"} {\"level\":\"info\",\"ts\":1602778083.663441,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting EventSource\",\"controller\":\"ingress\",\"source\":\"kind source: /, Kind=\"} {\"level\":\"info\",\"ts\":1602778083.6634624,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting EventSource\",\"controller\":\"ingress\",\"source\":\"kind source: /, Kind=\"} 
{\"level\":\"info\",\"ts\":1602778083.6635776,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting EventSource\",\"controller\":\"service\",\"source\":\"kind source: /, Kind=\"} {\"level\":\"info\",\"ts\":1602778083.6636262,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting Controller\",\"controller\":\"service\"} {\"level\":\"info\",\"ts\":1602778083.7634695,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting EventSource\",\"controller\":\"targetGroupBinding\",\"source\":\"kind source: /, Kind=\"} {\"level\":\"info\",\"ts\":1602778083.7637022,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting workers\",\"controller\":\"service\",\"worker count\":3} {\"level\":\"info\",\"ts\":1602778083.7641861,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting Controller\",\"controller\":\"ingress\"} {\"level\":\"info\",\"ts\":1602778083.8641882,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting Controller\",\"controller\":\"targetGroupBinding\"} {\"level\":\"info\",\"ts\":1602778083.864236,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting workers\",\"controller\":\"targetGroupBinding\",\"worker count\":3} {\"level\":\"info\",\"ts\":1602778083.8643816,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting workers\",\"controller\":\"ingress\",\"worker count\":3} Deploy the echoserver resources \u00b6 Deploy all the echoserver resources (namespace, service, deployment) kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.0/docs/examples/echoservice/echoserver-namespace.yaml && \\ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.0/docs/examples/echoservice/echoserver-service.yaml && \\ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.0/docs/examples/echoservice/echoserver-deployment.yaml List all the resources to ensure they were created. kubectl get -n echoserver deploy,svc Should resolve similar to the following. NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE svc/echoserver 10.3.31.76 80:31027/TCP 4d NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deploy/echoserver 1 1 1 1 4d Deploy ingress for echoserver \u00b6 Download the echoserver ingress manifest locally. wget https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.0/docs/examples/echoservice/echoserver-ingress.yaml Configure the subnets, either by add annotation to the ingress or add tags to subnets. This step is optional in lieu of auto-discovery. Tip If you'd like to use external dns, alter the host field to a domain that you own in Route 53. Assuming you managed example.com in Route 53. Edit the alb.ingress.kubernetes.io/subnets annotation to include at least two subnets. Subnets must be from different Availability Zones. 
eksctl get cluster exciting-gopher-1534270749 NAME VERSION STATUS CREATED VPC SUBNETS SECURITYGROUPS exciting-gopher-1534270749 1.10 ACTIVE 2018-08-14T18:20:32Z vpc-0aa01b07b3c922c9c subnet-05e1c98ed0f5b109e,subnet-07f5bb81f661df61b,subnet-0a4e6232630820516 sg-05ceb5eee9fd7cac4 apiVersion : networking.k8s.io/v1 kind : Ingress metadata : name : echoserver namespace : echoserver annotations : alb.ingress.kubernetes.io/scheme : internet-facing alb.ingress.kubernetes.io/target-type : ip alb.ingress.kubernetes.io/subnets : subnet-05e1c98ed0f5b109e,subnet-07f5bb81f661df61b,subnet-0a4e6232630820516 alb.ingress.kubernetes.io/tags : Environment=dev,Team=test spec : rules : - http : paths : To add tags to subnets for auto-discovery (instead of using the alb.ingress.kubernetes.io/subnets annotation), you must include the following tags on the desired subnets. kubernetes.io/cluster/$CLUSTER_NAME where $CLUSTER_NAME is the same CLUSTER_NAME specified in the above step. kubernetes.io/role/internal-elb should be set to 1 or an empty tag value for internal load balancers. kubernetes.io/role/elb should be set to 1 or an empty tag value for internet-facing load balancers. Deploy the ingress resource for echoserver kubectl apply -f echoserver-ingress.yaml Verify the aws-load-balancer-controller creates the resources kubectl logs -n kube-system --tail -1 -l app.kubernetes.io/name = aws-load-balancer-controller | grep 'echoserver\\/echoserver' You should see output similar to the following. {\"level\":\"info\",\"ts\":1602803965.264764,\"logger\":\"controllers.ingress\",\"msg\":\"successfully built model\",\"model\":\"{\\\"id\\\":\\\"echoserver/echoserver\\\",\\\"resources\\\":{\\\"AWS::EC2::SecurityGroup\\\":{\\\"ManagedLBSecurityGroup\\\":{\\\"spec\\\":{\\\"groupName\\\":\\\"k8s-echoserv-echoserv-4e1e34cae5\\\",\\\"description\\\":\\\"[k8s] Managed SecurityGroup for 
LoadBalancer\\\",\\\"tags\\\":{\\\"Environment\\\":\\\"dev\\\",\\\"Team\\\":\\\"test\\\"},\\\"ingress\\\":[{\\\"ipProtocol\\\":\\\"tcp\\\",\\\"fromPort\\\":80,\\\"toPort\\\":80,\\\"ipRanges\\\":[{\\\"cidrIP\\\":\\\"0.0.0.0/0\\\"}]}]}}},\\\"AWS::ElasticLoadBalancingV2::Listener\\\":{\\\"80\\\":{\\\"spec\\\":{\\\"loadBalancerARN\\\":{\\\"$ref\\\":\\\"#/resources/AWS::ElasticLoadBalancingV2::LoadBalancer/LoadBalancer/status/loadBalancerARN\\\"},\\\"port\\\":80,\\\"protocol\\\":\\\"HTTP\\\",\\\"defaultActions\\\":[{\\\"type\\\":\\\"fixed-response\\\",\\\"fixedResponseConfig\\\":{\\\"contentType\\\":\\\"text/plain\\\",\\\"statusCode\\\":\\\"404\\\"}}]}}},\\\"AWS::ElasticLoadBalancingV2::ListenerRule\\\":{\\\"80:1\\\":{\\\"spec\\\":{\\\"listenerARN\\\":{\\\"$ref\\\":\\\"#/resources/AWS::ElasticLoadBalancingV2::Listener/80/status/listenerARN\\\"},\\\"priority\\\":1,\\\"actions\\\":[{\\\"type\\\":\\\"forward\\\",\\\"forwardConfig\\\":{\\\"targetGroups\\\":[{\\\"targetGroupARN\\\":{\\\"$ref\\\":\\\"#/resources/AWS::ElasticLoadBalancingV2::TargetGroup/echoserver/echoserver-echoserver:80/status/targetGroupARN\\\"}}]}}],\\\"conditions\\\":[{\\\"field\\\":\\\"host-header\\\",\\\"hostHeaderConfig\\\":{\\\"values\\\":[\\\"echoserver.example.com\\\"]}},{\\\"field\\\":\\\"path-pattern\\\",\\\"pathPatternConfig\\\":{\\\"values\\\":[\\\"/\\\"]}}]}}},\\\"AWS::ElasticLoadBalancingV2::LoadBalancer\\\":{\\\"LoadBalancer\\\":{\\\"spec\\\":{\\\"name\\\":\\\"k8s-echoserv-echoserv-d4d6bd65d0\\\",\\\"type\\\":\\\"application\\\",\\\"scheme\\\":\\\"internet-facing\\\",\\\"ipAddressType\\\":\\\"ipv4\\\",\\\"subnetMapping\\\":[{\\\"subnetID\\\":\\\"subnet-01b35707c23b0a43b\\\"},{\\\"subnetID\\\":\\\"subnet-0f7814a7ab4dfcc2c\\\"}],\\\"securityGroups\\\":[{\\\"$ref\\\":\\\"#/resources/AWS::EC2::SecurityGroup/ManagedLBSecurityGroup/status/groupID\\\"}],\\\"tags\\\":{\\\"Environment\\\":\\\"dev\\\",\\\"Team\\\":\\\"test\\\"}}}},\\\"AWS::ElasticLoadBalancingV2::TargetGroup\\\":{\\\"echoserver/echoserver-echoserver:80\\\":{\\\"spec\\\":{\\\"name\\\":\\\"k8s-echoserv-echoserv-d989093207\\\",\\\"targetType\\\":\\\"instance\\\",\\\"port\\\":1,\\\"protocol\\\":\\\"HTTP\\\",\\\"healthCheckConfig\\\":{\\\"port\\\":\\\"traffic-port\\\",\\\"protocol\\\":\\\"HTTP\\\",\\\"path\\\":\\\"/\\\",\\\"matcher\\\":{\\\"httpCode\\\":\\\"200\\\"},\\\"intervalSeconds\\\":15,\\\"timeoutSeconds\\\":5,\\\"healthyThresholdCount\\\":2,\\\"unhealthyThresholdCount\\\":2},\\\"tags\\\":{\\\"Environment\\\":\\\"dev\\\",\\\"Team\\\":\\\"test\\\"}}}},\\\"K8S::ElasticLoadBalancingV2::TargetGroupBinding\\\":{\\\"echoserver/echoserver-echoserver:80\\\":{\\\"spec\\\":{\\\"template\\\":{\\\"metadata\\\":{\\\"name\\\":\\\"k8s-echoserv-echoserv-d989093207\\\",\\\"namespace\\\":\\\"echoserver\\\",\\\"creationTimestamp\\\":null},\\\"spec\\\":{\\\"targetGroupARN\\\":{\\\"$ref\\\":\\\"#/resources/AWS::ElasticLoadBalancingV2::TargetGroup/echoserver/echoserver-echoserver:80/status/targetGroupARN\\\"},\\\"targetType\\\":\\\"instance\\\",\\\"serviceRef\\\":{\\\"name\\\":\\\"echoserver\\\",\\\"port\\\":80},\\\"networking\\\":{\\\"ingress\\\":[{\\\"from\\\":[{\\\"securityGroup\\\":{\\\"groupID\\\":{\\\"$ref\\\":\\\"#/resources/AWS::EC2::SecurityGroup/ManagedLBSecurityGroup/status/groupID\\\"}}}],\\\"ports\\\":[{\\\"protocol\\\":\\\"TCP\\\"}]}]}}}}}}}}\"} {\"level\":\"info\",\"ts\":1602803966.411922,\"logger\":\"controllers.ingress\",\"msg\":\"creating targetGroup\",\"stackID\":\"echoserver/echoserver\",\"resourceID\":\"echoserver/echoserver-echoserver:80\"} 
{\"level\":\"info\",\"ts\":1602803966.6606336,\"logger\":\"controllers.ingress\",\"msg\":\"created targetGroup\",\"stackID\":\"echoserver/echoserver\",\"resourceID\":\"echoserver/echoserver-echoserver:80\",\"arn\":\"arn:aws:elasticloadbalancing:us-west-2:019453415603:targetgroup/k8s-echoserv-echoserv-d989093207/63225ae3ead3deb6\"} {\"level\":\"info\",\"ts\":1602803966.798019,\"logger\":\"controllers.ingress\",\"msg\":\"creating loadBalancer\",\"stackID\":\"echoserver/echoserver\",\"resourceID\":\"LoadBalancer\"} {\"level\":\"info\",\"ts\":1602803967.5472538,\"logger\":\"controllers.ingress\",\"msg\":\"created loadBalancer\",\"stackID\":\"echoserver/echoserver\",\"resourceID\":\"LoadBalancer\",\"arn\":\"arn:aws:elasticloadbalancing:us-west-2:019453415603:loadbalancer/app/k8s-echoserv-echoserv-d4d6bd65d0/4b4ebe8d6e1ef0c1\"} {\"level\":\"info\",\"ts\":1602803967.5863476,\"logger\":\"controllers.ingress\",\"msg\":\"creating listener\",\"stackID\":\"echoserver/echoserver\",\"resourceID\":\"80\"} {\"level\":\"info\",\"ts\":1602803967.6436293,\"logger\":\"controllers.ingress\",\"msg\":\"created listener\",\"stackID\":\"echoserver/echoserver\",\"resourceID\":\"80\",\"arn\":\"arn:aws:elasticloadbalancing:us-west-2:019453415603:listener/app/k8s-echoserv-echoserv-d4d6bd65d0/4b4ebe8d6e1ef0c1/6e13477f9d840da0\"} {\"level\":\"info\",\"ts\":1602803967.6528971,\"logger\":\"controllers.ingress\",\"msg\":\"creating listener rule\",\"stackID\":\"echoserver/echoserver\",\"resourceID\":\"80:1\"} {\"level\":\"info\",\"ts\":1602803967.7160048,\"logger\":\"controllers.ingress\",\"msg\":\"created listener rule\",\"stackID\":\"echoserver/echoserver\",\"resourceID\":\"80:1\",\"arn\":\"arn:aws:elasticloadbalancing:us-west-2:019453415603:listener-rule/app/k8s-echoserv-echoserv-d4d6bd65d0/4b4ebe8d6e1ef0c1/6e13477f9d840da0/23ef859380e792e8\"} {\"level\":\"info\",\"ts\":1602803967.8484688,\"logger\":\"controllers.ingress\",\"msg\":\"successfully deployed model\",\"ingressGroup\":\"echoserver/echoserver\"} Check the events of the ingress to see what has occur. kubectl describe ing -n echoserver echoserver You should see similar to the following. Name: echoserver Namespace: echoserver Address: joshcalico-echoserver-echo-2ad7-1490890749.us-east-2.elb.amazonaws.com Default backend: default-http-backend:80 (10.2.1.28:8080) Rules: Host Path Backends ---- ---- -------- * / echoserver:80 () Annotations: Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 3m 3m 1 ingress-controller Normal CREATE Ingress echoserver/echoserver 3m 32s 3 ingress-controller Normal UPDATE Ingress echoserver/echoserver The address seen above is the ALB's DNS name. This will be referenced via records created by external-dns if you choose to set it up. Verify that you can access the service \u00b6 Make a curl request to the echoserver service and verify that it returns a response payload. Use the address from the output of kubectl describe ing command above. curl You should get back a valid response. (Optional) Use external-dns to create a DNS record \u00b6 Deploy external-dns to your cluster using these instructions - Setup external-dns Update your ingress resource and add spec.rules[0].host and set the value to your domain name. The example below uses echoserver.example.org . spec : rules : - host : echoserver.example.org http : paths : 1. external-dns will then create a DNS record for the host you specified. 
This assumes you have the hosted zone corresponding to the domain you are trying to create a record in. Annotate the ingress with the external-dns specific configuration annotations : kubernetes.io/ingress.class : alb alb.ingress.kubernetes.io/scheme : internet-facing # external-dns specific configuration for creating route53 record-set external-dns.alpha.kubernetes.io/hostname : my-app.test-dns.com # give your domain name here Verify that the DNS has propagated dig echoserver.example.org ;; QUESTION SECTION: ;echoserver.example.org. IN A ;; ANSWER SECTION: echoserver.example.org. 60 IN A 13.59.147.105 echoserver.example.org. 60 IN A 18.221.65.39 echoserver.example.org. 60 IN A 52.15.186.25 Once it has, you can make a call to echoserver and it should return a response payload. curl echoserver.example.org CLIENT VALUES: client_address=10.0.50.185 command=GET real path=/ query=nil request_version=1.1 request_uri=http://echoserver.example.org:8080/ SERVER VALUES: server_version=nginx: 1.10.0 - lua: 10001 HEADERS RECEIVED: accept=*/* host=echoserver.example.org user-agent=curl/7.54.0 x-amzn-trace-id=Root=1-59c08da5-113347df69640735312371bd x-forwarded-for=67.173.237.250 x-forwarded-port=80 x-forwarded-proto=http BODY: Kube2iam setup \u00b6 Follow the steps below if you want to use kube2iam to provide the AWS credentials configure the proper policy The policy to be used can be fetched from https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.0/docs/install/iam_policy.json configure the proper role and create the trust relationship You have to find which role is associated with your K8S nodes. Once you have found it, take note of the full ARN: arn:aws:iam::XXXXXXXXXXXX:role/k8scluster-node create the role, called k8s-lb-controller, attach the above policy, and add a Trust Relationship like: { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"\", \"Effect\": \"Allow\", \"Principal\": { \"Service\": \"ec2.amazonaws.com\" }, \"Action\": \"sts:AssumeRole\" }, { \"Sid\": \"\", \"Effect\": \"Allow\", \"Principal\": { \"AWS\": \"arn:aws:iam::XXXXXXXXXXXX:role/k8scluster-node\" }, \"Action\": \"sts:AssumeRole\" } ] } The new role will have a similar ARN: arn:aws:iam::XXXXXXXXXXXX:role/k8s-lb-controller update the aws-load-balancer-controller deployment Add the annotation under the template's metadata: spec : replicas : 1 selector : matchLabels : app.kubernetes.io/component : controller app.kubernetes.io/name : aws-load-balancer-controller strategy : rollingUpdate : maxSurge : 1 maxUnavailable : 1 type : RollingUpdate template : metadata : annotations : iam.amazonaws.com/role : arn:aws:iam::XXXXXXXXXXXX:role/k8s-lb-controller","title":"EchoServer"},{"location":"examples/echo_server/#walkthrough-echoserver","text":"In this walkthrough, you'll Create a cluster with EKS Deploy an aws-load-balancer-controller Create deployments and ingress resources in the cluster Verify access to the service (Optional) Use external-dns to create a DNS record pointing to the load balancer created by the aws-load-balancer-controller. This assumes you have a route53 hosted zone available.
Otherwise you can access the service using the load balancer DNS.","title":"walkthrough: echoserver"},{"location":"examples/echo_server/#create-the-eks-cluster","text":"Install eksctl : https://eksctl.io Create EKS cluster via eksctl eksctl create cluster 2018-08-14T11:19:09-07:00 [\u2139] setting availability zones to [us-west-2c us-west-2a us-west-2b] 2018-08-14T11:19:09-07:00 [\u2139] importing SSH public key \"/Users/kamador/.ssh/id_rsa.pub\" as \"eksctl-exciting-gopher-1534270749-b7:71:da:f6:f3:63:7a:ee:ad:7a:10:37:28:ff:44:d1\" 2018-08-14T11:19:10-07:00 [\u2139] creating EKS cluster \"exciting-gopher-1534270749\" in \"us-west-2\" region 2018-08-14T11:19:10-07:00 [\u2139] creating ServiceRole stack \"EKS-exciting-gopher-1534270749-ServiceRole\" 2018-08-14T11:19:10-07:00 [\u2139] creating VPC stack \"EKS-exciting-gopher-1534270749-VPC\" 2018-08-14T11:19:50-07:00 [\u2714] created ServiceRole stack \"EKS-exciting-gopher-1534270749-ServiceRole\" 2018-08-14T11:20:30-07:00 [\u2714] created VPC stack \"EKS-exciting-gopher-1534270749-VPC\" 2018-08-14T11:20:30-07:00 [\u2139] creating control plane \"exciting-gopher-1534270749\" 2018-08-14T11:31:52-07:00 [\u2714] created control plane \"exciting-gopher-1534270749\" 2018-08-14T11:31:52-07:00 [\u2139] creating DefaultNodeGroup stack \"EKS-exciting-gopher-1534270749-DefaultNodeGroup\" 2018-08-14T11:35:33-07:00 [\u2714] created DefaultNodeGroup stack \"EKS-exciting-gopher-1534270749-DefaultNodeGroup\" 2018-08-14T11:35:33-07:00 [\u2714] all EKS cluster \"exciting-gopher-1534270749\" resources has been created 2018-08-14T11:35:33-07:00 [\u2714] saved kubeconfig as \"/Users/kamador/.kube/config\" 2018-08-14T11:35:34-07:00 [\u2139] the cluster has 0 nodes 2018-08-14T11:35:34-07:00 [\u2139] waiting for at least 2 nodes to become ready 2018-08-14T11:36:05-07:00 [\u2139] the cluster has 2 nodes 2018-08-14T11:36:05-07:00 [\u2139] node \"ip-192-168-139-176.us-west-2.compute.internal\" is ready 2018-08-14T11:36:05-07:00 [\u2139] node \"ip-192-168-214-126.us-west-2.compute.internal\" is ready 2018-08-14T11:36:05-07:00 [\u2714] EKS cluster \"exciting-gopher-1534270749\" in \"us-west-2\" region is ready","title":"Create the EKS cluster"},{"location":"examples/echo_server/#setup-the-aws-load-balancer-controller","text":"Refer to the installation instructions to set up the controller Verify the deployment was successful and the controller started. kubectl logs -n kube-system --tail -1 -l app.kubernetes.io/name = aws-load-balancer-controller You should see output similar to the following.
{\"level\":\"info\",\"ts\":1602778062.2588625,\"logger\":\"setup\",\"msg\":\"version\",\"GitVersion\":\"v2.0.0-rc3-13-gcdc8f715-dirty\",\"GitCommit\":\"cdc8f715919cc65ca8161b6083c4091222632d6b\",\"BuildDate\":\"2020-10-15T15:58:31+0000\"} {\"level\":\"info\",\"ts\":1602778065.4515743,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\":8080\"} {\"level\":\"info\",\"ts\":1602778065.4536595,\"logger\":\"controller-runtime.webhook\",\"msg\":\"registering webhook\",\"path\":\"/mutate-v1-pod\"} {\"level\":\"info\",\"ts\":1602778065.4537156,\"logger\":\"controller-runtime.webhook\",\"msg\":\"registering webhook\",\"path\":\"/mutate-elbv2-k8s-aws-v1beta1-targetgroupbinding\"} {\"level\":\"info\",\"ts\":1602778065.4537542,\"logger\":\"controller-runtime.webhook\",\"msg\":\"registering webhook\",\"path\":\"/validate-elbv2-k8s-aws-v1beta1-targetgroupbinding\"} {\"level\":\"info\",\"ts\":1602778065.4537594,\"logger\":\"setup\",\"msg\":\"starting manager\"} I1015 16:07:45.453851 1 leaderelection.go:242] attempting to acquire leader lease kube-system/aws-load-balancer-controller-leader... {\"level\":\"info\",\"ts\":1602778065.5544264,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} {\"level\":\"info\",\"ts\":1602778065.5544496,\"logger\":\"controller-runtime.webhook.webhooks\",\"msg\":\"starting webhook server\"} {\"level\":\"info\",\"ts\":1602778065.5549548,\"logger\":\"controller-runtime.certwatcher\",\"msg\":\"Updated current TLS certificate\"} {\"level\":\"info\",\"ts\":1602778065.5550802,\"logger\":\"controller-runtime.webhook\",\"msg\":\"serving webhook server\",\"host\":\"\",\"port\":9443} {\"level\":\"info\",\"ts\":1602778065.5551715,\"logger\":\"controller-runtime.certwatcher\",\"msg\":\"Starting certificate watcher\"} I1015 16:08:03.662023 1 leaderelection.go:252] successfully acquired lease kube-system/aws-load-balancer-controller-leader {\"level\":\"info\",\"ts\":1602778083.663017,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting EventSource\",\"controller\":\"targetGroupBinding\",\"source\":\"kind source: /, Kind=\"} {\"level\":\"info\",\"ts\":1602778083.6631303,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting EventSource\",\"controller\":\"targetGroupBinding\",\"source\":\"kind source: /, Kind=\"} {\"level\":\"info\",\"ts\":1602778083.6633205,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting EventSource\",\"controller\":\"ingress\",\"source\":\"channel source: 0xc0007340f0\"} {\"level\":\"info\",\"ts\":1602778083.6633654,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting EventSource\",\"controller\":\"ingress\",\"source\":\"channel source: 0xc000734140\"} {\"level\":\"info\",\"ts\":1602778083.6633892,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting EventSource\",\"controller\":\"ingress\",\"source\":\"kind source: /, Kind=\"} {\"level\":\"info\",\"ts\":1602778083.663441,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting EventSource\",\"controller\":\"ingress\",\"source\":\"kind source: /, Kind=\"} {\"level\":\"info\",\"ts\":1602778083.6634624,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting EventSource\",\"controller\":\"ingress\",\"source\":\"kind source: /, Kind=\"} {\"level\":\"info\",\"ts\":1602778083.6635776,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting EventSource\",\"controller\":\"service\",\"source\":\"kind source: /, Kind=\"} 
{\"level\":\"info\",\"ts\":1602778083.6636262,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting Controller\",\"controller\":\"service\"} {\"level\":\"info\",\"ts\":1602778083.7634695,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting EventSource\",\"controller\":\"targetGroupBinding\",\"source\":\"kind source: /, Kind=\"} {\"level\":\"info\",\"ts\":1602778083.7637022,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting workers\",\"controller\":\"service\",\"worker count\":3} {\"level\":\"info\",\"ts\":1602778083.7641861,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting Controller\",\"controller\":\"ingress\"} {\"level\":\"info\",\"ts\":1602778083.8641882,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting Controller\",\"controller\":\"targetGroupBinding\"} {\"level\":\"info\",\"ts\":1602778083.864236,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting workers\",\"controller\":\"targetGroupBinding\",\"worker count\":3} {\"level\":\"info\",\"ts\":1602778083.8643816,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting workers\",\"controller\":\"ingress\",\"worker count\":3}","title":"Setup the AWS Load Balancer controller"},{"location":"examples/echo_server/#deploy-the-echoserver-resources","text":"Deploy all the echoserver resources (namespace, service, deployment) kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.0/docs/examples/echoservice/echoserver-namespace.yaml && \\ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.0/docs/examples/echoservice/echoserver-service.yaml && \\ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.0/docs/examples/echoservice/echoserver-deployment.yaml List all the resources to ensure they were created. kubectl get -n echoserver deploy,svc Should resolve similar to the following. NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE svc/echoserver 10.3.31.76 80:31027/TCP 4d NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deploy/echoserver 1 1 1 1 4d","title":"Deploy the echoserver resources"},{"location":"examples/echo_server/#deploy-ingress-for-echoserver","text":"Download the echoserver ingress manifest locally. wget https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.0/docs/examples/echoservice/echoserver-ingress.yaml Configure the subnets, either by add annotation to the ingress or add tags to subnets. This step is optional in lieu of auto-discovery. Tip If you'd like to use external dns, alter the host field to a domain that you own in Route 53. Assuming you managed example.com in Route 53. Edit the alb.ingress.kubernetes.io/subnets annotation to include at least two subnets. Subnets must be from different Availability Zones. 
You can find your cluster's VPC and subnets with eksctl: eksctl get cluster exciting-gopher-1534270749 NAME VERSION STATUS CREATED VPC SUBNETS SECURITYGROUPS exciting-gopher-1534270749 1.10 ACTIVE 2018-08-14T18:20:32Z vpc-0aa01b07b3c922c9c subnet-05e1c98ed0f5b109e,subnet-07f5bb81f661df61b,subnet-0a4e6232630820516 sg-05ceb5eee9fd7cac4 apiVersion : networking.k8s.io/v1 kind : Ingress metadata : name : echoserver namespace : echoserver annotations : alb.ingress.kubernetes.io/scheme : internet-facing alb.ingress.kubernetes.io/target-type : ip alb.ingress.kubernetes.io/subnets : subnet-05e1c98ed0f5b109e,subnet-07f5bb81f661df61b,subnet-0a4e6232630820516 alb.ingress.kubernetes.io/tags : Environment=dev,Team=test spec : rules : - http : paths : To add tags to subnets for auto-discovery (instead of the alb.ingress.kubernetes.io/subnets annotation), you must include the following tags on the desired subnets. kubernetes.io/cluster/$CLUSTER_NAME where $CLUSTER_NAME is the same CLUSTER_NAME specified in the above step. kubernetes.io/role/internal-elb should be set to 1 or an empty tag value for internal load balancers. kubernetes.io/role/elb should be set to 1 or an empty tag value for internet-facing load balancers. For example, a subnet used for the internet-facing load balancers of the cluster joshcalico would carry the tags kubernetes.io/cluster/joshcalico : shared and kubernetes.io/role/elb : 1 Deploy the ingress resource for echoserver kubectl apply -f echoserver-ingress.yaml Verify that the aws-load-balancer-controller creates the resources kubectl logs -n kube-system --tail -1 -l app.kubernetes.io/name = aws-load-balancer-controller | grep 'echoserver\\/echoserver' You should see similar to the following. {\"level\":\"info\",\"ts\":1602803965.264764,\"logger\":\"controllers.ingress\",\"msg\":\"successfully built model\",\"model\":\"{\\\"id\\\":\\\"echoserver/echoserver\\\",\\\"resources\\\":{\\\"AWS::EC2::SecurityGroup\\\":{\\\"ManagedLBSecurityGroup\\\":{\\\"spec\\\":{\\\"groupName\\\":\\\"k8s-echoserv-echoserv-4e1e34cae5\\\",\\\"description\\\":\\\"[k8s] Managed SecurityGroup for
LoadBalancer\\\",\\\"tags\\\":{\\\"Environment\\\":\\\"dev\\\",\\\"Team\\\":\\\"test\\\"},\\\"ingress\\\":[{\\\"ipProtocol\\\":\\\"tcp\\\",\\\"fromPort\\\":80,\\\"toPort\\\":80,\\\"ipRanges\\\":[{\\\"cidrIP\\\":\\\"0.0.0.0/0\\\"}]}]}}},\\\"AWS::ElasticLoadBalancingV2::Listener\\\":{\\\"80\\\":{\\\"spec\\\":{\\\"loadBalancerARN\\\":{\\\"$ref\\\":\\\"#/resources/AWS::ElasticLoadBalancingV2::LoadBalancer/LoadBalancer/status/loadBalancerARN\\\"},\\\"port\\\":80,\\\"protocol\\\":\\\"HTTP\\\",\\\"defaultActions\\\":[{\\\"type\\\":\\\"fixed-response\\\",\\\"fixedResponseConfig\\\":{\\\"contentType\\\":\\\"text/plain\\\",\\\"statusCode\\\":\\\"404\\\"}}]}}},\\\"AWS::ElasticLoadBalancingV2::ListenerRule\\\":{\\\"80:1\\\":{\\\"spec\\\":{\\\"listenerARN\\\":{\\\"$ref\\\":\\\"#/resources/AWS::ElasticLoadBalancingV2::Listener/80/status/listenerARN\\\"},\\\"priority\\\":1,\\\"actions\\\":[{\\\"type\\\":\\\"forward\\\",\\\"forwardConfig\\\":{\\\"targetGroups\\\":[{\\\"targetGroupARN\\\":{\\\"$ref\\\":\\\"#/resources/AWS::ElasticLoadBalancingV2::TargetGroup/echoserver/echoserver-echoserver:80/status/targetGroupARN\\\"}}]}}],\\\"conditions\\\":[{\\\"field\\\":\\\"host-header\\\",\\\"hostHeaderConfig\\\":{\\\"values\\\":[\\\"echoserver.example.com\\\"]}},{\\\"field\\\":\\\"path-pattern\\\",\\\"pathPatternConfig\\\":{\\\"values\\\":[\\\"/\\\"]}}]}}},\\\"AWS::ElasticLoadBalancingV2::LoadBalancer\\\":{\\\"LoadBalancer\\\":{\\\"spec\\\":{\\\"name\\\":\\\"k8s-echoserv-echoserv-d4d6bd65d0\\\",\\\"type\\\":\\\"application\\\",\\\"scheme\\\":\\\"internet-facing\\\",\\\"ipAddressType\\\":\\\"ipv4\\\",\\\"subnetMapping\\\":[{\\\"subnetID\\\":\\\"subnet-01b35707c23b0a43b\\\"},{\\\"subnetID\\\":\\\"subnet-0f7814a7ab4dfcc2c\\\"}],\\\"securityGroups\\\":[{\\\"$ref\\\":\\\"#/resources/AWS::EC2::SecurityGroup/ManagedLBSecurityGroup/status/groupID\\\"}],\\\"tags\\\":{\\\"Environment\\\":\\\"dev\\\",\\\"Team\\\":\\\"test\\\"}}}},\\\"AWS::ElasticLoadBalancingV2::TargetGroup\\\":{\\\"echoserver/echoserver-echoserver:80\\\":{\\\"spec\\\":{\\\"name\\\":\\\"k8s-echoserv-echoserv-d989093207\\\",\\\"targetType\\\":\\\"instance\\\",\\\"port\\\":1,\\\"protocol\\\":\\\"HTTP\\\",\\\"healthCheckConfig\\\":{\\\"port\\\":\\\"traffic-port\\\",\\\"protocol\\\":\\\"HTTP\\\",\\\"path\\\":\\\"/\\\",\\\"matcher\\\":{\\\"httpCode\\\":\\\"200\\\"},\\\"intervalSeconds\\\":15,\\\"timeoutSeconds\\\":5,\\\"healthyThresholdCount\\\":2,\\\"unhealthyThresholdCount\\\":2},\\\"tags\\\":{\\\"Environment\\\":\\\"dev\\\",\\\"Team\\\":\\\"test\\\"}}}},\\\"K8S::ElasticLoadBalancingV2::TargetGroupBinding\\\":{\\\"echoserver/echoserver-echoserver:80\\\":{\\\"spec\\\":{\\\"template\\\":{\\\"metadata\\\":{\\\"name\\\":\\\"k8s-echoserv-echoserv-d989093207\\\",\\\"namespace\\\":\\\"echoserver\\\",\\\"creationTimestamp\\\":null},\\\"spec\\\":{\\\"targetGroupARN\\\":{\\\"$ref\\\":\\\"#/resources/AWS::ElasticLoadBalancingV2::TargetGroup/echoserver/echoserver-echoserver:80/status/targetGroupARN\\\"},\\\"targetType\\\":\\\"instance\\\",\\\"serviceRef\\\":{\\\"name\\\":\\\"echoserver\\\",\\\"port\\\":80},\\\"networking\\\":{\\\"ingress\\\":[{\\\"from\\\":[{\\\"securityGroup\\\":{\\\"groupID\\\":{\\\"$ref\\\":\\\"#/resources/AWS::EC2::SecurityGroup/ManagedLBSecurityGroup/status/groupID\\\"}}}],\\\"ports\\\":[{\\\"protocol\\\":\\\"TCP\\\"}]}]}}}}}}}}\"} {\"level\":\"info\",\"ts\":1602803966.411922,\"logger\":\"controllers.ingress\",\"msg\":\"creating targetGroup\",\"stackID\":\"echoserver/echoserver\",\"resourceID\":\"echoserver/echoserver-echoserver:80\"} 
{\"level\":\"info\",\"ts\":1602803966.6606336,\"logger\":\"controllers.ingress\",\"msg\":\"created targetGroup\",\"stackID\":\"echoserver/echoserver\",\"resourceID\":\"echoserver/echoserver-echoserver:80\",\"arn\":\"arn:aws:elasticloadbalancing:us-west-2:019453415603:targetgroup/k8s-echoserv-echoserv-d989093207/63225ae3ead3deb6\"} {\"level\":\"info\",\"ts\":1602803966.798019,\"logger\":\"controllers.ingress\",\"msg\":\"creating loadBalancer\",\"stackID\":\"echoserver/echoserver\",\"resourceID\":\"LoadBalancer\"} {\"level\":\"info\",\"ts\":1602803967.5472538,\"logger\":\"controllers.ingress\",\"msg\":\"created loadBalancer\",\"stackID\":\"echoserver/echoserver\",\"resourceID\":\"LoadBalancer\",\"arn\":\"arn:aws:elasticloadbalancing:us-west-2:019453415603:loadbalancer/app/k8s-echoserv-echoserv-d4d6bd65d0/4b4ebe8d6e1ef0c1\"} {\"level\":\"info\",\"ts\":1602803967.5863476,\"logger\":\"controllers.ingress\",\"msg\":\"creating listener\",\"stackID\":\"echoserver/echoserver\",\"resourceID\":\"80\"} {\"level\":\"info\",\"ts\":1602803967.6436293,\"logger\":\"controllers.ingress\",\"msg\":\"created listener\",\"stackID\":\"echoserver/echoserver\",\"resourceID\":\"80\",\"arn\":\"arn:aws:elasticloadbalancing:us-west-2:019453415603:listener/app/k8s-echoserv-echoserv-d4d6bd65d0/4b4ebe8d6e1ef0c1/6e13477f9d840da0\"} {\"level\":\"info\",\"ts\":1602803967.6528971,\"logger\":\"controllers.ingress\",\"msg\":\"creating listener rule\",\"stackID\":\"echoserver/echoserver\",\"resourceID\":\"80:1\"} {\"level\":\"info\",\"ts\":1602803967.7160048,\"logger\":\"controllers.ingress\",\"msg\":\"created listener rule\",\"stackID\":\"echoserver/echoserver\",\"resourceID\":\"80:1\",\"arn\":\"arn:aws:elasticloadbalancing:us-west-2:019453415603:listener-rule/app/k8s-echoserv-echoserv-d4d6bd65d0/4b4ebe8d6e1ef0c1/6e13477f9d840da0/23ef859380e792e8\"} {\"level\":\"info\",\"ts\":1602803967.8484688,\"logger\":\"controllers.ingress\",\"msg\":\"successfully deployed model\",\"ingressGroup\":\"echoserver/echoserver\"} Check the events of the ingress to see what has occur. kubectl describe ing -n echoserver echoserver You should see similar to the following. Name: echoserver Namespace: echoserver Address: joshcalico-echoserver-echo-2ad7-1490890749.us-east-2.elb.amazonaws.com Default backend: default-http-backend:80 (10.2.1.28:8080) Rules: Host Path Backends ---- ---- -------- * / echoserver:80 () Annotations: Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 3m 3m 1 ingress-controller Normal CREATE Ingress echoserver/echoserver 3m 32s 3 ingress-controller Normal UPDATE Ingress echoserver/echoserver The address seen above is the ALB's DNS name. This will be referenced via records created by external-dns if you choose to set it up.","title":"Deploy ingress for echoserver"},{"location":"examples/echo_server/#verify-that-you-can-access-the-service","text":"Make a curl request to the echoserver service and verify that it returns a response payload. Use the address from the output of kubectl describe ing command above. curl You should get back a valid response.","title":"Verify that you can access the service"},{"location":"examples/echo_server/#optional-use-external-dns-to-create-a-dns-record","text":"Deploy external-dns to your cluster using these instructions - Setup external-dns Update your ingress resource and add spec.rules[0].host and set the value to your domain name. The example below uses echoserver.example.org . 
spec : rules : - host : echoserver.example.org http : paths : 1. external-dns will then create a DNS record for the host you specified. This assumes you have the hosted zone corresponding to the domain you are trying to create a record in. Annotate the ingress with the external-dns specific configuration annotations : kubernetes.io/ingress.class : alb alb.ingress.kubernetes.io/scheme : internet-facing # external-dns specific configuration for creating route53 record-set external-dns.alpha.kubernetes.io/hostname : my-app.test-dns.com # give your domain name here Verify that the DNS has propagated dig echoserver.example.org ;; QUESTION SECTION: ;echoserver.example.org. IN A ;; ANSWER SECTION: echoserver.example.org. 60 IN A 13.59.147.105 echoserver.example.org. 60 IN A 18.221.65.39 echoserver.example.org. 60 IN A 52.15.186.25 Once it has, you can make a call to echoserver and it should return a response payload. curl echoserver.example.org CLIENT VALUES: client_address=10.0.50.185 command=GET real path=/ query=nil request_version=1.1 request_uri=http://echoserver.example.org:8080/ SERVER VALUES: server_version=nginx: 1.10.0 - lua: 10001 HEADERS RECEIVED: accept=*/* host=echoserver.example.org user-agent=curl/7.54.0 x-amzn-trace-id=Root=1-59c08da5-113347df69640735312371bd x-forwarded-for=67.173.237.250 x-forwarded-port=80 x-forwarded-proto=http BODY:","title":"(Optional) Use external-dns to create a DNS record"},{"location":"examples/echo_server/#kube2iam-setup","text":"Follow the steps below if you want to use kube2iam to provide the AWS credentials configure the proper policy The policy to be used can be fetched from https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.0/docs/install/iam_policy.json configure the proper role and create the trust relationship You have to find which role is associated with your K8S nodes. Once you have found it, take note of the full ARN: arn:aws:iam::XXXXXXXXXXXX:role/k8scluster-node create the role, called k8s-lb-controller, attach the above policy, and add a Trust Relationship like: { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"\", \"Effect\": \"Allow\", \"Principal\": { \"Service\": \"ec2.amazonaws.com\" }, \"Action\": \"sts:AssumeRole\" }, { \"Sid\": \"\", \"Effect\": \"Allow\", \"Principal\": { \"AWS\": \"arn:aws:iam::XXXXXXXXXXXX:role/k8scluster-node\" }, \"Action\": \"sts:AssumeRole\" } ] } The new role will have a similar ARN: arn:aws:iam::XXXXXXXXXXXX:role/k8s-lb-controller update the aws-load-balancer-controller deployment Add the annotation under the template's metadata: spec : replicas : 1 selector : matchLabels : app.kubernetes.io/component : controller app.kubernetes.io/name : aws-load-balancer-controller strategy : rollingUpdate : maxSurge : 1 maxUnavailable : 1 type : RollingUpdate template : metadata : annotations : iam.amazonaws.com/role : arn:aws:iam::XXXXXXXXXXXX:role/k8s-lb-controller","title":"Kube2iam setup"},{"location":"examples/grpc_server/","text":"walkthrough: grpcserver \u00b6 In this walkthrough, you'll Deploy a grpc service to an existing EKS cluster Send a test message to the hosted service over TLS Prerequisites \u00b6 The following resources are required prior to deployment: EKS cluster aws-load-balancer-controller external-dns See echo_server.md and external_dns.md for setup instructions for those resources. Create an ACM certificate \u00b6 NOTE: An ACM certificate is required for this demo as the application uses the grpc.secure_channel method.
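If you need to request a new certificate, one option is the AWS CLI; a minimal sketch, assuming the placeholder domain grpcserver.example.com from this walkthrough (substitute a domain you own): aws acm request-certificate --domain-name grpcserver.example.com --validation-method DNS With DNS validation, you then create the validation record in your Route 53 hosted zone before the certificate can be issued.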
If you already have an ACM certificate (including wildcard certificates) for the domain you would like to use in this example, you can skip this step. Request a certificate for a domain you own using the steps described in the official AWS ACM documentation . Once the status for the certificate is \"Issued\", continue to the next step. Deploy the grpcserver manifests \u00b6 Deploy all the manifests from GitHub. kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/examples/grpc/grpcserver-namespace.yaml kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/examples/grpc/grpcserver-service.yaml kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/examples/grpc/grpcserver-deployment.yaml Confirm that all resources were created. kubectl get -n grpcserver all You should see the pod, service, and deployment. NAME READY STATUS RESTARTS AGE pod/grpcserver-5455b7d4d-jshk5 1/1 Running 0 35m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/grpcserver ClusterIP None 50051/TCP 77m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/grpcserver 1/1 1 1 77m NAME DESIRED CURRENT READY AGE replicaset.apps/grpcserver-5455b7d4d 1 1 1 35m Customize the ingress for grpcserver \u00b6 Download the grpcserver ingress manifest. wget https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/examples/grpc/grpcserver-ingress.yaml Change the domain name from grpcserver.example.com to your desired domain. The example manifest assumes that you have tagged your subnets for the aws-load-balancer-controller. Otherwise, add your subnets using the alb.ingress.kubernetes.io/subnets annotation. Deploy the ingress resource for grpcserver. kubectl apply -f grpcserver-ingress.yaml Wait a few minutes for the ALB to provision and for DNS to update. Check the aws-load-balancer-controller logs to ensure the ALB is created. Also ensure that external-dns creates a DNS record that points your domain to the ALB. kubectl logs -n kube-system --tail -1 -l app.kubernetes.io/name = aws-load-balancer-controller | grep 'grpcserver\\/grpcserver' kubectl logs -n kube-system --tail -1 -l app.kubernetes.io/name = external-dns | grep 'YOUR_DOMAIN_NAME' Next, check that your ingress shows the correct ALB address and custom domain name. kubectl get ingress -n grpcserver grpcserver You should see similar to the following. NAME CLASS HOSTS ADDRESS PORTS AGE grpcserver alb YOUR_DOMAIN_NAME ALB-DNS-NAME 80 90m Finally, test your secure gRPC service by running the greeter client, substituting YOUR_DOMAIN_NAME for the domain you used in the ingress manifest. docker run --rm -it --env BACKEND = YOUR_DOMAIN_NAME placeexchange/grpc-demo:latest python greeter_client.py You should see the following response.
Greeter client received: Hello, you!","title":"gRPCServer"},{"location":"examples/grpc_server/#walkthrough-grpcserver","text":"In this walkthrough, you'll Deploy a grpc service to an existing EKS cluster Send a test message to the hosted service over TLS","title":"walkthrough: grpcserver"},{"location":"examples/grpc_server/#prerequsites","text":"The following resources are required prior to deployment: EKS cluster aws-load-balancer-controller external-dns See echo_server.md and external_dns.md for setup instructions for those resources.","title":"Prerequisites"},{"location":"examples/grpc_server/#create-an-acm-certificate","text":"NOTE: An ACM certificate is required for this demo as the application uses the grpc.secure_channel method. If you already have an ACM certificate (including wildcard certificates) for the domain you would like to use in this example, you can skip this step. Request a certificate for a domain you own using the steps described in the official AWS ACM documentation . Once the status for the certificate is \"Issued\", continue to the next step.","title":"Create an ACM certificate"},{"location":"examples/grpc_server/#deploy-the-grpcserver-manifests","text":"Deploy all the manifests from GitHub. kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/examples/grpc/grpcserver-namespace.yaml kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/examples/grpc/grpcserver-service.yaml kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/examples/grpc/grpcserver-deployment.yaml Confirm that all resources were created. kubectl get -n grpcserver all You should see the pod, service, and deployment. NAME READY STATUS RESTARTS AGE pod/grpcserver-5455b7d4d-jshk5 1/1 Running 0 35m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/grpcserver ClusterIP None 50051/TCP 77m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/grpcserver 1/1 1 1 77m NAME DESIRED CURRENT READY AGE replicaset.apps/grpcserver-5455b7d4d 1 1 1 35m","title":"Deploy the grpcserver manifests"},{"location":"examples/grpc_server/#customize-the-ingress-for-grpcserver","text":"Download the grpcserver ingress manifest. wget https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/examples/grpc/grpcserver-ingress.yaml Change the domain name from grpcserver.example.com to your desired domain. The example manifest assumes that you have tagged your subnets for the aws-load-balancer-controller. Otherwise, add your subnets using the alb.ingress.kubernetes.io/subnets annotation. Deploy the ingress resource for grpcserver. kubectl apply -f grpcserver-ingress.yaml Wait a few minutes for the ALB to provision and for DNS to update. Check the aws-load-balancer-controller logs to ensure the ALB is created. Also ensure that external-dns creates a DNS record that points your domain to the ALB. kubectl logs -n kube-system --tail -1 -l app.kubernetes.io/name = aws-load-balancer-controller | grep 'grpcserver\\/grpcserver' kubectl logs -n kube-system --tail -1 -l app.kubernetes.io/name = external-dns | grep 'YOUR_DOMAIN_NAME' Next, check that your ingress shows the correct ALB address and custom domain name. kubectl get ingress -n grpcserver grpcserver You should see similar to the following.
NAME CLASS HOSTS ADDRESS PORTS AGE grpcserver alb YOUR_DOMAIN_NAME ALB-DNS-NAME 80 90m Finally, test your secure gRPC service by running the greeter client, substituting YOUR_DOMAIN_NAME for the domain you used in the ingress manifest. docker run --rm -it --env BACKEND = YOUR_DOMAIN_NAME placeexchange/grpc-demo:latest python greeter_client.py You should see the following response. Greeter client received: Hello, you!","title":"Customize the ingress for grpcserver"},{"location":"examples/secrets_access/","text":"RBAC configuration for secrets resources \u00b6 In this walkthrough, you will configure RBAC permissions for the controller to access a specific secrets resource in a particular namespace. Create Role \u00b6 Prepare the role manifest with the appropriate name, namespace, and secretName, for example: apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: example-role namespace: example-namespace rules: - apiGroups: - \"\" resourceNames: - example-secret resources: - secrets verbs: - get - list - watch Apply the role manifest kubectl apply -f role.yaml Create RoleBinding \u00b6 Prepare the rolebinding manifest with the appropriate name, namespace, and role reference. For example: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: example-rolebinding namespace: example-namespace roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: example-role subjects: - kind: ServiceAccount name: aws-load-balancer-controller namespace: kube-system Apply the rolebinding manifest kubectl apply -f rolebinding.yaml","title":"RBAC to access OIDC Secret"},{"location":"examples/secrets_access/#rbac-configuration-for-secrets-resources","text":"In this walkthrough, you will configure RBAC permissions for the controller to access a specific secrets resource in a particular namespace.","title":"RBAC configuration for secrets resources"},{"location":"examples/secrets_access/#create-role","text":"Prepare the role manifest with the appropriate name, namespace, and secretName, for example: apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: example-role namespace: example-namespace rules: - apiGroups: - \"\" resourceNames: - example-secret resources: - secrets verbs: - get - list - watch Apply the role manifest kubectl apply -f role.yaml","title":"Create Role"},{"location":"examples/secrets_access/#create-rolebinding","text":"Prepare the rolebinding manifest with the appropriate name, namespace, and role reference. For example: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: example-rolebinding namespace: example-namespace roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: example-role subjects: - kind: ServiceAccount name: aws-load-balancer-controller namespace: kube-system Apply the rolebinding manifest kubectl apply -f rolebinding.yaml","title":"Create RoleBinding"},{"location":"guide/ingress/annotations/","text":"Ingress annotations \u00b6 You can add annotations to Kubernetes Ingress and Service objects to customize their behavior. Annotation keys and values can only be strings. Advanced formats should be encoded as below: boolean: 'true' integer: '42' stringList: s1,s2,s3 stringMap: k1=v1,k2=v2 json: 'jsonContent' Annotations applied to Service have higher priority than annotations applied to Ingress. The Location column below indicates where each annotation can be applied. Annotations that configure LoadBalancer / Listener behaviors have different merge behavior when the IngressGroup feature is being used.
The MergeBehavior column below indicates how each such annotation will be merged. Exclusive: such an annotation should only be specified on a single Ingress within an IngressGroup, or specified with the same value across all Ingresses within the IngressGroup. Merge: such an annotation can be specified on all Ingresses within an IngressGroup, and will be merged together. Annotations \u00b6 Name Type Default Location MergeBehavior alb.ingress.kubernetes.io/load-balancer-name string N/A Ingress Exclusive alb.ingress.kubernetes.io/group.name string N/A Ingress N/A alb.ingress.kubernetes.io/group.order integer 0 Ingress N/A alb.ingress.kubernetes.io/tags stringMap N/A Ingress,Service Merge alb.ingress.kubernetes.io/ip-address-type ipv4 | dualstack ipv4 Ingress Exclusive alb.ingress.kubernetes.io/scheme internal | internet-facing internal Ingress Exclusive alb.ingress.kubernetes.io/subnets stringList N/A Ingress Exclusive alb.ingress.kubernetes.io/security-groups stringList N/A Ingress Exclusive alb.ingress.kubernetes.io/manage-backend-security-group-rules boolean N/A Ingress Exclusive alb.ingress.kubernetes.io/customer-owned-ipv4-pool string N/A Ingress Exclusive alb.ingress.kubernetes.io/load-balancer-attributes stringMap N/A Ingress Exclusive alb.ingress.kubernetes.io/wafv2-acl-arn string N/A Ingress Exclusive alb.ingress.kubernetes.io/waf-acl-id string N/A Ingress Exclusive alb.ingress.kubernetes.io/shield-advanced-protection boolean N/A Ingress Exclusive alb.ingress.kubernetes.io/listen-ports json '[{\"HTTP\": 80}]' | '[{\"HTTPS\": 443}]' Ingress Merge alb.ingress.kubernetes.io/ssl-redirect integer N/A Ingress Exclusive alb.ingress.kubernetes.io/inbound-cidrs stringList 0.0.0.0/0, ::/0 Ingress Exclusive alb.ingress.kubernetes.io/certificate-arn stringList N/A Ingress Merge alb.ingress.kubernetes.io/ssl-policy string ELBSecurityPolicy-2016-08 Ingress Exclusive alb.ingress.kubernetes.io/target-type instance | ip instance Ingress,Service N/A alb.ingress.kubernetes.io/backend-protocol HTTP | HTTPS HTTP Ingress,Service N/A alb.ingress.kubernetes.io/backend-protocol-version string HTTP1 Ingress,Service N/A alb.ingress.kubernetes.io/target-group-attributes stringMap N/A Ingress,Service N/A alb.ingress.kubernetes.io/healthcheck-port integer | traffic-port traffic-port Ingress,Service N/A alb.ingress.kubernetes.io/healthcheck-protocol HTTP | HTTPS HTTP Ingress,Service N/A alb.ingress.kubernetes.io/healthcheck-path string / | /AWS.ALB/healthcheck Ingress,Service N/A alb.ingress.kubernetes.io/healthcheck-interval-seconds integer '15' Ingress,Service N/A alb.ingress.kubernetes.io/healthcheck-timeout-seconds integer '5' Ingress,Service N/A alb.ingress.kubernetes.io/healthy-threshold-count integer '2' Ingress,Service N/A alb.ingress.kubernetes.io/unhealthy-threshold-count integer '2' Ingress,Service N/A alb.ingress.kubernetes.io/success-codes string '200' | '12' Ingress,Service N/A alb.ingress.kubernetes.io/auth-type none|oidc|cognito none Ingress,Service N/A alb.ingress.kubernetes.io/auth-idp-cognito json N/A Ingress,Service N/A alb.ingress.kubernetes.io/auth-idp-oidc json N/A Ingress,Service N/A alb.ingress.kubernetes.io/auth-on-unauthenticated-request authenticate|allow|deny authenticate Ingress,Service N/A alb.ingress.kubernetes.io/auth-scope string openid Ingress,Service N/A alb.ingress.kubernetes.io/auth-session-cookie string AWSELBAuthSessionCookie Ingress,Service N/A alb.ingress.kubernetes.io/auth-session-timeout integer '604800' Ingress,Service N/A alb.ingress.kubernetes.io/actions.${action-name} json N/A Ingress N/A
alb.ingress.kubernetes.io/conditions.${conditions-name} json N/A Ingress N/A alb.ingress.kubernetes.io/target-node-labels stringMap N/A Ingress,Service N/A alb.ingress.kubernetes.io/mutual-authentication json '[{\"port\": 443, \"mode\": \"off\"}]' Ingress Exclusive IngressGroup \u00b6 The IngressGroup feature enables you to group multiple Ingress resources together. The controller will automatically merge Ingress rules for all Ingresses within an IngressGroup and support them with a single ALB. In addition, most annotations defined on an Ingress only apply to the paths defined by that Ingress. By default, Ingresses don't belong to any IngressGroup, and each is treated as an \"implicit IngressGroup\" consisting of the Ingress itself. alb.ingress.kubernetes.io/group.name specifies the group name that this Ingress belongs to. Ingresses with the same group.name annotation will form an \"explicit IngressGroup\". groupName must consist of lower case alphanumeric characters, - or . , and must start and end with an alphanumeric character. groupName must be no more than 63 characters. Security Risk The IngressGroup feature should only be used when all Kubernetes users with RBAC permission to create/modify Ingress resources are within the trust boundary. If you make your Ingress belong to an \"explicit IngressGroup\" by adding the group.name annotation, other Kubernetes users may create/modify their Ingresses to belong to the same IngressGroup, and can thus add more rules or overwrite existing rules with higher priority to the ALB for your Ingress. We'll add more fine-grained access-control in future versions. Rename behavior The ALB for an IngressGroup is found by searching for the AWS tag ingress.k8s.aws/stack with the name of the IngressGroup as its value. For an implicit IngressGroup, the value is namespace/ingressname . When the groupName of an IngressGroup for an Ingress is changed, the Ingress will be moved to a new IngressGroup and be supported by the ALB for the new IngressGroup. If the ALB for the new IngressGroup doesn't exist, a new ALB will be created. If an IngressGroup no longer contains any Ingresses, the ALB for that IngressGroup will be deleted and any deletion protection of that ALB will be ignored. Example alb.ingress.kubernetes.io/group.name: my-team.awesome-group alb.ingress.kubernetes.io/group.order specifies the order across all Ingresses within an IngressGroup. You can explicitly denote the order using a number between -1000 and 1000. Rules with a lower order value are evaluated first. All Ingresses without an explicit order setting get an order value of 0. Rules with the same order are sorted lexicographically by the Ingress\u2019s namespace/name. Example alb.ingress.kubernetes.io/group.order: '10' Traffic Listening \u00b6 Traffic Listening can be controlled with the following annotations: alb.ingress.kubernetes.io/listen-ports specifies the ports that ALB listens on. Merge Behavior listen-ports is merged across all Ingresses in an IngressGroup. You can define different listen-ports per Ingress; Ingress rules will only impact the ports defined for that Ingress. If the same listen-port is defined by multiple Ingresses within an IngressGroup, Ingress rules will be merged with respect to their group order within the IngressGroup. Default defaults to '[{\"HTTP\": 80}]' or '[{\"HTTPS\": 443}]' depending on whether certificate-arn is specified. You may not have duplicate load balancer ports defined.
Example alb.ingress.kubernetes.io/listen-ports: '[{\"HTTP\": 80}, {\"HTTPS\": 443}, {\"HTTP\": 8080}, {\"HTTPS\": 8443}]' alb.ingress.kubernetes.io/ssl-redirect enables SSLRedirect and specifies the SSL port that redirects to. Merge Behavior ssl-redirect is exclusive across all Ingresses in an IngressGroup. Once defined on a single Ingress, it impacts every Ingress within the IngressGroup. Once SSLRedirect is enabled, every HTTP listener will be configured with a default action that redirects to HTTPS; other rules will be ignored. The SSL port to redirect to must exist on the LoadBalancer. See alb.ingress.kubernetes.io/listen-ports for the listen ports configuration. Example alb.ingress.kubernetes.io/ssl-redirect: '443' alb.ingress.kubernetes.io/ip-address-type specifies the IP address type of the ALB. Example alb.ingress.kubernetes.io/ip-address-type: ipv4 alb.ingress.kubernetes.io/customer-owned-ipv4-pool specifies the customer-owned IPv4 address pool for ALB on Outpost. This annotation should be treated as immutable. To remove or change coIPv4Pool, you need to recreate the Ingress. Example alb.ingress.kubernetes.io/customer-owned-ipv4-pool: ipv4pool-coip-xxxxxxxx Traffic Routing \u00b6 Traffic Routing can be controlled with the following annotations: alb.ingress.kubernetes.io/load-balancer-name specifies the custom name to use for the load balancer. A name longer than 32 characters will be treated as an error. Merge Behavior name is exclusive across all Ingresses in an IngressGroup. Once defined on a single Ingress, it impacts every Ingress within the IngressGroup. Example alb.ingress.kubernetes.io/load-balancer-name: custom-name alb.ingress.kubernetes.io/target-type specifies how to route traffic to pods. You can choose between instance and ip : instance mode will route traffic to all EC2 instances within the cluster on the NodePort opened for your service. service must be of type \"NodePort\" or \"LoadBalancer\" to use instance mode ip mode will route traffic directly to the pod IP. the network plugin must use secondary IP addresses on ENI for the pod IP to use ip mode. e.g. amazon-vpc-cni-k8s ip mode is required for sticky sessions to work with Application Load Balancers. The Service type does not matter when using ip mode. Example alb.ingress.kubernetes.io/target-type: instance alb.ingress.kubernetes.io/target-node-labels specifies which nodes to include in the target group registration for instance target type. Example alb.ingress.kubernetes.io/target-node-labels: label1=value1, label2=value2 alb.ingress.kubernetes.io/backend-protocol specifies the protocol used when routing traffic to pods. Example alb.ingress.kubernetes.io/backend-protocol: HTTPS alb.ingress.kubernetes.io/backend-protocol-version specifies the application protocol used to route traffic to pods. Only valid when HTTP or HTTPS is used as the backend protocol. Example HTTP2 alb.ingress.kubernetes.io/backend-protocol-version: HTTP2 GRPC alb.ingress.kubernetes.io/backend-protocol-version: GRPC alb.ingress.kubernetes.io/subnets specifies the Availability Zones that the ALB will route traffic to. See Load Balancer subnets for more details. You must specify at least two subnets in different AZs. Either subnetID or subnetName (Name tag on subnets) can be used. Tip You can enable subnet auto discovery to avoid specifying this annotation on every Ingress. See Subnet Discovery for instructions.
Example alb.ingress.kubernetes.io/subnets: subnet-xxxx, mySubnet alb.ingress.kubernetes.io/actions.${action-name} Provides a method for configuring custom actions on a listener, such as Redirect Actions. The action-name in the annotation must match the serviceName in the Ingress rules, and servicePort must be use-annotation . use ARN in forward Action An ARN can be used in a forward action (both simplified schema and advanced schema); it must be a targetGroup created outside of k8s, typically a targetGroup for a legacy application. use ServiceName/ServicePort in forward Action ServiceName/ServicePort can be used in a forward action (advanced schema only). Auth-related annotations on the Service object will only be respected if a single TargetGroup is used. Example response-503: return fixed 503 response redirect-to-eks: redirect to an external url forward-single-tg: forward to a single targetGroup [ simplified schema ] forward-multiple-tg: forward to multiple targetGroups with different weights and stickiness config [ advanced schema ] apiVersion : networking.k8s.io/v1 kind : Ingress metadata : namespace : default name : ingress annotations : alb.ingress.kubernetes.io/scheme : internet-facing alb.ingress.kubernetes.io/actions.response-503 : > {\"type\":\"fixed-response\",\"fixedResponseConfig\":{\"contentType\":\"text/plain\",\"statusCode\":\"503\",\"messageBody\":\"503 error text\"}} alb.ingress.kubernetes.io/actions.redirect-to-eks : > {\"type\":\"redirect\",\"redirectConfig\":{\"host\":\"aws.amazon.com\",\"path\":\"/eks/\",\"port\":\"443\",\"protocol\":\"HTTPS\",\"query\":\"k=v\",\"statusCode\":\"HTTP_302\"}} alb.ingress.kubernetes.io/actions.forward-single-tg : > {\"type\":\"forward\",\"targetGroupARN\": \"arn-of-your-target-group\"} alb.ingress.kubernetes.io/actions.forward-multiple-tg : > {\"type\":\"forward\",\"forwardConfig\":{\"targetGroups\":[{\"serviceName\":\"service-1\",\"servicePort\":\"http\",\"weight\":20},{\"serviceName\":\"service-2\",\"servicePort\":80,\"weight\":20},{\"targetGroupARN\":\"arn-of-your-non-k8s-target-group\",\"weight\":60}],\"targetGroupStickinessConfig\":{\"enabled\":true,\"durationSeconds\":200}}} spec : ingressClassName : alb rules : - http : paths : - path : /503 pathType : Exact backend : service : name : response-503 port : name : use-annotation - path : /eks pathType : Exact backend : service : name : redirect-to-eks port : name : use-annotation - path : /path1 pathType : Exact backend : service : name : forward-single-tg port : name : use-annotation - path : /path2 pathType : Exact backend : service : name : forward-multiple-tg port : name : use-annotation alb.ingress.kubernetes.io/conditions.${conditions-name} Provides a method for specifying routing conditions in addition to the original host/path condition on the Ingress spec . The conditions-name in the annotation must match the serviceName in the Ingress rules. It can be either a real serviceName or an annotation-based action name when servicePort is use-annotation . limitations General ALB limitations apply: Each rule can optionally include up to one of each of the following conditions: host-header, http-request-method, path-pattern, and source-ip. Each rule can also optionally include one or more of each of the following conditions: http-header and query-string. You can specify up to three match evaluations per condition. You can specify up to five match evaluations per rule. Refer to the ALB documentation for more details.
Example rule-path1: Host is www.example.com OR anno.example.com Path is /path1 rule-path2: Host is www.example.com Path is /path2 OR /anno/path2 rule-path3: Host is www.example.com Path is /path3 Http header HeaderName is HeaderValue1 OR HeaderValue2 rule-path4: Host is www.example.com Path is /path4 Http request method is GET OR HEAD rule-path5: Host is www.example.com Path is /path5 Query string is paramA:valueA1 OR paramA:valueA2 rule-path6: Host is www.example.com Path is /path6 Source IP is 192.168.0.0/16 OR 172.16.0.0/16 rule-path7: Host is www.example.com Path is /path7 Http header HeaderName is HeaderValue Query string is paramA:valueA Query string is paramB:valueB apiVersion : networking.k8s.io/v1 kind : Ingress metadata : namespace : default name : ingress annotations : alb.ingress.kubernetes.io/scheme : internet-facing alb.ingress.kubernetes.io/actions.rule-path1 : > {\"type\":\"fixed-response\",\"fixedResponseConfig\":{\"contentType\":\"text/plain\",\"statusCode\":\"200\",\"messageBody\":\"Host is www.example.com OR anno.example.com\"}} alb.ingress.kubernetes.io/conditions.rule-path1 : > [{\"field\":\"host-header\",\"hostHeaderConfig\":{\"values\":[\"anno.example.com\"]}}] alb.ingress.kubernetes.io/actions.rule-path2 : > {\"type\":\"fixed-response\",\"fixedResponseConfig\":{\"contentType\":\"text/plain\",\"statusCode\":\"200\",\"messageBody\":\"Path is /path2 OR /anno/path2\"}} alb.ingress.kubernetes.io/conditions.rule-path2 : > [{\"field\":\"path-pattern\",\"pathPatternConfig\":{\"values\":[\"/anno/path2\"]}}] alb.ingress.kubernetes.io/actions.rule-path3 : > {\"type\":\"fixed-response\",\"fixedResponseConfig\":{\"contentType\":\"text/plain\",\"statusCode\":\"200\",\"messageBody\":\"Http header HeaderName is HeaderValue1 OR HeaderValue2\"}} alb.ingress.kubernetes.io/conditions.rule-path3 : > [{\"field\":\"http-header\",\"httpHeaderConfig\":{\"httpHeaderName\": \"HeaderName\", \"values\":[\"HeaderValue1\", \"HeaderValue2\"]}}] alb.ingress.kubernetes.io/actions.rule-path4 : > {\"type\":\"fixed-response\",\"fixedResponseConfig\":{\"contentType\":\"text/plain\",\"statusCode\":\"200\",\"messageBody\":\"Http request method is GET OR HEAD\"}} alb.ingress.kubernetes.io/conditions.rule-path4 : > [{\"field\":\"http-request-method\",\"httpRequestMethodConfig\":{\"Values\":[\"GET\", \"HEAD\"]}}] alb.ingress.kubernetes.io/actions.rule-path5 : > {\"type\":\"fixed-response\",\"fixedResponseConfig\":{\"contentType\":\"text/plain\",\"statusCode\":\"200\",\"messageBody\":\"Query string is paramA:valueA1 OR paramA:valueA2\"}} alb.ingress.kubernetes.io/conditions.rule-path5 : > [{\"field\":\"query-string\",\"queryStringConfig\":{\"values\":[{\"key\":\"paramA\",\"value\":\"valueA1\"},{\"key\":\"paramA\",\"value\":\"valueA2\"}]}}] alb.ingress.kubernetes.io/actions.rule-path6 : > {\"type\":\"fixed-response\",\"fixedResponseConfig\":{\"contentType\":\"text/plain\",\"statusCode\":\"200\",\"messageBody\":\"Source IP is 192.168.0.0/16 OR 172.16.0.0/16\"}} alb.ingress.kubernetes.io/conditions.rule-path6 : > [{\"field\":\"source-ip\",\"sourceIpConfig\":{\"values\":[\"192.168.0.0/16\", \"172.16.0.0/16\"]}}] alb.ingress.kubernetes.io/actions.rule-path7 : > {\"type\":\"fixed-response\",\"fixedResponseConfig\":{\"contentType\":\"text/plain\",\"statusCode\":\"200\",\"messageBody\":\"multiple conditions applies\"}} alb.ingress.kubernetes.io/conditions.rule-path7 : > [{\"field\":\"http-header\",\"httpHeaderConfig\":{\"httpHeaderName\": \"HeaderName\",
\"values\":[\"HeaderValue\"]}},{\"field\":\"query-string\",\"queryStringConfig\":{\"values\":[{\"key\":\"paramA\",\"value\":\"valueA\"}]}},{\"field\":\"query-string\",\"queryStringConfig\":{\"values\":[{\"key\":\"paramB\",\"value\":\"valueB\"}]}}] spec : ingressClassName : alb rules : - host : www.example.com http : paths : - path : /path1 pathType : Exact backend : service : name : rule-path1 port : name : use-annotation - path : /path2 pathType : Exact backend : service : name : rule-path2 port : name : use-annotation - path : /path3 pathType : Exact backend : service : name : rule-path3 port : name : use-annotation - path : /path4 pathType : Exact backend : service : name : rule-path4 port : name : use-annotation - path : /path5 pathType : Exact backend : service : name : rule-path5 port : name : use-annotation - path : /path6 pathType : Exact backend : service : name : rule-path6 port : name : use-annotation - path : /path7 pathType : Exact backend : service : name : rule-path7 port : name : use-annotation Note If you are using alb.ingress.kubernetes.io/target-group-attributes with stickiness.enabled=true , you should add TargetGroupStickinessConfig under alb.ingress.kubernetes.io/actions.weighted-routing Example apiVersion : networking.k8s.io/v1 kind : Ingress metadata : namespace : default name : ingress annotations : alb.ingress.kubernetes.io/scheme : internet-facing alb.ingress.kubernetes.io/target-type : ip alb.ingress.kubernetes.io/target-group-attributes : stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=60 alb.ingress.kubernetes.io/actions.weighted-routing : | { \"type\" : \"forward\" , \"forwardConfig\" :{ \"targetGroups\" :[ { \"serviceName\" : \"service-1\" , \"servicePort\" : \"80\" , \"weight\" : 50 }, { \"serviceName\" : \"service-2\" , \"servicePort\" : \"80\" , \"weight\" : 50 } ], \"TargetGroupStickinessConfig\" : { \"Enabled\" : true , \"DurationSeconds\" : 120 } } } spec : ingressClassName : alb rules : - host : www.example.com http : paths : - path : / pathType : Prefix backend : service : name : weighted-routing port : name : use-annotation Access control \u00b6 Access control for LoadBalancer can be controlled with following annotations: alb.ingress.kubernetes.io/scheme specifies whether your LoadBalancer will be internet facing. See Load balancer scheme in the AWS documentation for more details. Example alb.ingress.kubernetes.io/scheme: internal alb.ingress.kubernetes.io/inbound-cidrs specifies the CIDRs that are allowed to access LoadBalancer. Merge Behavior inbound-cidrs is merged across all Ingresses in IngressGroup, but is exclusive per listen-port. the inbound-cidrs will only impact the ports defined for that Ingress. if same listen-port is defined by multiple Ingress within IngressGroup, inbound-cidrs should only be defined on one of the Ingress. Default 0.0.0.0/0 will be used if the IPAddressType is \"ipv4\" 0.0.0.0/0 and ::/0 will be used if the IPAddressType is \"dualstack\" this annotation will be ignored if alb.ingress.kubernetes.io/security-groups is specified. Example alb.ingress.kubernetes.io/inbound-cidrs: 10.0.0.0/24 alb.ingress.kubernetes.io/security-groups specifies the securityGroups you want to attach to LoadBalancer. When this annotation is not present, the controller will automatically create one security group, the security group will be attached to the LoadBalancer and allow access from inbound-cidrs to the listen-ports . Also, the securityGroups for Node/Pod will be modified to allow inbound traffic from this securityGroup. 
If you specify this annotation, you need to configure the security groups on your Node/Pod to allow inbound traffic from the load balancer. You could also set the manage-backend-security-group-rules if you want the controller to manage the access rules. Both the name and ID of securityGroups are supported. Name matches a Name tag, not the groupName attribute. Example alb.ingress.kubernetes.io/security-groups: sg-xxxx, nameOfSg1, nameOfSg2 alb.ingress.kubernetes.io/manage-backend-security-group-rules specifies whether you want the controller to configure security group rules on Node/Pod for traffic access when you specify security-groups . This annotation applies only if you specify the security groups via the security-groups annotation. If set to true, the controller attaches an additional shared backend security group to your load balancer. This backend security group is used in the Node/Pod security group rules. Example alb.ingress.kubernetes.io/manage-backend-security-group-rules: \"true\" Authentication \u00b6 ALB supports authentication with Cognito or OIDC. See Authenticate Users Using an Application Load Balancer for more details. HTTPS only Authentication is only supported for HTTPS listeners. See TLS for configuring HTTPS listeners. alb.ingress.kubernetes.io/auth-type specifies the authentication type on targets. Example alb.ingress.kubernetes.io/auth-type: cognito alb.ingress.kubernetes.io/auth-idp-cognito specifies the cognito idp configuration. If you are using Amazon Cognito Domain, the userPoolDomain should be set to the domain prefix (my-domain) instead of the full domain (https://my-domain.auth.us-west-2.amazoncognito.com) Example alb.ingress.kubernetes.io/auth-idp-cognito: '{\"userPoolARN\":\"arn:aws:cognito-idp:us-west-2:xxx:userpool/xxx\",\"userPoolClientID\":\"my-clientID\",\"userPoolDomain\":\"my-domain\"}' alb.ingress.kubernetes.io/auth-idp-oidc specifies the oidc idp configuration. You need to create a secret within the same namespace as the Ingress to hold your OIDC clientID and clientSecret. The format of the secret is as below: apiVersion : v1 kind : Secret metadata : namespace : testcase name : my-k8s-secret data : clientID : base64 of your plain text clientId clientSecret : base64 of your plain text clientSecret Example alb.ingress.kubernetes.io/auth-idp-oidc: '{\"issuer\":\"https://example.com\",\"authorizationEndpoint\":\"https://authorization.example.com\",\"tokenEndpoint\":\"https://token.example.com\",\"userInfoEndpoint\":\"https://userinfo.example.com\",\"secretName\":\"my-k8s-secret\"}' alb.ingress.kubernetes.io/auth-on-unauthenticated-request specifies the behavior if the user is not authenticated. options: authenticate : try to authenticate with the configured IDP. deny : return an HTTP 401 Unauthorized error. allow : allow the request to be forwarded to the target. Example alb.ingress.kubernetes.io/auth-on-unauthenticated-request: authenticate alb.ingress.kubernetes.io/auth-scope specifies the set of user claims to be requested from the IDP (cognito or oidc), in a space-separated list.
options: phone email profile openid aws.cognito.signin.user.admin Example alb.ingress.kubernetes.io/auth-scope: 'email openid' alb.ingress.kubernetes.io/auth-session-cookie specifies the name of the cookie used to maintain session information Example alb.ingress.kubernetes.io/auth-session-cookie: custom-cookie alb.ingress.kubernetes.io/auth-session-timeout specifies the maximum duration of the authentication session, in seconds Example alb.ingress.kubernetes.io/auth-session-timeout: '86400' Health Check \u00b6 Health checks on target groups can be controlled with the following annotations: alb.ingress.kubernetes.io/healthcheck-protocol specifies the protocol used when performing health checks on targets. Example alb.ingress.kubernetes.io/healthcheck-protocol: HTTPS alb.ingress.kubernetes.io/healthcheck-port specifies the port used when performing health checks on targets. When using target-type: instance with a service of type \"NodePort\", the healthcheck port can be set to traffic-port to automatically point to the correct port. Example set the healthcheck port to the traffic port alb.ingress.kubernetes.io/healthcheck-port: traffic-port set the healthcheck port to the NodePort (when target-type=instance) or TargetPort (when target-type=ip) of a named port alb.ingress.kubernetes.io/healthcheck-port: my-port set the healthcheck port to 80/tcp alb.ingress.kubernetes.io/healthcheck-port: '80' alb.ingress.kubernetes.io/healthcheck-path specifies the HTTP path when performing health checks on targets. Example HTTP alb.ingress.kubernetes.io/healthcheck-path: /ping GRPC alb.ingress.kubernetes.io/healthcheck-path: /package.service/method alb.ingress.kubernetes.io/healthcheck-interval-seconds specifies the interval (in seconds) between health checks of an individual target. Example alb.ingress.kubernetes.io/healthcheck-interval-seconds: '10' alb.ingress.kubernetes.io/healthcheck-timeout-seconds specifies the timeout (in seconds) during which no response from a target means a failed health check Example alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '8' alb.ingress.kubernetes.io/success-codes specifies the HTTP or gRPC status code that should be expected when doing health checks against the specified health check path. Example use a single value alb.ingress.kubernetes.io/success-codes: '200' use multiple values alb.ingress.kubernetes.io/success-codes: 200,201 use a range of values alb.ingress.kubernetes.io/success-codes: 200-300 use a single gRPC value alb.ingress.kubernetes.io/success-codes: '0' use multiple gRPC values alb.ingress.kubernetes.io/success-codes: 0,1 use a range of gRPC values alb.ingress.kubernetes.io/success-codes: 0-5 alb.ingress.kubernetes.io/healthy-threshold-count specifies the number of consecutive health check successes required before considering an unhealthy target healthy. Example alb.ingress.kubernetes.io/healthy-threshold-count: '2' alb.ingress.kubernetes.io/unhealthy-threshold-count specifies the number of consecutive health check failures required before considering a target unhealthy. Example alb.ingress.kubernetes.io/unhealthy-threshold-count: '2' TLS \u00b6 TLS support can be controlled with the following annotations: alb.ingress.kubernetes.io/certificate-arn specifies the ARN of one or more certificates managed by AWS Certificate Manager. The first certificate in the list will be added as the default certificate, and the remaining certificates will be added to the optional certificate list. See SSL Certificates for more details.
Certificate Discovery TLS certificates for ALB Listeners can be automatically discovered with hostnames from Ingress resources. See Certificate Discovery for instructions. Example single certificate alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-west-2:xxxxx:certificate/xxxxxxx multiple certificates alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-west-2:xxxxx:certificate/cert1,arn:aws:acm:us-west-2:xxxxx:certificate/cert2,arn:aws:acm:us-west-2:xxxxx:certificate/cert3 alb.ingress.kubernetes.io/ssl-policy specifies the Security Policy that should be assigned to the ALB, allowing you to control the protocol and ciphers. Example alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-1-2017-01 alb.ingress.kubernetes.io/mutual-authentication specifies the mutual authentication configuration that should be assigned to the Application Load Balancer secure listener ports. See Mutual authentication with TLS in the AWS documentation for more details. Configuration Options port: listen port Must be an HTTPS port specified by listen-ports . mode: \"off\" (default) | \"passthrough\" | \"verify\" verify mode requires an existing trust store resource. See Create a trust store in the AWS documentation for more details. trustStore: ARN (arn:aws:elasticloadbalancing:trustStoreArn) | Name (my-trust-store) Both ARN and Name of trustStore are supported values. trustStore is required when mode is verify . ignoreClientCertificateExpiry : true | false (default) Example listen-ports specifies four HTTPS ports: 80, 443, 8080, 8443 listener HTTPS:80 will be set to passthrough mode listener HTTPS:443 will be set to verify mode, associated with trust store arn arn:aws:elasticloadbalancing:trustStoreArn and have ignoreClientCertificateExpiry set to true listeners HTTPS:8080 and HTTPS:8443 remain in the default mode off . alb.ingress.kubernetes.io/listen-ports: '[{\"HTTPS\": 80}, {\"HTTPS\": 443}, {\"HTTPS\": 8080}, {\"HTTPS\": 8443}]' alb.ingress.kubernetes.io/mutual-authentication: '[{\"port\": 80, \"mode\": \"passthrough\"}, {\"port\": 443, \"mode\": \"verify\", \"trustStore\": \"arn:aws:elasticloadbalancing:trustStoreArn\", \"ignoreClientCertificateExpiry\" : true}]' Note To avoid conflict errors in IngressGroup, this annotation should only be specified on a single Ingress within IngressGroup or specified with the same value across all Ingresses within IngressGroup. Trust stores limit per Application Load Balancer A maximum of two different trust stores can be associated among listeners on the same ingress. See Quotas for your Application Load Balancers in the AWS documentation for more details. Custom attributes \u00b6 Custom attributes to LoadBalancers and TargetGroups can be controlled with the following annotations: alb.ingress.kubernetes.io/load-balancer-attributes specifies Load Balancer Attributes that should be applied to the ALB. Only attributes defined in the annotation will be updated. To unset any AWS defaults (e.g. disabling access logs after having them enabled once), the values need to be explicitly set to the original values ( access_logs.s3.enabled=false ); omitting them is not sufficient. If deletion_protection.enabled=true is set in the annotation, the controller will not be able to delete the ALB during reconciliation. Once the attribute gets edited to deletion_protection.enabled=false during reconciliation, the deployer will force delete the resource. Please note, if deletion protection is not enabled via the annotation but enabled elsewhere (e.g.
via the AWS console), the controller still deletes the underlying resource. Example enable access logs to S3 alb.ingress.kubernetes.io/load-balancer-attributes: access_logs.s3.enabled=true,access_logs.s3.bucket=my-access-log-bucket,access_logs.s3.prefix=my-app enable deletion protection alb.ingress.kubernetes.io/load-balancer-attributes: deletion_protection.enabled=true enable invalid header fields removal alb.ingress.kubernetes.io/load-balancer-attributes: routing.http.drop_invalid_header_fields.enabled=true enable http2 support alb.ingress.kubernetes.io/load-balancer-attributes: routing.http2.enabled=true set idle_timeout delay to 600 seconds alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=600 enable connection logs alb.ingress.kubernetes.io/load-balancer-attributes: connection_logs.s3.enabled=true,connection_logs.s3.bucket=my-connection-log-bucket,connection_logs.s3.prefix=my-app alb.ingress.kubernetes.io/target-group-attributes specifies Target Group Attributes which should be applied to Target Groups. Example set the slow start duration to 30 seconds (available range is 30-900 seconds) alb.ingress.kubernetes.io/target-group-attributes: slow_start.duration_seconds=30 set the deregistration delay to 30 seconds (available range is 0-3600 seconds) alb.ingress.kubernetes.io/target-group-attributes: deregistration_delay.timeout_seconds=30 enable sticky sessions (requires alb.ingress.kubernetes.io/target-type to be set to ip ) alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=60 alb.ingress.kubernetes.io/target-type: ip set the load balancing algorithm to least outstanding requests alb.ingress.kubernetes.io/target-group-attributes: load_balancing.algorithm.type=least_outstanding_requests enable Automated Target Weights (ATW) on HTTP/HTTPS target groups to increase application availability. Set your load balancing algorithm to weighted random and turn on anomaly mitigation (recommended) alb.ingress.kubernetes.io/target-group-attributes: load_balancing.algorithm.type=weighted_random,load_balancing.algorithm.anomaly_mitigation=on Resource Tags \u00b6 The AWS Load Balancer Controller automatically applies the following tags to the AWS resources (ALB/TargetGroups/SecurityGroups/Listener/ListenerRule) it creates: elbv2.k8s.aws/cluster: ${clusterName} ingress.k8s.aws/stack: ${stackID} ingress.k8s.aws/resource: ${resourceID} In addition, you can use annotations to specify additional tags alb.ingress.kubernetes.io/tags specifies additional tags that will be applied to AWS resources created. In the case of target groups, the controller will merge the tags from the ingress and the backend service, giving precedence to the values specified on the service when there is a conflict. Example alb.ingress.kubernetes.io/tags: Environment=dev,Team=test Addons \u00b6 Note If waf-acl-arn is specified via the ingress annotations, the controller will make sure the waf-acl is associated with the ALB provisioned for the ingress. If there is no such annotation, the controller will make sure no waf-acl is associated, so it may remove the existing waf-acl on the provisioned ALB.
Example alb.ingress.kubernetes.io/waf-acl-id: 499e8b99-6671-4614-a86d-adb1810b7fbe alb.ingress.kubernetes.io/wafv2-acl-arn specifies the ARN of the Amazon WAFv2 web ACL. Only Regional WAFv2 is supported. To get the WAFv2 Web ACL ARN from the Console, click the gear icon in the upper right and enable the ARN column. Example alb.ingress.kubernetes.io/wafv2-acl-arn: arn:aws:wafv2:us-west-2:xxxxx:regional/webacl/xxxxxxx/3ab78708-85b0-49d3-b4e1-7a9615a6613b alb.ingress.kubernetes.io/shield-advanced-protection turns on / off the AWS Shield Advanced protection for the load balancer. Example alb.ingress.kubernetes.io/shield-advanced-protection: 'true'","title":"Annotations"},{"location":"guide/ingress/annotations/#ingress-annotations","text":"You can add annotations to Kubernetes Ingress and Service objects to customize their behavior. Annotation keys and values can only be strings. Advanced format should be encoded as below: boolean: 'true' integer: '42' stringList: s1,s2,s3 stringMap: k1=v1,k2=v2 json: 'jsonContent' Annotations applied to a Service take priority over annotations applied to an Ingress. The Location column below indicates where the annotation can be applied. Annotations that configure LoadBalancer / Listener behaviors have different merge behavior when the IngressGroup feature is being used. The MergeBehavior column below indicates how such annotations will be merged. Exclusive: such an annotation should only be specified on a single Ingress within IngressGroup or specified with the same value across all Ingresses within IngressGroup. Merge: such annotations can be specified on all Ingresses within IngressGroup, and will be merged together.","title":"Ingress annotations"},{"location":"guide/ingress/annotations/#annotations","text":"Name Type Default Location MergeBehavior alb.ingress.kubernetes.io/load-balancer-name string N/A Ingress Exclusive alb.ingress.kubernetes.io/group.name string N/A Ingress N/A alb.ingress.kubernetes.io/group.order integer 0 Ingress N/A alb.ingress.kubernetes.io/tags stringMap N/A Ingress,Service Merge alb.ingress.kubernetes.io/ip-address-type ipv4 | dualstack ipv4 Ingress Exclusive alb.ingress.kubernetes.io/scheme internal | internet-facing internal Ingress Exclusive alb.ingress.kubernetes.io/subnets stringList N/A Ingress Exclusive alb.ingress.kubernetes.io/security-groups stringList N/A Ingress Exclusive alb.ingress.kubernetes.io/manage-backend-security-group-rules boolean N/A Ingress Exclusive alb.ingress.kubernetes.io/customer-owned-ipv4-pool string N/A Ingress Exclusive alb.ingress.kubernetes.io/load-balancer-attributes stringMap N/A Ingress Exclusive alb.ingress.kubernetes.io/wafv2-acl-arn string N/A Ingress Exclusive alb.ingress.kubernetes.io/waf-acl-id string N/A Ingress Exclusive alb.ingress.kubernetes.io/shield-advanced-protection boolean N/A Ingress Exclusive alb.ingress.kubernetes.io/listen-ports json '[{\"HTTP\": 80}]' | '[{\"HTTPS\": 443}]' Ingress Merge alb.ingress.kubernetes.io/ssl-redirect integer N/A Ingress Exclusive alb.ingress.kubernetes.io/inbound-cidrs stringList 0.0.0.0/0, ::/0 Ingress Exclusive alb.ingress.kubernetes.io/certificate-arn stringList N/A Ingress Merge alb.ingress.kubernetes.io/ssl-policy string ELBSecurityPolicy-2016-08 Ingress Exclusive alb.ingress.kubernetes.io/target-type instance | ip instance Ingress,Service N/A alb.ingress.kubernetes.io/backend-protocol HTTP | HTTPS HTTP Ingress,Service N/A alb.ingress.kubernetes.io/backend-protocol-version string HTTP1 Ingress,Service N/A alb.ingress.kubernetes.io/target-group-attributes
stringMap N/A Ingress,Service N/A alb.ingress.kubernetes.io/healthcheck-port integer | traffic-port traffic-port Ingress,Service N/A alb.ingress.kubernetes.io/healthcheck-protocol HTTP | HTTPS HTTP Ingress,Service N/A alb.ingress.kubernetes.io/healthcheck-path string / | /AWS.ALB/healthcheck Ingress,Service N/A alb.ingress.kubernetes.io/healthcheck-interval-seconds integer '15' Ingress,Service N/A alb.ingress.kubernetes.io/healthcheck-timeout-seconds integer '5' Ingress,Service N/A alb.ingress.kubernetes.io/healthy-threshold-count integer '2' Ingress,Service N/A alb.ingress.kubernetes.io/unhealthy-threshold-count integer '2' Ingress,Service N/A alb.ingress.kubernetes.io/success-codes string '200' | '12' Ingress,Service N/A alb.ingress.kubernetes.io/auth-type none|oidc|cognito none Ingress,Service N/A alb.ingress.kubernetes.io/auth-idp-cognito json N/A Ingress,Service N/A alb.ingress.kubernetes.io/auth-idp-oidc json N/A Ingress,Service N/A alb.ingress.kubernetes.io/auth-on-unauthenticated-request authenticate|allow|deny authenticate Ingress,Service N/A alb.ingress.kubernetes.io/auth-scope string openid Ingress,Service N/A alb.ingress.kubernetes.io/auth-session-cookie string AWSELBAuthSessionCookie Ingress,Service N/A alb.ingress.kubernetes.io/auth-session-timeout integer '604800' Ingress,Service N/A alb.ingress.kubernetes.io/actions.${action-name} json N/A Ingress N/A alb.ingress.kubernetes.io/conditions.${conditions-name} json N/A Ingress N/A alb.ingress.kubernetes.io/target-node-labels stringMap N/A Ingress,Service N/A alb.ingress.kubernetes.io/mutual-authentication json '[{\"port\": 443, \"mode\": \"off\"}]' Ingress Exclusive","title":"Annotations"},{"location":"guide/ingress/annotations/#ingressgroup","text":"The IngressGroup feature enables you to group multiple Ingress resources together. The controller will automatically merge Ingress rules for all Ingresses within IngressGroup and support them with a single ALB. In addition, most annotations defined on an Ingress only apply to the paths defined by that Ingress. By default, Ingresses don't belong to any IngressGroup, and we treat each one as an \"implicit IngressGroup\" consisting of the Ingress itself. alb.ingress.kubernetes.io/group.name specifies the group name that this Ingress belongs to. Ingresses with the same group.name annotation will form an \"explicit IngressGroup\". groupName must consist of lower case alphanumeric characters, - or . , and must start and end with an alphanumeric character. groupName must be no more than 63 characters. Security Risk The IngressGroup feature should only be used when all Kubernetes users with RBAC permission to create/modify Ingress resources are within the trust boundary. If you make your Ingress belong to an \"explicit IngressGroup\" by adding the group.name annotation, other Kubernetes users may create/modify their Ingresses to belong to the same IngressGroup, and can thus add more rules or overwrite existing rules with higher priority on the ALB for your Ingress. We'll add more fine-grained access-control in future versions. Rename behavior The ALB for an IngressGroup is found by searching for the AWS tag ingress.k8s.aws/stack with the name of the IngressGroup as its value. For an implicit IngressGroup, the value is namespace/ingressname . When the groupName of an IngressGroup for an Ingress is changed, the Ingress will be moved to a new IngressGroup and be supported by the ALB for the new IngressGroup. If the ALB for the new IngressGroup doesn't exist, a new ALB will be created.
If an IngressGroup no longer contains any Ingresses, the ALB for that IngressGroup will be deleted and any deletion protection of that ALB will be ignored. Example alb.ingress.kubernetes.io/group.name: my-team.awesome-group alb.ingress.kubernetes.io/group.order specifies the order across all Ingresses within IngressGroup. You can explicitly denote the order using a number between -1000 and 1000. Rules with a smaller order value are evaluated first. All Ingresses without an explicit order setting get an order value of 0. Rules with the same order are sorted lexicographically by the Ingress\u2019s namespace/name. Example alb.ingress.kubernetes.io/group.order: '10'","title":"IngressGroup"},{"location":"guide/ingress/annotations/#traffic-listening","text":"Traffic Listening can be controlled with the following annotations: alb.ingress.kubernetes.io/listen-ports specifies the ports that the ALB listens on. Merge Behavior listen-ports is merged across all Ingresses in IngressGroup. You can define different listen-ports per Ingress; Ingress rules will only impact the ports defined for that Ingress. If the same listen-port is defined by multiple Ingresses within the IngressGroup, Ingress rules will be merged with respect to their group order within IngressGroup. Default defaults to '[{\"HTTP\": 80}]' or '[{\"HTTPS\": 443}]' depending on whether certificate-arn is specified. You may not have duplicate load balancer ports defined. Example alb.ingress.kubernetes.io/listen-ports: '[{\"HTTP\": 80}, {\"HTTPS\": 443}, {\"HTTP\": 8080}, {\"HTTPS\": 8443}]' alb.ingress.kubernetes.io/ssl-redirect enables SSLRedirect and specifies the SSL port that redirects to. Merge Behavior ssl-redirect is exclusive across all Ingresses in IngressGroup. Once defined on a single Ingress, it impacts every Ingress within the IngressGroup. Once SSLRedirect is enabled, every HTTP listener will be configured with a default action that redirects to HTTPS; other rules will be ignored. The SSL port to redirect to must exist on the LoadBalancer. See alb.ingress.kubernetes.io/listen-ports for the listen ports configuration. Example alb.ingress.kubernetes.io/ssl-redirect: '443' alb.ingress.kubernetes.io/ip-address-type specifies the IP address type of the ALB. Example alb.ingress.kubernetes.io/ip-address-type: ipv4 alb.ingress.kubernetes.io/customer-owned-ipv4-pool specifies the customer-owned IPv4 address pool for ALB on Outpost. This annotation should be treated as immutable. To remove or change coIPv4Pool, you need to recreate the Ingress. Example alb.ingress.kubernetes.io/customer-owned-ipv4-pool: ipv4pool-coip-xxxxxxxx","title":"Traffic Listening"},{"location":"guide/ingress/annotations/#traffic-routing","text":"Traffic Routing can be controlled with the following annotations: alb.ingress.kubernetes.io/load-balancer-name specifies the custom name to use for the load balancer. A name longer than 32 characters will be treated as an error. Merge Behavior name is exclusive across all Ingresses in an IngressGroup. Once defined on a single Ingress, it impacts every Ingress within the IngressGroup. Example alb.ingress.kubernetes.io/load-balancer-name: custom-name alb.ingress.kubernetes.io/target-type specifies how to route traffic to pods. You can choose between instance and ip : instance mode will route traffic to all EC2 instances within the cluster on the NodePort opened for your service. The service must be of type \"NodePort\" or \"LoadBalancer\" to use instance mode ip mode will route traffic directly to the pod IP.
The network plugin must use secondary IP addresses on the ENI for pod IPs to use ip mode, e.g. amazon-vpc-cni-k8s ip mode is required for sticky sessions to work with Application Load Balancers. The Service type does not matter when using ip mode. Example alb.ingress.kubernetes.io/target-type: instance alb.ingress.kubernetes.io/target-node-labels specifies which nodes to include in the target group registration for instance target type. Example alb.ingress.kubernetes.io/target-node-labels: label1=value1, label2=value2 alb.ingress.kubernetes.io/backend-protocol specifies the protocol used when routing traffic to pods. Example alb.ingress.kubernetes.io/backend-protocol: HTTPS alb.ingress.kubernetes.io/backend-protocol-version specifies the application protocol used to route traffic to pods. Only valid when HTTP or HTTPS is used as the backend protocol. Example HTTP2 alb.ingress.kubernetes.io/backend-protocol-version: HTTP2 GRPC alb.ingress.kubernetes.io/backend-protocol-version: GRPC alb.ingress.kubernetes.io/subnets specifies the Availability Zones that the ALB will route traffic to. See Load Balancer subnets for more details. You must specify at least two subnets in different AZs. Either subnetID or subnetName (Name tag on subnets) can be used. Tip You can enable subnet auto discovery to avoid specifying this annotation on every Ingress. See Subnet Discovery for instructions. Example alb.ingress.kubernetes.io/subnets: subnet-xxxx, mySubnet alb.ingress.kubernetes.io/actions.${action-name} Provides a method for configuring custom actions on a listener, such as Redirect Actions. The action-name in the annotation must match the serviceName in the Ingress rules, and servicePort must be use-annotation . use ARN in forward Action ARN can be used in the forward action (both simplified schema and advanced schema); it must be a targetGroup created outside of k8s, typically a targetGroup for a legacy application. use ServiceName/ServicePort in forward Action ServiceName/ServicePort can be used in the forward action (advanced schema only). Auth-related annotations on the Service object will only be respected if a single TargetGroup is used.
Example response-503: return a fixed 503 response redirect-to-eks: redirect to an external URL forward-single-tg: forward to a single targetGroup [ simplified schema ] forward-multiple-tg: forward to multiple targetGroups with different weights and stickiness config [ advanced schema ] apiVersion : networking.k8s.io/v1 kind : Ingress metadata : namespace : default name : ingress annotations : alb.ingress.kubernetes.io/scheme : internet-facing alb.ingress.kubernetes.io/actions.response-503 : > {\"type\":\"fixed-response\",\"fixedResponseConfig\":{\"contentType\":\"text/plain\",\"statusCode\":\"503\",\"messageBody\":\"503 error text\"}} alb.ingress.kubernetes.io/actions.redirect-to-eks : > {\"type\":\"redirect\",\"redirectConfig\":{\"host\":\"aws.amazon.com\",\"path\":\"/eks/\",\"port\":\"443\",\"protocol\":\"HTTPS\",\"query\":\"k=v\",\"statusCode\":\"HTTP_302\"}} alb.ingress.kubernetes.io/actions.forward-single-tg : > {\"type\":\"forward\",\"targetGroupARN\": \"arn-of-your-target-group\"} alb.ingress.kubernetes.io/actions.forward-multiple-tg : > {\"type\":\"forward\",\"forwardConfig\":{\"targetGroups\":[{\"serviceName\":\"service-1\",\"servicePort\":\"http\",\"weight\":20},{\"serviceName\":\"service-2\",\"servicePort\":80,\"weight\":20},{\"targetGroupARN\":\"arn-of-your-non-k8s-target-group\",\"weight\":60}],\"targetGroupStickinessConfig\":{\"enabled\":true,\"durationSeconds\":200}}} spec : ingressClassName : alb rules : - http : paths : - path : /503 pathType : Exact backend : service : name : response-503 port : name : use-annotation - path : /eks pathType : Exact backend : service : name : redirect-to-eks port : name : use-annotation - path : /path1 pathType : Exact backend : service : name : forward-single-tg port : name : use-annotation - path : /path2 pathType : Exact backend : service : name : forward-multiple-tg port : name : use-annotation alb.ingress.kubernetes.io/conditions.${conditions-name} Provides a method for specifying routing conditions in addition to the original host/path conditions on the Ingress spec . The conditions-name in the annotation must match the serviceName in the Ingress rules. It can be either a real serviceName or an annotation-based action name when servicePort is use-annotation . Limitations General ALB limitations apply: Each rule can optionally include up to one of each of the following conditions: host-header, http-request-method, path-pattern, and source-ip. Each rule can also optionally include one or more of each of the following conditions: http-header and query-string. You can specify up to three match evaluations per condition. You can specify up to five match evaluations per rule. Refer to the ALB documentation for more details.
Example rule-path1: Host is www.example.com OR anno.example.com Path is /path1 rule-path2: Host is www.example.com Path is /path2 OR /anno/path2 rule-path3: Host is www.example.com Path is /path3 Http header HeaderName is HeaderValue1 OR HeaderValue2 rule-path4: Host is www.example.com Path is /path4 Http request method is GET OR HEAD rule-path5: Host is www.example.com Path is /path5 Query string is paramA:valueA1 OR paramA:valueA2 rule-path6: Host is www.example.com Path is /path6 Source IP is 192.168.0.0/16 OR 172.16.0.0/16 rule-path7: Host is www.example.com Path is /path7 Http header HeaderName is HeaderValue Query string is paramA:valueA Query string is paramB:valueB apiVersion : networking.k8s.io/v1 kind : Ingress metadata : namespace : default name : ingress annotations : alb.ingress.kubernetes.io/scheme : internet-facing alb.ingress.kubernetes.io/actions.rule-path1 : > {\"type\":\"fixed-response\",\"fixedResponseConfig\":{\"contentType\":\"text/plain\",\"statusCode\":\"200\",\"messageBody\":\"Host is www.example.com OR anno.example.com\"}} alb.ingress.kubernetes.io/conditions.rule-path1 : > [{\"field\":\"host-header\",\"hostHeaderConfig\":{\"values\":[\"anno.example.com\"]}}] alb.ingress.kubernetes.io/actions.rule-path2 : > {\"type\":\"fixed-response\",\"fixedResponseConfig\":{\"contentType\":\"text/plain\",\"statusCode\":\"200\",\"messageBody\":\"Path is /path2 OR /anno/path2\"}} alb.ingress.kubernetes.io/conditions.rule-path2 : > [{\"field\":\"path-pattern\",\"pathPatternConfig\":{\"values\":[\"/anno/path2\"]}}] alb.ingress.kubernetes.io/actions.rule-path3 : > {\"type\":\"fixed-response\",\"fixedResponseConfig\":{\"contentType\":\"text/plain\",\"statusCode\":\"200\",\"messageBody\":\"Http header HeaderName is HeaderValue1 OR HeaderValue2\"}} alb.ingress.kubernetes.io/conditions.rule-path3 : > [{\"field\":\"http-header\",\"httpHeaderConfig\":{\"httpHeaderName\": \"HeaderName\", \"values\":[\"HeaderValue1\", \"HeaderValue2\"]}}] alb.ingress.kubernetes.io/actions.rule-path4 : > {\"type\":\"fixed-response\",\"fixedResponseConfig\":{\"contentType\":\"text/plain\",\"statusCode\":\"200\",\"messageBody\":\"Http request method is GET OR HEAD\"}} alb.ingress.kubernetes.io/conditions.rule-path4 : > [{\"field\":\"http-request-method\",\"httpRequestMethodConfig\":{\"Values\":[\"GET\", \"HEAD\"]}}] alb.ingress.kubernetes.io/actions.rule-path5 : > {\"type\":\"fixed-response\",\"fixedResponseConfig\":{\"contentType\":\"text/plain\",\"statusCode\":\"200\",\"messageBody\":\"Query string is paramA:valueA1 OR paramA:valueA2\"}} alb.ingress.kubernetes.io/conditions.rule-path5 : > [{\"field\":\"query-string\",\"queryStringConfig\":{\"values\":[{\"key\":\"paramA\",\"value\":\"valueA1\"},{\"key\":\"paramA\",\"value\":\"valueA2\"}]}}] alb.ingress.kubernetes.io/actions.rule-path6 : > {\"type\":\"fixed-response\",\"fixedResponseConfig\":{\"contentType\":\"text/plain\",\"statusCode\":\"200\",\"messageBody\":\"Source IP is 192.168.0.0/16 OR 172.16.0.0/16\"}} alb.ingress.kubernetes.io/conditions.rule-path6 : > [{\"field\":\"source-ip\",\"sourceIpConfig\":{\"values\":[\"192.168.0.0/16\", \"172.16.0.0/16\"]}}] alb.ingress.kubernetes.io/actions.rule-path7 : > {\"type\":\"fixed-response\",\"fixedResponseConfig\":{\"contentType\":\"text/plain\",\"statusCode\":\"200\",\"messageBody\":\"multiple conditions applies\"}} alb.ingress.kubernetes.io/conditions.rule-path7 : > [{\"field\":\"http-header\",\"httpHeaderConfig\":{\"httpHeaderName\": \"HeaderName\",
\"values\":[\"HeaderValue\"]}},{\"field\":\"query-string\",\"queryStringConfig\":{\"values\":[{\"key\":\"paramA\",\"value\":\"valueA\"}]}},{\"field\":\"query-string\",\"queryStringConfig\":{\"values\":[{\"key\":\"paramB\",\"value\":\"valueB\"}]}}] spec : ingressClassName : alb rules : - host : www.example.com http : paths : - path : /path1 pathType : Exact backend : service : name : rule-path1 port : name : use-annotation - path : /path2 pathType : Exact backend : service : name : rule-path2 port : name : use-annotation - path : /path3 pathType : Exact backend : service : name : rule-path3 port : name : use-annotation - path : /path4 pathType : Exact backend : service : name : rule-path4 port : name : use-annotation - path : /path5 pathType : Exact backend : service : name : rule-path5 port : name : use-annotation - path : /path6 pathType : Exact backend : service : name : rule-path6 port : name : use-annotation - path : /path7 pathType : Exact backend : service : name : rule-path7 port : name : use-annotation Note If you are using alb.ingress.kubernetes.io/target-group-attributes with stickiness.enabled=true , you should add TargetGroupStickinessConfig under alb.ingress.kubernetes.io/actions.weighted-routing Example apiVersion : networking.k8s.io/v1 kind : Ingress metadata : namespace : default name : ingress annotations : alb.ingress.kubernetes.io/scheme : internet-facing alb.ingress.kubernetes.io/target-type : ip alb.ingress.kubernetes.io/target-group-attributes : stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=60 alb.ingress.kubernetes.io/actions.weighted-routing : | { \"type\" : \"forward\" , \"forwardConfig\" :{ \"targetGroups\" :[ { \"serviceName\" : \"service-1\" , \"servicePort\" : \"80\" , \"weight\" : 50 }, { \"serviceName\" : \"service-2\" , \"servicePort\" : \"80\" , \"weight\" : 50 } ], \"TargetGroupStickinessConfig\" : { \"Enabled\" : true , \"DurationSeconds\" : 120 } } } spec : ingressClassName : alb rules : - host : www.example.com http : paths : - path : / pathType : Prefix backend : service : name : weighted-routing port : name : use-annotation","title":"Traffic Routing"},{"location":"guide/ingress/annotations/#access-control","text":"Access control for LoadBalancer can be controlled with following annotations: alb.ingress.kubernetes.io/scheme specifies whether your LoadBalancer will be internet facing. See Load balancer scheme in the AWS documentation for more details. Example alb.ingress.kubernetes.io/scheme: internal alb.ingress.kubernetes.io/inbound-cidrs specifies the CIDRs that are allowed to access LoadBalancer. Merge Behavior inbound-cidrs is merged across all Ingresses in IngressGroup, but is exclusive per listen-port. the inbound-cidrs will only impact the ports defined for that Ingress. if same listen-port is defined by multiple Ingress within IngressGroup, inbound-cidrs should only be defined on one of the Ingress. Default 0.0.0.0/0 will be used if the IPAddressType is \"ipv4\" 0.0.0.0/0 and ::/0 will be used if the IPAddressType is \"dualstack\" this annotation will be ignored if alb.ingress.kubernetes.io/security-groups is specified. Example alb.ingress.kubernetes.io/inbound-cidrs: 10.0.0.0/24 alb.ingress.kubernetes.io/security-groups specifies the securityGroups you want to attach to LoadBalancer. When this annotation is not present, the controller will automatically create one security group, the security group will be attached to the LoadBalancer and allow access from inbound-cidrs to the listen-ports . 
Also, the securityGroups for Node/Pod will be modified to allow inbound traffic from this securityGroup. If you specify this annotation, you need to configure the security groups on your Node/Pod to allow inbound traffic from the load balancer. You could also set the manage-backend-security-group-rules if you want the controller to manage the access rules. Both the name and ID of securityGroups are supported. Name matches a Name tag, not the groupName attribute. Example alb.ingress.kubernetes.io/security-groups: sg-xxxx, nameOfSg1, nameOfSg2 alb.ingress.kubernetes.io/manage-backend-security-group-rules specifies whether you want the controller to configure security group rules on Node/Pod for traffic access when you specify security-groups . This annotation applies only if you specify the security groups via the security-groups annotation. If set to true, the controller attaches an additional shared backend security group to your load balancer. This backend security group is used in the Node/Pod security group rules. Example alb.ingress.kubernetes.io/manage-backend-security-group-rules: \"true\"","title":"Access control"},{"location":"guide/ingress/annotations/#authentication","text":"ALB supports authentication with Cognito or OIDC. See Authenticate Users Using an Application Load Balancer for more details. HTTPS only Authentication is only supported for HTTPS listeners. See TLS for configuring HTTPS listeners. alb.ingress.kubernetes.io/auth-type specifies the authentication type on targets. Example alb.ingress.kubernetes.io/auth-type: cognito alb.ingress.kubernetes.io/auth-idp-cognito specifies the cognito idp configuration. If you are using Amazon Cognito Domain, the userPoolDomain should be set to the domain prefix (my-domain) instead of the full domain (https://my-domain.auth.us-west-2.amazoncognito.com) Example alb.ingress.kubernetes.io/auth-idp-cognito: '{\"userPoolARN\":\"arn:aws:cognito-idp:us-west-2:xxx:userpool/xxx\",\"userPoolClientID\":\"my-clientID\",\"userPoolDomain\":\"my-domain\"}' alb.ingress.kubernetes.io/auth-idp-oidc specifies the oidc idp configuration. You need to create a secret within the same namespace as the Ingress to hold your OIDC clientID and clientSecret. The format of the secret is as below: apiVersion : v1 kind : Secret metadata : namespace : testcase name : my-k8s-secret data : clientID : base64 of your plain text clientId clientSecret : base64 of your plain text clientSecret Example alb.ingress.kubernetes.io/auth-idp-oidc: '{\"issuer\":\"https://example.com\",\"authorizationEndpoint\":\"https://authorization.example.com\",\"tokenEndpoint\":\"https://token.example.com\",\"userInfoEndpoint\":\"https://userinfo.example.com\",\"secretName\":\"my-k8s-secret\"}' alb.ingress.kubernetes.io/auth-on-unauthenticated-request specifies the behavior if the user is not authenticated. options: authenticate : try to authenticate with the configured IDP. deny : return an HTTP 401 Unauthorized error. allow : allow the request to be forwarded to the target. Example alb.ingress.kubernetes.io/auth-on-unauthenticated-request: authenticate alb.ingress.kubernetes.io/auth-scope specifies the set of user claims to be requested from the IDP (cognito or oidc), in a space-separated list.
options: phone email profile openid aws.cognito.signin.user.admin Example alb.ingress.kubernetes.io/auth-scope: 'email openid' alb.ingress.kubernetes.io/auth-session-cookie specifies the name of the cookie used to maintain session information Example alb.ingress.kubernetes.io/auth-session-cookie: custom-cookie alb.ingress.kubernetes.io/auth-session-timeout specifies the maximum duration of the authentication session, in seconds Example alb.ingress.kubernetes.io/auth-session-timeout: '86400'","title":"Authentication"},{"location":"guide/ingress/annotations/#health-check","text":"Health checks on target groups can be controlled with the following annotations: alb.ingress.kubernetes.io/healthcheck-protocol specifies the protocol used when performing health checks on targets. Example alb.ingress.kubernetes.io/healthcheck-protocol: HTTPS alb.ingress.kubernetes.io/healthcheck-port specifies the port used when performing health checks on targets. When using target-type: instance with a service of type \"NodePort\", the healthcheck port can be set to traffic-port to automatically point to the correct port. Example set the healthcheck port to the traffic port alb.ingress.kubernetes.io/healthcheck-port: traffic-port set the healthcheck port to the NodePort (when target-type=instance) or TargetPort (when target-type=ip) of a named port alb.ingress.kubernetes.io/healthcheck-port: my-port set the healthcheck port to 80/tcp alb.ingress.kubernetes.io/healthcheck-port: '80' alb.ingress.kubernetes.io/healthcheck-path specifies the HTTP path when performing health checks on targets. Example HTTP alb.ingress.kubernetes.io/healthcheck-path: /ping GRPC alb.ingress.kubernetes.io/healthcheck-path: /package.service/method alb.ingress.kubernetes.io/healthcheck-interval-seconds specifies the interval (in seconds) between health checks of an individual target. Example alb.ingress.kubernetes.io/healthcheck-interval-seconds: '10' alb.ingress.kubernetes.io/healthcheck-timeout-seconds specifies the timeout (in seconds) during which no response from a target means a failed health check Example alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '8' alb.ingress.kubernetes.io/success-codes specifies the HTTP or gRPC status code that should be expected when doing health checks against the specified health check path. Example use a single value alb.ingress.kubernetes.io/success-codes: '200' use multiple values alb.ingress.kubernetes.io/success-codes: 200,201 use a range of values alb.ingress.kubernetes.io/success-codes: 200-300 use a single gRPC value alb.ingress.kubernetes.io/success-codes: '0' use multiple gRPC values alb.ingress.kubernetes.io/success-codes: 0,1 use a range of gRPC values alb.ingress.kubernetes.io/success-codes: 0-5 alb.ingress.kubernetes.io/healthy-threshold-count specifies the number of consecutive health check successes required before considering an unhealthy target healthy. Example alb.ingress.kubernetes.io/healthy-threshold-count: '2' alb.ingress.kubernetes.io/unhealthy-threshold-count specifies the number of consecutive health check failures required before considering a target unhealthy. Example alb.ingress.kubernetes.io/unhealthy-threshold-count: '2'","title":"Health Check"},{"location":"guide/ingress/annotations/#tls","text":"TLS support can be controlled with the following annotations: alb.ingress.kubernetes.io/certificate-arn specifies the ARN of one or more certificates managed by AWS Certificate Manager. The first certificate in the list will be added as the default certificate,
and the remaining certificates will be added to the optional certificate list. See SSL Certificates for more details. Certificate Discovery TLS certificates for ALB Listeners can be automatically discovered with hostnames from Ingress resources. See Certificate Discovery for instructions. Example single certificate alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-west-2:xxxxx:certificate/xxxxxxx multiple certificates alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-west-2:xxxxx:certificate/cert1,arn:aws:acm:us-west-2:xxxxx:certificate/cert2,arn:aws:acm:us-west-2:xxxxx:certificate/cert3 alb.ingress.kubernetes.io/ssl-policy specifies the Security Policy that should be assigned to the ALB, allowing you to control the protocol and ciphers. Example alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-1-2017-01 alb.ingress.kubernetes.io/mutual-authentication specifies the mutual authentication configuration that should be assigned to the Application Load Balancer secure listener ports. See Mutual authentication with TLS in the AWS documentation for more details. Configuration Options port: listen port Must be an HTTPS port specified by listen-ports . mode: \"off\" (default) | \"passthrough\" | \"verify\" verify mode requires an existing trust store resource. See Create a trust store in the AWS documentation for more details. trustStore: ARN (arn:aws:elasticloadbalancing:trustStoreArn) | Name (my-trust-store) Both ARN and Name of trustStore are supported values. trustStore is required when mode is verify . ignoreClientCertificateExpiry : true | false (default) Example listen-ports specifies four HTTPS ports: 80, 443, 8080, 8443 listener HTTPS:80 will be set to passthrough mode listener HTTPS:443 will be set to verify mode, associated with trust store arn arn:aws:elasticloadbalancing:trustStoreArn and have ignoreClientCertificateExpiry set to true listeners HTTPS:8080 and HTTPS:8443 remain in the default mode off . alb.ingress.kubernetes.io/listen-ports: '[{\"HTTPS\": 80}, {\"HTTPS\": 443}, {\"HTTPS\": 8080}, {\"HTTPS\": 8443}]' alb.ingress.kubernetes.io/mutual-authentication: '[{\"port\": 80, \"mode\": \"passthrough\"}, {\"port\": 443, \"mode\": \"verify\", \"trustStore\": \"arn:aws:elasticloadbalancing:trustStoreArn\", \"ignoreClientCertificateExpiry\" : true}]' Note To avoid conflict errors in IngressGroup, this annotation should only be specified on a single Ingress within IngressGroup or specified with the same value across all Ingresses within IngressGroup. Trust stores limit per Application Load Balancer A maximum of two different trust stores can be associated among listeners on the same ingress. See Quotas for your Application Load Balancers in the AWS documentation for more details.","title":"TLS"},{"location":"guide/ingress/annotations/#custom-attributes","text":"Custom attributes to LoadBalancers and TargetGroups can be controlled with the following annotations: alb.ingress.kubernetes.io/load-balancer-attributes specifies Load Balancer Attributes that should be applied to the ALB. Only attributes defined in the annotation will be updated. To unset any AWS defaults (e.g. disabling access logs after having them enabled once), the values need to be explicitly set to the original values ( access_logs.s3.enabled=false ); omitting them is not sufficient. If deletion_protection.enabled=true is set in the annotation, the controller will not be able to delete the ALB during reconciliation.
Once the attribute gets edited to deletion_protection.enabled=false during reconciliation, the deployer will force delete the resource. Please note, if deletion protection is not enabled via the annotation but enabled elsewhere (e.g. via the AWS console), the controller still deletes the underlying resource. Example enable access logs to S3 alb.ingress.kubernetes.io/load-balancer-attributes: access_logs.s3.enabled=true,access_logs.s3.bucket=my-access-log-bucket,access_logs.s3.prefix=my-app enable deletion protection alb.ingress.kubernetes.io/load-balancer-attributes: deletion_protection.enabled=true enable invalid header fields removal alb.ingress.kubernetes.io/load-balancer-attributes: routing.http.drop_invalid_header_fields.enabled=true enable http2 support alb.ingress.kubernetes.io/load-balancer-attributes: routing.http2.enabled=true set idle_timeout delay to 600 seconds alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=600 enable connection logs alb.ingress.kubernetes.io/load-balancer-attributes: connection_logs.s3.enabled=true,connection_logs.s3.bucket=my-connection-log-bucket,connection_logs.s3.prefix=my-app alb.ingress.kubernetes.io/target-group-attributes specifies Target Group Attributes which should be applied to Target Groups. Example set the slow start duration to 30 seconds (available range is 30-900 seconds) alb.ingress.kubernetes.io/target-group-attributes: slow_start.duration_seconds=30 set the deregistration delay to 30 seconds (available range is 0-3600 seconds) alb.ingress.kubernetes.io/target-group-attributes: deregistration_delay.timeout_seconds=30 enable sticky sessions (requires alb.ingress.kubernetes.io/target-type to be set to ip ) alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=60 alb.ingress.kubernetes.io/target-type: ip set the load balancing algorithm to least outstanding requests alb.ingress.kubernetes.io/target-group-attributes: load_balancing.algorithm.type=least_outstanding_requests enable Automated Target Weights (ATW) on HTTP/HTTPS target groups to increase application availability. Set your load balancing algorithm to weighted random and turn on anomaly mitigation (recommended) alb.ingress.kubernetes.io/target-group-attributes: load_balancing.algorithm.type=weighted_random,load_balancing.algorithm.anomaly_mitigation=on","title":"Custom attributes"},{"location":"guide/ingress/annotations/#resource-tags","text":"The AWS Load Balancer Controller automatically applies the following tags to the AWS resources (ALB/TargetGroups/SecurityGroups/Listener/ListenerRule) it creates: elbv2.k8s.aws/cluster: ${clusterName} ingress.k8s.aws/stack: ${stackID} ingress.k8s.aws/resource: ${resourceID} In addition, you can use annotations to specify additional tags alb.ingress.kubernetes.io/tags specifies additional tags that will be applied to AWS resources created. In the case of target groups, the controller will merge the tags from the ingress and the backend service, giving precedence to the values specified on the service when there is a conflict. Example alb.ingress.kubernetes.io/tags: Environment=dev,Team=test","title":"Resource Tags"},{"location":"guide/ingress/annotations/#addons","text":"Note If waf-acl-arn is specified via the ingress annotations, the controller will make sure the waf-acl is associated with the ALB provisioned for the ingress. If there is no such annotation, the controller will make sure no waf-acl is associated, so it may remove the existing waf-acl on the provisioned ALB.
If users do not want the controller to manage the waf-acl on the ALBs, they can disable the feature by setting the controller command line flags --enable-waf=false or --enable-wafv2=false alb.ingress.kubernetes.io/waf-acl-id specifies the identifier for the Amazon WAF web ACL. Only Regional WAF is supported. Example alb.ingress.kubernetes.io/waf-acl-id: 499e8b99-6671-4614-a86d-adb1810b7fbe alb.ingress.kubernetes.io/wafv2-acl-arn specifies the ARN of the Amazon WAFv2 web ACL. Only Regional WAFv2 is supported. To get the WAFv2 Web ACL ARN from the Console, click the gear icon in the upper right and enable the ARN column. Example alb.ingress.kubernetes.io/wafv2-acl-arn: arn:aws:wafv2:us-west-2:xxxxx:regional/webacl/xxxxxxx/3ab78708-85b0-49d3-b4e1-7a9615a6613b alb.ingress.kubernetes.io/shield-advanced-protection turns on / off the AWS Shield Advanced protection for the load balancer. Example alb.ingress.kubernetes.io/shield-advanced-protection: 'true'","title":"Addons"},{"location":"guide/ingress/cert_discovery/","text":"Certificate Discovery \u00b6 TLS certificates for ALB Listeners can be automatically discovered with hostnames from Ingress resources if the alb.ingress.kubernetes.io/certificate-arn annotation is not specified. The controller will attempt to discover TLS certificates from the tls field in the Ingress and the host field in Ingress rules. You need to explicitly specify an HTTPS listener with the listen-ports annotation. Discover via Ingress tls \u00b6 Example attaches certs for www.example.com to the ALB apiVersion : networking.k8s.io/v1 kind : Ingress metadata : namespace : default name : ingress annotations : alb.ingress.kubernetes.io/listen-ports : '[{\"HTTPS\":443}]' spec : ingressClassName : alb tls : - hosts : - www.example.com rules : - http : paths : - path : /users pathType : Prefix backend : service : name : user-service port : number : 80 Discover via Ingress rule host. \u00b6 Example attaches a cert for dev.example.com or *.example.com to the ALB apiVersion : networking.k8s.io/v1 kind : Ingress metadata : namespace : default name : ingress annotations : alb.ingress.kubernetes.io/listen-ports : '[{\"HTTPS\":443}]' spec : ingressClassName : alb rules : - host : dev.example.com http : paths : - path : /users pathType : Prefix backend : service : name : user-service port : number : 80","title":"Certificate Discovery"},{"location":"guide/ingress/cert_discovery/#certificate-discovery","text":"TLS certificates for ALB Listeners can be automatically discovered with hostnames from Ingress resources if the alb.ingress.kubernetes.io/certificate-arn annotation is not specified. The controller will attempt to discover TLS certificates from the tls field in the Ingress and the host field in Ingress rules.
You need to explicitly specify an HTTPS listener with the listen-ports annotation.","title":"Certificate Discovery"},{"location":"guide/ingress/cert_discovery/#discover-via-ingress-tls","text":"Example attaches certs for www.example.com to the ALB apiVersion : networking.k8s.io/v1 kind : Ingress metadata : namespace : default name : ingress annotations : alb.ingress.kubernetes.io/listen-ports : '[{\"HTTPS\":443}]' spec : ingressClassName : alb tls : - hosts : - www.example.com rules : - http : paths : - path : /users pathType : Prefix backend : service : name : user-service port : number : 80","title":"Discover via Ingress tls"},{"location":"guide/ingress/cert_discovery/#discover-via-ingress-rule-host","text":"Example attaches a cert for dev.example.com or *.example.com to the ALB apiVersion : networking.k8s.io/v1 kind : Ingress metadata : namespace : default name : ingress annotations : alb.ingress.kubernetes.io/listen-ports : '[{\"HTTPS\":443}]' spec : ingressClassName : alb rules : - host : dev.example.com http : paths : - path : /users pathType : Prefix backend : service : name : user-service port : number : 80","title":"Discover via Ingress rule host."},{"location":"guide/ingress/ingress_class/","text":"IngressClass \u00b6 Ingresses can be implemented by different controllers, often with different configuration. Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configuration including the name of the controller that should implement the class. IngressClass resources contain an optional parameters field. This can be used to reference additional implementation-specific configuration for this class. For the AWS Load Balancer controller, the implementation-specific configuration is IngressClassParams in the elbv2.k8s.aws API group. Example specify controller as ingress.k8s.aws/alb to denote Ingresses should be managed by AWS Load Balancer Controller. apiVersion: networking.k8s.io/v1 kind: IngressClass metadata: name: awesome-class spec: controller: ingress.k8s.aws/alb specify additional configurations by referencing an IngressClassParams resource. apiVersion: networking.k8s.io/v1 kind: IngressClass metadata: name: awesome-class spec: controller: ingress.k8s.aws/alb parameters: apiGroup: elbv2.k8s.aws kind: IngressClassParams name: awesome-class-cfg default IngressClass You can mark a particular IngressClass as the default for your cluster. Setting the ingressclass.kubernetes.io/is-default-class annotation to true on an IngressClass resource will ensure that new Ingresses without an ingressClassName field specified will be assigned this default IngressClass. Deprecated kubernetes.io/ingress.class annotation \u00b6 Before the IngressClass resource and ingressClassName field were added in Kubernetes 1.18, Ingress classes were specified with a kubernetes.io/ingress.class annotation on the Ingress. This annotation was never formally defined, but was widely supported by Ingress controllers. The newer ingressClassName field on Ingresses is a replacement for that annotation, but is not a direct equivalent. While the annotation was generally used to reference the name of the Ingress controller that should implement the Ingress, the field is a reference to an IngressClass resource that contains additional Ingress configuration, including the name of the Ingress controller. disable kubernetes.io/ingress.class annotation In order to maintain backwards compatibility, the kubernetes.io/ingress.class annotation is currently still supported.
You can enforce IngressClass resource adoption by disabling the kubernetes.io/ingress.class annotation via the --disable-ingress-class-annotation controller flag. IngressClassParams \u00b6 IngressClassParams is a CRD specific to the AWS Load Balancer Controller, which can be used along with IngressClass\u2019s parameters field. You can use IngressClassParams to enforce settings for a set of Ingresses. Example with scheme & ipAddressType & tags apiVersion: elbv2.k8s.aws/v1beta1 kind: IngressClassParams metadata: name: awesome-class spec: scheme: internal ipAddressType: dualstack tags: - key: org value: my-org with namespaceSelector apiVersion: elbv2.k8s.aws/v1beta1 kind: IngressClassParams metadata: name: awesome-class spec: namespaceSelector: matchLabels: team: team-a with IngressGroup apiVersion: elbv2.k8s.aws/v1beta1 kind: IngressClassParams metadata: name: awesome-class spec: group: name: my-group with loadBalancerAttributes apiVersion: elbv2.k8s.aws/v1beta1 kind: IngressClassParams metadata: name: awesome-class spec: loadBalancerAttributes: - key: deletion_protection.enabled value: \"true\" - key: idle_timeout.timeout_seconds value: \"120\" with subnets.ids apiVersion: elbv2.k8s.aws/v1beta1 kind: IngressClassParams metadata: name: awesome-class spec: subnets: ids: - subnet-xxx - subnet-123 with subnets.tags apiVersion: elbv2.k8s.aws/v1beta1 kind: IngressClassParams metadata: name: class2048-config spec: subnets: tags: kubernetes.io/role/internal-elb: - \"1\" myKey: - myVal0 - myVal1 IngressClassParams specification \u00b6 spec.namespaceSelector \u00b6 namespaceSelector is an optional setting that follows general Kubernetes label selector semantics. Cluster administrators can use the namespaceSelector field to restrict the namespaces of Ingresses that are allowed to specify the IngressClass. If namespaceSelector is specified, only Ingresses in selected namespaces can use IngressClasses with this parameter. The controller will refuse to reconcile Ingresses that violate namespaceSelector . If namespaceSelector is unspecified, all Ingresses in any namespace can use IngressClasses with this parameter. spec.group \u00b6 group is an optional setting. The only available sub-field is group.name . Cluster administrators can use the group.name field to denote the groupName for all Ingresses that belong to this IngressClass. If group.name is specified, all Ingresses with this IngressClass will belong to the same IngressGroup specified and result in a single ALB. If group.name is not specified, Ingresses with this IngressClass can use the older / legacy alb.ingress.kubernetes.io/group.name annotation to specify their IngressGroup. Ingresses that belong to the same IngressClass can form different IngressGroups via that annotation. spec.scheme \u00b6 scheme is an optional setting. The available options are internet-facing or internal . Cluster administrators can use the scheme field to restrict the scheme for all Ingresses that belong to this IngressClass. If scheme is specified, all Ingresses with this IngressClass will have the specified scheme. If scheme is unspecified, Ingresses with this IngressClass can continue to use the alb.ingress.kubernetes.io/scheme annotation to specify scheme. spec.inboundCIDRs \u00b6 Cluster administrators can use the optional inboundCIDRs field to specify the CIDRs that are allowed to access the load balancers that belong to this IngressClass. If the field is specified, LBC will ignore the alb.ingress.kubernetes.io/inbound-cidrs annotation.
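For illustration, a minimal IngressClassParams sketch that restricts inbound CIDRs for every Ingress of this class, following the spec.inboundCIDRs description above; the class name and CIDR values are placeholders:
with inboundCIDRs
apiVersion: elbv2.k8s.aws/v1beta1
kind: IngressClassParams
metadata:
  # placeholder class name for illustration
  name: awesome-class
spec:
  # only these CIDRs may reach load balancers of this class;
  # the alb.ingress.kubernetes.io/inbound-cidrs annotation is then ignored
  inboundCIDRs:
    - 10.0.0.0/16
    - 192.168.0.0/24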
spec.sslPolicy \u00b6 Cluster administrators can use the optional sslPolicy field to specify the SSL policy for the load balancers that belong to this IngressClass. If the field is specified, LBC will ignore the alb.ingress.kubernetes.io/ssl-policy annotation. spec.subnets \u00b6 Cluster administrators can use the optional subnets field to specify the subnets for the load balancers that belong to this IngressClass. They may specify either ids or tags . If the field is specified, LBC will ignore the alb.ingress.kubernetes.io/subnets annotation. spec.subnets.ids \u00b6 If ids is specified, it must be a set of at least one resource ID of a subnet in the VPC. No two subnets may be in the same availability zone. spec.subnets.tags \u00b6 If tags is specified, it is a map of tag filters. The filters will match subnets in the VPC for which each listed tag key is present and has one of the corresponding tag values. Unless the SubnetsClusterTagCheck feature gate is disabled, subnets without a cluster tag and with the cluster tag for another cluster will be excluded. Within any given availability zone, subnets with a cluster tag will be chosen over subnets without one; then the subnet with the lowest-sorting resource ID will be chosen. spec.ipAddressType \u00b6 ipAddressType is an optional setting. The available options are ipv4 or dualstack . Cluster administrators can use the ipAddressType field to restrict the ipAddressType for all Ingresses that belong to this IngressClass. If ipAddressType is specified, all Ingresses with this IngressClass will have the specified ipAddressType. If ipAddressType is unspecified, Ingresses with this IngressClass can continue to use the alb.ingress.kubernetes.io/ip-address-type annotation to specify ipAddressType. spec.tags \u00b6 tags is an optional setting. Cluster administrators can use the tags field to specify the custom tags for AWS resources provisioned for all Ingresses that belong to this IngressClass. If tags is set, AWS resources provisioned for all Ingresses with this IngressClass will have the specified tags. You can also use the controller-level flag --default-tags or the alb.ingress.kubernetes.io/tags annotation to specify custom tags. These tags will be merged together based on tag-key. If the same tag-key appears in multiple sources, the priority is as follows: the controller-level flag --default-tags will have the highest priority. spec.tags in IngressClassParams will have the middle priority. the alb.ingress.kubernetes.io/tags annotation will have the lowest priority. spec.loadBalancerAttributes \u00b6 loadBalancerAttributes is an optional setting. Cluster administrators can use the loadBalancerAttributes field to specify the Load Balancer Attributes that should be applied to the load balancers that belong to this IngressClass. You can specify the list of load balancer attribute names and the desired values in the spec.loadBalancerAttributes field. If loadBalancerAttributes is set, the attributes defined will be applied to the load balancers that belong to this IngressClass. If you specify invalid keys or values for the load balancer attributes, the controller will fail to reconcile ingresses belonging to the particular ingress class.
spec.loadBalancerAttributes ¶ loadBalancerAttributes is an optional setting. Cluster administrators can use the loadBalancerAttributes field to specify the load balancer attributes that should be applied to the load balancers that belong to this IngressClass. You can specify the list of load balancer attribute names and the desired values in the spec.loadBalancerAttributes field. If loadBalancerAttributes is set, the defined attributes will be applied to the load balancers that belong to this IngressClass. If you specify invalid keys or values for the load balancer attributes, the controller will fail to reconcile Ingresses belonging to the particular ingress class. If loadBalancerAttributes is unspecified, Ingresses with this IngressClass can continue to use the alb.ingress.kubernetes.io/load-balancer-attributes annotation to specify the load balancer attributes.
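Putting the spec fields together, a cluster administrator could enforce a locked-down class with a single IngressClassParams. The following is a minimal sketch combining fields documented above; the name and values are hypothetical:

apiVersion: elbv2.k8s.aws/v1beta1
kind: IngressClassParams
metadata:
  name: locked-down-class            # hypothetical name
spec:
  scheme: internal                   # member Ingresses cannot choose internet-facing
  inboundCIDRs:
  - 10.0.0.0/8                       # illustrative internal range
  sslPolicy: ELBSecurityPolicy-TLS13-1-2-2021-06
  loadBalancerAttributes:
  - key: idle_timeout.timeout_seconds
    value: "120"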
Ingress specification ¶ This document covers how Ingress resources work in relation to the AWS Load Balancer Controller. Beginning from v2.4.3 of the AWS LBC, rules are ordered as follows:
- pathType: Exact paths are always ordered first
- followed by pathType: Prefix paths, with the longest prefix first
- followed by pathType: ImplementationSpecific paths, in the order they are listed in the manifest

An example Ingress, from the examples, is as follows:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: "2048-ingress"
  namespace: "2048-game"
  labels:
    app: 2048-nginx-ingress
spec:
  ingressClassName: alb
  rules:
  - host: 2048.example.com
    http:
      paths:
      - path: /*
        pathType: ImplementationSpecific
        backend:
          service:
            name: "service-2048"
            port:
              number: 80

The host field specifies the eventual Route 53-managed domain that will route to this service. The service, service-2048, must be of type NodePort in order for the provisioned ALB to route to it (see echoserver-service.yaml). The AWS Load Balancer Controller does not support the resource field of backend.
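To make the ordering concrete, here is a hypothetical set of rules annotated with the order the controller would evaluate them in, per the v2.4.3+ rules above; the Ingress name and services are illustrative:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ordering-demo                # hypothetical name
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      - path: /login                 # 1st: Exact paths always come first
        pathType: Exact
        backend:
          service:
            name: svc-a              # hypothetical services
            port:
              number: 80
      - path: /api/v1                # 2nd: Prefix paths next, longest prefix first
        pathType: Prefix
        backend:
          service:
            name: svc-b
            port:
              number: 80
      - path: /api                   # 3rd: shorter Prefix, ordered after /api/v1
        pathType: Prefix
        backend:
          service:
            name: svc-c
            port:
              number: 80
      - path: /*                     # 4th: ImplementationSpecific last, in manifest order
        pathType: ImplementationSpecific
        backend:
          service:
            name: svc-d
            port:
              number: 80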
Setup External DNS ¶ external-dns provisions DNS records based on the host information. This project will set up and manage records in Route 53 that point to controller-deployed ALBs.

Prerequisites ¶ Role Permissions ¶ Adequate roles and policies must be configured in AWS and available to the node(s) running external-dns. See https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/aws.md#iam-permissions.

Installation ¶ Download the sample external-dns manifest:

wget https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/examples/external-dns.yaml

Edit the --domain-filter flag to include your hosted zone(s). The following example is for a hosted zone test-dns.com:

args:
- --source=service
- --source=ingress
- --domain-filter=test-dns.com # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones
- --provider=aws
- --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization
- --aws-zone-type=public # only look at public hosted zones (valid values are public, private or no value for both)
- --registry=txt
- --txt-owner-id=my-identifier

Deploy external-dns:

kubectl apply -f external-dns.yaml

Verify it deployed successfully:

kubectl logs -f $(kubectl get po | egrep -o 'external-dns[A-Za-z0-9-]+')

It should display output similar to the following:

time="2019-12-11T10:26:05Z" level=info msg="config: {Master: KubeConfig: RequestTimeout:30s IstioIngressGateway:istio-system/istio-ingressgateway Sources:[service ingress] Namespace: AnnotationFilter: FQDNTemplate: CombineFQDNAndAnnotation:false Compatibility: PublishInternal:false PublishHostIP:false ConnectorSourceServer:localhost:8080 Provider:aws GoogleProject: DomainFilter:[test-dns.com] ZoneIDFilter:[] AlibabaCloudConfigFile:/etc/kubernetes/alibaba-cloud.json AlibabaCloudZoneType: AWSZoneType:public AWSAssumeRole: AWSBatchChangeSize:4000 AWSBatchChangeInterval:1s AWSEvaluateTargetHealth:true AzureConfigFile:/etc/kubernetes/azure.json AzureResourceGroup: CloudflareProxied:false InfobloxGridHost: InfobloxWapiPort:443 InfobloxWapiUsername:admin InfobloxWapiPassword: InfobloxWapiVersion:2.3.1 InfobloxSSLVerify:true DynCustomerName: DynUsername: DynPassword: DynMinTTLSeconds:0 OCIConfigFile:/etc/kubernetes/oci.yaml InMemoryZones:[] PDNSServer:http://localhost:8081 PDNSAPIKey: PDNSTLSEnabled:false TLSCA: TLSClientCert: TLSClientCertKey: Policy:upsert-only Registry:txt TXTOwnerID:my-identifier TXTPrefix: Interval:1m0s Once:false DryRun:false LogFormat:text MetricsAddress::7979 LogLevel:info TXTCacheInterval:0s ExoscaleEndpoint:https://api.exoscale.ch/dns ExoscaleAPIKey: ExoscaleAPISecret: CRDSourceAPIVersion:externaldns.k8s.io/v1alpha CRDSourceKind:DNSEndpoint ServiceTypeFilter:[] RFC2136Host: RFC2136Port:0 RFC2136Zone: RFC2136Insecure:false RFC2136TSIGKeyName: RFC2136TSIGSecret: RFC2136TSIGSecretAlg: RFC2136TAXFR:false}"
time="2019-12-11T10:26:05Z" level=info msg="Created Kubernetes client https://10.100.0.1:443"

Usage ¶ To create a record set in the subdomain from your Ingress, which has been created by the ingress controller, add the following annotations to the Ingress object:

annotations:
  kubernetes.io/ingress.class: alb
  alb.ingress.kubernetes.io/scheme: internet-facing
  # external-dns specific configuration for creating route53 record-set
  external-dns.alpha.kubernetes.io/hostname: my-app.test-dns.com # give your domain name here

A snippet of the external-dns pod log indicating the Route 53 update:

time="2019-12-11T10:26:08Z" level=info msg="Desired change: CREATE my-app.test-dns.com A"
time="2019-12-11T10:26:08Z" level=info msg="Desired change: CREATE my-app.test-dns.com TXT"
time="2019-12-11T10:26:08Z" level=info msg="2 record(s) in zone my-app.test-dns.com. were successfully updated"

External DNS configures the Simple routing policy for the Route 53 records.
You can configure the Weighted policy by specifying the weight and the identifier via annotations. The Weighted policy allows you to split the traffic between multiple load balancers. Here is an example specifying the weight and identifier:

annotations:
  # For creating weighted route53 records
  external-dns.alpha.kubernetes.io/hostname: my-app.test-dns.com
  external-dns.alpha.kubernetes.io/aws-weight: "100"
  external-dns.alpha.kubernetes.io/set-identifier: "3"

You can refer to the External DNS documentation for further details [link].
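For context, a minimal sketch of a complete Ingress tying the pieces together; the name, service, and domain are the illustrative ones used above:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                       # hypothetical name
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    # external-dns reads this annotation (or the rule host) and creates the Route 53 record
    external-dns.alpha.kubernetes.io/hostname: my-app.test-dns.com
spec:
  ingressClassName: alb
  rules:
  - host: my-app.test-dns.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app-svc         # hypothetical service
            port:
              number: 80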
Service annotations ¶ Annotation keys and values can only be strings.
All other types below must be string-encoded, for example:
- boolean: "true"
- integer: "42"
- stringList: "s1,s2,s3"
- stringMap: "k1=v1,k2=v2"
- json: "{ \"key\": \"value\" }"

Annotations ¶ Warning These annotations are specific to the kubernetes service resources reconciled by the AWS Load Balancer Controller. Although the list was initially derived from the k8s in-tree kube-controller-manager, this documentation is not an accurate reference for the services reconciled by the in-tree controller.

Name | Type | Default | Notes
service.beta.kubernetes.io/load-balancer-source-ranges | stringList | |
service.beta.kubernetes.io/aws-load-balancer-type | string | |
service.beta.kubernetes.io/aws-load-balancer-nlb-target-type | string | | default instance in case of LoadBalancerClass
service.beta.kubernetes.io/aws-load-balancer-name | string | |
service.beta.kubernetes.io/aws-load-balancer-internal | boolean | false | deprecated, in favor of aws-load-balancer-scheme
service.beta.kubernetes.io/aws-load-balancer-scheme | string | internal |
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol | string | | Set to "*" to enable
service.beta.kubernetes.io/aws-load-balancer-ip-address-type | string | ipv4 | ipv4 or dualstack
service.beta.kubernetes.io/aws-load-balancer-access-log-enabled | boolean | false | deprecated, in favor of aws-load-balancer-attributes
service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name | string | | deprecated, in favor of aws-load-balancer-attributes
service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix | string | | deprecated, in favor of aws-load-balancer-attributes
service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled | boolean | false | deprecated, in favor of aws-load-balancer-attributes
service.beta.kubernetes.io/aws-load-balancer-ssl-cert | stringList | |
service.beta.kubernetes.io/aws-load-balancer-ssl-ports | stringList | |
service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy | string | ELBSecurityPolicy-2016-08 |
service.beta.kubernetes.io/aws-load-balancer-backend-protocol | string | |
service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags | stringMap | |
service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol | string | TCP |
service.beta.kubernetes.io/aws-load-balancer-healthcheck-port | integer or traffic-port | traffic-port |
service.beta.kubernetes.io/aws-load-balancer-healthcheck-path | string | "/" for HTTP(S) protocols |
service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold | integer | 3 |
service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold | integer | 3 |
service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout | integer | 10 |
service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval | integer | 10 |
service.beta.kubernetes.io/aws-load-balancer-healthcheck-success-codes | string | 200-399 |
service.beta.kubernetes.io/aws-load-balancer-eip-allocations | stringList | | internet-facing lb only. Length must match the number of subnets
service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses | stringList | | internal lb only. Length must match the number of subnets
service.beta.kubernetes.io/aws-load-balancer-ipv6-addresses | stringList | | dualstack lb only. Length must match the number of subnets
service.beta.kubernetes.io/aws-load-balancer-target-group-attributes | stringMap | |
service.beta.kubernetes.io/aws-load-balancer-subnets | stringList | |
service.beta.kubernetes.io/aws-load-balancer-alpn-policy | string | |
service.beta.kubernetes.io/aws-load-balancer-target-node-labels | stringMap | |
service.beta.kubernetes.io/aws-load-balancer-attributes | stringMap | |
service.beta.kubernetes.io/aws-load-balancer-security-groups | stringList | |
service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules | boolean | true |

Traffic Routing ¶ Traffic routing can be controlled with the following annotations:

service.beta.kubernetes.io/aws-load-balancer-name specifies the custom name to use for the load balancer. A name longer than 32 characters will be treated as an error. limitations If you modify this annotation after service creation, there is no effect. Example service.beta.kubernetes.io/aws-load-balancer-name: custom-name

service.beta.kubernetes.io/aws-load-balancer-type specifies the load balancer type. This controller reconciles those service resources with this annotation set to either nlb-ip or external. Tip This annotation specifies the controller used to provision LoadBalancers (as specified in legacy-cloud-provider). Refer to lb-scheme to specify whether the LoadBalancer is internet-facing or internal. [Deprecated] For type nlb-ip, the controller will provision an NLB with targets registered by IP address. This value is supported for backwards compatibility. For type external, the NLB target type depends on the nlb-target-type annotation. limitations This annotation should not be modified after service creation. Example service.beta.kubernetes.io/aws-load-balancer-type: external

service.beta.kubernetes.io/aws-load-balancer-nlb-target-type specifies the target type to configure for NLB. You can choose between instance and ip. instance mode will route traffic to all EC2 instances within the cluster on the NodePort opened for your service. The kube-proxy on the individual worker nodes sets up the forwarding of the traffic from the NodePort to the pods behind the service. The service must be of type NodePort or LoadBalancer for instance targets. For k8s 1.22 and later, if spec.allocateLoadBalancerNodePorts is set to false, the NodePort must be allocated manually. default value If you configure spec.loadBalancerClass, the controller defaults to the instance target type. NodePort allocation k8s version 1.22 and later support disabling NodePort allocation by setting the service field spec.allocateLoadBalancerNodePorts to false. If the NodePort is not allocated for a service port, the controller will fail to reconcile an instance mode NLB. ip mode will route traffic directly to the pod IP. In this mode, the AWS NLB sends traffic directly to the Kubernetes pods behind the service, eliminating the need for an extra network hop through the worker nodes in the Kubernetes cluster. ip target mode supports pods running on AWS EC2 instances and AWS Fargate. The network plugin must use native AWS VPC networking configuration for the pod IP, for example the Amazon VPC CNI plugin. Example service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
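As a minimal sketch of how these annotations combine on a Service managed by the LBC; the name, selector, and ports are hypothetical:

apiVersion: v1
kind: Service
metadata:
  name: echoserver                   # hypothetical name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip   # route straight to pod IPs
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  type: LoadBalancer
  selector:
    app: echoserver
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP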
service.beta.kubernetes.io/aws-load-balancer-subnets specifies the Availability Zones the NLB will route traffic to. See Network Load Balancers for more details. Tip Subnets are auto-discovered if this annotation is not specified, see Subnet Discovery for further details. You must specify at least one subnet in any of the AZs; either subnetID or subnetName (Name tag on subnets) can be used. limitations Each subnet must be from a different Availability Zone. AWS has restrictions on disabling existing subnets for NLB. As a result, you might not be able to edit this annotation once the NLB gets provisioned. Example service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-xxxx, mySubnet

service.beta.kubernetes.io/aws-load-balancer-alpn-policy allows you to configure the ALPN policies on the load balancer. supported policies HTTP1Only Negotiate only HTTP/1.*. The ALPN preference list is http/1.1, http/1.0. HTTP2Only Negotiate only HTTP/2. The ALPN preference list is h2. HTTP2Optional Prefer HTTP/1.* over HTTP/2 (which can be useful for HTTP/2 testing). The ALPN preference list is http/1.1, http/1.0, h2. HTTP2Preferred Prefer HTTP/2 over HTTP/1.*. The ALPN preference list is h2, http/1.1, http/1.0. None Do not negotiate ALPN. This is the default. Example service.beta.kubernetes.io/aws-load-balancer-alpn-policy: HTTP2Preferred

service.beta.kubernetes.io/aws-load-balancer-target-node-labels specifies which nodes to include in the target group registration for the instance target type. Example service.beta.kubernetes.io/aws-load-balancer-target-node-labels: label1=value1, label2=value2

service.beta.kubernetes.io/aws-load-balancer-eip-allocations specifies a list of elastic IP address configurations for an internet-facing NLB. Note This configuration is optional, and you can use it to assign static IP addresses to your NLB. You must specify the same number of eip allocations as in the load balancer subnets annotation. The NLB must be internet-facing. Example service.beta.kubernetes.io/aws-load-balancer-eip-allocations: eipalloc-xyz, eipalloc-zzz

service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses specifies a list of private IPv4 addresses for an internal NLB. Note The NLB must be internal. This configuration is optional, and you can use it to assign static IPv4 addresses to your NLB. You must specify the same number of private IPv4 addresses as in the load balancer subnets annotation. You must specify the IPv4 addresses from the load balancer subnet IPv4 ranges. Example service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses: 192.168.10.15, 192.168.32.16

service.beta.kubernetes.io/aws-load-balancer-ipv6-addresses specifies a list of IPv6 addresses for a dualstack NLB. Note The NLB must be dualstack. This configuration is optional, and you can use it to assign static IPv6 addresses to your NLB. You must specify the same number of IPv6 addresses as in the load balancer subnets annotation. You must specify the IPv6 addresses from the load balancer subnet IPv6 ranges. Example service.beta.kubernetes.io/aws-load-balancer-ipv6-addresses: 2600:1f13:837:8501::1, 2600:1f13:837:8504::1

Traffic Listening ¶ Traffic listening can be controlled with the following annotations:

service.beta.kubernetes.io/aws-load-balancer-ip-address-type specifies the IP address type of the NLB. Example service.beta.kubernetes.io/aws-load-balancer-ip-address-type: ipv4
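To illustrate the count-matching constraint between subnets and EIP allocations, a hypothetical internet-facing NLB pinned to two subnets and two Elastic IPs (all IDs are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: static-ip-nlb                # hypothetical name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    # two subnets, so exactly two EIP allocations are required
    service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-aaaa, subnet-bbbb
    service.beta.kubernetes.io/aws-load-balancer-eip-allocations: eipalloc-xxxx, eipalloc-yyyy
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 443
    targetPort: 8443
    protocol: TCP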
Resource attributes ¶ NLB resource attributes can be controlled via the following annotations:

service.beta.kubernetes.io/aws-load-balancer-proxy-protocol specifies whether to enable proxy protocol v2 on the target group. Set to "*" to enable proxy protocol v2. This annotation takes precedence over the annotation service.beta.kubernetes.io/aws-load-balancer-target-group-attributes for proxy protocol v2 configuration. The only valid value for this annotation is *.

service.beta.kubernetes.io/aws-load-balancer-target-group-attributes specifies the Target Group Attributes to be configured. Example
- set the deregistration delay to 120 seconds (available range is 0-3600 seconds) service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: deregistration_delay.timeout_seconds=120
- enable source IP affinity service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: stickiness.enabled=true,stickiness.type=source_ip
- enable proxy protocol version 2 service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: proxy_protocol_v2.enabled=true
- enable connection termination on deregistration service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: deregistration_delay.connection_termination.enabled=true
- enable client IP preservation service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: preserve_client_ip.enabled=true

service.beta.kubernetes.io/aws-load-balancer-attributes specifies Load Balancer Attributes that should be applied to the NLB. Only attributes defined in the annotation will be updated. To unset any AWS defaults (e.g. disabling access logs after having them enabled once), the values need to be explicitly set to the original values (access_logs.s3.enabled=false); omitting them is not sufficient. Custom attributes set in this annotation's config map will be overridden by annotation-specific attributes. For backwards compatibility, existing annotations for the individual load balancer attributes get precedence in case of ties. If deletion_protection.enabled=true is in the annotation, the controller will not be able to delete the NLB during reconciliation. Once the attribute gets edited to deletion_protection.enabled=false during reconciliation, the deployer will force delete the resource. Please note that if deletion protection is not enabled via the annotation (e.g. it was enabled via the AWS console), the controller still deletes the underlying resource. Example
- enable access logs to S3 service.beta.kubernetes.io/aws-load-balancer-attributes: access_logs.s3.enabled=true,access_logs.s3.bucket=my-access-log-bucket,access_logs.s3.prefix=my-app
- enable NLB deletion protection service.beta.kubernetes.io/aws-load-balancer-attributes: deletion_protection.enabled=true
- enable cross zone load balancing service.beta.kubernetes.io/aws-load-balancer-attributes: load_balancing.cross_zone.enabled=true

The following annotations are deprecated in the v2.3.0 release in favor of service.beta.kubernetes.io/aws-load-balancer-attributes: service.beta.kubernetes.io/aws-load-balancer-access-log-enabled, service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name, service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix, service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled
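Since aws-load-balancer-attributes is a stringMap, multiple attributes go into one comma-separated annotation value. A sketch combining the access-log and cross-zone examples above:

service.beta.kubernetes.io/aws-load-balancer-attributes: access_logs.s3.enabled=true,access_logs.s3.bucket=my-access-log-bucket,access_logs.s3.prefix=my-app,load_balancing.cross_zone.enabled=true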
AWS Resource Tags ¶ The AWS Load Balancer Controller automatically applies the following tags to the AWS resources it creates (NLB/TargetGroups/Listener/ListenerRule):
- elbv2.k8s.aws/cluster: ${clusterName}
- service.k8s.aws/stack: ${stackID}
- service.k8s.aws/resource: ${resourceID}

In addition, you can use annotations to specify additional tags.

service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags specifies additional tags to apply to the AWS resources. You cannot override the default controller tags mentioned above or the tags specified in the --default-tags controller flag. If any of the tags conflicts with the ones configured via the --external-managed-tags controller flag, the controller fails to reconcile the service. Example service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: Environment=dev,Team=test

Health Check ¶ Health checks on target groups can be configured with the following annotations:

service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol specifies the target group health check protocol. You can specify tcp, http, or https. tcp is the default health check protocol if the service spec.externalTrafficPolicy is Cluster, and http if it is Local. If the service spec.externalTrafficPolicy is Local, do not use tcp for the health check. Only a single protocol per service is supported. Example service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: http

service.beta.kubernetes.io/aws-load-balancer-healthcheck-port specifies the TCP port to use for the target group health check. default value If you do not specify the health check port, the default value will be spec.healthCheckNodePort when externalTrafficPolicy=Local, or traffic-port otherwise. Example
- set the health check port to traffic-port service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: traffic-port
- set the health check port to port 80 service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "80"

service.beta.kubernetes.io/aws-load-balancer-healthcheck-path specifies the HTTP path for the health check in case of the http/https protocol. Example service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: /healthz

service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold specifies the consecutive health check successes required before a target is considered healthy. Example service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "3"

service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold specifies the consecutive health check failures before a target gets marked unhealthy. Example service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "3"

service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval specifies the interval between consecutive health checks. Example service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "10"

service.beta.kubernetes.io/aws-load-balancer-healthcheck-success-codes specifies the HTTP success codes for the health check in case of the http/https protocol. Example service.beta.kubernetes.io/aws-load-balancer-healthcheck-success-codes: "200-399"

service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout specifies the target group health check timeout. The target has to respond within the timeout for a successful health check. Note The controller currently ignores the timeout configuration due to limitations on the AWS NLB. The default timeout for TCP is 10s and for HTTP is 6s. Example service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: "10"
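A sketch combining the health check annotations above for an HTTP-checked target group; the values are illustrative:

annotations:
  service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: http
  service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: /healthz     # hypothetical path
  service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: traffic-port
  service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "2"
  service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "2"
  service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "10"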
TLS ¶ You can configure TLS support via the following annotations:

service.beta.kubernetes.io/aws-load-balancer-ssl-cert specifies the ARN of one or more certificates managed by the AWS Certificate Manager. The first certificate in the list is the default certificate; the remaining certificates are for the optional certificate list. See Server Certificates for further details. Example service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:xxxxx:certificate/xxxxxxx

service.beta.kubernetes.io/aws-load-balancer-ssl-ports specifies the frontend ports with TLS listeners. You must configure at least one certificate for TLS listeners. You can specify a list of port names or port values; * does not match any ports. If you don't specify this annotation, the controller creates a TLS listener for all the service ports. Specify this annotation if you need both TLS and non-TLS listeners on the same load balancer. Example service.beta.kubernetes.io/aws-load-balancer-ssl-ports: 443, custom-port

service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy specifies the Security Policy for NLB frontend connections, allowing you to control the protocol and ciphers. Example service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: ELBSecurityPolicy-TLS13-1-2-2021-06

service.beta.kubernetes.io/aws-load-balancer-backend-protocol specifies whether to use TLS for the backend traffic between the load balancer and the kubernetes pods. If you specify ssl as the backend protocol, the NLB uses TLS connections for the traffic to your kubernetes pods in case of TLS listeners. You can specify ssl or tcp (default). Example service.beta.kubernetes.io/aws-load-balancer-backend-protocol: ssl
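Putting the TLS annotations together, a hedged sketch of a Service that terminates TLS on port 443 at the NLB and forwards plain TCP to the pods; the certificate ARN is the placeholder used above:

annotations:
  service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:xxxxx:certificate/xxxxxxx
  service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"                # TLS only on 443
  service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: ELBSecurityPolicy-TLS13-1-2-2021-06
  service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp           # terminate TLS at the NLB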
Access control ¶ Load balancer access can be controlled via the following annotations:

service.beta.kubernetes.io/load-balancer-source-ranges specifies the CIDRs that are allowed to access the NLB. Tip We recommend specifying CIDRs in the service spec.loadBalancerSourceRanges instead. Default: 0.0.0.0/0 will be used if the IPAddressType is "ipv4"; 0.0.0.0/0 and ::/0 will be used if the IPAddressType is "dualstack"; the VPC CIDR will be used if service.beta.kubernetes.io/aws-load-balancer-scheme is internal. This annotation will be ignored in case preserve client IP is not enabled: preserve client IP is disabled by default for IP targets, and enabled by default for instance targets. Preserve client IP has no effect on traffic converted from IPv4 to IPv6 or from IPv6 to IPv4; the source IP of this type of traffic is always the private IP address of the Network Load Balancer. This could cause clients that have their traffic converted to bypass the specified CIDRs that are allowed to access the NLB. This annotation will be ignored if service.beta.kubernetes.io/aws-load-balancer-security-groups is specified. Example service.beta.kubernetes.io/load-balancer-source-ranges: 10.0.0.0/24

service.beta.kubernetes.io/aws-load-balancer-scheme specifies whether the NLB will be internet-facing or internal. Valid values are internal and internet-facing. If not specified, the default is internal. Example service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"

service.beta.kubernetes.io/aws-load-balancer-internal specifies whether the NLB will be internet-facing or internal. deprecation note This annotation is deprecated starting with the v2.2.0 release in favor of the new aws-load-balancer-scheme annotation. It will be supported, but in case of ties, aws-load-balancer-scheme gets precedence. Example service.beta.kubernetes.io/aws-load-balancer-internal: "true"

service.beta.kubernetes.io/aws-load-balancer-security-groups specifies the frontend securityGroups you want to attach to an NLB. When this annotation is not present, the controller will automatically create one security group. The security group will be attached to the LoadBalancer and allow access from the inbound-cidrs to the listen-ports. Also, the securityGroups for target instances/ENIs will be modified to allow inbound traffic from this securityGroup. If you specify this annotation, you need to configure the security groups on your target instances/ENIs to allow inbound traffic from the load balancer. You could also set manage-backend-security-group-rules if you want the controller to manage the security group rules. Both the name and ID of securityGroups are supported. A name matches a Name tag, not the groupName attribute. Example service.beta.kubernetes.io/aws-load-balancer-security-groups: sg-xxxx, nameOfSg1, nameOfSg2

service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules specifies whether the controller should automatically add the ingress rules to the instance/ENI security group. If you disable the automatic management of security group rules for an NLB, you will need to manually add appropriate ingress rules to your EC2 instance or ENI security groups to allow access to the traffic and health check ports. Example service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules: "false"
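A sketch of attaching pre-created frontend security groups while letting the controller keep the backend rules in sync; the IDs and names are hypothetical:

annotations:
  service.beta.kubernetes.io/aws-load-balancer-security-groups: sg-0123456789abcdef0, my-frontend-sg
  # let the controller add ingress rules on the instance/ENI security groups
  service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules: "true"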
Legacy Cloud Provider ¶ The AWS Load Balancer Controller manages Kubernetes Services in a way that is compatible with the AWS cloud provider's legacy service controller. For users on v2.5.0+, the AWS LBC provides a mutating webhook for service resources to set the spec.loadBalancerClass field for Services of type LoadBalancer, effectively making the AWS LBC the default controller for Services of type LoadBalancer. Users can disable this feature and revert to using the AWS Cloud Controller Manager as the default service controller by setting the helm chart value enableServiceMutatorWebhook to false with --set enableServiceMutatorWebhook=false. For users on older versions, the annotation service.beta.kubernetes.io/aws-load-balancer-type is used to determine which controller reconciles the service. If the annotation value is nlb-ip or external, recent versions of the legacy cloud provider ignore the Service resource so that the AWS LBC can take over. For all other values of the annotation, the legacy cloud provider will handle the service. Note that this annotation should be specified during service creation and not edited later. Support for the annotation was added to the legacy cloud provider in Kubernetes v1.20, and is backported to v1.18.18+ and v1.19.10+.
limitations If you modify this annotation after service creation, there is no effect. Example service.beta.kubernetes.io/aws-load-balancer-name: custom-name service.beta.kubernetes.io/aws-load-balancer-type specifies the load balancer type. This controller reconciles those service resources with this annotation set to either nlb-ip or external . Tip This annotation specifies the controller used to provision LoadBalancers (as specified in legacy-cloud-provider ). Refer to lb-scheme to specify whether the LoadBalancer is internet-facing or internal. [Deprecated] For type nlb-ip , the controller will provision an NLB with targets registered by IP address. This value is supported for backwards compatibility. For type external , the NLB target type depends on the nlb-target-type annotation. limitations This annotation should not be modified after service creation. Example service.beta.kubernetes.io/aws-load-balancer-type: external service.beta.kubernetes.io/aws-load-balancer-nlb-target-type specifies the target type to configure for NLB. You can choose between instance and ip . instance mode will route traffic to all EC2 instances within cluster on the NodePort opened for your service. The kube-proxy on the individual worker nodes sets up the forwarding of the traffic from the NodePort to the pods behind the service. service must be of type NodePort or LoadBalancer for instance targets for k8s 1.22 and later if spec.allocateLoadBalancerNodePorts is set to false , NodePort must be allocated manually default value If you configure spec.loadBalancerClass , the controller defaults to instance target type NodePort allocation k8s version 1.22 and later support disabling NodePort allocation by setting the service field spec.allocateLoadBalancerNodePorts to false . If the NodePort is not allocated for a service port, the controller will fail to reconcile instance mode NLB. ip mode will route traffic directly to the pod IP. In this mode, AWS NLB sends traffic directly to the Kubernetes pods behind the service, eliminating the need for an extra network hop through the worker nodes in the Kubernetes cluster. ip target mode supports pods running on AWS EC2 instances and AWS Fargate network plugin must use native AWS VPC networking configuration for pod IP, for example Amazon VPC CNI plugin . Example service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance service.beta.kubernetes.io/aws-load-balancer-subnets specifies the Availability Zone the NLB will route traffic to. See Network Load Balancers for more details. Tip Subnets are auto-discovered if this annotation is not specified, see Subnet Discovery for further details. You must specify at least one subnet in any of the AZs, both subnetID or subnetName(Name tag on subnets) can be used. limitations Each subnets must be from a different Availability Zone AWS has restrictions on disabling existing subnets for NLB. As a result, you might not be able to edit this annotation once the NLB gets provisioned. Example service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-xxxx, mySubnet service.beta.kubernetes.io/aws-load-balancer-alpn-policy allows you to configure the ALPN policies on the load balancer. supported policies HTTP1Only Negotiate only HTTP/1.*. The ALPN preference list is http/1.1, http/1.0. HTTP2Only Negotiate only HTTP/2. The ALPN preference list is h2. HTTP2Optional Prefer HTTP/1.* over HTTP/2 (which can be useful for HTTP/2 testing). The ALPN preference list is http/1.1, http/1.0, h2. HTTP2Preferred Prefer HTTP/2 over HTTP/1.*. 
The ALPN preference list is h2, http/1.1, http/1.0. None Do not negotiate ALPN. This is the default. Example service.beta.kubernetes.io/aws-load-balancer-alpn-policy: HTTP2Preferred service.beta.kubernetes.io/aws-load-balancer-target-node-labels specifies which nodes to include in the target group registration for instance target type. Example service.beta.kubernetes.io/aws-load-balancer-target-node-labels: label1=value1, label2=value2 service.beta.kubernetes.io/aws-load-balancer-eip-allocations specifies a list of elastic IP address configuration for an internet-facing NLB. Note This configuration is optional, and you can use it to assign static IP addresses to your NLB You must specify the same number of eip allocations as load balancer subnets annotation NLB must be internet-facing Example service.beta.kubernetes.io/aws-load-balancer-eip-allocations: eipalloc-xyz, eipalloc-zzz service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses specifies a list of private IPv4 addresses for an internal NLB. Note NLB must be internal This configuration is optional, and you can use it to assign static IPv4 addresses to your NLB You must specify the same number of private IPv4 addresses as load balancer subnets annotation You must specify the IPv4 addresses from the load balancer subnet IPv4 ranges Example service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses: 192.168.10.15, 192.168.32.16 service.beta.kubernetes.io/aws-load-balancer-ipv6-addresses specifies a list of IPv6 addresses for an dualstack NLB. Note NLB must be dualstack This configuration is optional, and you can use it to assign static IPv6 addresses to your NLB You must specify the same number of private IPv6 addresses as load balancer subnets annotation You must specify the IPv6 addresses from the load balancer subnet IPv6 ranges Example service.beta.kubernetes.io/aws-load-balancer-ipv6-addresses: 2600:1f13:837:8501::1, 2600:1f13:837:8504::1","title":"Traffic Routing"},{"location":"guide/service/annotations/#traffic-listening","text":"Traffic Listening can be controlled with following annotations: service.beta.kubernetes.io/aws-load-balancer-ip-address-type specifies the IP address type of NLB. Example service.beta.kubernetes.io/aws-load-balancer-ip-address-type: ipv4","title":"Traffic Listening"},{"location":"guide/service/annotations/#resource-attributes","text":"NLB resource attributes can be controlled via the following annotations: service.beta.kubernetes.io/aws-load-balancer-proxy-protocol specifies whether to enable proxy protocol v2 on the target group. Set to '*' to enable proxy protocol v2. This annotation takes precedence over the annotation service.beta.kubernetes.io/aws-load-balancer-target-group-attributes for proxy protocol v2 configuration. The only valid value for this annotation is * . service.beta.kubernetes.io/aws-load-balancer-target-group-attributes specifies the Target Group Attributes to be configured. 
Example set the deregistration delay to 120 seconds (available range is 0-3600 seconds) service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: deregistration_delay.timeout_seconds=120 enable source IP affinity service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: stickiness.enabled=true,stickiness.type=source_ip enable proxy protocol version 2 service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: proxy_protocol_v2.enabled=true enable connection termination on deregistration service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: deregistration_delay.connection_termination.enabled=true enable client IP preservation service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: preserve_client_ip.enabled=true service.beta.kubernetes.io/aws-load-balancer-attributes specifies Load Balancer Attributes that should be applied to the NLB. Only attributes defined in the annotation will be updated. To unset any AWS defaults(e.g. Disabling access logs after having them enabled once), the values need to be explicitly set to the original values( access_logs.s3.enabled=false ) and omitting them is not sufficient. Custom attributes set in this annotation's config map will be overriden by annotation-specific attributes. For backwards compatibility, existing annotations for the individual load balancer attributes get precedence in case of ties. If deletion_protection.enabled=true is in the annotation, the controller will not be able to delete the NLB during reconciliation. Once the attribute gets edited to deletion_protection.enabled=false during reconciliation, the deployer will force delete the resource. Please note, if the deletion protection is not enabled via annotation (e.g. via AWS console), the controller still deletes the underlying resource. Example enable access log to s3 service.beta.kubernetes.io/aws-load-balancer-attributes: access_logs.s3.enabled=true,access_logs.s3.bucket=my-access-log-bucket,access_logs.s3.prefix=my-app enable NLB deletion protection service.beta.kubernetes.io/aws-load-balancer-attributes: deletion_protection.enabled=true enable cross zone load balancing service.beta.kubernetes.io/aws-load-balancer-attributes: load_balancing.cross_zone.enabled=true the following annotations are deprecated in v2.3.0 release in favor of service.beta.kubernetes.io/aws-load-balancer-attributes service.beta.kubernetes.io/aws-load-balancer-access-log-enabled service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled","title":"Resource attributes"},{"location":"guide/service/annotations/#aws-resource-tags","text":"The AWS Load Balancer Controller automatically applies following tags to the AWS resources it creates (NLB/TargetGroups/Listener/ListenerRule): elbv2.k8s.aws/cluster: ${clusterName} service.k8s.aws/stack: ${stackID} service.k8s.aws/resource: ${resourceID} In addition, you can use annotations to specify additional tags service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags specifies additional tags to apply to the AWS resources. 
you cannot override the default controller tags mentioned above or the tags specified in the --default-tags controller flag. If any of the tags conflict with the ones configured via the --external-managed-tags controller flag, the controller fails to reconcile the service Example service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: Environment=dev,Team=test","title":"AWS Resource Tags"},{"location":"guide/service/annotations/#health-check","text":"Health check on target groups can be configured with the following annotations: service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol specifies the target group health check protocol. you can specify tcp , http , or https tcp is the default health check protocol if the service spec.externalTrafficPolicy is Cluster , and http if it is Local if the service spec.externalTrafficPolicy is Local , do not use tcp for the health check Supports only a single protocol per service Example service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: http service.beta.kubernetes.io/aws-load-balancer-healthcheck-port specifies the TCP port to use for target group health check. default value if you do not specify the health check port, the default value will be spec.healthCheckNodePort when externalTrafficPolicy=Local or traffic-port otherwise. Example set the health check port to traffic-port service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: traffic-port set the health check port to port 80 service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: \"80\" service.beta.kubernetes.io/aws-load-balancer-healthcheck-path specifies the http path for the health check in case of http/https protocol. Example service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: /healthz service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold specifies the consecutive health check successes required before a target is considered healthy. Example service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: \"3\" service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold specifies the consecutive health check failures before a target gets marked unhealthy. Example service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: \"3\" service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval specifies the interval between consecutive health checks. Example service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: \"10\" service.beta.kubernetes.io/aws-load-balancer-healthcheck-success-codes specifies the http success codes for the health check in case of http/https protocol. Example service.beta.kubernetes.io/aws-load-balancer-healthcheck-success-codes: \"200-399\" service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout specifies the target group health check timeout. The target has to respond within the timeout for a successful health check. Note The controller currently ignores the timeout configuration due to the limitations on the AWS NLB. The default timeout for TCP is 10s and HTTP is 6s. Example service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: \"10\"","title":"Health Check"},{"location":"guide/service/annotations/#tls","text":"You can configure TLS support via the following annotations: service.beta.kubernetes.io/aws-load-balancer-ssl-cert specifies the ARN of one or more certificates managed by the AWS Certificate Manager .
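As a hedged sketch, attaching more than one certificate ARN is a comma-separated list (the ARNs are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: echoserver
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:xxxxx:certificate/cert-1,arn:aws:acm:us-west-2:xxxxx:certificate/cert-2
spec:
  type: LoadBalancer
  loadBalancerClass: service.k8s.aws/nlb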
The first certificate in the list is the default certificate and the remaining certificates are for the optional certificate list. See Server Certificates for further details. Example service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:xxxxx:certificate/xxxxxxx service.beta.kubernetes.io/aws-load-balancer-ssl-ports specifies the frontend ports with TLS listeners. You must configure at least one certificate for TLS listeners You can specify a list of port names or port values, * does not match any ports If you don't specify this annotation, the controller creates a TLS listener for all the service ports Specify this annotation if you need both TLS and non-TLS listeners on the same load balancer Example service.beta.kubernetes.io/aws-load-balancer-ssl-ports: 443, custom-port service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy specifies the Security Policy for NLB frontend connections, allowing you to control the protocol and ciphers. Example service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: ELBSecurityPolicy-TLS13-1-2-2021-06 service.beta.kubernetes.io/aws-load-balancer-backend-protocol specifies whether to use TLS for the backend traffic between the load balancer and the kubernetes pods. If you specify ssl as the backend protocol, NLB uses TLS connections for the traffic to your kubernetes pods in case of TLS listeners You can specify ssl or tcp (default) Example service.beta.kubernetes.io/aws-load-balancer-backend-protocol: ssl","title":"TLS"},{"location":"guide/service/annotations/#access-control","text":"Load balancer access can be controlled via the following annotations: service.beta.kubernetes.io/load-balancer-source-ranges specifies the CIDRs that are allowed to access the NLB. Tip we recommend specifying CIDRs in the service spec.loadBalancerSourceRanges instead Default 0.0.0.0/0 will be used if the IPAddressType is \"ipv4\" 0.0.0.0/0 and ::/0 will be used if the IPAddressType is \"dualstack\" The VPC CIDR will be used if service.beta.kubernetes.io/aws-load-balancer-scheme is internal This annotation will be ignored in case preserve client IP is not enabled. - preserve client IP is disabled by default for IP targets - preserve client IP is enabled by default for instance targets Preserve client IP has no effect on traffic converted from IPv4 to IPv6 and on traffic converted from IPv6 to IPv4. The source IP of this type of traffic is always the private IP address of the Network Load Balancer. - This could cause the clients that have their traffic converted to bypass the specified CIDRs that are allowed to access the NLB. This annotation will be ignored if service.beta.kubernetes.io/aws-load-balancer-security-groups is specified. Example service.beta.kubernetes.io/load-balancer-source-ranges: 10.0.0.0/24 service.beta.kubernetes.io/aws-load-balancer-scheme specifies whether the NLB will be internet-facing or internal. Valid values are internal , internet-facing . If not specified, the default is internal . Example service.beta.kubernetes.io/aws-load-balancer-scheme: \"internet-facing\" service.beta.kubernetes.io/aws-load-balancer-internal specifies whether the NLB will be internet-facing or internal. deprecation note This annotation is deprecated starting with the v2.2.0 release in favor of the new aws-load-balancer-scheme annotation. It will be supported, but in case of ties, the aws-load-balancer-scheme gets precedence.
Example service.beta.kubernetes.io/aws-load-balancer-internal: \"true\" service.beta.kubernetes.io/aws-load-balancer-security-groups specifies the frontend securityGroups you want to attach to an NLB. When this annotation is not present, the controller will automatically create one security group. The security group will be attached to the LoadBalancer and allow access from inbound-cidrs to the listen-ports . Also, the securityGroups for target instances/ENIs will be modified to allow inbound traffic from this securityGroup. If you specify this annotation, you need to configure the security groups on your target instances/ENIs to allow inbound traffic from the load balancer. You could also set the manage-backend-security-group-rules if you want the controller to manage the security group rules. Both name and ID of securityGroups are supported. Name matches a Name tag, not the groupName attribute. Example service.beta.kubernetes.io/aws-load-balancer-security-groups: sg-xxxx, nameOfSg1, nameOfSg2 service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules specifies whether the controller should automatically add the ingress rules to the instance/ENI security group. If you disable the automatic management of security group rules for an NLB, you will need to manually add appropriate ingress rules to your EC2 instance or ENI security groups to allow access to the traffic and health check ports. Example service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules: \"false\"","title":"Access control"},{"location":"guide/service/annotations/#legacy-cloud-provider","text":"The AWS Load Balancer Controller manages Kubernetes Services in a compatible way with the AWS cloud provider's legacy service controller. For users on v2.5.0+, the AWS LBC provides a mutating webhook for service resources to set the spec.loadBalancerClass field for Services of type LoadBalancer, effectively making the AWS LBC the default controller for Services of type LoadBalancer. Users can disable this feature and revert to using the AWS Cloud Controller Manager as the default service controller by setting the helm chart value enableServiceMutatorWebhook to false with --set enableServiceMutatorWebhook=false . For users on older versions, the annotation service.beta.kubernetes.io/aws-load-balancer-type is used to determine which controller reconciles the service. If the annotation value is nlb-ip or external , recent versions of the legacy cloud provider ignore the Service resource so that the AWS LBC can take over. For all other values of the annotation, the legacy cloud provider will handle the service. Note that this annotation should be specified during service creation and not edited later. Support for the annotation was added to the legacy cloud provider in Kubernetes v1.20, and is backported to v1.18.18+ and v1.19.10+.","title":"Legacy Cloud Provider"},{"location":"guide/service/nlb/","text":"Network Load Balancer \u00b6 The AWS Load Balancer Controller (LBC) supports reconciliation for Kubernetes Service resources of type LoadBalancer by provisioning an AWS Network Load Balancer (NLB) with an instance or ip target type . Secure by default Since the v2.2.0 release, the LBC provisions an internal NLB by default.
To create an internet-facing NLB, the following annotation is required on your service: service.beta.kubernetes.io/aws-load-balancer-scheme : internet-facing For backwards compatibility, if the service.beta.kubernetes.io/aws-load-balancer-scheme annotation is absent, an existing NLB's scheme remains unchanged. Prerequisites \u00b6 LBC >= v2.2.0 For Kubernetes Service resources of type LoadBalancer : Kubernetes >= v1.20 or Kubernetes >= v1.19.10 for 1.19 or Kubernetes >= v1.18.18 for 1.18 or EKS >= v1.16 For Kubernetes Service resources of type NodePort : Kubernetes >= v1.16 For ip target type: Pods have native AWS VPC networking configured. For more information, see the Amazon VPC CNI plugin documentation. Configuration \u00b6 By default, Kubernetes Service resources of type LoadBalancer get reconciled by the Kubernetes controller built into the CloudProvider component of the kube-controller-manager or the cloud-controller-manager (also known as the in-tree controller). In order for the LBC to manage the reconciliation of Kubernetes Service resources of type LoadBalancer , you need to offload the reconciliation from the in-tree controller to the LBC, explicitly. With LoadBalancerClass The LBC supports the LoadBalancerClass feature since the v2.4.0 release for Kubernetes v1.22+ clusters. The LoadBalancerClass feature provides a CloudProvider agnostic way of offloading the reconciliation for Kubernetes Service resources of type LoadBalancer to an external controller. When you specify the spec.loadBalancerClass to be service.k8s.aws/nlb on a Kubernetes Service resource of type LoadBalancer , the LBC takes charge of reconciliation by provisioning an NLB. Warning If you modify a Service resource with matching spec.loadBalancerClass by changing its type from LoadBalancer to anything else, the controller will clean up the provisioned NLB for that Service. If the spec.loadBalancerClass is set to a loadBalancerClass that isn't recognized by the LBC, it ignores the Service resource, regardless of the service.beta.kubernetes.io/aws-load-balancer-type annotation. Tip By default, the NLB uses the instance target type. You can customize it using the service.beta.kubernetes.io/aws-load-balancer-nlb-target-type annotation . The LBC uses service.k8s.aws/nlb as the default LoadBalancerClass . You can customize it to a different value using the controller flag --load-balancer-class . Example: instance mode apiVersion : v1 kind : Service metadata : name : echoserver annotations : service.beta.kubernetes.io/aws-load-balancer-nlb-target-type : instance spec : selector : app : echoserver ports : - port : 80 targetPort : 8080 protocol : TCP type : LoadBalancer loadBalancerClass : service.k8s.aws/nlb Example: ip mode apiVersion : v1 kind : Service metadata : name : echoserver annotations : service.beta.kubernetes.io/aws-load-balancer-nlb-target-type : ip spec : selector : app : echoserver ports : - port : 80 targetPort : 8080 protocol : TCP type : LoadBalancer loadBalancerClass : service.k8s.aws/nlb With service.beta.kubernetes.io/aws-load-balancer-type annotation The AWS in-tree controller supports an AWS-specific way of offloading the reconciliation for Kubernetes Service resources of type LoadBalancer to an external controller. When you specify the service.beta.kubernetes.io/aws-load-balancer-type annotation to be external on a Kubernetes Service resource of type LoadBalancer , the in-tree controller ignores the Service resource.
In addition, if you specify the service.beta.kubernetes.io/aws-load-balancer-nlb-target-type annotation on the Service resource, the LBC takes charge of reconciliation by provisioning an NLB. Warning It's not recommended to modify or add the service.beta.kubernetes.io/aws-load-balancer-type annotation on an existing Service resource. If a change is desired, delete the existing Service resource and create a new one instead of modifying an existing Service. If you modify this annotation on an existing Service resource, you might end up with leaked LBC resources. backwards compatibility for nlb-ip type For backwards compatibility, both the in-tree and LBC controllers support nlb-ip as a value for the service.beta.kubernetes.io/aws-load-balancer-type annotation. The controllers treat it as if you specified both of the following annotations: service.beta.kubernetes.io/aws-load-balancer-type: external service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip Example: instance mode apiVersion : v1 kind : Service metadata : name : echoserver annotations : service.beta.kubernetes.io/aws-load-balancer-type : external service.beta.kubernetes.io/aws-load-balancer-nlb-target-type : instance spec : selector : app : echoserver ports : - port : 80 targetPort : 8080 protocol : TCP type : LoadBalancer Example: ip mode apiVersion : v1 kind : Service metadata : name : echoserver annotations : service.beta.kubernetes.io/aws-load-balancer-type : external service.beta.kubernetes.io/aws-load-balancer-nlb-target-type : ip spec : selector : app : echoserver ports : - port : 80 targetPort : 8080 protocol : TCP type : LoadBalancer Protocols \u00b6 The LBC supports both TCP and UDP protocols. The controller also configures TLS termination on your NLB if you configure the Service with a certificate annotation. In the case of TCP, an NLB with IP targets doesn't pass the client source IP address, unless you specifically configure it to do so using target group attributes. Your application pods might not see the actual client IP address, even if the NLB passes it along, for example if you're using instance mode with externalTrafficPolicy set to Cluster . In such cases, you can configure NLB proxy protocol v2 using an annotation if you need visibility into the client source IP address on your application pods. To enable proxy protocol v2, apply the following annotation to your Service: service.beta.kubernetes.io/aws-load-balancer-proxy-protocol : \"*\" If you enable proxy protocol v2, NLB health checks with HTTP/HTTPS only work if the health check port supports proxy protocol v2. Due to this behavior, you shouldn't configure proxy protocol v2 with NLB instance mode and externalTrafficPolicy set to Local . Subnet tagging requirements \u00b6 See Subnet Discovery for details on configuring Elastic Load Balancing for public or private placement. Security group \u00b6 From v2.6.0, the AWS LBC creates and attaches frontend and backend security groups to NLB by default. For more information please see the security groups documentation . In older versions, the controller by default adds inbound rules to the worker node security groups, to allow inbound traffic from an NLB. disable worker node security group rule management You can disable the worker node security group rule management using an annotation .
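A minimal sketch of that opt-out (the annotation is documented under Access control above; with it set to \"false\" you manage the worker node ingress rules yourself):
apiVersion: v1
kind: Service
metadata:
  name: echoserver
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules: \"false\"
spec:
  type: LoadBalancer
  loadBalancerClass: service.k8s.aws/nlb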
Worker node security groups selection \u00b6 The controller automatically selects the worker node security groups that it modifies to allow inbound traffic using the following rules: For instance mode, the security group of each backend worker node's primary elastic network interface (ENI) is selected. For ip mode, the security group of each backend pod's ENI is selected. Multiple security groups on an ENI If there are multiple security groups attached to an ENI, the controller expects only one security group tagged with the following tags: Key Value kubernetes.io/cluster/${cluster-name} owned or shared ${cluster-name} is the name of the Kubernetes cluster. If it is possible for multiple security groups with the tag kubernetes.io/cluster/${cluster-name} to be on a target ENI, you may use the --service-target-eni-security-group-tags flag to specify additional tags that must also match in order for a security group to be used. Worker node security groups rules \u00b6 When client IP preservation is enabled Rule Protocol Port(s) IpRanges(s) Client Traffic spec.ports[*].protocol spec.ports[*].port Traffic Source CIDRs Health Check Traffic TCP Health Check Ports NLB Subnet CIDRs When client IP preservation is disabled Rule Protocol Port(s) IpRange(s) Client Traffic spec.ports[*].protocol spec.ports[*].port NLB Subnet CIDRs Health Check Traffic TCP Health Check Ports NLB Subnet CIDRs","title":"Network Load Balancer"},{"location":"guide/service/nlb/#network-load-balancer","text":"The AWS Load Balancer Controller (LBC) supports reconciliation for Kubernetes Service resources of type LoadBalancer by provisioning an AWS Network Load Balancer (NLB) with an instance or ip target type . Secure by default Since the v2.2.0 release, the LBC provisions an internal NLB by default. To create an internet-facing NLB, the following annotation is required on your service: service.beta.kubernetes.io/aws-load-balancer-scheme : internet-facing For backwards compatibility, if the service.beta.kubernetes.io/aws-load-balancer-scheme annotation is absent, an existing NLB's scheme remains unchanged.","title":"Network Load Balancer"},{"location":"guide/service/nlb/#prerequisites","text":"LBC >= v2.2.0 For Kubernetes Service resources of type LoadBalancer : Kubernetes >= v1.20 or Kubernetes >= v1.19.10 for 1.19 or Kubernetes >= v1.18.18 for 1.18 or EKS >= v1.16 For Kubernetes Service resources of type NodePort : Kubernetes >= v1.16 For ip target type: Pods have native AWS VPC networking configured. For more information, see the Amazon VPC CNI plugin documentation.","title":"Prerequisites"},{"location":"guide/service/nlb/#configuration","text":"By default, Kubernetes Service resources of type LoadBalancer get reconciled by the Kubernetes controller built into the CloudProvider component of the kube-controller-manager or the cloud-controller-manager (also known as the in-tree controller). In order for the LBC to manage the reconciliation of Kubernetes Service resources of type LoadBalancer , you need to offload the reconciliation from the in-tree controller to the LBC, explicitly. With LoadBalancerClass The LBC supports the LoadBalancerClass feature since the v2.4.0 release for Kubernetes v1.22+ clusters. The LoadBalancerClass feature provides a CloudProvider agnostic way of offloading the reconciliation for Kubernetes Service resources of type LoadBalancer to an external controller.
When you specify the spec.loadBalancerClass to be service.k8s.aws/nlb on a Kubernetes Service resource of type LoadBalancer , the LBC takes charge of reconciliation by provisioning an NLB. Warning If you modify a Service resource with matching spec.loadBalancerClass by changing its type from LoadBalancer to anything else, the controller will clean up the provisioned NLB for that Service. If the spec.loadBalancerClass is set to a loadBalancerClass that isn't recognized by the LBC, it ignores the Service resource, regardless of the service.beta.kubernetes.io/aws-load-balancer-type annotation. Tip By default, the NLB uses the instance target type. You can customize it using the service.beta.kubernetes.io/aws-load-balancer-nlb-target-type annotation . The LBC uses service.k8s.aws/nlb as the default LoadBalancerClass . You can customize it to a different value using the controller flag --load-balancer-class . Example: instance mode apiVersion : v1 kind : Service metadata : name : echoserver annotations : service.beta.kubernetes.io/aws-load-balancer-nlb-target-type : instance spec : selector : app : echoserver ports : - port : 80 targetPort : 8080 protocol : TCP type : LoadBalancer loadBalancerClass : service.k8s.aws/nlb Example: ip mode apiVersion : v1 kind : Service metadata : name : echoserver annotations : service.beta.kubernetes.io/aws-load-balancer-nlb-target-type : ip spec : selector : app : echoserver ports : - port : 80 targetPort : 8080 protocol : TCP type : LoadBalancer loadBalancerClass : service.k8s.aws/nlb With service.beta.kubernetes.io/aws-load-balancer-type annotation The AWS in-tree controller supports an AWS-specific way of offloading the reconciliation for Kubernetes Service resources of type LoadBalancer to an external controller. When you specify the service.beta.kubernetes.io/aws-load-balancer-type annotation to be external on a Kubernetes Service resource of type LoadBalancer , the in-tree controller ignores the Service resource. In addition, if you specify the service.beta.kubernetes.io/aws-load-balancer-nlb-target-type annotation on the Service resource, the LBC takes charge of reconciliation by provisioning an NLB. Warning It's not recommended to modify or add the service.beta.kubernetes.io/aws-load-balancer-type annotation on an existing Service resource. If a change is desired, delete the existing Service resource and create a new one instead of modifying an existing Service. If you modify this annotation on an existing Service resource, you might end up with leaked LBC resources. backwards compatibility for nlb-ip type For backwards compatibility, both the in-tree and LBC controllers support nlb-ip as a value for the service.beta.kubernetes.io/aws-load-balancer-type annotation.
The controllers treat it as if you specified both of the following annotations: service.beta.kubernetes.io/aws-load-balancer-type: external service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip Example: instance mode apiVersion : v1 kind : Service metadata : name : echoserver annotations : service.beta.kubernetes.io/aws-load-balancer-type : external service.beta.kubernetes.io/aws-load-balancer-nlb-target-type : instance spec : selector : app : echoserver ports : - port : 80 targetPort : 8080 protocol : TCP type : LoadBalancer Example: ip mode apiVersion : v1 kind : Service metadata : name : echoserver annotations : service.beta.kubernetes.io/aws-load-balancer-type : external service.beta.kubernetes.io/aws-load-balancer-nlb-target-type : ip spec : selector : app : echoserver ports : - port : 80 targetPort : 8080 protocol : TCP type : LoadBalancer","title":"Configuration"},{"location":"guide/service/nlb/#protocols","text":"The LBC supports both TCP and UDP protocols. The controller also configures TLS termination on your NLB if you configure the Service with a certificate annotation. In the case of TCP, an NLB with IP targets doesn't pass the client source IP address, unless you specifically configure it to do so using target group attributes. Your application pods might not see the actual client IP address, even if the NLB passes it along, for example if you're using instance mode with externalTrafficPolicy set to Cluster . In such cases, you can configure NLB proxy protocol v2 using an annotation if you need visibility into the client source IP address on your application pods. To enable proxy protocol v2, apply the following annotation to your Service: service.beta.kubernetes.io/aws-load-balancer-proxy-protocol : \"*\" If you enable proxy protocol v2, NLB health checks with HTTP/HTTPS only work if the health check port supports proxy protocol v2. Due to this behavior, you shouldn't configure proxy protocol v2 with NLB instance mode and externalTrafficPolicy set to Local .","title":"Protocols"},{"location":"guide/service/nlb/#subnet-tagging-requirements","text":"See Subnet Discovery for details on configuring Elastic Load Balancing for public or private placement.","title":"Subnet tagging requirements"},{"location":"guide/service/nlb/#security-group","text":"From v2.6.0, the AWS LBC creates and attaches frontend and backend security groups to NLB by default. For more information please see the security groups documentation . In older versions, the controller by default adds inbound rules to the worker node security groups, to allow inbound traffic from an NLB. disable worker node security group rule management You can disable the worker node security group rule management using an annotation .","title":"Security group"},{"location":"guide/service/nlb/#worker-node-security-groups-selection","text":"The controller automatically selects the worker node security groups that it modifies to allow inbound traffic using the following rules: For instance mode, the security group of each backend worker node's primary elastic network interface (ENI) is selected. For ip mode, the security group of each backend pod's ENI is selected. Multiple security groups on an ENI If there are multiple security groups attached to an ENI, the controller expects only one security group tagged with the following tags: Key Value kubernetes.io/cluster/${cluster-name} owned or shared ${cluster-name} is the name of the Kubernetes cluster.
If it is possible for multiple security groups with the tag kubernetes.io/cluster/${cluster-name} to be on a target ENI, you may use the --service-target-eni-security-group-tags flag to specify additional tags that must also match in order for a security group to be used.","title":"Worker node security groups selection"},{"location":"guide/service/nlb/#worker-node-security-groups-rules","text":"When client IP preservation is enabled Rule Protocol Port(s) IpRanges(s) Client Traffic spec.ports[*].protocol spec.ports[*].port Traffic Source CIDRs Health Check Traffic TCP Health Check Ports NLB Subnet CIDRs When client IP preservation is disabled Rule Protocol Port(s) IpRange(s) Client Traffic spec.ports[*].protocol spec.ports[*].port NLB Subnet CIDRs Health Check Traffic TCP Health Check Ports NLB Subnet CIDRs","title":"Worker node security groups rules"},{"location":"guide/targetgroupbinding/spec/","text":"Packages: elbv2.k8s.aws/v1beta1 elbv2.k8s.aws/v1beta1 Package v1beta1 contains API Schema definitions for the elbv2 v1beta1 API group Resource Types: TargetGroupBinding TargetGroupBinding TargetGroupBinding is the Schema for the TargetGroupBinding API Field Description apiVersion string elbv2.k8s.aws/v1beta1 kind string TargetGroupBinding metadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. spec TargetGroupBindingSpec targetGroupARN string targetGroupARN is the Amazon Resource Name (ARN) for the TargetGroup. targetType TargetType (Optional) targetType is the TargetType of TargetGroup. If unspecified, it will be automatically inferred. serviceRef ServiceReference serviceRef is a reference to a Kubernetes Service and ServicePort. networking TargetGroupBindingNetworking (Optional) networking defines the networking rules to allow ELBV2 LoadBalancer to access targets in TargetGroup. status TargetGroupBindingStatus IPBlock ( Appears on: NetworkingPeer ) IPBlock defines source/destination IPBlock in networking rules. Field Description cidr string CIDR is the network CIDR. Both IPv4 and IPv6 CIDRs are accepted. NetworkingIngressRule ( Appears on: TargetGroupBindingNetworking ) NetworkingIngressRule defines a particular set of traffic that is allowed to access TargetGroup\u2019s targets. Field Description from []NetworkingPeer List of peers which should be able to access the targets in TargetGroup. At least one NetworkingPeer should be specified. ports []NetworkingPort List of ports which should be made accessible on the targets in TargetGroup. If ports is empty or unspecified, it defaults to all ports with TCP. NetworkingPeer ( Appears on: NetworkingIngressRule ) NetworkingPeer defines the source/destination peer for networking rules. Field Description ipBlock IPBlock (Optional) IPBlock defines an IPBlock peer. If specified, none of the other fields can be set. securityGroup SecurityGroup (Optional) SecurityGroup defines a SecurityGroup peer. If specified, none of the other fields can be set. NetworkingPort ( Appears on: NetworkingIngressRule ) NetworkingPort defines the port and protocol for networking rules. Field Description protocol NetworkingProtocol The protocol which traffic must match. If protocol is unspecified, it defaults to TCP. port k8s.io/apimachinery/pkg/util/intstr.IntOrString (Optional) The port which traffic must match. When NodePort endpoints (instance TargetType) are used, this must be a numerical port. When Port endpoints (ip TargetType) are used, this can be either a numerical or named port on pods.
if port is unspecified, it defaults to all ports. NetworkingProtocol ( string alias) ( Appears on: NetworkingPort ) NetworkingProtocol defines the protocol for networking rules. SecurityGroup ( Appears on: NetworkingPeer ) SecurityGroup defines reference to an AWS EC2 SecurityGroup. Field Description groupID string GroupID is the EC2 SecurityGroupID. ServiceReference ( Appears on: TargetGroupBindingSpec ) ServiceReference defines reference to a Kubernetes Service and its ServicePort. Field Description name string Name is the name of the Service. port k8s.io/apimachinery/pkg/util/intstr.IntOrString Port is the port of the ServicePort. TargetGroupBindingNetworking ( Appears on: TargetGroupBindingSpec ) TargetGroupBindingNetworking defines the networking rules to allow ELBV2 LoadBalancer to access targets in TargetGroup. Field Description ingress []NetworkingIngressRule (Optional) List of ingress rules to allow ELBV2 LoadBalancer to access targets in TargetGroup. TargetGroupBindingSpec ( Appears on: TargetGroupBinding ) TargetGroupBindingSpec defines the desired state of TargetGroupBinding Field Description targetGroupARN string targetGroupARN is the Amazon Resource Name (ARN) for the TargetGroup. targetType TargetType (Optional) targetType is the TargetType of TargetGroup. If unspecified, it will be automatically inferred. serviceRef ServiceReference serviceRef is a reference to a Kubernetes Service and ServicePort. networking TargetGroupBindingNetworking (Optional) networking defines the networking rules to allow ELBV2 LoadBalancer to access targets in TargetGroup. TargetGroupBindingStatus ( Appears on: TargetGroupBinding ) TargetGroupBindingStatus defines the observed state of TargetGroupBinding Field Description observedGeneration int64 (Optional) The generation observed by the TargetGroupBinding controller. TargetType ( string alias) ( Appears on: TargetGroupBindingSpec ) TargetType is the targetType of your ELBV2 TargetGroup. with instance TargetType, nodes with nodePort for your service will be registered as targets with ip TargetType, Pods with containerPort for your service will be registered as targets Generated with gen-crd-api-reference-docs on git commit 21418f44 .","title":"Specification"},{"location":"guide/targetgroupbinding/targetgroupbinding/","text":"TargetGroupBinding \u00b6 TargetGroupBinding is a custom resource (CR) that can expose your pods using an existing ALB TargetGroup or NLB TargetGroup . This will allow you to provision the load balancer infrastructure completely outside of Kubernetes but still manage the targets with Kubernetes Service. usage to support Ingress and Service The AWS LoadBalancer controller internally uses TargetGroupBinding to support the functionality for Ingress and Service resources as well. It automatically creates TargetGroupBinding in the same namespace as the Service used. You can view all TargetGroupBindings in a namespace by kubectl get targetgroupbindings -n -o wide TargetType \u00b6 TargetGroupBinding CR supports TargetGroups of either instance or ip TargetType. If TargetType is not explicitly specified, a mutating webhook will automatically call the AWS API to find the TargetType for your TargetGroup and set it to the correct value.
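Before the minimal sample that follows, here is a hedged sketch that also spells out targetType and a networking rule letting a load balancer security group reach the targets (the ARN and security group ID are placeholders; the fields are those from the specification above):
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: my-tgb
spec:
  targetGroupARN: arn:aws:elasticloadbalancing:us-west-2:xxxxx:targetgroup/my-tg/xxxxxxx
  targetType: ip
  serviceRef:
    name: awesome-service
    port: 80
  networking:
    ingress:
      - from:
          - securityGroup:
              groupID: sg-xxxxxxxx
        ports:
          - protocol: TCP
            port: 80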
Sample YAML \u00b6 apiVersion : elbv2.k8s.aws/v1beta1 kind : TargetGroupBinding metadata : name : my-tgb spec : serviceRef : name : awesome-service # route traffic to the awesome-service port : 80 targetGroupARN : NodeSelector \u00b6 Default Node Selector \u00b6 For TargetType: instance , all nodes of a cluster that match the following selector are added to the target group by default: matchExpressions : - key : node-role.kubernetes.io/master operator : DoesNotExist - key : node.kubernetes.io/exclude-from-external-load-balancers operator : DoesNotExist - key : alpha.service-controller.kubernetes.io/exclude-balancer operator : DoesNotExist - key : eks.amazonaws.com/compute-type operator : NotIn values : [ \"fargate\" ] Custom Node Selector \u00b6 TargetGroupBinding CR supports NodeSelector which is a LabelSelector . This will select nodes to attach to the instance TargetType target group and is merged with the default node selector above . apiVersion : elbv2.k8s.aws/v1beta1 kind : TargetGroupBinding metadata : name : my-tgb spec : nodeSelector : matchLabels : foo : bar ... Reference \u00b6 See the reference for TargetGroupBinding CR","title":"TargetGroupBinding"},{"location":"guide/targetgroupbinding/targetgroupbinding/#targetgroupbinding","text":"TargetGroupBinding is a custom resource (CR) that can expose your pods using an existing ALB TargetGroup or NLB TargetGroup . This will allow you to provision the load balancer infrastructure completely outside of Kubernetes but still manage the targets with Kubernetes Service. usage to support Ingress and Service The AWS LoadBalancer controller internally uses TargetGroupBinding to support the functionality for Ingress and Service resources as well. It automatically creates TargetGroupBinding in the same namespace as the Service used. You can view all TargetGroupBindings in a namespace by kubectl get targetgroupbindings -n -o wide","title":"TargetGroupBinding"},{"location":"guide/targetgroupbinding/targetgroupbinding/#targettype","text":"TargetGroupBinding CR supports TargetGroups of either instance or ip TargetType. If TargetType is not explicitly specified, a mutating webhook will automatically call the AWS API to find the TargetType for your TargetGroup and set it to the correct value.","title":"TargetType"},{"location":"guide/targetgroupbinding/targetgroupbinding/#sample-yaml","text":"apiVersion : elbv2.k8s.aws/v1beta1 kind : TargetGroupBinding metadata : name : my-tgb spec : serviceRef : name : awesome-service # route traffic to the awesome-service port : 80 targetGroupARN : ","title":"Sample YAML"},{"location":"guide/targetgroupbinding/targetgroupbinding/#nodeselector","text":"","title":"NodeSelector"},{"location":"guide/targetgroupbinding/targetgroupbinding/#default-node-selector","text":"For TargetType: instance , all nodes of a cluster that match the following selector are added to the target group by default: matchExpressions : - key : node-role.kubernetes.io/master operator : DoesNotExist - key : node.kubernetes.io/exclude-from-external-load-balancers operator : DoesNotExist - key : alpha.service-controller.kubernetes.io/exclude-balancer operator : DoesNotExist - key : eks.amazonaws.com/compute-type operator : NotIn values : [ \"fargate\" ]","title":"Default Node Selector"},{"location":"guide/targetgroupbinding/targetgroupbinding/#custom-node-selector","text":"TargetGroupBinding CR supports NodeSelector which is a LabelSelector .
This will select nodes to attach to the instance TargetType target group and is merged with the default node selector above . apiVersion : elbv2.k8s.aws/v1beta1 kind : TargetGroupBinding metadata : name : my-tgb spec : nodeSelector : matchLabels : foo : bar ...","title":"Custom Node Selector"},{"location":"guide/targetgroupbinding/targetgroupbinding/#reference","text":"See the reference for TargetGroupBinding CR","title":"Reference"},{"location":"guide/tasks/cognito_authentication/","text":"Setup Cognito/AWS Load Balancer Controller \u00b6 This document describes how to install the AWS Load Balancer Controller with AWS Cognito integration in a minimal capacity; other options and/or configurations may be required for production, on an app-to-app basis. Assumptions \u00b6 The following assumptions are observed regarding this procedure. ExternalDNS is installed in the cluster and will provide a custom URL for your ALB. To set up ExternalDNS, refer to the install instructions . Cognito Configuration \u00b6 Configure Cognito for use with AWS Load Balancer Controller using the following links with specified caveats. Create Cognito user pool Configure application integration On step 11.c for the Callback URL enter https:///oauth2/idpresponse . On step 11.d for Allowed OAuth Flows select authorization code grant and for Allowed OAuth Scopes select openid . AWS Load Balancer Controller Setup \u00b6 Install the AWS Load Balancer Controller using the install instructions with the following caveats. When setting up IAM Role Permissions, add the cognito-idp:DescribeUserPoolClient permission to the example policy. Deploying an Ingress \u00b6 Using the cognito-ingress-template you can fill in the variables to create an ALB ingress connected to your Cognito user pool for authentication.","title":"Cognito Authentication"},{"location":"guide/tasks/cognito_authentication/#setup-cognitoaws-load-balancer-controller","text":"This document describes how to install the AWS Load Balancer Controller with AWS Cognito integration in a minimal capacity; other options and/or configurations may be required for production, on an app-to-app basis.","title":"Setup Cognito/AWS Load Balancer Controller"},{"location":"guide/tasks/cognito_authentication/#assumptions","text":"The following assumptions are observed regarding this procedure. ExternalDNS is installed in the cluster and will provide a custom URL for your ALB. To set up ExternalDNS, refer to the install instructions .","title":"Assumptions"},{"location":"guide/tasks/cognito_authentication/#cognito-configuration","text":"Configure Cognito for use with AWS Load Balancer Controller using the following links with specified caveats. Create Cognito user pool Configure application integration On step 11.c for the Callback URL enter https:///oauth2/idpresponse . On step 11.d for Allowed OAuth Flows select authorization code grant and for Allowed OAuth Scopes select openid .","title":"Cognito Configuration"},{"location":"guide/tasks/cognito_authentication/#aws-load-balancer-controller-setup","text":"Install the AWS Load Balancer Controller using the install instructions with the following caveats.
When setting up IAM Role Permissions, add the cognito-idp:DescribeUserPoolClient permission to the example policy.","title":"AWS Load Balancer Controller Setup"},{"location":"guide/tasks/cognito_authentication/#deploying-an-ingress","text":"Using the cognito-ingress-template you can fill in the variables to create an ALB ingress connected to your Cognito user pool for authentication.","title":"Deploying an Ingress"},{"location":"guide/tasks/migrate_legacy_apps/","text":"Migrating From Legacy Apps with Manually Configured Target Groups \u00b6 Many organizations are decomposing old legacy apps into smaller services and components. During the transition they may be running a hybrid ecosystem with some parts of the app running on EC2 instances, some in Kubernetes microservices, and possibly even some in serverless environments like Lambda. The existing clients of the application expect all endpoints under one DNS entry and it's desirable to be able to route traffic at the ALB to services running outside the Kubernetes cluster. The actions annotation allows the definition of a forward rule to a previously configured target group. Learn more about the actions annotation at alb.ingress.kubernetes.io/actions.${action-name} Example Ingress Manifest \u00b6 apiVersion : networking.k8s.io/v1 kind : Ingress metadata : namespace : testcase name : echoserver annotations : alb.ingress.kubernetes.io/actions.legacy-app : '{\"Type\": \"forward\", \"TargetGroupArn\": \"legacy-tg-arn\"}' spec : ingressClassName : alb rules : - http : paths : - path : /v1/endpoints pathType : Exact backend : service : name : legacy-app port : name : use-annotation - path : /normal-path pathType : Exact backend : service : name : echoserver port : number : 80 Note The TargetGroupArn must be set and the user is responsible for configuring the target group in AWS before applying the forward rule.","title":"Migrating From Legacy Apps with Manually Configured Target Groups"},{"location":"guide/tasks/migrate_legacy_apps/#migrating-from-legacy-apps-with-manually-configured-target-groups","text":"Many organizations are decomposing old legacy apps into smaller services and components. During the transition they may be running a hybrid ecosystem with some parts of the app running on EC2 instances, some in Kubernetes microservices, and possibly even some in serverless environments like Lambda. The existing clients of the application expect all endpoints under one DNS entry and it's desirable to be able to route traffic at the ALB to services running outside the Kubernetes cluster. The actions annotation allows the definition of a forward rule to a previously configured target group.
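At its core this is a single annotation whose value is a JSON action document (shown here with the same legacy-tg-arn placeholder used in the manifest above):
alb.ingress.kubernetes.io/actions.legacy-app: '{\"Type\": \"forward\", \"TargetGroupArn\": \"legacy-tg-arn\"}'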
Learn more about the actions annotation at alb.ingress.kubernetes.io/actions.${action-name}","title":"Migrating From Legacy Apps with Manually Configured Target Groups"},{"location":"guide/tasks/migrate_legacy_apps/#example-ingress-manifest","text":"apiVersion : networking.k8s.io/v1 kind : Ingress metadata : namespace : testcase name : echoserver annotations : alb.ingress.kubernetes.io/actions.legacy-app : '{\"Type\": \"forward\", \"TargetGroupArn\": \"legacy-tg-arn\"}' spec : ingressClassName : alb rules : - http : paths : - path : /v1/endpoints pathType : Exact backend : service : name : legacy-app port : name : use-annotation - path : /normal-path pathType : Exact backend : service : name : echoserver port : number : 80 Note The TargetGroupArn must be set and the user is responsible for configuring the target group in AWS before applying the forward rule.","title":"Example Ingress Manifest"},{"location":"guide/tasks/ssl_redirect/","text":"Redirect Traffic from HTTP to HTTPS \u00b6 You can use the alb.ingress.kubernetes.io/ssl-redirect annotation to set up an ingress to redirect HTTP traffic to HTTPS Example Ingress Manifest \u00b6 apiVersion : networking.k8s.io/v1 kind : Ingress metadata : namespace : default name : ingress annotations : alb.ingress.kubernetes.io/certificate-arn : arn:aws:acm:us-west-2:xxxx:certificate/xxxxxx alb.ingress.kubernetes.io/listen-ports : '[{\"HTTP\": 80}, {\"HTTPS\":443}]' alb.ingress.kubernetes.io/ssl-redirect : '443' spec : ingressClassName : alb rules : - http : paths : - path : /users/* pathType : ImplementationSpecific backend : service : name : user-service port : number : 80 - path : /* pathType : ImplementationSpecific backend : service : name : default-service port : number : 80 Note alb.ingress.kubernetes.io/listen-ports annotation must at least include [{\"HTTP\": 80}, {\"HTTPS\":443}] to listen on 80 and 443.
alb.ingress.kubernetes.io/certificate-arn annotation must be set to allow listening for HTTPS traffic the ssl-redirect port must appear in the listen-ports annotation, and must be an HTTPS port","title":"Example Ingress Manifest"},{"location":"guide/tasks/ssl_redirect/#how-it-works","text":"If you enable SSL redirection, the controller configures each HTTP listener with a default action to redirect to HTTPS. The controller does not add any other rules to the HTTP listener. For the above example, the HTTP listener on port 80 will have a single default rule to redirect traffic to HTTPS on port 443.","title":"How it works"},{"location":"guide/use_cases/blue_green/","text":"Split Traffic \u00b6 You can configure an Application Load Balancer (ALB) to split traffic from the same listener across multiple target groups using rules. This facilitates A/B testing, blue/green deployment, and traffic management without additional tools. The Load Balancer Controller (LBC) supports defining this behavior alongside the standard configuration of an Ingress resource. More specifically, the ALB supports weighted target groups and advanced request routing. Weighted target group Multiple target groups can be attached to the same forward action of a listener rule, with a weight specified for each group. It allows developers to control how to distribute traffic to multiple versions of their application. For example, when you define a rule having two target groups with weights of 8 and 2, the load balancer will route 80 percent of the traffic to the first target group and 20 percent to the other. Advanced request routing In addition to the weighted target group, AWS announced the advanced request routing feature in 2019. Advanced request routing gives developers the ability to write rules (and route traffic) based on standard and custom HTTP headers and methods, the request path, the query string, and the source IP address. This new feature simplifies the application architecture by eliminating the need for a proxy fleet for routing, blocks unwanted traffic at the load balancer, and enables the implementation of A/B testing. Overview \u00b6 The ALB is configured to split traffic using annotations on the ingress resources. More specifically, the ingress annotation alb.ingress.kubernetes.io/actions.${service-name} configures custom actions on the listener. The body of the annotation is a JSON document that identifies an action type, and configures it. The supported actions are redirect , forward , and fixed-response . With the forward action, multiple target groups with different weights can be defined in the annotation. The LBC provisions the target groups and configures the listener rules as per the annotation to direct the traffic. Importantly: * The action-name in the annotation must match the service name in the Ingress rules. For example, the annotation alb.ingress.kubernetes.io/actions.blue-green matches the service name blue-green referenced in the Ingress rules. * The servicePort of the service in the Ingress rules must be use-annotation . Example \u00b6 The following ingress resource configures the ALB to forward all traffic to hello-kubernetes-v1 service (weight: 100 vs. 0). Note that the annotation name includes blue-green , which matches the service name referenced in the ingress rules. The annotation reference includes further examples of the JSON configuration for different actions.
apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: \"hello-kubernetes\" namespace: \"hello-kubernetes\" annotations: kubernetes.io/ingress.class: alb alb.ingress.kubernetes.io/scheme: internet-facing alb.ingress.kubernetes.io/target-type: ip alb.ingress.kubernetes.io/actions.blue-green: | { \"type\":\"forward\", \"forwardConfig\":{ \"targetGroups\":[ { \"serviceName\":\"hello-kubernetes-v1\", \"servicePort\":\"80\", \"weight\":100 }, { \"serviceName\":\"hello-kubernetes-v2\", \"servicePort\":\"80\", \"weight\":0 } ] } } labels: app: hello-kubernetes spec: rules: - http: paths: - path: / pathType: Prefix backend: service: name: blue-green port: name: use-annotation","title":"Blue/Green"},{"location":"guide/use_cases/blue_green/#split-traffic","text":"You can configure an Application Load Balancer (ALB) to split traffic from the same listener across multiple target groups using rules. This facilitates A/B testing, blue/green deployment, and traffic management without additional tools. The Load Balancer Controller (LBC) supports defining this behavior alongside the standard configuration of an Ingress resource. More specifically, the ALB supports weighted target groups and advanced request routing. Weighted target group Multiple target groups can be attached to the same forward action of a listener rule, with a weight specified for each group. It allows developers to control how to distribute traffic to multiple versions of their application. For example, when you define a rule having two target groups with weights of 8 and 2, the load balancer will route 80 percent of the traffic to the first target group and 20 percent to the other. Advanced request routing In addition to the weighted target group, AWS announced the advanced request routing feature in 2019. Advanced request routing gives developers the ability to write rules (and route traffic) based on standard and custom HTTP headers and methods, the request path, the query string, and the source IP address. This new feature simplifies the application architecture by eliminating the need for a proxy fleet for routing, blocks unwanted traffic at the load balancer, and enables the implementation of A/B testing.","title":"Split Traffic"},{"location":"guide/use_cases/blue_green/#overview","text":"The ALB is configured to split traffic using annotations on the ingress resources. More specifically, the ingress annotation alb.ingress.kubernetes.io/actions.${service-name} configures custom actions on the listener. The body of the annotation is a JSON document that identifies an action type, and configures it. The supported actions are redirect , forward , and fixed-response . With the forward action, multiple target groups with different weights can be defined in the annotation. The LBC provisions the target groups and configures the listener rules as per the annotation to direct the traffic. Importantly: * The action-name in the annotation must match the service name in the Ingress rules. For example, the annotation alb.ingress.kubernetes.io/actions.blue-green matches the service name blue-green referenced in the Ingress rules. * The servicePort of the service in the Ingress rules must be use-annotation .","title":"Overview"},{"location":"guide/use_cases/blue_green/#example","text":"The following ingress resource configures the ALB to forward all traffic to hello-kubernetes-v1 service (weight: 100 vs. 0). Note that the annotation name includes blue-green , which matches the service name referenced in the ingress rules.
The annotation reference includes further examples of the JSON configuration for different actions. apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: \"hello-kubernetes\" namespace: \"hello-kubernetes\" annotations: kubernetes.io/ingress.class: alb alb.ingress.kubernetes.io/scheme: internet-facing alb.ingress.kubernetes.io/target-type: ip alb.ingress.kubernetes.io/actions.blue-green: | { \"type\":\"forward\", \"forwardConfig\":{ \"targetGroups\":[ { \"serviceName\":\"hello-kubernetes-v1\", \"servicePort\":\"80\", \"weight\":100 }, { \"serviceName\":\"hello-kubernetes-v2\", \"servicePort\":\"80\", \"weight\":0 } ] } } labels: app: hello-kubernetes spec: rules: - http: paths: - path: / pathType: Prefix backend: service: name: blue-green port: name: use-annotation","title":"Example"},{"location":"guide/use_cases/frontend_sg/","text":"Frontend security groups limit client/internet traffic with a load balancer. This improves security by preventing unauthorized access to cluster services, and blocking unexpected outbound connections. Both AWS Network Load Balancers (NLBs) and Application Load Balancers (ALBs) support frontend security groups. Learn more about how the Load Balancer Controller uses Frontend and Backend Security Groups . Solution Overview \u00b6 Load balancers expose cluster workloads to a wider network. Creating a frontend security group limits access to these workloads (service or ingress resources). More specifically, a security group acts as a virtual firewall to control incoming and outgoing traffic. Inbound rules control the incoming traffic to your load balancer, and outbound rules control the outgoing traffic from your load balancer. Security groups are particularly suited for defining what access other AWS resources (services, EC2 instances) have to your cluster. For example, if you have an existing security group including EC2 instances, you can permit only that security group to access a service. In this example, you will restrict access to a cluster service. You will create a new security group for the frontend of a load balancer, and add an inbound rule permitting traffic. The rule may limit traffic to a specific port, CIDR, or existing security group. Prerequisites \u00b6 Kubernetes Cluster Version 1.22+ AWS Load Balancer Controller v2.6.0+ AWS CLI v2 Configure \u00b6 1. Find the VPC ID of your cluster \u00b6 $ aws eks describe-cluster --name --query \"cluster.resourcesVpcConfig.vpcId\" --output text vpc-0101XXXXa356 Ensure you have the right cluster name, AWS region, and the AWS CLI is configured. 2. Create a security group using the VPC ID \u00b6 $ aws ec2 create-security-group --group-name --description --vpc-id { \"GroupId\" : \"sg-0406XXXX645c\" } Note the security group ID. This will be the frontend security group for the load balancer. 3. Create your ingress rules \u00b6 Load balancers generally serve as an entrypoint for clients to access your cluster. This makes ingress rules especially important. For example, this rule permits all traffic on port 443: aws ec2 authorize-security-group-ingress --group-id --protocol all --port 443 --cidr 0 .0.0.0/0 Learn more about how to create an ingress rule with the AWS CLI. 4. Determine your egress rules (optional) \u00b6 By default, all outbound traffic is allowed. Further, security groups are stateful, and responses to an allowed connection will also be permitted. Learn how to create an egress rule with the AWS CLI. 5. 
Add the security group annotation to your Ingress or Service \u00b6 For Ingress resources , add the following annotation: apiVersion : networking.k8s.io/v1 kind : Ingress metadata : name : frontend annotations : alb.ingress.kubernetes.io/security-groups : For Service resources , add the following annotation: apiVersion : v1 kind : Service metadata : name : frontend annotations : service.beta.kubernetes.io/aws-load-balancer-security-groups : spec : type : LoadBalancer loadBalancerClass : service.k8s.aws/nlb For Ingress resources, the associated Application Load Balancer will be updated. For Service resources, the associated Network Load Balancer will be updated. 6. List your load balancers and verify the security groups are attached \u00b6 $ aws elbv2 describe-load-balancers { \"LoadBalancers\" : [ { \"LoadBalancerArn\" : \"arn:aws:elasticloadbalancing:us-east-1:1853XXXX5115:loadbalancer/net/k8s-default-frontend-ae3743b818/3ad6d16fb75ff688\" , <...> \"SecurityGroups\" : [ \"sg-0406XXXX645c\" , \"sg-0873XXXX2bef\" ] , \"IpAddressType\" : \"ipv4\" } ] } If you don't see the security groups, verify: The Load Balancer Controller is properly installed. The controller has proper IAM permissions to modify load balancers. Look at the logs of the controller pods for IAM errors. 7. Clean up (Optional) \u00b6 Removing the annotations from Service/Ingress resources will revert to the default frontend security groups. Load balancers may be costly. Delete Ingress and Service resources to deprovision the load balancers. If the load balancers are deleted from the console, they may be recreated by the controller.","title":"Frontend Security Groups"},{"location":"guide/use_cases/frontend_sg/#solution-overview","text":"Load balancers expose cluster workloads to a wider network. Creating a frontend security group limits access to these workloads (service or ingress resources). More specifically, a security group acts as a virtual firewall to control incoming and outgoing traffic. Inbound rules control the incoming traffic to your load balancer, and outbound rules control the outgoing traffic from your load balancer. Security groups are particularly suited for defining what access other AWS resources (services, EC2 instances) have to your cluster. For example, if you have an existing security group including EC2 instances, you can permit only that security group to access a service. In this example, you will restrict access to a cluster service. You will create a new security group for the frontend of a load balancer, and add an inbound rule permitting traffic. The rule may limit traffic to a specific port, CIDR, or existing security group.","title":"Solution Overview"},{"location":"guide/use_cases/frontend_sg/#prerequisites","text":"Kubernetes Cluster Version 1.22+ AWS Load Balancer Controller v2.6.0+ AWS CLI v2","title":"Prerequisites"},{"location":"guide/use_cases/frontend_sg/#configure","text":"","title":"Configure"},{"location":"guide/use_cases/frontend_sg/#1-find-the-vpc-id-of-your-cluster","text":"$ aws eks describe-cluster --name --query \"cluster.resourcesVpcConfig.vpcId\" --output text vpc-0101XXXXa356 Ensure you have the right cluster name, AWS region, and the AWS CLI is configured.","title":"1. Find the VPC ID of your cluster"},{"location":"guide/use_cases/frontend_sg/#2-create-a-security-group-using-the-vpc-id","text":"$ aws ec2 create-security-group --group-name --description --vpc-id { \"GroupId\" : \"sg-0406XXXX645c\" } Note the security group ID.
This will be the frontend security group for the load balancer.\",\"title\":\"2. Create a security group using the VPC ID\"},{\"location\":\"guide/use_cases/frontend_sg/#3-create-your-ingress-rules\",\"text\":\"Load balancers generally serve as an entrypoint for clients to access your cluster. This makes ingress rules especially important. For example, this rule permits all traffic on port 443: aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol all --port 443 --cidr 0.0.0.0/0 Learn more about how to create an ingress rule with the AWS CLI.\",\"title\":\"3. Create your ingress rules\"},{\"location\":\"guide/use_cases/frontend_sg/#4-determine-your-egress-rules-optional\",\"text\":\"By default, all outbound traffic is allowed. Further, security groups are stateful, and responses to an allowed connection will also be permitted. Learn how to create an egress rule with the AWS CLI.\",\"title\":\"4. Determine your egress rules (optional)\"},{\"location\":\"guide/use_cases/frontend_sg/#5-add-the-security-group-annotation-to-your-ingress-or-service\",\"text\":\"For Ingress resources , add the following annotation: apiVersion : networking.k8s.io/v1 kind : Ingress metadata : name : frontend annotations : alb.ingress.kubernetes.io/security-groups : <sg-id> For Service resources , add the following annotation: apiVersion : v1 kind : Service metadata : name : frontend annotations : service.beta.kubernetes.io/aws-load-balancer-security-groups : <sg-id> spec : type : LoadBalancer loadBalancerClass : service.k8s.aws/nlb For Ingress resources, the associated Application Load Balancer will be updated. For Service resources, the associated Network Load Balancer will be updated.\",\"title\":\"5. Add the security group annotation to your Ingress or Service\"},{\"location\":\"guide/use_cases/frontend_sg/#6-list-your-load-balancers-and-verify-the-security-groups-are-attached\",\"text\":\"$ aws elbv2 describe-load-balancers { \"LoadBalancers\" : [ { \"LoadBalancerArn\" : \"arn:aws:elasticloadbalancing:us-east-1:1853XXXX5115:loadbalancer/net/k8s-default-frontend-ae3743b818/3ad6d16fb75ff688\" , <...> \"SecurityGroups\" : [ \"sg-0406XXXX645c\" , \"sg-0873XXXX2bef\" ] , \"IpAddressType\" : \"ipv4\" } ] } If you don't see the security groups, verify: The Load Balancer Controller is properly installed. The controller has proper IAM permissions to modify load balancers. Look at the logs of the controller pods for IAM errors.\",\"title\":\"6. List your load balancers and verify the security groups are attached\"},{\"location\":\"guide/use_cases/frontend_sg/#7-clean-up-optional\",\"text\":\"Removing the annotations from Service/Ingress resources will revert to the default frontend security groups. Load balancers may be costly. Delete Ingress and Service resources to deprovision the load balancers. If the load balancers are deleted from the console, they may be recreated by the controller.\",\"title\":\"7. Clean up (Optional)\"},{\"location\":\"guide/use_cases/nlb_tls_termination/\",\"text\":\"Motivation \u00b6 Managing TLS certificates (and related configuration) for production cluster workloads is both time consuming and high risk. For example, storing multiple copies of a certificate secret key in the cluster may increase the chances of it being compromised. Additionally, TLS can be complicated to configure and implement properly. Traditionally, TLS termination at the load balancer step required using more expensive application load balancers (ALBs). AWS introduced TLS termination for network load balancers (NLBs) for enhanced security and cost effectiveness.
The TLS implementation used by the AWS NLB is formally verified and maintained. Additionally, AWS Certificate Manager (ACM) is used, fully isolating your cluster from access to the private key. Solution Overview \u00b6 An external client transmits a request to the NLB. The request is encrypted with TLS using the production (e.g., client facing) certificate, and on port 443. The NLB decrypts the request, and transmits it on to your cluster on port 80. It follows the standard request routing configured within the cluster. Notably, the request received within the cluster includes the actual origin IP address of the external client. Alternate ports may be configured. Note The NLB may be configured to maintain the source (i.e., client) IP address. However, there are some limitations. Review Client IP Preservation in the AWS docs. Prerequisites \u00b6 \u2705 Access to DNS records for domain name. Review the docs on registering domains with AWS's Route 53. Alternate DNS providers may be used, such as Google Domains or Namecheap. Later, a subdomain (e.g., demo-service.gcline.us) will be created, pointing to the NLB. Access to the DNS records is required to generate a TLS certificate for use by the NLB. \u2705 AWS Load Balancer Controller Installed Generally, setting up the Load Balancer Controller has two steps: enabling IAM roles for service accounts, and adding the controller to the cluster. The IAM role allows the controller in the Kubernetes cluster to manage AWS resources. Learn more about IAM roles for service accounts. Configure \u00b6 Generate TLS Certificate \u00b6 Create a public TLS certificate for the domain using AWS Certificate Manager (ACM). This is streamlined when the domain is managed by Route 53. Review the AWS Certificate Manager Docs. The domain name on the TLS certificate must correspond to the planned domain name for the kubernetes service. The domain name may be specified explicitly (e.g., tls-demo.gcline.us), or a wildcard certificate can be used (e.g., *.gcline.us). If the domain is registered with Route53, the TLS certificate request will automatically be approved. Otherwise, follow the instructions in the ACM console to create a DNS record to validate the domain. After validation, the certificate will be available for use in your AWS account. Note the ARN of the certificate, which uniquely identifies it in kubernetes config files.
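This certificate step can also be done from the CLI; a sketch, using the illustrative domain above (DNS validation must still complete before the certificate is issued, and the ARN placeholder is not a real value):

# Request a public certificate with DNS validation
aws acm request-certificate --domain-name tls-demo.gcline.us --validation-method DNS
# Check issuance status; note the ARN for the service annotation
aws acm describe-certificate --certificate-arn <certificate-arn> --query "Certificate.Status"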
Create Service with new NLB \u00b6 Add annotations to a load balancer service to enable NLB TLS termination, before the traffic reaches Envoy. The annotations are actioned by the load balancer controller. Review all the NLB annotations on GitHub. annotation name value meaning service.beta.kubernetes.io/aws-load-balancer-type external explicitly requires an NLB, instead of an ALB service.beta.kubernetes.io/aws-load-balancer-nlb-target-type ip route traffic directly to the pod IP service.beta.kubernetes.io/aws-load-balancer-scheme internet-facing An internet-facing load balancer has a publicly resolvable DNS name service.beta.kubernetes.io/aws-load-balancer-ssl-cert \"arn:aws:acm:...\" identifies the TLS certificate used by the NLB service.beta.kubernetes.io/aws-load-balancer-ssl-ports 443 determines the port the NLB should listen for TLS traffic on Example: apiVersion: v1 kind: Service metadata: name: MyAppSvc namespace: dev annotations: service.beta.kubernetes.io/aws-load-balancer-type: external service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing service.beta.kubernetes.io/aws-load-balancer-ssl-cert: \"arn:aws:acm:us-east-2:185309785115:certificate/7610ed7d-5a81-4ea2-a18a-7ba1606cca3e\" service.beta.kubernetes.io/aws-load-balancer-ssl-ports: \"443\" spec: externalTrafficPolicy: Local ports: - port: 443 targetPort: 80 name: http protocol: TCP selector: app: MyApp type: LoadBalancer Configure DNS \u00b6 Get domain name using kubectl. The service name and namespace were defined above. kubectl get svc MyAppSvc --namespace dev NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE envoy LoadBalancer 10.100.24.154 k8s-<namespace>-<name>-xxxxxxxxxx-xxxxxxxxxxxxxxxx.elb.<region>.amazonaws.com 443:31606/TCP 40d Note the last 4 digits of the domain name for the NLB. For example, \"bb1f\". Set up a DNS alias for the NLB Create a DNS record pointing from a friendly name (e.g., tls-demo.gcline.us) to the NLB domain (e.g., bb1f.elb.us-east-2.amazonaws.com). For AWS's Route 53, follow the instructions below. If you use a different DNS provider, follow their instructions for creating a CNAME record . First, create a new record in Route 53. Use the \"A\" record type, and enable the \"alias\" option. This option attaches the DNS record to the AWS resource, without requiring an extra lookup step for clients. Select the NLB resource. Double check the region, and use the last 4 digits (noted earlier) to select the proper resource. Verify \u00b6 Attempt to access the NLB domain at port 443 with HTTPS/TLS. Is the connection successful? What certificate is used? Does it reach the expected endpoint within the cluster? A minimal check is sketched below.\",\"title\":\"NLB TLS Termination\"},
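For the Verify step just described, a minimal sketch against the illustrative alias created above:

# Expect a successful TLS handshake and an HTTP response
curl -vI https://tls-demo.gcline.us
# Inspect which certificate the NLB presents
openssl s_client -connect tls-demo.gcline.us:443 -servername tls-demo.gcline.us </dev/null \
  | openssl x509 -noout -subject -issuer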
{\"location\":\"guide/use_cases/nlb_tls_termination/#motivation\",\"text\":\"Managing TLS certificates (and related configuration) for production cluster workloads is both time consuming and high risk. For example, storing multiple copies of a certificate secret key in the cluster may increase the chances of it being compromised. Additionally, TLS can be complicated to configure and implement properly. Traditionally, TLS termination at the load balancer step required using more expensive application load balancers (ALBs). AWS introduced TLS termination for network load balancers (NLBs) for enhanced security and cost effectiveness. The TLS implementation used by the AWS NLB is formally verified and maintained. Additionally, AWS Certificate Manager (ACM) is used, fully isolating your cluster from access to the private key.\",\"title\":\"Motivation\"},{\"location\":\"guide/use_cases/nlb_tls_termination/#solution-overview\",\"text\":\"An external client transmits a request to the NLB. The request is encrypted with TLS using the production (e.g., client facing) certificate, and on port 443. The NLB decrypts the request, and transmits it on to your cluster on port 80. It follows the standard request routing configured within the cluster. Notably, the request received within the cluster includes the actual origin IP address of the external client. Alternate ports may be configured. Note The NLB may be configured to maintain the source (i.e., client) IP address. However, there are some limitations. Review Client IP Preservation in the AWS docs.\",\"title\":\"Solution Overview\"},{\"location\":\"guide/use_cases/nlb_tls_termination/#prerequisites\",\"text\":\"\u2705 Access to DNS records for domain name. Review the docs on registering domains with AWS's Route 53. Alternate DNS providers may be used, such as Google Domains or Namecheap. Later, a subdomain (e.g., demo-service.gcline.us) will be created, pointing to the NLB. Access to the DNS records is required to generate a TLS certificate for use by the NLB. \u2705 AWS Load Balancer Controller Installed Generally, setting up the Load Balancer Controller has two steps: enabling IAM roles for service accounts, and adding the controller to the cluster. The IAM role allows the controller in the Kubernetes cluster to manage AWS resources. Learn more about IAM roles for service accounts.\",\"title\":\"Prerequisites\"},{\"location\":\"guide/use_cases/nlb_tls_termination/#configure\",\"text\":\"\",\"title\":\"Configure\"},{\"location\":\"guide/use_cases/nlb_tls_termination/#generate-tls-certificate\",\"text\":\"Create a public TLS certificate for the domain using AWS Certificate Manager (ACM). This is streamlined when the domain is managed by Route 53. Review the AWS Certificate Manager Docs. The domain name on the TLS certificate must correspond to the planned domain name for the kubernetes service. The domain name may be specified explicitly (e.g., tls-demo.gcline.us), or a wildcard certificate can be used (e.g., *.gcline.us). If the domain is registered with Route53, the TLS certificate request will automatically be approved. Otherwise, follow the instructions in the ACM console to create a DNS record to validate the domain. After validation, the certificate will be available for use in your AWS account. Note the ARN of the certificate, which uniquely identifies it in kubernetes config files.\",\"title\":\"Generate TLS Certificate\"},{\"location\":\"guide/use_cases/nlb_tls_termination/#create-service-with-new-nlb\",\"text\":\"Add annotations to a load balancer service to enable NLB TLS termination, before the traffic reaches Envoy. The annotations are actioned by the load balancer controller. Review all the NLB annotations on GitHub.
annotation name value meaning service.beta.kubernetes.io/aws-load-balancer-type external explicitly requires an NLB, instead of an ALB service.beta.kubernetes.io/aws-load-balancer-nlb-target-type ip route traffic directly to the pod IP service.beta.kubernetes.io/aws-load-balancer-scheme internet-facing An internet-facing load balancer has a publicly resolvable DNS name service.beta.kubernetes.io/aws-load-balancer-ssl-cert \"arn:aws:acm:...\" identifies the TLS certificate used by the NLB service.beta.kubernetes.io/aws-load-balancer-ssl-ports 443 determines the port the NLB should listen for TLS traffic on Example: apiVersion: v1 kind: Service metadata: name: MyAppSvc namespace: dev annotations: service.beta.kubernetes.io/aws-load-balancer-type: external service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing service.beta.kubernetes.io/aws-load-balancer-ssl-cert: \"arn:aws:acm:us-east-2:185309785115:certificate/7610ed7d-5a81-4ea2-a18a-7ba1606cca3e\" service.beta.kubernetes.io/aws-load-balancer-ssl-ports: \"443\" spec: externalTrafficPolicy: Local ports: - port: 443 targetPort: 80 name: http protocol: TCP selector: app: MyApp type: LoadBalancer\",\"title\":\"Create Service with new NLB\"},{\"location\":\"guide/use_cases/nlb_tls_termination/#configure-dns\",\"text\":\"Get domain name using kubectl. The service name and namespace were defined above. kubectl get svc MyAppSvc --namespace dev NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE envoy LoadBalancer 10.100.24.154 k8s-<namespace>-<name>-xxxxxxxxxx-xxxxxxxxxxxxxxxx.elb.<region>.amazonaws.com 443:31606/TCP 40d Note the last 4 digits of the domain name for the NLB. For example, \"bb1f\". Set up a DNS alias for the NLB Create a DNS record pointing from a friendly name (e.g., tls-demo.gcline.us) to the NLB domain (e.g., bb1f.elb.us-east-2.amazonaws.com). For AWS's Route 53, follow the instructions below. If you use a different DNS provider, follow their instructions for creating a CNAME record . First, create a new record in Route 53. Use the \"A\" record type, and enable the \"alias\" option. This option attaches the DNS record to the AWS resource, without requiring an extra lookup step for clients. Select the NLB resource. Double check the region, and use the last 4 digits (noted earlier) to select the proper resource.\",\"title\":\"Configure DNS\"},{\"location\":\"guide/use_cases/nlb_tls_termination/#verify\",\"text\":\"Attempt to access the NLB domain at port 443 with HTTPS/TLS. Is the connection successful? What certificate is used? Does it reach the expected endpoint within the cluster?\",\"title\":\"Verify\"},{\"location\":\"guide/use_cases/self_managed_lb/\",\"text\":\"Motivation \u00b6 The load balancer controller (LBC) generally creates and destroys AWS Load Balancers in response to Kubernetes resources. However, some cluster operators may prefer to manually manage AWS Load Balancers. This supports use cases like: Preventing accidental release of key IP addresses. Supporting load balancers where the Kubernetes cluster is one of multiple targets. Complying with organizational requirements on provisioning load balancers, for security or cost reasons. Solution Overview \u00b6 Use the TargetGroupBinding CRD to sync a Kubernetes service with the targets of a load balancer. First, a load balancer is manually created directly with AWS. This guide uses a network load balancer, but an application load balancer may be similarly configured. Second, a listener and a target group are then added to the load balancer.
Third, a TargetGroupBinding CRD is created in a cluster. The CRD includes references to a Kubernetes service and the ARN of the Load Balancer Target Group. The CRD configures the LBC to watch the service and automatically update the target group with the appropriate pod VPC IP addresses. Prerequisites \u00b6 Install: Load Balancer Controller Installed on Cluster AWS CLI Kubectl Have this information available: Cluster VPC Information ID of EKS Cluster Subnet IDs This information is available in the \"Networking\" section of the EKS Cluster Console. Port and Protocol of Target Kubernetes Service Configure Load Balancer \u00b6 Create Load Balancer: (optional) Use the create-load-balancer command to create an IPv4 load balancer, specifying a public subnet for each Availability Zone in which you have instances. You can specify only one subnet per Availability Zone. aws elbv2 create-load-balancer --name my-load-balancer --type network --subnets subnet-0e3f5cac72EXAMPLE Important: The output includes the ARN of the load balancer. This value is needed to configure the LBC. Example: arn:aws:elasticloadbalancing:us-east-2:123456789012:loadbalancer/net/my-load-balancer/1234567890123456 Use the create-target-group command to create an IPv4 target group, specifying the same VPC as your EKS cluster. aws elbv2 create-target-group --name my-targets --protocol TCP --port 80 --vpc-id vpc-0598c7d356EXAMPLE The output includes the ARN of the target group, with this format: arn:aws:elasticloadbalancing:us-east-2:123456789012:targetgroup/my-targets/1234567890123456 Use the create-listener command to create a listener for your load balancer with a default rule that forwards requests to your target group. The listener port and protocol must match the Kubernetes service. However, TLS termination is permitted. aws elbv2 create-listener --load-balancer-arn loadbalancer-arn --protocol TCP --port 80 \\ --default-actions Type=forward,TargetGroupArn=targetgroup-arn Create TargetGroupBinding CRD \u00b6 Create the TargetGroupBinding CRD Insert the ARN of the Target Group, as created above. Insert the name and port of the target Kubernetes service. apiVersion : elbv2.k8s.aws/v1beta1 kind : TargetGroupBinding metadata : name : my-tgb spec : serviceRef : name : awesome-service # route traffic to the awesome-service port : 80 targetGroupARN : arn:aws:elasticloadbalancing:us-east-2:123456789012:targetgroup/my-targets/1234567890123456 2. Apply the CRD Apply the TargetGroupBinding file to your Cluster. kubectl apply -f my-tgb.yaml Verify \u00b6 Wait approximately 30 seconds for the LBC to update the load balancer. View all target groups in the AWS console. Find the target group by the ARN noted above, and verify the appropriate instances from the cluster have been added.\",\"title\":\"Externally Managed Load Balancer\"},{\"location\":\"guide/use_cases/self_managed_lb/#motivation\",\"text\":\"The load balancer controller (LBC) generally creates and destroys AWS Load Balancers in response to Kubernetes resources. However, some cluster operators may prefer to manually manage AWS Load Balancers. This supports use cases like: Preventing accidental release of key IP addresses. Supporting load balancers where the Kubernetes cluster is one of multiple targets.
Complying with organizational requirements on provisioning load balancers, for security or cost reasons.\",\"title\":\"Motivation\"},{\"location\":\"guide/use_cases/self_managed_lb/#solution-overview\",\"text\":\"Use the TargetGroupBinding CRD to sync a Kubernetes service with the targets of a load balancer. First, a load balancer is manually created directly with AWS. This guide uses a network load balancer, but an application load balancer may be similarly configured. Second, a listener and a target group are then added to the load balancer. Third, a TargetGroupBinding CRD is created in a cluster. The CRD includes references to a Kubernetes service and the ARN of the Load Balancer Target Group. The CRD configures the LBC to watch the service and automatically update the target group with the appropriate pod VPC IP addresses.\",\"title\":\"Solution Overview\"},{\"location\":\"guide/use_cases/self_managed_lb/#prerequisites\",\"text\":\"Install: Load Balancer Controller Installed on Cluster AWS CLI Kubectl Have this information available: Cluster VPC Information ID of EKS Cluster Subnet IDs This information is available in the \"Networking\" section of the EKS Cluster Console. Port and Protocol of Target Kubernetes Service\",\"title\":\"Prerequisites\"},{\"location\":\"guide/use_cases/self_managed_lb/#configure-load-balancer\",\"text\":\"Create Load Balancer: (optional) Use the create-load-balancer command to create an IPv4 load balancer, specifying a public subnet for each Availability Zone in which you have instances. You can specify only one subnet per Availability Zone. aws elbv2 create-load-balancer --name my-load-balancer --type network --subnets subnet-0e3f5cac72EXAMPLE Important: The output includes the ARN of the load balancer. This value is needed to configure the LBC. Example: arn:aws:elasticloadbalancing:us-east-2:123456789012:loadbalancer/net/my-load-balancer/1234567890123456 Use the create-target-group command to create an IPv4 target group, specifying the same VPC as your EKS cluster. aws elbv2 create-target-group --name my-targets --protocol TCP --port 80 --vpc-id vpc-0598c7d356EXAMPLE The output includes the ARN of the target group, with this format: arn:aws:elasticloadbalancing:us-east-2:123456789012:targetgroup/my-targets/1234567890123456 Use the create-listener command to create a listener for your load balancer with a default rule that forwards requests to your target group. The listener port and protocol must match the Kubernetes service. However, TLS termination is permitted. aws elbv2 create-listener --load-balancer-arn loadbalancer-arn --protocol TCP --port 80 \\ --default-actions Type=forward,TargetGroupArn=targetgroup-arn\",\"title\":\"Configure Load Balancer\"},{\"location\":\"guide/use_cases/self_managed_lb/#create-targetgroupbinding-crd\",\"text\":\"Create the TargetGroupBinding CRD Insert the ARN of the Target Group, as created above. Insert the name and port of the target Kubernetes service. apiVersion : elbv2.k8s.aws/v1beta1 kind : TargetGroupBinding metadata : name : my-tgb spec : serviceRef : name : awesome-service # route traffic to the awesome-service port : 80 targetGroupARN : arn:aws:elasticloadbalancing:us-east-2:123456789012:targetgroup/my-targets/1234567890123456 2. Apply the CRD Apply the TargetGroupBinding file to your Cluster. kubectl apply -f my-tgb.yaml\",\"title\":\"Create TargetGroupBinding CRD\"},{\"location\":\"guide/use_cases/self_managed_lb/#verify\",\"text\":\"Wait approximately 30 seconds for the LBC to update the load balancer.
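Before the console check described next, the registered targets can also be spot-checked from the CLI; a sketch, reusing the example target group ARN from earlier:

# Targets should appear with TargetHealth.State "healthy" once registered
aws elbv2 describe-target-health \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-2:123456789012:targetgroup/my-targets/1234567890123456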
View all target groups in the AWS console. Find the target group by the ARN noted above, and verify the appropriate instances from the cluster have been added.\",\"title\":\"Verify\"}]} \ No newline at end of file +{\"config\":{\"lang\":[\"en\"],\"min_search_length\":3,\"prebuild_index\":false,\"separator\":\"[\\s\\-]+\"},\"docs\":[{\"location\":\"\",\"text\":\"A Kubernetes controller for Elastic Load Balancers AWS Load Balancer Controller \u00b6 AWS Load Balancer Controller is a controller to help manage Elastic Load Balancers for a Kubernetes cluster. It satisfies Kubernetes Ingress resources by provisioning Application Load Balancers . It satisfies Kubernetes Service resources by provisioning Network Load Balancers . This project was formerly known as \"AWS ALB Ingress Controller\"; we rebranded it to \"AWS Load Balancer Controller\". AWS ALB Ingress Controller was originated by Ticketmaster and CoreOS as part of Ticketmaster's move to AWS and CoreOS Tectonic. Learn more about Ticketmaster's Kubernetes initiative from Justin Dean's video at Tectonic Summit . AWS ALB Ingress Controller was donated to Kubernetes SIG-AWS to allow AWS, CoreOS, Ticketmaster and other SIG-AWS contributors to officially maintain the project. SIG-AWS reached this consensus on June 1, 2018. Support Policy \u00b6 Currently, AWS provides security updates and bug fixes to the latest available minor versions of AWS LBC. For other ad-hoc support on older versions, please reach out through an AWS support ticket.\",\"title\":\"Welcome\"},{\"location\":\"#aws-load-balancer-controller\",\"text\":\"AWS Load Balancer Controller is a controller to help manage Elastic Load Balancers for a Kubernetes cluster. It satisfies Kubernetes Ingress resources by provisioning Application Load Balancers . It satisfies Kubernetes Service resources by provisioning Network Load Balancers . This project was formerly known as \"AWS ALB Ingress Controller\"; we rebranded it to \"AWS Load Balancer Controller\". AWS ALB Ingress Controller was originated by Ticketmaster and CoreOS as part of Ticketmaster's move to AWS and CoreOS Tectonic. Learn more about Ticketmaster's Kubernetes initiative from Justin Dean's video at Tectonic Summit . AWS ALB Ingress Controller was donated to Kubernetes SIG-AWS to allow AWS, CoreOS, Ticketmaster and other SIG-AWS contributors to officially maintain the project. SIG-AWS reached this consensus on June 1, 2018.\",\"title\":\"AWS Load Balancer Controller\"},{\"location\":\"#support-policy\",\"text\":\"Currently, AWS provides security updates and bug fixes to the latest available minor versions of AWS LBC. For other ad-hoc support on older versions, please reach out through an AWS support ticket.\",\"title\":\"Support Policy\"},{\"location\":\"CONTRIBUTING/\",\"text\":\"Contributing Guidelines \u00b6 Welcome to Kubernetes. We are excited about the prospect of you joining our community ! The Kubernetes community abides by the CNCF code of conduct . Here is an excerpt: As contributors and maintainers of this project, and in the interest of fostering an open and welcoming community, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities. Getting Started \u00b6 Building the project \u00b6 Controller development documentation has instructions on how to build the project and project specific expectations. Contributing to docs \u00b6 The documentation is generated using Material for MkDocs .
In order to generate and preview docs locally, use the steps below: install pipenv , then run make docs-preview . This will generate and serve docs locally at http://127.0.0.1:8000 Contributing \u00b6 We also have more documentation on how to get started contributing here: Contributor License Agreement Kubernetes projects require that you sign a Contributor License Agreement (CLA) before we can accept your pull requests Kubernetes Contributor Guide - Main contributor documentation, or you can just jump directly to the contributing section Contributor Cheat Sheet - Common resources for existing developers Mentorship \u00b6 Mentoring Initiatives - We have a diverse set of mentorship programs available that are always looking for volunteers! Contact Information \u00b6 Slack channel Mailing list\",\"title\":\"Contributing Guidelines\"},{\"location\":\"CONTRIBUTING/#contributing-guidelines\",\"text\":\"Welcome to Kubernetes. We are excited about the prospect of you joining our community ! The Kubernetes community abides by the CNCF code of conduct . Here is an excerpt: As contributors and maintainers of this project, and in the interest of fostering an open and welcoming community, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities.\",\"title\":\"Contributing Guidelines\"},{\"location\":\"CONTRIBUTING/#getting-started\",\"text\":\"\",\"title\":\"Getting Started\"},{\"location\":\"CONTRIBUTING/#building-the-project\",\"text\":\"Controller development documentation has instructions on how to build the project and project specific expectations.\",\"title\":\"Building the project\"},{\"location\":\"CONTRIBUTING/#contributing-to-docs\",\"text\":\"The documentation is generated using Material for MkDocs . In order to generate and preview docs locally, use the steps below: install pipenv , then run make docs-preview . This will generate and serve docs locally at http://127.0.0.1:8000\",\"title\":\"Contributing to docs\"},
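A minimal sketch of that docs workflow (assuming Python and make are already available on your machine):

pip install pipenv    # install pipenv
make docs-preview     # build and serve the docs at http://127.0.0.1:8000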
{\"location\":\"CONTRIBUTING/#contributing\",\"text\":\"We also have more documentation on how to get started contributing here: Contributor License Agreement Kubernetes projects require that you sign a Contributor License Agreement (CLA) before we can accept your pull requests Kubernetes Contributor Guide - Main contributor documentation, or you can just jump directly to the contributing section Contributor Cheat Sheet - Common resources for existing developers\",\"title\":\"Contributing\"},{\"location\":\"CONTRIBUTING/#mentorship\",\"text\":\"Mentoring Initiatives - We have a diverse set of mentorship programs available that are always looking for volunteers!\",\"title\":\"Mentorship\"},{\"location\":\"CONTRIBUTING/#contact-information\",\"text\":\"Slack channel Mailing list\",\"title\":\"Contact Information\"},{\"location\":\"code-of-conduct/\",\"text\":\"Kubernetes Community Code of Conduct \u00b6 Please refer to our Kubernetes Community Code of Conduct\",\"title\":\"Kubernetes Community Code of Conduct\"},{\"location\":\"code-of-conduct/#kubernetes-community-code-of-conduct\",\"text\":\"Please refer to our Kubernetes Community Code of Conduct\",\"title\":\"Kubernetes Community Code of Conduct\"},{\"location\":\"controller-devel/\",\"text\":\"AWS Load Balancer Controller Development Guide \u00b6 We'll walk you through the setup to start contributing to the AWS Load Balancer Controller project. Whether you're contributing code or docs, follow the steps below to set up your development environment. Issue before PR Of course we're happy about code drops via PRs; however, in order to give us time to plan ahead and also to avoid disappointment, consider creating an issue first and submitting a PR later. This also helps us coordinate between different contributors and should in general help keep everyone happy. Prerequisites \u00b6 Please ensure that you have properly installed Go . Go version We recommend using Go 1.14 or above for development. Fork upstream repository \u00b6 The first step in setting up your AWS Load Balancer controller development environment is to fork the upstream AWS Load Balancer controller repository to your personal Github account. Ensure source code organization directories exist \u00b6 Make sure in your $GOPATH/src that you have directories for the sigs.k8s.io organization: mkdir -p $GOPATH /src/github.com/sigs.k8s.io git clone forked repository and add upstream remote \u00b6 For the forked repository, you will git clone the repository into the appropriate folder in your $GOPATH . Once git clone 'd, you will want to set up a Git remote called \"upstream\" (remember that \"origin\" will be pointing at your forked repository location in your personal Github space). You can use this script to do this for you: GITHUB_ID = \"your GH username\" cd $GOPATH /src/github.com/sigs.k8s.io git clone git@github.com: $GITHUB_ID /aws-load-balancer-controller cd aws-load-balancer-controller/ git remote add upstream git@github.com:kubernetes-sigs/aws-load-balancer-controller git fetch --all Create your local branch \u00b6 Next, you create a local branch where you work on your feature or bug fix. Let's say you want to enhance the docs, so set BRANCH_NAME=docs-improve and then: git fetch --all && git checkout -b $BRANCH_NAME upstream/main Commit changes \u00b6 Make your changes locally, commit and push using: git commit -a -m \"improves the docs a lot\" git push origin $BRANCH_NAME Create a pull request \u00b6 Finally, submit a pull request against the upstream source repository. We monitor the GitHub repo and try to follow up with comments within a working day. Building the controller \u00b6 To build the controller binary, run the following command. make controller To install CRDs into a Kubernetes cluster, run the following command. make install To uninstall CRD from a Kubernetes cluster, run the following command. make uninstall To build the container image for the controller and push to a container registry, run the following command. make docker-push To deploy the CRDs and the container image to a Kubernetes cluster, run the following command. make deploy\",\"title\":\"AWS Load Balancer Controller Development Guide\"},{\"location\":\"controller-devel/#aws-load-balancer-controller-development-guide\",\"text\":\"We'll walk you through the setup to start contributing to the AWS Load Balancer Controller project. Whether you're contributing code or docs, follow the steps below to set up your development environment. Issue before PR Of course we're happy about code drops via PRs; however, in order to give us time to plan ahead and also to avoid disappointment, consider creating an issue first and submitting a PR later. This also helps us coordinate between different contributors and should in general help keep everyone happy.\",\"title\":\"AWS Load Balancer Controller Development Guide\"},{\"location\":\"controller-devel/#prerequisites\",\"text\":\"Please ensure that you have properly installed Go .
Go version We recommend using Go 1.14 or above for development.\",\"title\":\"Prerequisites\"},{\"location\":\"controller-devel/#fork-upstream-repository\",\"text\":\"The first step in setting up your AWS Load Balancer controller development environment is to fork the upstream AWS Load Balancer controller repository to your personal Github account.\",\"title\":\"Fork upstream repository\"},{\"location\":\"controller-devel/#ensure-source-code-organization-directories-exist\",\"text\":\"Make sure in your $GOPATH/src that you have directories for the sigs.k8s.io organization: mkdir -p $GOPATH /src/github.com/sigs.k8s.io\",\"title\":\"Ensure source code organization directories exist\"},{\"location\":\"controller-devel/#git-clone-forked-repository-and-add-upstream-remote\",\"text\":\"For the forked repository, you will git clone the repository into the appropriate folder in your $GOPATH . Once git clone 'd, you will want to set up a Git remote called \"upstream\" (remember that \"origin\" will be pointing at your forked repository location in your personal Github space). You can use this script to do this for you: GITHUB_ID = \"your GH username\" cd $GOPATH /src/github.com/sigs.k8s.io git clone git@github.com: $GITHUB_ID /aws-load-balancer-controller cd aws-load-balancer-controller/ git remote add upstream git@github.com:kubernetes-sigs/aws-load-balancer-controller git fetch --all\",\"title\":\"git clone forked repository and add upstream remote\"},{\"location\":\"controller-devel/#create-your-local-branch\",\"text\":\"Next, you create a local branch where you work on your feature or bug fix. Let's say you want to enhance the docs, so set BRANCH_NAME=docs-improve and then: git fetch --all && git checkout -b $BRANCH_NAME upstream/main\",\"title\":\"Create your local branch\"},{\"location\":\"controller-devel/#commit-changes\",\"text\":\"Make your changes locally, commit and push using: git commit -a -m \"improves the docs a lot\" git push origin $BRANCH_NAME\",\"title\":\"Commit changes\"},{\"location\":\"controller-devel/#create-a-pull-request\",\"text\":\"Finally, submit a pull request against the upstream source repository. We monitor the GitHub repo and try to follow up with comments within a working day.\",\"title\":\"Create a pull request\"},{\"location\":\"controller-devel/#building-the-controller\",\"text\":\"To build the controller binary, run the following command. make controller To install CRDs into a Kubernetes cluster, run the following command. make install To uninstall CRD from a Kubernetes cluster, run the following command. make uninstall To build the container image for the controller and push to a container registry, run the following command. make docker-push To deploy the CRDs and the container image to a Kubernetes cluster, run the following command. make deploy\",\"title\":\"Building the controller\"},{\"location\":\"how-it-works/\",\"text\":\"How AWS Load Balancer controller works \u00b6 Design \u00b6 The following diagram details the AWS components this controller creates. It also demonstrates the route ingress traffic takes from the ALB to the Kubernetes cluster. Note The controller manages the configurations of the resources it creates, and we do not recommend out-of-band modifications to these resources because the controller may revert the manual changes during reconciliation. We recommend using the provided configuration options as best practice, such as ingress and service annotations, controller command line flags and IngressClassParams. Ingress Creation \u00b6 This section describes each step (circle) above.
This example demonstrates satisfying 1 ingress resource. [1] : The controller watches for ingress events from the API server. When it finds ingress resources that satisfy its requirements, it begins the creation of AWS resources. [2] : An ALB (ELBv2) is created in AWS for the new ingress resource. This ALB can be internet-facing or internal. You can also specify the subnets it's created in using annotations. [3] : Target Groups are created in AWS for each unique Kubernetes service described in the ingress resource. [4] : Listeners are created for every port detailed in your ingress resource annotations. When no port is specified, sensible defaults ( 80 or 443 ) are used. Certificates may also be attached via annotations. [5] : Rules are created for each path specified in your ingress resource. This ensures traffic to a specific path is routed to the correct Kubernetes Service. Along with the above, the controller also... deletes AWS components when ingress resources are removed from k8s. modifies AWS components when ingress resources change in k8s. assembles a list of existing ingress-related AWS components on start-up, allowing you to recover if the controller were to be restarted. Ingress Traffic \u00b6 AWS Load Balancer controller supports two traffic modes: Instance mode IP mode By default, Instance mode is used; users can explicitly select the mode via the alb.ingress.kubernetes.io/target-type annotation. Instance mode \u00b6 Ingress traffic starts at the ALB and reaches the Kubernetes nodes through each service's NodePort. This means that services referenced from ingress resources must be exposed by type:NodePort in order to be reached by the ALB. IP mode \u00b6 Ingress traffic starts at the ALB and reaches the Kubernetes pods directly. CNIs must support directly accessible pod IP via secondary IP addresses on ENI .\",\"title\":\"How it works\"},{\"location\":\"how-it-works/#how-aws-load-balancer-controller-works\",\"text\":\"\",\"title\":\"How AWS Load Balancer controller works\"},{\"location\":\"how-it-works/#design\",\"text\":\"The following diagram details the AWS components this controller creates. It also demonstrates the route ingress traffic takes from the ALB to the Kubernetes cluster. Note The controller manages the configurations of the resources it creates, and we do not recommend out-of-band modifications to these resources because the controller may revert the manual changes during reconciliation. We recommend using the provided configuration options as best practice, such as ingress and service annotations, controller command line flags and IngressClassParams.\",\"title\":\"Design\"},{\"location\":\"how-it-works/#ingress-creation\",\"text\":\"This section describes each step (circle) above. This example demonstrates satisfying 1 ingress resource. [1] : The controller watches for ingress events from the API server. When it finds ingress resources that satisfy its requirements, it begins the creation of AWS resources. [2] : An ALB (ELBv2) is created in AWS for the new ingress resource. This ALB can be internet-facing or internal. You can also specify the subnets it's created in using annotations. [3] : Target Groups are created in AWS for each unique Kubernetes service described in the ingress resource. [4] : Listeners are created for every port detailed in your ingress resource annotations. When no port is specified, sensible defaults ( 80 or 443 ) are used. Certificates may also be attached via annotations. [5] : Rules are created for each path specified in your ingress resource.
This ensures traffic to a specific path is routed to the correct Kubernetes Service. Along with the above, the controller also... deletes AWS components when ingress resources are removed from k8s. modifies AWS components when ingress resources change in k8s. assembles a list of existing ingress-related AWS components on start-up, allowing you to recover if the controller were to be restarted.\",\"title\":\"Ingress Creation\"},{\"location\":\"how-it-works/#ingress-traffic\",\"text\":\"AWS Load Balancer controller supports two traffic modes: Instance mode IP mode By default, Instance mode is used; users can explicitly select the mode via the alb.ingress.kubernetes.io/target-type annotation.\",\"title\":\"Ingress Traffic\"},{\"location\":\"how-it-works/#instance-mode\",\"text\":\"Ingress traffic starts at the ALB and reaches the Kubernetes nodes through each service's NodePort. This means that services referenced from ingress resources must be exposed by type:NodePort in order to be reached by the ALB.\",\"title\":\"Instance mode\"},{\"location\":\"how-it-works/#ip-mode\",\"text\":\"Ingress traffic starts at the ALB and reaches the Kubernetes pods directly. CNIs must support directly accessible pod IP via secondary IP addresses on ENI .\",\"title\":\"IP mode\"},{\"location\":\"release/\",\"text\":\"AWS Load Balancer Controller Release Process \u00b6 Create the Release Commit \u00b6 Run hack/set-version to set the new version number and commit the resulting changes. This is called the \"release commit\". Merge the Release Commit \u00b6 Create a pull request with the release commit. Get it reviewed and merged to main . Upon merge to main , GitHub Actions will create a release tag for the new release. If the release is a \".0-beta.1\" release, GitHub Actions will also create a release branch for the minor version. (Remaining steps in process yet to be documented.)\",\"title\":\"AWS Load Balancer Controller Release Process\"},{\"location\":\"release/#aws-load-balancer-controller-release-process\",\"text\":\"\",\"title\":\"AWS Load Balancer Controller Release Process\"},{\"location\":\"release/#create-the-release-commit\",\"text\":\"Run hack/set-version to set the new version number and commit the resulting changes. This is called the \"release commit\".\",\"title\":\"Create the Release Commit\"},{\"location\":\"release/#merge-the-release-commit\",\"text\":\"Create a pull request with the release commit. Get it reviewed and merged to main . Upon merge to main , GitHub Actions will create a release tag for the new release. If the release is a \".0-beta.1\" release, GitHub Actions will also create a release branch for the minor version. (Remaining steps in process yet to be documented.)\",\"title\":\"Merge the Release Commit\"},{\"location\":\"deploy/configurations/\",\"text\":\"Controller configuration options \u00b6 This document covers configuration of the AWS Load Balancer controller limitation The v2.0.0+ version of AWSLoadBalancerController currently only supports one controller deployment (with one or multiple replicas) per cluster. The AWSLoadBalancerController assumes it's the solo owner of worker node security group rules with the elbv2.k8s.aws/targetGroupBinding=shared description; running multiple controller deployments will cause these controllers to compete with each other when updating worker node security group rules. We will remove this limitation in future versions: tracking issue AWS API Access \u00b6 To perform operations, the controller must have the required IAM role capabilities for accessing and provisioning ALB resources. There are many ways to achieve this, such as loading AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY as environment variables or using kube2iam . Refer to the installation guide for installing the controller in your kubernetes cluster and for the minimum required IAM permissions.
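As a hedged sketch of the environment-variable approach just mentioned (the Secret name and keys are illustrative; IAM roles for service accounts are generally preferable):

spec:
  containers:
  - name: aws-load-balancer-controller
    env:
    # Static credentials loaded from a pre-created Secret
    - name: AWS_ACCESS_KEY_ID
      valueFrom:
        secretKeyRef:
          name: aws-credentials
          key: access-key-id
    - name: AWS_SECRET_ACCESS_KEY
      valueFrom:
        secretKeyRef:
          name: aws-credentials
          key: secret-access-key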
Setting Ingress Resource Scope \u00b6 You can limit the ingresses the ALB ingress controller controls by combining the following two approaches: Limiting ingress class \u00b6 Setting the --ingress-class argument constrains the controller's scope to ingresses with a matching ingressClassName field. An example of the container spec portion of the controller, only listening for resources with the class \"alb\", would be as follows. spec : containers : - args : - --ingress-class=alb Now, only ingress resources with the appropriate class are picked up, as seen below. apiVersion : networking.k8s.io/v1 kind : Ingress metadata : name : echoserver namespace : echoserver spec : ingressClassName : alb ... If the ingress class is not specified, the controller will reconcile Ingress objects that have no ingress class specified or that have the ingress class alb . Limiting Namespaces \u00b6 Setting the --watch-namespace argument constrains the controller's scope to a single namespace. Ingress events outside of the namespace specified are not seen by the controller. An example of the container spec, for a controller watching only the default namespace, is as follows. spec : containers : - args : - --watch-namespace=default Currently, you can set only 1 namespace to watch in this flag. See this Kubernetes issue for more details. Controller command line flags \u00b6 The --cluster-name flag is mandatory and the value must match the name of the kubernetes cluster. If you specify an incorrect name, the subnet auto-discovery will not work. A sketch of passing the flag appears below, followed by the full flag table.
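As a sketch, the mandatory flag goes in the controller's container args (the cluster name is illustrative; the table below lists the full flag set):

spec:
  containers:
  - args:
    # Must match the Kubernetes cluster name exactly
    - --cluster-name=my-cluster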
Flag Type Default Description aws-api-endpoints AWS API Endpoints Config AWS API endpoints mapping, format: serviceID1=URL1,serviceID2=URL2 aws-api-throttle AWS Throttle Config default value throttle settings for AWS APIs, format: serviceID1:operationRegex1=rate:burst,serviceID2:operationRegex2=rate:burst aws-max-retries int 10 Maximum retries for AWS APIs aws-region string instance metadata AWS Region for the kubernetes cluster aws-vpc-id string instance metadata AWS VPC ID for the Kubernetes cluster allowed-certificate-authority-arns stringList [] Specify an optional list of CA ARNs to filter on in cert discovery (empty means all CAs are allowed) backend-security-group string Backend security group id to use for the ingress rules on the worker node SG cluster-name string Kubernetes cluster name default-ssl-policy string ELBSecurityPolicy-2016-08 Default SSL Policy that will be applied to all Ingresses or Services that do not have the SSL Policy annotation default-tags stringMap AWS Tags that will be applied to all AWS resources managed by this controller. Specified Tags take highest priority default-target-type string instance Default target type for Ingresses and Services - ip, instance disable-ingress-class-annotation boolean false Disable new usage of the kubernetes.io/ingress.class annotation disable-ingress-group-name-annotation boolean false Disallow new use of the alb.ingress.kubernetes.io/group.name annotation disable-restricted-sg-rules boolean false Disable the usage of restricted security group rules enable-backend-security-group boolean true Enable sharing of security groups for backend traffic enable-endpoint-slices boolean false Use EndpointSlices instead of Endpoints for pod endpoint and TargetGroupBinding resolution for load balancers with IP targets. enable-leader-election boolean true Enable leader election for the load balancer controller manager. Enabling this will ensure there is only one active controller manager enable-pod-readiness-gate-inject boolean true If enabled, targetHealth readiness gate will get injected to the pod spec for the matching endpoint pods enable-shield boolean true Enable Shield addon for ALB enable-waf boolean true Enable WAF addon for ALB enable-wafv2 boolean true Enable WAF V2 addon for ALB external-managed-tags stringList AWS Tag keys that will be managed externally. Specified Tags are ignored during reconciliation feature-gates stringMap A set of key=value pairs to enable or disable features health-probe-bind-addr string :61779 The address the health probes bind to ingress-class string alb Name of the ingress class this controller satisfies ingress-max-concurrent-reconciles int 3 Maximum number of concurrently running reconcile loops for ingress kubeconfig string in-cluster config Path to the kubeconfig file containing authorization and API server information leader-election-id string aws-load-balancer-controller-leader Name of the leader election ID to use for this controller leader-election-namespace string Namespace to use for the leader election resource load-balancer-class string service.k8s.aws/nlb Name of the load balancer class specified in service spec.loadBalancerClass reconciled by this controller log-level string info Set the controller log level - info, debug metrics-bind-addr string :8080 The address the metric endpoint binds to service-max-concurrent-reconciles int 3 Maximum number of concurrently running reconcile loops for service sync-period duration 10h0m0s Period at which the controller forces the repopulation of its local object stores targetgroupbinding-max-concurrent-reconciles int 3 Maximum number of concurrently running reconcile loops for targetGroupBinding targetgroupbinding-max-exponential-backoff-delay duration 16m40s Maximum duration of exponential backoff for targetGroupBinding reconcile failures tolerate-non-existent-backend-service boolean true Whether to allow rules which refer to backend services that do not exist (when enabled, a 503 error is returned if the backend service does not exist) tolerate-non-existent-backend-action boolean true Whether to allow rules which refer to backend actions that do not exist (when enabled, a 503 error is returned if the backend action does not exist) watch-namespace string Namespace the controller watches for updates to Kubernetes objects. If empty, all namespaces are watched.
webhook-bind-port int 9443 The TCP port the Webhook server binds to webhook-cert-dir string /tmp/k8s-webhook-server/serving-certs The directory that contains the server key and certificate webhook-cert-file string tls.crt The server certificate name webhook-key-file string tls.key The server key name disable-ingress-class-annotation \u00b6 --disable-ingress-class-annotation controls whether to disable new usage of the kubernetes.io/ingress.class annotation. Once disabled: you can no longer create Ingresses with the value of the kubernetes.io/ingress.class annotation equal to alb (can be overridden via --ingress-class flag of this controller). you can no longer update Ingresses to set the value of the kubernetes.io/ingress.class annotation equal to alb (can be overridden via --ingress-class flag of this controller). you can still create Ingresses with a kubernetes.io/ingress.class annotation that has other values (for example: \"nginx\") disable-ingress-group-name-annotation \u00b6 --disable-ingress-group-name-annotation controls whether to disable new usage of the alb.ingress.kubernetes.io/group.name annotation. Once disabled: you can no longer create Ingresses with the alb.ingress.kubernetes.io/group.name annotation. you can no longer alter the value of an alb.ingress.kubernetes.io/group.name annotation on an existing Ingress. sync-period \u00b6 --sync-period defines a fixed interval for the controller to reconcile all resources even if there is no change, defaulting to 10 hours. Please be mindful that frequent reconciliations may incur unnecessary AWS API usage. As best practice, we do not recommend that users manually modify the resources managed by the controller. Users should not depend on the controller's auto-reconciliation to revert manual modifications, or to mitigate any security risks. waf-addons \u00b6 By default, the controller assumes sole ownership of the WAF addons associated with the provisioned ALBs, via the flags --enable-waf and --enable-wafv2 . Users should disable them accordingly if they want a third party like AWS Firewall Manager to associate or remove the WAF-ACL of the ALBs. Once disabled, the controller shall not take any actions on the waf addons of the provisioned ALBs. throttle config \u00b6 Controller uses the following default throttle config: WAF Regional:^AssociateWebACL|DisassociateWebACL=0.5:1,WAF Regional:^GetWebACLForResource|ListResourcesForWebACL=1:1,WAFV2:^AssociateWebACL|DisassociateWebACL=0.5:1,WAFV2:^GetWebACLForResource|ListResourcesForWebACL=1:1,Elastic Load Balancing v2:^RegisterTargets|^DeregisterTargets=4:20,Elastic Load Balancing v2:.*=10:40 Client side throttling enables gradual scaling of the api calls. Additional throttle config can be specified via the --aws-api-throttle flag. You can get the ServiceID from the API definition in AWS SDK. For example, for ELBv2 it is Elastic Load Balancing v2 . Here is an example of throttle config to specify client side throttling of ELBv2 calls. --aws-api-throttle=Elastic Load Balancing v2:RegisterTargets|DeregisterTargets=4:20,Elastic Load Balancing v2:.*=10:40 Instance metadata \u00b6 If running on EC2, the default values are obtained from the instance metadata service. Feature Gates \u00b6 They are a set of key=value pairs that describe AWS load balancer controller features. You can set them via the flag --feature-gates=key1=value1,key2=value2 , as sketched below.
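For example, a hedged container-args sketch (the gate values are illustrative; supported keys are listed in the table below):

spec:
  containers:
  - args:
    # Multiple gates are comma-separated key=value pairs
    - --feature-gates=EnableIPTargetType=true,NLBSecurityGroup=false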
Feature Gate Supported Key Type Default Value Description ListenerRulesTagging string true Enable or disable tagging AWS load balancer listeners and rules WeightedTargetGroups string true Enable or disable weighted target groups ServiceTypeLoadBalancerOnly string false If enabled, the controller will be limited to reconciling services of type LoadBalancer EndpointsFailOpen string true Enable or disable allowing endpoints with ready:unknown state in the target groups. EnableServiceController string true Toggles support for Service type resources. EnableIPTargetType string true Used to toggle support for target-type ip across Ingress and Service type resources. EnableRGTAPI string false If enabled, the tagging manager will describe resource tags via RGT APIs, otherwise via ELB APIs. In order to enable RGT API, tag:GetResources is needed in controller IAM policy. SubnetsClusterTagCheck string true Enable or disable the check for kubernetes.io/cluster/${cluster-name} during subnet auto-discovery NLBHealthCheckAdvancedConfiguration string true Enable or disable advanced health check configuration for NLB, for example health check timeout ALBSingleSubnet string false If enabled, the controller will allow using only 1 subnet for provisioning ALB, which needs to be allowlisted by ELB in advance NLBSecurityGroup string true Enable or disable all NLB security group actions including frontend sg creation, backend sg creation, and backend sg modifications\",\"title\":\"Configurations\"},{\"location\":\"deploy/configurations/#controller-configuration-options\",\"text\":\"This document covers configuration of the AWS Load Balancer controller limitation The v2.0.0+ version of AWSLoadBalancerController currently only supports one controller deployment (with one or multiple replicas) per cluster. The AWSLoadBalancerController assumes it's the solo owner of worker node security group rules with the elbv2.k8s.aws/targetGroupBinding=shared description; running multiple controller deployments will cause these controllers to compete with each other when updating worker node security group rules. We will remove this limitation in future versions: tracking issue\",\"title\":\"Controller configuration options\"},{\"location\":\"deploy/configurations/#aws-api-access\",\"text\":\"To perform operations, the controller must have the required IAM role capabilities for accessing and provisioning ALB resources. There are many ways to achieve this, such as loading AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY as environment variables or using kube2iam . Refer to the installation guide for installing the controller in your kubernetes cluster and for the minimum required IAM permissions.\",\"title\":\"AWS API Access\"},{\"location\":\"deploy/configurations/#setting-ingress-resource-scope\",\"text\":\"You can limit the ingresses the ALB ingress controller controls by combining the following two approaches:\",\"title\":\"Setting Ingress Resource Scope\"},{\"location\":\"deploy/configurations/#limiting-ingress-class\",\"text\":\"Setting the --ingress-class argument constrains the controller's scope to ingresses with a matching ingressClassName field. An example of the container spec portion of the controller, only listening for resources with the class \"alb\", would be as follows. spec : containers : - args : - --ingress-class=alb Now, only ingress resources with the appropriate class are picked up, as seen below. apiVersion : networking.k8s.io/v1 kind : Ingress metadata : name : echoserver namespace : echoserver spec : ingressClassName : alb ...
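Relatedly, the class itself can be declared as an IngressClass resource; a sketch, assuming the controller string ingress.k8s.aws/alb that the LBC matches:

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: alb
spec:
  # The AWS Load Balancer Controller claims classes with this controller value
  controller: ingress.k8s.aws/alb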
If the ingress class is not specified, the controller will reconcile Ingress objects that have no ingress class specified or that have the ingress class alb .\",\"title\":\"Limiting ingress class\"},{\"location\":\"deploy/configurations/#limiting-namespaces\",\"text\":\"Setting the --watch-namespace argument constrains the controller's scope to a single namespace. Ingress events outside of the namespace specified are not seen by the controller. An example of the container spec, for a controller watching only the default namespace, is as follows. spec : containers : - args : - --watch-namespace=default Currently, you can set only 1 namespace to watch in this flag. See this Kubernetes issue for more details.\",\"title\":\"Limiting Namespaces\"},{\"location\":\"deploy/configurations/#controller-command-line-flags\",\"text\":\"The --cluster-name flag is mandatory and the value must match the name of the kubernetes cluster. If you specify an incorrect name, the subnet auto-discovery will not work. Flag Type Default Description aws-api-endpoints AWS API Endpoints Config AWS API endpoints mapping, format: serviceID1=URL1,serviceID2=URL2 aws-api-throttle AWS Throttle Config default value throttle settings for AWS APIs, format: serviceID1:operationRegex1=rate:burst,serviceID2:operationRegex2=rate:burst aws-max-retries int 10 Maximum retries for AWS APIs aws-region string instance metadata AWS Region for the kubernetes cluster aws-vpc-id string instance metadata AWS VPC ID for the Kubernetes cluster allowed-certificate-authority-arns stringList [] Specify an optional list of CA ARNs to filter on in cert discovery (empty means all CAs are allowed) backend-security-group string Backend security group id to use for the ingress rules on the worker node SG cluster-name string Kubernetes cluster name default-ssl-policy string ELBSecurityPolicy-2016-08 Default SSL Policy that will be applied to all Ingresses or Services that do not have the SSL Policy annotation default-tags stringMap AWS Tags that will be applied to all AWS resources managed by this controller. Specified Tags take highest priority default-target-type string instance Default target type for Ingresses and Services - ip, instance disable-ingress-class-annotation boolean false Disable new usage of the kubernetes.io/ingress.class annotation disable-ingress-group-name-annotation boolean false Disallow new use of the alb.ingress.kubernetes.io/group.name annotation disable-restricted-sg-rules boolean false Disable the usage of restricted security group rules enable-backend-security-group boolean true Enable sharing of security groups for backend traffic enable-endpoint-slices boolean false Use EndpointSlices instead of Endpoints for pod endpoint and TargetGroupBinding resolution for load balancers with IP targets. enable-leader-election boolean true Enable leader election for the load balancer controller manager. Enabling this will ensure there is only one active controller manager enable-pod-readiness-gate-inject boolean true If enabled, targetHealth readiness gate will get injected to the pod spec for the matching endpoint pods enable-shield boolean true Enable Shield addon for ALB enable-waf boolean true Enable WAF addon for ALB enable-wafv2 boolean true Enable WAF V2 addon for ALB external-managed-tags stringList AWS Tag keys that will be managed externally.
Specified Tags are ignored during reconciliation feature-gates stringMap A set of key=value pairs to enable or disable features health-probe-bind-addr string :61779 The address the health probes bind to ingress-class string alb Name of the ingress class this controller satisfies ingress-max-concurrent-reconciles int 3 Maximum number of concurrently running reconcile loops for ingress kubeconfig string in-cluster config Path to the kubeconfig file containing authorization and API server information leader-election-id string aws-load-balancer-controller-leader Name of the leader election ID to use for this controller leader-election-namespace string Namespace in which the leader election resource will be created load-balancer-class string service.k8s.aws/nlb Name of the load balancer class specified in service spec.loadBalancerClass reconciled by this controller log-level string info Set the controller log level - info, debug metrics-bind-addr string :8080 The address the metric endpoint binds to service-max-concurrent-reconciles int 3 Maximum number of concurrently running reconcile loops for service sync-period duration 10h0m0s Period at which the controller forces the repopulation of its local object stores targetgroupbinding-max-concurrent-reconciles int 3 Maximum number of concurrently running reconcile loops for targetGroupBinding targetgroupbinding-max-exponential-backoff-delay duration 16m40s Maximum duration of exponential backoff for targetGroupBinding reconcile failures tolerate-non-existent-backend-service boolean true Whether to allow rules which refer to backend services that do not exist (when enabled, it will return a 503 error if the backend service does not exist) tolerate-non-existent-backend-action boolean true Whether to allow rules which refer to backend actions that do not exist (when enabled, it will return a 503 error if the backend action does not exist) watch-namespace string Namespace the controller watches for updates to Kubernetes objects. If empty, all namespaces are watched. webhook-bind-port int 9443 The TCP port the Webhook server binds to webhook-cert-dir string /tmp/k8s-webhook-server/serving-certs The directory that contains the server key and certificate webhook-cert-file string tls.crt The server certificate name webhook-key-file string tls.key The server key name","title":"Controller command line flags"},{"location":"deploy/configurations/#disable-ingress-class-annotation","text":"--disable-ingress-class-annotation controls whether to disable new usage of the kubernetes.io/ingress.class annotation. Once disabled: you can no longer create Ingresses with the value of the kubernetes.io/ingress.class annotation equal to alb (can be overridden via --ingress-class flag of this controller). you can no longer update Ingresses to set the value of the kubernetes.io/ingress.class annotation equal to alb (can be overridden via --ingress-class flag of this controller). you can still create Ingresses with a kubernetes.io/ingress.class annotation that has other values (for example: \"nginx\")","title":"disable-ingress-class-annotation"},{"location":"deploy/configurations/#disable-ingress-group-name-annotation","text":"--disable-ingress-group-name-annotation controls whether to disable new usage of the alb.ingress.kubernetes.io/group.name annotation. Once disabled: you can no longer create Ingresses with the alb.ingress.kubernetes.io/group.name annotation. 
you can no longer alter the value of an alb.ingress.kubernetes.io/group.name annotation on an existing Ingress.","title":"disable-ingress-group-name-annotation"},{"location":"deploy/configurations/#sync-period","text":"--sync-period defines a fixed interval for the controller to reconcile all resources even if there is no change; it defaults to 10 hours. Please be mindful that frequent reconciliations may incur unnecessary AWS API usage. As a best practice, we do not recommend that users manually modify the resources managed by the controller, nor depend on the controller's auto-reconciliation to revert manual modifications or to mitigate any security risks.","title":"sync-period"},{"location":"deploy/configurations/#waf-addons","text":"By default, the controller assumes sole ownership of the WAF addons associated with the provisioned ALBs, via the flags --enable-waf and --enable-wafv2 . Users should disable them accordingly if they want a third party like AWS Firewall Manager to associate or remove the WAF-ACL of the ALBs. Once disabled, the controller will not take any actions on the WAF addons of the provisioned ALBs.","title":"waf-addons"},{"location":"deploy/configurations/#throttle-config","text":"The controller uses the following default throttle config: WAF Regional:^AssociateWebACL|DisassociateWebACL=0.5:1,WAF Regional:^GetWebACLForResource|ListResourcesForWebACL=1:1,WAFV2:^AssociateWebACL|DisassociateWebACL=0.5:1,WAFV2:^GetWebACLForResource|ListResourcesForWebACL=1:1,Elastic Load Balancing v2:^RegisterTargets|^DeregisterTargets=4:20,Elastic Load Balancing v2:.*=10:40 Client-side throttling enables gradual scaling of the API calls. Additional throttle config can be specified via the --aws-api-throttle flag. You can get the ServiceID from the API definition in AWS SDK. For example, for ELBv2 it is Elastic Load Balancing v2 . Here is an example of throttle config to specify client side throttling of ELBv2 calls. --aws-api-throttle=Elastic Load Balancing v2:RegisterTargets|DeregisterTargets=4:20,Elastic Load Balancing v2:.*=10:40","title":"throttle config"},{"location":"deploy/configurations/#instance-metadata","text":"If running on EC2, the default values are obtained from the instance metadata service.","title":"Instance metadata"},{"location":"deploy/configurations/#feature-gates","text":"Feature gates are a set of key=value pairs that describe AWS Load Balancer Controller features. You can use them as flags --feature-gates=key1=value1,key2=value2 Features-gate Supported Key Type Default Value Description ListenerRulesTagging string true Enable or disable tagging AWS load balancer listeners and rules WeightedTargetGroups string true Enable or disable weighted target groups ServiceTypeLoadBalancerOnly string false If enabled, controller will be limited to reconciling service of type LoadBalancer EndpointsFailOpen string true Enable or disable allowing endpoints with ready:unknown state in the target groups. EnableServiceController string true Toggles support for Service type resources. EnableIPTargetType string true Used to toggle support for target-type ip across Ingress and Service type resources. EnableRGTAPI string false If enabled, the tagging manager will describe resource tags via RGT APIs, otherwise via ELB APIs. In order to enable RGT API, tag:GetResources is needed in controller IAM policy. 
SubnetsClusterTagCheck string true Enable or disable the check for kubernetes.io/cluster/${cluster-name} during subnet auto-discovery NLBHealthCheckAdvancedConfiguration string true Enable or disable advanced health check configuration for NLB, for example health check timeout ALBSingleSubnet string false If enabled, controller will allow using only 1 subnet for provisioning ALB, which needs to be whitelisted by ELB in advance NLBSecurityGroup string true Enable or disable all NLB security group actions including frontend sg creation, backend sg creation, and backend sg modifications","title":"Feature Gates"},{"location":"deploy/installation/","text":"AWS Load Balancer Controller installation \u00b6 The AWS Load Balancer controller (LBC) provisions AWS Network Load Balancer (NLB) and Application Load Balancer (ALB) resources. The LBC watches for new service or ingress Kubernetes resources and configures AWS resources. The LBC is supported by AWS. Some clusters may be using the legacy \"in-tree\" functionality to provision AWS load balancers. The AWS Load Balancer Controller should be installed instead. Existing AWS ALB Ingress Controller users The AWS ALB Ingress controller must be uninstalled before installing the AWS Load Balancer Controller. Please follow our migration guide to do a migration. When using AWS Load Balancer Controller v2.5+ The AWS LBC provides a mutating webhook for service resources to set the spec.loadBalancerClass field for service of type LoadBalancer on create. This makes the AWS LBC the default controller for service of type LoadBalancer. You can disable this feature and revert to setting the Cloud Controller Manager (in-tree controller) as the default by setting the helm chart value enableServiceMutatorWebhook to false with --set enableServiceMutatorWebhook=false . You will no longer be able to provision new Classic Load Balancer (CLB) from your kubernetes service unless you disable this feature. Existing CLB will continue to work fine. Supported Kubernetes versions \u00b6 AWS Load Balancer Controller v2.0.0~v2.1.3 requires Kubernetes 1.15+ AWS Load Balancer Controller v2.2.0~v2.3.1 requires Kubernetes 1.16-1.21 AWS Load Balancer Controller v2.4.0+ requires Kubernetes 1.19+ AWS Load Balancer Controller v2.5.0+ requires Kubernetes 1.22+ Deployment considerations \u00b6 Additional requirements for non-EKS clusters: \u00b6 Ensure subnets are tagged appropriately for auto-discovery to work For IP targets, pods must have IPs from the VPC subnets. You can configure the amazon-vpc-cni-k8s plugin for this purpose. Additional requirements for isolated cluster: \u00b6 Isolated clusters are clusters without internet access, and instead rely on VPC endpoints for all required connections. When installing the AWS LBC in isolated clusters, you need to disable shield, waf and wafv2 via controller flags --enable-shield=false, --enable-waf=false, --enable-wafv2=false Using the Amazon EC2 instance metadata server version 2 (IMDSv2) \u00b6 We recommend blocking access to the instance metadata by requiring the instance to use IMDSv2 only. For more information, please refer to the AWS guidance here . If you are using IMDSv2, set the hop limit to 2 or higher in order to allow the LBC to perform the metadata introspection. 
You can set the IMDSv2 as follows: aws ec2 modify-instance-metadata-options --http-put-response-hop-limit 2 --http-tokens required --region <region> --instance-id <instance-id> Instead of depending on IMDSv2, you can specify the AWS Region and the VPC via the controller flags --aws-region and --aws-vpc-id . Configure IAM \u00b6 The controller runs on the worker nodes, so it needs access to the AWS ALB/NLB APIs with IAM permissions. The IAM permissions can either be set up using IAM roles for service accounts (IRSA) or can be attached directly to the worker node IAM roles. The best practice is using IRSA if you're using Amazon EKS. If you're using kOps or self-hosted Kubernetes, you must manually attach policies to node instances. Option A: Recommended, IAM roles for service accounts (IRSA) \u00b6 The reference IAM policies contain the following permissive configuration: { \"Effect\": \"Allow\", \"Action\": [ \"ec2:AuthorizeSecurityGroupIngress\", \"ec2:RevokeSecurityGroupIngress\" ], \"Resource\": \"*\" }, We recommend further scoping down this configuration based on the VPC ID or cluster name resource tag. Example condition for VPC ID: \"Condition\": { \"ArnEquals\": { \"ec2:Vpc\": \"arn:aws:ec2:<REGION>:<ACCOUNT-ID>:vpc/<VPC-ID>\" } } Example condition for cluster name resource tag: \"Condition\": { \"Null\": { \"aws:ResourceTag/kubernetes.io/cluster/<CLUSTER-NAME>\": \"false\" } } Create an IAM OIDC provider. You can skip this step if you already have one for your cluster. eksctl utils associate-iam-oidc-provider \\ --region <region> \\ --cluster <cluster-name> \\ --approve Download an IAM policy for the LBC using one of the following commands: If your cluster is in a US Gov Cloud region: curl -o iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.0/docs/install/iam_policy_us-gov.json If your cluster is in a China region: curl -o iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.0/docs/install/iam_policy_cn.json If your cluster is in any other region: curl -o iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.0/docs/install/iam_policy.json Create an IAM policy named AWSLoadBalancerControllerIAMPolicy . If you downloaded a different policy, replace iam-policy with the name of the policy that you downloaded. aws iam create-policy \\ --policy-name AWSLoadBalancerControllerIAMPolicy \\ --policy-document file://iam-policy.json Take note of the policy ARN that's returned. Create an IAM role and Kubernetes ServiceAccount for the LBC. Use the ARN from the previous step. eksctl create iamserviceaccount \\ --cluster=<cluster-name> \\ --namespace=kube-system \\ --name=aws-load-balancer-controller \\ --attach-policy-arn=arn:aws:iam::<AWS_ACCOUNT_ID>:policy/AWSLoadBalancerControllerIAMPolicy \\ --override-existing-serviceaccounts \\ --region <region> \\ --approve Option B: Attach IAM policies to nodes \u00b6 If you're not setting up IAM roles for service accounts, apply the IAM policies from the following URL at a minimum. Please be aware of the possibility that the controller permissions may be assumed by other users in a pod after retrieving the node role credentials, so the best practice would be using IRSA instead of attaching IAM policy directly. 
curl -o iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.0/docs/install/iam_policy.json The following IAM permissions subset is for those who use TargetGroupBinding only and don't plan to use the LBC to manage security group rules: { \"Statement\": [ { \"Action\": [ \"ec2:DescribeVpcs\", \"ec2:DescribeSecurityGroups\", \"ec2:DescribeInstances\", \"elasticloadbalancing:DescribeTargetGroups\", \"elasticloadbalancing:DescribeTargetHealth\", \"elasticloadbalancing:ModifyTargetGroup\", \"elasticloadbalancing:ModifyTargetGroupAttributes\", \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\" ], \"Effect\": \"Allow\", \"Resource\": \"*\" } ], \"Version\": \"2012-10-17\" } Network configuration \u00b6 Review the worker nodes security group docs. Your node security group must permit incoming traffic on TCP port 9443 from the Kubernetes control plane. This is needed for webhook access. If you use eksctl , this is the default configuration. If you use custom networking, please refer to the EKS Best Practices Guides for network configuration. Add controller to cluster \u00b6 We recommend using the Helm chart to install the controller. The chart supports Fargate and facilitates updating the controller. Helm If you want to run the controller on Fargate, use the Helm chart, since it doesn't depend on the cert-manager . Detailed instructions \u00b6 Follow the instructions in the aws-load-balancer-controller Helm chart. Summary \u00b6 Add the EKS chart repo to Helm helm repo add eks https://aws.github.io/eks-charts If upgrading the chart via helm upgrade , install the TargetGroupBinding CRDs. wget https://raw.githubusercontent.com/aws/eks-charts/master/stable/aws-load-balancer-controller/crds/crds.yaml kubectl apply -f crds.yaml Tip The helm install command automatically applies the CRDs, but helm upgrade doesn't. Helm install command for clusters with IRSA: helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=<cluster-name> --set serviceAccount.create=false --set serviceAccount.name=aws-load-balancer-controller Helm install command for clusters not using IRSA: helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=<cluster-name> YAML manifests Install cert-manager \u00b6 kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.12.3/cert-manager.yaml Apply YAML \u00b6 Download the spec for the LBC. wget https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases/download/v2.7.0/v2_7_0_full.yaml Edit the saved yaml file, go to the Deployment spec, and set the controller --cluster-name arg value to your EKS cluster name apiVersion: apps/v1 kind: Deployment . . . name: aws-load-balancer-controller namespace: kube-system spec: . . . template: spec: containers: - args: - --cluster-name=<cluster-name> If you use IAM roles for service accounts, we recommend that you delete the ServiceAccount from the yaml spec. If you delete the installation section from the yaml spec, deleting the ServiceAccount preserves the eksctl created iamserviceaccount. 
apiVersion: v1 kind: ServiceAccount Apply the yaml file kubectl apply -f v2_7_0_full.yaml Optionally download the default ingressclass and ingressclass params wget https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases/download/v2.7.0/v2_7_0_ingclass.yaml Apply the ingressclass and params kubectl apply -f v2_7_0_ingclass.yaml Create Update Strategy \u00b6 The controller doesn't receive security updates automatically. You need to manually upgrade to a newer version when it becomes available. You can upgrade using helm upgrade or another strategy to manage the controller deployment.","title":"Installation Guide"},{"location":"deploy/installation/#aws-load-balancer-controller-installation","text":"The AWS Load Balancer controller (LBC) provisions AWS Network Load Balancer (NLB) and Application Load Balancer (ALB) resources. The LBC watches for new service or ingress Kubernetes resources and configures AWS resources. The LBC is supported by AWS. Some clusters may be using the legacy \"in-tree\" functionality to provision AWS load balancers. The AWS Load Balancer Controller should be installed instead. Existing AWS ALB Ingress Controller users The AWS ALB Ingress controller must be uninstalled before installing the AWS Load Balancer Controller. Please follow our migration guide to do a migration. When using AWS Load Balancer Controller v2.5+ The AWS LBC provides a mutating webhook for service resources to set the spec.loadBalancerClass field for service of type LoadBalancer on create. This makes the AWS LBC the default controller for service of type LoadBalancer. You can disable this feature and revert to setting the Cloud Controller Manager (in-tree controller) as the default by setting the helm chart value enableServiceMutatorWebhook to false with --set enableServiceMutatorWebhook=false . You will no longer be able to provision new Classic Load Balancer (CLB) from your kubernetes service unless you disable this feature. Existing CLB will continue to work fine.","title":"AWS Load Balancer Controller installation"},{"location":"deploy/installation/#supported-kubernetes-versions","text":"AWS Load Balancer Controller v2.0.0~v2.1.3 requires Kubernetes 1.15+ AWS Load Balancer Controller v2.2.0~v2.3.1 requires Kubernetes 1.16-1.21 AWS Load Balancer Controller v2.4.0+ requires Kubernetes 1.19+ AWS Load Balancer Controller v2.5.0+ requires Kubernetes 1.22+","title":"Supported Kubernetes versions"},{"location":"deploy/installation/#deployment-considerations","text":"","title":"Deployment considerations"},{"location":"deploy/installation/#additional-requirements-for-non-eks-clusters","text":"Ensure subnets are tagged appropriately for auto-discovery to work For IP targets, pods must have IPs from the VPC subnets. You can configure the amazon-vpc-cni-k8s plugin for this purpose.","title":"Additional requirements for non-EKS clusters:"},{"location":"deploy/installation/#additional-requirements-for-isolated-cluster","text":"Isolated clusters are clusters without internet access, and instead rely on VPC endpoints for all required connections. When installing the AWS LBC in isolated clusters, you need to disable shield, waf and wafv2 via controller flags --enable-shield=false, --enable-waf=false, --enable-wafv2=false","title":"Additional requirements for isolated cluster:"},{"location":"deploy/installation/#using-the-amazon-ec2-instance-metadata-server-version-2-imdsv2","text":"We recommend blocking access to the instance metadata by requiring the instance to use IMDSv2 only. 
For more information, please refer to the AWS guidance here . If you are using IMDSv2, set the hop limit to 2 or higher in order to allow the LBC to perform the metadata introspection. You can set the IMDSv2 as follows: aws ec2 modify-instance-metadata-options --http-put-response-hop-limit 2 --http-tokens required --region <region> --instance-id <instance-id> Instead of depending on IMDSv2, you can specify the AWS Region and the VPC via the controller flags --aws-region and --aws-vpc-id .","title":"Using the Amazon EC2 instance metadata server version 2 (IMDSv2)"},{"location":"deploy/installation/#configure-iam","text":"The controller runs on the worker nodes, so it needs access to the AWS ALB/NLB APIs with IAM permissions. The IAM permissions can either be set up using IAM roles for service accounts (IRSA) or can be attached directly to the worker node IAM roles. The best practice is using IRSA if you're using Amazon EKS. If you're using kOps or self-hosted Kubernetes, you must manually attach policies to node instances.","title":"Configure IAM"},{"location":"deploy/installation/#option-a-recommended-iam-roles-for-service-accounts-irsa","text":"The reference IAM policies contain the following permissive configuration: { \"Effect\": \"Allow\", \"Action\": [ \"ec2:AuthorizeSecurityGroupIngress\", \"ec2:RevokeSecurityGroupIngress\" ], \"Resource\": \"*\" }, We recommend further scoping down this configuration based on the VPC ID or cluster name resource tag. Example condition for VPC ID: \"Condition\": { \"ArnEquals\": { \"ec2:Vpc\": \"arn:aws:ec2:<REGION>:<ACCOUNT-ID>:vpc/<VPC-ID>\" } } Example condition for cluster name resource tag: \"Condition\": { \"Null\": { \"aws:ResourceTag/kubernetes.io/cluster/<CLUSTER-NAME>\": \"false\" } } Create an IAM OIDC provider. You can skip this step if you already have one for your cluster. eksctl utils associate-iam-oidc-provider \\ --region <region> \\ --cluster <cluster-name> \\ --approve Download an IAM policy for the LBC using one of the following commands: If your cluster is in a US Gov Cloud region: curl -o iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.0/docs/install/iam_policy_us-gov.json If your cluster is in a China region: curl -o iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.0/docs/install/iam_policy_cn.json If your cluster is in any other region: curl -o iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.0/docs/install/iam_policy.json Create an IAM policy named AWSLoadBalancerControllerIAMPolicy . If you downloaded a different policy, replace iam-policy with the name of the policy that you downloaded. aws iam create-policy \\ --policy-name AWSLoadBalancerControllerIAMPolicy \\ --policy-document file://iam-policy.json Take note of the policy ARN that's returned. Create an IAM role and Kubernetes ServiceAccount for the LBC. Use the ARN from the previous step. eksctl create iamserviceaccount \\ --cluster=<cluster-name> \\ --namespace=kube-system \\ --name=aws-load-balancer-controller \\ --attach-policy-arn=arn:aws:iam::<AWS_ACCOUNT_ID>:policy/AWSLoadBalancerControllerIAMPolicy \\ --override-existing-serviceaccounts \\ --region <region> \\ --approve","title":"Option A: Recommended, IAM roles for service accounts (IRSA)"},{"location":"deploy/installation/#option-b-attach-iam-policies-to-nodes","text":"If you're not setting up IAM roles for service accounts, apply the IAM policies from the following URL at a minimum. 
Please be aware of the possibility that the controller permissions may be assumed by other users in a pod after retrieving the node role credentials, so the best practice would be using IRSA instead of attaching IAM policy directly. curl -o iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.0/docs/install/iam_policy.json The following IAM permissions subset is for those who use TargetGroupBinding only and don't plan to use the LBC to manage security group rules: { \"Statement\": [ { \"Action\": [ \"ec2:DescribeVpcs\", \"ec2:DescribeSecurityGroups\", \"ec2:DescribeInstances\", \"elasticloadbalancing:DescribeTargetGroups\", \"elasticloadbalancing:DescribeTargetHealth\", \"elasticloadbalancing:ModifyTargetGroup\", \"elasticloadbalancing:ModifyTargetGroupAttributes\", \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\" ], \"Effect\": \"Allow\", \"Resource\": \"*\" } ], \"Version\": \"2012-10-17\" }","title":"Option B: Attach IAM policies to nodes"},{"location":"deploy/installation/#network-configuration","text":"Review the worker nodes security group docs. Your node security group must permit incoming traffic on TCP port 9443 from the Kubernetes control plane. This is needed for webhook access. If you use eksctl , this is the default configuration. If you use custom networking, please refer to the EKS Best Practices Guides for network configuration.","title":"Network configuration"},{"location":"deploy/installation/#add-controller-to-cluster","text":"We recommend using the Helm chart to install the controller. The chart supports Fargate and facilitates updating the controller. Helm If you want to run the controller on Fargate, use the Helm chart, since it doesn't depend on the cert-manager .","title":"Add controller to cluster"},{"location":"deploy/installation/#detailed-instructions","text":"Follow the instructions in the aws-load-balancer-controller Helm chart.","title":"Detailed instructions"},{"location":"deploy/installation/#summary","text":"Add the EKS chart repo to Helm helm repo add eks https://aws.github.io/eks-charts If upgrading the chart via helm upgrade , install the TargetGroupBinding CRDs. wget https://raw.githubusercontent.com/aws/eks-charts/master/stable/aws-load-balancer-controller/crds/crds.yaml kubectl apply -f crds.yaml Tip The helm install command automatically applies the CRDs, but helm upgrade doesn't. Helm install command for clusters with IRSA: helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=<cluster-name> --set serviceAccount.create=false --set serviceAccount.name=aws-load-balancer-controller Helm install command for clusters not using IRSA: helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=<cluster-name> YAML manifests","title":"Summary"},{"location":"deploy/installation/#install-cert-manager","text":"kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.12.3/cert-manager.yaml","title":"Install cert-manager"},{"location":"deploy/installation/#apply-yaml","text":"Download the spec for the LBC. wget https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases/download/v2.7.0/v2_7_0_full.yaml Edit the saved yaml file, go to the Deployment spec, and set the controller --cluster-name arg value to your EKS cluster name apiVersion: apps/v1 kind: Deployment . . . name: aws-load-balancer-controller namespace: kube-system spec: . . . 
template: spec: containers: - args: - --cluster-name=<cluster-name> If you use IAM roles for service accounts, we recommend that you delete the ServiceAccount from the yaml spec. If you delete the installation section from the yaml spec, deleting the ServiceAccount preserves the eksctl created iamserviceaccount. apiVersion: v1 kind: ServiceAccount Apply the yaml file kubectl apply -f v2_7_0_full.yaml Optionally download the default ingressclass and ingressclass params wget https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases/download/v2.7.0/v2_7_0_ingclass.yaml Apply the ingressclass and params kubectl apply -f v2_7_0_ingclass.yaml","title":"Apply YAML"},{"location":"deploy/installation/#create-update-strategy","text":"The controller doesn't receive security updates automatically. You need to manually upgrade to a newer version when it becomes available. You can upgrade using helm upgrade or another strategy to manage the controller deployment.","title":"Create Update Strategy"},{"location":"deploy/pod_readiness_gate/","text":"Pod readiness gate \u00b6 AWS Load Balancer controller supports \u00bbPod readiness gates\u00ab to indicate that a pod is registered to the ALB/NLB and healthy to receive traffic. The controller automatically injects the necessary readiness gate configuration to the pod spec via mutating webhook during pod creation. For readiness gate configuration to be injected to the pod spec, you need to apply the label elbv2.k8s.aws/pod-readiness-gate-inject: enabled to the pod namespace. However, note that this only works with target-type: ip , since when using target-type: instance , the node is used as the backend and the ALB itself is not aware of pod readiness in that case. The pod readiness gate is needed under certain circumstances to achieve full zero downtime rolling deployments. Consider the following example: Low number of replicas in a deployment Start a rolling update of the deployment Rollout of new pods takes less time than it takes the AWS Load Balancer controller to register the new pods and for their health state to turn \u00bbHealthy\u00ab in the target group At some point during this rolling update, the target group might only have registered targets that are in \u00bbInitial\u00ab or \u00bbDraining\u00ab state; this results in service outage In order to avoid this situation, the AWS Load Balancer controller can set the readiness condition on the pods that constitute your ingress or service backend. The condition status on a pod will be set to True only when the corresponding target in the ALB/NLB target group shows a health state of \u00bbHealthy\u00ab. This prevents the rolling update of a deployment from terminating old pods until the newly created pods are \u00bbHealthy\u00ab in the ALB/NLB target group and ready to take traffic. upgrading from AWS ALB ingress controller If you have a pod spec with legacy readiness gate configuration, ensure you label the namespace and create the Service/Ingress objects before applying the pod/deployment manifest. The load balancer controller will remove all legacy readiness-gate configuration and add new ones during pod creation. Configuration \u00b6 Pod readiness gate support is enabled by default on the AWS load balancer controller. You need to apply the readiness gate inject label to each namespace in which you would like to use this feature. 
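For instance, a declarative namespace manifest carrying the label might look like the sketch below (the namespace name readiness matches the kubectl example that follows):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: readiness
  labels:
    # opts pods created in this namespace into readiness gate injection
    elbv2.k8s.aws/pod-readiness-gate-inject: enabled
```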
Alternatively, you can create and label a namespace with kubectl as follows - $ kubectl create namespace readiness namespace/readiness created $ kubectl label namespace readiness elbv2.k8s.aws/pod-readiness-gate-inject=enabled namespace/readiness labeled $ kubectl describe namespace readiness Name: readiness Labels: elbv2.k8s.aws/pod-readiness-gate-inject=enabled Annotations: Status: Active Once labelled, the controller will add the pod readiness gates config to all the pods created subsequently that meet all the following conditions: There exists a service matching the pod labels in the same namespace There exists at least one target group binding that refers to the matching service The target type is IP The readiness gates have the prefix target-health.elbv2.k8s.aws and the controller injects the config to the pod spec only during pod creation. create ingress or service before pod To ensure all of your pods in a namespace get the readiness gate config, you need to create your Ingress or Service and label the namespace before creating the pods. Object Selector \u00b6 The default webhook configuration matches all pods in the namespaces containing the label elbv2.k8s.aws/pod-readiness-gate-inject=enabled . You can modify the webhook configuration further to select specific pods from the labeled namespace by specifying the objectSelector . For example, in order to select resources with the elbv2.k8s.aws/pod-readiness-gate-inject: enabled label, you can add the following objectSelector to the webhook: objectSelector: matchLabels: elbv2.k8s.aws/pod-readiness-gate-inject: enabled To edit, $ kubectl edit mutatingwebhookconfigurations aws-load-balancer-webhook ... name: mpod.elbv2.k8s.aws namespaceSelector: matchExpressions: - key: elbv2.k8s.aws/pod-readiness-gate-inject operator: In values: - enabled objectSelector: matchLabels: elbv2.k8s.aws/pod-readiness-gate-inject: enabled ... When you specify multiple selectors, pods matching all the conditions will get mutated. Upgrading from AWS ALB Ingress controller \u00b6 If you have a pod spec with the AWS ALB ingress controller (aka v1) style readiness-gate configuration, the controller will automatically remove the legacy readiness gates config and add new ones during pod creation if the pod namespace is labelled correctly. Other than the namespace labeling, no further configuration is necessary. The legacy readiness gates have the target-health.alb.ingress.k8s.aws prefix. Disabling the readiness gate inject \u00b6 You can specify the controller flag --enable-pod-readiness-gate-inject=false during controller startup to disable the controller from modifying the pod spec. 
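When injection is enabled, the added configuration shows up under readinessGates in the pod spec. Below is a sketch of what an injected pod might contain; the condition type suffix is generated per target group binding (the value here is borrowed from the example output later in this page), and the container is hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-test-5744b9ff84-7ftl9
  namespace: readiness
spec:
  readinessGates:
    # injected by the controller's mutating webhook; one entry per matching target group binding
    - conditionType: target-health.elbv2.k8s.aws/k8s-readines-perf1000-7848e5026b
  containers:
    - name: nginx
      image: nginx
```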
Checking the pod condition status \u00b6 The status of the readiness gates can be verified with kubectl get pod -o wide : NAME READY STATUS RESTARTS AGE IP NODE READINESS GATES nginx-test-5744b9ff84-7ftl9 1/1 Running 0 81s 10.1.2.3 ip-10-1-2-3.ec2.internal 0/1 When the target is registered and healthy in the ALB/NLB, the output will look like: NAME READY STATUS RESTARTS AGE IP NODE READINESS GATES nginx-test-5744b9ff84-7ftl9 1/1 Running 0 81s 10.1.2.3 ip-10-1-2-3.ec2.internal 1/1 If a readiness gate doesn't get ready, you can check the reason via: $ kubectl get pod nginx-test-545d8f4d89-l7rcl -o yaml | grep -B7 'type: target-health' status: conditions: - lastProbeTime: null lastTransitionTime: null message: Initial health checks in progress reason: Elb.InitialHealthChecking status: \"True\" type: target-health.elbv2.k8s.aws/k8s-readines-perf1000-7848e5026b","title":"Pod Readiness Gate"},{"location":"deploy/pod_readiness_gate/#pod-readiness-gate","text":"AWS Load Balancer controller supports \u00bbPod readiness gates\u00ab to indicate that a pod is registered to the ALB/NLB and healthy to receive traffic. The controller automatically injects the necessary readiness gate configuration to the pod spec via mutating webhook during pod creation. For readiness gate configuration to be injected to the pod spec, you need to apply the label elbv2.k8s.aws/pod-readiness-gate-inject: enabled to the pod namespace. However, note that this only works with target-type: ip , since when using target-type: instance , the node is used as the backend and the ALB itself is not aware of pod readiness in that case. The pod readiness gate is needed under certain circumstances to achieve full zero downtime rolling deployments. Consider the following example: Low number of replicas in a deployment Start a rolling update of the deployment Rollout of new pods takes less time than it takes the AWS Load Balancer controller to register the new pods and for their health state to turn \u00bbHealthy\u00ab in the target group At some point during this rolling update, the target group might only have registered targets that are in \u00bbInitial\u00ab or \u00bbDraining\u00ab state; this results in service outage In order to avoid this situation, the AWS Load Balancer controller can set the readiness condition on the pods that constitute your ingress or service backend. The condition status on a pod will be set to True only when the corresponding target in the ALB/NLB target group shows a health state of \u00bbHealthy\u00ab. This prevents the rolling update of a deployment from terminating old pods until the newly created pods are \u00bbHealthy\u00ab in the ALB/NLB target group and ready to take traffic. upgrading from AWS ALB ingress controller If you have a pod spec with legacy readiness gate configuration, ensure you label the namespace and create the Service/Ingress objects before applying the pod/deployment manifest. The load balancer controller will remove all legacy readiness-gate configuration and add new ones during pod creation.","title":"Pod readiness gate"},{"location":"deploy/pod_readiness_gate/#configuration","text":"Pod readiness gate support is enabled by default on the AWS load balancer controller. You need to apply the readiness gate inject label to each namespace in which you would like to use this feature. 
You can create and label a namespace as follows - $ kubectl create namespace readiness namespace/readiness created $ kubectl label namespace readiness elbv2.k8s.aws/pod-readiness-gate-inject=enabled namespace/readiness labeled $ kubectl describe namespace readiness Name: readiness Labels: elbv2.k8s.aws/pod-readiness-gate-inject=enabled Annotations: Status: Active Once labelled, the controller will add the pod readiness gates config to all the pods created subsequently that meet all the following conditions: There exists a service matching the pod labels in the same namespace There exists at least one target group binding that refers to the matching service The target type is IP The readiness gates have the prefix target-health.elbv2.k8s.aws and the controller injects the config to the pod spec only during pod creation. create ingress or service before pod To ensure all of your pods in a namespace get the readiness gate config, you need to create your Ingress or Service and label the namespace before creating the pods","title":"Configuration"},{"location":"deploy/pod_readiness_gate/#object-selector","text":"The default webhook configuration matches all pods in the namespaces containing the label elbv2.k8s.aws/pod-readiness-gate-inject=enabled . You can modify the webhook configuration further to select specific pods from the labeled namespace by specifying the objectSelector . For example, in order to select resources with the elbv2.k8s.aws/pod-readiness-gate-inject: enabled label, you can add the following objectSelector to the webhook: objectSelector: matchLabels: elbv2.k8s.aws/pod-readiness-gate-inject: enabled To edit, $ kubectl edit mutatingwebhookconfigurations aws-load-balancer-webhook ... name: mpod.elbv2.k8s.aws namespaceSelector: matchExpressions: - key: elbv2.k8s.aws/pod-readiness-gate-inject operator: In values: - enabled objectSelector: matchLabels: elbv2.k8s.aws/pod-readiness-gate-inject: enabled ... When you specify multiple selectors, pods matching all the conditions will get mutated.","title":"Object Selector"},{"location":"deploy/pod_readiness_gate/#upgrading-from-aws-alb-ingress-controller","text":"If you have a pod spec with the AWS ALB ingress controller (aka v1) style readiness-gate configuration, the controller will automatically remove the legacy readiness gates config and add new ones during pod creation if the pod namespace is labelled correctly. Other than the namespace labeling, no further configuration is necessary. 
The legacy readiness gates have the target-health.alb.ingress.k8s.aws prefix.","title":"Upgrading from AWS ALB Ingress controller"},{"location":"deploy/pod_readiness_gate/#disabling-the-readiness-gate-inject","text":"You can specify the controller flag --enable-pod-readiness-gate-inject=false during controller startup to disable the controller from modifying the pod spec.","title":"Disabling the readiness gate inject"},{"location":"deploy/pod_readiness_gate/#checking-the-pod-condition-status","text":"The status of the readiness gates can be verified with kubectl get pod -o wide : NAME READY STATUS RESTARTS AGE IP NODE READINESS GATES nginx-test-5744b9ff84-7ftl9 1/1 Running 0 81s 10.1.2.3 ip-10-1-2-3.ec2.internal 0/1 When the target is registered and healthy in the ALB/NLB, the output will look like: NAME READY STATUS RESTARTS AGE IP NODE READINESS GATES nginx-test-5744b9ff84-7ftl9 1/1 Running 0 81s 10.1.2.3 ip-10-1-2-3.ec2.internal 1/1 If a readiness gate doesn't get ready, you can check the reason via: $ kubectl get pod nginx-test-545d8f4d89-l7rcl -o yaml | grep -B7 'type: target-health' status: conditions: - lastProbeTime: null lastTransitionTime: null message: Initial health checks in progress reason: Elb.InitialHealthChecking status: \"True\" type: target-health.elbv2.k8s.aws/k8s-readines-perf1000-7848e5026b","title":"Checking the pod condition status"},{"location":"deploy/security_groups/","text":"Security Groups for Load Balancers \u00b6 Use security groups to limit client connections to your load balancers, and restrict connections with nodes. The AWS Load Balancer Controller (LBC) defines two classifications of security groups: frontend and backend . Frontend Security Groups: Determine the clients that can access the load balancers. Backend Security Groups: Permit the load balancer to connect to targets, such as EC2 instances or ENIs. Frontend Security Groups \u00b6 Frontend security groups control access to load balancers by specifying which clients can connect to them. Use cases for Frontend Security Groups include: Placing the load balancer behind another service, such as AWS Web Application Firewall or AWS CloudFront . Blocking the IP address range (CIDR) of a region. Configuring the Load Balancer for private or internal use, by specifying internal CIDRs and Security Groups. In the default configuration, the LBC automatically creates one security group per load balancer, allowing traffic from inbound-cidrs to listen-ports . Configuration \u00b6 Apply custom frontend security groups with an annotation. This disables automatic generation of frontend security groups. For Ingress resources, use the alb.ingress.kubernetes.io/security-groups annotation. For Service resources, use the service.beta.kubernetes.io/aws-load-balancer-security-groups annotation. The annotation must be set to one or more security group IDs or security group names. Backend Security Groups \u00b6 Backend Security Groups control traffic between AWS Load Balancers and their target EC2 instances or ENIs. For example, backend security groups can restrict the ports load balancers may access on nodes. Backend security groups permit traffic from AWS Load Balancers to their targets. The LBC uses a single, shared backend security group, attaching it to each load balancer and using it as the traffic source in the security group rules it adds to targets. When configuring security group rules at the ENI/Instance level, use the Security Group ID of the backend security group. 
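For example, a node security group rule that uses the shared backend security group as its source could be expressed in CloudFormation YAML like the sketch below (both group IDs and the port range are hypothetical):

```yaml
# Sketch: permit load balancer traffic to nodes via the shared backend SG
NodeIngressFromLBCBackendSG:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupId: sg-0aaa1111bbb222333               # node/ENI security group (hypothetical)
    SourceSecurityGroupId: sg-0ccc4444ddd555666  # LBC shared backend security group (hypothetical)
    IpProtocol: tcp
    FromPort: 30000                              # example NodePort range
    ToPort: 32767
```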
Avoid using the IP addresses of a specific AWS Load Balancer; these IPs are dynamic and the security group rules aren't updated automatically. Configuration \u00b6 Enable or Disable: Use --enable-backend-security-group (default true ) to enable/disable the shared backend security group. You can turn off the shared backend security group feature by setting it to false . However, if you have a high number of Ingress resources with frontend security groups auto-generated by the controller, you might run into security group rule limits on the instance/ENI security groups. Specification: Use --backend-security-group to pass in a security group ID to use as a custom shared backend security group. If --backend-security-group is left empty, a security group with the following attributes will be created: name : k8s-traffic-<cluster-name>-<hash> tags : elbv2.k8s.aws/cluster : <cluster-name> elbv2.k8s.aws/resource : backend-sg Coordination of Frontend and Backend Security Groups \u00b6 If the LBC auto-creates the frontend security group for a load balancer, it automatically adds the security group rules to allow traffic from the load balancer to the backend instances/ENIs. If the frontend security groups are manually specified, the LBC will not by default add any rules to the backend security group. Enable Autogeneration of Backend Security Group Rules \u00b6 If using custom frontend security groups, the LBC can be configured to automatically manage backend security group rules. To enable managing backend security group rules, apply an additional annotation to Ingress and Service resources. For Ingress resources, set the alb.ingress.kubernetes.io/manage-backend-security-group-rules annotation to true . For Service resources, set the service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules annotation to true . If management of backend security group rules is enabled with an annotation on a Service or Ingress, then --enable-backend-security-group must be set to true. These annotations are ignored when using auto-generated frontend security groups. Port Range Restrictions \u00b6 From version v2.3.0 onwards, the controller restricts port ranges in the backend security group rules by default. This improves the security of the default configuration. The LBC should generate the necessary rules to permit traffic, based on the Service and Ingress resources. If needed, set the controller flag --disable-restricted-sg-rules to true to permit traffic to all ports. This may be appropriate for backwards compatibility or troubleshooting.","title":"Security Group Management"},{"location":"deploy/security_groups/#security-groups-for-load-balancers","text":"Use security groups to limit client connections to your load balancers, and restrict connections with nodes. The AWS Load Balancer Controller (LBC) defines two classifications of security groups: frontend and backend . Frontend Security Groups: Determine the clients that can access the load balancers. Backend Security Groups: Permit the load balancer to connect to targets, such as EC2 instances or ENIs.","title":"Security Groups for Load Balancers"},{"location":"deploy/security_groups/#frontend-security-groups","text":"Frontend security groups control access to load balancers by specifying which clients can connect to them. Use cases for Frontend Security Groups include: Placing the load balancer behind another service, such as AWS Web Application Firewall or AWS CloudFront . Blocking the IP address range (CIDR) of a region. 
Configuring the Load Balancer for private or internal use, by specifying internal CIDRs and Security Groups. In the default configuration, the LBC automatically creates one security group per load balancer, allowing traffic from inbound-cidrs to listen-ports .","title":"Frontend Security Groups"},{"location":"deploy/security_groups/#configuration","text":"Apply custom frontend security groups with an annotation. This disables automatic generation of frontend security groups. For Ingress resources, use the alb.ingress.kubernetes.io/security-groups annotation. For Service resources, use the service.beta.kubernetes.io/aws-load-balancer-security-groups annotation. The annotation must be set to one or more security group IDs or security group names.","title":"Configuration"},{"location":"deploy/security_groups/#backend-security-groups","text":"Backend Security Groups control traffic between AWS Load Balancers and their target EC2 instances or ENIs. For example, backend security groups can restrict the ports load balancers may access on nodes. Backend security groups permit traffic from AWS Load Balancers to their targets. The LBC uses a single, shared backend security group, attaching it to each load balancer and using it as the traffic source in the security group rules it adds to targets. When configuring security group rules at the ENI/Instance level, use the Security Group ID of the backend security group. Avoid using the IP addresses of a specific AWS Load Balancer; these IPs are dynamic and the security group rules aren't updated automatically.","title":"Backend Security Groups"},{"location":"deploy/security_groups/#configuration_1","text":"Enable or Disable: Use --enable-backend-security-group (default true ) to enable/disable the shared backend security group. You can turn off the shared backend security group feature by setting it to false . However, if you have a high number of Ingress resources with frontend security groups auto-generated by the controller, you might run into security group rule limits on the instance/ENI security groups. Specification: Use --backend-security-group to pass in a security group ID to use as a custom shared backend security group. If --backend-security-group is left empty, a security group with the following attributes will be created: name : k8s-traffic-<cluster-name>-<hash> tags : elbv2.k8s.aws/cluster : <cluster-name> elbv2.k8s.aws/resource : backend-sg","title":"Configuration"},{"location":"deploy/security_groups/#coordination-of-frontend-and-backend-security-groups","text":"If the LBC auto-creates the frontend security group for a load balancer, it automatically adds the security group rules to allow traffic from the load balancer to the backend instances/ENIs. If the frontend security groups are manually specified, the LBC will not by default add any rules to the backend security group.","title":"Coordination of Frontend and Backend Security Groups"},{"location":"deploy/security_groups/#enable-autogeneration-of-backend-security-group-rules","text":"If using custom frontend security groups, the LBC can be configured to automatically manage backend security group rules. To enable managing backend security group rules, apply an additional annotation to Ingress and Service resources. For Ingress resources, set the alb.ingress.kubernetes.io/manage-backend-security-group-rules annotation to true . For Service resources, set the service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules annotation to true . 
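A sketch of a Service combining a custom frontend security group with managed backend rules (the service name, security group ID, selector, and ports are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    # custom frontend security group (hypothetical ID); disables auto-generation
    service.beta.kubernetes.io/aws-load-balancer-security-groups: sg-0123456789abcdef0
    # ask the LBC to keep managing the backend security group rules
    service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules: "true"
spec:
  type: LoadBalancer
  loadBalancerClass: service.k8s.aws/nlb
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```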
If management of backend security group rules is enabled with an annotation on a Service or Ingress, then --enable-backend-security-group must be set to true. These annotations are ignored when using auto-generated frontend security groups.","title":"Enable Autogeneration of Backend Security Group Rules"},{"location":"deploy/security_groups/#port-range-restrictions","text":"From version v2.3.0 onwards, the controller restricts port ranges in the backend security group rules by default. This improves the security of the default configuration. The LBC should generate the necessary rules to permit traffic, based on the Service and Ingress resources. If needed, set the controller flag --disable-restricted-sg-rules to true to permit traffic to all ports. This may be appropriate for backwards compatibility or troubleshooting.","title":"Port Range Restrictions"},{"location":"deploy/subnet_discovery/","text":"Subnet auto-discovery \u00b6 By default, the AWS Load Balancer Controller (LBC) auto-discovers network subnets that it can create AWS Network Load Balancers (NLB) and AWS Application Load Balancers (ALB) in. ALBs require at least two subnets across Availability Zones by default; setting the feature gate ALBSingleSubnet to \"true\" allows using only one subnet for provisioning an ALB. NLBs require one subnet. The subnets must be tagged appropriately for auto-discovery to work. The controller chooses one subnet from each Availability Zone. During auto-discovery, the controller considers subnets with at least eight available IP addresses. In the case of multiple qualified tagged subnets in an Availability Zone, the controller chooses the first one in lexicographical order by the subnet IDs. For more information about the subnets for the LBC, see Application Load Balancers and Network Load Balancers . If you used eksctl or an Amazon EKS AWS CloudFormation template to create your VPC after March 26, 2020, then the subnets are tagged appropriately when they're created. For more information about the Amazon EKS AWS CloudFormation VPC templates, see Creating a VPC for your Amazon EKS cluster . Public subnets \u00b6 Public subnets are used for internet-facing load balancers. These subnets must have the following tags: Key Value kubernetes.io/role/elb 1 or `` Private subnets \u00b6 Private subnets are used for internal load balancers. These subnets must have the following tags: Key Value kubernetes.io/role/internal-elb 1 or `` Common tag \u00b6 In version v2.1.1 and older of the LBC, both the public and private subnets must be tagged with the cluster name as follows: Key Value kubernetes.io/cluster/${cluster-name} owned or shared ${cluster-name} is the name of the Kubernetes cluster. The cluster tag is not required in versions v2.1.2 to v2.4.1, unless a cluster tag for another cluster is present. With versions v2.4.2 and later, you can disable the cluster tag check completely by specifying the feature gate SubnetsClusterTagCheck=false","title":"Subnet Discovery"},{"location":"deploy/subnet_discovery/#subnet-auto-discovery","text":"By default, the AWS Load Balancer Controller (LBC) auto-discovers network subnets that it can create AWS Network Load Balancers (NLB) and AWS Application Load Balancers (ALB) in. ALBs require at least two subnets across Availability Zones by default; setting the feature gate ALBSingleSubnet to \"true\" allows using only one subnet for provisioning an ALB. NLBs require one subnet. The subnets must be tagged appropriately for auto-discovery to work. 
The controller chooses one subnet from each Availability Zone. During auto-discovery, the controller considers subnets with at least eight available IP addresses. In the case of multiple qualified tagged subnets in an Availability Zone, the controller chooses the first one in lexicographical order by the subnet IDs. For more information about the subnets for the LBC, see Application Load Balancers and Network Load Balancers . If you used eksctl or an Amazon EKS AWS CloudFormation template to create your VPC after March 26, 2020, then the subnets are tagged appropriately when they're created. For more information about the Amazon EKS AWS CloudFormation VPC templates, see Creating a VPC for your Amazon EKS cluster .","title":"Subnet auto-discovery"},{"location":"deploy/subnet_discovery/#public-subnets","text":"Public subnets are used for internet-facing load balancers. These subnets must have the following tags: Key Value kubernetes.io/role/elb 1 or ``","title":"Public subnets"},{"location":"deploy/subnet_discovery/#private-subnets","text":"Private subnets are used for internal load balancers. These subnets must have the following tags: Key Value kubernetes.io/role/internal-elb 1 or ``","title":"Private subnets"},{"location":"deploy/subnet_discovery/#common-tag","text":"In version v2.1.1 and older of the LBC, both the public and private subnets must be tagged with the cluster name as follows: Key Value kubernetes.io/cluster/${cluster-name} owned or shared ${cluster-name} is the name of the Kubernetes cluster. The cluster tag is not required in versions v2.1.2 to v2.4.1, unless a cluster tag for another cluster is present. With versions v2.4.2 and later, you can disable the cluster tag check completely by specifying the feature gate SubnetsClusterTagCheck=false","title":"Common tag"},{"location":"deploy/upgrade/migrate_v1_v2/","text":"Migrate from v1 to v2 \u00b6 This document contains the information necessary to migrate from an existing installation of AWSALBIngressController(v1) to the new AWSLoadBalancerController(v2). Prerequisites \u00b6 AWSALBIngressController >=v1.1.3 If you have AWSALBIngressController(<1.1.3) installed, you need to upgrade to version >=v1.1.3 (e.g. v1.1.9) first. Backwards compatibility \u00b6 The AWSLoadBalancerController(v2.0.1) is backwards-compatible with AWSALBIngressController(>=v1.1.3). It supports existing AWS resources provisioned by AWSALBIngressController(>=v1.1.3) for Ingress resources with the caveats below: The AWS LoadBalancer resource created for your Ingress will be preserved. If migrating from =v1.1.3). Existing AWSALBIngressController needs to be uninstalled first before installing the new AWSLoadBalancerController. Existing Ingress resources do not need to be deleted. Install new AWSLoadBalancerController Install AWSLoadBalancerController(v2.5.0) by following the installation instructions Grant the additional IAM policy needed for migration to the controller. Verify all Ingresses work as expected.","title":"Migrate v1 to v2"},{"location":"deploy/upgrade/migrate_v1_v2/#migrate-from-v1-to-v2","text":"This document contains the information necessary to migrate from an existing installation of AWSALBIngressController(v1) to the new AWSLoadBalancerController(v2).","title":"Migrate from v1 to v2"},{"location":"deploy/upgrade/migrate_v1_v2/#prerequisites","text":"AWSALBIngressController >=v1.1.3 If you have AWSALBIngressController(<1.1.3) installed, you need to upgrade to version >=v1.1.3 (e.g. 
v1.1.9) first.","title":"Prerequisites"},{"location":"deploy/upgrade/migrate_v1_v2/#backwards-compatibility","text":"The AWSLoadBalancerController(v2.0.1) is backwards-compatible with AWSALBIngressController(>=v1.1.3). It supports existing AWS resources provisioned by AWSALBIngressController(>=v1.1.3) for Ingress resources with the caveats below: The AWS LoadBalancer resource created for your Ingress will be preserved. If migrating from =v1.1.3). Existing AWSALBIngressController needs to be uninstalled first before installing the new AWSLoadBalancerController. Existing Ingress resources do not need to be deleted. Install new AWSLoadBalancerController Install AWSLoadBalancerController(v2.5.0) by following the installation instructions Grant the additional IAM policy needed for migration to the controller. Verify all Ingresses work as expected.","title":"Upgrade steps"},{"location":"examples/echo_server/","text":"walkthrough: echoserver \u00b6 In this walkthrough, you'll Create a cluster with EKS Deploy an aws-load-balancer-controller Create deployments and ingress resources in the cluster Verify access to the service (Optional) Use external-dns to create a DNS record pointing to the load balancer created by the aws-load-balancer-controller. This assumes you have a route53 hosted zone available. Otherwise you can access the service using the load balancer DNS. Create the EKS cluster \u00b6 Install eksctl : https://eksctl.io Create EKS cluster via eksctl eksctl create cluster 2018-08-14T11:19:09-07:00 [\u2139] setting availability zones to [us-west-2c us-west-2a us-west-2b] 2018-08-14T11:19:09-07:00 [\u2139] importing SSH public key \"/Users/kamador/.ssh/id_rsa.pub\" as \"eksctl-exciting-gopher-1534270749-b7:71:da:f6:f3:63:7a:ee:ad:7a:10:37:28:ff:44:d1\" 2018-08-14T11:19:10-07:00 [\u2139] creating EKS cluster \"exciting-gopher-1534270749\" in \"us-west-2\" region 2018-08-14T11:19:10-07:00 [\u2139] creating ServiceRole stack \"EKS-exciting-gopher-1534270749-ServiceRole\" 2018-08-14T11:19:10-07:00 [\u2139] creating VPC stack \"EKS-exciting-gopher-1534270749-VPC\" 2018-08-14T11:19:50-07:00 [\u2714] created ServiceRole stack \"EKS-exciting-gopher-1534270749-ServiceRole\" 2018-08-14T11:20:30-07:00 [\u2714] created VPC stack \"EKS-exciting-gopher-1534270749-VPC\" 2018-08-14T11:20:30-07:00 [\u2139] creating control plane \"exciting-gopher-1534270749\" 2018-08-14T11:31:52-07:00 [\u2714] created control plane \"exciting-gopher-1534270749\" 2018-08-14T11:31:52-07:00 [\u2139] creating DefaultNodeGroup stack \"EKS-exciting-gopher-1534270749-DefaultNodeGroup\" 2018-08-14T11:35:33-07:00 [\u2714] created DefaultNodeGroup stack \"EKS-exciting-gopher-1534270749-DefaultNodeGroup\" 2018-08-14T11:35:33-07:00 [\u2714] all EKS cluster \"exciting-gopher-1534270749\" resources has been created 2018-08-14T11:35:33-07:00 [\u2714] saved kubeconfig as \"/Users/kamador/.kube/config\" 2018-08-14T11:35:34-07:00 [\u2139] the cluster has 0 nodes 2018-08-14T11:35:34-07:00 [\u2139] waiting for at least 2 nodes to become ready 2018-08-14T11:36:05-07:00 [\u2139] the cluster has 2 nodes 2018-08-14T11:36:05-07:00 [\u2139] node \"ip-192-168-139-176.us-west-2.compute.internal\" is ready 2018-08-14T11:36:05-07:00 [\u2139] node \"ip-192-168-214-126.us-west-2.compute.internal\" is ready 2018-08-14T11:36:05-07:00 [\u2714] EKS cluster \"exciting-gopher-1534270749\" in \"us-west-2\" region is ready Setup the AWS Load Balancer controller \u00b6 Refer to the installation instructions to set up the controller Verify the deployment was successful and the 
controller started. kubectl logs -n kube-system --tail -1 -l app.kubernetes.io/name = aws-load-balancer-controller Should display output similar to the following. {\"level\":\"info\",\"ts\":1602778062.2588625,\"logger\":\"setup\",\"msg\":\"version\",\"GitVersion\":\"v2.0.0-rc3-13-gcdc8f715-dirty\",\"GitCommit\":\"cdc8f715919cc65ca8161b6083c4091222632d6b\",\"BuildDate\":\"2020-10-15T15:58:31+0000\"} {\"level\":\"info\",\"ts\":1602778065.4515743,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\":8080\"} {\"level\":\"info\",\"ts\":1602778065.4536595,\"logger\":\"controller-runtime.webhook\",\"msg\":\"registering webhook\",\"path\":\"/mutate-v1-pod\"} {\"level\":\"info\",\"ts\":1602778065.4537156,\"logger\":\"controller-runtime.webhook\",\"msg\":\"registering webhook\",\"path\":\"/mutate-elbv2-k8s-aws-v1beta1-targetgroupbinding\"} {\"level\":\"info\",\"ts\":1602778065.4537542,\"logger\":\"controller-runtime.webhook\",\"msg\":\"registering webhook\",\"path\":\"/validate-elbv2-k8s-aws-v1beta1-targetgroupbinding\"} {\"level\":\"info\",\"ts\":1602778065.4537594,\"logger\":\"setup\",\"msg\":\"starting manager\"} I1015 16:07:45.453851 1 leaderelection.go:242] attempting to acquire leader lease kube-system/aws-load-balancer-controller-leader... {\"level\":\"info\",\"ts\":1602778065.5544264,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} {\"level\":\"info\",\"ts\":1602778065.5544496,\"logger\":\"controller-runtime.webhook.webhooks\",\"msg\":\"starting webhook server\"} {\"level\":\"info\",\"ts\":1602778065.5549548,\"logger\":\"controller-runtime.certwatcher\",\"msg\":\"Updated current TLS certificate\"} {\"level\":\"info\",\"ts\":1602778065.5550802,\"logger\":\"controller-runtime.webhook\",\"msg\":\"serving webhook server\",\"host\":\"\",\"port\":9443} {\"level\":\"info\",\"ts\":1602778065.5551715,\"logger\":\"controller-runtime.certwatcher\",\"msg\":\"Starting certificate watcher\"} I1015 16:08:03.662023 1 leaderelection.go:252] successfully acquired lease kube-system/aws-load-balancer-controller-leader {\"level\":\"info\",\"ts\":1602778083.663017,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting EventSource\",\"controller\":\"targetGroupBinding\",\"source\":\"kind source: /, Kind=\"} {\"level\":\"info\",\"ts\":1602778083.6631303,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting EventSource\",\"controller\":\"targetGroupBinding\",\"source\":\"kind source: /, Kind=\"} {\"level\":\"info\",\"ts\":1602778083.6633205,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting EventSource\",\"controller\":\"ingress\",\"source\":\"channel source: 0xc0007340f0\"} {\"level\":\"info\",\"ts\":1602778083.6633654,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting EventSource\",\"controller\":\"ingress\",\"source\":\"channel source: 0xc000734140\"} {\"level\":\"info\",\"ts\":1602778083.6633892,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting EventSource\",\"controller\":\"ingress\",\"source\":\"kind source: /, Kind=\"} {\"level\":\"info\",\"ts\":1602778083.663441,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting EventSource\",\"controller\":\"ingress\",\"source\":\"kind source: /, Kind=\"} {\"level\":\"info\",\"ts\":1602778083.6634624,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting EventSource\",\"controller\":\"ingress\",\"source\":\"kind source: /, Kind=\"} 
{\"level\":\"info\",\"ts\":1602778083.6635776,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting EventSource\",\"controller\":\"service\",\"source\":\"kind source: /, Kind=\"} {\"level\":\"info\",\"ts\":1602778083.6636262,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting Controller\",\"controller\":\"service\"} {\"level\":\"info\",\"ts\":1602778083.7634695,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting EventSource\",\"controller\":\"targetGroupBinding\",\"source\":\"kind source: /, Kind=\"} {\"level\":\"info\",\"ts\":1602778083.7637022,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting workers\",\"controller\":\"service\",\"worker count\":3} {\"level\":\"info\",\"ts\":1602778083.7641861,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting Controller\",\"controller\":\"ingress\"} {\"level\":\"info\",\"ts\":1602778083.8641882,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting Controller\",\"controller\":\"targetGroupBinding\"} {\"level\":\"info\",\"ts\":1602778083.864236,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting workers\",\"controller\":\"targetGroupBinding\",\"worker count\":3} {\"level\":\"info\",\"ts\":1602778083.8643816,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting workers\",\"controller\":\"ingress\",\"worker count\":3} Deploy the echoserver resources \u00b6 Deploy all the echoserver resources (namespace, service, deployment) kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.0/docs/examples/echoservice/echoserver-namespace.yaml && \\ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.0/docs/examples/echoservice/echoserver-service.yaml && \\ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.0/docs/examples/echoservice/echoserver-deployment.yaml List all the resources to ensure they were created. kubectl get -n echoserver deploy,svc Should resolve similar to the following. NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE svc/echoserver 10.3.31.76 80:31027/TCP 4d NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deploy/echoserver 1 1 1 1 4d Deploy ingress for echoserver \u00b6 Download the echoserver ingress manifest locally. wget https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.0/docs/examples/echoservice/echoserver-ingress.yaml Configure the subnets, either by add annotation to the ingress or add tags to subnets. This step is optional in lieu of auto-discovery. Tip If you'd like to use external dns, alter the host field to a domain that you own in Route 53. Assuming you managed example.com in Route 53. Edit the alb.ingress.kubernetes.io/subnets annotation to include at least two subnets. Subnets must be from different Availability Zones. 
eksctl get cluster exciting-gopher-1534270749 NAME VERSION STATUS CREATED VPC SUBNETS SECURITYGROUPS exciting-gopher-1534270749 1.10 ACTIVE 2018-08-14T18:20:32Z vpc-0aa01b07b3c922c9c subnet-05e1c98ed0f5b109e,subnet-07f5bb81f661df61b,subnet-0a4e6232630820516 sg-05ceb5eee9fd7cac4 apiVersion : networking.k8s.io/v1 kind : Ingress metadata : name : echoserver namespace : echoserver annotations : alb.ingress.kubernetes.io/scheme : internet-facing alb.ingress.kubernetes.io/target-type : ip alb.ingress.kubernetes.io/subnets : subnet-05e1c98ed0f5b109e,subnet-07f5bb81f661df61b,subnet-0a4e6232630820516 alb.ingress.kubernetes.io/tags : Environment=dev,Team=test spec : rules : - http : paths : When adding tags to subnets for auto-discovery (instead of the alb.ingress.kubernetes.io/subnets annotation), you must include the following tags on the desired subnets. kubernetes.io/cluster/$CLUSTER_NAME where $CLUSTER_NAME is the same CLUSTER_NAME specified in the above step. kubernetes.io/role/internal-elb should be set to 1 or an empty tag value for internal load balancers. kubernetes.io/role/elb should be set to 1 or an empty tag value for internet-facing load balancers. An example of a subnet with the correct tags for the cluster joshcalico is as follows. Deploy the ingress resource for echoserver kubectl apply -f echoserver-ingress.yaml Verify the aws-load-balancer-controller creates the resources kubectl logs -n kube-system --tail -1 -l app.kubernetes.io/name = aws-load-balancer-controller | grep 'echoserver\\/echoserver' You should see similar to the following. {\"level\":\"info\",\"ts\":1602803965.264764,\"logger\":\"controllers.ingress\",\"msg\":\"successfully built model\",\"model\":\"{\\\"id\\\":\\\"echoserver/echoserver\\\",\\\"resources\\\":{\\\"AWS::EC2::SecurityGroup\\\":{\\\"ManagedLBSecurityGroup\\\":{\\\"spec\\\":{\\\"groupName\\\":\\\"k8s-echoserv-echoserv-4e1e34cae5\\\",\\\"description\\\":\\\"[k8s] Managed SecurityGroup for
LoadBalancer\\\",\\\"tags\\\":{\\\"Environment\\\":\\\"dev\\\",\\\"Team\\\":\\\"test\\\"},\\\"ingress\\\":[{\\\"ipProtocol\\\":\\\"tcp\\\",\\\"fromPort\\\":80,\\\"toPort\\\":80,\\\"ipRanges\\\":[{\\\"cidrIP\\\":\\\"0.0.0.0/0\\\"}]}]}}},\\\"AWS::ElasticLoadBalancingV2::Listener\\\":{\\\"80\\\":{\\\"spec\\\":{\\\"loadBalancerARN\\\":{\\\"$ref\\\":\\\"#/resources/AWS::ElasticLoadBalancingV2::LoadBalancer/LoadBalancer/status/loadBalancerARN\\\"},\\\"port\\\":80,\\\"protocol\\\":\\\"HTTP\\\",\\\"defaultActions\\\":[{\\\"type\\\":\\\"fixed-response\\\",\\\"fixedResponseConfig\\\":{\\\"contentType\\\":\\\"text/plain\\\",\\\"statusCode\\\":\\\"404\\\"}}]}}},\\\"AWS::ElasticLoadBalancingV2::ListenerRule\\\":{\\\"80:1\\\":{\\\"spec\\\":{\\\"listenerARN\\\":{\\\"$ref\\\":\\\"#/resources/AWS::ElasticLoadBalancingV2::Listener/80/status/listenerARN\\\"},\\\"priority\\\":1,\\\"actions\\\":[{\\\"type\\\":\\\"forward\\\",\\\"forwardConfig\\\":{\\\"targetGroups\\\":[{\\\"targetGroupARN\\\":{\\\"$ref\\\":\\\"#/resources/AWS::ElasticLoadBalancingV2::TargetGroup/echoserver/echoserver-echoserver:80/status/targetGroupARN\\\"}}]}}],\\\"conditions\\\":[{\\\"field\\\":\\\"host-header\\\",\\\"hostHeaderConfig\\\":{\\\"values\\\":[\\\"echoserver.example.com\\\"]}},{\\\"field\\\":\\\"path-pattern\\\",\\\"pathPatternConfig\\\":{\\\"values\\\":[\\\"/\\\"]}}]}}},\\\"AWS::ElasticLoadBalancingV2::LoadBalancer\\\":{\\\"LoadBalancer\\\":{\\\"spec\\\":{\\\"name\\\":\\\"k8s-echoserv-echoserv-d4d6bd65d0\\\",\\\"type\\\":\\\"application\\\",\\\"scheme\\\":\\\"internet-facing\\\",\\\"ipAddressType\\\":\\\"ipv4\\\",\\\"subnetMapping\\\":[{\\\"subnetID\\\":\\\"subnet-01b35707c23b0a43b\\\"},{\\\"subnetID\\\":\\\"subnet-0f7814a7ab4dfcc2c\\\"}],\\\"securityGroups\\\":[{\\\"$ref\\\":\\\"#/resources/AWS::EC2::SecurityGroup/ManagedLBSecurityGroup/status/groupID\\\"}],\\\"tags\\\":{\\\"Environment\\\":\\\"dev\\\",\\\"Team\\\":\\\"test\\\"}}}},\\\"AWS::ElasticLoadBalancingV2::TargetGroup\\\":{\\\"echoserver/echoserver-echoserver:80\\\":{\\\"spec\\\":{\\\"name\\\":\\\"k8s-echoserv-echoserv-d989093207\\\",\\\"targetType\\\":\\\"instance\\\",\\\"port\\\":1,\\\"protocol\\\":\\\"HTTP\\\",\\\"healthCheckConfig\\\":{\\\"port\\\":\\\"traffic-port\\\",\\\"protocol\\\":\\\"HTTP\\\",\\\"path\\\":\\\"/\\\",\\\"matcher\\\":{\\\"httpCode\\\":\\\"200\\\"},\\\"intervalSeconds\\\":15,\\\"timeoutSeconds\\\":5,\\\"healthyThresholdCount\\\":2,\\\"unhealthyThresholdCount\\\":2},\\\"tags\\\":{\\\"Environment\\\":\\\"dev\\\",\\\"Team\\\":\\\"test\\\"}}}},\\\"K8S::ElasticLoadBalancingV2::TargetGroupBinding\\\":{\\\"echoserver/echoserver-echoserver:80\\\":{\\\"spec\\\":{\\\"template\\\":{\\\"metadata\\\":{\\\"name\\\":\\\"k8s-echoserv-echoserv-d989093207\\\",\\\"namespace\\\":\\\"echoserver\\\",\\\"creationTimestamp\\\":null},\\\"spec\\\":{\\\"targetGroupARN\\\":{\\\"$ref\\\":\\\"#/resources/AWS::ElasticLoadBalancingV2::TargetGroup/echoserver/echoserver-echoserver:80/status/targetGroupARN\\\"},\\\"targetType\\\":\\\"instance\\\",\\\"serviceRef\\\":{\\\"name\\\":\\\"echoserver\\\",\\\"port\\\":80},\\\"networking\\\":{\\\"ingress\\\":[{\\\"from\\\":[{\\\"securityGroup\\\":{\\\"groupID\\\":{\\\"$ref\\\":\\\"#/resources/AWS::EC2::SecurityGroup/ManagedLBSecurityGroup/status/groupID\\\"}}}],\\\"ports\\\":[{\\\"protocol\\\":\\\"TCP\\\"}]}]}}}}}}}}\"} {\"level\":\"info\",\"ts\":1602803966.411922,\"logger\":\"controllers.ingress\",\"msg\":\"creating targetGroup\",\"stackID\":\"echoserver/echoserver\",\"resourceID\":\"echoserver/echoserver-echoserver:80\"} 
{\"level\":\"info\",\"ts\":1602803966.6606336,\"logger\":\"controllers.ingress\",\"msg\":\"created targetGroup\",\"stackID\":\"echoserver/echoserver\",\"resourceID\":\"echoserver/echoserver-echoserver:80\",\"arn\":\"arn:aws:elasticloadbalancing:us-west-2:019453415603:targetgroup/k8s-echoserv-echoserv-d989093207/63225ae3ead3deb6\"} {\"level\":\"info\",\"ts\":1602803966.798019,\"logger\":\"controllers.ingress\",\"msg\":\"creating loadBalancer\",\"stackID\":\"echoserver/echoserver\",\"resourceID\":\"LoadBalancer\"} {\"level\":\"info\",\"ts\":1602803967.5472538,\"logger\":\"controllers.ingress\",\"msg\":\"created loadBalancer\",\"stackID\":\"echoserver/echoserver\",\"resourceID\":\"LoadBalancer\",\"arn\":\"arn:aws:elasticloadbalancing:us-west-2:019453415603:loadbalancer/app/k8s-echoserv-echoserv-d4d6bd65d0/4b4ebe8d6e1ef0c1\"} {\"level\":\"info\",\"ts\":1602803967.5863476,\"logger\":\"controllers.ingress\",\"msg\":\"creating listener\",\"stackID\":\"echoserver/echoserver\",\"resourceID\":\"80\"} {\"level\":\"info\",\"ts\":1602803967.6436293,\"logger\":\"controllers.ingress\",\"msg\":\"created listener\",\"stackID\":\"echoserver/echoserver\",\"resourceID\":\"80\",\"arn\":\"arn:aws:elasticloadbalancing:us-west-2:019453415603:listener/app/k8s-echoserv-echoserv-d4d6bd65d0/4b4ebe8d6e1ef0c1/6e13477f9d840da0\"} {\"level\":\"info\",\"ts\":1602803967.6528971,\"logger\":\"controllers.ingress\",\"msg\":\"creating listener rule\",\"stackID\":\"echoserver/echoserver\",\"resourceID\":\"80:1\"} {\"level\":\"info\",\"ts\":1602803967.7160048,\"logger\":\"controllers.ingress\",\"msg\":\"created listener rule\",\"stackID\":\"echoserver/echoserver\",\"resourceID\":\"80:1\",\"arn\":\"arn:aws:elasticloadbalancing:us-west-2:019453415603:listener-rule/app/k8s-echoserv-echoserv-d4d6bd65d0/4b4ebe8d6e1ef0c1/6e13477f9d840da0/23ef859380e792e8\"} {\"level\":\"info\",\"ts\":1602803967.8484688,\"logger\":\"controllers.ingress\",\"msg\":\"successfully deployed model\",\"ingressGroup\":\"echoserver/echoserver\"} Check the events of the ingress to see what has occur. kubectl describe ing -n echoserver echoserver You should see similar to the following. Name: echoserver Namespace: echoserver Address: joshcalico-echoserver-echo-2ad7-1490890749.us-east-2.elb.amazonaws.com Default backend: default-http-backend:80 (10.2.1.28:8080) Rules: Host Path Backends ---- ---- -------- * / echoserver:80 () Annotations: Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 3m 3m 1 ingress-controller Normal CREATE Ingress echoserver/echoserver 3m 32s 3 ingress-controller Normal UPDATE Ingress echoserver/echoserver The address seen above is the ALB's DNS name. This will be referenced via records created by external-dns if you choose to set it up. Verify that you can access the service \u00b6 Make a curl request to the echoserver service and verify that it returns a response payload. Use the address from the output of kubectl describe ing command above. curl You should get back a valid response. (Optional) Use external-dns to create a DNS record \u00b6 Deploy external-dns to your cluster using these instructions - Setup external-dns Update your ingress resource and add spec.rules[0].host and set the value to your domain name. The example below uses echoserver.example.org . spec : rules : - host : echoserver.example.org http : paths : 1. external-dns will then create a DNS record for the host you specified. 
This assumes you have the hosted zone corresponding to the domain you are trying to create a record in. Annotate the ingress with the external-dns specific configuration annotations : kubernetes.io/ingress.class : alb alb.ingress.kubernetes.io/scheme : internet-facing # external-dns specific configuration for creating route53 record-set external-dns.alpha.kubernetes.io/hostname : my-app.test-dns.com # give your domain name here Verify the DNS has propagated dig echoserver.example.org ;; QUESTION SECTION: ;echoserver.example.org. IN A ;; ANSWER SECTION: echoserver.example.org. 60 IN A 13.59.147.105 echoserver.example.org. 60 IN A 18.221.65.39 echoserver.example.org. 60 IN A 52.15.186.25 Once it has, you can make a call to echoserver and it should return a response payload. curl echoserver.example.org CLIENT VALUES: client_address=10.0.50.185 command=GET real path=/ query=nil request_version=1.1 request_uri=http://echoserver.example.org:8080/ SERVER VALUES: server_version=nginx: 1.10.0 - lua: 10001 HEADERS RECEIVED: accept=*/* host=echoserver.example.org user-agent=curl/7.54.0 x-amzn-trace-id=Root=1-59c08da5-113347df69640735312371bd x-forwarded-for=67.173.237.250 x-forwarded-port=80 x-forwarded-proto=http BODY: Kube2iam setup \u00b6 Follow the steps below if you want to use kube2iam to provide the AWS credentials configure the proper policy The policy to be used can be fetched from https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.0/docs/install/iam_policy.json configure the proper role and create the trust relationship You have to find which role is associated with your K8S nodes. Once you have found it, take note of the full ARN: arn:aws:iam::XXXXXXXXXXXX:role/k8scluster-node create the role, called k8s-lb-controller, attach the above policy, and add a Trust Relationship like: { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"\", \"Effect\": \"Allow\", \"Principal\": { \"Service\": \"ec2.amazonaws.com\" }, \"Action\": \"sts:AssumeRole\" }, { \"Sid\": \"\", \"Effect\": \"Allow\", \"Principal\": { \"AWS\": \"arn:aws:iam::XXXXXXXXXXXX:role/k8scluster-node\" }, \"Action\": \"sts:AssumeRole\" } ] } The new role will have a similar ARN: arn:aws:iam::XXXXXXXXXXXX:role/k8s-lb-controller update the aws-load-balancer-controller deployment Add the annotation in the template's metadata section spec : replicas : 1 selector : matchLabels : app.kubernetes.io/component : controller app.kubernetes.io/name : aws-load-balancer-controller strategy : rollingUpdate : maxSurge : 1 maxUnavailable : 1 type : RollingUpdate template : metadata : annotations : iam.amazonaws.com/role : arn:aws:iam::XXXXXXXXXXXX:role/k8s-lb-controller","title":"EchoServer"},{"location":"examples/echo_server/#walkthrough-echoserver","text":"In this walkthrough, you'll Create a cluster with EKS Deploy an aws-load-balancer-controller Create deployments and ingress resources in the cluster Verify access to the service (Optional) Use external-dns to create a DNS record pointing to the load balancer created by the aws-load-balancer-controller. This assumes you have a route53 hosted zone available.
Otherwise you can access the service using the load balancer DNS.","title":"walkthrough: echoserver"},{"location":"examples/echo_server/#create-the-eks-cluster","text":"Install eksctl : https://eksctl.io Create EKS cluster via eksctl eksctl create cluster 2018-08-14T11:19:09-07:00 [\u2139] setting availability zones to [us-west-2c us-west-2a us-west-2b] 2018-08-14T11:19:09-07:00 [\u2139] importing SSH public key \"/Users/kamador/.ssh/id_rsa.pub\" as \"eksctl-exciting-gopher-1534270749-b7:71:da:f6:f3:63:7a:ee:ad:7a:10:37:28:ff:44:d1\" 2018-08-14T11:19:10-07:00 [\u2139] creating EKS cluster \"exciting-gopher-1534270749\" in \"us-west-2\" region 2018-08-14T11:19:10-07:00 [\u2139] creating ServiceRole stack \"EKS-exciting-gopher-1534270749-ServiceRole\" 2018-08-14T11:19:10-07:00 [\u2139] creating VPC stack \"EKS-exciting-gopher-1534270749-VPC\" 2018-08-14T11:19:50-07:00 [\u2714] created ServiceRole stack \"EKS-exciting-gopher-1534270749-ServiceRole\" 2018-08-14T11:20:30-07:00 [\u2714] created VPC stack \"EKS-exciting-gopher-1534270749-VPC\" 2018-08-14T11:20:30-07:00 [\u2139] creating control plane \"exciting-gopher-1534270749\" 2018-08-14T11:31:52-07:00 [\u2714] created control plane \"exciting-gopher-1534270749\" 2018-08-14T11:31:52-07:00 [\u2139] creating DefaultNodeGroup stack \"EKS-exciting-gopher-1534270749-DefaultNodeGroup\" 2018-08-14T11:35:33-07:00 [\u2714] created DefaultNodeGroup stack \"EKS-exciting-gopher-1534270749-DefaultNodeGroup\" 2018-08-14T11:35:33-07:00 [\u2714] all EKS cluster \"exciting-gopher-1534270749\" resources has been created 2018-08-14T11:35:33-07:00 [\u2714] saved kubeconfig as \"/Users/kamador/.kube/config\" 2018-08-14T11:35:34-07:00 [\u2139] the cluster has 0 nodes 2018-08-14T11:35:34-07:00 [\u2139] waiting for at least 2 nodes to become ready 2018-08-14T11:36:05-07:00 [\u2139] the cluster has 2 nodes 2018-08-14T11:36:05-07:00 [\u2139] node \"ip-192-168-139-176.us-west-2.compute.internal\" is ready 2018-08-14T11:36:05-07:00 [\u2139] node \"ip-192-168-214-126.us-west-2.compute.internal\" is ready 2018-08-14T11:36:05-07:00 [\u2714] EKS cluster \"exciting-gopher-1534270749\" in \"us-west-2\" region is ready","title":"Create the EKS cluster"},{"location":"examples/echo_server/#setup-the-aws-load-balancer-controller","text":"Refer to the installation instructions to setup the controller Verify the deployment was successful and the controller started. kubectl logs -n kube-system --tail -1 -l app.kubernetes.io/name = aws-load-balancer-controller Should display output similar to the following. 
{\"level\":\"info\",\"ts\":1602778062.2588625,\"logger\":\"setup\",\"msg\":\"version\",\"GitVersion\":\"v2.0.0-rc3-13-gcdc8f715-dirty\",\"GitCommit\":\"cdc8f715919cc65ca8161b6083c4091222632d6b\",\"BuildDate\":\"2020-10-15T15:58:31+0000\"} {\"level\":\"info\",\"ts\":1602778065.4515743,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\":8080\"} {\"level\":\"info\",\"ts\":1602778065.4536595,\"logger\":\"controller-runtime.webhook\",\"msg\":\"registering webhook\",\"path\":\"/mutate-v1-pod\"} {\"level\":\"info\",\"ts\":1602778065.4537156,\"logger\":\"controller-runtime.webhook\",\"msg\":\"registering webhook\",\"path\":\"/mutate-elbv2-k8s-aws-v1beta1-targetgroupbinding\"} {\"level\":\"info\",\"ts\":1602778065.4537542,\"logger\":\"controller-runtime.webhook\",\"msg\":\"registering webhook\",\"path\":\"/validate-elbv2-k8s-aws-v1beta1-targetgroupbinding\"} {\"level\":\"info\",\"ts\":1602778065.4537594,\"logger\":\"setup\",\"msg\":\"starting manager\"} I1015 16:07:45.453851 1 leaderelection.go:242] attempting to acquire leader lease kube-system/aws-load-balancer-controller-leader... {\"level\":\"info\",\"ts\":1602778065.5544264,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} {\"level\":\"info\",\"ts\":1602778065.5544496,\"logger\":\"controller-runtime.webhook.webhooks\",\"msg\":\"starting webhook server\"} {\"level\":\"info\",\"ts\":1602778065.5549548,\"logger\":\"controller-runtime.certwatcher\",\"msg\":\"Updated current TLS certificate\"} {\"level\":\"info\",\"ts\":1602778065.5550802,\"logger\":\"controller-runtime.webhook\",\"msg\":\"serving webhook server\",\"host\":\"\",\"port\":9443} {\"level\":\"info\",\"ts\":1602778065.5551715,\"logger\":\"controller-runtime.certwatcher\",\"msg\":\"Starting certificate watcher\"} I1015 16:08:03.662023 1 leaderelection.go:252] successfully acquired lease kube-system/aws-load-balancer-controller-leader {\"level\":\"info\",\"ts\":1602778083.663017,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting EventSource\",\"controller\":\"targetGroupBinding\",\"source\":\"kind source: /, Kind=\"} {\"level\":\"info\",\"ts\":1602778083.6631303,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting EventSource\",\"controller\":\"targetGroupBinding\",\"source\":\"kind source: /, Kind=\"} {\"level\":\"info\",\"ts\":1602778083.6633205,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting EventSource\",\"controller\":\"ingress\",\"source\":\"channel source: 0xc0007340f0\"} {\"level\":\"info\",\"ts\":1602778083.6633654,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting EventSource\",\"controller\":\"ingress\",\"source\":\"channel source: 0xc000734140\"} {\"level\":\"info\",\"ts\":1602778083.6633892,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting EventSource\",\"controller\":\"ingress\",\"source\":\"kind source: /, Kind=\"} {\"level\":\"info\",\"ts\":1602778083.663441,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting EventSource\",\"controller\":\"ingress\",\"source\":\"kind source: /, Kind=\"} {\"level\":\"info\",\"ts\":1602778083.6634624,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting EventSource\",\"controller\":\"ingress\",\"source\":\"kind source: /, Kind=\"} {\"level\":\"info\",\"ts\":1602778083.6635776,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting EventSource\",\"controller\":\"service\",\"source\":\"kind source: /, Kind=\"} 
{\"level\":\"info\",\"ts\":1602778083.6636262,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting Controller\",\"controller\":\"service\"} {\"level\":\"info\",\"ts\":1602778083.7634695,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting EventSource\",\"controller\":\"targetGroupBinding\",\"source\":\"kind source: /, Kind=\"} {\"level\":\"info\",\"ts\":1602778083.7637022,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting workers\",\"controller\":\"service\",\"worker count\":3} {\"level\":\"info\",\"ts\":1602778083.7641861,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting Controller\",\"controller\":\"ingress\"} {\"level\":\"info\",\"ts\":1602778083.8641882,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting Controller\",\"controller\":\"targetGroupBinding\"} {\"level\":\"info\",\"ts\":1602778083.864236,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting workers\",\"controller\":\"targetGroupBinding\",\"worker count\":3} {\"level\":\"info\",\"ts\":1602778083.8643816,\"logger\":\"controller-runtime.controller\",\"msg\":\"Starting workers\",\"controller\":\"ingress\",\"worker count\":3}","title":"Setup the AWS Load Balancer controller"},{"location":"examples/echo_server/#deploy-the-echoserver-resources","text":"Deploy all the echoserver resources (namespace, service, deployment) kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.0/docs/examples/echoservice/echoserver-namespace.yaml && \\ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.0/docs/examples/echoservice/echoserver-service.yaml && \\ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.0/docs/examples/echoservice/echoserver-deployment.yaml List all the resources to ensure they were created. kubectl get -n echoserver deploy,svc Should resolve similar to the following. NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE svc/echoserver 10.3.31.76 80:31027/TCP 4d NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deploy/echoserver 1 1 1 1 4d","title":"Deploy the echoserver resources"},{"location":"examples/echo_server/#deploy-ingress-for-echoserver","text":"Download the echoserver ingress manifest locally. wget https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.0/docs/examples/echoservice/echoserver-ingress.yaml Configure the subnets, either by add annotation to the ingress or add tags to subnets. This step is optional in lieu of auto-discovery. Tip If you'd like to use external dns, alter the host field to a domain that you own in Route 53. Assuming you managed example.com in Route 53. Edit the alb.ingress.kubernetes.io/subnets annotation to include at least two subnets. Subnets must be from different Availability Zones. 
eksctl get cluster exciting-gopher-1534270749 NAME VERSION STATUS CREATED VPC SUBNETS SECURITYGROUPS exciting-gopher-1534270749 1.10 ACTIVE 2018-08-14T18:20:32Z vpc-0aa01b07b3c922c9c subnet-05e1c98ed0f5b109e,subnet-07f5bb81f661df61b,subnet-0a4e6232630820516 sg-05ceb5eee9fd7cac4 apiVersion : networking.k8s.io/v1 kind : Ingress metadata : name : echoserver namespace : echoserver annotations : alb.ingress.kubernetes.io/scheme : internet-facing alb.ingress.kubernetes.io/target-type : ip alb.ingress.kubernetes.io/subnets : subnet-05e1c98ed0f5b109e,subnet-07f5bb81f661df61b,subnet-0a4e6232630820516 alb.ingress.kubernetes.io/tags : Environment=dev,Team=test spec : rules : - http : paths : When adding tags to subnets for auto-discovery (instead of the alb.ingress.kubernetes.io/subnets annotation), you must include the following tags on the desired subnets. kubernetes.io/cluster/$CLUSTER_NAME where $CLUSTER_NAME is the same CLUSTER_NAME specified in the above step. kubernetes.io/role/internal-elb should be set to 1 or an empty tag value for internal load balancers. kubernetes.io/role/elb should be set to 1 or an empty tag value for internet-facing load balancers. An example of a subnet with the correct tags for the cluster joshcalico is as follows. Deploy the ingress resource for echoserver kubectl apply -f echoserver-ingress.yaml Verify the aws-load-balancer-controller creates the resources kubectl logs -n kube-system --tail -1 -l app.kubernetes.io/name = aws-load-balancer-controller | grep 'echoserver\\/echoserver' You should see similar to the following. {\"level\":\"info\",\"ts\":1602803965.264764,\"logger\":\"controllers.ingress\",\"msg\":\"successfully built model\",\"model\":\"{\\\"id\\\":\\\"echoserver/echoserver\\\",\\\"resources\\\":{\\\"AWS::EC2::SecurityGroup\\\":{\\\"ManagedLBSecurityGroup\\\":{\\\"spec\\\":{\\\"groupName\\\":\\\"k8s-echoserv-echoserv-4e1e34cae5\\\",\\\"description\\\":\\\"[k8s] Managed SecurityGroup for
LoadBalancer\\\",\\\"tags\\\":{\\\"Environment\\\":\\\"dev\\\",\\\"Team\\\":\\\"test\\\"},\\\"ingress\\\":[{\\\"ipProtocol\\\":\\\"tcp\\\",\\\"fromPort\\\":80,\\\"toPort\\\":80,\\\"ipRanges\\\":[{\\\"cidrIP\\\":\\\"0.0.0.0/0\\\"}]}]}}},\\\"AWS::ElasticLoadBalancingV2::Listener\\\":{\\\"80\\\":{\\\"spec\\\":{\\\"loadBalancerARN\\\":{\\\"$ref\\\":\\\"#/resources/AWS::ElasticLoadBalancingV2::LoadBalancer/LoadBalancer/status/loadBalancerARN\\\"},\\\"port\\\":80,\\\"protocol\\\":\\\"HTTP\\\",\\\"defaultActions\\\":[{\\\"type\\\":\\\"fixed-response\\\",\\\"fixedResponseConfig\\\":{\\\"contentType\\\":\\\"text/plain\\\",\\\"statusCode\\\":\\\"404\\\"}}]}}},\\\"AWS::ElasticLoadBalancingV2::ListenerRule\\\":{\\\"80:1\\\":{\\\"spec\\\":{\\\"listenerARN\\\":{\\\"$ref\\\":\\\"#/resources/AWS::ElasticLoadBalancingV2::Listener/80/status/listenerARN\\\"},\\\"priority\\\":1,\\\"actions\\\":[{\\\"type\\\":\\\"forward\\\",\\\"forwardConfig\\\":{\\\"targetGroups\\\":[{\\\"targetGroupARN\\\":{\\\"$ref\\\":\\\"#/resources/AWS::ElasticLoadBalancingV2::TargetGroup/echoserver/echoserver-echoserver:80/status/targetGroupARN\\\"}}]}}],\\\"conditions\\\":[{\\\"field\\\":\\\"host-header\\\",\\\"hostHeaderConfig\\\":{\\\"values\\\":[\\\"echoserver.example.com\\\"]}},{\\\"field\\\":\\\"path-pattern\\\",\\\"pathPatternConfig\\\":{\\\"values\\\":[\\\"/\\\"]}}]}}},\\\"AWS::ElasticLoadBalancingV2::LoadBalancer\\\":{\\\"LoadBalancer\\\":{\\\"spec\\\":{\\\"name\\\":\\\"k8s-echoserv-echoserv-d4d6bd65d0\\\",\\\"type\\\":\\\"application\\\",\\\"scheme\\\":\\\"internet-facing\\\",\\\"ipAddressType\\\":\\\"ipv4\\\",\\\"subnetMapping\\\":[{\\\"subnetID\\\":\\\"subnet-01b35707c23b0a43b\\\"},{\\\"subnetID\\\":\\\"subnet-0f7814a7ab4dfcc2c\\\"}],\\\"securityGroups\\\":[{\\\"$ref\\\":\\\"#/resources/AWS::EC2::SecurityGroup/ManagedLBSecurityGroup/status/groupID\\\"}],\\\"tags\\\":{\\\"Environment\\\":\\\"dev\\\",\\\"Team\\\":\\\"test\\\"}}}},\\\"AWS::ElasticLoadBalancingV2::TargetGroup\\\":{\\\"echoserver/echoserver-echoserver:80\\\":{\\\"spec\\\":{\\\"name\\\":\\\"k8s-echoserv-echoserv-d989093207\\\",\\\"targetType\\\":\\\"instance\\\",\\\"port\\\":1,\\\"protocol\\\":\\\"HTTP\\\",\\\"healthCheckConfig\\\":{\\\"port\\\":\\\"traffic-port\\\",\\\"protocol\\\":\\\"HTTP\\\",\\\"path\\\":\\\"/\\\",\\\"matcher\\\":{\\\"httpCode\\\":\\\"200\\\"},\\\"intervalSeconds\\\":15,\\\"timeoutSeconds\\\":5,\\\"healthyThresholdCount\\\":2,\\\"unhealthyThresholdCount\\\":2},\\\"tags\\\":{\\\"Environment\\\":\\\"dev\\\",\\\"Team\\\":\\\"test\\\"}}}},\\\"K8S::ElasticLoadBalancingV2::TargetGroupBinding\\\":{\\\"echoserver/echoserver-echoserver:80\\\":{\\\"spec\\\":{\\\"template\\\":{\\\"metadata\\\":{\\\"name\\\":\\\"k8s-echoserv-echoserv-d989093207\\\",\\\"namespace\\\":\\\"echoserver\\\",\\\"creationTimestamp\\\":null},\\\"spec\\\":{\\\"targetGroupARN\\\":{\\\"$ref\\\":\\\"#/resources/AWS::ElasticLoadBalancingV2::TargetGroup/echoserver/echoserver-echoserver:80/status/targetGroupARN\\\"},\\\"targetType\\\":\\\"instance\\\",\\\"serviceRef\\\":{\\\"name\\\":\\\"echoserver\\\",\\\"port\\\":80},\\\"networking\\\":{\\\"ingress\\\":[{\\\"from\\\":[{\\\"securityGroup\\\":{\\\"groupID\\\":{\\\"$ref\\\":\\\"#/resources/AWS::EC2::SecurityGroup/ManagedLBSecurityGroup/status/groupID\\\"}}}],\\\"ports\\\":[{\\\"protocol\\\":\\\"TCP\\\"}]}]}}}}}}}}\"} {\"level\":\"info\",\"ts\":1602803966.411922,\"logger\":\"controllers.ingress\",\"msg\":\"creating targetGroup\",\"stackID\":\"echoserver/echoserver\",\"resourceID\":\"echoserver/echoserver-echoserver:80\"} 
{\"level\":\"info\",\"ts\":1602803966.6606336,\"logger\":\"controllers.ingress\",\"msg\":\"created targetGroup\",\"stackID\":\"echoserver/echoserver\",\"resourceID\":\"echoserver/echoserver-echoserver:80\",\"arn\":\"arn:aws:elasticloadbalancing:us-west-2:019453415603:targetgroup/k8s-echoserv-echoserv-d989093207/63225ae3ead3deb6\"} {\"level\":\"info\",\"ts\":1602803966.798019,\"logger\":\"controllers.ingress\",\"msg\":\"creating loadBalancer\",\"stackID\":\"echoserver/echoserver\",\"resourceID\":\"LoadBalancer\"} {\"level\":\"info\",\"ts\":1602803967.5472538,\"logger\":\"controllers.ingress\",\"msg\":\"created loadBalancer\",\"stackID\":\"echoserver/echoserver\",\"resourceID\":\"LoadBalancer\",\"arn\":\"arn:aws:elasticloadbalancing:us-west-2:019453415603:loadbalancer/app/k8s-echoserv-echoserv-d4d6bd65d0/4b4ebe8d6e1ef0c1\"} {\"level\":\"info\",\"ts\":1602803967.5863476,\"logger\":\"controllers.ingress\",\"msg\":\"creating listener\",\"stackID\":\"echoserver/echoserver\",\"resourceID\":\"80\"} {\"level\":\"info\",\"ts\":1602803967.6436293,\"logger\":\"controllers.ingress\",\"msg\":\"created listener\",\"stackID\":\"echoserver/echoserver\",\"resourceID\":\"80\",\"arn\":\"arn:aws:elasticloadbalancing:us-west-2:019453415603:listener/app/k8s-echoserv-echoserv-d4d6bd65d0/4b4ebe8d6e1ef0c1/6e13477f9d840da0\"} {\"level\":\"info\",\"ts\":1602803967.6528971,\"logger\":\"controllers.ingress\",\"msg\":\"creating listener rule\",\"stackID\":\"echoserver/echoserver\",\"resourceID\":\"80:1\"} {\"level\":\"info\",\"ts\":1602803967.7160048,\"logger\":\"controllers.ingress\",\"msg\":\"created listener rule\",\"stackID\":\"echoserver/echoserver\",\"resourceID\":\"80:1\",\"arn\":\"arn:aws:elasticloadbalancing:us-west-2:019453415603:listener-rule/app/k8s-echoserv-echoserv-d4d6bd65d0/4b4ebe8d6e1ef0c1/6e13477f9d840da0/23ef859380e792e8\"} {\"level\":\"info\",\"ts\":1602803967.8484688,\"logger\":\"controllers.ingress\",\"msg\":\"successfully deployed model\",\"ingressGroup\":\"echoserver/echoserver\"} Check the events of the ingress to see what has occur. kubectl describe ing -n echoserver echoserver You should see similar to the following. Name: echoserver Namespace: echoserver Address: joshcalico-echoserver-echo-2ad7-1490890749.us-east-2.elb.amazonaws.com Default backend: default-http-backend:80 (10.2.1.28:8080) Rules: Host Path Backends ---- ---- -------- * / echoserver:80 () Annotations: Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 3m 3m 1 ingress-controller Normal CREATE Ingress echoserver/echoserver 3m 32s 3 ingress-controller Normal UPDATE Ingress echoserver/echoserver The address seen above is the ALB's DNS name. This will be referenced via records created by external-dns if you choose to set it up.","title":"Deploy ingress for echoserver"},{"location":"examples/echo_server/#verify-that-you-can-access-the-service","text":"Make a curl request to the echoserver service and verify that it returns a response payload. Use the address from the output of kubectl describe ing command above. curl You should get back a valid response.","title":"Verify that you can access the service"},{"location":"examples/echo_server/#optional-use-external-dns-to-create-a-dns-record","text":"Deploy external-dns to your cluster using these instructions - Setup external-dns Update your ingress resource and add spec.rules[0].host and set the value to your domain name. The example below uses echoserver.example.org . 
spec : rules : - host : echoserver.example.org http : paths : 1. external-dns will then create a DNS record for the host you specified. This assumes you have the hosted zone corresponding to the domain you are trying to create a record in. Annotate the ingress with the external-dns specific configuration annotations : kubernetes.io/ingress.class : alb alb.ingress.kubernetes.io/scheme : internet-facing # external-dns specific configuration for creating route53 record-set external-dns.alpha.kubernetes.io/hostname : my-app.test-dns.com # give your domain name here Verify the DNS has propagated dig echoserver.example.org ;; QUESTION SECTION: ;echoserver.example.org. IN A ;; ANSWER SECTION: echoserver.example.org. 60 IN A 13.59.147.105 echoserver.example.org. 60 IN A 18.221.65.39 echoserver.example.org. 60 IN A 52.15.186.25 Once it has, you can make a call to echoserver and it should return a response payload. curl echoserver.example.org CLIENT VALUES: client_address=10.0.50.185 command=GET real path=/ query=nil request_version=1.1 request_uri=http://echoserver.example.org:8080/ SERVER VALUES: server_version=nginx: 1.10.0 - lua: 10001 HEADERS RECEIVED: accept=*/* host=echoserver.example.org user-agent=curl/7.54.0 x-amzn-trace-id=Root=1-59c08da5-113347df69640735312371bd x-forwarded-for=67.173.237.250 x-forwarded-port=80 x-forwarded-proto=http BODY:","title":"(Optional) Use external-dns to create a DNS record"},{"location":"examples/echo_server/#kube2iam-setup","text":"Follow the steps below if you want to use kube2iam to provide the AWS credentials configure the proper policy The policy to be used can be fetched from https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.0/docs/install/iam_policy.json configure the proper role and create the trust relationship You have to find which role is associated with your K8S nodes. Once you have found it, take note of the full ARN: arn:aws:iam::XXXXXXXXXXXX:role/k8scluster-node create the role, called k8s-lb-controller, attach the above policy, and add a Trust Relationship like: { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"\", \"Effect\": \"Allow\", \"Principal\": { \"Service\": \"ec2.amazonaws.com\" }, \"Action\": \"sts:AssumeRole\" }, { \"Sid\": \"\", \"Effect\": \"Allow\", \"Principal\": { \"AWS\": \"arn:aws:iam::XXXXXXXXXXXX:role/k8scluster-node\" }, \"Action\": \"sts:AssumeRole\" } ] } The new role will have a similar ARN: arn:aws:iam::XXXXXXXXXXXX:role/k8s-lb-controller update the aws-load-balancer-controller deployment Add the annotation in the template's metadata section spec : replicas : 1 selector : matchLabels : app.kubernetes.io/component : controller app.kubernetes.io/name : aws-load-balancer-controller strategy : rollingUpdate : maxSurge : 1 maxUnavailable : 1 type : RollingUpdate template : metadata : annotations : iam.amazonaws.com/role : arn:aws:iam::XXXXXXXXXXXX:role/k8s-lb-controller","title":"Kube2iam setup"},{"location":"examples/grpc_server/","text":"walkthrough: grpcserver \u00b6 In this walkthrough, you'll Deploy a grpc service to an existing EKS cluster Send a test message to the hosted service over TLS Prerequisites \u00b6 The following resources are required prior to deployment: EKS cluster aws-load-balancer-controller external-dns See echo_server.md and external_dns.md for setup instructions for those resources. Create an ACM certificate \u00b6 NOTE: An ACM certificate is required for this demo as the application uses the grpc.secure_channel method.
If you already have an ACM certificate (including wildcard certificates) for the domain you would like to use in this example, you can skip this step. Request a certificate for a domain you own using the steps described in the official AWS ACM documentation . Once the status for the certificate is \"Issued\" continue to the next step. Deploy the grpcserver manifests \u00b6 Deploy all the manifests from GitHub. kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/examples/grpc/grpcserver-namespace.yaml kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/examples/grpc/grpcserver-service.yaml kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/examples/grpc/grpcserver-deployment.yaml Confirm that all resources were created. kubectl get -n grpcserver all You should see the pod, service, and deployment. NAME READY STATUS RESTARTS AGE pod/grpcserver-5455b7d4d-jshk5 1/1 Running 0 35m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/grpcserver ClusterIP None 50051/TCP 77m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/grpcserver 1/1 1 1 77m NAME DESIRED CURRENT READY AGE replicaset.apps/grpcserver-5455b7d4d 1 1 1 35m Customize the ingress for grpcserver \u00b6 Download the grpcserver ingress manifest. wget https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/examples/grpc/grpcserver-ingress.yaml Change the domain name from grpcserver.example.com to your desired domain. The example manifest assumes that you have tagged your subnets for the aws-load-balancer-controller. Otherwise add your subnets using the alb.ingress.kubernetes.io/subnets annotation. Deploy the ingress resource for grpcserver. kubectl apply -f grpcserver-ingress.yaml Wait a few minutes for the ALB to provision and for DNS to update. Check the aws-load-balancer-controller logs to ensure the ALB is created. Also ensure that external-dns creates a DNS record that points your domain to the ALB. kubectl logs -n kube-system --tail -1 -l app.kubernetes.io/name = aws-load-balancer-controller | grep 'grpcserver\\/grpcserver' kubectl logs -n kube-system --tail -1 -l app.kubernetes.io/name = external-dns | grep 'YOUR_DOMAIN_NAME' Next check that your ingress shows the correct ALB address and custom domain name. kubectl get ingress -n grpcserver grpcserver You should see similar to the following. NNAME CLASS HOSTS ADDRESS PORTS AGE grpcserver alb YOUR_DOMAIN_NAME ALB-DNS-NAME 80 90m Finally, test your secure gRPC service by running the greeter client, substituting YOUR_DOMAIN_NAME for the domain you used in the ingress manifest. docker run --rm -it --env BACKEND = YOUR_DOMAIN_NAME placeexchange/grpc-demo:latest python greeter_client.py You should see the following response. 
Greeter client received: Hello, you!","title":"gRPCServer"},{"location":"examples/grpc_server/#walkthrough-grpcserver","text":"In this walkthrough, you'll Deploy a grpc service to an existing EKS cluster Send a test message to the hosted service over TLS","title":"walkthrough: grpcserver"},{"location":"examples/grpc_server/#prerequsites","text":"The following resources are required prior to deployment: EKS cluster aws-load-balancer-controller external-dns See echo_server.md and external_dns.md for setup instructions for those resources.","title":"Prerequsites"},{"location":"examples/grpc_server/#create-an-acm-certificate","text":"NOTE: An ACM certificate is required for this demo as the application uses the grpc.secure_channel method. If you already have an ACM certificate (including wildcard certificates) for the domain you would like to use in this example, you can skip this step. Request a certificate for a domain you own using the steps described in the official AWS ACM documentation . Once the status for the certificate is \"Issued\" continue to the next step.","title":"Create an ACM certificate"},{"location":"examples/grpc_server/#deploy-the-grpcserver-manifests","text":"Deploy all the manifests from GitHub. kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/examples/grpc/grpcserver-namespace.yaml kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/examples/grpc/grpcserver-service.yaml kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/examples/grpc/grpcserver-deployment.yaml Confirm that all resources were created. kubectl get -n grpcserver all You should see the pod, service, and deployment. NAME READY STATUS RESTARTS AGE pod/grpcserver-5455b7d4d-jshk5 1/1 Running 0 35m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/grpcserver ClusterIP None 50051/TCP 77m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/grpcserver 1/1 1 1 77m NAME DESIRED CURRENT READY AGE replicaset.apps/grpcserver-5455b7d4d 1 1 1 35m","title":"Deploy the grpcserver manifests"},{"location":"examples/grpc_server/#customize-the-ingress-for-grpcserver","text":"Download the grpcserver ingress manifest. wget https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/examples/grpc/grpcserver-ingress.yaml Change the domain name from grpcserver.example.com to your desired domain. The example manifest assumes that you have tagged your subnets for the aws-load-balancer-controller. Otherwise add your subnets using the alb.ingress.kubernetes.io/subnets annotation. Deploy the ingress resource for grpcserver. kubectl apply -f grpcserver-ingress.yaml Wait a few minutes for the ALB to provision and for DNS to update. Check the aws-load-balancer-controller logs to ensure the ALB is created. Also ensure that external-dns creates a DNS record that points your domain to the ALB. kubectl logs -n kube-system --tail -1 -l app.kubernetes.io/name = aws-load-balancer-controller | grep 'grpcserver\\/grpcserver' kubectl logs -n kube-system --tail -1 -l app.kubernetes.io/name = external-dns | grep 'YOUR_DOMAIN_NAME' Next check that your ingress shows the correct ALB address and custom domain name. kubectl get ingress -n grpcserver grpcserver You should see similar to the following. 
NAME CLASS HOSTS ADDRESS PORTS AGE grpcserver alb YOUR_DOMAIN_NAME ALB-DNS-NAME 80 90m Finally, test your secure gRPC service by running the greeter client, substituting YOUR_DOMAIN_NAME for the domain you used in the ingress manifest. docker run --rm -it --env BACKEND = YOUR_DOMAIN_NAME placeexchange/grpc-demo:latest python greeter_client.py You should see the following response. Greeter client received: Hello, you!","title":"Customize the ingress for grpcserver"},{"location":"examples/secrets_access/","text":"RBAC configuration for secrets resources \u00b6 In this walkthrough, you will configure RBAC permissions for the controller to access a specific secrets resource in a particular namespace. Create Role \u00b6 Prepare the role manifest with the appropriate name, namespace, and secretName, for example: apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: example-role namespace: example-namespace rules: - apiGroups: - \"\" resourceNames: - example-secret resources: - secrets verbs: - get - list - watch Apply the role manifest kubectl apply -f role.yaml Create RoleBinding \u00b6 Prepare the rolebinding manifest with the appropriate name, namespace, and role reference. For example: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: example-rolebinding namespace: example-namespace roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: example-role subjects: - kind: ServiceAccount name: aws-load-balancer-controller namespace: kube-system Apply the rolebinding manifest kubectl apply -f rolebinding.yaml","title":"RBAC to access OIDC Secret"},{"location":"examples/secrets_access/#rbac-configuration-for-secrets-resources","text":"In this walkthrough, you will configure RBAC permissions for the controller to access a specific secrets resource in a particular namespace.","title":"RBAC configuration for secrets resources"},{"location":"examples/secrets_access/#create-role","text":"Prepare the role manifest with the appropriate name, namespace, and secretName, for example: apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: example-role namespace: example-namespace rules: - apiGroups: - \"\" resourceNames: - example-secret resources: - secrets verbs: - get - list - watch Apply the role manifest kubectl apply -f role.yaml","title":"Create Role"},{"location":"examples/secrets_access/#create-rolebinding","text":"Prepare the rolebinding manifest with the appropriate name, namespace, and role reference. For example: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: example-rolebinding namespace: example-namespace roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: example-role subjects: - kind: ServiceAccount name: aws-load-balancer-controller namespace: kube-system Apply the rolebinding manifest kubectl apply -f rolebinding.yaml","title":"Create RoleBinding"},{"location":"guide/ingress/annotations/","text":"Ingress annotations \u00b6 You can add annotations to kubernetes Ingress and Service objects to customize their behavior. Annotation keys and values can only be strings. Advanced format should be encoded as below: boolean: 'true' integer: '42' stringList: s1,s2,s3 stringMap: k1=v1,k2=v2 json: 'jsonContent' Annotations applied to a Service have higher priority than annotations applied to an Ingress. The Location column below indicates where each annotation can be applied. Annotations that configure LoadBalancer / Listener behaviors have different merge behavior when the IngressGroup feature is used.
MergeBehavior column below indicates how such annotation will be merged. Exclusive: such annotation should only be specified on a single Ingress within IngressGroup or specified with same value across all Ingresses within IngressGroup. Merge: such annotation can be specified on all Ingresses within IngressGroup, and will be merged together. Annotations \u00b6 Name Type Default Location MergeBehavior alb.ingress.kubernetes.io/load-balancer-name string N/A Ingress Exclusive alb.ingress.kubernetes.io/group.name string N/A Ingress N/A alb.ingress.kubernetes.io/group.order integer 0 Ingress N/A alb.ingress.kubernetes.io/tags stringMap N/A Ingress,Service Merge alb.ingress.kubernetes.io/ip-address-type ipv4 | dualstack ipv4 Ingress Exclusive alb.ingress.kubernetes.io/scheme internal | internet-facing internal Ingress Exclusive alb.ingress.kubernetes.io/subnets stringList N/A Ingress Exclusive alb.ingress.kubernetes.io/security-groups stringList N/A Ingress Exclusive alb.ingress.kubernetes.io/manage-backend-security-group-rules boolean N/A Ingress Exclusive alb.ingress.kubernetes.io/customer-owned-ipv4-pool string N/A Ingress Exclusive alb.ingress.kubernetes.io/load-balancer-attributes stringMap N/A Ingress Exclusive alb.ingress.kubernetes.io/wafv2-acl-arn string N/A Ingress Exclusive alb.ingress.kubernetes.io/waf-acl-id string N/A Ingress Exclusive alb.ingress.kubernetes.io/shield-advanced-protection boolean N/A Ingress Exclusive alb.ingress.kubernetes.io/listen-ports json '[{\"HTTP\": 80}]' | '[{\"HTTPS\": 443}]' Ingress Merge alb.ingress.kubernetes.io/ssl-redirect integer N/A Ingress Exclusive alb.ingress.kubernetes.io/inbound-cidrs stringList 0.0.0.0/0, ::/0 Ingress Exclusive alb.ingress.kubernetes.io/certificate-arn stringList N/A Ingress Merge alb.ingress.kubernetes.io/ssl-policy string ELBSecurityPolicy-2016-08 Ingress Exclusive alb.ingress.kubernetes.io/target-type instance | ip instance Ingress,Service N/A alb.ingress.kubernetes.io/backend-protocol HTTP | HTTPS HTTP Ingress,Service N/A alb.ingress.kubernetes.io/backend-protocol-version string HTTP1 Ingress,Service N/A alb.ingress.kubernetes.io/target-group-attributes stringMap N/A Ingress,Service N/A alb.ingress.kubernetes.io/healthcheck-port integer | traffic-port traffic-port Ingress,Service N/A alb.ingress.kubernetes.io/healthcheck-protocol HTTP | HTTPS HTTP Ingress,Service N/A alb.ingress.kubernetes.io/healthcheck-path string / | /AWS.ALB/healthcheck Ingress,Service N/A alb.ingress.kubernetes.io/healthcheck-interval-seconds integer '15' Ingress,Service N/A alb.ingress.kubernetes.io/healthcheck-timeout-seconds integer '5' Ingress,Service N/A alb.ingress.kubernetes.io/healthy-threshold-count integer '2' Ingress,Service N/A alb.ingress.kubernetes.io/unhealthy-threshold-count integer '2' Ingress,Service N/A alb.ingress.kubernetes.io/success-codes string '200' | '12' Ingress,Service N/A alb.ingress.kubernetes.io/auth-type none|oidc|cognito none Ingress,Service N/A alb.ingress.kubernetes.io/auth-idp-cognito json N/A Ingress,Service N/A alb.ingress.kubernetes.io/auth-idp-oidc json N/A Ingress,Service N/A alb.ingress.kubernetes.io/auth-on-unauthenticated-request authenticate|allow|deny authenticate Ingress,Service N/A alb.ingress.kubernetes.io/auth-scope string openid Ingress,Service N/A alb.ingress.kubernetes.io/auth-session-cookie string AWSELBAuthSessionCookie Ingress,Service N/A alb.ingress.kubernetes.io/auth-session-timeout integer '604800' Ingress,Service N/A alb.ingress.kubernetes.io/actions.${action-name} json N/A Ingress N/A 
alb.ingress.kubernetes.io/conditions.${conditions-name} json N/A Ingress N/A alb.ingress.kubernetes.io/target-node-labels stringMap N/A Ingress,Service N/A alb.ingress.kubernetes.io/mutual-authentication json '[{\"port\": 443, \"mode\": \"off\"}]' Ingress Exclusive IngressGroup \u00b6 The IngressGroup feature enables you to group multiple Ingress resources together. The controller will automatically merge Ingress rules for all Ingresses within an IngressGroup and support them with a single ALB. In addition, most annotations defined on an Ingress only apply to the paths defined by that Ingress. By default, Ingresses don't belong to any IngressGroup, and we treat each as an \"implicit IngressGroup\" consisting of the Ingress itself. alb.ingress.kubernetes.io/group.name specifies the group name that this Ingress belongs to. Ingresses with the same group.name annotation will form an \"explicit IngressGroup\". groupName must consist of lower case alphanumeric characters, - or . , and must start and end with an alphanumeric character. groupName must be no more than 63 characters. Security Risk The IngressGroup feature should only be used when all Kubernetes users with RBAC permission to create/modify Ingress resources are within the same trust boundary. If you make your Ingress belong to an \"explicit IngressGroup\" by adding the group.name annotation, other Kubernetes users may create/modify their Ingresses to belong to the same IngressGroup, and can thus add more rules or overwrite existing rules with higher priority to the ALB for your Ingress. We'll add more fine-grained access-control in future versions. Rename behavior The ALB for an IngressGroup is found by searching for the AWS tag ingress.k8s.aws/stack with the name of the IngressGroup as its value. For an implicit IngressGroup, the value is namespace/ingressname . When the groupName of an IngressGroup for an Ingress is changed, the Ingress will be moved to a new IngressGroup and be supported by the ALB for the new IngressGroup. If the ALB for the new IngressGroup doesn't exist, a new ALB will be created. If an IngressGroup no longer contains any Ingresses, the ALB for that IngressGroup will be deleted and any deletion protection of that ALB will be ignored. Example alb.ingress.kubernetes.io/group.name: my-team.awesome-group alb.ingress.kubernetes.io/group.order specifies the order across all Ingresses within the IngressGroup. You can explicitly denote the order using a number between -1000 and 1000. Rules with a lower order value are evaluated first. All Ingresses without an explicit order setting get the order value 0. Rules with the same order are sorted lexicographically by the Ingress\u2019s namespace/name. Example alb.ingress.kubernetes.io/group.order: '10' Traffic Listening \u00b6 Traffic Listening can be controlled with the following annotations: alb.ingress.kubernetes.io/listen-ports specifies the ports that the ALB listens on. Merge Behavior listen-ports is merged across all Ingresses in the IngressGroup. You can define different listen-ports per Ingress; Ingress rules will only impact the ports defined for that Ingress. If the same listen-port is defined by multiple Ingresses within the IngressGroup, Ingress rules will be merged with respect to their group order within the IngressGroup. Default defaults to '[{\"HTTP\": 80}]' or '[{\"HTTPS\": 443}]' depending on whether certificate-arn is specified. You may not have duplicate load balancer ports defined.
Example alb.ingress.kubernetes.io/listen-ports: '[{\"HTTP\": 80}, {\"HTTPS\": 443}, {\"HTTP\": 8080}, {\"HTTPS\": 8443}]' alb.ingress.kubernetes.io/ssl-redirect enables SSLRedirect and specifies the SSL port that redirects to. Merge Behavior ssl-redirect is exclusive across all Ingresses in the IngressGroup. Once defined on a single Ingress, it impacts every Ingress within the IngressGroup. Once SSLRedirect is enabled, every HTTP listener will be configured with a default action that redirects to HTTPS; other rules will be ignored. The SSL port that redirects to must exist on the LoadBalancer. See alb.ingress.kubernetes.io/listen-ports for the listen ports configuration. Example alb.ingress.kubernetes.io/ssl-redirect: '443' alb.ingress.kubernetes.io/ip-address-type specifies the IP address type of the ALB. Example alb.ingress.kubernetes.io/ip-address-type: ipv4 alb.ingress.kubernetes.io/customer-owned-ipv4-pool specifies the customer-owned IPv4 address pool for the ALB on Outpost. This annotation should be treated as immutable. To remove or change coIPv4Pool, you need to recreate the Ingress. Example alb.ingress.kubernetes.io/customer-owned-ipv4-pool: ipv4pool-coip-xxxxxxxx Traffic Routing \u00b6 Traffic Routing can be controlled with the following annotations: alb.ingress.kubernetes.io/load-balancer-name specifies the custom name to use for the load balancer. Names longer than 32 characters will be treated as an error. Merge Behavior name is exclusive across all Ingresses in an IngressGroup. Once defined on a single Ingress, it impacts every Ingress within the IngressGroup. Example alb.ingress.kubernetes.io/load-balancer-name: custom-name alb.ingress.kubernetes.io/target-type specifies how to route traffic to pods. You can choose between instance and ip : instance mode will route traffic to all ec2 instances within the cluster on the NodePort opened for your service. The service must be of type \"NodePort\" or \"LoadBalancer\" to use instance mode ip mode will route traffic directly to the pod IP. The network plugin must use secondary IP addresses on the ENI for pod IPs to use ip mode, e.g. amazon-vpc-cni-k8s ip mode is required for sticky sessions to work with Application Load Balancers. The Service type does not matter when using ip mode. Example alb.ingress.kubernetes.io/target-type: instance alb.ingress.kubernetes.io/target-node-labels specifies which nodes to include in the target group registration for instance target type. Example alb.ingress.kubernetes.io/target-node-labels: label1=value1, label2=value2 alb.ingress.kubernetes.io/backend-protocol specifies the protocol used when routing traffic to pods. Example alb.ingress.kubernetes.io/backend-protocol: HTTPS alb.ingress.kubernetes.io/backend-protocol-version specifies the application protocol used to route traffic to pods. Only valid when HTTP or HTTPS is used as the backend protocol. Example HTTP2 alb.ingress.kubernetes.io/backend-protocol-version: HTTP2 GRPC alb.ingress.kubernetes.io/backend-protocol-version: GRPC alb.ingress.kubernetes.io/subnets specifies the Availability Zones that the ALB will route traffic to. See Load Balancer subnets for more details. You must specify at least two subnets in different AZs unless utilizing the outpost locale, in which case a single subnet suffices. Either subnetID or subnetName(Name tag on subnets) can be used. You must not mix subnets from different locales: availability-zone, local-zone, wavelength-zone, outpost. Tip You can enable subnet auto discovery to avoid specifying this annotation on every Ingress.
See Subnet Discovery for instructions. Example alb.ingress.kubernetes.io/subnets: subnet-xxxx, mySubnet alb.ingress.kubernetes.io/actions.${action-name} Provides a method for configuring custom actions on a listener, such as Redirect Actions. The action-name in the annotation must match the serviceName in the Ingress rules, and servicePort must be use-annotation . use ARN in forward Action An ARN can be used in a forward action (both simplified and advanced schema); it must be a targetGroup created outside of k8s, typically a targetGroup for a legacy application. use ServiceName/ServicePort in forward Action ServiceName/ServicePort can be used in a forward action (advanced schema only). Auth-related annotations on the Service object will only be respected if a single TargetGroup is used. Example response-503: return fixed 503 response redirect-to-eks: redirect to an external url forward-single-tg: forward to a single targetGroup [ simplified schema ] forward-multiple-tg: forward to multiple targetGroups with different weights and stickiness config [ advanced schema ] apiVersion : networking.k8s.io/v1 kind : Ingress metadata : namespace : default name : ingress annotations : alb.ingress.kubernetes.io/scheme : internet-facing alb.ingress.kubernetes.io/actions.response-503 : > {\"type\":\"fixed-response\",\"fixedResponseConfig\":{\"contentType\":\"text/plain\",\"statusCode\":\"503\",\"messageBody\":\"503 error text\"}} alb.ingress.kubernetes.io/actions.redirect-to-eks : > {\"type\":\"redirect\",\"redirectConfig\":{\"host\":\"aws.amazon.com\",\"path\":\"/eks/\",\"port\":\"443\",\"protocol\":\"HTTPS\",\"query\":\"k=v\",\"statusCode\":\"HTTP_302\"}} alb.ingress.kubernetes.io/actions.forward-single-tg : > {\"type\":\"forward\",\"targetGroupARN\": \"arn-of-your-target-group\"} alb.ingress.kubernetes.io/actions.forward-multiple-tg : > {\"type\":\"forward\",\"forwardConfig\":{\"targetGroups\":[{\"serviceName\":\"service-1\",\"servicePort\":\"http\",\"weight\":20},{\"serviceName\":\"service-2\",\"servicePort\":80,\"weight\":20},{\"targetGroupARN\":\"arn-of-your-non-k8s-target-group\",\"weight\":60}],\"targetGroupStickinessConfig\":{\"enabled\":true,\"durationSeconds\":200}}} spec : ingressClassName : alb rules : - http : paths : - path : /503 pathType : Exact backend : service : name : response-503 port : name : use-annotation - path : /eks pathType : Exact backend : service : name : redirect-to-eks port : name : use-annotation - path : /path1 pathType : Exact backend : service : name : forward-single-tg port : name : use-annotation - path : /path2 pathType : Exact backend : service : name : forward-multiple-tg port : name : use-annotation alb.ingress.kubernetes.io/conditions.${conditions-name} Provides a method for specifying routing conditions in addition to the original host/path conditions on the Ingress spec . The conditions-name in the annotation must match the serviceName in the Ingress rules. It can be either a real serviceName or an annotation-based action name when servicePort is use-annotation . limitations General ALB limitations apply: Each rule can optionally include up to one of each of the following conditions: host-header, http-request-method, path-pattern, and source-ip. Each rule can also optionally include one or more of each of the following conditions: http-header and query-string. You can specify up to three match evaluations per condition. You can specify up to five match evaluations per rule. Refer to the ALB documentation for more details.
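As a minimal standalone sketch before the full multi-rule example below (the action name blue-green , service blue-service , and header X-Canary are hypothetical), one action can be paired with one extra condition like so:

alb.ingress.kubernetes.io/actions.blue-green: >
  {"type":"forward","forwardConfig":{"targetGroups":[{"serviceName":"blue-service","servicePort":80}]}}
alb.ingress.kubernetes.io/conditions.blue-green: >
  [{"field":"http-header","httpHeaderConfig":{"httpHeaderName":"X-Canary","values":["true"]}}]

An Ingress rule whose backend service name is blue-green with servicePort use-annotation would then only forward requests that also carry the X-Canary: true header.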
Example rule-path1: Host is www.example.com OR anno.example.com Path is /path1 rule-path2: Host is www.example.com Path is /path2 OR /anno/path2 rule-path3: Host is www.example.com Path is /path3 Http header HeaderName is HeaderValue1 OR HeaderValue2 rule-path4: Host is www.example.com Path is /path4 Http request method is GET OR HEAD rule-path5: Host is www.example.com Path is /path5 Query string is paramA:valueA1 OR paramA:valueA2 rule-path6: Host is www.example.com Path is /path6 Source IP is 192.168.0.0/16 OR 172.16.0.0/16 rule-path7: Host is www.example.com Path is /path7 Http header HeaderName is HeaderValue Query string is paramA:valueA Query string is paramB:valueB apiVersion : networking.k8s.io/v1 kind : Ingress metadata : namespace : default name : ingress annotations : alb.ingress.kubernetes.io/scheme : internet-facing alb.ingress.kubernetes.io/actions.rule-path1 : > {\"type\":\"fixed-response\",\"fixedResponseConfig\":{\"contentType\":\"text/plain\",\"statusCode\":\"200\",\"messageBody\":\"Host is www.example.com OR anno.example.com\"}} alb.ingress.kubernetes.io/conditions.rule-path1 : > [{\"field\":\"host-header\",\"hostHeaderConfig\":{\"values\":[\"anno.example.com\"]}}] alb.ingress.kubernetes.io/actions.rule-path2 : > {\"type\":\"fixed-response\",\"fixedResponseConfig\":{\"contentType\":\"text/plain\",\"statusCode\":\"200\",\"messageBody\":\"Path is /path2 OR /anno/path2\"}} alb.ingress.kubernetes.io/conditions.rule-path2 : > [{\"field\":\"path-pattern\",\"pathPatternConfig\":{\"values\":[\"/anno/path2\"]}}] alb.ingress.kubernetes.io/actions.rule-path3 : > {\"type\":\"fixed-response\",\"fixedResponseConfig\":{\"contentType\":\"text/plain\",\"statusCode\":\"200\",\"messageBody\":\"Http header HeaderName is HeaderValue1 OR HeaderValue2\"}} alb.ingress.kubernetes.io/conditions.rule-path3 : > [{\"field\":\"http-header\",\"httpHeaderConfig\":{\"httpHeaderName\": \"HeaderName\", \"values\":[\"HeaderValue1\", \"HeaderValue2\"]}}] alb.ingress.kubernetes.io/actions.rule-path4 : > {\"type\":\"fixed-response\",\"fixedResponseConfig\":{\"contentType\":\"text/plain\",\"statusCode\":\"200\",\"messageBody\":\"Http request method is GET OR HEAD\"}} alb.ingress.kubernetes.io/conditions.rule-path4 : > [{\"field\":\"http-request-method\",\"httpRequestMethodConfig\":{\"Values\":[\"GET\", \"HEAD\"]}}] alb.ingress.kubernetes.io/actions.rule-path5 : > {\"type\":\"fixed-response\",\"fixedResponseConfig\":{\"contentType\":\"text/plain\",\"statusCode\":\"200\",\"messageBody\":\"Query string is paramA:valueA1 OR paramA:valueA2\"}} alb.ingress.kubernetes.io/conditions.rule-path5 : > [{\"field\":\"query-string\",\"queryStringConfig\":{\"values\":[{\"key\":\"paramA\",\"value\":\"valueA1\"},{\"key\":\"paramA\",\"value\":\"valueA2\"}]}}] alb.ingress.kubernetes.io/actions.rule-path6 : > {\"type\":\"fixed-response\",\"fixedResponseConfig\":{\"contentType\":\"text/plain\",\"statusCode\":\"200\",\"messageBody\":\"Source IP is 192.168.0.0/16 OR 172.16.0.0/16\"}} alb.ingress.kubernetes.io/conditions.rule-path6 : > [{\"field\":\"source-ip\",\"sourceIpConfig\":{\"values\":[\"192.168.0.0/16\", \"172.16.0.0/16\"]}}] alb.ingress.kubernetes.io/actions.rule-path7 : > {\"type\":\"fixed-response\",\"fixedResponseConfig\":{\"contentType\":\"text/plain\",\"statusCode\":\"200\",\"messageBody\":\"multiple conditions apply\"}} alb.ingress.kubernetes.io/conditions.rule-path7 : > [{\"field\":\"http-header\",\"httpHeaderConfig\":{\"httpHeaderName\": \"HeaderName\",
\"values\":[\"HeaderValue\"]}},{\"field\":\"query-string\",\"queryStringConfig\":{\"values\":[{\"key\":\"paramA\",\"value\":\"valueA\"}]}},{\"field\":\"query-string\",\"queryStringConfig\":{\"values\":[{\"key\":\"paramB\",\"value\":\"valueB\"}]}}] spec : ingressClassName : alb rules : - host : www.example.com http : paths : - path : /path1 pathType : Exact backend : service : name : rule-path1 port : name : use-annotation - path : /path2 pathType : Exact backend : service : name : rule-path2 port : name : use-annotation - path : /path3 pathType : Exact backend : service : name : rule-path3 port : name : use-annotation - path : /path4 pathType : Exact backend : service : name : rule-path4 port : name : use-annotation - path : /path5 pathType : Exact backend : service : name : rule-path5 port : name : use-annotation - path : /path6 pathType : Exact backend : service : name : rule-path6 port : name : use-annotation - path : /path7 pathType : Exact backend : service : name : rule-path7 port : name : use-annotation Note If you are using alb.ingress.kubernetes.io/target-group-attributes with stickiness.enabled=true , you should add TargetGroupStickinessConfig under alb.ingress.kubernetes.io/actions.weighted-routing Example apiVersion : networking.k8s.io/v1 kind : Ingress metadata : namespace : default name : ingress annotations : alb.ingress.kubernetes.io/scheme : internet-facing alb.ingress.kubernetes.io/target-type : ip alb.ingress.kubernetes.io/target-group-attributes : stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=60 alb.ingress.kubernetes.io/actions.weighted-routing : | { \"type\" : \"forward\" , \"forwardConfig\" :{ \"targetGroups\" :[ { \"serviceName\" : \"service-1\" , \"servicePort\" : \"80\" , \"weight\" : 50 }, { \"serviceName\" : \"service-2\" , \"servicePort\" : \"80\" , \"weight\" : 50 } ], \"TargetGroupStickinessConfig\" : { \"Enabled\" : true , \"DurationSeconds\" : 120 } } } spec : ingressClassName : alb rules : - host : www.example.com http : paths : - path : / pathType : Prefix backend : service : name : weighted-routing port : name : use-annotation Access control \u00b6 Access control for LoadBalancer can be controlled with following annotations: alb.ingress.kubernetes.io/scheme specifies whether your LoadBalancer will be internet facing. See Load balancer scheme in the AWS documentation for more details. Example alb.ingress.kubernetes.io/scheme: internal alb.ingress.kubernetes.io/inbound-cidrs specifies the CIDRs that are allowed to access LoadBalancer. Merge Behavior inbound-cidrs is merged across all Ingresses in IngressGroup, but is exclusive per listen-port. the inbound-cidrs will only impact the ports defined for that Ingress. if same listen-port is defined by multiple Ingress within IngressGroup, inbound-cidrs should only be defined on one of the Ingress. Default 0.0.0.0/0 will be used if the IPAddressType is \"ipv4\" 0.0.0.0/0 and ::/0 will be used if the IPAddressType is \"dualstack\" this annotation will be ignored if alb.ingress.kubernetes.io/security-groups is specified. Example alb.ingress.kubernetes.io/inbound-cidrs: 10.0.0.0/24 alb.ingress.kubernetes.io/security-groups specifies the securityGroups you want to attach to LoadBalancer. When this annotation is not present, the controller will automatically create one security group, the security group will be attached to the LoadBalancer and allow access from inbound-cidrs to the listen-ports . Also, the securityGroups for Node/Pod will be modified to allow inbound traffic from this securityGroup. 
If you specify this annotation, you need to configure the security groups on your Node/Pod to allow inbound traffic from the load balancer. You could also set manage-backend-security-group-rules if you want the controller to manage the access rules. Both the name and ID of securityGroups are supported. Name matches a Name tag, not the groupName attribute. Example alb.ingress.kubernetes.io/security-groups: sg-xxxx, nameOfSg1, nameOfSg2 alb.ingress.kubernetes.io/manage-backend-security-group-rules specifies whether you want the controller to configure security group rules on the Node/Pod for traffic access when you specify security-groups . This annotation applies only when you specify the security groups via the security-groups annotation. If set to true, the controller attaches an additional shared backend security group to your load balancer. This backend security group is used in the Node/Pod security group rules. Example alb.ingress.kubernetes.io/manage-backend-security-group-rules: \"true\" Authentication \u00b6 ALB supports authentication with Cognito or OIDC. See Authenticate Users Using an Application Load Balancer for more details. HTTPS only Authentication is only supported for HTTPS listeners. See TLS for configuring HTTPS listeners. alb.ingress.kubernetes.io/auth-type specifies the authentication type on targets. Example alb.ingress.kubernetes.io/auth-type: cognito alb.ingress.kubernetes.io/auth-idp-cognito specifies the cognito idp configuration. If you are using an Amazon Cognito Domain, the userPoolDomain should be set to the domain prefix (my-domain) instead of the full domain (https://my-domain.auth.us-west-2.amazoncognito.com) Example alb.ingress.kubernetes.io/auth-idp-cognito: '{\"userPoolARN\":\"arn:aws:cognito-idp:us-west-2:xxx:userpool/xxx\",\"userPoolClientID\":\"my-clientID\",\"userPoolDomain\":\"my-domain\"}' alb.ingress.kubernetes.io/auth-idp-oidc specifies the oidc idp configuration. You need to create a secret within the same namespace as the Ingress to hold your OIDC clientID and clientSecret. The format of the secret is as below: apiVersion : v1 kind : Secret metadata : namespace : testcase name : my-k8s-secret data : clientID : base64 of your plain text clientId clientSecret : base64 of your plain text clientSecret Example alb.ingress.kubernetes.io/auth-idp-oidc: '{\"issuer\":\"https://example.com\",\"authorizationEndpoint\":\"https://authorization.example.com\",\"tokenEndpoint\":\"https://token.example.com\",\"userInfoEndpoint\":\"https://userinfo.example.com\",\"secretName\":\"my-k8s-secret\"}' alb.ingress.kubernetes.io/auth-on-unauthenticated-request specifies the behavior if the user is not authenticated. options: authenticate : try to authenticate with the configured IDP. deny : return an HTTP 401 Unauthorized error. allow : allow the request to be forwarded to the target. Example alb.ingress.kubernetes.io/auth-on-unauthenticated-request: authenticate alb.ingress.kubernetes.io/auth-scope specifies the set of user claims to be requested from the IDP (cognito or oidc), in a space-separated list.
options: phone email profile openid aws.cognito.signin.user.admin Example alb.ingress.kubernetes.io/auth-scope: 'email openid' alb.ingress.kubernetes.io/auth-session-cookie specifies the name of the cookie used to maintain session information. Example alb.ingress.kubernetes.io/auth-session-cookie: custom-cookie alb.ingress.kubernetes.io/auth-session-timeout specifies the maximum duration of the authentication session, in seconds. Example alb.ingress.kubernetes.io/auth-session-timeout: '86400' Health Check \u00b6 Health checks on target groups can be controlled with the following annotations: alb.ingress.kubernetes.io/healthcheck-protocol specifies the protocol used when performing health checks on targets. Example alb.ingress.kubernetes.io/healthcheck-protocol: HTTPS alb.ingress.kubernetes.io/healthcheck-port specifies the port used when performing health checks on targets. When using target-type: instance with a service of type \"NodePort\", the healthcheck port can be set to traffic-port to automatically point to the correct port. Example set the healthcheck port to the traffic port alb.ingress.kubernetes.io/healthcheck-port: traffic-port set the healthcheck port to the NodePort (when target-type=instance) or TargetPort (when target-type=ip) of a named port alb.ingress.kubernetes.io/healthcheck-port: my-port set the healthcheck port to 80/tcp alb.ingress.kubernetes.io/healthcheck-port: '80' alb.ingress.kubernetes.io/healthcheck-path specifies the HTTP path when performing health checks on targets. Example HTTP alb.ingress.kubernetes.io/healthcheck-path: /ping GRPC alb.ingress.kubernetes.io/healthcheck-path: /package.service/method alb.ingress.kubernetes.io/healthcheck-interval-seconds specifies the interval (in seconds) between health checks of an individual target. Example alb.ingress.kubernetes.io/healthcheck-interval-seconds: '10' alb.ingress.kubernetes.io/healthcheck-timeout-seconds specifies the timeout (in seconds) during which no response from a target means a failed health check. Example alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '8' alb.ingress.kubernetes.io/success-codes specifies the HTTP or gRPC status codes that should be expected when doing health checks against the specified health check path. Example use single value alb.ingress.kubernetes.io/success-codes: '200' use multiple values alb.ingress.kubernetes.io/success-codes: 200,201 use range of values alb.ingress.kubernetes.io/success-codes: 200-300 use gRPC single value alb.ingress.kubernetes.io/success-codes: '0' use gRPC multiple values alb.ingress.kubernetes.io/success-codes: 0,1 use gRPC range of values alb.ingress.kubernetes.io/success-codes: 0-5 alb.ingress.kubernetes.io/healthy-threshold-count specifies the consecutive health check successes required before considering an unhealthy target healthy. Example alb.ingress.kubernetes.io/healthy-threshold-count: '2' alb.ingress.kubernetes.io/unhealthy-threshold-count specifies the consecutive health check failures required before considering a target unhealthy. Example alb.ingress.kubernetes.io/unhealthy-threshold-count: '2' TLS \u00b6 TLS support can be controlled with the following annotations: alb.ingress.kubernetes.io/certificate-arn specifies the ARN of one or more certificates managed by AWS Certificate Manager. The first certificate in the list will be added as the default certificate. The remaining certificates will be added to the optional certificate list. See SSL Certificates for more details.
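For instance, a minimal HTTPS Ingress wiring one certificate to the 443 listener could look as follows (the ARN and service name are placeholders, not from this page):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: default
  name: secure-ingress
  annotations:
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-west-2:xxxxx:certificate/xxxxxxx
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80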
Certificate Discovery TLS certificates for ALB Listeners can be automatically discovered with hostnames from Ingress resources. See Certificate Discovery for instructions. Example single certificate alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-west-2:xxxxx:certificate/xxxxxxx multiple certificates alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-west-2:xxxxx:certificate/cert1,arn:aws:acm:us-west-2:xxxxx:certificate/cert2,arn:aws:acm:us-west-2:xxxxx:certificate/cert3 alb.ingress.kubernetes.io/ssl-policy specifies the Security Policy that should be assigned to the ALB, allowing you to control the protocol and ciphers. Example alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-1-2017-01 alb.ingress.kubernetes.io/mutual-authentication specifies the mutual authentication configuration that should be assigned to the Application Load Balancer secure listener ports. See Mutual authentication with TLS in the AWS documentation for more details. Configuration Options port: listen port Must be an HTTPS port specified by listen-ports . mode: \"off\" (default) | \"passthrough\" | \"verify\" verify mode requires an existing trust store resource. See Create a trust store in the AWS documentation for more details. trustStore: ARN (arn:aws:elasticloadbalancing:trustStoreArn) | Name (my-trust-store) Both the ARN and Name of the trustStore are supported values. trustStore is required when mode is verify . ignoreClientCertificateExpiry : true | false (default) Example listen-ports specifies four HTTPS ports: 80, 443, 8080, 8443 listener HTTPS:80 will be set to passthrough mode listener HTTPS:443 will be set to verify mode, associated with the trust store arn arn:aws:elasticloadbalancing:trustStoreArn and have ignoreClientCertificateExpiry set to true listeners HTTPS:8080 and HTTPS:8443 remain in the default mode off . alb.ingress.kubernetes.io/listen-ports: '[{\"HTTPS\": 80}, {\"HTTPS\": 443}, {\"HTTPS\": 8080}, {\"HTTPS\": 8443}]' alb.ingress.kubernetes.io/mutual-authentication: '[{\"port\": 80, \"mode\": \"passthrough\"}, {\"port\": 443, \"mode\": \"verify\", \"trustStore\": \"arn:aws:elasticloadbalancing:trustStoreArn\", \"ignoreClientCertificateExpiry\" : true}]' Note To avoid conflict errors in an IngressGroup, this annotation should only be specified on a single Ingress within the IngressGroup or specified with the same value across all Ingresses within the IngressGroup. Trust stores limit per Application Load Balancer A maximum of two different trust stores can be associated among listeners on the same ingress. See Quotas for your Application Load Balancers in the AWS documentation for more details. Custom attributes \u00b6 Custom attributes to LoadBalancers and TargetGroups can be controlled with the following annotations: alb.ingress.kubernetes.io/load-balancer-attributes specifies Load Balancer Attributes that should be applied to the ALB. Only attributes defined in the annotation will be updated. To unset any AWS defaults (e.g. disabling access logs after having them enabled once), the values need to be explicitly set to the original values ( access_logs.s3.enabled=false ); omitting them is not sufficient. If deletion_protection.enabled=true is set in the annotation, the controller will not be able to delete the ALB during reconciliation. Once the attribute gets edited to deletion_protection.enabled=false during reconciliation, the deployer will force delete the resource. Please note, if the deletion protection is not enabled via the annotation (e.g.
via AWS console), the controller still deletes the underlying resource. Example enable access log to s3 alb.ingress.kubernetes.io/load-balancer-attributes: access_logs.s3.enabled=true,access_logs.s3.bucket=my-access-log-bucket,access_logs.s3.prefix=my-app enable deletion protection alb.ingress.kubernetes.io/load-balancer-attributes: deletion_protection.enabled=true enable invalid header fields removal alb.ingress.kubernetes.io/load-balancer-attributes: routing.http.drop_invalid_header_fields.enabled=true enable http2 support alb.ingress.kubernetes.io/load-balancer-attributes: routing.http2.enabled=true set idle_timeout delay to 600 seconds alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=600 enable connection logs alb.ingress.kubernetes.io/load-balancer-attributes: connection_logs.s3.enabled=true,connection_logs.s3.bucket=my-connection-log-bucket,connection_logs.s3.prefix=my-app alb.ingress.kubernetes.io/target-group-attributes specifies Target Group Attributes which should be applied to Target Groups. Example set the slow start duration to 30 seconds (available range is 30-900 seconds) alb.ingress.kubernetes.io/target-group-attributes: slow_start.duration_seconds=30 set the deregistration delay to 30 seconds (available range is 0-3600 seconds) alb.ingress.kubernetes.io/target-group-attributes: deregistration_delay.timeout_seconds=30 enable sticky sessions (requires alb.ingress.kubernetes.io/target-type to be set to ip ) alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=60 alb.ingress.kubernetes.io/target-type: ip set the load balancing algorithm to least outstanding requests alb.ingress.kubernetes.io/target-group-attributes: load_balancing.algorithm.type=least_outstanding_requests enable Automated Target Weights (ATW) on HTTP/HTTPS target groups to increase application availability. Set your load balancing algorithm to weighted random and turn on anomaly mitigation (recommended) alb.ingress.kubernetes.io/target-group-attributes: load_balancing.algorithm.type=weighted_random,load_balancing.algorithm.anomaly_mitigation=on Resource Tags \u00b6 The AWS Load Balancer Controller automatically applies the following tags to the AWS resources (ALB/TargetGroups/SecurityGroups/Listener/ListenerRule) it creates: elbv2.k8s.aws/cluster: ${clusterName} ingress.k8s.aws/stack: ${stackID} ingress.k8s.aws/resource: ${resourceID} In addition, you can use annotations to specify additional tags. alb.ingress.kubernetes.io/tags specifies additional tags that will be applied to the AWS resources created. In the case of target groups, the controller will merge the tags from the ingress and the backend service, giving precedence to the values specified on the service when there is a conflict. Example alb.ingress.kubernetes.io/tags: Environment=dev,Team=test Addons \u00b6 Note If waf-acl-arn is specified via the ingress annotations, the controller will make sure the waf-acl is associated with the ALB provisioned for the ingress. If there is no such annotation, the controller will make sure no waf-acl is associated, so it may remove an existing waf-acl on the provisioned ALB. If users do not want the controller to manage the waf-acl on the ALBs, they can disable the feature by setting the controller command line flags --enable-waf=false or --enable-wafv2=false alb.ingress.kubernetes.io/waf-acl-id specifies the identifier for the Amazon WAF web ACL. Only Regional WAF is supported.
Example alb.ingress.kubernetes.io/waf-acl-id: 499e8b99-6671-4614-a86d-adb1810b7fbe alb.ingress.kubernetes.io/wafv2-acl-arn specifies the ARN for the Amazon WAFv2 web ACL. Only Regional WAFv2 is supported. To get the WAFv2 Web ACL ARN from the Console, click the gear icon in the upper right and enable the ARN column. Example alb.ingress.kubernetes.io/wafv2-acl-arn: arn:aws:wafv2:us-west-2:xxxxx:regional/webacl/xxxxxxx/3ab78708-85b0-49d3-b4e1-7a9615a6613b alb.ingress.kubernetes.io/shield-advanced-protection turns on/off the AWS Shield Advanced protection for the load balancer. Example alb.ingress.kubernetes.io/shield-advanced-protection: 'true'
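Combining the addon annotations, a sketch of an Ingress protected by both a WAFv2 web ACL and Shield Advanced might look like this (the ACL ARN reuses the placeholder above, and the service name is hypothetical):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: default
  name: protected-ingress
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/wafv2-acl-arn: arn:aws:wafv2:us-west-2:xxxxx:regional/webacl/xxxxxxx/3ab78708-85b0-49d3-b4e1-7a9615a6613b
    alb.ingress.kubernetes.io/shield-advanced-protection: 'true'
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80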
If users do not want the controller to manage the waf-acl on the ALBs, they can disable the feature by setting controller command line flags --enable-waf=false or --enable-wafv2=false alb.ingress.kubernetes.io/waf-acl-id specifies the identifier for the Amazon WAF web ACL. Only Regional WAF is supported. Example alb.ingress.kubernetes.io/waf-acl-id: 499e8b99-6671-4614-a86d-adb1810b7fbe alb.ingress.kubernetes.io/wafv2-acl-arn specifies ARN for the Amazon WAFv2 web ACL. Only Regional WAFv2 is supported. To get the WAFv2 Web ACL ARN from the Console, click the gear icon in the upper right and enable the ARN column. Example alb.ingress.kubernetes.io/wafv2-acl-arn: arn:aws:wafv2:us-west-2:xxxxx:regional/webacl/xxxxxxx/3ab78708-85b0-49d3-b4e1-7a9615a6613b alb.ingress.kubernetes.io/shield-advanced-protection turns on / off the AWS Shield Advanced protection for the load balancer. Example alb.ingress.kubernetes.io/shield-advanced-protection: 'true'","title":"Addons"},{"location":"guide/ingress/cert_discovery/","text":"Certificate Discovery \u00b6 TLS certificates for ALB Listeners can be automatically discovered with hostnames from Ingress resources if the alb.ingress.kubernetes.io/certificate-arn annotation is not specified. The controller will attempt to discover TLS certificates from the tls field in Ingress and host field in Ingress rules. You need to explicitly specify to use HTTPS listener with listen-ports annotation. Discover via Ingress tls \u00b6 Example attaches certs for www.example.com to the ALB apiVersion : networking.k8s.io/v1 kind : Ingress metadata : namespace : default name : ingress annotations : alb.ingress.kubernetes.io/listen-ports : '[{\"HTTPS\":443}]' spec : ingressClassName : alb tls : - hosts : - www.example.com rules : - http : paths : - path : /users pathType : Prefix backend : service : name : user-service port : number : 80 Discover via Ingress rule host. \u00b6 Example attaches a cert for dev.example.com or *.example.com to the ALB apiVersion : networking.k8s.io/v1 kind : Ingress metadata : namespace : default name : ingress annotations : alb.ingress.kubernetes.io/listen-ports : '[{\"HTTPS\":443}]' spec : ingressClassName : alb rules : - host : dev.example.com http : paths : - path : /users pathType : Prefix backend : service : name : user-service port : number : 80","title":"Certificate Discovery"},{"location":"guide/ingress/cert_discovery/#certificate-discovery","text":"TLS certificates for ALB Listeners can be automatically discovered with hostnames from Ingress resources if the alb.ingress.kubernetes.io/certificate-arn annotation is not specified. The controller will attempt to discover TLS certificates from the tls field in Ingress and host field in Ingress rules. 
You need to explicitly specify to use HTTPS listener with listen-ports annotation.","title":"Certificate Discovery"},{"location":"guide/ingress/cert_discovery/#discover-via-ingress-tls","text":"Example attaches certs for www.example.com to the ALB apiVersion : networking.k8s.io/v1 kind : Ingress metadata : namespace : default name : ingress annotations : alb.ingress.kubernetes.io/listen-ports : '[{\"HTTPS\":443}]' spec : ingressClassName : alb tls : - hosts : - www.example.com rules : - http : paths : - path : /users pathType : Prefix backend : service : name : user-service port : number : 80","title":"Discover via Ingress tls"},{"location":"guide/ingress/cert_discovery/#discover-via-ingress-rule-host","text":"Example attaches a cert for dev.example.com or *.example.com to the ALB apiVersion : networking.k8s.io/v1 kind : Ingress metadata : namespace : default name : ingress annotations : alb.ingress.kubernetes.io/listen-ports : '[{\"HTTPS\":443}]' spec : ingressClassName : alb rules : - host : dev.example.com http : paths : - path : /users pathType : Prefix backend : service : name : user-service port : number : 80","title":"Discover via Ingress rule host."},{"location":"guide/ingress/ingress_class/","text":"IngressClass \u00b6 Ingresses can be implemented by different controllers, often with different configuration. Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configuration including the name of the controller that should implement the class. IngressClass resources contain an optional parameters field. This can be used to reference additional implementation-specific configuration for this class. For the AWS Load Balancer controller, the implementation-specific configuration is IngressClassParams in the elbv2.k8s.aws API group. Example specify controller as ingress.k8s.aws/alb to denote Ingresses should be managed by AWS Load Balancer Controller. apiVersion: networking.k8s.io/v1 kind: IngressClass metadata: name: awesome-class spec: controller: ingress.k8s.aws/alb specify additional configurations by referencing an IngressClassParams resource. apiVersion: networking.k8s.io/v1 kind: IngressClass metadata: name: awesome-class spec: controller: ingress.k8s.aws/alb parameters: apiGroup: elbv2.k8s.aws kind: IngressClassParams name: awesome-class-cfg default IngressClass You can mark a particular IngressClass as the default for your cluster. Setting the ingressclass.kubernetes.io/is-default-class annotation to true on an IngressClass resource will ensure that new Ingresses without an ingressClassName field specified will be assigned this default IngressClass. Deprecated kubernetes.io/ingress.class annotation \u00b6 Before the IngressClass resource and ingressClassName field were added in Kubernetes 1.18, Ingress classes were specified with a kubernetes.io/ingress.class annotation on the Ingress. This annotation was never formally defined, but was widely supported by Ingress controllers. The newer ingressClassName field on Ingresses is a replacement for that annotation, but is not a direct equivalent. While the annotation was generally used to reference the name of the Ingress controller that should implement the Ingress, the field is a reference to an IngressClass resource that contains additional Ingress configuration, including the name of the Ingress controller. disable kubernetes.io/ingress.class annotation In order to maintain backwards-compatibility, kubernetes.io/ingress.class annotation is still supported currently. 
You can enforce IngressClass resource adoption by disabling the kubernetes.io/ingress.class annotation via the --disable-ingress-class-annotation controller flag. IngressClassParams \u00b6 IngressClassParams is a CRD specific to the AWS Load Balancer Controller, which can be used along with IngressClass\u2019s parameters field. You can use IngressClassParams to enforce settings for a set of Ingresses. Example with scheme & ipAddressType & tags apiVersion: elbv2.k8s.aws/v1beta1 kind: IngressClassParams metadata: name: awesome-class spec: scheme: internal ipAddressType: dualstack tags: - key: org value: my-org with namespaceSelector apiVersion: elbv2.k8s.aws/v1beta1 kind: IngressClassParams metadata: name: awesome-class spec: namespaceSelector: matchLabels: team: team-a with IngressGroup apiVersion: elbv2.k8s.aws/v1beta1 kind: IngressClassParams metadata: name: awesome-class spec: group: name: my-group with loadBalancerAttributes apiVersion: elbv2.k8s.aws/v1beta1 kind: IngressClassParams metadata: name: awesome-class spec: loadBalancerAttributes: - key: deletion_protection.enabled value: \"true\" - key: idle_timeout.timeout_seconds value: \"120\" with subnets.ids apiVersion: elbv2.k8s.aws/v1beta1 kind: IngressClassParams metadata: name: awesome-class spec: subnets: ids: - subnet-xxx - subnet-123 with subnets.tags apiVersion: elbv2.k8s.aws/v1beta1 kind: IngressClassParams metadata: name: class2048-config spec: subnets: tags: kubernetes.io/role/internal-elb: - \"1\" myKey: - myVal0 - myVal1 IngressClassParams specification \u00b6 spec.namespaceSelector \u00b6 namespaceSelector is an optional setting that follows general Kubernetes label selector semantics. Cluster administrators can use the namespaceSelector field to restrict the namespaces of Ingresses that are allowed to specify the IngressClass. If namespaceSelector is specified, only Ingresses in selected namespaces can use IngressClasses with this parameter. The controller will refuse to reconcile Ingresses that violate namespaceSelector . If namespaceSelector is unspecified, all Ingresses in any namespace can use IngressClasses with this parameter. spec.group \u00b6 group is an optional setting. The only available sub-field is group.name . Cluster administrators can use the group.name field to denote the groupName for all Ingresses belonging to this IngressClass. If group.name is specified, all Ingresses with this IngressClass will belong to the same IngressGroup specified and result in a single ALB. If group.name is not specified, Ingresses with this IngressClass can use the older / legacy alb.ingress.kubernetes.io/group.name annotation to specify their IngressGroup. Ingresses that belong to the same IngressClass can form different IngressGroups via that annotation. spec.scheme \u00b6 scheme is an optional setting. The available options are internet-facing or internal . Cluster administrators can use the scheme field to restrict the scheme for all Ingresses that belong to this IngressClass. If scheme is specified, all Ingresses with this IngressClass will have the specified scheme. If scheme is unspecified, Ingresses with this IngressClass can continue to use the alb.ingress.kubernetes.io/scheme annotation to specify scheme. spec.inboundCIDRs \u00b6 Cluster administrators can use the optional inboundCIDRs field to specify the CIDRs that are allowed to access the load balancers that belong to this IngressClass. If the field is specified, LBC will ignore the alb.ingress.kubernetes.io/inbound-cidrs annotation.
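As an illustration, a minimal IngressClassParams sketch restricting access to two hypothetical CIDR ranges:

apiVersion: elbv2.k8s.aws/v1beta1
kind: IngressClassParams
metadata:
  name: awesome-class
spec:
  # load balancers for Ingresses using this class may only be reached from
  # these ranges; the alb.ingress.kubernetes.io/inbound-cidrs annotation is then ignored
  inboundCIDRs:
    - 10.0.0.0/16
    - 192.168.1.0/24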
spec.sslPolicy \u00b6 Cluster administrators can use the optional sslPolicy field to specify the SSL policy for the load balancers that belong to this IngressClass. If the field is specified, LBC will ignore the alb.ingress.kubernetes.io/ssl-policy annotation. spec.subnets \u00b6 Cluster administrators can use the optional subnets field to specify the subnets for the load balancers that belong to this IngressClass. They may specify either ids or tags . If the field is specified, LBC will ignore the alb.ingress.kubernetes.io/subnets annotation. spec.subnets.ids \u00b6 If ids is specified, it must be a set of at least one resource ID of a subnet in the VPC. No two subnets may be in the same availability zone. spec.subnets.tags \u00b6 If tags is specified, it is a map of tag filters. The filters will match subnets in the VPC for which each listed tag key is present and has one of the corresponding tag values. Unless the SubnetsClusterTagCheck feature gate is disabled, subnets without a cluster tag and with the cluster tag for another cluster will be excluded. Within any given availability zone, subnets with a cluster tag will be chosen over subnets without, then the subnet with the lowest-sorting resource ID will be chosen. spec.ipAddressType \u00b6 ipAddressType is an optional setting. The available options are ipv4 or dualstack . Cluster administrators can use the ipAddressType field to restrict the ipAddressType for all Ingresses that belong to this IngressClass. If ipAddressType is specified, all Ingresses with this IngressClass will have the specified ipAddressType. If ipAddressType is unspecified, Ingresses with this IngressClass can continue to use the alb.ingress.kubernetes.io/ip-address-type annotation to specify ipAddressType. spec.tags \u00b6 tags is an optional setting. Cluster administrators can use the tags field to specify the custom tags for AWS resources provisioned for all Ingresses belonging to this IngressClass. If tags is set, AWS resources provisioned for all Ingresses with this IngressClass will have the specified tags. You can also use the controller-level flag --default-tags or the alb.ingress.kubernetes.io/tags annotation to specify custom tags. These tags will be merged together based on tag-key. If the same tag-key appears in multiple sources, the priority is as follows: controller-level flag --default-tags will have the highest priority. spec.tags in IngressClassParams will have the middle priority. alb.ingress.kubernetes.io/tags annotation will have the lowest priority. For example, given --default-tags Environment=prod , spec.tags Environment=dev,Team=alpha , and an annotation Team=beta,Owner=ops , the resulting tags would be Environment=prod , Team=alpha and Owner=ops . spec.loadBalancerAttributes \u00b6 loadBalancerAttributes is an optional setting. Cluster administrators can use the loadBalancerAttributes field to specify the Load Balancer Attributes that should be applied to the load balancers that belong to this IngressClass. You can specify the list of load balancer attribute names and the desired values in the spec.loadBalancerAttributes field. If loadBalancerAttributes is set, the attributes defined will be applied to the load balancers that belong to this IngressClass. If you specify invalid keys or values for the load balancer attributes, the controller will fail to reconcile ingresses belonging to the particular ingress class.
If loadBalancerAttributes un-specified, Ingresses with this IngressClass can continue to use alb.ingress.kubernetes.io/load-balancer-attributes annotation to specify the load balancer attributes.","title":"IngressClass"},{"location":"guide/ingress/ingress_class/#ingressclass","text":"Ingresses can be implemented by different controllers, often with different configuration. Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configuration including the name of the controller that should implement the class. IngressClass resources contain an optional parameters field. This can be used to reference additional implementation-specific configuration for this class. For the AWS Load Balancer controller, the implementation-specific configuration is IngressClassParams in the elbv2.k8s.aws API group. Example specify controller as ingress.k8s.aws/alb to denote Ingresses should be managed by AWS Load Balancer Controller. apiVersion: networking.k8s.io/v1 kind: IngressClass metadata: name: awesome-class spec: controller: ingress.k8s.aws/alb specify additional configurations by referencing an IngressClassParams resource. apiVersion: networking.k8s.io/v1 kind: IngressClass metadata: name: awesome-class spec: controller: ingress.k8s.aws/alb parameters: apiGroup: elbv2.k8s.aws kind: IngressClassParams name: awesome-class-cfg default IngressClass You can mark a particular IngressClass as the default for your cluster. Setting the ingressclass.kubernetes.io/is-default-class annotation to true on an IngressClass resource will ensure that new Ingresses without an ingressClassName field specified will be assigned this default IngressClass.","title":"IngressClass"},{"location":"guide/ingress/ingress_class/#deprecated-kubernetesioingressclass-annotation","text":"Before the IngressClass resource and ingressClassName field were added in Kubernetes 1.18, Ingress classes were specified with a kubernetes.io/ingress.class annotation on the Ingress. This annotation was never formally defined, but was widely supported by Ingress controllers. The newer ingressClassName field on Ingresses is a replacement for that annotation, but is not a direct equivalent. While the annotation was generally used to reference the name of the Ingress controller that should implement the Ingress, the field is a reference to an IngressClass resource that contains additional Ingress configuration, including the name of the Ingress controller. disable kubernetes.io/ingress.class annotation In order to maintain backwards-compatibility, kubernetes.io/ingress.class annotation is still supported currently. You can enforce IngressClass resource adoption by disabling the kubernetes.io/ingress.class annotation via --disable-ingress-class-annotation controller flag.","title":"Deprecated kubernetes.io/ingress.class annotation"},{"location":"guide/ingress/ingress_class/#ingressclassparams","text":"IngressClassParams is a CRD specific to the AWS Load Balancer Controller, which can be used along with IngressClass\u2019s parameter field. You can use IngressClassParams to enforce settings for a set of Ingresses. 
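Before the annotated examples that follow, note that an Ingress opts into these enforced settings simply by referencing the IngressClass; a minimal sketch with hypothetical names:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  # settings from awesome-class's IngressClassParams apply to this Ingress
  ingressClassName: awesome-class
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service
                port:
                  number: 80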
Example with scheme & ipAddressType & tags apiVersion: elbv2.k8s.aws/v1beta1 kind: IngressClassParams metadata: name: awesome-class spec: scheme: internal ipAddressType: dualstack tags: - key: org value: my-org with namespaceSelector apiVersion: elbv2.k8s.aws/v1beta1 kind: IngressClassParams metadata: name: awesome-class spec: namespaceSelector: matchLabels: team: team-a with IngressGroup apiVersion: elbv2.k8s.aws/v1beta1 kind: IngressClassParams metadata: name: awesome-class spec: group: name: my-group with loadBalancerAttributes apiVersion: elbv2.k8s.aws/v1beta1 kind: IngressClassParams metadata: name: awesome-class spec: loadBalancerAttributes: - key: deletion_protection.enabled value: \"true\" - key: idle_timeout.timeout_seconds value: \"120\" with subnets.ids apiVersion: elbv2.k8s.aws/v1beta1 kind: IngressClassParams metadata: name: awesome-class spec: subnets: ids: - subnet-xxx - subnet-123 with subnets.tags apiVersion: elbv2.k8s.aws/v1beta1 kind: IngressClassParams metadata: name: class2048-config spec: subnets: tags: kubernetes.io/role/internal-elb: - \"1\" myKey: - myVal0 - myVal1","title":"IngressClassParams"},{"location":"guide/ingress/ingress_class/#ingressclassparams-specification","text":"","title":"IngressClassParams specification"},{"location":"guide/ingress/ingress_class/#specnamespaceselector","text":"namespaceSelector is an optional setting that follows general Kubernetes label selector semantics. Cluster administrators can use the namespaceSelector field to restrict the namespaces of Ingresses that are allowed to specify the IngressClass. If namespaceSelector specified, only Ingresses in selected namespaces can use IngressClasses with this parameter. The controller will refuse to reconcile for Ingresses that violates namespaceSelector . If namespaceSelector un-specified, all Ingresses in any namespace can use IngressClasses with this parameter.","title":"spec.namespaceSelector"},{"location":"guide/ingress/ingress_class/#specgroup","text":"group is an optional setting. The only available sub-field is group.name . Cluster administrators can use group.name field to denote the groupName for all Ingresses belong to this IngressClass. If group.name specified, all Ingresses with this IngressClass will belong to the same IngressGroup specified and result in a single ALB. If group.name is not specified, Ingresses with this IngressClass can use the older / legacy alb.ingress.kubernetes.io/group.name annotation to specify their IngressGroup. Ingresses that belong to the same IngressClass can form different IngressGroups via that annotation.","title":"spec.group"},{"location":"guide/ingress/ingress_class/#specscheme","text":"scheme is an optional setting. The available options are internet-facing or internal . Cluster administrators can use the scheme field to restrict the scheme for all Ingresses that belong to this IngressClass. If scheme specified, all Ingresses with this IngressClass will have the specified scheme. If scheme un-specified, Ingresses with this IngressClass can continue to use alb.ingress.kubernetes.io/scheme annotation to specify scheme.","title":"spec.scheme"},{"location":"guide/ingress/ingress_class/#specinboundcidrs","text":"Cluster administrators can use the optional inboundCIDRs field to specify the CIDRs that are allowed to access the load balancers that belong to this IngressClass. 
If the field is specified, LBC will ignore the alb.ingress.kubernetes.io/inbound-cidrs annotation.","title":"spec.inboundCIDRs"},{"location":"guide/ingress/ingress_class/#specsslpolicy","text":"Cluster administrators can use the optional sslPolicy field to specify the SSL policy for the load balancers that belong to this IngressClass. If the field is specified, LBC will ignore the alb.ingress.kubernetes.io/ssl-policy annotation.","title":"spec.sslPolicy"},{"location":"guide/ingress/ingress_class/#specsubnets","text":"Cluster administrators can use the optional subnets field to specify the subnets for the load balancers that belong to this IngressClass. They may specify either ids or tags . If the field is specified, LBC will ignore the alb.ingress.kubernetes.io/subnets annotation.","title":"spec.subnets"},{"location":"guide/ingress/ingress_class/#specsubnetsids","text":"If ids is specified, it must be a set of at least one resource ID of a subnet in the VPC. No two subnets may be in the same availability zone.","title":"spec.subnets.ids"},{"location":"guide/ingress/ingress_class/#specsubnetstags","text":"If tags is specified, it is a map of tag filters. The filters will match subnets in the VPC for which each listed tag key is present and has one of the corresponding tag values. Unless the SubnetsClusterTagCheck feature gate is disabled, subnets without a cluster tag and with the cluster tag for another cluster will be excluded. Within any given availability zone, subnets with a cluster tag will be chosen over subnets without, then the subnet with the lowest-sorting resource ID will be chosen.","title":"spec.subnets.tags"},{"location":"guide/ingress/ingress_class/#specipaddresstype","text":"ipAddressType is an optional setting. The available options are ipv4 or dualstack . Cluster administrators can use the ipAddressType field to restrict the ipAddressType for all Ingresses that belong to this IngressClass. If ipAddressType is specified, all Ingresses with this IngressClass will have the specified ipAddressType. If ipAddressType is unspecified, Ingresses with this IngressClass can continue to use the alb.ingress.kubernetes.io/ip-address-type annotation to specify ipAddressType.","title":"spec.ipAddressType"},{"location":"guide/ingress/ingress_class/#spectags","text":"tags is an optional setting. Cluster administrators can use the tags field to specify the custom tags for AWS resources provisioned for all Ingresses belonging to this IngressClass. If tags is set, AWS resources provisioned for all Ingresses with this IngressClass will have the specified tags. You can also use the controller-level flag --default-tags or the alb.ingress.kubernetes.io/tags annotation to specify custom tags. These tags will be merged together based on tag-key. If the same tag-key appears in multiple sources, the priority is as follows: controller-level flag --default-tags will have the highest priority. spec.tags in IngressClassParams will have the middle priority. alb.ingress.kubernetes.io/tags annotation will have the lowest priority.","title":"spec.tags"},{"location":"guide/ingress/ingress_class/#specloadbalancerattributes","text":"loadBalancerAttributes is an optional setting. Cluster administrators can use the loadBalancerAttributes field to specify the Load Balancer Attributes that should be applied to the load balancers that belong to this IngressClass. You can specify the list of load balancer attribute names and the desired values in the spec.loadBalancerAttributes field.
If loadBalancerAttributes is set, the attributes defined will be applied to the load balancers that belong to this IngressClass. If you specify invalid keys or values for the load balancer attributes, the controller will fail to reconcile ingresses belonging to the particular ingress class. If loadBalancerAttributes is unspecified, Ingresses with this IngressClass can continue to use the alb.ingress.kubernetes.io/load-balancer-attributes annotation to specify the load balancer attributes.","title":"spec.loadBalancerAttributes"},{"location":"guide/ingress/spec/","text":"Ingress specification \u00b6 This document covers how ingress resources work in relation to the AWS Load Balancer Controller. Beginning from v2.4.3 of the AWS LBC, rules are ordered as follows: pathType: Exact paths are always ordered first followed by pathType: Prefix paths, with the longest prefix first followed by pathType: ImplementationSpecific paths, in the order they are listed in the manifest An example ingress, from example , is as follows. apiVersion : networking.k8s.io/v1 kind : Ingress metadata : name : \"2048-ingress\" namespace : \"2048-game\" labels : app : 2048-nginx-ingress spec : ingressClassName : alb rules : - host : 2048.example.com http : paths : - path : /* pathType : ImplementationSpecific backend : service : name : \"service-2048\" port : number : 80 The host field specifies the eventual Route 53-managed domain that will route to this service. The service, service-2048, must be of type NodePort in order for the provisioned ALB to route to it (see echoserver-service.yaml ). The AWS Load Balancer Controller does not support the resource field of backend .","title":"Specification"},{"location":"guide/ingress/spec/#ingress-specification","text":"This document covers how ingress resources work in relation to the AWS Load Balancer Controller. Beginning from v2.4.3 of the AWS LBC, rules are ordered as follows: pathType: Exact paths are always ordered first followed by pathType: Prefix paths, with the longest prefix first followed by pathType: ImplementationSpecific paths, in the order they are listed in the manifest An example ingress, from example , is as follows. apiVersion : networking.k8s.io/v1 kind : Ingress metadata : name : \"2048-ingress\" namespace : \"2048-game\" labels : app : 2048-nginx-ingress spec : ingressClassName : alb rules : - host : 2048.example.com http : paths : - path : /* pathType : ImplementationSpecific backend : service : name : \"service-2048\" port : number : 80 The host field specifies the eventual Route 53-managed domain that will route to this service. The service, service-2048, must be of type NodePort in order for the provisioned ALB to route to it (see echoserver-service.yaml ). The AWS Load Balancer Controller does not support the resource field of backend .","title":"Ingress specification"},{"location":"guide/integrations/external_dns/","text":"Setup External DNS \u00b6 external-dns provisions DNS records based on the host information. This project will set up and manage records in Route 53 that point to controller deployed ALBs. Prerequisites \u00b6 Role Permissions \u00b6 Adequate roles and policies must be configured in AWS and available to the node(s) running the external-dns. See https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/aws.md#iam-permissions.
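As one hedged illustration: if you grant those permissions with IAM Roles for Service Accounts (IRSA), the role is typically exposed to external-dns through a ServiceAccount annotation; the role name and account ID below are placeholders:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
  namespace: kube-system
  annotations:
    # hypothetical IAM role carrying the Route 53 permissions linked above
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/external-dns-route53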
Installation \u00b6 Download sample external-dns manifest wget https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/examples/external-dns.yaml Edit the --domain-filter flag to include your hosted zone(s) The following example is for a hosted zone test-dns.com : args : - --source=service - --source=ingress - --domain-filter=test-dns.com # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones - --provider=aws - --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization - --aws-zone-type=public # only look at public hosted zones (valid values are public, private or no value for both) - --registry=txt - --txt-owner-id=my-identifier Deploy external-dns kubectl apply -f external-dns.yaml Verify it deployed successfully. kubectl logs -f $( kubectl get po | egrep -o 'external-dns[A-Za-z0-9-]+' ) Should display output similar to the following: time=\"2019-12-11T10:26:05Z\" level=info msg=\"config: {Master: KubeConfig: RequestTimeout:30s IstioIngressGateway:istio-system/istio-ingressgateway Sources:[service ingress] Namespace: AnnotationFilter: FQDNTemplate: CombineFQDNAndAnnotation:false Compatibility: PublishInternal:false PublishHostIP:false ConnectorSourceServer:localhost:8080 Provider:aws GoogleProject: DomainFilter:[test-dns.com] ZoneIDFilter:[] AlibabaCloudConfigFile:/etc/kubernetes/alibaba-cloud.json AlibabaCloudZoneType: AWSZoneType:public AWSAssumeRole: AWSBatchChangeSize:4000 AWSBatchChangeInterval:1s AWSEvaluateTargetHealth:true AzureConfigFile:/etc/kubernetes/azure.json AzureResourceGroup: CloudflareProxied:false InfobloxGridHost: InfobloxWapiPort:443 InfobloxWapiUsername:admin InfobloxWapiPassword: InfobloxWapiVersion:2.3.1 InfobloxSSLVerify:true DynCustomerName: DynUsername: DynPassword: DynMinTTLSeconds:0 OCIConfigFile:/etc/kubernetes/oci.yaml InMemoryZones:[] PDNSServer:http://localhost:8081 PDNSAPIKey: PDNSTLSEnabled:false TLSCA: TLSClientCert: TLSClientCertKey: Policy:upsert-only Registry:txt TXTOwnerID:my-identifier TXTPrefix: Interval:1m0s Once:false DryRun:false LogFormat:text MetricsAddress::7979 LogLevel:info TXTCacheInterval:0s ExoscaleEndpoint:https://api.exoscale.ch/dns ExoscaleAPIKey: ExoscaleAPISecret: CRDSourceAPIVersion:externaldns.k8s.io/v1alpha CRDSourceKind:DNSEndpoint ServiceTypeFilter:[] RFC2136Host: RFC2136Port:0 RFC2136Zone: RFC2136Insecure:false RFC2136TSIGKeyName: RFC2136TSIGSecret: RFC2136TSIGSecretAlg: RFC2136TAXFR:false}\" time=\"2019-12-11T10:26:05Z\" level=info msg=\"Created Kubernetes client https://10.100.0.1:443\" Usage \u00b6 To create a record set in the subdomain for an ingress created by the ingress controller, add the following annotations to the ingress object: annotations : kubernetes.io/ingress.class : alb alb.ingress.kubernetes.io/scheme : internet-facing # external-dns specific configuration for creating route53 record-set external-dns.alpha.kubernetes.io/hostname : my-app.test-dns.com # give your domain name here A snippet of the external-dns pod log indicating route53 update: time=\"2019-12-11T10:26:08Z\" level=info msg=\"Desired change: CREATE my-app.test-dns.com A\" time=\"2019-12-11T10:26:08Z\" level=info msg=\"Desired change: CREATE my-app.test-dns.com TXT\" time=\"2019-12-11T10:26:08Z\" level=info msg=\"2 record(s) in zone my-app.test-dns.com. were successfully updated\" External DNS configures Simple routing policy for the route53 records.
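Before moving on to Weighted records, here is a complete, purely illustrative Ingress carrying the external-dns annotation in context; the domain and service names are placeholders:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    # external-dns creates a Route 53 record set pointing at the provisioned ALB
    external-dns.alpha.kubernetes.io/hostname: my-app.test-dns.com
spec:
  rules:
    - host: my-app.test-dns.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-service
                port:
                  number: 80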
You can configure Weighted policy by specifying the weight and the identifier via annotation. Weighted policy allows you to split the traffic between multiple load balancers. Here is an example to specify weight and identifier: annotations : # For creating weighted route53 records external-dns.alpha.kubernetes.io/hostname : my-app.test-dns.com external-dns.alpha.kubernetes.io/aws-weight : \"100\" external-dns.alpha.kubernetes.io/set-identifier : \"3\" You can refer to the External DNS documentation for further details [ link ].","title":"Setup External DNS"},{"location":"guide/integrations/external_dns/#setup-external-dns","text":"external-dns provisions DNS records based on the host information. This project will setup and manage records in Route 53 that point to controller deployed ALBs.","title":"Setup External DNS"},{"location":"guide/integrations/external_dns/#prerequisites","text":"","title":"Prerequisites"},{"location":"guide/integrations/external_dns/#role-permissions","text":"Adequate roles and policies must be configured in AWS and available to the node(s) running the external-dns. See https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/aws.md#iam-permissions.","title":"Role Permissions"},{"location":"guide/integrations/external_dns/#installation","text":"Download sample external-dns manifest wget https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/examples/external-dns.yaml Edit the --domain-filter flag to include your hosted zone(s) The following example is for a hosted zone test-dns.com : args : - --source=service - --source=ingress - --domain-filter=test-dns.com # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones - --provider=aws - --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization - --aws-zone-type=public # only look at public hosted zones (valid values are public, private or no value for both) - --registry=txt - --txt-owner-id=my-identifier Deploy external-dns kubectl apply -f external-dns.yaml Verify it deployed successfully. 
kubectl logs -f $( kubectl get po | egrep -o 'external-dns[A-Za-z0-9-]+' ) Should display output similar to the following: time=\"2019-12-11T10:26:05Z\" level=info msg=\"config: {Master: KubeConfig: RequestTimeout:30s IstioIngressGateway:istio-system/istio-ingressgateway Sources:[service ingress] Namespace: AnnotationFilter: FQDNTemplate: CombineFQDNAndAnnotation:false Compatibility: PublishInternal:false PublishHostIP:false ConnectorSourceServer:localhost:8080 Provider:aws GoogleProject: DomainFilter:[test-dns.com] ZoneIDFilter:[] AlibabaCloudConfigFile:/etc/kubernetes/alibaba-cloud.json AlibabaCloudZoneType: AWSZoneType:public AWSAssumeRole: AWSBatchChangeSize:4000 AWSBatchChangeInterval:1s AWSEvaluateTargetHealth:true AzureConfigFile:/etc/kubernetes/azure.json AzureResourceGroup: CloudflareProxied:false InfobloxGridHost: InfobloxWapiPort:443 InfobloxWapiUsername:admin InfobloxWapiPassword: InfobloxWapiVersion:2.3.1 InfobloxSSLVerify:true DynCustomerName: DynUsername: DynPassword: DynMinTTLSeconds:0 OCIConfigFile:/etc/kubernetes/oci.yaml InMemoryZones:[] PDNSServer:http://localhost:8081 PDNSAPIKey: PDNSTLSEnabled:false TLSCA: TLSClientCert: TLSClientCertKey: Policy:upsert-only Registry:txt TXTOwnerID:my-identifier TXTPrefix: Interval:1m0s Once:false DryRun:false LogFormat:text MetricsAddress::7979 LogLevel:info TXTCacheInterval:0s ExoscaleEndpoint:https://api.exoscale.ch/dns ExoscaleAPIKey: ExoscaleAPISecret: CRDSourceAPIVersion:externaldns.k8s.io/v1alpha CRDSourceKind:DNSEndpoint ServiceTypeFilter:[] RFC2136Host: RFC2136Port:0 RFC2136Zone: RFC2136Insecure:false RFC2136TSIGKeyName: RFC2136TSIGSecret: RFC2136TSIGSecretAlg: RFC2136TAXFR:false}\" time=\"2019-12-11T10:26:05Z\" level=info msg=\"Created Kubernetes client https://10.100.0.1:443\"","title":"Installation"},{"location":"guide/integrations/external_dns/#usage","text":"To create a record set in the subdomain for an ingress created by the ingress controller, add the following annotations to the ingress object: annotations : kubernetes.io/ingress.class : alb alb.ingress.kubernetes.io/scheme : internet-facing # external-dns specific configuration for creating route53 record-set external-dns.alpha.kubernetes.io/hostname : my-app.test-dns.com # give your domain name here A snippet of the external-dns pod log indicating route53 update: time=\"2019-12-11T10:26:08Z\" level=info msg=\"Desired change: CREATE my-app.test-dns.com A\" time=\"2019-12-11T10:26:08Z\" level=info msg=\"Desired change: CREATE my-app.test-dns.com TXT\" time=\"2019-12-11T10:26:08Z\" level=info msg=\"2 record(s) in zone my-app.test-dns.com. were successfully updated\" External DNS configures Simple routing policy for the route53 records. You can configure Weighted policy by specifying the weight and the identifier via annotation. Weighted policy allows you to split the traffic between multiple load balancers. Here is an example to specify weight and identifier: annotations : # For creating weighted route53 records external-dns.alpha.kubernetes.io/hostname : my-app.test-dns.com external-dns.alpha.kubernetes.io/aws-weight : \"100\" external-dns.alpha.kubernetes.io/set-identifier : \"3\" You can refer to the External DNS documentation for further details [ link ].","title":"Usage"},{"location":"guide/service/annotations/","text":"Service annotations \u00b6 Annotation keys and values can only be strings.
All other types below must be string-encoded, for example: boolean: \"true\" integer: \"42\" stringList: \"s1,s2,s3\" stringMap: \"k1=v1,k2=v2\" json: \"{ \\\"key\\\": \\\"value\\\" }\" Annotations \u00b6 Warning These annotations are specific to the kubernetes service resources reconciled by the AWS Load Balancer Controller. Although the list was initially derived from the k8s in-tree kube-controller-manager , this documentation is not an accurate reference for the services reconciled by the in-tree controller. Name Type Default Notes service.beta.kubernetes.io/load-balancer-source-ranges stringList service.beta.kubernetes.io/aws-load-balancer-type string service.beta.kubernetes.io/aws-load-balancer-nlb-target-type string default instance in case of LoadBalancerClass service.beta.kubernetes.io/aws-load-balancer-name string service.beta.kubernetes.io/aws-load-balancer-internal boolean false deprecated, in favor of aws-load-balancer-scheme service.beta.kubernetes.io/aws-load-balancer-scheme string internal service.beta.kubernetes.io/aws-load-balancer-proxy-protocol string Set to \"*\" to enable service.beta.kubernetes.io/aws-load-balancer-ip-address-type string ipv4 ipv4 | dualstack service.beta.kubernetes.io/aws-load-balancer-access-log-enabled boolean false deprecated, in favor of aws-load-balancer-attributes service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name string deprecated, in favor of aws-load-balancer-attributes service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix string deprecated, in favor of aws-load-balancer-attributes service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled boolean false deprecated, in favor of aws-load-balancer-attributes service.beta.kubernetes.io/aws-load-balancer-ssl-cert stringList service.beta.kubernetes.io/aws-load-balancer-ssl-ports stringList service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy string ELBSecurityPolicy-2016-08 service.beta.kubernetes.io/aws-load-balancer-backend-protocol string service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags stringMap service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol string TCP service.beta.kubernetes.io/aws-load-balancer-healthcheck-port integer | traffic-port traffic-port service.beta.kubernetes.io/aws-load-balancer-healthcheck-path string \"/\" for HTTP(S) protocols service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold integer 3 service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold integer 3 service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout integer 10 service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval integer 10 service.beta.kubernetes.io/aws-load-balancer-healthcheck-success-codes string 200-399 service.beta.kubernetes.io/aws-load-balancer-eip-allocations stringList internet-facing lb only. Length must match the number of subnets service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses stringList internal lb only. Length must match the number of subnets service.beta.kubernetes.io/aws-load-balancer-ipv6-addresses stringList dualstack lb only. 
Length must match the number of subnets service.beta.kubernetes.io/aws-load-balancer-target-group-attributes stringMap service.beta.kubernetes.io/aws-load-balancer-subnets stringList service.beta.kubernetes.io/aws-load-balancer-alpn-policy string service.beta.kubernetes.io/aws-load-balancer-target-node-labels stringMap service.beta.kubernetes.io/aws-load-balancer-attributes stringMap service.beta.kubernetes.io/aws-load-balancer-security-groups stringList service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules boolean true service.beta.kubernetes.io/aws-load-balancer-inbound-sg-rules-on-private-link-traffic string Traffic Routing \u00b6 Traffic Routing can be controlled with following annotations: service.beta.kubernetes.io/aws-load-balancer-name specifies the custom name to use for the load balancer. Name longer than 32 characters will be treated as an error. limitations If you modify this annotation after service creation, there is no effect. Example service.beta.kubernetes.io/aws-load-balancer-name: custom-name service.beta.kubernetes.io/aws-load-balancer-type specifies the load balancer type. This controller reconciles those service resources with this annotation set to either nlb-ip or external . Tip This annotation specifies the controller used to provision LoadBalancers (as specified in legacy-cloud-provider ). Refer to lb-scheme to specify whether the LoadBalancer is internet-facing or internal. [Deprecated] For type nlb-ip , the controller will provision an NLB with targets registered by IP address. This value is supported for backwards compatibility. For type external , the NLB target type depends on the nlb-target-type annotation. limitations This annotation should not be modified after service creation. Example service.beta.kubernetes.io/aws-load-balancer-type: external service.beta.kubernetes.io/aws-load-balancer-nlb-target-type specifies the target type to configure for NLB. You can choose between instance and ip . instance mode will route traffic to all EC2 instances within cluster on the NodePort opened for your service. The kube-proxy on the individual worker nodes sets up the forwarding of the traffic from the NodePort to the pods behind the service. service must be of type NodePort or LoadBalancer for instance targets for k8s 1.22 and later if spec.allocateLoadBalancerNodePorts is set to false , NodePort must be allocated manually default value If you configure spec.loadBalancerClass , the controller defaults to instance target type NodePort allocation k8s version 1.22 and later support disabling NodePort allocation by setting the service field spec.allocateLoadBalancerNodePorts to false . If the NodePort is not allocated for a service port, the controller will fail to reconcile instance mode NLB. ip mode will route traffic directly to the pod IP. In this mode, AWS NLB sends traffic directly to the Kubernetes pods behind the service, eliminating the need for an extra network hop through the worker nodes in the Kubernetes cluster. ip target mode supports pods running on AWS EC2 instances and AWS Fargate network plugin must use native AWS VPC networking configuration for pod IP, for example Amazon VPC CNI plugin . Example service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance service.beta.kubernetes.io/aws-load-balancer-subnets specifies the Availability Zone the NLB will route traffic to. See Network Load Balancers for more details. 
Tip Subnets are auto-discovered if this annotation is not specified, see Subnet Discovery for further details. You must specify at least one subnet in any of the AZs; either subnetID or subnetName (Name tag on subnets) can be used. limitations Each subnet must be from a different Availability Zone AWS has restrictions on disabling existing subnets for NLB. As a result, you might not be able to edit this annotation once the NLB gets provisioned. Example service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-xxxx, mySubnet service.beta.kubernetes.io/aws-load-balancer-alpn-policy allows you to configure the ALPN policies on the load balancer. supported policies HTTP1Only Negotiate only HTTP/1.*. The ALPN preference list is http/1.1, http/1.0. HTTP2Only Negotiate only HTTP/2. The ALPN preference list is h2. HTTP2Optional Prefer HTTP/1.* over HTTP/2 (which can be useful for HTTP/2 testing). The ALPN preference list is http/1.1, http/1.0, h2. HTTP2Preferred Prefer HTTP/2 over HTTP/1.*. The ALPN preference list is h2, http/1.1, http/1.0. None Do not negotiate ALPN. This is the default. Example service.beta.kubernetes.io/aws-load-balancer-alpn-policy: HTTP2Preferred service.beta.kubernetes.io/aws-load-balancer-target-node-labels specifies which nodes to include in the target group registration for instance target type. Example service.beta.kubernetes.io/aws-load-balancer-target-node-labels: label1=value1, label2=value2 service.beta.kubernetes.io/aws-load-balancer-eip-allocations specifies a list of elastic IP address configurations for an internet-facing NLB. Note This configuration is optional, and you can use it to assign static IP addresses to your NLB You must specify the same number of eip allocations as load balancer subnets NLB must be internet-facing Example service.beta.kubernetes.io/aws-load-balancer-eip-allocations: eipalloc-xyz, eipalloc-zzz service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses specifies a list of private IPv4 addresses for an internal NLB. Note NLB must be internal This configuration is optional, and you can use it to assign static IPv4 addresses to your NLB You must specify the same number of private IPv4 addresses as load balancer subnets You must specify the IPv4 addresses from the load balancer subnet IPv4 ranges Example service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses: 192.168.10.15, 192.168.32.16 service.beta.kubernetes.io/aws-load-balancer-ipv6-addresses specifies a list of IPv6 addresses for a dualstack NLB. Note NLB must be dualstack This configuration is optional, and you can use it to assign static IPv6 addresses to your NLB You must specify the same number of IPv6 addresses as load balancer subnets You must specify the IPv6 addresses from the load balancer subnet IPv6 ranges Example service.beta.kubernetes.io/aws-load-balancer-ipv6-addresses: 2600:1f13:837:8501::1, 2600:1f13:837:8504::1 Traffic Listening \u00b6 Traffic Listening can be controlled with the following annotations: service.beta.kubernetes.io/aws-load-balancer-ip-address-type specifies the IP address type of NLB. Example service.beta.kubernetes.io/aws-load-balancer-ip-address-type: ipv4 Resource attributes \u00b6 NLB resource attributes can be controlled via the following annotations: service.beta.kubernetes.io/aws-load-balancer-proxy-protocol specifies whether to enable proxy protocol v2 on the target group. Set to '*' to enable proxy protocol v2. This annotation takes precedence over the annotation service.beta.kubernetes.io/aws-load-balancer-target-group-attributes for proxy protocol v2 configuration. The only valid value for this annotation is * .
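A minimal Service sketch enabling proxy protocol v2 via this annotation; the service and app names are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: echo
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    # '*' is the only valid value; enables proxy protocol v2 on the target group
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  type: LoadBalancer
  selector:
    app: echo
  ports:
    - port: 80
      targetPort: 8080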
service.beta.kubernetes.io/aws-load-balancer-target-group-attributes specifies the Target Group Attributes to be configured. Example set the deregistration delay to 120 seconds (available range is 0-3600 seconds) service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: deregistration_delay.timeout_seconds=120 enable source IP affinity service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: stickiness.enabled=true,stickiness.type=source_ip enable proxy protocol version 2 service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: proxy_protocol_v2.enabled=true enable connection termination on deregistration service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: deregistration_delay.connection_termination.enabled=true enable client IP preservation service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: preserve_client_ip.enabled=true disable immediate connection termination for unhealthy targets and configure a 30s draining interval (available range is 0-360000 seconds) service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: target_health_state.unhealthy.connection_termination.enabled=false,target_health_state.unhealthy.draining_interval_seconds=30
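For reference, multiple target group attributes combine into a single comma-separated annotation value; a hedged sketch with arbitrary illustrative values and placeholder names:

apiVersion: v1
kind: Service
metadata:
  name: sticky-svc
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    # several attributes set at once; values here are arbitrary examples
    service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: deregistration_delay.timeout_seconds=120,stickiness.enabled=true,stickiness.type=source_ip
spec:
  type: LoadBalancer
  selector:
    app: sticky
  ports:
    - port: 80
      targetPort: 8080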
service.beta.kubernetes.io/aws-load-balancer-attributes specifies Load Balancer Attributes that should be applied to the NLB. Only attributes defined in the annotation will be updated. To unset any AWS defaults (e.g. disabling access logs after having them enabled once), the values need to be explicitly set to the original values ( access_logs.s3.enabled=false ); omitting them is not sufficient. Custom attributes set in this annotation's config map will be overridden by annotation-specific attributes. For backwards compatibility, existing annotations for the individual load balancer attributes get precedence in case of ties. If deletion_protection.enabled=true is in the annotation, the controller will not be able to delete the NLB during reconciliation. Once the attribute gets edited to deletion_protection.enabled=false during reconciliation, the deployer will force delete the resource. Please note, if the deletion protection is not enabled via annotation (e.g. via AWS console), the controller still deletes the underlying resource. Example enable access log to s3 service.beta.kubernetes.io/aws-load-balancer-attributes: access_logs.s3.enabled=true,access_logs.s3.bucket=my-access-log-bucket,access_logs.s3.prefix=my-app enable NLB deletion protection service.beta.kubernetes.io/aws-load-balancer-attributes: deletion_protection.enabled=true enable cross zone load balancing service.beta.kubernetes.io/aws-load-balancer-attributes: load_balancing.cross_zone.enabled=true enable client availability zone affinity service.beta.kubernetes.io/aws-load-balancer-attributes: dns_record.client_routing_policy=availability_zone_affinity the following annotations are deprecated since the v2.3.0 release in favor of service.beta.kubernetes.io/aws-load-balancer-attributes service.beta.kubernetes.io/aws-load-balancer-access-log-enabled service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled AWS Resource Tags \u00b6 The AWS Load Balancer Controller automatically applies the following tags to the AWS resources it creates (NLB/TargetGroups/Listener/ListenerRule): elbv2.k8s.aws/cluster: ${clusterName} service.k8s.aws/stack: ${stackID} service.k8s.aws/resource: ${resourceID} In addition, you can use annotations to specify additional tags service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags specifies additional tags to apply to the AWS resources. you cannot override the default controller tags mentioned above or the tags specified in the --default-tags controller flag if any of the tags conflict with the ones configured via the --external-managed-tags controller flag, the controller fails to reconcile the service Example service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: Environment=dev,Team=test Health Check \u00b6 Health check on target groups can be configured with the following annotations: service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol specifies the target group health check protocol. you can specify tcp , http , or https ; tcp is the default. tcp is the default health check protocol if the service spec.externalTrafficPolicy is Cluster , http if Local . If the service spec.externalTrafficPolicy is Local , do not use tcp for the health check. Supports only a single protocol per service Example service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: http service.beta.kubernetes.io/aws-load-balancer-healthcheck-port specifies the TCP port to use for target group health check. default value if you do not specify the health check port, the default value will be spec.healthCheckNodePort when externalTrafficPolicy=local or traffic-port otherwise. Example set the health check port to traffic-port service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: traffic-port set the health check port to port 80 service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: \"80\" service.beta.kubernetes.io/aws-load-balancer-healthcheck-path specifies the http path for the health check in case of http/https protocol. Example service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: /healthz service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold specifies the consecutive health check successes required before a target is considered healthy. Example service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: \"3\" service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold specifies the consecutive health check failures before a target gets marked unhealthy. Example service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: \"3\" service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval specifies the interval between consecutive health checks. Example service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: \"10\" service.beta.kubernetes.io/aws-load-balancer-healthcheck-success-codes specifies the http success codes for the health check in case of http/https protocol. Example service.beta.kubernetes.io/aws-load-balancer-healthcheck-success-codes: \"200-399\" service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout specifies the target group health check timeout. The target has to respond within the timeout for a successful health check. Note The controller currently ignores the timeout configuration due to the limitations on the AWS NLB. The default timeout for TCP is 10s and HTTP is 6s. Example service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: \"10\"
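Putting several of the health check annotations above together, a hedged sketch (service name, selector, and values are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    # HTTP health check against /healthz on the traffic port
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: /healthz
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: traffic-port
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "10"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "3"
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080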
TLS \u00b6 You can configure TLS support via the following annotations: service.beta.kubernetes.io/aws-load-balancer-ssl-cert specifies the ARN of one or more certificates managed by the AWS Certificate Manager . The first certificate in the list is the default certificate and the remaining certificates are for the optional certificate list. See Server Certificates for further details. Example service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:xxxxx:certificate/xxxxxxx service.beta.kubernetes.io/aws-load-balancer-ssl-ports specifies the frontend ports with TLS listeners. You must configure at least one certificate for TLS listeners You can specify a list of port names or port values, * does not match any ports If you don't specify this annotation, the controller creates a TLS listener for all the service ports Specify this annotation if you need both TLS and non-TLS listeners on the same load balancer Example service.beta.kubernetes.io/aws-load-balancer-ssl-ports: 443, custom-port service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy specifies the Security Policy for NLB frontend connections, allowing you to control the protocol and ciphers. Example service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: ELBSecurityPolicy-TLS13-1-2-2021-06 service.beta.kubernetes.io/aws-load-balancer-backend-protocol specifies whether to use TLS for the backend traffic between the load balancer and the kubernetes pods. If you specify ssl as the backend protocol, NLB uses TLS connections for the traffic to your kubernetes pods in case of TLS listeners You can specify ssl or tcp (default) Example service.beta.kubernetes.io/aws-load-balancer-backend-protocol: ssl
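A minimal TLS termination sketch combining the annotations above; the certificate ARN, names, and ports are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: tls-svc
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:xxxxx:certificate/xxxxxxx
    # create a TLS listener only on 443; port 80 stays plain TCP
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
spec:
  type: LoadBalancer
  selector:
    app: tls-app
  ports:
    - name: http
      port: 80
      targetPort: 8080
    - name: https
      port: 443
      targetPort: 8080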
Access control \u00b6 Load balancer access can be controlled via the following annotations: service.beta.kubernetes.io/load-balancer-source-ranges specifies the CIDRs that are allowed to access the NLB. Tip we recommend specifying CIDRs in the service spec.loadBalancerSourceRanges instead Default 0.0.0.0/0 will be used if the IPAddressType is \"ipv4\" 0.0.0.0/0 and ::/0 will be used if the IPAddressType is \"dualstack\" The VPC CIDR will be used if service.beta.kubernetes.io/aws-load-balancer-scheme is internal This annotation will be ignored in case preserve client IP is not enabled. - preserve client IP is disabled by default for IP targets - preserve client IP is enabled by default for instance targets Preserve client IP has no effect on traffic converted from IPv4 to IPv6 and on traffic converted from IPv6 to IPv4. The source IP of this type of traffic is always the private IP address of the Network Load Balancer. - This could cause the clients that have their traffic converted to bypass the specified CIDRs that are allowed to access the NLB. This annotation will be ignored if service.beta.kubernetes.io/aws-load-balancer-security-groups is specified. Example service.beta.kubernetes.io/load-balancer-source-ranges: 10.0.0.0/24 service.beta.kubernetes.io/aws-load-balancer-scheme specifies whether the NLB will be internet-facing or internal. Valid values are internal , internet-facing . If not specified, default is internal . Example service.beta.kubernetes.io/aws-load-balancer-scheme: \"internet-facing\" service.beta.kubernetes.io/aws-load-balancer-internal specifies whether the NLB will be internet-facing or internal. deprecation note This annotation is deprecated starting v2.2.0 release in favor of the new aws-load-balancer-scheme annotation. It will be supported, but in case of ties, the aws-load-balancer-scheme gets precedence. Example service.beta.kubernetes.io/aws-load-balancer-internal: \"true\" service.beta.kubernetes.io/aws-load-balancer-security-groups specifies the frontend securityGroups you want to attach to an NLB. When this annotation is not present, the controller will automatically create one security group. The security group will be attached to the LoadBalancer and allow access from inbound-cidrs to the listen-ports . Also, the securityGroups for target instances/ENIs will be modified to allow inbound traffic from this securityGroup. If you specify this annotation, you need to configure the security groups on your target instances/ENIs to allow inbound traffic from the load balancer. You could also set the manage-backend-security-group-rules if you want the controller to manage the security group rules. Both name and ID of securityGroups are supported. Name matches a Name tag, not the groupName attribute. Example service.beta.kubernetes.io/aws-load-balancer-security-groups: sg-xxxx, nameOfSg1, nameOfSg2 service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules specifies whether the controller should automatically add the ingress rules to the instance/ENI security group. If you disable the automatic management of security group rules for an NLB, you will need to manually add appropriate ingress rules to your EC2 instance or ENI security groups to allow access to the traffic and health check ports. Example service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules: \"false\" service.beta.kubernetes.io/aws-load-balancer-inbound-sg-rules-on-private-link-traffic specifies whether to apply security group rules to traffic sent to the load balancer through AWS PrivateLink. Example service.beta.kubernetes.io/aws-load-balancer-inbound-sg-rules-on-private-link-traffic: \"off\" Legacy Cloud Provider \u00b6 The AWS Load Balancer Controller manages Kubernetes Services in a compatible way with the AWS cloud provider's legacy service controller. For users on v2.5.0+, the AWS LBC provides a mutating webhook for service resources to set the spec.loadBalancerClass field for Services of type LoadBalancer, effectively making the AWS LBC the default controller for Services of type LoadBalancer.
Users can disable this feature and revert to using the AWS Cloud Controller Manager as the default service controller by setting the helm chart value enableServiceMutatorWebhook to false with --set enableServiceMutatorWebhook=false . For users on older versions, the annotation service.beta.kubernetes.io/aws-load-balancer-type is used to determine which controller reconciles the service. If the annotation value is nlb-ip or external , recent versions of the legacy cloud provider ignore the Service resource so that the AWS LBC can take over. For all other values of the annotation, the legacy cloud provider will handle the service. Note that this annotation should be specified during service creation and not edited later. Support for the annotation was added to the legacy cloud provider in Kubernetes v1.20, and is backported to v1.18.18+ and v1.19.10+.","title":"Annotations"},{"location":"guide/service/annotations/#service-annotations","text":"Annotation keys and values can only be strings. All other types below must be string-encoded, for example: boolean: \"true\" integer: \"42\" stringList: \"s1,s2,s3\" stringMap: \"k1=v1,k2=v2\" json: \"{ \\\"key\\\": \\\"value\\\" }\"","title":"Service annotations"},{"location":"guide/service/annotations/#annotations","text":"Warning These annotations are specific to the kubernetes service resources reconciled by the AWS Load Balancer Controller. Although the list was initially derived from the k8s in-tree kube-controller-manager , this documentation is not an accurate reference for the services reconciled by the in-tree controller. Name Type Default Notes service.beta.kubernetes.io/load-balancer-source-ranges stringList service.beta.kubernetes.io/aws-load-balancer-type string service.beta.kubernetes.io/aws-load-balancer-nlb-target-type string default instance in case of LoadBalancerClass service.beta.kubernetes.io/aws-load-balancer-name string service.beta.kubernetes.io/aws-load-balancer-internal boolean false deprecated, in favor of aws-load-balancer-scheme service.beta.kubernetes.io/aws-load-balancer-scheme string internal service.beta.kubernetes.io/aws-load-balancer-proxy-protocol string Set to \"*\" to enable service.beta.kubernetes.io/aws-load-balancer-ip-address-type string ipv4 ipv4 | dualstack service.beta.kubernetes.io/aws-load-balancer-access-log-enabled boolean false deprecated, in favor of aws-load-balancer-attributes service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name string deprecated, in favor of aws-load-balancer-attributes service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix string deprecated, in favor of aws-load-balancer-attributes service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled boolean false deprecated, in favor of aws-load-balancer-attributes service.beta.kubernetes.io/aws-load-balancer-ssl-cert stringList service.beta.kubernetes.io/aws-load-balancer-ssl-ports stringList service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy string ELBSecurityPolicy-2016-08 service.beta.kubernetes.io/aws-load-balancer-backend-protocol string service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags stringMap service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol string TCP service.beta.kubernetes.io/aws-load-balancer-healthcheck-port integer | traffic-port traffic-port service.beta.kubernetes.io/aws-load-balancer-healthcheck-path string \"/\" for HTTP(S) protocols service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold integer 
3 service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold integer 3 service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout integer 10 service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval integer 10 service.beta.kubernetes.io/aws-load-balancer-healthcheck-success-codes string 200-399 service.beta.kubernetes.io/aws-load-balancer-eip-allocations stringList internet-facing lb only. Length must match the number of subnets service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses stringList internal lb only. Length must match the number of subnets service.beta.kubernetes.io/aws-load-balancer-ipv6-addresses stringList dualstack lb only. Length must match the number of subnets service.beta.kubernetes.io/aws-load-balancer-target-group-attributes stringMap service.beta.kubernetes.io/aws-load-balancer-subnets stringList service.beta.kubernetes.io/aws-load-balancer-alpn-policy string service.beta.kubernetes.io/aws-load-balancer-target-node-labels stringMap service.beta.kubernetes.io/aws-load-balancer-attributes stringMap service.beta.kubernetes.io/aws-load-balancer-security-groups stringList service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules boolean true service.beta.kubernetes.io/aws-load-balancer-inbound-sg-rules-on-private-link-traffic string","title":"Annotations"},{"location":"guide/service/annotations/#traffic-routing","text":"Traffic Routing can be controlled with the following annotations: service.beta.kubernetes.io/aws-load-balancer-name specifies the custom name to use for the load balancer. A name longer than 32 characters will be treated as an error. limitations If you modify this annotation after service creation, there is no effect. Example service.beta.kubernetes.io/aws-load-balancer-name: custom-name service.beta.kubernetes.io/aws-load-balancer-type specifies the load balancer type. This controller reconciles those service resources with this annotation set to either nlb-ip or external . Tip This annotation specifies the controller used to provision LoadBalancers (as specified in legacy-cloud-provider ). Refer to lb-scheme to specify whether the LoadBalancer is internet-facing or internal. [Deprecated] For type nlb-ip , the controller will provision an NLB with targets registered by IP address. This value is supported for backwards compatibility. For type external , the NLB target type depends on the nlb-target-type annotation. limitations This annotation should not be modified after service creation. Example service.beta.kubernetes.io/aws-load-balancer-type: external service.beta.kubernetes.io/aws-load-balancer-nlb-target-type specifies the target type to configure for NLB. You can choose between instance and ip . instance mode will route traffic to all EC2 instances within the cluster on the NodePort opened for your service. The kube-proxy on the individual worker nodes sets up the forwarding of the traffic from the NodePort to the pods behind the service. service must be of type NodePort or LoadBalancer for instance targets for k8s 1.22 and later if spec.allocateLoadBalancerNodePorts is set to false , NodePort must be allocated manually default value If you configure spec.loadBalancerClass , the controller defaults to instance target type NodePort allocation k8s version 1.22 and later support disabling NodePort allocation by setting the service field spec.allocateLoadBalancerNodePorts to false . If the NodePort is not allocated for a service port, the controller will fail to reconcile an instance mode NLB.
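As a sketch of that constraint (service name and ports hypothetical), a Service that disables NodePort allocation has to use ip targets, since instance mode would have no NodePort to register:
apiVersion: v1
kind: Service
metadata:
  name: echoserver
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
spec:
  allocateLoadBalancerNodePorts: false # requires k8s 1.22+
  selector:
    app: echoserver
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  type: LoadBalancer
  loadBalancerClass: service.k8s.aws/nlb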
ip mode will route traffic directly to the pod IP. In this mode, AWS NLB sends traffic directly to the Kubernetes pods behind the service, eliminating the need for an extra network hop through the worker nodes in the Kubernetes cluster. ip target mode supports pods running on AWS EC2 instances and AWS Fargate. The network plugin must use native AWS VPC networking configuration for pod IP, for example the Amazon VPC CNI plugin . Example service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance service.beta.kubernetes.io/aws-load-balancer-subnets specifies the Availability Zones the NLB will route traffic to. See Network Load Balancers for more details. Tip Subnets are auto-discovered if this annotation is not specified, see Subnet Discovery for further details. You must specify at least one subnet in any of the AZs; either subnetID or subnetName (Name tag on subnets) can be used. limitations Each subnet must be from a different Availability Zone AWS has restrictions on disabling existing subnets for NLB. As a result, you might not be able to edit this annotation once the NLB gets provisioned. Example service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-xxxx, mySubnet service.beta.kubernetes.io/aws-load-balancer-alpn-policy allows you to configure the ALPN policies on the load balancer. supported policies HTTP1Only Negotiate only HTTP/1.*. The ALPN preference list is http/1.1, http/1.0. HTTP2Only Negotiate only HTTP/2. The ALPN preference list is h2. HTTP2Optional Prefer HTTP/1.* over HTTP/2 (which can be useful for HTTP/2 testing). The ALPN preference list is http/1.1, http/1.0, h2. HTTP2Preferred Prefer HTTP/2 over HTTP/1.*. The ALPN preference list is h2, http/1.1, http/1.0. None Do not negotiate ALPN. This is the default. Example service.beta.kubernetes.io/aws-load-balancer-alpn-policy: HTTP2Preferred service.beta.kubernetes.io/aws-load-balancer-target-node-labels specifies which nodes to include in the target group registration for instance target type. Example service.beta.kubernetes.io/aws-load-balancer-target-node-labels: label1=value1, label2=value2 service.beta.kubernetes.io/aws-load-balancer-eip-allocations specifies a list of elastic IP address configurations for an internet-facing NLB. Note This configuration is optional, and you can use it to assign static IP addresses to your NLB You must specify the same number of eip allocations as in the load balancer subnets annotation NLB must be internet-facing Example service.beta.kubernetes.io/aws-load-balancer-eip-allocations: eipalloc-xyz, eipalloc-zzz service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses specifies a list of private IPv4 addresses for an internal NLB. Note NLB must be internal This configuration is optional, and you can use it to assign static IPv4 addresses to your NLB You must specify the same number of private IPv4 addresses as in the load balancer subnets annotation You must specify the IPv4 addresses from the load balancer subnet IPv4 ranges Example service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses: 192.168.10.15, 192.168.32.16 service.beta.kubernetes.io/aws-load-balancer-ipv6-addresses specifies a list of IPv6 addresses for a dualstack NLB.
Note NLB must be dualstack This configuration is optional, and you can use it to assign static IPv6 addresses to your NLB You must specify the same number of IPv6 addresses as in the load balancer subnets annotation You must specify the IPv6 addresses from the load balancer subnet IPv6 ranges Example service.beta.kubernetes.io/aws-load-balancer-ipv6-addresses: 2600:1f13:837:8501::1, 2600:1f13:837:8504::1","title":"Traffic Routing"},{"location":"guide/service/annotations/#traffic-listening","text":"Traffic Listening can be controlled with the following annotations: service.beta.kubernetes.io/aws-load-balancer-ip-address-type specifies the IP address type of NLB. Example service.beta.kubernetes.io/aws-load-balancer-ip-address-type: ipv4","title":"Traffic Listening"},{"location":"guide/service/annotations/#resource-attributes","text":"NLB resource attributes can be controlled via the following annotations: service.beta.kubernetes.io/aws-load-balancer-proxy-protocol specifies whether to enable proxy protocol v2 on the target group. Set to '*' to enable proxy protocol v2. This annotation takes precedence over the annotation service.beta.kubernetes.io/aws-load-balancer-target-group-attributes for proxy protocol v2 configuration. The only valid value for this annotation is * . service.beta.kubernetes.io/aws-load-balancer-target-group-attributes specifies the Target Group Attributes to be configured. Example set the deregistration delay to 120 seconds (available range is 0-3600 seconds) service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: deregistration_delay.timeout_seconds=120 enable source IP affinity service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: stickiness.enabled=true,stickiness.type=source_ip enable proxy protocol version 2 service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: proxy_protocol_v2.enabled=true enable connection termination on deregistration service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: deregistration_delay.connection_termination.enabled=true enable client IP preservation service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: preserve_client_ip.enabled=true disable immediate connection termination for unhealthy targets and configure a 30s draining interval (available range is 0-360000 seconds) service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: target_health_state.unhealthy.connection_termination.enabled=false,target_health_state.unhealthy.draining_interval_seconds=30 service.beta.kubernetes.io/aws-load-balancer-attributes specifies Load Balancer Attributes that should be applied to the NLB. Only attributes defined in the annotation will be updated. To unset any AWS defaults (e.g. disabling access logs after having them enabled once), the values need to be explicitly set to the original values ( access_logs.s3.enabled=false ); omitting them is not sufficient. Custom attributes set in this annotation's config map will be overridden by annotation-specific attributes. For backwards compatibility, existing annotations for the individual load balancer attributes get precedence in case of ties. If deletion_protection.enabled=true is in the annotation, the controller will not be able to delete the NLB during reconciliation. Once the attribute gets edited to deletion_protection.enabled=false during reconciliation, the deployer will force delete the resource. Please note, if deletion protection is enabled outside of the annotation (e.g.
via the AWS console), the controller still deletes the underlying resource. Example enable access log to s3 service.beta.kubernetes.io/aws-load-balancer-attributes: access_logs.s3.enabled=true,access_logs.s3.bucket=my-access-log-bucket,access_logs.s3.prefix=my-app enable NLB deletion protection service.beta.kubernetes.io/aws-load-balancer-attributes: deletion_protection.enabled=true enable cross zone load balancing service.beta.kubernetes.io/aws-load-balancer-attributes: load_balancing.cross_zone.enabled=true enable client availability zone affinity service.beta.kubernetes.io/aws-load-balancer-attributes: dns_record.client_routing_policy=availability_zone_affinity the following annotations are deprecated in the v2.3.0 release in favor of service.beta.kubernetes.io/aws-load-balancer-attributes service.beta.kubernetes.io/aws-load-balancer-access-log-enabled service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled","title":"Resource attributes"},{"location":"guide/service/annotations/#aws-resource-tags","text":"The AWS Load Balancer Controller automatically applies the following tags to the AWS resources it creates (NLB/TargetGroups/Listener/ListenerRule): elbv2.k8s.aws/cluster: ${clusterName} service.k8s.aws/stack: ${stackID} service.k8s.aws/resource: ${resourceID} In addition, you can use annotations to specify additional tags service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags specifies additional tags to apply to the AWS resources. you cannot override the default controller tags mentioned above or the tags specified in the --default-tags controller flag if any of the tags conflicts with the ones configured via the --external-managed-tags controller flag, the controller fails to reconcile the service Example service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: Environment=dev,Team=test","title":"AWS Resource Tags"},{"location":"guide/service/annotations/#health-check","text":"Health check on target groups can be configured with the following annotations: service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol specifies the target group health check protocol. you can specify tcp , http , or https . tcp is the default health check protocol if the service spec.externalTrafficPolicy is Cluster , http if Local . If the service spec.externalTrafficPolicy is Local , do not use tcp for the health check. Only a single protocol per service is supported. Example service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: http service.beta.kubernetes.io/aws-load-balancer-healthcheck-port specifies the TCP port to use for the target group health check. default value if you do not specify the health check port, the default value will be spec.healthCheckNodePort when externalTrafficPolicy=local or traffic-port otherwise. Example set the health check port to traffic-port service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: traffic-port set the health check port to port 80 service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: \"80\" service.beta.kubernetes.io/aws-load-balancer-healthcheck-path specifies the http path for the health check in case of http/https protocol.
Example service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: /healthz service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold specifies the consecutive health check successes required before a target is considered healthy. Example service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: \"3\" service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold specifies the consecutive health check failures before a target gets marked unhealthy. Example service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: \"3\" service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval specifies the interval between consecutive health checks. Example service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: \"10\" service.beta.kubernetes.io/aws-load-balancer-healthcheck-success-codes specifies the http success codes for the health check in case of http/https protocol. Example service.beta.kubernetes.io/aws-load-balancer-healthcheck-success-codes: \"200-399\" service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout specifies the target group health check timeout. The target has to respond within the timeout for a successful health check. Note The controller currently ignores the timeout configuration due to the limitations on the AWS NLB. The default timeout for TCP is 10s and HTTP is 6s. Example service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: \"10\"","title":"Health Check"},{"location":"guide/service/annotations/#tls","text":"You can configure TLS support via the following annotations: service.beta.kubernetes.io/aws-load-balancer-ssl-cert specifies the ARN of one or more certificates managed by the AWS Certificate Manager . The first certificate in the list is the default certificate and the remaining certificates are for the optional certificate list. See Server Certificates for further details. Example service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:xxxxx:certificate/xxxxxxx service.beta.kubernetes.io/aws-load-balancer-ssl-ports specifies the frontend ports with TLS listeners. You must configure at least one certificate for TLS listeners You can specify a list of port names or port values, * does not match any ports If you don't specify this annotation, the controller creates a TLS listener for all the service ports Specify this annotation if you need both TLS and non-TLS listeners on the same load balancer Example service.beta.kubernetes.io/aws-load-balancer-ssl-ports: 443, custom-port service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy specifies the Security Policy for NLB frontend connections, allowing you to control the protocol and ciphers. Example service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: ELBSecurityPolicy-TLS13-1-2-2021-06 service.beta.kubernetes.io/aws-load-balancer-backend-protocol specifies whether to use TLS for the backend traffic between the load balancer and the kubernetes pods. If you specify ssl as the backend protocol, NLB uses TLS connections for the traffic to your kubernetes pods in case of TLS listeners You can specify ssl or tcp (default) Example service.beta.kubernetes.io/aws-load-balancer-backend-protocol: ssl","title":"TLS"},{"location":"guide/service/annotations/#access-control","text":"Load balancer access can be controlled via the following annotations: service.beta.kubernetes.io/load-balancer-source-ranges specifies the CIDRs that are allowed to access the NLB.
Tip we recommend specifying CIDRs in the service spec.loadBalancerSourceRanges instead Default 0.0.0.0/0 will be used if the IPAddressType is \"ipv4\" 0.0.0.0/0 and ::/0 will be used if the IPAddressType is \"dualstack\" The VPC CIDR will be used if service.beta.kubernetes.io/aws-load-balancer-scheme is internal This annotation will be ignored if preserve client IP is not enabled. - preserve client IP is disabled by default for IP targets - preserve client IP is enabled by default for instance targets Preserve client IP has no effect on traffic converted from IPv4 to IPv6 and on traffic converted from IPv6 to IPv4. The source IP of this type of traffic is always the private IP address of the Network Load Balancer. - Clients whose traffic is converted in this way could therefore bypass the specified CIDRs that are allowed to access the NLB. This annotation will be ignored if service.beta.kubernetes.io/aws-load-balancer-security-groups is specified. Example service.beta.kubernetes.io/load-balancer-source-ranges: 10.0.0.0/24 service.beta.kubernetes.io/aws-load-balancer-scheme specifies whether the NLB will be internet-facing or internal. Valid values are internal , internet-facing . If not specified, the default is internal . Example service.beta.kubernetes.io/aws-load-balancer-scheme: \"internet-facing\" service.beta.kubernetes.io/aws-load-balancer-internal specifies whether the NLB will be internet-facing or internal. deprecation note This annotation is deprecated starting with the v2.2.0 release in favor of the new aws-load-balancer-scheme annotation. It will be supported, but in case of ties, the aws-load-balancer-scheme gets precedence. Example service.beta.kubernetes.io/aws-load-balancer-internal: \"true\" service.beta.kubernetes.io/aws-load-balancer-security-groups specifies the frontend securityGroups you want to attach to an NLB. When this annotation is not present, the controller will automatically create one security group. The security group will be attached to the LoadBalancer and allow access from inbound-cidrs to the listen-ports . Also, the securityGroups for target instances/ENIs will be modified to allow inbound traffic from this securityGroup. If you specify this annotation, you need to configure the security groups on your target instances/ENIs to allow inbound traffic from the load balancer. You could also set the manage-backend-security-group-rules if you want the controller to manage the security group rules. Both name and ID of securityGroups are supported. Name matches a Name tag, not the groupName attribute. Example service.beta.kubernetes.io/aws-load-balancer-security-groups: sg-xxxx, nameOfSg1, nameOfSg2 service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules specifies whether the controller should automatically add the ingress rules to the instance/ENI security group. If you disable the automatic management of security group rules for an NLB, you will need to manually add appropriate ingress rules to your EC2 instance or ENI security groups to allow access to the traffic and health check ports. Example service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules: \"false\" service.beta.kubernetes.io/aws-load-balancer-inbound-sg-rules-on-private-link-traffic specifies whether to apply security group rules to traffic sent to the load balancer through AWS PrivateLink.
Example service.beta.kubernetes.io/aws-load-balancer-inbound-sg-rules-on-private-link-traffic: \"off\"","title":"Access control"},{"location":"guide/service/annotations/#legacy-cloud-provider","text":"The AWS Load Balancer Controller manages Kubernetes Services in a way that is compatible with the AWS cloud provider's legacy service controller. For users on v2.5.0+, the AWS LBC provides a mutating webhook for service resources to set the spec.loadBalancerClass field for Services of type LoadBalancer, effectively making the AWS LBC the default controller for Services of type LoadBalancer. Users can disable this feature and revert to using the AWS Cloud Controller Manager as the default service controller by setting the helm chart value enableServiceMutatorWebhook to false with --set enableServiceMutatorWebhook=false . For users on older versions, the annotation service.beta.kubernetes.io/aws-load-balancer-type is used to determine which controller reconciles the service. If the annotation value is nlb-ip or external , recent versions of the legacy cloud provider ignore the Service resource so that the AWS LBC can take over. For all other values of the annotation, the legacy cloud provider will handle the service. Note that this annotation should be specified during service creation and not edited later. Support for the annotation was added to the legacy cloud provider in Kubernetes v1.20, and is backported to v1.18.18+ and v1.19.10+.","title":"Legacy Cloud Provider"},{"location":"guide/service/nlb/","text":"Network Load Balancer \u00b6 The AWS Load Balancer Controller (LBC) supports reconciliation for Kubernetes Service resources of type LoadBalancer by provisioning an AWS Network Load Balancer (NLB) with an instance or ip target type . Secure by default Since the v2.2.0 release, the LBC provisions an internal NLB by default. To create an internet-facing NLB, the following annotation is required on your service: service.beta.kubernetes.io/aws-load-balancer-scheme : internet-facing For backwards compatibility, if the service.beta.kubernetes.io/aws-load-balancer-scheme annotation is absent, an existing NLB's scheme remains unchanged. Prerequisites \u00b6 LBC >= v2.2.0 For Kubernetes Service resources of type LoadBalancer : Kubernetes >= v1.20 or Kubernetes >= v1.19.10 for 1.19 or Kubernetes >= v1.18.18 for 1.18 or EKS >= v1.16 For Kubernetes Service resources of type NodePort : Kubernetes >= v1.16 For ip target type: Pods have native AWS VPC networking configured. For more information, see the Amazon VPC CNI plugin documentation. Configuration \u00b6 By default, Kubernetes Service resources of type LoadBalancer get reconciled by the Kubernetes controller built into the CloudProvider component of the kube-controller-manager or the cloud-controller-manager (also known as the in-tree controller). In order for the LBC to manage the reconciliation of Kubernetes Service resources of type LoadBalancer , you need to offload the reconciliation from the in-tree controller to the LBC, explicitly. With LoadBalancerClass The LBC supports the LoadBalancerClass feature since the v2.4.0 release for Kubernetes v1.22+ clusters. The LoadBalancerClass feature provides a CloudProvider-agnostic way of offloading the reconciliation for Kubernetes Service resources of type LoadBalancer to an external controller. When you specify the spec.loadBalancerClass to be service.k8s.aws/nlb on a Kubernetes Service resource of type LoadBalancer , the LBC takes charge of reconciliation by provisioning an NLB.
Warning If you modify a Service resource with matching spec.loadBalancerClass by changing its type from LoadBalancer to anything else, the controller will clean up the provisioned NLB for that Service. If the spec.loadBalancerClass is set to a loadBalancerClass that isn't recognized by the LBC, it ignores the Service resource, regardless of the service.beta.kubernetes.io/aws-load-balancer-type annotation. Tip By default, the NLB uses the instance target type. You can customize it using the service.beta.kubernetes.io/aws-load-balancer-nlb-target-type annotation . The LBC uses service.k8s.aws/nlb as the default LoadBalancerClass . You can customize it to a different value using the controller flag --load-balancer-class . Example: instance mode apiVersion : v1 kind : Service metadata : name : echoserver annotations : service.beta.kubernetes.io/aws-load-balancer-nlb-target-type : instance spec : selector : app : echoserver ports : - port : 80 targetPort : 8080 protocol : TCP type : LoadBalancer loadBalancerClass : service.k8s.aws/nlb Example: ip mode apiVersion : v1 kind : Service metadata : name : echoserver annotations : service.beta.kubernetes.io/aws-load-balancer-nlb-target-type : ip spec : selector : app : echoserver ports : - port : 80 targetPort : 8080 protocol : TCP type : LoadBalancer loadBalancerClass : service.k8s.aws/nlb With service.beta.kubernetes.io/aws-load-balancer-type annotation The AWS in-tree controller supports an AWS-specific way of offloading the reconciliation for Kubernetes Service resources of type LoadBalancer to an external controller. When you specify the service.beta.kubernetes.io/aws-load-balancer-type annotation to be external on a Kubernetes Service resource of type LoadBalancer , the in-tree controller ignores the Service resource. In addition, if you specify the service.beta.kubernetes.io/aws-load-balancer-nlb-target-type annotation on the Service resource, the LBC takes charge of reconciliation by provisioning an NLB. Warning It's not recommended to modify or add the service.beta.kubernetes.io/aws-load-balancer-type annotation on an existing Service resource. If a change is desired, delete the existing Service resource and create a new one instead of modifying an existing Service. If you modify this annotation on an existing Service resource, you might end up with leaked LBC resources. backwards compatibility for nlb-ip type For backwards compatibility, both the in-tree and LBC controllers support nlb-ip as a value for the service.beta.kubernetes.io/aws-load-balancer-type annotation. The controllers treat it as if you specified both of the following annotations: service.beta.kubernetes.io/aws-load-balancer-type: external service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip Example: instance mode apiVersion : v1 kind : Service metadata : name : echoserver annotations : service.beta.kubernetes.io/aws-load-balancer-type : external service.beta.kubernetes.io/aws-load-balancer-nlb-target-type : instance spec : selector : app : echoserver ports : - port : 80 targetPort : 8080 protocol : TCP type : LoadBalancer Example: ip mode apiVersion : v1 kind : Service metadata : name : echoserver annotations : service.beta.kubernetes.io/aws-load-balancer-type : external service.beta.kubernetes.io/aws-load-balancer-nlb-target-type : ip spec : selector : app : echoserver ports : - port : 80 targetPort : 8080 protocol : TCP type : LoadBalancer Protocols \u00b6 The LBC supports both TCP and UDP protocols.
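For instance, a minimal sketch (name, selector, and ports hypothetical) of a UDP Service that the LBC would expose through an NLB UDP listener:
apiVersion: v1
kind: Service
metadata:
  name: dns-udp
spec:
  selector:
    app: dns
  ports:
    - port: 53
      targetPort: 5353
      protocol: UDP
  type: LoadBalancer
  loadBalancerClass: service.k8s.aws/nlb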
The controller also configures TLS termination on your NLB if you configure the Service with a certificate annotation. In the case of TCP, an NLB with IP targets doesn't pass the client source IP address, unless you specifically configure it to do so using target group attributes. Your application pods might not see the actual client IP address, even if the NLB passes it along, for example if you're using instance mode with externalTrafficPolicy set to Cluster . In such cases, you can configure NLB proxy protocol v2 using an annotation if you need visibility into the client source IP address on your application pods. To enable proxy protocol v2, apply the following annotation to your Service: service.beta.kubernetes.io/aws-load-balancer-proxy-protocol : \"*\" If you enable proxy protocol v2, NLB health checks with HTTP/HTTPS only work if the health check port supports proxy protocol v2. Due to this behavior, you shouldn't configure proxy protocol v2 with NLB instance mode and externalTrafficPolicy set to Local . Subnet tagging requirements \u00b6 See Subnet Discovery for details on configuring Elastic Load Balancing for public or private placement. Security group \u00b6 From v2.6.0, the AWS LBC creates and attaches frontend and backend security groups to NLB by default. For more information please see the security groups documentation . In older versions, the controller by default adds inbound rules to the worker node security groups, to allow inbound traffic from an NLB. disable worker node security group rule management You can disable the worker node security group rule management using an annotation . Worker node security groups selection \u00b6 The controller automatically selects the worker node security groups that it modifies to allow inbound traffic using the following rules: For instance mode, the security group of each backend worker node's primary elastic network interface (ENI) is selected. For ip mode, the security group of each backend pod's ENI is selected. Multiple security groups on an ENI If there are multiple security groups attached to an ENI, the controller expects only one security group tagged with the following tags: Key Value kubernetes.io/cluster/${cluster-name} owned or shared ${cluster-name} is the name of the Kubernetes cluster. If it is possible for multiple security groups with the tag kubernetes.io/cluster/${cluster-name} to be on a target ENI, you may use the --service-target-eni-security-group-tags flag to specify additional tags that must also match in order for a security group to be used. Worker node security groups rules \u00b6 When client IP preservation is enabled Rule Protocol Port(s) IpRange(s) Client Traffic spec.ports[*].protocol spec.ports[*].port Traffic Source CIDRs Health Check Traffic TCP Health Check Ports NLB Subnet CIDRs When client IP preservation is disabled Rule Protocol Port(s) IpRange(s) Client Traffic spec.ports[*].protocol spec.ports[*].port NLB Subnet CIDRs Health Check Traffic TCP Health Check Ports NLB Subnet CIDRs","title":"Network Load Balancer"},{"location":"guide/service/nlb/#network-load-balancer","text":"The AWS Load Balancer Controller (LBC) supports reconciliation for Kubernetes Service resources of type LoadBalancer by provisioning an AWS Network Load Balancer (NLB) with an instance or ip target type . Secure by default Since the v2.2.0 release, the LBC provisions an internal NLB by default.
To create an internet-facing NLB, the following annotation is required on your service: service.beta.kubernetes.io/aws-load-balancer-scheme : internet-facing For backwards compatibility, if the service.beta.kubernetes.io/aws-load-balancer-scheme annotation is absent, an existing NLB's scheme remains unchanged.","title":"Network Load Balancer"},{"location":"guide/service/nlb/#prerequisites","text":"LBC >= v2.2.0 For Kubernetes Service resources of type LoadBalancer : Kubernetes >= v1.20 or Kubernetes >= v1.19.10 for 1.19 or Kubernetes >= v1.18.18 for 1.18 or EKS >= v1.16 For Kubernetes Service resources of type NodePort : Kubernetes >= v1.16 For ip target type: Pods have native AWS VPC networking configured. For more information, see the Amazon VPC CNI plugin documentation.","title":"Prerequisites"},{"location":"guide/service/nlb/#configuration","text":"By default, Kubernetes Service resources of type LoadBalancer get reconciled by the Kubernetes controller built into the CloudProvider component of the kube-controller-manager or the cloud-controller-manager (also known as the in-tree controller). In order for the LBC to manage the reconciliation of Kubernetes Service resources of type LoadBalancer , you need to offload the reconciliation from the in-tree controller to the LBC, explicitly. With LoadBalancerClass The LBC supports the LoadBalancerClass feature since the v2.4.0 release for Kubernetes v1.22+ clusters. The LoadBalancerClass feature provides a CloudProvider-agnostic way of offloading the reconciliation for Kubernetes Service resources of type LoadBalancer to an external controller. When you specify the spec.loadBalancerClass to be service.k8s.aws/nlb on a Kubernetes Service resource of type LoadBalancer , the LBC takes charge of reconciliation by provisioning an NLB. Warning If you modify a Service resource with matching spec.loadBalancerClass by changing its type from LoadBalancer to anything else, the controller will clean up the provisioned NLB for that Service. If the spec.loadBalancerClass is set to a loadBalancerClass that isn't recognized by the LBC, it ignores the Service resource, regardless of the service.beta.kubernetes.io/aws-load-balancer-type annotation. Tip By default, the NLB uses the instance target type. You can customize it using the service.beta.kubernetes.io/aws-load-balancer-nlb-target-type annotation . The LBC uses service.k8s.aws/nlb as the default LoadBalancerClass . You can customize it to a different value using the controller flag --load-balancer-class . Example: instance mode apiVersion : v1 kind : Service metadata : name : echoserver annotations : service.beta.kubernetes.io/aws-load-balancer-nlb-target-type : instance spec : selector : app : echoserver ports : - port : 80 targetPort : 8080 protocol : TCP type : LoadBalancer loadBalancerClass : service.k8s.aws/nlb Example: ip mode apiVersion : v1 kind : Service metadata : name : echoserver annotations : service.beta.kubernetes.io/aws-load-balancer-nlb-target-type : ip spec : selector : app : echoserver ports : - port : 80 targetPort : 8080 protocol : TCP type : LoadBalancer loadBalancerClass : service.k8s.aws/nlb With service.beta.kubernetes.io/aws-load-balancer-type annotation The AWS in-tree controller supports an AWS-specific way of offloading the reconciliation for Kubernetes Service resources of type LoadBalancer to an external controller.
When you specify the service.beta.kubernetes.io/aws-load-balancer-type annotation to be external on a Kubernetes Service resource of type LoadBalancer , the in-tree controller ignores the Service resource. In addition, if you specify the service.beta.kubernetes.io/aws-load-balancer-nlb-target-type annotation on the Service resource, the LBC takes charge of reconciliation by provisioning an NLB. Warning It's not recommended to modify or add the service.beta.kubernetes.io/aws-load-balancer-type annotation on an existing Service resource. If a change is desired, delete the existing Service resource and create a new one instead of modifying an existing Service. If you modify this annotation on an existing Service resource, you might end up with leaked LBC resources. backwards compatibility for nlb-ip type For backwards compatibility, both the in-tree and LBC controllers support nlb-ip as a value for the service.beta.kubernetes.io/aws-load-balancer-type annotation. The controllers treat it as if you specified both of the following annotations: service.beta.kubernetes.io/aws-load-balancer-type: external service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip Example: instance mode apiVersion : v1 kind : Service metadata : name : echoserver annotations : service.beta.kubernetes.io/aws-load-balancer-type : external service.beta.kubernetes.io/aws-load-balancer-nlb-target-type : instance spec : selector : app : echoserver ports : - port : 80 targetPort : 8080 protocol : TCP type : LoadBalancer Example: ip mode apiVersion : v1 kind : Service metadata : name : echoserver annotations : service.beta.kubernetes.io/aws-load-balancer-type : external service.beta.kubernetes.io/aws-load-balancer-nlb-target-type : ip spec : selector : app : echoserver ports : - port : 80 targetPort : 8080 protocol : TCP type : LoadBalancer","title":"Configuration"},{"location":"guide/service/nlb/#protocols","text":"The LBC supports both TCP and UDP protocols. The controller also configures TLS termination on your NLB if you configure the Service with a certificate annotation. In the case of TCP, an NLB with IP targets doesn't pass the client source IP address, unless you specifically configure it to do so using target group attributes. Your application pods might not see the actual client IP address, even if the NLB passes it along, for example if you're using instance mode with externalTrafficPolicy set to Cluster . In such cases, you can configure NLB proxy protocol v2 using an annotation if you need visibility into the client source IP address on your application pods. To enable proxy protocol v2, apply the following annotation to your Service: service.beta.kubernetes.io/aws-load-balancer-proxy-protocol : \"*\" If you enable proxy protocol v2, NLB health checks with HTTP/HTTPS only work if the health check port supports proxy protocol v2. Due to this behavior, you shouldn't configure proxy protocol v2 with NLB instance mode and externalTrafficPolicy set to Local .","title":"Protocols"},{"location":"guide/service/nlb/#subnet-tagging-requirements","text":"See Subnet Discovery for details on configuring Elastic Load Balancing for public or private placement.","title":"Subnet tagging requirements"},{"location":"guide/service/nlb/#security-group","text":"From v2.6.0, the AWS LBC creates and attaches frontend and backend security groups to NLB by default.
For more information please see the security groups documentation . In older versions, the controller by default adds inbound rules to the worker node security groups, to allow inbound traffic from an NLB. disable worker node security group rule management You can disable the worker node security group rule management using an annotation .","title":"Security group"},{"location":"guide/service/nlb/#worker-node-security-groups-selection","text":"The controller automatically selects the worker node security groups that it modifies to allow inbound traffic using the following rules: For instance mode, the security group of each backend worker node's primary elastic network interface (ENI) is selected. For ip mode, the security group of each backend pod's ENI is selected. Multiple security groups on an ENI If there are multiple security groups attached to an ENI, the controller expects only one security group tagged with the following tags: Key Value kubernetes.io/cluster/${cluster-name} owned or shared ${cluster-name} is the name of the Kubernetes cluster. If it is possible for multiple security groups with the tag kubernetes.io/cluster/${cluster-name} to be on a target ENI, you may use the --service-target-eni-security-group-tags flag to specify additional tags that must also match in order for a security group to be used.","title":"Worker node security groups selection"},{"location":"guide/service/nlb/#worker-node-security-groups-rules","text":"When client IP preservation is enabled Rule Protocol Port(s) IpRange(s) Client Traffic spec.ports[*].protocol spec.ports[*].port Traffic Source CIDRs Health Check Traffic TCP Health Check Ports NLB Subnet CIDRs When client IP preservation is disabled Rule Protocol Port(s) IpRange(s) Client Traffic spec.ports[*].protocol spec.ports[*].port NLB Subnet CIDRs Health Check Traffic TCP Health Check Ports NLB Subnet CIDRs","title":"Worker node security groups rules"},{"location":"guide/targetgroupbinding/spec/","text":"Packages: elbv2.k8s.aws/v1beta1 elbv2.k8s.aws/v1beta1 Package v1beta1 contains API Schema definitions for the elbv2 v1beta1 API group Resource Types: TargetGroupBinding TargetGroupBinding TargetGroupBinding is the Schema for the TargetGroupBinding API Field Description apiVersion string elbv2.k8s.aws/v1beta1 kind string TargetGroupBinding metadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. spec TargetGroupBindingSpec targetGroupARN string targetGroupARN is the Amazon Resource Name (ARN) for the TargetGroup. targetType TargetType (Optional) targetType is the TargetType of TargetGroup. If unspecified, it will be automatically inferred. serviceRef ServiceReference serviceRef is a reference to a Kubernetes Service and ServicePort. networking TargetGroupBindingNetworking (Optional) networking defines the networking rules to allow ELBV2 LoadBalancer to access targets in TargetGroup. status TargetGroupBindingStatus IPBlock ( Appears on: NetworkingPeer ) IPBlock defines source/destination IPBlock in networking rules. Field Description cidr string CIDR is the network CIDR. Both IPv4 and IPv6 CIDRs are accepted. NetworkingIngressRule ( Appears on: TargetGroupBindingNetworking ) NetworkingIngressRule defines a particular set of traffic that is allowed to access TargetGroup\u2019s targets. Field Description from []NetworkingPeer List of peers which should be able to access the targets in TargetGroup. At least one NetworkingPeer should be specified.
ports []NetworkingPort List of ports which should be made accessible on the targets in TargetGroup. If ports is empty or unspecified, it defaults to all ports with TCP. NetworkingPeer ( Appears on: NetworkingIngressRule ) NetworkingPeer defines the source/destination peer for networking rules. Field Description ipBlock IPBlock (Optional) IPBlock defines an IPBlock peer. If specified, none of the other fields can be set. securityGroup SecurityGroup (Optional) SecurityGroup defines a SecurityGroup peer. If specified, none of the other fields can be set. NetworkingPort ( Appears on: NetworkingIngressRule ) NetworkingPort defines the port and protocol for networking rules. Field Description protocol NetworkingProtocol The protocol which traffic must match. If protocol is unspecified, it defaults to TCP. port k8s.io/apimachinery/pkg/util/intstr.IntOrString (Optional) The port which traffic must match. When NodePort endpoints (instance TargetType) are used, this must be a numerical port. When Port endpoints (ip TargetType) are used, this can be either a numerical or named port on pods. If port is unspecified, it defaults to all ports. NetworkingProtocol ( string alias) ( Appears on: NetworkingPort ) NetworkingProtocol defines the protocol for networking rules. SecurityGroup ( Appears on: NetworkingPeer ) SecurityGroup defines a reference to an AWS EC2 SecurityGroup. Field Description groupID string GroupID is the EC2 SecurityGroupID. ServiceReference ( Appears on: TargetGroupBindingSpec ) ServiceReference defines a reference to a Kubernetes Service and its ServicePort. Field Description name string Name is the name of the Service. port k8s.io/apimachinery/pkg/util/intstr.IntOrString Port is the port of the ServicePort. TargetGroupBindingNetworking ( Appears on: TargetGroupBindingSpec ) TargetGroupBindingNetworking defines the networking rules to allow ELBV2 LoadBalancer to access targets in TargetGroup. Field Description ingress []NetworkingIngressRule (Optional) List of ingress rules to allow ELBV2 LoadBalancer to access targets in TargetGroup. TargetGroupBindingSpec ( Appears on: TargetGroupBinding ) TargetGroupBindingSpec defines the desired state of TargetGroupBinding Field Description targetGroupARN string targetGroupARN is the Amazon Resource Name (ARN) for the TargetGroup. targetType TargetType (Optional) targetType is the TargetType of TargetGroup. If unspecified, it will be automatically inferred. serviceRef ServiceReference serviceRef is a reference to a Kubernetes Service and ServicePort. networking TargetGroupBindingNetworking (Optional) networking defines the networking rules to allow ELBV2 LoadBalancer to access targets in TargetGroup. TargetGroupBindingStatus ( Appears on: TargetGroupBinding ) TargetGroupBindingStatus defines the observed state of TargetGroupBinding Field Description observedGeneration int64 (Optional) The generation observed by the TargetGroupBinding controller. TargetType ( string alias) ( Appears on: TargetGroupBindingSpec ) TargetType is the targetType of your ELBV2 TargetGroup. With instance TargetType, nodes with the nodePort for your service will be registered as targets. With ip TargetType, pods with the containerPort for your service will be registered as targets. Generated with gen-crd-api-reference-docs on git commit 21418f44 .","title":"Specification"},{"location":"guide/targetgroupbinding/targetgroupbinding/","text":"TargetGroupBinding \u00b6 TargetGroupBinding is a custom resource (CR) that can expose your pods using an existing ALB TargetGroup or NLB TargetGroup .
This will allow you to provision the load balancer infrastructure completely outside of Kubernetes but still manage the targets with a Kubernetes Service. usage to support Ingress and Service The AWS LoadBalancer controller internally uses TargetGroupBinding to support the functionality for Ingress and Service resources as well. It automatically creates TargetGroupBinding in the same namespace as the Service used. You can view all TargetGroupBindings in a namespace by kubectl get targetgroupbindings -n <namespace> -o wide TargetType \u00b6 TargetGroupBinding CR supports TargetGroups of either instance or ip TargetType. If TargetType is not explicitly specified, a mutating webhook will automatically call the AWS API to find the TargetType for your TargetGroup and set it to the correct value. Sample YAML \u00b6 apiVersion : elbv2.k8s.aws/v1beta1 kind : TargetGroupBinding metadata : name : my-tgb spec : serviceRef : name : awesome-service # route traffic to the awesome-service port : 80 targetGroupARN : <arn-to-targetGroup> VpcID \u00b6 TargetGroupBinding CR supports the explicit definition of the Virtual Private Cloud (VPC) of your TargetGroup. If the VpcID is not explicitly specified, a mutating webhook will automatically call the AWS API to find the VpcID for your TargetGroup and set it to the correct value. Sample YAML \u00b6 apiVersion : elbv2.k8s.aws/v1beta1 kind : TargetGroupBinding metadata : name : my-tgb spec : serviceRef : name : awesome-service # route traffic to the awesome-service port : 80 targetGroupARN : <arn-to-targetGroup> vpcID : <vpcID> NodeSelector \u00b6 Default Node Selector \u00b6 For TargetType: instance , all nodes of a cluster that match the following selector are added to the target group by default: matchExpressions : - key : node-role.kubernetes.io/master operator : DoesNotExist - key : node.kubernetes.io/exclude-from-external-load-balancers operator : DoesNotExist - key : alpha.service-controller.kubernetes.io/exclude-balancer operator : DoesNotExist - key : eks.amazonaws.com/compute-type operator : NotIn values : [ \"fargate\" ] Custom Node Selector \u00b6 TargetGroupBinding CR supports NodeSelector which is a LabelSelector . This will select nodes to attach to the instance TargetType target group and is merged with the default node selector above . apiVersion : elbv2.k8s.aws/v1beta1 kind : TargetGroupBinding metadata : name : my-tgb spec : nodeSelector : matchLabels : foo : bar ... Reference \u00b6 See the reference for TargetGroupBinding CR","title":"TargetGroupBinding"},{"location":"guide/targetgroupbinding/targetgroupbinding/#targetgroupbinding","text":"TargetGroupBinding is a custom resource (CR) that can expose your pods using an existing ALB TargetGroup or NLB TargetGroup . This will allow you to provision the load balancer infrastructure completely outside of Kubernetes but still manage the targets with a Kubernetes Service. usage to support Ingress and Service The AWS LoadBalancer controller internally uses TargetGroupBinding to support the functionality for Ingress and Service resources as well. It automatically creates TargetGroupBinding in the same namespace as the Service used. You can view all TargetGroupBindings in a namespace by kubectl get targetgroupbindings -n <namespace> -o wide","title":"TargetGroupBinding"},{"location":"guide/targetgroupbinding/targetgroupbinding/#targettype","text":"TargetGroupBinding CR supports TargetGroups of either instance or ip TargetType.
If TargetType is not explicitly specified, a mutating webhook will automatically call the AWS API to find the TargetType for your TargetGroup and set it to the correct value.","title":"TargetType"},{"location":"guide/targetgroupbinding/targetgroupbinding/#sample-yaml","text":"apiVersion : elbv2.k8s.aws/v1beta1 kind : TargetGroupBinding metadata : name : my-tgb spec : serviceRef : name : awesome-service # route traffic to the awesome-service port : 80 targetGroupARN : <arn-to-targetGroup>","title":"Sample YAML"},{"location":"guide/targetgroupbinding/targetgroupbinding/#vpcid","text":"TargetGroupBinding CR supports the explicit definition of the Virtual Private Cloud (VPC) of your TargetGroup. If the VpcID is not explicitly specified, a mutating webhook will automatically call the AWS API to find the VpcID for your TargetGroup and set it to the correct value.","title":"VpcID"},{"location":"guide/targetgroupbinding/targetgroupbinding/#sample-yaml_1","text":"apiVersion : elbv2.k8s.aws/v1beta1 kind : TargetGroupBinding metadata : name : my-tgb spec : serviceRef : name : awesome-service # route traffic to the awesome-service port : 80 targetGroupARN : <arn-to-targetGroup> vpcID : <vpcID>","title":"Sample YAML"},{"location":"guide/targetgroupbinding/targetgroupbinding/#nodeselector","text":"","title":"NodeSelector"},{"location":"guide/targetgroupbinding/targetgroupbinding/#default-node-selector","text":"For TargetType: instance , all nodes of a cluster that match the following selector are added to the target group by default: matchExpressions : - key : node-role.kubernetes.io/master operator : DoesNotExist - key : node.kubernetes.io/exclude-from-external-load-balancers operator : DoesNotExist - key : alpha.service-controller.kubernetes.io/exclude-balancer operator : DoesNotExist - key : eks.amazonaws.com/compute-type operator : NotIn values : [ \"fargate\" ]","title":"Default Node Selector"},{"location":"guide/targetgroupbinding/targetgroupbinding/#custom-node-selector","text":"TargetGroupBinding CR supports NodeSelector which is a LabelSelector . This will select nodes to attach to the instance TargetType target group and is merged with the default node selector above . apiVersion : elbv2.k8s.aws/v1beta1 kind : TargetGroupBinding metadata : name : my-tgb spec : nodeSelector : matchLabels : foo : bar ...","title":"Custom Node Selector"},{"location":"guide/targetgroupbinding/targetgroupbinding/#reference","text":"See the reference for TargetGroupBinding CR","title":"Reference"},{"location":"guide/tasks/cognito_authentication/","text":"Setup Cognito/AWS Load Balancer Controller \u00b6 This document describes how to install the AWS Load Balancer Controller with AWS Cognito integration in a minimal capacity; other options and/or configurations may be required for production, on an app-to-app basis. Assumptions \u00b6 The following assumptions are observed regarding this procedure. ExternalDNS is installed in the cluster and will provide a custom URL for your ALB. To set up ExternalDNS, refer to the install instructions . Cognito Configuration \u00b6 Configure Cognito for use with the AWS Load Balancer Controller using the following links with the specified caveats. Create Cognito user pool Configure application integration On step 11.c for the Callback URL enter https://<your-domain>/oauth2/idpresponse . On step 11.d for Allowed OAuth Flows select authorization code grant and for Allowed OAuth Scopes select openid . AWS Load Balancer Controller Setup \u00b6 Install the AWS Load Balancer Controller using the install instructions with the following caveats.
When setting up IAM Role Permissions, add the cognito-idp:DescribeUserPoolClient permission to the example policy. Deploying an Ingress \u00b6 Using the cognito-ingress-template you can fill in the variables to create an ALB ingress connected to your Cognito user pool for authentication.","title":"Cognito Authentication"},{"location":"guide/tasks/cognito_authentication/#setup-cognitoaws-load-balancer-controller","text":"This document describes how to install the AWS Load Balancer Controller with AWS Cognito integration in a minimal capacity; other options and/or configurations may be required for production, on an app-to-app basis.","title":"Setup Cognito/AWS Load Balancer Controller"},{"location":"guide/tasks/cognito_authentication/#assumptions","text":"The following assumptions are observed regarding this procedure. ExternalDNS is installed in the cluster and will provide a custom URL for your ALB. To set up ExternalDNS, refer to the install instructions .","title":"Assumptions"},{"location":"guide/tasks/cognito_authentication/#cognito-configuration","text":"Configure Cognito for use with the AWS Load Balancer Controller using the following links with the specified caveats. Create Cognito user pool Configure application integration On step 11.c for the Callback URL enter https://<your-domain>/oauth2/idpresponse . On step 11.d for Allowed OAuth Flows select authorization code grant and for Allowed OAuth Scopes select openid .","title":"Cognito Configuration"},{"location":"guide/tasks/cognito_authentication/#aws-load-balancer-controller-setup","text":"Install the AWS Load Balancer Controller using the install instructions with the following caveats. When setting up IAM Role Permissions, add the cognito-idp:DescribeUserPoolClient permission to the example policy.","title":"AWS Load Balancer Controller Setup"},{"location":"guide/tasks/cognito_authentication/#deploying-an-ingress","text":"Using the cognito-ingress-template you can fill in the variables to create an ALB ingress connected to your Cognito user pool for authentication.","title":"Deploying an Ingress"},{"location":"guide/tasks/migrate_legacy_apps/","text":"Migrating From Legacy Apps with Manually Configured Target Groups \u00b6 Many organizations are decomposing old legacy apps into smaller services and components. During the transition they may be running a hybrid ecosystem with some parts of the app running on EC2 instances, some in Kubernetes microservices, and possibly even some in serverless environments like Lambda. The existing clients of the application expect all endpoints under one DNS entry and it's desirable to be able to route traffic at the ALB to services running outside the Kubernetes cluster. The actions annotation allows the definition of a forward rule to a previously configured target group.
,{"location":"guide/tasks/migrate_legacy_apps/","text":"Migrating From Legacy Apps with Manually Configured Target Groups \u00b6 Many organizations are decomposing old legacy apps into smaller services and components. During the transition, they may be running a hybrid ecosystem with some parts of the app running in EC2 instances, some in Kubernetes microservices, and possibly even some in serverless environments like Lambda. The existing clients of the application expect all endpoints under one DNS entry, and it's desirable to be able to route traffic at the ALB to services running outside the Kubernetes cluster. The actions annotation allows the definition of a forward rule to a previously configured target group. Learn more about the actions annotation at alb.ingress.kubernetes.io/actions.${action-name} Example Ingress Manifest \u00b6 apiVersion : networking.k8s.io/v1 kind : Ingress metadata : namespace : testcase name : echoserver annotations : alb.ingress.kubernetes.io/actions.legacy-app : '{\"Type\": \"forward\", \"TargetGroupArn\": \"legacy-tg-arn\"}' spec : ingressClassName : alb rules : - http : paths : - path : /v1/endpoints pathType : Exact backend : service : name : legacy-app port : name : use-annotation - path : /normal-path pathType : Exact backend : service : name : echoserver port : number : 80 Note The TargetGroupArn must be set, and the user is responsible for configuring the target group in AWS before applying the forward rule.","title":"Migrating From Legacy Apps with Manually Configured Target Groups"},{"location":"guide/tasks/migrate_legacy_apps/#migrating-from-legacy-apps-with-manually-configured-target-groups","text":"Many organizations are decomposing old legacy apps into smaller services and components. During the transition, they may be running a hybrid ecosystem with some parts of the app running in EC2 instances, some in Kubernetes microservices, and possibly even some in serverless environments like Lambda. The existing clients of the application expect all endpoints under one DNS entry, and it's desirable to be able to route traffic at the ALB to services running outside the Kubernetes cluster. The actions annotation allows the definition of a forward rule to a previously configured target group. Learn more about the actions annotation at alb.ingress.kubernetes.io/actions.${action-name}","title":"Migrating From Legacy Apps with Manually Configured Target Groups"},{"location":"guide/tasks/migrate_legacy_apps/#example-ingress-manifest","text":"apiVersion : networking.k8s.io/v1 kind : Ingress metadata : namespace : testcase name : echoserver annotations : alb.ingress.kubernetes.io/actions.legacy-app : '{\"Type\": \"forward\", \"TargetGroupArn\": \"legacy-tg-arn\"}' spec : ingressClassName : alb rules : - http : paths : - path : /v1/endpoints pathType : Exact backend : service : name : legacy-app port : name : use-annotation - path : /normal-path pathType : Exact backend : service : name : echoserver port : number : 80 Note The TargetGroupArn must be set, and the user is responsible for configuring the target group in AWS before applying the forward rule.","title":"Example Ingress Manifest"},{"location":"guide/tasks/ssl_redirect/","text":"Redirect Traffic from HTTP to HTTPS \u00b6 You can use the alb.ingress.kubernetes.io/ssl-redirect annotation to set up an Ingress that redirects HTTP traffic to HTTPS. Example Ingress Manifest \u00b6 apiVersion : networking.k8s.io/v1 kind : Ingress metadata : namespace : default name : ingress annotations : alb.ingress.kubernetes.io/certificate-arn : arn:aws:acm:us-west-2:xxxx:certificate/xxxxxx alb.ingress.kubernetes.io/listen-ports : '[{\"HTTP\": 80}, {\"HTTPS\":443}]' alb.ingress.kubernetes.io/ssl-redirect : '443' spec : ingressClassName : alb rules : - http : paths : - path : /users/* pathType : ImplementationSpecific backend : service : name : user-service port : number : 80 - path : /* pathType : ImplementationSpecific backend : service : name : default-service port : number : 80 Note The alb.ingress.kubernetes.io/listen-ports annotation must at least include [{\"HTTP\": 80}, {\"HTTPS\":443}] to listen on 80 and 443. 
The alb.ingress.kubernetes.io/certificate-arn annotation must be set to allow listening for HTTPS traffic. The ssl-redirect port must appear in the listen-ports annotation, and must be an HTTPS port. How it works \u00b6 If you enable SSL redirection, the controller configures each HTTP listener with a default action to redirect to HTTPS. The controller does not add any other rules to the HTTP listener. For the above example, the HTTP listener on port 80 will have a single default rule to redirect traffic to HTTPS on port 443.","title":"SSL Redirect"},{"location":"guide/tasks/ssl_redirect/#redirect-traffic-from-http-to-https","text":"You can use the alb.ingress.kubernetes.io/ssl-redirect annotation to set up an Ingress that redirects HTTP traffic to HTTPS.","title":"Redirect Traffic from HTTP to HTTPS"},{"location":"guide/tasks/ssl_redirect/#example-ingress-manifest","text":"apiVersion : networking.k8s.io/v1 kind : Ingress metadata : namespace : default name : ingress annotations : alb.ingress.kubernetes.io/certificate-arn : arn:aws:acm:us-west-2:xxxx:certificate/xxxxxx alb.ingress.kubernetes.io/listen-ports : '[{\"HTTP\": 80}, {\"HTTPS\":443}]' alb.ingress.kubernetes.io/ssl-redirect : '443' spec : ingressClassName : alb rules : - http : paths : - path : /users/* pathType : ImplementationSpecific backend : service : name : user-service port : number : 80 - path : /* pathType : ImplementationSpecific backend : service : name : default-service port : number : 80 Note The alb.ingress.kubernetes.io/listen-ports annotation must at least include [{\"HTTP\": 80}, {\"HTTPS\":443}] to listen on 80 and 443. The alb.ingress.kubernetes.io/certificate-arn annotation must be set to allow listening for HTTPS traffic. The ssl-redirect port must appear in the listen-ports annotation, and must be an HTTPS port.","title":"Example Ingress Manifest"},{"location":"guide/tasks/ssl_redirect/#how-it-works","text":"If you enable SSL redirection, the controller configures each HTTP listener with a default action to redirect to HTTPS. The controller does not add any other rules to the HTTP listener. For the above example, the HTTP listener on port 80 will have a single default rule to redirect traffic to HTTPS on port 443.","title":"How it works"},{"location":"guide/use_cases/blue_green/","text":"Split Traffic \u00b6 You can configure an Application Load Balancer (ALB) to split traffic from the same listener across multiple target groups using rules. This facilitates A/B testing, blue/green deployment, and traffic management without additional tools. The Load Balancer Controller (LBC) supports defining this behavior alongside the standard configuration of an Ingress resource. More specifically, the ALB supports weighted target groups and advanced request routing. Weighted target group Multiple target groups can be attached to the same forward action of a listener rule, with a weight specified for each group. This allows developers to control how traffic is distributed across multiple versions of their application. For example, when you define a rule having two target groups with weights of 8 and 2, the load balancer will route 80 percent of the traffic to the first target group and 20 percent to the other. Advanced request routing In addition to the weighted target group, AWS announced the advanced request routing feature in 2019. Advanced request routing gives developers the ability to write rules (and route traffic) based on standard and custom HTTP headers and methods, the request path, the query string, and the source IP address. This new feature simplifies the application architecture by eliminating the need for a proxy fleet for routing, blocks unwanted traffic at the load balancer, and enables the implementation of A/B testing. 
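As a sketch of the weighted target group behavior described above (the 8-to-2 split; the service names app-v1 and app-v2 and the action name weighted-routing are placeholders), the corresponding actions annotation could look like the following, with the Ingress rule's backend pointing at the service name weighted-routing on port name use-annotation:

```yaml
annotations:
  alb.ingress.kubernetes.io/actions.weighted-routing: |
    {
      "type":"forward",
      "forwardConfig":{
        "targetGroups":[
          {"serviceName":"app-v1","servicePort":"80","weight":8},
          {"serviceName":"app-v2","servicePort":"80","weight":2}
        ]
      }
    }
```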
Overview \u00b6 The ALB is configured to split traffic using annotations on the Ingress resources. More specifically, the ingress annotation alb.ingress.kubernetes.io/actions.${service-name} configures custom actions on the listener. The body of the annotation is a JSON document that identifies an action type and configures it. The supported actions are redirect , forward , and fixed-response . With the forward action, multiple target groups with different weights can be defined in the annotation. The LBC provisions the target groups and configures the listener rules as per the annotation to direct the traffic. Importantly: * The action-name in the annotation must match the service name in the Ingress rules. For example, the annotation alb.ingress.kubernetes.io/actions.blue-green matches the service name blue-green referenced in the Ingress rules. * The servicePort of the service in the Ingress rules must be use-annotation . Example \u00b6 The following Ingress resource configures the ALB to forward all traffic to the hello-kubernetes-v1 service (weight: 100 vs. 0). Note that the annotation name includes blue-green , which matches the service name referenced in the Ingress rules. The annotation reference includes further examples of the JSON configuration for different actions. apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: \"hello-kubernetes\" namespace: \"hello-kubernetes\" annotations: kubernetes.io/ingress.class: alb alb.ingress.kubernetes.io/scheme: internet-facing alb.ingress.kubernetes.io/target-type: ip alb.ingress.kubernetes.io/actions.blue-green: | { \"type\":\"forward\", \"forwardConfig\":{ \"targetGroups\":[ { \"serviceName\":\"hello-kubernetes-v1\", \"servicePort\":\"80\", \"weight\":100 }, { \"serviceName\":\"hello-kubernetes-v2\", \"servicePort\":\"80\", \"weight\":0 } ] } } labels: app: hello-kubernetes spec: rules: - http: paths: - path: / pathType: Prefix backend: service: name: blue-green port: name: use-annotation","title":"Blue/Green"},{"location":"guide/use_cases/blue_green/#split-traffic","text":"You can configure an Application Load Balancer (ALB) to split traffic from the same listener across multiple target groups using rules. This facilitates A/B testing, blue/green deployment, and traffic management without additional tools. The Load Balancer Controller (LBC) supports defining this behavior alongside the standard configuration of an Ingress resource. More specifically, the ALB supports weighted target groups and advanced request routing. Weighted target group Multiple target groups can be attached to the same forward action of a listener rule, with a weight specified for each group. This allows developers to control how traffic is distributed across multiple versions of their application. For example, when you define a rule having two target groups with weights of 8 and 2, the load balancer will route 80 percent of the traffic to the first target group and 20 percent to the other. Advanced request routing In addition to the weighted target group, AWS announced the advanced request routing feature in 2019. Advanced request routing gives developers the ability to write rules (and route traffic) based on standard and custom HTTP headers and methods, the request path, the query string, and the source IP address. This new feature simplifies the application architecture by eliminating the need for a proxy fleet for routing, blocks unwanted traffic at the load balancer, and enables the implementation of A/B testing.","title":"Split Traffic"}
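To illustrate the advanced request routing mentioned above: the controller supports an alb.ingress.kubernetes.io/conditions.${action-name} annotation that pairs with an actions annotation of the same name. A hedged sketch of routing on a custom header (the header name, value, and service name are placeholders; consult the annotation reference for the exact condition schema):

```yaml
annotations:
  alb.ingress.kubernetes.io/actions.header-routing: |
    {"type":"forward","forwardConfig":{"targetGroups":[{"serviceName":"app-canary","servicePort":"80"}]}}
  alb.ingress.kubernetes.io/conditions.header-routing: |
    [{"field":"http-header","httpHeaderConfig":{"httpHeaderName":"X-Canary","values":["true"]}}]
```

As with the weighted example, the Ingress rule's backend must reference the service name header-routing with port name use-annotation.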
,{"location":"guide/use_cases/blue_green/#overview","text":"The ALB is configured to split traffic using annotations on the Ingress resources. More specifically, the ingress annotation alb.ingress.kubernetes.io/actions.${service-name} configures custom actions on the listener. The body of the annotation is a JSON document that identifies an action type and configures it. The supported actions are redirect , forward , and fixed-response . With the forward action, multiple target groups with different weights can be defined in the annotation. The LBC provisions the target groups and configures the listener rules as per the annotation to direct the traffic. Importantly: * The action-name in the annotation must match the service name in the Ingress rules. For example, the annotation alb.ingress.kubernetes.io/actions.blue-green matches the service name blue-green referenced in the Ingress rules. * The servicePort of the service in the Ingress rules must be use-annotation .","title":"Overview"},{"location":"guide/use_cases/blue_green/#example","text":"The following Ingress resource configures the ALB to forward all traffic to the hello-kubernetes-v1 service (weight: 100 vs. 0). Note that the annotation name includes blue-green , which matches the service name referenced in the Ingress rules. The annotation reference includes further examples of the JSON configuration for different actions. apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: \"hello-kubernetes\" namespace: \"hello-kubernetes\" annotations: kubernetes.io/ingress.class: alb alb.ingress.kubernetes.io/scheme: internet-facing alb.ingress.kubernetes.io/target-type: ip alb.ingress.kubernetes.io/actions.blue-green: | { \"type\":\"forward\", \"forwardConfig\":{ \"targetGroups\":[ { \"serviceName\":\"hello-kubernetes-v1\", \"servicePort\":\"80\", \"weight\":100 }, { \"serviceName\":\"hello-kubernetes-v2\", \"servicePort\":\"80\", \"weight\":0 } ] } } labels: app: hello-kubernetes spec: rules: - http: paths: - path: / pathType: Prefix backend: service: name: blue-green port: name: use-annotation","title":"Example"},{"location":"guide/use_cases/frontend_sg/","text":"Frontend security groups limit client/internet traffic to a load balancer. This improves security by preventing unauthorized access to cluster services and blocking unexpected outbound connections. Both AWS Network Load Balancers (NLBs) and Application Load Balancers (ALBs) support frontend security groups. Learn more about how the Load Balancer Controller uses Frontend and Backend Security Groups . Solution Overview \u00b6 Load balancers expose cluster workloads to a wider network. Creating a frontend security group limits access to these workloads (Service or Ingress resources). More specifically, a security group acts as a virtual firewall to control incoming and outgoing traffic. Inbound rules control the incoming traffic to your load balancer, and outbound rules control the outgoing traffic from your load balancer. Security groups are particularly suited for defining what access other AWS resources (services, EC2 instances) have to your cluster. For example, if you have an existing security group including EC2 instances, you can permit only that security group to access a service. 
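For instance, a hedged sketch of a Service that attaches an existing security group as the load balancer's frontend (the security group ID is a placeholder; the manage-backend-security-group-rules annotation asks the controller to keep the backend rules in sync with the supplied frontend group):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: restricted-frontend
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-security-groups: sg-0406XXXX645c   # placeholder existing SG
    service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules: "true"
spec:
  type: LoadBalancer
  loadBalancerClass: service.k8s.aws/nlb
  selector:
    app: frontend
  ports:
    - port: 443
      targetPort: 80
      protocol: TCP
```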
In this example, you will restrict access to a cluster service. You will create a new security group for the frontend of a load balancer, and add an inbound rule permitting traffic. The rule may limit traffic to a specific port, CIDR, or existing security group. Prerequisites \u00b6 Kubernetes Cluster Version 1.22+ AWS Load Balancer Controller v2.6.0+ AWS CLI v2 Configure \u00b6 1. Find the VPC ID of your cluster \u00b6 $ aws eks describe-cluster --name <cluster-name> --query \"cluster.resourcesVpcConfig.vpcId\" --output text vpc-0101XXXXa356 Ensure you use the right cluster name and AWS region, and that the AWS CLI is configured. 2. Create a security group using the VPC ID \u00b6 $ aws ec2 create-security-group --group-name <sg-name> --description <description> --vpc-id <vpc-id> { \"GroupId\" : \"sg-0406XXXX645c\" } Note the security group ID. This will be the frontend security group for the load balancer. 3. Create your ingress rules \u00b6 Load balancers generally serve as an entrypoint for clients to access your cluster. This makes ingress rules especially important. For example, this rule permits all traffic on port 443: aws ec2 authorize-security-group-ingress --group-id <group-id> --protocol tcp --port 443 --cidr 0.0.0.0/0 Learn more about how to create an ingress rule with the AWS CLI. 4. Determine your egress rules (optional) \u00b6 By default, all outbound traffic is allowed. Further, security groups are stateful, and responses to an allowed connection will also be permitted. Learn how to create an egress rule with the AWS CLI. 5. Add the security group annotation to your Ingress or Service \u00b6 For Ingress resources , add the following annotation: apiVersion : networking.k8s.io/v1 kind : Ingress metadata : name : frontend annotations : alb.ingress.kubernetes.io/security-groups : <sg-ids> For Service resources , add the following annotation: apiVersion : v1 kind : Service metadata : name : frontend annotations : service.beta.kubernetes.io/aws-load-balancer-security-groups : <sg-ids> spec : type : LoadBalancer loadBalancerClass : service.k8s.aws/nlb For Ingress resources, the associated Application Load Balancer will be updated. For Service resources, the associated Network Load Balancer will be updated. 6. List your load balancers and verify the security groups are attached \u00b6 $ aws elbv2 describe-load-balancers { \"LoadBalancers\" : [ { \"LoadBalancerArn\" : \"arn:aws:elasticloadbalancing:us-east-1:1853XXXX5115:loadbalancer/net/k8s-default-frontend-ae3743b818/3ad6d16fb75ff688\" , <...> \"SecurityGroups\" : [ \"sg-0406XXXX645c\" , \"sg-0873XXXX2bef\" ] , \"IpAddressType\" : \"ipv4\" } ] } If you don't see the security groups, verify: The Load Balancer Controller is properly installed. The controller has proper IAM permissions to modify load balancers. Look at the logs of the controller pods for IAM errors. 7. Clean up (Optional) \u00b6 Removing the annotations from Service/Ingress resources will revert to the default frontend security groups. Load balancers may be costly. Delete Ingress and Service resources to deprovision the load balancers. If the load balancers are deleted from the console, they may be recreated by the controller.","title":"Frontend Security Groups"},{"location":"guide/use_cases/frontend_sg/#solution-overview","text":"Load balancers expose cluster workloads to a wider network. Creating a frontend security group limits access to these workloads (Service or Ingress resources). More specifically, a security group acts as a virtual firewall to control incoming and outgoing traffic. 
Inbound rules control the incoming traffic to your load balancer, and outbound rules control the outgoing traffic from your load balancer. Security groups are particularly suited for defining what access other AWS resources (services, EC2 instances) have to your cluster. For example, if you have an existing security group including EC2 instances, you can permit only that security group to access a service. In this example, you will restrict access to a cluster service. You will create a new security group for the frontend of a load balancer, and add an inbound rule permitting traffic. The rule may limit traffic to a specific port, CIDR, or existing security group.","title":"Solution Overview"},{"location":"guide/use_cases/frontend_sg/#prerequisites","text":"Kubernetes Cluster Version 1.22+ AWS Load Balancer Controller v2.6.0+ AWS CLI v2","title":"Prerequisites"},{"location":"guide/use_cases/frontend_sg/#configure","text":"","title":"Configure"},{"location":"guide/use_cases/frontend_sg/#1-find-the-vpc-id-of-your-cluster","text":"$ aws eks describe-cluster --name <cluster-name> --query \"cluster.resourcesVpcConfig.vpcId\" --output text vpc-0101XXXXa356 Ensure you use the right cluster name and AWS region, and that the AWS CLI is configured.","title":"1. Find the VPC ID of your cluster"},{"location":"guide/use_cases/frontend_sg/#2-create-a-security-group-using-the-vpc-id","text":"$ aws ec2 create-security-group --group-name <sg-name> --description <description> --vpc-id <vpc-id> { \"GroupId\" : \"sg-0406XXXX645c\" } Note the security group ID. This will be the frontend security group for the load balancer.","title":"2. Create a security group using the VPC ID"},{"location":"guide/use_cases/frontend_sg/#3-create-your-ingress-rules","text":"Load balancers generally serve as an entrypoint for clients to access your cluster. This makes ingress rules especially important. For example, this rule permits all traffic on port 443: aws ec2 authorize-security-group-ingress --group-id <group-id> --protocol tcp --port 443 --cidr 0.0.0.0/0 Learn more about how to create an ingress rule with the AWS CLI.","title":"3. Create your ingress rules"},{"location":"guide/use_cases/frontend_sg/#4-determine-your-egress-rules-optional","text":"By default, all outbound traffic is allowed. Further, security groups are stateful, and responses to an allowed connection will also be permitted. Learn how to create an egress rule with the AWS CLI.","title":"4. Determine your egress rules (optional)"},{"location":"guide/use_cases/frontend_sg/#5-add-the-security-group-annotation-to-your-ingress-or-service","text":"For Ingress resources , add the following annotation: apiVersion : networking.k8s.io/v1 kind : Ingress metadata : name : frontend annotations : alb.ingress.kubernetes.io/security-groups : <sg-ids> For Service resources , add the following annotation: apiVersion : v1 kind : Service metadata : name : frontend annotations : service.beta.kubernetes.io/aws-load-balancer-security-groups : <sg-ids> spec : type : LoadBalancer loadBalancerClass : service.k8s.aws/nlb For Ingress resources, the associated Application Load Balancer will be updated. For Service resources, the associated Network Load Balancer will be updated.","title":"5. 
Add the security group annotation to your Ingress or Service"},{"location":"guide/use_cases/frontend_sg/#6-list-your-load-balancers-and-verify-the-security-groups-are-attached","text":"$ aws elbv2 describe-load-balancers { \"LoadBalancers\" : [ { \"LoadBalancerArn\" : \"arn:aws:elasticloadbalancing:us-east-1:1853XXXX5115:loadbalancer/net/k8s-default-frontend-ae3743b818/3ad6d16fb75ff688\" , <...> \"SecurityGroups\" : [ \"sg-0406XXXX645c\" , \"sg-0873XXXX2bef\" ] , \"IpAddressType\" : \"ipv4\" } ] } If you don't see the security groups, verify: The Load Balancer Controller is properly installed. The controller has proper IAM permissions to modify load balancers. Look at the logs of the controller pods for IAM errors.","title":"6. List your load balancers and verify the security groups are attached"},{"location":"guide/use_cases/frontend_sg/#7-clean-up-optional","text":"Removing the annotations from Service/Ingress resources will revert to the default frontend security groups. Load balancers may be costly. Delete Ingress and Service resources to deprovision the load balancers. If the load balancers are deleted from the console, they may be recreated by the controller.","title":"7. Clean up (Optional)"},{"location":"guide/use_cases/nlb_tls_termination/","text":"Motivation \u00b6 Managing TLS certificates (and related configuration) for production cluster workloads is both time consuming and high risk. For example, storing multiple copies of a certificate secret key in the cluster may increase the chances of it being compromised. Additionally, TLS can be complicated to configure and implement properly. Traditionally, TLS termination at the load balancer step required using more expensive application load balancers (ALBs). AWS introduced TLS termination for network load balancers (NLBs) for enhanced security and cost effectiveness. The TLS implementation used by the AWS NLB is formally verified and maintained. Additionally, AWS Certificate Manager (ACM) is used, fully isolating your cluster from access to the private key. Solution Overview \u00b6 An external client transmits a request to the NLB. The request is encrypted with TLS using the production (i.e., client-facing) certificate, on port 443. The NLB decrypts the request and transmits it to your cluster on port 80. It follows the standard request routing configured within the cluster. Notably, the request received within the cluster includes the actual origin IP address of the external client. Alternate ports may be configured. Note The NLB may be configured to maintain the source (i.e., client) IP address. However, there are some limitations. Review Client IP Preservation in the AWS docs. Prerequisites \u00b6 \u2705 Access to DNS records for the domain name. Review the docs on registering domains with AWS's Route 53. Alternate DNS providers may be used, such as Google Domains or Namecheap. Later, a subdomain (e.g., demo-service.gcline.us) will be created, pointing to the NLB. Access to the DNS records is required to generate a TLS certificate for use by the NLB. \u2705 AWS Load Balancer Controller Installed Generally, setting up the Load Balancer Controller has two steps: enabling IAM roles for service accounts, and adding the controller to the cluster. The IAM role allows the controller in the Kubernetes cluster to manage AWS resources. Learn more about IAM roles for service accounts. Configure \u00b6 Generate TLS Certificate \u00b6 Create a public TLS certificate for the domain using AWS Certificate Manager (ACM). 
This is streamlined when the domain is managed by Route 53. Review the AWS Certificate Manager Docs. The domain name on the TLS certificate must correspond to the planned domain name for the Kubernetes service. The domain name may be specified explicitly (e.g., tls-demo.gcline.us), or a wildcard certificate can be used (e.g., *.gcline.us). If the domain is registered with Route 53, the TLS certificate request will automatically be approved. Otherwise, follow the ACM console instructions to create a DNS record to validate the domain. After validation, the certificate will be available for use in your AWS account. Note the ARN of the certificate, which uniquely identifies it in Kubernetes config files. Create Service with new NLB \u00b6 Add annotations to a load balancer Service to enable NLB TLS termination before the traffic reaches your backend service. The annotations are actioned by the load balancer controller. Review all the NLB annotations on GitHub. annotation name value meaning service.beta.kubernetes.io/aws-load-balancer-type external explicitly requires an NLB, instead of an ALB service.beta.kubernetes.io/aws-load-balancer-nlb-target-type ip route traffic directly to the pod IP service.beta.kubernetes.io/aws-load-balancer-scheme internet-facing An internet-facing load balancer has a publicly resolvable DNS name service.beta.kubernetes.io/aws-load-balancer-ssl-cert \"arn:aws:acm:...\" identifies the TLS certificate used by the NLB service.beta.kubernetes.io/aws-load-balancer-ssl-ports 443 determines the port the NLB should listen for TLS traffic on Example: apiVersion: v1 kind: Service metadata: name: MyAppSvc namespace: dev annotations: service.beta.kubernetes.io/aws-load-balancer-type: external service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing service.beta.kubernetes.io/aws-load-balancer-ssl-cert: \"arn:aws:acm:us-east-2:185309785115:certificate/7610ed7d-5a81-4ea2-a18a-7ba1606cca3e\" service.beta.kubernetes.io/aws-load-balancer-ssl-ports: \"443\" spec: externalTrafficPolicy: Local ports: - port: 443 targetPort: 80 name: http protocol: TCP selector: app: MyApp type: LoadBalancer Configure DNS \u00b6 Get the domain name using kubectl. The service name and namespace were defined above. kubectl get svc MyAppSvc --namespace dev NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE MyAppSvc LoadBalancer 10.100.24.154 k8s---xxxxxxxxxx-xxxxxxxxxxxxxxxx.elb.<region>.amazonaws.com 443:31606/TCP 40d Note the last 4 digits of the domain name for the NLB. For example, \"bb1f\". Set up a DNS alias for the NLB Create a DNS record pointing from a friendly name (e.g., tls-demo.gcline.us) to the NLB domain (e.g., bb1f.elb.us-east-2.amazonaws.com). For AWS's Route 53, follow the instructions below. If you use a different DNS provider, follow their instructions for creating a CNAME record . First, create a new record in Route 53. Use the \"A\" record type, and enable the \"alias\" option. This option attaches the DNS record to the AWS resource, without requiring an extra lookup step for clients. Select the NLB resource. Double-check the region, and use the last 4 digits (noted earlier) to select the proper resource. Verify \u00b6 Attempt to access the NLB domain at port 443 with HTTPS/TLS. Is the connection successful? What certificate is used? 
Does it reach the expected endpoint within the cluster?","title":"NLB TLS Termination"},{"location":"guide/use_cases/nlb_tls_termination/#motivation","text":"Managing TLS certificates (and related configuration) for production cluster workloads is both time consuming and high risk. For example, storing multiple copies of a certificate secret key in the cluster may increase the chances of it being compromised. Additionally, TLS can be complicated to configure and implement properly. Traditionally, TLS termination at the load balancer step required using more expensive application load balancers (ALBs). AWS introduced TLS termination for network load balancers (NLBs) for enhanced security and cost effectiveness. The TLS implementation used by the AWS NLB is formally verified and maintained. Additionally, AWS Certificate Manager (ACM) is used, fully isolating your cluster from access to the private key.","title":"Motivation"},{"location":"guide/use_cases/nlb_tls_termination/#solution-overview","text":"An external client transmits a request to the NLB. The request is encrypted with TLS using the production (i.e., client-facing) certificate, on port 443. The NLB decrypts the request and transmits it to your cluster on port 80. It follows the standard request routing configured within the cluster. Notably, the request received within the cluster includes the actual origin IP address of the external client. Alternate ports may be configured. Note The NLB may be configured to maintain the source (i.e., client) IP address. However, there are some limitations. Review Client IP Preservation in the AWS docs.","title":"Solution Overview"},{"location":"guide/use_cases/nlb_tls_termination/#prerequisites","text":"\u2705 Access to DNS records for the domain name. Review the docs on registering domains with AWS's Route 53. Alternate DNS providers may be used, such as Google Domains or Namecheap. Later, a subdomain (e.g., demo-service.gcline.us) will be created, pointing to the NLB. Access to the DNS records is required to generate a TLS certificate for use by the NLB. \u2705 AWS Load Balancer Controller Installed Generally, setting up the Load Balancer Controller has two steps: enabling IAM roles for service accounts, and adding the controller to the cluster. The IAM role allows the controller in the Kubernetes cluster to manage AWS resources. Learn more about IAM roles for service accounts.","title":"Prerequisites"},{"location":"guide/use_cases/nlb_tls_termination/#configure","text":"","title":"Configure"},{"location":"guide/use_cases/nlb_tls_termination/#generate-tls-certificate","text":"Create a public TLS certificate for the domain using AWS Certificate Manager (ACM). This is streamlined when the domain is managed by Route 53. Review the AWS Certificate Manager Docs. The domain name on the TLS certificate must correspond to the planned domain name for the Kubernetes service. The domain name may be specified explicitly (e.g., tls-demo.gcline.us), or a wildcard certificate can be used (e.g., *.gcline.us). If the domain is registered with Route 53, the TLS certificate request will automatically be approved. Otherwise, follow the ACM console instructions to create a DNS record to validate the domain. After validation, the certificate will be available for use in your AWS account. Note the ARN of the certificate, which uniquely identifies it in Kubernetes config files.","title":"Generate TLS Certificate"}
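That ARN is what the Kubernetes config later points at. As a minimal sketch (placeholder ARN), the Service annotations shown in the next section reference it like so:

```yaml
metadata:
  annotations:
    # placeholder ARN from the ACM step above
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-2:xxxx:certificate/xxxxxx
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
```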
,{"location":"guide/use_cases/nlb_tls_termination/#create-service-with-new-nlb","text":"Add annotations to a load balancer Service to enable NLB TLS termination before the traffic reaches your backend service. The annotations are actioned by the load balancer controller. Review all the NLB annotations on GitHub. annotation name value meaning service.beta.kubernetes.io/aws-load-balancer-type external explicitly requires an NLB, instead of an ALB service.beta.kubernetes.io/aws-load-balancer-nlb-target-type ip route traffic directly to the pod IP service.beta.kubernetes.io/aws-load-balancer-scheme internet-facing An internet-facing load balancer has a publicly resolvable DNS name service.beta.kubernetes.io/aws-load-balancer-ssl-cert \"arn:aws:acm:...\" identifies the TLS certificate used by the NLB service.beta.kubernetes.io/aws-load-balancer-ssl-ports 443 determines the port the NLB should listen for TLS traffic on Example: apiVersion: v1 kind: Service metadata: name: MyAppSvc namespace: dev annotations: service.beta.kubernetes.io/aws-load-balancer-type: external service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing service.beta.kubernetes.io/aws-load-balancer-ssl-cert: \"arn:aws:acm:us-east-2:185309785115:certificate/7610ed7d-5a81-4ea2-a18a-7ba1606cca3e\" service.beta.kubernetes.io/aws-load-balancer-ssl-ports: \"443\" spec: externalTrafficPolicy: Local ports: - port: 443 targetPort: 80 name: http protocol: TCP selector: app: MyApp type: LoadBalancer","title":"Create Service with new NLB"},{"location":"guide/use_cases/nlb_tls_termination/#configure-dns","text":"Get the domain name using kubectl. The service name and namespace were defined above. kubectl get svc MyAppSvc --namespace dev NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE MyAppSvc LoadBalancer 10.100.24.154 k8s---xxxxxxxxxx-xxxxxxxxxxxxxxxx.elb.<region>.amazonaws.com 443:31606/TCP 40d Note the last 4 digits of the domain name for the NLB. For example, \"bb1f\". Set up a DNS alias for the NLB Create a DNS record pointing from a friendly name (e.g., tls-demo.gcline.us) to the NLB domain (e.g., bb1f.elb.us-east-2.amazonaws.com). For AWS's Route 53, follow the instructions below. If you use a different DNS provider, follow their instructions for creating a CNAME record . First, create a new record in Route 53. Use the \"A\" record type, and enable the \"alias\" option. This option attaches the DNS record to the AWS resource, without requiring an extra lookup step for clients. Select the NLB resource. Double-check the region, and use the last 4 digits (noted earlier) to select the proper resource.","title":"Configure DNS"},{"location":"guide/use_cases/nlb_tls_termination/#verify","text":"Attempt to access the NLB domain at port 443 with HTTPS/TLS. Is the connection successful? What certificate is used? Does it reach the expected endpoint within the cluster?","title":"Verify"},{"location":"guide/use_cases/self_managed_lb/","text":"Motivation \u00b6 The load balancer controller (LBC) generally creates and destroys AWS Load Balancers in response to Kubernetes resources. However, some cluster operators may prefer to manually manage AWS Load Balancers. This supports use cases like: Preventing accidental release of key IP addresses. Supporting load balancers where the Kubernetes cluster is one of multiple targets. 
Complying with organizational requirements on provisioning load balancers, for security or cost reasons. Solution Overview \u00b6 Use the TargetGroupBinding CRD to sync a Kubernetes service with the targets of a load balancer. First, a load balancer is manually created directly with AWS. This guide uses a network load balancer, but an application load balancer may be similarly configured. Second, a listener and a target group are added to the load balancer. Third, a TargetGroupBinding CRD is created in the cluster. The CRD includes references to a Kubernetes service and the ARN of the load balancer target group. The CRD configures the LBC to watch the service and automatically update the target group with the appropriate pod VPC IP addresses. Prerequisites \u00b6 Install: Load Balancer Controller Installed on Cluster AWS CLI Kubectl Have this information available: Cluster VPC Information ID of EKS Cluster Subnet IDs This information is available in the \"Networking\" section of the EKS Cluster Console. Port and Protocol of Target Kubernetes Service Configure Load Balancer \u00b6 Create Load Balancer: (optional) Use the create-load-balancer command to create an IPv4 load balancer, specifying a public subnet for each Availability Zone in which you have instances. You can specify only one subnet per Availability Zone. aws elbv2 create-load-balancer --name my-load-balancer --type network --subnets subnet-0e3f5cac72EXAMPLE Important: The output includes the ARN of the load balancer. This value is needed to configure the LBC. Example: arn:aws:elasticloadbalancing:us-east-2:123456789012:loadbalancer/net/my-load-balancer/1234567890123456 Use the create-target-group command to create an IPv4 target group, specifying the same VPC as your EKS cluster. aws elbv2 create-target-group --name my-targets --protocol TCP --port 80 --vpc-id vpc-0598c7d356EXAMPLE The output includes the ARN of the target group, with this format: arn:aws:elasticloadbalancing:us-east-2:123456789012:targetgroup/my-targets/1234567890123456 Use the create-listener command to create a listener for your load balancer with a default rule that forwards requests to your target group. The listener port and protocol must match the Kubernetes service, although TLS termination at the listener is also permitted. aws elbv2 create-listener --load-balancer-arn loadbalancer-arn --protocol TCP --port 80 \\ --default-actions Type=forward,TargetGroupArn=targetgroup-arn Create TargetGroupBinding CRD \u00b6 Create the TargetGroupBinding CRD Insert the ARN of the target group, as created above. Insert the name and port of the target Kubernetes service. apiVersion : elbv2.k8s.aws/v1beta1 kind : TargetGroupBinding metadata : name : my-tgb spec : serviceRef : name : awesome-service # route traffic to the awesome-service port : 80 targetGroupARN : arn:aws:elasticloadbalancing:us-east-2:123456789012:targetgroup/my-targets/1234567890123456 2. Apply the CRD Apply the TargetGroupBinding CRD file to your cluster. kubectl apply -f my-tgb.yaml Verify \u00b6 Wait approximately 30 seconds for the LBC to update the load balancer. View all target groups in the AWS console. Find the target group by the ARN noted above, and verify the appropriate instances from the cluster have been added.","title":"Externally Managed Load Balancer"},{"location":"guide/use_cases/self_managed_lb/#motivation","text":"The load balancer controller (LBC) generally creates and destroys AWS Load Balancers in response to Kubernetes resources. 
However, some cluster operators may prefer to manually manage AWS Load Balancers. This supports use cases like: Preventing accidental release of key IP addresses. Supporting load balancers where the Kubernetes cluster is one of multiple targets. Complying with organizational requirements on provisioning load balancers, for security or cost reasons.","title":"Motivation"},{"location":"guide/use_cases/self_managed_lb/#solution-overview","text":"Use the TargetGroupBinding CRD to sync a Kubernetes service with the targets of a load balancer. First, a load balancer is manually created directly with AWS. This guide uses a network load balancer, but an application load balancer may be similarly configured. Second, a listener and a target group are added to the load balancer. Third, a TargetGroupBinding CRD is created in the cluster. The CRD includes references to a Kubernetes service and the ARN of the load balancer target group. The CRD configures the LBC to watch the service and automatically update the target group with the appropriate pod VPC IP addresses.","title":"Solution Overview"},{"location":"guide/use_cases/self_managed_lb/#prerequisites","text":"Install: Load Balancer Controller Installed on Cluster AWS CLI Kubectl Have this information available: Cluster VPC Information ID of EKS Cluster Subnet IDs This information is available in the \"Networking\" section of the EKS Cluster Console. Port and Protocol of Target Kubernetes Service","title":"Prerequisites"},{"location":"guide/use_cases/self_managed_lb/#configure-load-balancer","text":"Create Load Balancer: (optional) Use the create-load-balancer command to create an IPv4 load balancer, specifying a public subnet for each Availability Zone in which you have instances. You can specify only one subnet per Availability Zone. aws elbv2 create-load-balancer --name my-load-balancer --type network --subnets subnet-0e3f5cac72EXAMPLE Important: The output includes the ARN of the load balancer. This value is needed to configure the LBC. Example: arn:aws:elasticloadbalancing:us-east-2:123456789012:loadbalancer/net/my-load-balancer/1234567890123456 Use the create-target-group command to create an IPv4 target group, specifying the same VPC as your EKS cluster. aws elbv2 create-target-group --name my-targets --protocol TCP --port 80 --vpc-id vpc-0598c7d356EXAMPLE The output includes the ARN of the target group, with this format: arn:aws:elasticloadbalancing:us-east-2:123456789012:targetgroup/my-targets/1234567890123456 Use the create-listener command to create a listener for your load balancer with a default rule that forwards requests to your target group. The listener port and protocol must match the Kubernetes service, although TLS termination at the listener is also permitted. aws elbv2 create-listener --load-balancer-arn loadbalancer-arn --protocol TCP --port 80 \\ --default-actions Type=forward,TargetGroupArn=targetgroup-arn","title":"Configure Load Balancer"},{"location":"guide/use_cases/self_managed_lb/#create-targetgroupbinding-crd","text":"Create the TargetGroupBinding CRD Insert the ARN of the target group, as created above. Insert the name and port of the target Kubernetes service. apiVersion : elbv2.k8s.aws/v1beta1 kind : TargetGroupBinding metadata : name : my-tgb spec : serviceRef : name : awesome-service # route traffic to the awesome-service port : 80 targetGroupARN : arn:aws:elasticloadbalancing:us-east-2:123456789012:targetgroup/my-targets/1234567890123456 2. 
Apply the CRD Apply the TargetGroupBinding CRD file to your cluster. kubectl apply -f my-tgb.yaml","title":"Create TargetGroupBinding CRD"},{"location":"guide/use_cases/self_managed_lb/#verify","text":"Wait approximately 30 seconds for the LBC to update the load balancer. View all target groups in the AWS console. Find the target group by the ARN noted above, and verify the appropriate instances from the cluster have been added.","title":"Verify"}]} \ No newline at end of file diff --git a/v2.7/sitemap.xml b/v2.7/sitemap.xml index 1d2f5a3f4..046610104 100644 --- a/v2.7/sitemap.xml +++ b/v2.7/sitemap.xml @@ -1,107 +1,107 @@
None - 2024-02-02 + 2024-04-17 daily
None - 2024-02-02 + 2024-04-17 daily
None - 2024-02-02 + 2024-04-17 daily
None - 2024-02-02 + 2024-04-17 daily
None - 2024-02-02 + 2024-04-17 daily
None - 2024-02-02 + 2024-04-17 daily
None - 2024-02-02 + 2024-04-17 daily
None - 2024-02-02 + 2024-04-17 daily
None - 2024-02-02 + 2024-04-17 daily
None - 2024-02-02 + 2024-04-17 daily
None - 2024-02-02 + 2024-04-17 daily
None - 2024-02-02 + 2024-04-17 daily
None - 2024-02-02 + 2024-04-17 daily
None - 2024-02-02 + 2024-04-17 daily
None - 2024-02-02 + 2024-04-17 daily
None - 2024-02-02 + 2024-04-17 daily
None - 2024-02-02 + 2024-04-17 daily
None - 2024-02-02 + 2024-04-17 daily
None - 2024-02-02 + 2024-04-17 daily
None - 2024-02-02 + 2024-04-17 daily
None - 2024-02-02 + 2024-04-17 daily
None - 2024-02-02 + 2024-04-17 daily
None - 2024-02-02 + 2024-04-17 daily
None - 2024-02-02 + 2024-04-17 daily
None - 2024-02-02 + 2024-04-17 daily
None - 2024-02-02 + 2024-04-17 daily
\ No newline at end of file diff --git a/v2.7/sitemap.xml.gz b/v2.7/sitemap.xml.gz index 599f66a9d..593d85ba7 100644 Binary files a/v2.7/sitemap.xml.gz and b/v2.7/sitemap.xml.gz differ