
Dual TCP/UDP NLB shared across multiple services #1608

Open
hongkongkiwi opened this issue Nov 4, 2020 · 56 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature.

Comments

@hongkongkiwi

Hi there,

I would like to share a single NLB across two services with both UDP and TCP ports open.

For example:
serviceA - Port 550 UDP
serviceB - Port 8899 TCP

I couldn't seem to find a way to do this unless using an application load balancer and ingress routes.

Is there a way to do this in the v2.0.0 release?

The major blocker was that the targetgroup annotation was only supported at the ingress level (not the service level), so there seems to be no way to share an LB.

@kishorj
Collaborator

kishorj commented Nov 4, 2020

The service resource maps closely to the AWS NLB resource, so we don't intend to support a single LB shared across multiple service resources. See if specifying multiple service ports helps your use case.

If you are not able to specify both TCP and UDP ports on a service resource of type LoadBalancer, you can try using a service of type NodePort. The only current limitation is that the TCP and UDP ports cannot be the same value.
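
For illustration, a minimal sketch of the first suggestion (a single LoadBalancer Service with both protocols on different ports), using the ports from the original post; the name and selector are placeholders, and this assumes your cluster accepts mixed-protocol LoadBalancer Services:

apiVersion: v1
kind: Service
metadata:
  name: shared-ports-example   # placeholder name
spec:
  type: LoadBalancer
  selector:
    app: my-app                # placeholder selector
  ports:
    - name: udp-550
      protocol: UDP
      port: 550
      targetPort: 550
    - name: tcp-8899
      protocol: TCP
      port: 8899
      targetPort: 8899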

@hongkongkiwi
Author

hongkongkiwi commented Nov 6, 2020

Thanks @kishorj, that's a pretty clear explanation. I will try the NodePort approach.

I was wondering why I can't specify the target group with an NLB like I can with the ALB? That would also be a way to resolve this: just specify the same target group for multiple services (as I can do with ingress).

@philomory

philomory commented Nov 14, 2020

I also would love to see something akin to alb.ingress.kubernetes.io/group.name on NLBs (presumably as something akin to service.beta.kubernetes.io/aws-load-balancer-group-name).

NodePort seems like a nonstarter with an autoscaling cluster - you have to manually create additional target groups and pin your autoscaling group to each of them every time you add a new service (though honestly this is more of a failing of the way NLB target groups work - ideally you should only need one target group, not one target group per port, all with identical instance lists).

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 12, 2021
@philomory

As far as I'm aware this shouldn't be stale. I'd still love to see this, at least.

@M00nF1sh
Collaborator

M00nF1sh commented Feb 12, 2021

@philomory
This controller currently supports NodePort services as well. If you use the NLB-IP mode annotation service.beta.kubernetes.io/aws-load-balancer-type: "nlb-ip" on a service, even if it's a NodePort service, the controller will manage the load balancer and target groups for you automatically.
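
For reference, a minimal sketch of that annotation on a Service (the name, selector, and port are placeholders; per the note above, this is assumed to also work for a NodePort Service):

apiVersion: v1
kind: Service
metadata:
  name: my-app                 # placeholder
  annotations:
    # NLB-IP mode: the controller manages the NLB and target groups
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb-ip"
spec:
  type: NodePort
  selector:
    app: my-app                # placeholder
  ports:
    - name: udp
      protocol: UDP
      port: 550
      targetPort: 550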

Also, there have been proposals upstream to allow dual TCP/UDP.

@M00nF1sh
Collaborator

/kind feature

@k8s-ci-robot k8s-ci-robot added the kind/feature Categorizes issue or PR as related to a new feature. label Feb 12, 2021
@TBBle
Contributor

TBBle commented Feb 25, 2021

Mixed-protocol (TCP/UDP) Service is alpha in k8s 1.20; is this the feature ticket to track for support for the MixedProtocolLBService feature gate?

Or is this for multiple Services contributing to a single NLB, similar to #1707 and #1545?

@kishorj kishorj removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 26, 2021
@kishorj
Collaborator

kishorj commented Feb 26, 2021

@TBBle, this is the correct issue for mixed protocol support.

Once the MixedProtocolLBService feature gate is enabled, a service of type LoadBalancer with mixed protocols should work fine without further changes, with the following limitations:

  • cannot have the same port for both UDP and TCP, for example TCP port 53 and UDP port 53; this is a limitation on the AWS side
  • NLB dual-stack currently doesn't support UDP

@philomory

@TBBle What, exactly, is the limitation on the AWS side that causes this? Doesn't the NLB listener protocol TCP_UDP cover the case of the same port over both TCP and UDP?

@TBBle
Contributor

TBBle commented Feb 27, 2021

I assume you meant @kishorj with that question.

Per the docs, TCP_UDP is explicitly for the "same port in TCP and UDP" case.

> To support both TCP and UDP on the same port, create a TCP_UDP listener. The target groups for a TCP_UDP listener must use the TCP_UDP protocol.

@philomory

@TBBle You're absolutely right, I meant @kishorj. My apologies.

@kishorj, what's the cause of the limitation that a Load Balancer service with mixed protocols cannot use the same port for both TCP and UDP? It's definitely supported on the AWS side in the form of the TCP_UDP protocol type. But maybe I'm misunderstanding something here?

@kishorj
Collaborator

kishorj commented Feb 27, 2021

@philomory, you are correct, AWS supports the TCP_UDP protocol type. In my prior response, I was referring to the current controller code handling services with TCP and UDP protocols without further changes.

As you mentioned, it is possible to utilize the TCP_UDP protocol type supported by AWS to combine matching TCP and UDP ports from the service spec, as long as the target ports or node ports for the TCP and UDP protocols are the same. This is something that we have been considering adding in future releases.

@ArchiFleKs

ArchiFleKs commented Apr 16, 2021

So as of today there is no way to have a listener with TCP_UDP?

I have tried using service.beta.kubernetes.io/aws-load-balancer-backend-protocol: TCP_UDP but it does nothing. I'm using v2.1.3.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 17, 2021
@philomory

I don't think this should be marked as stale; this would still be a valuable feature.

@TBBle
Contributor

TBBle commented Jul 17, 2021

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 17, 2021
@Yasumoto
Contributor

Yasumoto commented Oct 6, 2021

Sounds like this is something that'd be good to support. I imagine adding support for TCP_UDP would probably need to touch a few places, but would model_build_listener.go be a good place to start?

@kishorj
Collaborator

kishorj commented Oct 7, 2021

@Yasumoto, I've included the design details below if you are interested:

  • model_build_listener.go is a good place to start
  • the deployer code also needs to be modified to create the TCP_UDP type listener

TCP_UDP listener support for NLB

Overview

NLB supports both TCP and UDP listeners, and a k8s LoadBalancer service with mixed protocols is deployed as an NLB configuration using both types of listeners. However, due to a limitation on the AWS ELBv2 side, listener ports for TCP and UDP cannot be the same. Use cases with both TCP and UDP listeners for which we are currently not able to provide a reasonable solution:

  • DNS with both TCP and UDP listeners on port 53
  • HTTP/2 over TLS+TCP and HTTP/3 over QUIC on port 443

AWS ELBv2 has a TCP_UDP listener type that listens for both the TCP and UDP protocols on the same port, and this construct is useful for providing a solution for mixed protocols in limited cases. This document describes the proposed solution and its limitations.

Proposal

If the service spec has the same TCP and UDP ports specified, convert to a TCP_UDP type listener during model generation when there exist two ports p1 and p2 in service.Spec.Ports such that:

  • (p1.protocol == UDP AND p2.protocol == TCP) OR (p1.protocol == TCP AND p2.protocol == UDP)
  • p1.port == p2.port
  • p1.targetPort == p2.targetPort if target type is IP
  • p1.nodePort == p2.nodePort if target type is Instance

For each such (p1, p2) pair, create a listener of type TCP_UDP instead of separate TCP and UDP listeners.
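
For example, a hypothetical Service (assuming IP targets, so the targetPort condition applies) where the following pair of port entries satisfies the conditions above and would be merged into a single TCP_UDP listener on port 53:

apiVersion: v1
kind: Service
metadata:
  name: dns                    # placeholder
spec:
  type: LoadBalancer
  selector:
    app: dns                   # placeholder
  ports:
    - name: dns-udp
      protocol: UDP
      port: 53
      targetPort: 53           # same targetPort as the TCP entry, so the pair qualifies
    - name: dns-tcp
      protocol: TCP
      port: 53
      targetPort: 53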

Backwards compatibility/User facing changes

There are no issues with backwards compatibility. This feature does not require any user action.

Limitations

Since the target ports for both the TCP and UDP protocols have to be the same, the nodePort for instance targets must be statically allocated.

@amorey

amorey commented Oct 7, 2021

I just submitted a PR that implements this using a similar strategy #2275

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@artyom-p

artyom-p commented Feb 2, 2023

This ticket was created in 2020 and support has still not been added... Is there anyone with expertise in this area who can help with PR #2275?

@csk06

csk06 commented Mar 31, 2023

For people still having this issue: I don't think the problem occurs on Kubernetes version 1.25.

I was having the same problem, but on 1.25, with the following manifest, I am able to use it properly.

apiVersion: v1
kind: Service
metadata:
  name: app
  labels:
    # ...
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-type: external
spec:
  ports:
    - name: api
      port: 8686
      protocol: TCP
      targetPort: 8686
    - name: syslog
      port: 514
      protocol: UDP
      targetPort: 514
    - name: someother
      port: 900
      protocol: TCP
      targetPort: 900
  selector:
    # ...
  type: LoadBalancer

@jbg

jbg commented Mar 31, 2023

@csk06 The config that you showed doesn't share an NLB across multiple Services, which is what this issue is about.

@danielloader

danielloader commented Apr 17, 2023

To update the issue with the latest versions:

EKS 1.26
AWS Load Balancer Controller 2.5.0

Kubernetes Manifests

apiVersion: apps/v1
kind: Deployment
metadata:
  name: bind-deployment
  labels:
    app: bind
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bind
  template:
    metadata:
      labels:
        app: bind
    spec:
      containers:
      - name: bind
        image: cytopia/bind
        env:
        - name: DOCKER_LOGS
          value: "1"
        - name: ALLOW_QUERY
          value: "any"
        ports:
        - containerPort: 53
          protocol: TCP
        - containerPort: 53
          protocol: UDP
---
apiVersion: v1
kind: Service
metadata:
  name: bind
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "53"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: TCP 
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "3"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "3"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: "10"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "10"   
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  selector:
    app: bind
  ports:
    - protocol: UDP
      name: dns-udp
      port: 53
      targetPort: 53
    - protocol: TCP
      name: dns-tcp
      port: 53
      targetPort: 53
  type: LoadBalancer

The above fails, creating a UDP-only target group in the NLB. The controller and the AWS side both think it's all working fine...

kubectl get service bind
NAME   TYPE           CLUSTER-IP       EXTERNAL-IP                                                                PORT(S)                     AGE
bind   LoadBalancer   10.100.142.242   k8s-default-bind-3442e35570-f7a7240948fc0d92.elb.eu-west-1.amazonaws.com   53:31953/UDP,53:31953/TCP   17s

Controller Logs:

{"level":"info","ts":"2023-04-17T11:36:02Z","logger":"controllers.service","msg":"deleted targetGroupBinding","targetGroupBinding":{"namespace":"default","name":"k8s-default-bind-c1c7d775f1"}}
{"level":"info","ts":"2023-04-17T11:36:02Z","logger":"controllers.service","msg":"deleting targetGroup","arn":"arn:aws:elasticloadbalancing:eu-west-1:766648550982:targetgroup/k8s-default-bind-c1c7d775f1/e482770d4dc448e9"}
{"level":"info","ts":"2023-04-17T11:36:02Z","logger":"controllers.service","msg":"deleted targetGroup","arn":"arn:aws:elasticloadbalancing:eu-west-1:766648550982:targetgroup/k8s-default-bind-c1c7d775f1/e482770d4dc448e9"}
{"level":"info","ts":"2023-04-17T11:36:02Z","logger":"controllers.service","msg":"successfully deployed model","service":{"namespace":"default","name":"bind"}}
{"level":"info","ts":"2023-04-17T11:36:05Z","logger":"controllers.service","msg":"successfully built model","model":"{\"id\":\"default/bind\",\"resources\":{\"AWS::ElasticLoadBalancingV2::Listener\":{\"53\":{\"spec\":{\"loadBalancerARN\":{\"$ref\":\"#/resources/AWS::ElasticLoadBalancingV2::LoadBalancer/LoadBalancer/status/loadBalancerARN\"},\"port\":53,\"protocol\":\"UDP\",\"defaultActions\":[{\"type\":\"forward\",\"forwardConfig\":{\"targetGroups\":[{\"targetGroupARN\":{\"$ref\":\"#/resources/AWS::ElasticLoadBalancingV2::TargetGroup/default/bind:53/status/targetGroupARN\"}}]}}]}}},\"AWS::ElasticLoadBalancingV2::LoadBalancer\":{\"LoadBalancer\":{\"spec\":{\"name\":\"k8s-default-bind-3442e35570\",\"type\":\"network\",\"scheme\":\"internet-facing\",\"ipAddressType\":\"ipv4\",\"subnetMapping\":[{\"subnetID\":\"subnet-00422897a83381cf9\"},{\"subnetID\":\"subnet-014a4b24b2d5ecb6e\"},{\"subnetID\":\"subnet-05df8b99d50eb1b56\"}]}}},\"AWS::ElasticLoadBalancingV2::TargetGroup\":{\"default/bind:53\":{\"spec\":{\"name\":\"k8s-default-bind-5c2f99f91c\",\"targetType\":\"ip\",\"port\":53,\"protocol\":\"UDP\",\"ipAddressType\":\"ipv4\",\"healthCheckConfig\":{\"port\":53,\"protocol\":\"TCP\",\"intervalSeconds\":10,\"timeoutSeconds\":10,\"healthyThresholdCount\":3,\"unhealthyThresholdCount\":3},\"targetGroupAttributes\":[{\"key\":\"proxy_protocol_v2.enabled\",\"value\":\"false\"}]}}},\"K8S::ElasticLoadBalancingV2::TargetGroupBinding\":{\"default/bind:53\":{\"spec\":{\"template\":{\"metadata\":{\"name\":\"k8s-default-bind-5c2f99f91c\",\"namespace\":\"default\",\"creationTimestamp\":null},\"spec\":{\"targetGroupARN\":{\"$ref\":\"#/resources/AWS::ElasticLoadBalancingV2::TargetGroup/default/bind:53/status/targetGroupARN\"},\"targetType\":\"ip\",\"serviceRef\":{\"name\":\"bind\",\"port\":53},\"networking\":{\"ingress\":[{\"from\":[{\"ipBlock\":{\"cidr\":\"0.0.0.0/0\"}}],\"ports\":[{\"protocol\":\"UDP\",\"port\":53}]},{\"from\":[{\"ipBlock\":{\"cidr\":\"192.168.32.0/19\"}},{\"ipBlock\":{\"cidr\":\"192.168.0.0/19\"}},{\"ipBlock\":{\"cidr\":\"192.168.64.0/19\"}}],\"ports\":[{\"protocol\":\"TCP\",\"port\":53}]}]},\"ipAddressType\":\"ipv4\"}}}}}}}"}
{"level":"info","ts":"2023-04-17T11:36:05Z","logger":"controllers.service","msg":"creating targetGroup","stackID":"default/bind","resourceID":"default/bind:53"}
{"level":"info","ts":"2023-04-17T11:36:05Z","logger":"controllers.service","msg":"created targetGroup","stackID":"default/bind","resourceID":"default/bind:53","arn":"arn:aws:elasticloadbalancing:eu-west-1:766648550982:targetgroup/k8s-default-bind-5c2f99f91c/8ecf762ec4c4eac4"}
{"level":"info","ts":"2023-04-17T11:36:05Z","logger":"controllers.service","msg":"creating loadBalancer","stackID":"default/bind","resourceID":"LoadBalancer"}
{"level":"info","ts":"2023-04-17T11:36:05Z","logger":"controllers.service","msg":"created loadBalancer","stackID":"default/bind","resourceID":"LoadBalancer","arn":"arn:aws:elasticloadbalancing:eu-west-1:766648550982:loadbalancer/net/k8s-default-bind-3442e35570/f7a7240948fc0d92"}
{"level":"info","ts":"2023-04-17T11:36:05Z","logger":"controllers.service","msg":"creating listener","stackID":"default/bind","resourceID":"53"}
{"level":"info","ts":"2023-04-17T11:36:05Z","logger":"controllers.service","msg":"created listener","stackID":"default/bind","resourceID":"53","arn":"arn:aws:elasticloadbalancing:eu-west-1:766648550982:listener/net/k8s-default-bind-3442e35570/f7a7240948fc0d92/5a44371fac255ea2"}
{"level":"info","ts":"2023-04-17T11:36:05Z","logger":"controllers.service","msg":"creating targetGroupBinding","stackID":"default/bind","resourceID":"default/bind:53"}
{"level":"info","ts":"2023-04-17T11:36:05Z","logger":"controllers.service","msg":"created targetGroupBinding","stackID":"default/bind","resourceID":"default/bind:53","targetGroupBinding":{"namespace":"default","name":"k8s-default-bind-5c2f99f91c"}}
{"level":"info","ts":"2023-04-17T11:36:05Z","logger":"controllers.service","msg":"successfully deployed model","service":{"namespace":"default","name":"bind"}}
{"level":"info","ts":"2023-04-17T11:36:06Z","msg":"authorizing securityGroup ingress","securityGroupID":"sg-0e8b8ba05b8c46832","permission":[{"FromPort":53,"IpProtocol":"tcp","IpRanges":[{"CidrIp":"192.168.0.0/19","Description":"elbv2.k8s.aws/targetGroupBinding=shared"}],"Ipv6Ranges":null,"PrefixListIds":null,"ToPort":53,"UserIdGroupPairs":null},{"FromPort":53,"IpProtocol":"tcp","IpRanges":[{"CidrIp":"192.168.32.0/19","Description":"elbv2.k8s.aws/targetGroupBinding=shared"}],"Ipv6Ranges":null,"PrefixListIds":null,"ToPort":53,"UserIdGroupPairs":null},{"FromPort":53,"IpProtocol":"tcp","IpRanges":[{"CidrIp":"192.168.64.0/19","Description":"elbv2.k8s.aws/targetGroupBinding=shared"}],"Ipv6Ranges":null,"PrefixListIds":null,"ToPort":53,"UserIdGroupPairs":null},{"FromPort":53,"IpProtocol":"udp","IpRanges":[{"CidrIp":"0.0.0.0/0","Description":"elbv2.k8s.aws/targetGroupBinding=shared"}],"Ipv6Ranges":null,"PrefixListIds":null,"ToPort":53,"UserIdGroupPairs":null}]}
{"level":"info","ts":"2023-04-17T11:36:06Z","msg":"authorized securityGroup ingress","securityGroupID":"sg-0e8b8ba05b8c46832"}
{"level":"info","ts":"2023-04-17T11:36:06Z","msg":"registering targets","arn":"arn:aws:elasticloadbalancing:eu-west-1:766648550982:targetgroup/k8s-default-bind-5c2f99f91c/8ecf762ec4c4eac4","targets":[{"AvailabilityZone":null,"Id":"192.168.149.126","Port":53}]}
{"level":"info","ts":"2023-04-17T11:36:06Z","msg":"registered targets","arn":"arn:aws:elasticloadbalancing:eu-west-1:766648550982:targetgroup/k8s-default-bind-5c2f99f91c/8ecf762ec4c4eac4"}

[Screenshot: EC2 Management Console]

I know this issue is about sharing between services, but it's also the point of reference for multiple protocols on the same NLB, as evidenced by all the Google hits that land here about this controller and TCP_UDP mode.

The good news:

EKS/AWS doesn't reject the service yaml any more like it used to, by virtue of https://kubernetes.io/docs/concepts/services-networking/service/#load-balancers-with-mixed-protocol-types being available now.

$ dig @k8s-default-bind-3442e35570-f7a7240948fc0d92.elb.eu-west-1.amazonaws.com example.com 

; <<>> DiG 9.18.1-1ubuntu1.1-Ubuntu <<>> @k8s-default-bind-3442e35570-f7a7240948fc0d92.elb.eu-west-1.amazonaws.com example.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 20842
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
; COOKIE: 7f17659536ea77a601000000643d30ab5837db7e312a6de7 (good)
;; QUESTION SECTION:
;example.com.                   IN      A

;; ANSWER SECTION:
example.com.            85348   IN      A       93.184.216.34

;; Query time: 20 msec
;; SERVER: 63.35.59.113#53(k8s-default-bind-3442e35570-f7a7240948fc0d92.elb.eu-west-1.amazonaws.com) (UDP)
;; WHEN: Mon Apr 17 12:42:38 BST 2023
;; MSG SIZE  rcvd: 84

The bad news:

The controller provisions only the first entry in the ports array, in my case the UDP one, quietly ignoring the TCP entry on the same port.

@TBBle
Contributor

TBBle commented Apr 17, 2023

Yeah, looking back, only the initial request was about sharing a single NLB across multiple services, and the response was "We don't plan to do that". We then ended up talking about TCP_UDP support instead, apart from a brief return in December 2022-January 2023 to "multiple services sharing an NLB", including a solution using a self-managed NLB (which is similar to what's in the docs, AFAICT).

For multiple services sharing an NLB, the existing TargetGroupBinding feature plus an implementation of NLB and TargetGroup-creation primitives would allow that. I was going to suggest the latter would make sense in the ACK project, but they referred NLB support back to here.

For TCP_UDP support, the only PR I'm aware of was #2275, and the PR creator hasn't commented there in over a year, so I assume it's off their radar now.

@celliso1

celliso1 commented May 3, 2023

SEO might be promoting this issue over #2275 for the TCP_UDP conversation, and people might click "Dual TCP/UDP NLB" before they read "...shared across multiple services".

Aside from the above-mentioned QUIC and DNS scenarios, there is also SIP (5060). It is helpful if SIP/UDP can switch to SIP/TCP when a message exceeds the MTU, at the same load balancer IP address.

If there is no intention to support the original request directly (sharing a single NLB across multiple services), should this issue be closed with TargetGroupBinding as the official answer? The conversation about TCP_UDP can be directed to the other issue.

@cresta-dongzhao

cresta-dongzhao commented Nov 11, 2023

Gratitude to everyone for the invaluable discussions that immensely aided our previous project development. I've documented our successful integration of AWS NLB with Kubernetes, exposing the same port 5060 for both TCP and UDP, along with 5061 for TLS, in a single load balancer instance with a Kubernetes service. For more insights, check out our blog here: https://dongzhao2023.wordpress.com/2023/11/11/demystifying-kubernetes-and-aws-nlb-integration-a-comprehensive-guide-to-exposing-tcp_udp-ports-for-sip-recording-siprec/

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 10, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 12, 2024
@sidewinder12s

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Mar 12, 2024
@omri-kaslasi

omri-kaslasi commented Jun 4, 2024

So is this three-and-a-half-year-old issue going to be fixed, or is it currently not worth anyone's time?
@TBBle

@TBBle
Contributor

TBBle commented Jun 5, 2024

No, #2275 (comment) suggests that the last person putting time into implementing TCP_UDP support in the AWS Load Balancer Controller (for same-port TCP and UDP services) hasn't had any time for it recently.

#1608 (comment) describes a successful work-around where you actually manage the load balancer in Terraform or similar and then use the AWS Load Balancer Controller's TargetGroupBinding to bind that NLB to your Kubernetes Service.

So basically, no change in status compared to #1608 (comment).

If you're actually looking for a solution to the use-case described in the head of the ticket (i.e. sharing a single NLB across multiple services with different TCP/UDP ports), that comment links to a documented solution, also using TargetGroupBinding to connect Services to an externally-managed NLB.
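
For reference, a rough sketch of the Kubernetes side of that workaround: a TargetGroupBinding that binds an existing Service port to a target group created outside the controller (the names and ARN are placeholders):

apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: servicea-udp                 # placeholder
spec:
  serviceRef:
    name: serviceA                   # existing Service in the same namespace
    port: 550                        # Service port to bind
  targetType: ip
  # ARN of a target group you created yourself (e.g. via Terraform or the console)
  targetGroupARN: arn:aws:elasticloadbalancing:REGION:ACCOUNT:targetgroup/EXAMPLE/0123456789abcdef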

@DevBey

DevBey commented Jun 19, 2024

We use the nginx ingress controller (or something similar like Kong) for this, which can be used to have a single NLB for the whole of EKS.

@dhnrkssn

> No, #2275 (comment) suggests that the last person putting time into implementing TCP_UDP support in the AWS Load Balancer Controller (for same-port TCP and UDP services) hasn't had any time for it recently.
>
> #1608 (comment) describes a successful work-around where you actually manage the load balancer in Terraform or similar and then use the AWS Load Balancer Controller's TargetGroupBinding to bind that NLB to your Kubernetes Service.
>
> So basically, no change in status compared to #1608 (comment).
>
> If you're actually looking for a solution to the use-case described in the head of the ticket (i.e. sharing a single NLB across multiple services with different TCP/UDP ports), that comment links to a documented solution, also using TargetGroupBinding to connect Services to an externally-managed NLB.

The workaround solution with an externally managed load balancer will not work for my scenario.
We have a system where we are deploying multiple services. In dev & test scenarios, multiple copies of the system will run on the same K8s cluster but in individual namespaces. Some of these deployments are short-lived and some live longer. The load balancer must be deployed together with the system so that each system gets its own load balancer and also its own Route53/DNS entry. We cannot pre-deploy these load balancers using Terraform, since it is not known in advance which systems will run and when they will run.

So the original request in this ticket, to be able to deploy a load balancer using e.g. Helm and then have services connect to it, is a valid use case for us.

@PfisterAn

PfisterAn commented Jul 18, 2024

> TCP_UDP

There was a proposal; I guess this is a rare use-case, though I also just stumbled over it, and it's not really clear why the proposed logic can't be used to combine the ports in the provisioner. I understand that in the service definition it is not possible, as it would break the Kubernetes schema. You could work around it with annotations, but that's not really how it should work; typically the provisioner should adapt to the possibilities and schemes of the target.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 16, 2024
@lyda

lyda commented Nov 4, 2024

I think #3807 is ready for review. I've tested it and it works in a test EKS cluster. But I'd like some feedback on it. I've pinged the reviewers, but if others in this issue have experience with this code base and/or AWS load balancers, I'd appreciate the feedback. Thanks in advance for any feedback.

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Dec 4, 2024
@TBBle
Contributor

TBBle commented Dec 4, 2024

/remove-lifecycle rotten

The relevant PR (#3807) is under active review, although the ticket has drifted away from the original poster's use-case into support for the TCP_UDP NLB protocol feature.

(Interesting that merely commenting on the issue doesn't reset the lifecycle state...)

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Dec 4, 2024