Dual TCP/UDP NLB shared across multiple services #1608
The service resource maps closely to the AWS NLB resource, so we don't intend to support a single LB across multiple service resources. See if specifying multiple service ports helps your use case. If you are not able to specify both TCP and UDP ports on a single service resource of type LoadBalancer, consider the NodePort approach.
Thanks @kishorj, that's a pretty clear explanation. I will try the NodePort approach. I was wondering why I can't specify the target group with an NLB like I can with the ALB? That would also be a way to resolve this: just specify the same target group for multiple services (as I can do with ingress).
I also would love to see this. NodePort seems like a nonstarter with an autoscaling cluster: you have to go manually create additional target groups and pin your autoscaling group to each of them every time you add a new service (though honestly this is more of a failing of the way NLB target groups work; ideally you should only need one target group, not one target group per port, all with identical instance lists).
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
As far as I'm aware this shouldn't be stale. I'd still love to see this, at least.
@philomory Also, there have been proposals upstream to allow dual TCP/UDP.
/kind feature
Mixed-protocol (TCP/UDP) Service is alpha in k8s 1.20; is this the feature ticket to track for support of that in this controller? Or is this for multiple Services contributing to a single NLB, similar to #1707 and #1545?
@TBBle, this is the correct issue for mixed protocol support. Once the MixedProtocolLBService feature gate is enabled, a service of type LoadBalancer with mixed protocols should work fine without further changes, with the limitation that the same port cannot be used for both TCP and UDP.
@TBBle What, exactly, is the limitation on the AWS side that causes this? Doesn't the NLB listener protocol TCP_UDP cover this case?
I assume you meant @kishorj with that question.
@TBBle You're absolutely right, I meant @kishorj. My apologies. @kishorj, what's the cause of the limitation that a LoadBalancer service with mixed protocols cannot use the same port for both TCP and UDP? It's definitely supported on the AWS side in the form of the TCP_UDP protocol type. But maybe I'm misunderstanding something here?
@philomory, you are correct, AWS supports the TCP_UDP protocol type. In my prior response, I was referring to the current controller code handling services with TCP and UDP protocols without further changes. As you mentioned, it is possible to use the TCP_UDP protocol type supported by AWS to combine matching TCP and UDP ports from the service spec, as long as the target ports or node ports for the TCP and UDP protocols are the same. This is something we have been considering adding in a future release.
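As a sketch of the shape being described (names are illustrative, and whether the controller actually merges these into one TCP_UDP listener depends on the controller version), matching TCP and UDP service ports with the same targetPort would look like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dns          # illustrative name
spec:
  type: LoadBalancer
  selector:
    app: dns
  ports:
    - name: dns-tcp
      protocol: TCP
      port: 53
      targetPort: 53
    - name: dns-udp
      protocol: UDP
      port: 53
      targetPort: 53   # must match the TCP targetPort for TCP_UDP merging
```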
So as of today there is no way to have a TCP_UDP listener? I have tried using it.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
I don't think this should be marked as stale; this would still be a valuable feature.
/remove-lifecycle stale
Sounds like this is something that'd be good to support; I imagine adding support for TCP_UDP listeners would do it.
@Yasumoto, I've included the design details if you are interested.

**TCP_UDP listener support for NLB**

**Overview**

NLB has support for both TCP and UDP listeners, and the k8s Service of type LoadBalancer maps to an NLB. The AWS ELBv2 API has a TCP_UDP type of listener that listens for both TCP and UDP protocols on the same port; this construct is useful for providing a solution for mixed protocols in limited cases. This document describes the proposed solution and its limitations.

**Proposal**

In case the service spec has the same TCP and UDP ports specified, convert to a TCP_UDP type listener during model generation: if there exist two ports p1 and p2 such that p1 is TCP and p2 is UDP with the same port number, then for each such (p1, p2) pair, create a listener of type TCP_UDP instead of separate TCP and UDP listeners.

**Backwards compatibility / user-facing changes**

There are no issues with backwards compatibility. This feature does not require any user action.

**Limitations**

Since the target ports for both the TCP and UDP protocols have to be the same, the nodePort for instance targets must be statically allocated.
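The pairing rule above can be sketched in Go (the types and names here are illustrative, not the controller's actual model code):

```go
package main

import "fmt"

// ServicePort mirrors the relevant fields of a k8s Service port
// (illustrative, not the controller's real types).
type ServicePort struct {
	Name     string
	Protocol string // "TCP" or "UDP"
	Port     int32
}

// Listener is the NLB listener the model-generation step would emit.
type Listener struct {
	Protocol string // "TCP", "UDP", or "TCP_UDP"
	Port     int32
}

// mergeListeners pairs TCP and UDP service ports that share the same
// port number into a single TCP_UDP listener, per the proposal above.
func mergeListeners(ports []ServicePort) []Listener {
	tcp := map[int32]bool{}
	udp := map[int32]bool{}
	for _, p := range ports {
		switch p.Protocol {
		case "TCP":
			tcp[p.Port] = true
		case "UDP":
			udp[p.Port] = true
		}
	}
	var out []Listener
	merged := map[int32]bool{}
	for _, p := range ports {
		if merged[p.Port] && tcp[p.Port] && udp[p.Port] {
			continue // already emitted as a TCP_UDP listener
		}
		if tcp[p.Port] && udp[p.Port] {
			out = append(out, Listener{"TCP_UDP", p.Port})
			merged[p.Port] = true
		} else {
			out = append(out, Listener{p.Protocol, p.Port})
		}
	}
	return out
}

func main() {
	ls := mergeListeners([]ServicePort{
		{"dns-udp", "UDP", 53},
		{"dns-tcp", "TCP", 53},
		{"metrics", "TCP", 9100},
	})
	for _, l := range ls {
		fmt.Printf("%s:%d\n", l.Protocol, l.Port)
	}
}
```

Here the paired port 53 entries collapse into one TCP_UDP listener, while the unpaired TCP port 9100 passes through unchanged.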
I just submitted a PR that implements this using a similar strategy: #2275
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to lifecycle rules. You can mark this issue as fresh with /remove-lifecycle stale. Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
This ticket was created in 2020 and support is still not added... Is there anyone with expertise in this area who can help with PR #2275?
For people still having this issue: I think it does not happen on Kubernetes 1.25. I was having the same problem, but on 1.25 I am able to use it properly.

@csk06 The config that you showed doesn't share an NLB across multiple Services.
To update the issue with the latest versions: EKS 1.26.

Kubernetes manifests:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bind-deployment
  labels:
    app: bind
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bind
  template:
    metadata:
      labels:
        app: bind
    spec:
      containers:
        - name: bind
          image: cytopia/bind
          env:
            - name: DOCKER_LOGS
              value: "1"
            - name: ALLOW_QUERY
              value: "any"
          ports:
            - containerPort: 53
              protocol: TCP
            - containerPort: 53
              protocol: UDP
---
apiVersion: v1
kind: Service
metadata:
  name: bind
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "53"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: TCP
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "3"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "3"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: "10"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "10"
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  selector:
    app: bind
  ports:
    - protocol: UDP
      name: dns-udp
      port: 53
      targetPort: 53
    - protocol: TCP
      name: dns-tcp
      port: 53
      targetPort: 53
  type: LoadBalancer
```

The above fails, creating a UDP-only target group in the NLB. The controller and the AWS side think it's all working fine...
Controller Logs:
I know this issue is about sharing between services, but it's also the point of reference for multiple protocols on the same listener, as evidenced by all the Google hits that get you here about this controller and TCP_UDP mode.

The good news: EKS/AWS doesn't reject the service YAML any more like it used to, by virtue of https://kubernetes.io/docs/concepts/services-networking/service/#load-balancers-with-mixed-protocol-types being available now.

```
$ dig @k8s-default-bind-3442e35570-f7a7240948fc0d92.elb.eu-west-1.amazonaws.com example.com

; <<>> DiG 9.18.1-1ubuntu1.1-Ubuntu <<>> @k8s-default-bind-3442e35570-f7a7240948fc0d92.elb.eu-west-1.amazonaws.com example.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 20842
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
; COOKIE: 7f17659536ea77a601000000643d30ab5837db7e312a6de7 (good)

;; QUESTION SECTION:
;example.com.            IN    A

;; ANSWER SECTION:
example.com.    85348    IN    A    93.184.216.34

;; Query time: 20 msec
;; SERVER: 63.35.59.113#53(k8s-default-bind-3442e35570-f7a7240948fc0d92.elb.eu-west-1.amazonaws.com) (UDP)
;; WHEN: Mon Apr 17 12:42:38 BST 2023
;; MSG SIZE rcvd: 84
```

The bad news: the controller provisions only the first entry in the ports array (in my case, the UDP service), quietly ignoring the TCP service on the same port.
Yeah, looking back, only the initial request was about sharing a single NLB across multiple services, and the response was "We don't plan to do that", and then we ended up talking about TCP_UDP support. For multiple services sharing an NLB, the existing TargetGroupBinding support is probably the path.
SEO might be promoting this issue over #2275 for the TCP_UDP feature. Aside from the above-mentioned QUIC and DNS scenarios, there is also SIP (5060). It is helpful if SIP/UDP can switch to SIP/TCP when a message exceeds the MTU, at the same load balancer IP address. If there is no intention to support the original request directly (sharing a single NLB across multiple services), should this issue be closed with /close?
Gratitude to everyone for the invaluable discussions, which immensely aided our previous project development. I've documented our successful integration of AWS NLB with Kubernetes, exposing the same 5060 port for both TCP and UDP, along with 5061 for TLS, in a single load balancer instance with a Kubernetes service. For more insights, check out our blog here: https://dongzhao2023.wordpress.com/2023/11/11/demystifying-kubernetes-and-aws-nlb-integration-a-comprehensive-guide-to-exposing-tcp_udp-ports-for-sip-recording-siprec/
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to lifecycle rules. You can mark this issue as fresh with /remove-lifecycle stale. Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to lifecycle rules. You can mark this issue as fresh with /remove-lifecycle rotten. Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
/remove-lifecycle rotten

So is this three-and-a-half-year-old issue going to be fixed, or is it currently not worth anyone's time?
#2275 (comment) suggests the last person putting time into implementing TCP_UDP support in the AWS Load Balancer Controller (for same-port TCP and UDP services) hasn't had any time to put into this recently, no.

#1608 (comment) describes a successful work-around where you manage the load balancer in Terraform or similar and then use the AWS Load Balancer Controller's TargetGroupBinding support. So basically, no change in status compared to #1608 (comment).

If you're actually looking for a solution to the use-case described in the head of the ticket (i.e. sharing a single NLB across multiple services with different TCP/UDP ports), that comment links to a documented solution, also using TargetGroupBinding.
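For reference, the work-around shape looks roughly like this: the NLB, its listeners, and its target groups are created outside the cluster (e.g. in Terraform), and a TargetGroupBinding attaches an existing Service's endpoints to each target group. The names and ARN below are placeholders for illustration:

```yaml
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: bind-dns          # illustrative name
spec:
  serviceRef:
    name: bind            # an existing Service (e.g. type ClusterIP)
    port: 53
  # placeholder ARN: the target group is managed outside the cluster
  targetGroupARN: arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/example-dns/0123456789abcdef
  targetType: ip
```

One TargetGroupBinding per externally-managed target group lets several Services (or several protocols on one port, via TCP_UDP target groups created in Terraform) share a single NLB.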
We use the NGINX ingress controller (or similar, e.g. Kong) for this, which can be used to have a single NLB for the whole EKS cluster.
The workaround with an externally managed load balancer will not work for my scenario. So the original request in this ticket, to be capable of deploying a load balancer using e.g. Helm and then have services connect to it, is a valid use case for us.
There was a proposal. I guess this is a rare use case, but I also stumbled over it just now, and it's not really clear why the proposed logic can't be used to combine the ports in the provisioner. I understand that it is not possible in the service definition, as it would break the Kubernetes schema, and you could work around that with annotations, but that's not really how it should work: typically the provisioner should adapt to the possibilities and schemas of the target.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to lifecycle rules. You can mark this issue as fresh with /remove-lifecycle stale. Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
I think #3807 is ready for review. I've tested it and it works in a test EKS cluster, but I'd like some feedback on it. I've pinged the reviewers, but if others in this issue have experience with this code base and/or AWS load balancers, I'd appreciate the feedback. Thanks in advance.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to lifecycle rules. You can mark this issue as fresh with /remove-lifecycle rotten. Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
/remove-lifecycle rotten

The relevant PR (#3807) is under active review, although the ticket has drifted away from the original poster's use-case into support for the TCP_UDP NLB protocol feature. (Interesting that merely commenting on the issue doesn't reset the lifecycle state...)
Hi there,
I would like to share a single NLB across two services with both UDP and TCP ports open.
For example:
serviceA - Port 550 UDP
serviceB - Port 8899 TCP
I couldn't seem to find a way to do this unless using an application load balancer and ingress routes.
Is there a way to do this in the v2.0.0 release?
The major blocker was that the target group annotation was only supported at the ingress level (not the service level), so there seems to be no way to share an LB.