Connection to load balancer HTTPS port from within cluster does not terminate TLS #8
Hey @jcassee, thanks for the report. I can confirm and reproduce the behavior you're describing. Let me look into it and get back to you as soon as I know more. |
@jcassee here's a question while we are still investigating the issue: is there a particular reason you are not using the Service's cluster IP / internal DNS name from a pod running inside the cluster? |
@timoreimann Sure, the thing is we use HAL API resources, and links are absolute URLs that are accessed by applications both within and outside of the cluster. |
@jcassee thanks for clarifying. As a workaround, I wonder if you could use DNS names that point to the external IP and cluster IP, respectively, depending on whether they are resolved in-cluster or out-of-cluster? |
@timoreimann Well I could try, but the pods use HTTP and the URLs are HTTPS. TLS termination is handled by the load balancer. |
Ah true, you'd have to change the protocol as well. I see how this can be bothersome. I opened an internal ticket to investigate, will keep you posted. |
I was googling for my own problem and came across this issue; I think my problem is related. I created a self-signed SSL certificate and uploaded it to DigitalOcean certificates. Then I set up a LoadBalancer service and deployment exactly like in the example at: https://github.com/digitalocean/digitalocean-cloud-controller-manager/blob/master/docs/controllers/services/examples/https-with-cert-nginx.yml HTTP requests are forwarded fine to the nginx backend. However, TLS requests seem to be forwarded as-is, recognisable by the jumble of hex characters I see coming into the backend nginx access logs. So it looks like SSL is not terminated by the load balancer. I used the IP address of my load balancer as the CN for the self-signed certificate. To give some context: I want to use my Kubernetes cluster as a backend pool that lives behind other network elements we already have in our current infrastructure. However, since DO LBs do not have a private network IP, I want to make sure that traffic from the edge router is sent encrypted to the LB and the Kubernetes cluster. |
Some additional info. When listing the load balancers via
Let me know if this is a separate issue, then I'll open that and move my comments to not disrupt the original issue by @jcassee :) |
@michiels this might be a different issue. Could you post your Service object in YAML format to be certain? I know you said it resembled the example, but it'd be good to double-check. Thanks. |
@michiels you can also check if the events from CCM show anything suspicious via |
We also ran into this problem in production. An internal API call was routed to the same domain and was getting weird errors like |
@erkie thanks for sharing your feedback. We had a few other reports by now and are currently looking into the issue. Will let you know as soon as we've got something. |
We have confirmed now that Kubernetes purposefully routes requests for external LBs towards the associated pods directly, thereby bypassing the LB and leading to the issues described here by some people. There is an upstream issue about the matter, and we have started to engage in discussions to determine whether a solution built into the Kubernetes core might be feasible at some point. Any newly created upstream solution would certainly need a few release cycles to become available. We have been thinking about quick workarounds feasible today, but it seems difficult to find one. :/ For now, let's see where the upstream discussion leads. |
My current workaround is to make the load balancer service HTTPS-only, then manually add a dummy port 80 HTTP rule and enable HTTP->HTTPS redirection. The pod(s) behind the service need(s) to support HTTPS, of course. This requires a manual step, but it is currently the only set-up that works. |
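For readers who want to replicate this, here is a minimal sketch of what such an HTTPS-only Service might look like. The annotation combination and all names are my assumptions (they are not taken from any manifest in this thread), so double-check against the CCM annotation docs:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: https-only-example            # hypothetical name
  annotations:
    # Assumption: TLS is passed through to the pods, which terminate it themselves.
    service.beta.kubernetes.io/do-loadbalancer-tls-passthrough: "true"
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
spec:
  type: LoadBalancer
  selector:
    app: my-app                       # hypothetical selector
  ports:
    - name: https
      protocol: TCP
      port: 443
      targetPort: 8443                # hypothetical: the pods serve HTTPS on this port
```

The HTTP-to-HTTPS redirect mentioned above would then be configured on the LB itself, which is the manual step the workaround refers to.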
Update: we got in touch with SIG Networking (meeting recording). The plan forward is to put together a PR that addresses the issue. @jcodybaker will be working on that front. Will keep you posted as we make progress. |
Note that recently my workaround stopped working because the load balancer will now connect to the node using HTTP instead of HTTPS, even though the protocol is HTTPS and the service port is 443. @timoreimann Is kubernetes/kubernetes#77523 the fix for this issue? |
Hmm strange, the protocol annotations should still work. If you have an example Service manifest to look at, we could investigate.
Unfortunately not -- see also my coworker's comment on the PR. |
@timoreimann Sure, this is the manifest:
The change in behavior started when the cluster nodes were recycled after the recent critical update. (At that time, the node names started to contain the cluster name.) The same manifest is used on a different cluster that has not yet been updated (because of this issue) without problems. Let me know if I can do anything else to help debug. |
@timoreimann The problem I mentioned above has not occurred in the last week. It may be fixed...? |
@jcassee are you saying that your workaround started working again, or that the general routing problem this issue describes has been fixed? |
@timoreimann Sorry, I meant that my workaround seems to be working and stable again. |
@jcassee off the top of my head, I can't think of a recent change we made that would have been relevant to your workaround. Figuring it out for sure depends on what CCM / DOKS image versions your cluster was on across the timeline of when your workaround was doing fine, when it started to fail, and when it started working again. Something to keep in mind is that manual LB changes (i.e., modifying the LB directly on the DO cloud control panel / the DO API vs. making changes to the Service object exclusively) will eventually be reconciled by CCM, but that can involve a big delay: CCM only reconciles when it detects a delta between the current and the desired state on the local Kubernetes end (i.e., on the Service object). So it would take another local change or a CCM restart (as happens during a cluster upgrade) for the LB customization to be reverted. I know a few customers have run into this and been surprised (and it's something we need to address, at least with better documentation). Given that it's working for you now, I'm inclined to skip any further investigations unless you see the problem resurfacing. Please ping me if that's the case, I'm happy to help. |
@timoreimann I am having the same problem in a k8s-1.13 cluster on DigitalOcean. For DigitalOcean
All commands were run from a pod within the cluster:
For AWS
Commands from a pod:
The only difference I can see in the nslookup output is that there's a DNS entry for the Service in the case of DigitalOcean, whereas that entry is not there for AWS. |
@marufbd thanks for sharing your test results. The reason this works on AWS is that the ingress hostname is not subject to the same bypassing mechanism. We also looked into leveraging ingress hostnames for DigitalOcean load balancers. Unfortunately, this isn't easily feasible for certain reasons. I think the best way forward is still to try to submit an upstream fix. We were running short on bandwidth over the last couple of weeks but hope to be able to tackle the matter in the foreseeable future. |
I have transferred this issue into our new, generic DOKS feature/bug tracking repository. |
While the underlying issue is yet to be fixed, CCM v0.1.17 supports a workaround: users may specify a custom hostname and point a corresponding DNS record to the external IP address of the LB. A more detailed guide is available in the CCM documentation. Per our release notes, the feature requires at least one of Kubernetes 1.15.2-do.0, 1.14.5-do.0, or 1.13.9-do.0. |
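For illustration, here is a minimal sketch of the hostname workaround, assuming a DNS A record for lb.example.com pointing at the LB's external IP. All names and the certificate ID are placeholders; the CCM documentation remains the authoritative guide:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service                    # placeholder
  annotations:
    # With this set, the Service status reports the hostname instead of the raw IP,
    # so in-cluster clients resolve it via DNS and traffic goes through the LB
    # instead of being short-circuited to the pods.
    service.beta.kubernetes.io/do-loadbalancer-hostname: "lb.example.com"
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "your-certificate-id"  # placeholder
spec:
  type: LoadBalancer
  selector:
    app: my-app                       # placeholder selector
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 80
```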
Hi Timo, so every time I need to get or renew a cert I need to add |
@timoreimann What if ingress-nginx is being used as per below with multiple domains? I'm assuming then that the workaround is not going to work.
@vasili439 the new annotation works independently of certificates. You'd still use
@dottodot I'm not super familiar with the Nginx controller. To my understanding though, it shouldn't affect your scenario: Essentially, the new hostname annotation just sets the hostname part of the Ingress status field. Nothing should stop you from setting up further hostnames/DNS names (in addition to the one that should point to the hostname from the annotation) and have those point to the load balancer IP as well. You could also skip the hostname annotation entirely, set up extra DNS names, and point your clients to those. Returning the hostname within the Ingress status is supposed to ease consumption of the field, but that's not a requirement. |
The community has started an effort in the form of a KEP to disable bypassing: kubernetes/enhancements#1392 |
@timoreimann I've been trying to follow this issue via the many open issues on the Kubernetes GitHub, and it doesn't sound like there is a fix yet. Is that correct? I've been looking at your workaround (https://github.com/digitalocean/digitalocean-cloud-controller-manager/blob/master/docs/controllers/services/annotations.md#servicebetakubernetesiodo-loadbalancer-hostname) but I'm not sure how to implement it. We have many DNS A records (DNS managed by DO) pointing to a single load balancer (managed by DO) and then have ingress definitions in Kubernetes to direct them to the correct services based on hostname. I can't see how the service.beta.kubernetes.io/do-loadbalancer-hostname setting works with the different A records, as you can only specify one here. |
Hey @spenceclark You're correct that the issue isn't resolved yet. The best we have today is the workaround you've been looking at. For multiple DNS records, the suggestion is to use CNAMEs that all point at the hostname. We have a bit more on that in the docs at https://github.com/digitalocean/digitalocean-cloud-controller-manager/blob/master/docs/controllers/services/examples/README.md#accessing-pods-over-a-managed-load-balancer-from-inside-the-cluster. Hope this helps. If not, please let me know. |
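To make the multi-domain setup concrete, one possible layout could look like the following sketch. The DNS records live outside Kubernetes and are shown as comments; every name here is hypothetical:

```yaml
# DNS (managed outside Kubernetes, e.g. in the DO control panel):
#   lb-internal.example.com.  A      <external IP of the load balancer>
#   app1.example.com.         CNAME  lb-internal.example.com.
#   app2.example.com.         CNAME  lb-internal.example.com.
#
# The Service fronting the ingress controller then advertises the shared hostname:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx                       # hypothetical: the ingress controller's Service
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-hostname: "lb-internal.example.com"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx   # hypothetical selector
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443
```

Ingress objects keep routing by host name as before; only the hostname annotation and the CNAME records are new in this setup.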
Thanks for your reply. So I would create a new A record (for example, lb-internal.xxxx.net), add that to the LB config using service.beta.kubernetes.io/do-loadbalancer-hostname, and then create individual CNAME records for each host I need to access internally? Would I also need to add each CNAME to the SSL certificate? (We're using Let's Encrypt via DO as well.) |
Ah OK, I was overcomplicating it in my head. I already had all the required DNS records for each host and the SSL and Ingress config. All I needed to do was add a new A record (lb-internal.xxxxx.net) and update the LB service definition to use it, and that fixed it - the workaround is working as described. |
I did the same thing as @spenceclark and it worked for me! |
@timoreimann I'm currently testing the proposed workaround and, while it seems to work as expected, there is one thing that I don't understand: The guide says:
Why should I configure additional domains using CNAMEs to the same hostname instead of A records to the LB external IP? I'm worried about this because I run multiple websites (each with its own domain), and having to point each domain to the common hostname would have two issues: |
My current configuration seems to work with features like PROXY protocol, http-to-https redirect, TLS passthrough and cert-manager:
What am I missing? I suspect the "over complication" by @spenceclark was due to the same thing... Thank you for your time |
The upstream issue claims this was fixed 4 years ago, yet here we are; the problem seems to remain? At least it does not work with the PROXY protocol enabled. Here is my use case: I have a domain name bound to an A record pointing to a managed LB. The LB is defined in Kubernetes and forwards port 443 to the cluster. The cluster pulls Docker registry images from the same domain name. If I enable the PROXY protocol, it breaks the pull: I cannot deploy images in the cluster that reference a self-hosted image. I hope that was clear. It does not sound like an exotic use case at all. The workaround of creating a CNAME lb.domain.com pointing to domain.com and adding an annotation does work, but why should we need that?
When a pod within the cluster connects to a load balancer HTTPS port that is configured to perform TLS termination (i.e. has a certificate configured), TLS is not terminated and the connection is forwarded to the pod HTTP port as-is. This causes traffic from within the cluster to fail.
The Service definition:
(See also the https-with-cert-nginx.yml example.)
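For context, a Service along the lines of the linked example looks roughly like this (reconstructed as an approximation, not the reporter's actual manifest; the certificate ID is a placeholder). The LB terminates TLS on 443 and forwards plain HTTP to the pods, matching the external flow shown below:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: https-with-cert
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "your-certificate-id"  # placeholder
spec:
  type: LoadBalancer
  selector:
    app: nginx-example
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 80
```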
Connection flow:
External -> LB proto HTTPS port 443 -> Service proto HTTP port 443 -> Pod proto HTTP port 80
Internal -> LB proto HTTPS port 443 -> Service proto HTTPS port 443 -> Pod proto HTTPS port 80
(For DigitalOcean engineers, I posted debugging information in support ticket 3402891.)