DO Proxy Protocol broken header #3996
It seems HTTPS traffic is being sent to the HTTP port; please check the port mapping in the ingress-nginx service and your DO console |
@aledbf my service ports are
and the port forwarding on my load balancer is set to |
Can confirm this issue, same configuration here across the board. |
Looking for a solution to this as well. All public requests work, but internal traffic to a host of the ingress fails. |
It works for me using the helm chart with the following values:
## nginx configuration
## https://github.com/helm/charts/blob/master/stable/nginx-ingress/values.yaml
## https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#use-proxy-protocol
## https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-loadbalancer
controller:
config:
use-proxy-protocol: "true"
service:
externalTrafficPolicy: "Local"
annotations:
# https://www.digitalocean.com/docs/kubernetes/how-to/configure-load-balancers/
# https://github.com/digitalocean/digitalocean-cloud-controller-manager/blob/master/docs/controllers/services/annotations.md
service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true" |
I've got a similar setup, which works if I access the host publicly. However, accessing the host from within the cluster fails (i.e. a server-side request from one pod to another pod using the host). Ingress
Config
Curl Example Error
Log from failed request
|
I don't think |
@tlaverdure that's because you are not specifying the flag |
That is correct. |
@dperetti I have @aledbf I'm not using a curl command, but I get the same errors. |
I also have: |
@dperetti thanks for the tip. I think I added it initially when testing and forgot to remove it after I added that annotation to the @aledbf I'm experiencing this with any type of server-side HTTP request. Curl was used to verify the issue, but any server-side language that makes an HTTP request to the host (e.g. Node) is failing. |
I also have |
Yes, Proxy Protocol is enabled. |
Just tested setting |
OK, I've had some advice back from Digital Ocean
The only problem is I'm not entirely sure how to find which pods have the issue and need updating. |
Also, when I turn on proxy protocol, the logs suggest that not all requests have a broken header, so how do I identify what's causing the broken header and fix it? |
I ran into this, and at one point I deleted the ingress service and recreated it and it worked. I got the broken headers issue when I had the nginx configmap set but not the annotations on the ingress service that created the DO LB. Manually configuring "Proxy Protocol" on the LB with the Web UI didn't work for me. Anyway, here is a config that worked for me: mandatory.yaml
cloud-generic.yaml
ingress.yaml (I wanted a whitelist for only CloudFlare)
|
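For anyone wanting the same Cloudflare-only restriction, here is a minimal sketch of what such an ingress.yaml can look like, assuming the standard whitelist-source-range annotation. The hostname, service name, and CIDR ranges below are placeholders, not the poster's actual values; the real Cloudflare ranges are published at https://www.cloudflare.com/ips/:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  annotations:
    # Only allow the listed source ranges (placeholders here). With proxy
    # protocol enabled, nginx evaluates the real client IP, not the LB's.
    nginx.ingress.kubernetes.io/whitelist-source-range: "203.0.113.0/24,198.51.100.0/24"
spec:
  ingressClassName: nginx
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80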
Is it possible to have nginx listen on the same http/https ports both with and without proxy protocol, like in this setup? |
@w1ndy no. You need a custom template for that: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/custom-template/ |
The same issue happens in a Kubernetes cluster on top of an OpenStack cloud (using openstack-cloud-controller-manager). The Ingress service can be accessed from outside the cluster, but not from a cluster node or from within a pod. |
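The hostname workaround described later in this thread for DigitalOcean reportedly has an OpenStack counterpart: openstack-cloud-controller-manager supports a loadbalancer.openstack.org/hostname annotation. A sketch, assuming your OCCM version supports that annotation (the DNS name is a placeholder):

kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  annotations:
    # Publish a hostname instead of the LB IP in the Service status, so
    # kube-proxy does not short-circuit in-cluster traffic past the LB.
    loadbalancer.openstack.org/hostname: "ingress.example.com"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https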
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
/remove-lifecycle stale |
I'm a bit unclear whether this issue is the same one we just encountered, but perhaps the following is helpful: We have a microservice architecture where services talk to each other via the axios library, all inside the same cluster. What we'd misconfigured was the URL by which the services talk to each other. We had one service talk to the other via the external DNS record by which the target service was known, e.g. foo.domain.com, causing traffic to go all the way out and back into the cluster again. When nginx tried to handle the request, the header looked broken because the request wasn't preceded by the PROXY protocol preamble. By changing the URL of the target service to the internal DNS record, e.g. the Service's cluster-local name, the traffic stays inside the cluster and the problem went away (see the sketch below). |
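A sketch of that fix, with hypothetical names (caller, foo, the default namespace, and the FOO_SERVICE_URL variable are all illustrative, not from the original comment): point the calling service at the in-cluster DNS name so requests never leave the cluster:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: caller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: caller
  template:
    metadata:
      labels:
        app: caller
    spec:
      containers:
        - name: caller
          image: node:20-alpine # hypothetical image
          env:
            # Was: https://foo.domain.com, which reached nginx without the
            # PROXY protocol preamble. The cluster-internal DNS name keeps
            # the request off the external load balancer entirely:
            - name: FOO_SERVICE_URL
              value: "http://foo.default.svc.cluster.local"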
Temporarily solved it by using Cloudflare proxy mode for subdomains. Looking forward to the resolution of this issue. |
Hi,
kind: Service
apiVersion: v1
metadata:
name: ingress-nginx
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
annotations:
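    # With the LB IP in the Service status, kube-proxy short-circuits
    # in-cluster traffic straight to the nodes, bypassing the LB and its
    # PROXY protocol framing. Publishing a hostname instead (the annotation
    # below) keeps that traffic going out through the load balancer.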
service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
service.beta.kubernetes.io/do-loadbalancer-hostname: "do-k8s.example.com"
spec:
externalTrafficPolicy: Local
type: LoadBalancer
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
ports:
- name: http
port: 80
targetPort: http
- name: https
port: 443
targetPort: https
|
Thank you |
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
Hello, |
Hi @Oznup, |
This is precisely the problem we are facing and the workaround was to add the hostname as described. I can confirm that this also works with wildcard DNS matches. |
The workaround does not work for me. |
adding Basically it contains the following steps:
After this you can create your Ingress resource and use Cert Manager to issue an SSL certificate as usual, and when you |
I'm facing the same problem. How can it be solved? |
I have the same issue with the option while integrating it with AKS services. Same issue, same error: |
Hello, I am running EKS with the kubernetes nginx ingress on an NLB with SSL termination, but the proxy protocol annotations were not working.
controller:
electionID: external-ingress-controller-leader
ingressClassResource:
name: external-nginx
enabled: true
default: false
controllerValue: "k8s.io/external-ingress-nginx"
config:
    use-proxy-protocol: "true"
service:
annotations:
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "environment=dev,owner=platform-engineering"
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:<REDACTED>:certificate/<REDACTED>
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
service.beta.kubernetes.io/aws-load-balancer-type: nlb
service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-<REDACTED>, subnet-<REDACTED>
# below annotations were not working
# service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
      # service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: "preserve_client_ip.enabled=false,proxy_protocol_v2.enabled=true"
But once the NLB and ingress objects are created, the services won't be reachable, because proxy protocol on the NLB target groups was not turned on automatically even with the provided annotations. I got the below error in the controller pod logs:
2022/11/03 20:50:23 [error] 32#32: *26209 broken header: "GET /index.html HTTP/1.1
Host: nginx.<REDACTED>
Connection: keep-alive
Cache-Control: max-age=0
sec-ch-ua: "Google Chrome";v="107", "Chromium";v="107", "Not=A?Brand";v="24"
sec-ch-ua-mobile: ?0
sec-ch-ua-platform: "macOS"
DNT: 1
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Sec-Fetch-Site: cross-site
Sec-Fetch-Mode: navigate
Sec-Fetch-User: ?1
Sec-Fetch-Dest: document
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.9,ta;q=0.8
" while reading PROXY protocol, client: <REDACTED>, server: 0.0.0.0:80 Once I turned them on manually through console (NLB -> Listeners -> Target Group attributes), it works (I get 200 response in the same controller logs). But both the annotations in the helm values does not seem to work. Can anyone help on this ? |
Appending '--haproxy-protocol' to curl commands works |
I still have the same issue but on bare metal, and I am not able to resolve it. Though, I have a question: why does everybody use externalTrafficPolicy: Local? This policy preserves the source IP anyway. What is the benefit of proxy protocol if I am going to use the Local policy? |
For anyone stumbling across this using Hetzner LoadBalancers: |
In our case the problem was that we tried to deploy two environments on nginx (backend) port 80 - one basic and one going through proxy:
We thought that traffic from the first LB (configured to use proxy protocol) would just reach the "listen 80 proxy_protocol" block while traffic from the second LB would reach the plain "listen 80;" block, but that doesn't happen: you get "broken header: ... while reading PROXY protocol, client: ..." errors instead. Nginx does not allow configuring one port in these two ways and fails to select the right server block, even if the correct one exists. One solution is to deploy one of the environments on another port - this works fine; see the sketch below. |
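A sketch of the port-split idea, assuming the ingress-nginx helm chart is used for the proxy-protocol environment (the port numbers are arbitrary placeholders): run that controller on its own container ports so the two listen configurations never share port 80:

controller:
  config:
    use-proxy-protocol: "true"
  containerPort:
    # Moved off 80/443 so the non-proxy environment keeps those ports.
    http: 8080
    https: 8443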
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
NGINX Ingress controller version:
0.24.0
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-04T04:48:03Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:31:33Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Environment:
What happened:
Digital Ocean now allows for use of Proxy Protocol
https://www.digitalocean.com/docs/kubernetes/how-to/configure-load-balancers/#proxy-protocol
So I've added the annotation to my service
and updated my config as follows
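(The actual snippets are missing from this copy of the issue; judging from the DO document linked above and the working configs later in this thread, they were presumably along these lines:)

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  # Tell ingress-nginx to expect the PROXY protocol preamble from the LB.
  use-proxy-protocol: "true"

with the LoadBalancer Service carrying the annotation service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true".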
However, once I've applied these changes, I get lots of errors such as the following
Digital Ocean's response to these errors was
I can't see what I'm missing
What you expected to happen:
No errors