Connection timed out always seen for UDP when the number of requests/responses is matched, due to a wrong perception of upstream delay #877

bnnk opened this issue Apr 9, 2024 · 4 comments


bnnk commented Apr 9, 2024

Describe the bug

When the number of UDP requests/responses is matched, the session should be deleted without any error log.

To reproduce

Configure proxy_requests/proxy_responses to 10/10, send 10 request packets, and receive 10 responses.
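
For reference, a minimal reproduction client along these lines can be used (a sketch, not the exact tool from this report; it assumes the proxy listens on 127.0.0.1:3123 as in the configuration below, and that the upstream answers each request datagram with exactly one response):

    /* hypothetical reproduction client: send 10 UDP datagrams to the
       proxy and count how many responses come back */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/time.h>
    #include <unistd.h>

    int main(void)
    {
        int                 fd, i, got = 0;
        char                buf[512];
        ssize_t             n;
        struct timeval      tv = { 2, 0 };          /* 2s receive timeout */
        struct sockaddr_in  proxy;

        fd = socket(AF_INET, SOCK_DGRAM, 0);
        setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

        memset(&proxy, 0, sizeof(proxy));
        proxy.sin_family = AF_INET;
        proxy.sin_port = htons(3123);
        inet_pton(AF_INET, "127.0.0.1", &proxy.sin_addr);

        for (i = 0; i < 10; i++) {
            char  msg[32];
            int   len = snprintf(msg, sizeof(msg), "req-%d", i);

            sendto(fd, msg, len, 0, (struct sockaddr *) &proxy, sizeof(proxy));

            /* assumption: one response datagram per request */
            n = recv(fd, buf, sizeof(buf), 0);
            if (n > 0) {
                got++;
            }
        }

        printf("sent 10 requests, received %d responses\n", got);
        close(fd);
        return 0;
    }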

Expected behavior

After 10 requests/responses, the session should be cleaned up without an error log saying it timed out.

Your environment

nginx:alpine

Additional context


                if (pscf->responses == NGX_MAX_INT32_VALUE
                    || (u->responses >= pscf->responses * u->requests))
                {

                    /*
                     * successfully terminate timed out UDP session
                     * if expected number of responses was received
                     */

                    handler = c->log->handler;
                    c->log->handler = NULL;

                    ngx_stream_proxy_finalize(s, NGX_STREAM_OK);

                    c->log->handler = handler;

                    return;
                }

I expect the condition above should rather be:

                if (pscf->responses == NGX_MAX_INT32_VALUE
                    || (u->responses >= u->requests))   /* <-- like this */
                {
                    ...

I always see this log:

2024/04/08 20:31:58 [error] 1932#0: *131 upstream timed out (110: Connection timed out) while proxying connection, udp client: 127.0.0.1, server: 0.0.0.0:3123, upstream: "127.0.0.1:8084", bytes from/to client:0/170, bytes from/to upstream:170/0
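
For illustration, here is a standalone sketch (not nginx code, just the arithmetic, assuming u->requests = 10 client datagrams, u->responses = 10 upstream datagrams, and pscf->responses = 10 from proxy_responses) showing how the current check and the proposed one evaluate:

    #include <stdio.h>

    int main(void)
    {
        unsigned  requests = 10;        /* u->requests: client datagrams     */
        unsigned  responses = 10;       /* u->responses: upstream datagrams  */
        unsigned  pscf_responses = 10;  /* pscf->responses: proxy_responses  */

        /* check currently in ngx_stream_proxy_module.c */
        printf("responses >= pscf_responses * requests : %s (needs %u)\n",
               responses >= pscf_responses * requests ? "true" : "false",
               pscf_responses * requests);

        /* check proposed in this issue */
        printf("responses >= requests                  : %s\n",
               responses >= requests ? "true" : "false");

        return 0;
    }

With these values the current check requires 100 responses before the session can be finalized cleanly, which matches the timed-out log shown above; the proposed check would be satisfied after 10.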

thresheek (Collaborator) commented

Hi @bnnk,

can you show the configuration you're using?


bnnk commented Apr 10, 2024

@thresheek Please find the configuration below:

upstream ts_mod_transporter-app {
    zone ts_mod_transporter-app 256k;

    random two least_conn;

    server 127.0.0.1:8080 max_fails=0 fail_timeout=10s max_conns=0;
    server 127.0.0.1:8081 max_fails=0 fail_timeout=10s max_conns=0;
    server 127.0.0.1:8082 max_fails=0 fail_timeout=10s max_conns=0;
    server 127.0.0.1:8083 max_fails=0 fail_timeout=10s max_conns=0;
    server 127.0.0.1:8084 max_fails=0 fail_timeout=10s max_conns=0;
}

server {
    listen 3123 udp;

    proxy_requests 10;
    proxy_responses 10;

    proxy_pass ts_mod_transporter-app;

    proxy_timeout 8s;
    proxy_connect_timeout 12s;

    proxy_next_upstream on;
    proxy_next_upstream_timeout 20s;
    proxy_next_upstream_tries 1;
}


bnnk commented Apr 12, 2024

@thresheek Any input on this? Does this look like a bug?


bnnk commented Apr 17, 2024

Can someone help with this? Is this a bug? If not, why the multiplier?
