
monolake proxy under benchmark gives low throughput or high latency #114

Open · xiaosongyang-sv opened this issue Jul 30, 2024 · 3 comments
Labels: C-bug This is a bug-report. Bug-fix PRs use `C-enhancement` instead.

xiaosongyang-sv (Collaborator) commented Jul 30, 2024:
Following the README, we can benchmark the monolake proxy. Currently, monolake gives low throughput or high latency under a wrk2 benchmark. With the same network topology, Nginx and Traefik give much better results.
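For reference, the run below used 20 threads and 1000 connections for one minute. With wrk2, that corresponds to an invocation along the lines of `wrk -t20 -c1000 -d1m --latency -R<rate> http://<host>:8402`. Note that wrk2 requires a target throughput via `-R`, and the rate used for this run is not recorded in the output, so the `<rate>` placeholder is an assumption.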

```
Running 1m test @ http://ec2-18-191-195-50.us-east-2.compute.amazonaws.com:8402
  20 threads and 1000 connections
  Thread calibration: mean lat.: 86.401ms, rate sampling interval: 263ms
  Thread calibration: mean lat.: 94.680ms, rate sampling interval: 258ms
  Thread calibration: mean lat.: 94.880ms, rate sampling interval: 263ms
  Thread calibration: mean lat.: 222.463ms, rate sampling interval: 361ms
  Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 96.783ms, rate sampling interval: 261ms
  Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 226.143ms, rate sampling interval: 363ms
  Thread calibration: mean lat.: 92.596ms, rate sampling interval: 360ms
  Thread calibration: mean lat.: 97.167ms, rate sampling interval: 257ms
  Thread calibration: mean lat.: 73.641ms, rate sampling interval: 370ms
  Thread calibration: mean lat.: 213.978ms, rate sampling interval: 358ms
  Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 96.996ms, rate sampling interval: 256ms
  Thread calibration: mean lat.: 208.880ms, rate sampling interval: 358ms
  Thread calibration: mean lat.: 91.494ms, rate sampling interval: 258ms
  Thread calibration: mean lat.: 94.220ms, rate sampling interval: 261ms
  Thread calibration: mean lat.: 94.591ms, rate sampling interval: 257ms
  Thread calibration: mean lat.: 96.692ms, rate sampling interval: 268ms
  Thread calibration: mean lat.: 87.983ms, rate sampling interval: 260ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    30.75s    13.98s    0.93m    58.69%
    Req/Sec      1.23     11.96   333.00    98.74%
  Latency Distribution (HdrHistogram - Recorded Latency)
 50.000%   33.16s
 75.000%   42.86s
 90.000%    0.87m
 99.000%    0.89m
 99.900%    0.93m
 99.990%    0.94m
 99.999%    0.94m
100.000%    0.94m

  Detailed Percentile spectrum:
       Value   Percentile   TotalCount 1/(1-Percentile)

    9756.671     0.000000           21         1.00
    9994.239     0.100000          582         1.11
   14581.759     0.200000         1149         1.25
   19447.807     0.300000         1740         1.43
   24510.463     0.400000         2264         1.67
   33161.215     0.500000         2908         2.00
   33357.823     0.550000         3129         2.22
   34701.311     0.600000         3395         2.50
   38240.255     0.650000         3714         2.86
   40730.623     0.700000         3960         3.33
   42860.543     0.750000         4311         4.00
   42958.847     0.775000         4390         4.44
   43122.687     0.800000         4529         5.00
   47480.831     0.825000         4691         5.71
   47644.671     0.850000         4839         6.67
   47939.583     0.875000         4989         8.00
   49446.911     0.887500         5023         8.89
   52101.119     0.900000         5167        10.00
   52101.119     0.912500         5167        11.43
   52232.191     0.925000         5257        13.33
   52330.495     0.937500         5324        16.00
   52363.263     0.943750         5347        17.78
   52559.871     0.950000         5390        20.00
   52592.639     0.956250         5458        22.86
   52592.639     0.962500         5458        26.67
   52625.407     0.968750         5508        32.00
   52625.407     0.971875         5508        35.56
   52756.479     0.975000         5553        40.00
   52756.479     0.978125         5553        45.71
   52756.479     0.981250         5553        53.33
   52789.247     0.984375         5575        64.00
   53051.391     0.985938         5585        71.11
   53084.159     0.987500         5587        80.00
   53575.679     0.989062         5597        91.43
   54034.431     0.990625         5605       106.67
   54132.735     0.992188         5613       128.00
   54558.719     0.992969         5621       142.22
   54591.487     0.993750         5623       160.00
   54624.255     0.994531         5627       182.86
   55050.239     0.995313         5633       213.33
   55115.775     0.996094         5637       256.00
   55148.543     0.996484         5638       284.44
   55279.615     0.996875         5640       320.00
   55541.759     0.997266         5643       365.71
   55574.527     0.997656         5644       426.67
   55607.295     0.998047         5647       512.00
   55640.063     0.998242         5649       568.89
   55640.063     0.998437         5649       640.00
   56033.279     0.998633         5650       731.43
   56066.047     0.998828         5652       853.33
   56066.047     0.999023         5652      1024.00
   56098.815     0.999121         5655      1137.78
   56098.815     0.999219         5655      1280.00
   56098.815     0.999316         5655      1462.86
   56098.815     0.999414         5655      1706.67
   56098.815     0.999512         5655      2048.00
   56098.815     0.999561         5655      2275.56
   56098.815     0.999609         5655      2560.00
   56131.583     0.999658         5657      2925.71
   56131.583     1.000000         5657          inf
#[Mean = 30751.497, StdDeviation = 13976.741]
#[Max = 56098.816, Total count = 5657]
#[Buckets = 27, SubBuckets = 2048]

  6342 requests in 1.00m, 2.06MB read
  Socket errors: connect 0, read 0, write 0, timeout 23514
Requests/sec: 105.24
Transfer/sec: 35.04KB
```

goldenrye added the C-bug label on Jul 31, 2024
goldenrye added this to the v1.0 milestone on Jul 31, 2024
goldenrye (Contributor) commented:

@xiaosongyang-sv can you check whether this issue can be reproduced in a local setup?

goldenrye (Contributor) commented Jul 31, 2024:

> 6342 requests in 1.00m, 2.06MB read
> Socket errors: connect 0, read 0, write 0, timeout 23514
> Requests/sec: 105.24
> Transfer/sec: 35.04KB

xiaosongyang-sv (Collaborator, Author) commented:
Found the root cause: it is the DNS lookup.

When we configure the proxied destination as an IP address instead of a domain name, the timeouts are gone and throughput improves a lot.

But nginx handles the domain-name case much better: it does not hit timeout errors.

Monolake still needs to fix the DNS lookup issue; some layer in the code needs to cache the DNS lookup result.
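For illustration, below is a minimal sketch of the kind of per-authority caching that could sit in front of the resolver. This is not monolake's actual code: the names (`DnsCache`, `resolve`) and the 30-second TTL are hypothetical, and it uses the standard library's blocking resolver where a real fix would use an async resolver integrated with monolake's runtime. The point is just that the lookup is memoized per host:port, so under load it happens once per TTL window instead of once per request.

```rust
use std::collections::HashMap;
use std::net::{SocketAddr, ToSocketAddrs};
use std::sync::Mutex;
use std::time::{Duration, Instant};

/// Hypothetical TTL-based cache for resolved upstream addresses.
struct DnsCache {
    ttl: Duration,
    entries: Mutex<HashMap<String, (Vec<SocketAddr>, Instant)>>,
}

impl DnsCache {
    fn new(ttl: Duration) -> Self {
        Self { ttl, entries: Mutex::new(HashMap::new()) }
    }

    /// Return cached addresses while they are fresh; otherwise
    /// re-resolve and refresh the entry.
    fn resolve(&self, authority: &str) -> std::io::Result<Vec<SocketAddr>> {
        let mut entries = self.entries.lock().unwrap();
        if let Some((addrs, resolved_at)) = entries.get(authority) {
            if resolved_at.elapsed() < self.ttl {
                return Ok(addrs.clone());
            }
        }
        // Blocking lookup, for the sketch only; holding the lock across
        // it also serializes lookups, which a real proxy would avoid.
        let addrs: Vec<SocketAddr> = authority.to_socket_addrs()?.collect();
        entries.insert(authority.to_owned(), (addrs.clone(), Instant::now()));
        Ok(addrs)
    }
}

fn main() -> std::io::Result<()> {
    let cache = DnsCache::new(Duration::from_secs(30));
    let first = cache.resolve("example.com:80")?;  // real lookup
    let second = cache.resolve("example.com:80")?; // served from cache
    println!("resolved {} addr(s), cached {} addr(s)", first.len(), second.len());
    Ok(())
}
```

A cache like this would also need an eviction/refresh policy and negative caching for failed lookups before it could be considered production-ready.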
