Some crazy error rate when testing ASP.NET Core API endpoints #35

Open
HymanZHAN opened this issue Aug 30, 2022 · 7 comments

@HymanZHAN

HymanZHAN commented Aug 30, 2022

When benchmarking ASP.NET Core API endpoints, rewrk is yielding some bizarre numbers (unexpectedly low) and almost all requests get error responses. On the other hand, testing Rust API endpoints (Actix, Axum) and Node.js endpoints (Nest, Fastify) works fine. Any thoughts? Thanks~

I am running Fedora 36.

ASP.NET Core (.NET 6 + Minimal API) result:

> rewrk -c 256 -t 12 -d 15s -h http://127.0.0.1:5000      
Beginning round 1...
Benchmarking 256 connections @ http://127.0.0.1:5000 for 15 second(s)
  Latencies:
    Avg      Stdev    Min      Max      
    4.96ms   1.82ms   0.12ms   29.61ms  
  Requests:
    Total: 772075  Req/Sec: 51517.15
  Transfer:
    Total: 88.36 MB Transfer Rate: 5.90 MB/Sec

772112 Errors: connection closed

> wrk -c 256 -t 12 -d 15s http://127.0.0.1:5000           
Running 15s test @ http://127.0.0.1:5000
  12 threads and 256 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   360.27us    0.90ms  43.45ms   96.32%
    Req/Sec    78.35k    11.33k  184.48k    76.26%
  14071720 requests in 15.10s, 1.60GB read
Requests/sec: 932057.72
Transfer/sec:    108.44MB

NestJS (w/ Fastify):

> rewrk -c 256 -t 12 -d 15s -h http://127.0.0.1:3000                 
Beginning round 1...
Benchmarking 256 connections @ http://127.0.0.1:3000 for 15 second(s)
  Latencies:
    Avg      Stdev    Min      Max      
    2.18ms   0.56ms   1.02ms   39.25ms  
  Requests:
    Total: 1756945 Req/Sec: 117196.97
  Transfer:
    Total: 296.57 MB Transfer Rate: 19.78 MB/Sec

> wrk -c 256 -t 12 -d 15s http://127.0.0.1:3000     
Running 15s test @ http://127.0.0.1:3000
  12 threads and 256 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.17ms  202.46us  15.30ms   96.03%
    Req/Sec     9.75k     1.13k   39.03k    99.11%
  1748814 requests in 15.09s, 295.20MB read
Requests/sec: 115855.62
Transfer/sec:     19.56MB
@ChillFish8
Member

ChillFish8 commented Aug 30, 2022

Any logs on your ASP.NET server? The crazy rates you're getting with ASP are because the server seems to just be immediately closing the connection. Perhaps there's some weird additional check ASP is doing which is failing, causing it to close the connection?
Unfortunately, the most common issue, it redirecting to HTTPS, is probably not the cause here, because that should show up as 3xx status codes.

@HymanZHAN
Author

I disabled the HTTPS redirect, so it should be something else. The logger was disabled for performance reasons, but I will turn on the trace log and take a look.
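
Roughly what I plan to change (a minimal sketch, assuming the default .NET 6 minimal-API template; the endpoint here is just a placeholder, not the exact code I'm benchmarking):

var builder = WebApplication.CreateBuilder(args);

// Surface Kestrel's bad-request details in the console while debugging.
builder.Logging.SetMinimumLevel(LogLevel.Debug);

var app = builder.Build();

// app.UseHttpsRedirection(); // left out so plain-HTTP benchmark traffic isn't redirected

app.MapGet("/", () => "Hello World!");

app.Run();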

@HymanZHAN
Author

Ah, I should've thought of giving the log a shot last night. Yes, the log says "Invalid Host header: '127.0.0.1'".

dbug: Microsoft.AspNetCore.Server.Kestrel.BadRequests[17]
      Connection id "0HMKAI4MMDM1L" bad request data: "Invalid Host header: '127.0.0.1'"
      Microsoft.AspNetCore.Server.Kestrel.Core.BadHttpRequestException: Invalid Host header: '127.0.0.1'
         at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.Http1Connection.ValidateNonOriginHostHeader(String hostText)
         at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.Http1Connection.EnsureHostHeaderExists()
         at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.Http1Connection.TryParseRequest(ReadResult result, Boolean& endConnection)
         at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpProtocol.ProcessRequests[TContext](IHttpApplication`1 application)
         at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpProtocol.ProcessRequestsAsync[TContext](IHttpApplication`1 application)
dbug: Microsoft.AspNetCore.Server.Kestrel.Connections[2]
      Connection id "0HMKAI4MMDM1L" stopped.

@ChillFish8
Member

Ah, it looks like ASP is complaining that the Host header is being set to 127.0.0.1 rather than 127.0.0.1:5000 (provided it's not just stripping out the port to make things confusing).

To get around this, you can probably override the header by passing the -H "Host: 127.0.0.1:5000" flag.
Technically this looks like a small bug on our part, but also a slightly iffy way of handling the header on ASP's side; the MDN docs suggest it should return a 4xx error instead :/

Anyway, using the Host flag should fix it, so give it a go.
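
For illustration, the difference on the wire should be roughly this (assuming rewrk really is sending the host without the port, rather than Kestrel just stripping it when logging):

GET / HTTP/1.1
Host: 127.0.0.1          <- what Kestrel reports rejecting

GET / HTTP/1.1
Host: 127.0.0.1:5000     <- what the -H override sends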

@HymanZHAN
Author

Yep that did the trick. Thanks for your quick (in fact, instant:zap:) follow-up~

The difference compared to wrk is quite a bit more significant than with the other frameworks, though. Could it be due to the -H flag?

~ took 15s 
> rewrk -c 256 -t 12 -d 15s -H "Host: 127.0.0.1:5000" -h http://127.0.0.1:5000
Beginning round 1...
Benchmarking 256 connections @ http://127.0.0.1:5000 for 15 second(s)
  Latencies:
    Avg      Stdev    Min      Max      
    0.62ms   0.45ms   0.02ms   41.29ms  
  Requests:
    Total: 6203201 Req/Sec: 413782.57
  Transfer:
    Total: 721.74 MB Transfer Rate: 48.14 MB/Sec


~ took 15s 
> wrk -c 256 -t 12 -d 15s http://127.0.0.1:5000                     
Running 15s test @ http://127.0.0.1:5000
  12 threads and 256 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   392.48us    1.11ms  46.51ms   96.60%
    Req/Sec    77.93k    10.56k  153.91k    74.61%
  13968915 requests in 15.06s, 1.59GB read
Requests/sec: 927824.89
Transfer/sec:    107.95MB

Anyway, that's a topic for another issue if needed. Please close the issue as you see fit.

@ChillFish8
Member

I imagine it's probably down to wrk using HTTP/1 pipelining, which tends to inflate artificial benchmarks.

rewrk doesn't do any HTTP pipelining, which is probably why that difference is there, but it's hard to say.

One way to check is to run a simple Python uvicorn server and benchmark it with both tools. As uvicorn doesn't support pipelining, wrk won't inflate the results.

The reason why we don't do pipelining: https://stackoverflow.com/questions/30477476/why-is-pipelining-disabled-in-modern-browsers
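
For anyone curious, here's a rough sketch of what pipelining looks like at the socket level (the target address, port and request count are arbitrary; this is not how either tool is implemented):

using System;
using System.Linq;
using System.Net.Sockets;
using System.Text;

// Open one connection and write several requests back-to-back before reading
// anything -- that's HTTP/1.1 pipelining in a nutshell.
using var client = new TcpClient("127.0.0.1", 5000);
using NetworkStream stream = client.GetStream();

string request = "GET / HTTP/1.1\r\nHost: 127.0.0.1:5000\r\n\r\n";
byte[] batch = Encoding.ASCII.GetBytes(string.Concat(Enumerable.Repeat(request, 4)));
stream.Write(batch, 0, batch.Length);

// Read whatever has arrived; the responses come back in order on the same connection.
var buffer = new byte[64 * 1024];
int read = stream.Read(buffer, 0, buffer.Length);
Console.WriteLine(Encoding.ASCII.GetString(buffer, 0, read));

A plain request/response client has to wait for each response before sending the next request on that connection, which is where the throughput gap tends to come from.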

@HymanZHAN
Author

Yeah, that might be the cause. With uvicorn (I am testing the blacksheep framework):

~ 
> wrk -c 256 -t 12 -d 15s http://127.0.0.1:5000
Running 15s test @ http://127.0.0.1:5000
  12 threads and 256 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   845.50us  724.26us  18.18ms   89.52%
    Req/Sec    28.38k     7.57k   91.68k    83.03%
  5093954 requests in 15.10s, 709.26MB read
Requests/sec: 337354.18
Transfer/sec:     46.97MB

~ took 15s 
> rewrk -c 256 -t 12 -d 15s -h http://127.0.0.1:5000
Beginning round 1...
Benchmarking 256 connections @ http://127.0.0.1:5000 for 15 second(s)
  Latencies:
    Avg      Stdev    Min      Max      
    0.90ms   0.45ms   0.04ms   7.59ms   
  Requests:
    Total: 4238345 Req/Sec: 282822.26
  Transfer:
    Total: 590.13 MB Transfer Rate: 39.38 MB/Sec

Still not exactly the same, but I'd consider them in the same ballpark.
