I've created a Tornado service using HTTPServer, where each request triggers an HTTPClient call to another, more time-consuming service. When running the service as a single process and making concurrent requests, later requests wait for earlier ones to complete before they are processed.
I've reviewed the RequestHandler source code and noticed that the request_time method only measures the duration from the start of processing (initialization) to the end of the request, which doesn't include time spent waiting in the queue.
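For what it's worth, I'm currently logging the processing time from the handler side, along these lines (the handler name is just an example):

```python
import logging

import tornado.web

class ExampleHandler(tornado.web.RequestHandler):
    async def get(self):
        self.write("done")

    def on_finish(self):
        # request_time() measures from the start of processing to the end of
        # the request (as described above), so queue time is not included.
        logging.info("%s took %.1f ms", self.request.uri,
                     self.request.request_time() * 1000.0)
```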
I would like to obtain the actual time each request spends in the queue before it starts being processed, as well as the total time (queue time + processing time) it takes to complete each request. This information would allow for more accurate request analysis.
Is there currently a way to obtain the queue time or the time when each request first arrives at the server?
There is no single "queue of requests" such that you could track when a request enters and leaves the queue. Defining "queue time" in Tornado is not easy. I would define it broadly and say that what you're looking for when you talk about queue time is any delay caused by the fact that the event loop can only do one thing at once.
The event loop does have a queue of callbacks, and you could track the total time that the callbacks making up a request spend in that queue; that is one part of queue time. But it's not the only part, and queue-like delays can appear even when no actual queue is involved. Specifically, after a network packet arrives, two things have to happen: the event loop calls epoll (or its equivalent) to discover that the socket is readable, and as a result it adds a callback to the queue to process the new data. Only the second step involves a "queue", but the time between the packet arriving and the epoll call is also time spent waiting for the event loop to get around to it, so it should still count (I suspect this first delay is often more significant than the actual queue, although I don't have any data to back that up).
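If you just want a rough signal rather than precise per-request numbers, one approach (not Tornado-specific, and only an approximation of the delays described above) is to measure how late a periodic callback wakes up; for example:

```python
import asyncio

async def report_loop_lag(interval: float = 0.5, threshold: float = 0.05) -> None:
    """Periodically measure how late we resume from asyncio.sleep().

    Any delay beyond `interval` is time the event loop spent doing other
    things (running callbacks, polling, etc.) before getting back to us.
    """
    loop = asyncio.get_running_loop()
    while True:
        start = loop.time()
        await asyncio.sleep(interval)
        lag = loop.time() - start - interval
        if lag > threshold:
            print(f"event loop lag: {lag * 1000:.1f} ms")

# Run it alongside the application, e.g. with
#   tornado.ioloop.IOLoop.current().spawn_callback(report_loop_lag)
```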
You could imagine adding more instrumentation throughout the stack to try to capture this more precisely. But that's expensive, and in the end it's looking at the wrong side of the problem: once you've seen that a slow request spent X ms in the queue, what then? You still need to figure out what's blocking the event loop and fix that. But that's a separate (and easier!) problem: use asyncio debug mode and fix anything it reports as slow (consider reducing slow_callback_duration from its default). Once there are no individual slow callbacks, you can reasonably expect queue time to be minimal (and uniformly distributed), and you won't need to worry as much about tracking per-request queue time.
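Concretely, enabling that looks something like this (the handler and port are just placeholders):

```python
import asyncio
import logging

import tornado.web

class PingHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("ok")

async def main():
    # The "Executing <Handle ...> took X seconds" warnings go through the
    # 'asyncio' logger, so make sure logging is configured.
    logging.basicConfig(level=logging.WARNING)

    loop = asyncio.get_running_loop()
    loop.set_debug(True)                # same effect as PYTHONASYNCIODEBUG=1 or -X dev
    loop.slow_callback_duration = 0.05  # warn on callbacks over 50 ms (default is 0.1 s)

    app = tornado.web.Application([(r"/ping", PingHandler)])
    app.listen(8888)
    await asyncio.Event().wait()

asyncio.run(main())
```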