Feature Request: Implement Progress Tracking for Long-Running Queries #285
Comments
Unfortunately there's no way to do this currently using existing Python HTTP libraries and ClickHouse's HTTP 1.1 interface. While intermediate progress headers are returned by ClickHouse, neither the […]
They do read it. They just ignore duplicate headers in the […]
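For reference, Python's standard library behaves the same way: `http.client` does parse every header line, but `getresponse()` only returns once the entire header block has arrived, i.e. once the query has finished, and repeated values are folded together after the fact. A minimal sketch, assuming a local ClickHouse server on the default HTTP port 8123 with the default user:

```python
# Demonstrates why standard Python HTTP parsing can't surface progress live:
# getresponse() blocks until ClickHouse sends the final header block, and
# repeated X-ClickHouse-Progress values come back comma-joined afterwards.
# Host, port, and the stand-in query are assumptions for illustration.
import http.client
from urllib.parse import quote

query = "SELECT count() FROM numbers(1000000000)"  # stand-in long query
conn = http.client.HTTPConnection("localhost", 8123)
conn.request("GET", f"/?send_progress_in_http_headers=1&query={quote(query)}")
resp = conn.getresponse()  # returns only after the query has finished
# getheader() joins all repeated values with ", " -- no per-update access.
print(resp.getheader("X-ClickHouse-Progress"))
conn.close()
```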
@pkit -- Yes, I realized that recently when digging into the code. It is truly irritating that there are no hooks or other means to actually capture these in real time.
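One way to actually watch the headers arrive is to drop below the HTTP libraries entirely and read the response header block off a raw socket, line by line. A rough sketch, not a supported approach; host, port, user, the query, and the interval setting are all placeholders:

```python
# A minimal sketch of watching ClickHouse's intermediate progress headers in
# real time by reading the HTTP response off a raw socket, below urllib3 /
# requests. Assumes a local server on port 8123 with the default user.
import socket
from urllib.parse import quote

QUERY = "SELECT count() FROM numbers(5000000000)"  # stand-in long query
params = (
    "send_progress_in_http_headers=1"
    "&http_headers_progress_interval_ms=500"
    f"&query={quote(QUERY)}"
)
request = (
    f"GET /?{params} HTTP/1.1\r\n"
    "Host: localhost\r\n"
    "Connection: close\r\n"
    "\r\n"
)

with socket.create_connection(("localhost", 8123)) as sock:
    sock.sendall(request.encode())
    reader = sock.makefile("rb")
    for raw_line in reader:
        line = raw_line.decode("latin-1").rstrip("\r\n")
        if not line:  # blank line ends the header block: the query is done
            break
        # Each repeated progress header is visible the moment it arrives,
        # which is exactly what the high-level clients never expose.
        if line.lower().startswith("x-clickhouse-progress"):
            print(line)
```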
Fortunately […]
BTW, authors of […]
Description:
I would like to suggest the implementation of a progress tracking mechanism for long-running queries, such as `insert from S3`. This feature could be incredibly beneficial in monitoring the execution of these extensive operations.
Motivation:
In many cases, queries in ClickHouse can take a substantial amount of time to execute, ranging from several minutes to hours or even days. During such long-running operations, users currently do not have a way to monitor the progress of these queries. Implementing a progress tracking feature, akin to what the ClickHouse CLI client offers, would be extremely beneficial. This would not only improve the user experience by providing real-time updates on query execution but also help in diagnosing and troubleshooting any issues that might arise during the execution of these lengthy queries.
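Until such a feature exists, one possible workaround that does not depend on the progress headers at all is to start the statement with a client-chosen `query_id` and poll `system.processes` for its counters from a second connection. A sketch under stated assumptions; the host, port, sample insert, and poll interval below are illustrative only, not part of any real API:

```python
# A rough workaround sketch: start the long statement with a caller-chosen
# query_id over ClickHouse's HTTP interface, then poll system.processes for
# its progress counters from a second connection.
import threading
import time
import uuid
from urllib.parse import quote
from urllib.request import Request, urlopen

BASE = "http://localhost:8123/"  # assumed local server, default user
query_id = str(uuid.uuid4())
long_query = "INSERT INTO target SELECT * FROM s3('https://bucket/data.csv', 'CSV')"

def run_query() -> None:
    # POST the statement in the body; GET is treated as readonly by ClickHouse.
    req = Request(f"{BASE}?query_id={quote(query_id)}", data=long_query.encode())
    with urlopen(req) as resp:
        resp.read()

worker = threading.Thread(target=run_query, daemon=True)
worker.start()

progress_sql = (
    "SELECT read_rows, total_rows_approx, elapsed "
    f"FROM system.processes WHERE query_id = '{query_id}' FORMAT TSV"
)
while worker.is_alive():
    with urlopen(f"{BASE}?query={quote(progress_sql)}") as resp:
        row = resp.read().decode().strip()
    if row:
        read_rows, total, elapsed = row.split("\t")
        # total_rows_approx can be 0 when the server cannot estimate a total.
        print(f"{read_rows}/{total} rows read, {elapsed}s elapsed")
    time.sleep(2)
```

Once the query finishes it disappears from `system.processes`; final counters can still be read from `system.query_log` if query logging is enabled.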