Support distributed ANALYZE
#2374
What kind of output do you expect? For example, something like:

+-----------+-------------------+-----------------+
| component | plan_type         | plan            |
+-----------+-------------------+-----------------+
| frontend  | Plan with Metrics | RepartitionExec |
| node1     | ~                 | ~               |
| node2     | ~                 | ~               |
| node3     | ~                 | ~               |
+-----------+-------------------+-----------------+
Considering the plan might be further distributed into more parts, I'd prefer to use a (stage, node) tuple:

+---------+--------------------+-------------------+-----------------+
| stage   | node               | plan_type         | plan            |
+---------+--------------------+-------------------+-----------------+
| stage 2 | node 1 (addr: ...) | Plan with Metrics | RepartitionExec |
| stage 1 | node 1 (addr: ...) | ~                 | ~               |
| stage 1 | node 2 (addr: ...) | ~                 | ~               |
| stage 1 | node 3 (addr: ...) | ~                 | ~               |
+---------+--------------------+-------------------+-----------------+

Some key points:
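As a rough illustration of the (stage, node) keyed format proposed above, one row of such an output could be modeled like this. All type and field names here are hypothetical, not actual GreptimeDB code:

```rust
// Hypothetical sketch of one row in the proposed distributed ANALYZE output,
// keyed by the (stage, node) tuple. Names are illustrative only.
#[derive(Debug, Clone)]
pub struct AnalyzeRow {
    pub stage: u32,
    pub node: String,      // e.g. "node 1 (addr: ...)"
    pub plan_type: String, // e.g. "Plan with Metrics"
    pub plan: String,      // rendered plan node, e.g. "RepartitionExec"
}

impl AnalyzeRow {
    // Render the row in the same pipe-separated style as the table above.
    pub fn render(&self) -> String {
        format!(
            "| stage {} | {} | {} | {} |",
            self.stage, self.node, self.plan_type, self.plan
        )
    }
}
```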
I see. By the way, if the verbose option is set, I guess that the output is as follows:

+---------+--------------------+-------------------+-----------------+-------------+-----+
| stage   | node               | plan_type         | plan            | output rows | ... |
+---------+--------------------+-------------------+-----------------+-------------+-----+
| stage 2 | node 1 (addr: ...) | Plan with Metrics | RepartitionExec | 2           | ... |
| stage 1 | node 1 (addr: ...) | ~                 | ~               | 0           | ... |
| stage 1 | node 2 (addr: ...) | ~                 | ~               | 1           | ... |
| stage 1 | node 3 (addr: ...) | ~                 | ~               | 1           | ... |
+---------+--------------------+-------------------+-----------------+-------------+-----+
If you agree with the above format, I would like to work on this issue.
Ahh, sorry for the delay. Other parts look good to me. But things like …
Hi @NiwakaDev, just a friendly ping. Do you have an initial plan or a rough structure for this? I was wondering if you'd like to discuss any open questions or undecided details.
Here's an initial plan:

Two types of output: …

While I haven't yet come up with a solution on how to send both …
Thanks for your thoughtful investigation 👍 I have one concern about passing intermediate metrics (those rendered in …).

Thus I came up with another way: encode and transfer metrics (…). For transferring metrics together with data, I've submitted a PR to add corresponding fields to the proto file: GreptimeTeam/greptime-proto#130 (if we decide to go this way, we can define some general metric types in the proto message instead of a string-string map). Then we don't need …
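As a minimal sketch of the string-string map idea mentioned above: numeric metrics could be flattened into name/value string pairs for transfer and parsed back on the frontend side. The field shapes below are assumptions for illustration, not the actual greptime-proto definition:

```rust
use std::collections::HashMap;

// Encode numeric metrics into the kind of string-string pairs that a proto
// map<string, string> field could carry. Purely illustrative.
fn encode_metrics(metrics: &HashMap<String, u64>) -> Vec<(String, String)> {
    metrics
        .iter()
        .map(|(k, v)| (k.clone(), v.to_string()))
        .collect()
}

// Decode the pairs back into numeric metrics on the receiving (frontend) side,
// silently skipping any value that does not parse as a number.
fn decode_metrics(pairs: &[(String, String)]) -> HashMap<String, u64> {
    pairs
        .iter()
        .filter_map(|(k, v)| v.parse().ok().map(|n| (k.clone(), n)))
        .collect()
}
```

A string-string map is easy to evolve but loses type information, which is why the comment above suggests defining general metric types in the proto message instead.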
One thing I haven't figured out is how we handle different metrics from different nodes. In DataFusion, those metrics are attached to the plan itself. E.g., a join plan has two children, and each child keeps its own metrics. But here we don't have actual child nodes besides MergeScan.

If we don't need to distinguish metrics from each node, we can aggregate them into one sub-tree in MergeScan.
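The "aggregate them into one sub-tree" option could look roughly like this: metrics reported by each datanode (modeled here as a simple name-to-counter map) are summed into a single map that a MergeScan-like node exposes. The types are simplified stand-ins, not DataFusion's actual MetricsSet API:

```rust
use std::collections::HashMap;

// Sum per-node metrics into one aggregated map, discarding which node each
// value came from. Illustrative sketch only.
fn aggregate_node_metrics(
    per_node: &[(&str, HashMap<String, u64>)],
) -> HashMap<String, u64> {
    let mut merged: HashMap<String, u64> = HashMap::new();
    for (_node, metrics) in per_node {
        for (name, value) in metrics {
            *merged.entry(name.clone()).or_insert(0) += value;
        }
    }
    merged
}
```

The trade-off discussed in the thread is visible here: once summed, per-node skew (e.g. one datanode producing all the rows) can no longer be observed, which is why the (stage, node) keyed output was also proposed.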
Is the issue you described related to the code below? Sorry, I might be wrong because I'm not familiar with DataFusion.

If we don't need per-node ANALYZE, I agree with this.
By the way, if we implement your idea, what kind of output do you expect?
(greptimedb/src/query/src/dist_plan/merge_scan.rs, lines 264 to 266 at c7b3677)
We can retrieve metrics from datanodes, but I'm afraid we have to keep and access those "remote metrics" in a different way for this reason.
I would like to have two forms. One is distinguished with the (stage, node) tuple …
Update: at this stage, it is clear that we have to find a way to pass data and execution metrics together in the same query call. @shuiyisong and I are trying to add a method to …

We can assume this issue is resolved (if everything works as expected...) and bring this ticket forward 🙌 @NiwakaDev
Do we go with "aggregate them into one child plan in MergeScan", rather than one plan per node? Something like … or …
What do you think about distinguishing them by the … and the following:

+---------+--------------------+-------------------+-----------------+
| stage   | node               | plan_type         | plan            |
+---------+--------------------+-------------------+-----------------+
| stage 2 | node 1 (addr: ...) | Plan with Metrics | RepartitionExec |
| stage 1 | node 1 (addr: ...) | ~                 | ~               |
| stage 1 | node 2 (addr: ...) | ~                 | ~               |
| stage 1 | node 3 (addr: ...) | ~                 | ~               |
+---------+--------------------+-------------------+-----------------+
Looks good to me, but as you said, I guess we need to think of another approach to implement that. Maybe we need to divide it into two logical plans for two purposes; one is …
Yes, I'm afraid the built-in …
Looks good to me! 👍 I guess we've found answers to all the problems we had previously?
Sorry for the late reply.
Yes! I'll review the above PR tonight and tomorrow. Sorry for the late reply again.
Don't worry! Hope you have had a nice New Year holiday 🎉
@waynexia

{
  "name": "ProjectionExec",
  "metrics": {
    "total_num": 0,
    ~
  },
  "children": [
    {
      "name": "CoalesceBatchesExec",
      "metrics": { ~ },
      "children": [
        ~
      ]
    }
  ]
}

After the JSON format discussion lands, I'll write a rough implementation of the …
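A rough std-only sketch of producing that nested {name, metrics, children} layout from a plan-metrics tree. The types are illustrative; a real implementation would likely derive serde traits instead of formatting JSON by hand:

```rust
// Illustrative plan-metrics tree matching the JSON layout sketched above.
struct PlanMetrics {
    name: String,
    metrics: Vec<(String, u64)>,
    children: Vec<PlanMetrics>,
}

impl PlanMetrics {
    // Render the tree recursively as a JSON string.
    fn to_json(&self) -> String {
        let metrics = self
            .metrics
            .iter()
            .map(|(k, v)| format!("\"{}\": {}", k, v))
            .collect::<Vec<_>>()
            .join(", ");
        let children = self
            .children
            .iter()
            .map(PlanMetrics::to_json)
            .collect::<Vec<_>>()
            .join(", ");
        format!(
            "{{\"name\": \"{}\", \"metrics\": {{{}}}, \"children\": [{}]}}",
            self.name, metrics, children
        )
    }
}
```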
#3113 is merged, and there are some small changes to plan metrics after that. Now we will pass the corresponding physical (execution) plan together with the result …
@waynexia |
What problem does the new feature solve?

EXPLAIN ANALYZE only contains the execution result in the frontend and the gRPC time to each datanode. It works, but it could be more detailed.

What does the feature do?

Also show the detailed ANALYZE result from each datanode, i.e., implement the distributed ANALYZE plan.

Implementation challenges

No response