GH-38837: [Format] Add the specification to pass statistics through the Arrow C data interface #43553
.. Licensed to the Apache Software Foundation (ASF) under one
.. or more contributor license agreements. See the NOTICE file
.. distributed with this work for additional information
.. regarding copyright ownership. The ASF licenses this file
.. to you under the Apache License, Version 2.0 (the
.. "License"); you may not use this file except in compliance
.. with the License. You may obtain a copy of the License at
..
..   http://www.apache.org/licenses/LICENSE-2.0
..
.. Unless required by applicable law or agreed to in writing,
.. software distributed under the License is distributed on an
.. "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
.. KIND, either express or implied. See the License for the
.. specific language governing permissions and limitations
.. under the License.

.. _c-data-interface-statistics:

=====================================================
Passing statistics through the Arrow C data interface
=====================================================

.. warning:: This specification should be considered experimental.

Rationale
=========

Statistics are useful for fast query processing. Many query engines
use statistics to optimize their query plans.

The Apache Arrow format doesn't define statistics, but other formats
that can be read as Apache Arrow data may have statistics. For
example, the Apache Parquet C++ implementation can read an Apache
Parquet file as Apache Arrow data, and the Apache Parquet file may
have statistics.

Use case
--------

One of the :ref:`c-stream-interface` use cases is the following:

1. Module A reads an Apache Parquet file as Apache Arrow data.
2. Module A passes the read Apache Arrow data to module B through the
   Arrow C stream interface.
3. Module B processes the passed Apache Arrow data.

If module A can pass the statistics associated with the Apache Parquet
file to module B through the Arrow C stream interface, module B can
use the statistics to optimize its query plan.

For example, DuckDB uses this approach, but DuckDB couldn't use the
statistics because there was no standardized way to pass them.

.. seealso::

   `duckdb::ArrowTableFunction::ArrowScanBind() in DuckDB 1.1.3
   <https://github.com/duckdb/duckdb/blob/v1.1.3/src/function/table/arrow.cpp#L373-L403>`_

Goals
-----

TODO: Remove the C data interface limitation?

* Establish a standard way to pass statistics through the Arrow C data
  interface.
* Provide this in a manner that enables compatibility and ease of
  implementation for existing users of the Arrow C data interface.

Non-goals
---------

TODO: Remove the C data interface limitation?

* Provide a common way to pass statistics that can be used for
  other interfaces, such as Arrow Flight, too.

For example, ADBC has `statistics-related APIs
<https://arrow.apache.org/adbc/current/format/specification.html#statistics>`__.
This specification doesn't replace them.

TODO: Should we deprecate ADBC's current statistics APIs and redesign
them based on this specification?

This specification may also fit some use cases of :ref:`format-ipc`,
not just the Arrow C data interface, but we don't recommend this
specification for the Arrow IPC format for now, because we may be able
to define a better specification for the Arrow IPC format. The Arrow
IPC format has some features that the Arrow C data interface doesn't
have. For example, the Arrow IPC format can have :ref:`metadata for
each message <ipc-message-format>`. If you're interested in a
specification for passing statistics through the Arrow IPC format,
please start a discussion on the `Arrow development mailing-list
<https://arrow.apache.org/community/>`__.

.. _c-data-interface-statistics-schema:

Schema
======

This specification provides only the schema for statistics. The
producer passes statistics through the Arrow C data interface as an
Arrow array that uses this schema.

Here is the outline of the schema for statistics::

    struct<
      column: int32,
      statistics: map<
        key: dictionary<
          indices: int32,
          dictionary: utf8
        >,
        items: dense_union<...all needed types...>
      >
    >

Here are the details of the top-level ``struct``:

.. list-table::
   :header-rows: 1

   * - Name
     - Data type
     - Nullable
     - Notes
   * - ``column``
     - ``int32``
     - ``true``
     - The zero-based column index, or null if the statistics
       describe the whole table or record batch.

       The column index is computed using the same rule as
       :ref:`ipc-recordbatch-message`.
   * - ``statistics``
     - ``map``
     - ``false``
     - Statistics for the target column, table or record batch. See
       the separate table below for details.

Here are the details of the ``map`` of ``statistics``:

.. list-table::
   :header-rows: 1

   * - Key or items
     - Data type
     - Nullable
     - Notes
   * - key
     - ``dictionary<indices: int32, dictionary: utf8>``
     - ``false``
     - The statistics key is a string. Dictionary encoding is used
       for efficiency. Different keys are assigned for exact and
       approximate values. Also see the separate description below
       for statistics keys.
   * - items
     - ``dense_union``
     - ``false``
     - The statistics value is a dense union. It has at least all
       types needed for the statistics kinds in the keys. For
       example, you need at least ``int64`` and ``float64`` types
       when you have an ``int64`` distinct count statistic and a
       ``float64`` average byte width statistic. Also see the
       separate description below for statistics keys.

       We don't standardize field names for the dense union because
       consumers can access the proper field by type code, not by
       name. So producers can use any valid name for fields.

TODO: Should we standardize field names?

.. _c-data-interface-statistics-key:

Statistics key
--------------

A statistics key is a string. ``dictionary<int32, utf8>`` is used for
efficiency.

We assign different statistics keys to individual statistics instead
of using flags. For example, we assign different statistics keys for
exact values and approximate values.

The colon symbol ``:`` is to be used as a namespace separator, as in
:ref:`format_metadata`. It can be used multiple times in a key.

The ``ARROW`` pattern is a reserved namespace for pre-defined
statistics keys. User-defined statistics must not use it. For example,
you can use your product name as a namespace, such as
``MY_PRODUCT:my_statistics:exact``.

Here are the pre-defined statistics keys:

.. list-table::
   :header-rows: 1

   * - Key
     - Data type
     - Notes
   * - ``ARROW:average_byte_width:exact``
     - ``float64``
     - The average size in bytes of a row in the target column. (exact)
   * - ``ARROW:average_byte_width:approximate``
     - ``float64`` (TODO: Should we use ``int64`` instead?)
     - The average size in bytes of a row in the target
       column. (approximate)
   * - ``ARROW:distinct_count:exact``
     - ``int64``
     - The number of distinct values in the target column. (exact)
   * - ``ARROW:distinct_count:approximate``
     - ``float64``
     - The number of distinct values in the target
       column. (approximate)
   * - ``ARROW:max_byte_width:exact``
     - ``int64``
     - The maximum size in bytes of a row in the target column. (exact)
   * - ``ARROW:max_byte_width:approximate``
     - ``float64``
     - The maximum size in bytes of a row in the target
       column. (approximate)
   * - ``ARROW:max_value:exact``
     - Target dependent
     - The maximum value in the target column. (exact)
   * - ``ARROW:max_value:approximate``
     - Target dependent
     - The maximum value in the target column. (approximate)
   * - ``ARROW:min_value:exact``
     - Target dependent
     - The minimum value in the target column. (exact)
   * - ``ARROW:min_value:approximate``
     - Target dependent
     - The minimum value in the target column. (approximate)
   * - ``ARROW:null_count:exact``
     - ``int64``
     - The number of nulls in the target column. (exact)
   * - ``ARROW:null_count:approximate``
     - ``float64``
     - The number of nulls in the target column. (approximate)
   * - ``ARROW:row_count:exact``
     - ``int64``
     - The number of rows in the target table or record batch. (exact)
   * - ``ARROW:row_count:approximate``
     - ``float64``
     - The number of rows in the target table or record
       batch. (approximate)

If you find a missing statistics key that is usable for multiple
systems, please propose it on the `Arrow development mailing-list
<https://arrow.apache.org/community/>`__.

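A consumer is expected to scan the statistics it receives and simply
skip any key it doesn't understand, including user-defined keys. A
minimal pure-Python sketch of that loop follows; the
``collect_statistics`` helper and the ``rows`` layout (one dict per
struct element, with ``statistics`` as (key, value) pairs) are
hypothetical illustrations, not part of the specification.

```python
def collect_statistics(rows, column, known_keys):
    """Gather known statistics for one target, skipping unknown keys.

    ``rows`` is a plain-Python view of a statistics array: one dict
    per struct element with a ``column`` (int or None) and a
    ``statistics`` list of (key, value) pairs. ``column=None``
    selects statistics for the whole table or record batch.
    """
    found = {}
    for row in rows:
        if row["column"] != column:
            continue
        for key, value in row["statistics"]:
            if key in known_keys:  # skip unknown / user-defined keys
                found[key] = value
    return found


# Example input: one record-batch statistic and two column statistics,
# one of which uses a user-defined namespace this consumer ignores.
rows = [
    {"column": None, "statistics": [("ARROW:row_count:exact", 5)]},
    {"column": 0, "statistics": [
        ("ARROW:null_count:exact", 0),
        ("MY_PRODUCT:my_statistics:exact", 42),  # unknown to consumer
    ]},
]
known = {"ARROW:row_count:exact", "ARROW:null_count:exact"}
```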
Examples
========

Here are some examples to help you understand this specification.

Simple record batch
-------------------

Schema::

    vendor_id: int32
    passenger_count: int64

Data::

    vendor_id: [5, 1, 5, 1, 5]
    passenger_count: [1, 1, 2, 0, null]

Statistics schema::

    struct<
      column: int32,
      statistics: map<
        key: dictionary<
          indices: int32,
          dictionary: utf8
        >,
        items: dense_union<int64>
      >
    >

Statistics array::

    column: [
      null, # record batch
      0,    # vendor_id
      0,    # vendor_id
      0,    # vendor_id
      0,    # vendor_id
      1,    # passenger_count
      1,    # passenger_count
      1,    # passenger_count
      1,    # passenger_count
    ]
    statistics:
      key:
        indices: [
          0, # "ARROW:row_count:exact"
          1, # "ARROW:null_count:exact"
          2, # "ARROW:distinct_count:exact"
          3, # "ARROW:max_value:exact"
          4, # "ARROW:min_value:exact"
          1, # "ARROW:null_count:exact"
          2, # "ARROW:distinct_count:exact"
          3, # "ARROW:max_value:exact"
          4, # "ARROW:min_value:exact"
        ]
        dictionary: [
          "ARROW:row_count:exact",
          "ARROW:null_count:exact",
          "ARROW:distinct_count:exact",
          "ARROW:max_value:exact",
          "ARROW:min_value:exact",
        ]
      items: [
        5, # record batch: "ARROW:row_count:exact"
        0, # vendor_id: "ARROW:null_count:exact"
        2, # vendor_id: "ARROW:distinct_count:exact"
        5, # vendor_id: "ARROW:max_value:exact"
        1, # vendor_id: "ARROW:min_value:exact"
        1, # passenger_count: "ARROW:null_count:exact"
        3, # passenger_count: "ARROW:distinct_count:exact"
        4, # passenger_count: "ARROW:max_value:exact"
        0, # passenger_count: "ARROW:min_value:exact"
      ]

Complex record batch
--------------------

TODO: This example will use a nested type.

Simple array
------------

TODO

Complex array
-------------

TODO: This example will use a nested type.