
GH-38837: [Format] Add the specification to pass statistics through the Arrow C data interface #43553

Closed · wants to merge 19 commits
2 changes: 2 additions & 0 deletions cpp/tools/parquet/parquet_dump_arrow_statistics.cc
@@ -23,6 +23,7 @@
#include <iostream>

namespace {
// doc: start: print-arrow-statistics
arrow::Status PrintArrowStatistics(const char* path) {
  ARROW_ASSIGN_OR_RAISE(
      auto input, arrow::io::MemoryMappedFile::Open(path, arrow::io::FileMode::READ));
@@ -39,6 +40,7 @@ arrow::Status PrintArrowStatistics(const char* path) {
  }
  return arrow::Status::OK();
}
// doc: end: print-arrow-statistics
}  // namespace

int main(int argc, char** argv) {
339 changes: 339 additions & 0 deletions docs/source/format/CDataInterfaceStatistics.rst
@@ -0,0 +1,339 @@
.. Licensed to the Apache Software Foundation (ASF) under one
.. or more contributor license agreements. See the NOTICE file
.. distributed with this work for additional information
.. regarding copyright ownership. The ASF licenses this file
.. to you under the Apache License, Version 2.0 (the
.. "License"); you may not use this file except in compliance
.. with the License. You may obtain a copy of the License at
.. http://www.apache.org/licenses/LICENSE-2.0
.. Unless required by applicable law or agreed to in writing,
.. software distributed under the License is distributed on an
.. "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
.. KIND, either express or implied. See the License for the
.. specific language governing permissions and limitations
.. under the License.
.. _c-data-interface-statistics:

=====================================================
Passing statistics through the Arrow C data interface
=====================================================

Member:

I think what we have actually defined here is a way to represent statistics as an Arrow array (i.e., there's no real connection to the C data interface...these can and should be Julia arrays, C++ arrays, Arrow C interface arrays, serialized as IPC, as appropriate).

Member Author:

Right.

But I'm not sure whether this approach is the best one for interfaces other than the Arrow C data interface...

See also: https://github.com/apache/arrow/pull/43553/files#r1704373291

I think that our discussion focused only on the Arrow C data interface:

https://lists.apache.org/thread/6m9xrhfktnt0nnyss7qo333n0cl76ypc

Out of scope:

  • Transmitting statistics through interfaces other than the C data interface

Examples:

  • Transmitting statistics through an Apache Arrow IPC file
  • Transmitting statistics through Apache Arrow Flight

With the C data interface, we have only a limited set of approaches for this: for example, a separate Arrow array (as in this proposal), metadata in ArrowSchema, and so on.

But we may have more options with other interfaces, and some of those approaches may be better than this one.

I'm OK with expanding the scope of this specification. Is it OK if we restart this discussion to cover not only the C data interface but all interfaces?

Member:

Any Arrow array can be transmitted over Flight or IPC, including statistics arrays. I agree with @lidavidm about explaining the context and original motivation, but I don't think we should exclude IPC or Flight here. People can use statistics arrays over these protocols if they find it useful.

Member:

I think the confusion is that Kou is trying to say that this isn't the "one true way" to embed statistics inside Flight or an IPC file (in other words, this isn't a spec for adding Parquet-like statistics to IPC), but of course there's nothing stopping you from using those APIs to transmit statistics data for some other purpose. (Correct me if I'm wrong here @kou.) Again, the original motivation was the DuckDB use case, so trying to solve "statistics for IPC files" was out of scope.

Member Author:

Thanks! That's correct.

(I'll be able to work on this proposal tomorrow. I'll update the current proposal and start a discussion on the mailing list tomorrow.)

Member:

I think it's fair to mention that this proposal doesn't cover the API used to request these arrays or the transport mechanism used in that API. Mentioning any particular transport/request mechanism is probably more confusing than helpful (and likely to change following this proposal as formalizing that request/transport will be valuable!).

Member Author:

(Sorry... I couldn't work on this today... I hope that I can work on this tomorrow...)


.. warning:: This specification should be considered experimental.

Rationale
=========

Statistics are useful for fast query processing. Many query engines
use statistics to optimize their query plans.

The Apache Arrow format doesn't have statistics, but other formats
that can be read as Apache Arrow data may have them. For example, the
Apache Parquet C++ implementation can read an Apache Parquet file as
Apache Arrow data, and the Apache Parquet file may have statistics.

Use case
--------

One of the :ref:`c-stream-interface` use cases is the following:

1. Module A reads an Apache Parquet file as Apache Arrow data.
2. Module A passes the read Apache Arrow data to module B through the
   Arrow C stream interface.
3. Module B processes the passed Apache Arrow data.

If module A can pass the statistics associated with the Apache Parquet
file to module B through the Arrow C stream interface, module B can
use the statistics to optimize its query plan.

For example, DuckDB uses this approach, but it couldn't use
statistics because there was no standardized way to pass them.

.. seealso::

   `duckdb::ArrowTableFunction::ArrowScanBind() in DuckDB 1.1.3
   <https://github.com/duckdb/duckdb/blob/v1.1.3/src/function/table/arrow.cpp#L373-L403>`_

Goals
-----

TODO: Remove the C data interface limitation?

* Establish a standard way to pass statistics through the Arrow C data
  interface.
* Provide this in a manner that enables compatibility and ease of
  implementation for existing users of the Arrow C data interface.

Non-goals
---------

TODO: Remove the C data interface limitation?

* Provide a common way to pass statistics that can be used for
  other interfaces, such as Arrow Flight, too.
Member:

What about the Arrow IPC format? Can you add a sentence here that explains why we do not recommend using this to pass statistics over Arrow IPC?

Member Author:

Good point. This may fit the Arrow IPC format. (A producer sends data and statistics as two separate pieces of Arrow IPC format data.)

But the Arrow IPC format allows more approaches. For example, the Arrow IPC format can attach metadata to each record batch: https://arrow.apache.org/docs/format/Columnar.html#encapsulated-message-format (the Arrow C data interface can't attach metadata to an ArrowArray.)
The Arrow IPC format can also be used with other mechanisms such as Arrow Flight and ADBC.

So this may not be the best approach for the Arrow IPC format. We should discuss the Arrow IPC format use case separately.

I'll add something here.


For example, ADBC has `statistics-related APIs
<https://arrow.apache.org/adbc/current/format/specification.html#statistics>`__.
This specification doesn't replace them.
Member:

I'm not sure I would change this sentence, but as a personal note I hope that we do unify the two approaches. In the short term I suppose ADBC can have logic in its driver and client APIs to translate between the two.

Member Author:

@lidavidm Do you have any opinion on unifying this specification and ADBC's statistics-related APIs?

Member:

In retrospect, since I think nothing in ADBC actually implements the statistics properly, we may choose to deprecate the existing API and, down the line, reintroduce it using the new schema.


TODO: Should we deprecate the current ADBC statistics API and
redesign it based on this specification?

This specification may fit some use cases of :ref:`format-ipc`, not
just the Arrow C data interface. But we don't recommend this
specification for the Arrow IPC format for now, because we may be able
to define a better specification for the Arrow IPC format. The Arrow
IPC format has some features that differ from the Arrow C data
interface. For example, the Arrow IPC format can have :ref:`metadata
for each message <ipc-message-format>`. If you're interested in a
specification for passing statistics through the Arrow IPC format,
please start a discussion on the `Arrow development mailing-list
<https://arrow.apache.org/community/>`__.

.. _c-data-interface-statistics-schema:

Schema
======

This specification provides only the schema for statistics. The
producer passes statistics through the Arrow C data interface as an
Arrow array that uses this schema.

Here is the outline of the schema for statistics::

    struct<
      column: int32,
      statistics: map<
        key: dictionary<
          indices: int32,
          dictionary: utf8
        >,
        items: dense_union<...all needed types...>
      >
    >
Contributor:

Does this mean we get an array of maps or a single map? Because we really only need a single map. Any consumer of these statistics will loop through the values and gather only the statistic values that it cares about by skipping the keys that it doesn't understand.

Member Author:

A (one-element) array of maps. But we can simplify this; I didn't notice that. Thanks.

How about the following? It removes the outer map:

column: int32,
values: map<
  key: dictionary<
    indices: int32,
    dictionary: utf8
  >,
  items: dense_union<...all needed types...>,
>

Contributor:

That looks reasonable to me!

Member Author:

I've updated this part!


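As a non-normative illustration, here is a sketch of how a producer
might declare the schema above with the Arrow C++ API. The ``int64``
and ``float64`` union children and their field names are assumptions
for the example, not part of the specification:

.. code-block:: cpp

   #include <arrow/api.h>

   #include <memory>

   // Sketch: build the proposed statistics type. The dense union
   // children (int64/float64) are examples only; producers declare
   // whichever value types their statistics actually need.
   std::shared_ptr<arrow::DataType> MakeStatisticsType() {
     // key: dictionary<indices: int32, dictionary: utf8>
     auto key_type = arrow::dictionary(arrow::int32(), arrow::utf8());
     // items: dense_union<...all needed types...>
     auto items_type = arrow::dense_union({
         arrow::field("int64", arrow::int64()),
         arrow::field("float64", arrow::float64()),
     });
     return arrow::struct_({
         // "column" is nullable: null means the statistics describe
         // the whole table or record batch.
         arrow::field("column", arrow::int32()),
         arrow::field("statistics", arrow::map(key_type, items_type),
                      /*nullable=*/false),
     });
   }
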
Here are the details of the top-level ``struct``:

.. list-table::
   :header-rows: 1

   * - Name
     - Data type
     - Nullable
     - Notes
   * - ``column``
     - ``int32``
     - ``true``
     - The zero-based column index, or null if the statistics
       describe the whole table or record batch.

       The column index is computed using the same rule as
       :ref:`ipc-recordbatch-message`.
   * - ``statistics``
     - ``map``
     - ``false``
     - Statistics for the target column, table or record batch. See
       the separate table below for details.
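
For example, here is an illustrative sketch of that flattened
numbering (depth-first, counting parent fields first) for a nested
schema::

    vendor: struct<name: utf8, id: int32>    # column 0
        name: utf8                           # column 1
        id: int32                            # column 2
    passenger_count: int64                   # column 3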

Here are the details of the ``statistics`` ``map``:

.. list-table::
   :header-rows: 1

   * - Key or items
     - Data type
     - Nullable
     - Notes
   * - key
     - ``dictionary<indices: int32, dictionary: utf8>``
     - ``false``
     - The statistics key is a string. A dictionary is used for
       efficiency. Different keys are assigned for exact values and
       approximate values. Also see the separate description below for
       statistics keys.
   * - items
     - ``dense_union``
     - ``false``
     - The statistics value is a dense union. It has at least all the
       types needed for the statistics kinds in the keys. For example,
       you need at least the ``int64`` and ``float64`` types when you
       have an ``int64`` distinct count statistic and a ``float64``
       average byte width statistic. Also see the separate description
       below for statistics keys.

       We don't standardize field names for the dense union because
       consumers can access the proper field by type code, not by
       name. So producers can use any valid name for fields.

Contributor (on the key row):

Maybe we could be more explicit about what the key means (it seems like it is the "statistics type")?
Member (on the dense union field names):

Do we really want this? What's the point of allowing arbitrary field names? This will make interoperability more difficult.

Member Author:

Hmm. I think that it's difficult to standardize field names, because we don't define concrete field types in this specification.

We could use str(type_code) for the field name. Does that make sense?

Member:

I think the point is that consumers shouldn't rely on particular field names, given the index/type is supposed to fully define it already?

Contributor:

We have had trouble in the arrow-rs implementation and other places where schemas contain names but they aren't standardized.

For example, we have an embedded field name for Lists, and some implementations use "item" and some "element", which causes spurious schema mismatch errors.

Therefore I also recommend removing field names unless there is some good reason for keeping them (it isn't clear to me from the text why there are arbitrary field names).

Member Author:

Thanks. I didn't think of the schema comparison case.

In this case, I think that we don't need to worry about it, because we have a dynamic part in the proposed statistics schema (this dense union part). In general, two statistics schemas will not be the same even with field name standardization. For example, one statistics schema may use dense_union<int64, float64> while another uses dense_union<int64>.


TODO: Should we standardize field names?

.. _c-data-interface-statistics-key:

Statistics key
--------------

Contributor:

Nit: I would find the term "statistics type" rather than "statistics key" easier to understand.

The statistics key is a string. ``dictionary<int32, utf8>`` is used
for efficiency.

We assign different statistics keys to individual statistics instead
of using flags. For example, we assign different statistics keys for
an exact value and an approximate value.

Member:

What does "instead of using flags" mean here?

Member Author:

"Instead of using flags" means that we don't use a "max" statistics key + an "exact" flag for exact max statistics and a "max" statistics key + an "approximate" flag for approximate max statistics.

ADBC uses that style: https://github.com/apache/arrow-adbc/blob/95f55c73e916a9b47bc287d10296e8f561284d42/c/include/arrow-adbc/adbc.h#L1761

The colon symbol ``:`` is used as a namespace separator, as in
:ref:`format_metadata`. It can appear multiple times in a key.

The ``ARROW`` namespace is reserved for pre-defined statistics
keys. User-defined statistics must not use it. For example, you can
use your product name as the namespace, such as
``MY_PRODUCT:my_statistics:exact``.

Here are the pre-defined statistics keys:

.. list-table::
   :header-rows: 1

   * - Key
     - Data type
     - Notes
   * - ``ARROW:average_byte_width:exact``
     - ``float64``
     - The average size in bytes of a row in the target column. (exact)
   * - ``ARROW:average_byte_width:approximate``
     - ``float64``: TODO: Should we use ``int64`` instead?
     - The average size in bytes of a row in the target
       column. (approximate)
   * - ``ARROW:distinct_count:exact``
     - ``int64``
     - The number of distinct values in the target column. (exact)
   * - ``ARROW:distinct_count:approximate``
     - ``float64``
     - The number of distinct values in the target
       column. (approximate)
   * - ``ARROW:max_byte_width:exact``
     - ``int64``
     - The maximum size in bytes of a row in the target column. (exact)
   * - ``ARROW:max_byte_width:approximate``
     - ``float64``
     - The maximum size in bytes of a row in the target
       column. (approximate)
   * - ``ARROW:max_value:exact``
     - Target dependent
     - The maximum value in the target column. (exact)
   * - ``ARROW:max_value:approximate``
     - Target dependent
     - The maximum value in the target column. (approximate)
   * - ``ARROW:min_value:exact``
     - Target dependent
     - The minimum value in the target column. (exact)
   * - ``ARROW:min_value:approximate``
     - Target dependent
     - The minimum value in the target column. (approximate)
   * - ``ARROW:null_count:exact``
     - ``int64``
     - The number of nulls in the target column. (exact)
   * - ``ARROW:null_count:approximate``
     - ``float64``
     - The number of nulls in the target column. (approximate)
   * - ``ARROW:row_count:exact``
     - ``int64``
     - The number of rows in the target table or record batch. (exact)
   * - ``ARROW:row_count:approximate``
     - ``float64``
     - The number of rows in the target table or record
       batch. (approximate)

Member (on ``ARROW:max_byte_width:approximate``):

Why is this float64? It doesn't seem to make sense. The maximum is obviously an integer.

Member:

There's precedent in Apache Calcite where metadata handlers return float64. The exact value is not important and these are approximations used as part of query planning.

Example: https://calcite.apache.org/javadocAggregate/org/apache/calcite/rel/metadata/BuiltInMetadata.MaxRowCount.html#getMaxRowCount()

Member Author:

That's correct for the "exact" case. But I think that float64 is better for the "approximate" case, because an approximate max byte width may be a float for a string array.

Member:

It doesn't make sense for the maximum approximation to be float64. The fractional part is entirely nonsensical.

Member (on ``ARROW:null_count:exact``):

The C Data Interface already exposes the null_count, what use is this for?

Member:

These stats apply to a stream of data, so this would be the null_count for the dataset behind the stream as a whole.

Member Author:

The null_count in the C data interface is the number of nulls in the array being passed; in this case, that array is a statistics array, not a target column/record batch. This key is for a target column/record batch, not for a statistics array.

Member (on ``ARROW:row_count:exact``):

What's the point? The length is already known.

Member:

Ditto as above. These are summary statistics, not statistics for a single array (which would be mostly useless).

Member Author:

It's not known yet.
In general, a statistics array is passed before we receive a target record batch.
We don't know the number of rows in a target record batch at that time.

If you find a missing statistics key that is usable for multiple
systems, please propose it on the `Arrow development mailing-list
<https://arrow.apache.org/community/>`__.

Examples
========

Here are some examples to help you understand.
Member:

Should we provide concrete examples of data in the statistics array? I'm not sure if this is the best place. An impatient reader may be interested to see what content to expect from a quick glance.


Simple record batch
-------------------

Schema::

    vendor_id: int32
    passenger_count: int64

Data::

    vendor_id: [5, 1, 5, 1, 5]
    passenger_count: [1, 1, 2, 0, null]

Contributor:

It is probably too late, but I figured I would point out that another format we use for statistics in DataFusion is a columnar form rather than a nested structure:

https://docs.rs/datafusion/latest/datafusion/physical_optimizer/pruning/trait.PruningStatistics.html

For example, to represent this data in DataFusion, it would be::

    vendor_id::null_count  vendor_id::min  vendor_id::max  ...  passenger_count::max
    0                      1               5               ...  2

The benefit of this encoding is that it can be used to quickly evaluate predicates on ranges (e.g. figure out if vendor_id = 6 could ever be true).

We use this format to read statistics from the Parquet files (see ParquetStatisticsConverter to rule out row groups).

It does result in potentially very wide schemas, however.

Member Author:

Thanks for sharing the existing statistics representation.
This was also shared by @tustvold in the initial discussion: https://lists.apache.org/thread/hd79kp59796sqmwo24vmsnbk8t6x7h8g

We chose the current approach over the DataFusion approach because the current approach makes it easy to add support for non-pre-defined (application-specific) statistics names (statistics types): https://lists.apache.org/thread/g362w785ntcdp0hjjqr06lk5gl4bfrj0


Statistics schema::

    struct<
      column: int32,
      statistics: map<
        key: dictionary<
          indices: int32,
          dictionary: utf8
        >,
        items: dense_union<int64>
      >
    >

Contributor:

I didn't understand this example. I thought the statistics were structs, so I would have expected the data to look something like this (perhaps we could give the "logical contents" and then the specific array encoding):

[ 
  // first struct element
  { 
    column: null, # record batch
     statistics: {
        "ARROW:row_count:exact": 0
     }
   },
  { 
    column: 0, # vendor_id
     statistics: {
        "ARROW:null_count:exact": 0,
        "ARROW:distinct_count:exact": 2,
        "ARROW:max_value:exact": 5,
        "ARROW:min_value:exact": 1,
     }
   },
...
]

I can help work out the example if people think this is a good idea

Member Author:

Ah, that makes sense.
I used the physical representation (columnar representation) here, but the logical representation (row-based representation) may be easier to understand.

How about showing both of them? (We keep the current representation and add a row-based representation like you suggested.)

Member Author:

Let's continue this in the current PR: https://github.com/apache/arrow/pull/45058/files#r1896967442


Statistics array::

    column: [
      null, # record batch
      0,    # vendor_id
      0,    # vendor_id
      0,    # vendor_id
      0,    # vendor_id
      1,    # passenger_count
      1,    # passenger_count
      1,    # passenger_count
      1,    # passenger_count
    ]
    statistics:
      key:
        indices: [
          0, # "ARROW:row_count:exact"
          1, # "ARROW:null_count:exact"
          2, # "ARROW:distinct_count:exact"
          3, # "ARROW:max_value:exact"
          4, # "ARROW:min_value:exact"
          1, # "ARROW:null_count:exact"
          2, # "ARROW:distinct_count:exact"
          3, # "ARROW:max_value:exact"
          4, # "ARROW:min_value:exact"
        ]
        dictionary: [
          "ARROW:row_count:exact",
          "ARROW:null_count:exact",
          "ARROW:distinct_count:exact",
          "ARROW:max_value:exact",
          "ARROW:min_value:exact",
        ]
      items: [
        5, # record batch: "ARROW:row_count:exact"
        0, # vendor_id: "ARROW:null_count:exact"
        2, # vendor_id: "ARROW:distinct_count:exact"
        5, # vendor_id: "ARROW:max_value:exact"
        1, # vendor_id: "ARROW:min_value:exact"
        1, # passenger_count: "ARROW:null_count:exact"
        3, # passenger_count: "ARROW:distinct_count:exact"
        2, # passenger_count: "ARROW:max_value:exact"
        0, # passenger_count: "ARROW:min_value:exact"
      ]
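
As a rough, non-normative sketch, a consumer could walk such a
statistics array with the Arrow C++ API like this. ``DumpStatistics``
is a hypothetical helper, and value printing is simplified via
``Array::GetScalar``:

.. code-block:: cpp

   #include <arrow/api.h>

   #include <iostream>
   #include <memory>

   // Sketch: print every (target, key, value) triple in a statistics
   // array that follows the proposed schema.
   arrow::Status DumpStatistics(const arrow::StructArray& statistics) {
     auto column = std::static_pointer_cast<arrow::Int32Array>(
         statistics.GetFieldByName("column"));
     auto statistics_map = std::static_pointer_cast<arrow::MapArray>(
         statistics.GetFieldByName("statistics"));
     auto keys = std::static_pointer_cast<arrow::DictionaryArray>(
         statistics_map->keys());
     auto keys_indices =
         std::static_pointer_cast<arrow::Int32Array>(keys->indices());
     auto keys_dictionary = std::static_pointer_cast<arrow::StringArray>(
         keys->dictionary());
     auto items = std::static_pointer_cast<arrow::DenseUnionArray>(
         statistics_map->items());
     for (int64_t i = 0; i < statistics.length(); ++i) {
       if (column->IsNull(i)) {
         // A null column index targets the whole record batch.
         std::cout << "record batch:" << std::endl;
       } else {
         std::cout << "column " << column->Value(i) << ":" << std::endl;
       }
       const auto offset = statistics_map->value_offset(i);
       const auto length = statistics_map->value_length(i);
       for (auto j = offset; j < offset + length; ++j) {
         // Resolve the dictionary-encoded statistics key.
         const auto key = keys_dictionary->GetView(keys_indices->Value(j));
         // Resolve the dense union value by type code, not field name.
         const auto value = items->field(items->child_id(j));
         ARROW_ASSIGN_OR_RAISE(auto scalar,
                               value->GetScalar(items->value_offset(j)));
         std::cout << "  " << key << ": " << scalar->ToString() << std::endl;
       }
     }
     return arrow::Status::OK();
   }

A real consumer would dispatch on the dictionary key strings it
understands and skip the rest, as described above.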

Complex record batch
--------------------

TODO: It uses a nested type.


Simple array
------------

TODO

Complex array
-------------

TODO: It uses a nested type.
1 change: 1 addition & 0 deletions docs/source/format/Columnar.rst
@@ -1619,6 +1619,7 @@ example as above, an alternate encoding could be: ::
0
EOS

.. _format_metadata:

Custom Application Metadata
---------------------------
1 change: 1 addition & 0 deletions docs/source/format/index.rst
@@ -30,6 +30,7 @@ Specifications
   CanonicalExtensions
   Other
   CDataInterface
   CDataInterfaceStatistics
   CStreamInterface
   CDeviceDataInterface
   DissociatedIPC