Use Arc<[Buffer]> instead of raw Vec<Buffer> in GenericByteViewArray for faster slice #6427

Draft · wants to merge 2 commits into main

Conversation

@ShiKaiWi (Member) commented Sep 20, 2024

Which issue does this PR close?

Closes #6408

Rationale for this change

In GenericByteViewArray, the buffers field is a raw Vec, which forces a heap allocation whenever certain methods are called, e.g. clone and slice. Using Arc<[Buffer]> instead of the raw Vec<Buffer> avoids that allocation.
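As a minimal sketch (not code from this PR) of the cost difference this targets, assuming the arrow_buffer crate's Buffer type:

use std::sync::Arc;

use arrow_buffer::Buffer;

// Cloning a Vec<Buffer> allocates a new backing array and clones each element
// (each Buffer clone is itself only a ref-count bump, but the Vec allocation is not free).
fn clone_vec(buffers: &Vec<Buffer>) -> Vec<Buffer> {
    buffers.clone()
}

// Cloning an Arc<[Buffer]> only increments an atomic reference count; no allocation.
fn clone_arc(buffers: &Arc<[Buffer]>) -> Arc<[Buffer]> {
    Arc::clone(buffers)
}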

The newly added benchmark cases for slice show the improvement:

gc view types all       time:   [687.92 µs 694.54 µs 706.74 µs]
                        change: [+2.9773% +4.3614% +6.3877%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 11 outliers among 100 measurements (11.00%)
  3 (3.00%) high mild
  8 (8.00%) high severe

gc view types slice half
                        time:   [322.61 µs 325.05 µs 327.29 µs]
                        change: [-0.7292% +0.5853% +1.7277%] (p = 0.37 > 0.05)
                        No change in performance detected.
Found 16 outliers among 100 measurements (16.00%)
  14 (14.00%) high mild
  2 (2.00%) high severe

view types slice        time:   [146.78 ns 147.03 ns 147.24 ns]
                        change: [-15.424% -15.106% -14.772%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 23 outliers among 100 measurements (23.00%)
  12 (12.00%) low severe
  4 (4.00%) low mild
  2 (2.00%) high mild
  5 (5.00%) high severe

What changes are included in this PR?

Use Arc<[Buffer]> instead of the raw Vec<Buffer> as the type of the buffers field of GenericByteViewArray.

Are there any user-facing changes?

The signature of the method GenericByteViewArray::new_unchecked is changed from:

pub unsafe fn new_unchecked(
        views: ScalarBuffer<u128>,
        buffers: Vec<Buffer>,
        nulls: Option<NullBuffer>,
    ) -> Self;

to

pub unsafe fn new_unchecked(
        views: ScalarBuffer<u128>,
        buffers: impl Into<Arc<[Buffer]>>,
        nulls: Option<NullBuffer>,
    ) -> Self;

However, any existing usage of this method should still work without modification.
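For instance, a hypothetical caller like the following keeps compiling, because Vec<Buffer> satisfies impl Into<Arc<[Buffer]>> through the standard From<Vec<T>> for Arc<[T]> impl (the values here are placeholders, not code from this PR):

use arrow_array::StringViewArray;
use arrow_buffer::{Buffer, ScalarBuffer};

// Hypothetical pre-existing call site: it still passes a Vec<Buffer>.
fn empty_array_with_spare_buffer() -> StringViewArray {
    let views: ScalarBuffer<u128> = ScalarBuffer::from(Vec::<u128>::new());
    let buffers: Vec<Buffer> = vec![Buffer::from(b"spare data".as_ref())];
    // Safety: no view references the buffer, so the empty array is trivially valid.
    unsafe { StringViewArray::new_unchecked(views, buffers, None) }
}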

@github-actions github-actions bot added the arrow (Changes to the arrow crate) label Sep 20, 2024
@ShiKaiWi ShiKaiWi marked this pull request as ready for review September 20, 2024 16:38
@ShiKaiWi ShiKaiWi changed the title Use Arc<[Buffer]> instead of raw Vec<Buffer> in `GenericByteViewA… Use Arc<[Buffer]> instead of raw Vec<Buffer> in GenericByteViewArray for faster slice Sep 20, 2024
@ShiKaiWi ShiKaiWi changed the title Use Arc<[Buffer]> instead of raw Vec<Buffer> in GenericByteViewArray for faster slice Use Arc<[Buffer]> instead of raw Vec<Buffer> in GenericByteViewArray for faster slice Sep 20, 2024
@tustvold tustvold added the api-change (Changes to the arrow API) label Sep 20, 2024
@tustvold (Contributor)

Unfortunately the use of impl Trait is still a breaking change, as it could impact type inference, e.g. if collecting an iterator into the argument
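For example (a hedged sketch, not a call site from this codebase), a caller that relied on the old concrete Vec<Buffer> parameter to drive collect() now needs an explicit annotation, since several types satisfy Into<Arc<[Buffer]>>:

use arrow_array::StringViewArray;
use arrow_buffer::{Buffer, ScalarBuffer};

// Hypothetical caller. With `buffers: Vec<Buffer>` the collect() target type was
// inferred from the parameter; with `impl Into<Arc<[Buffer]>>` both Vec<Buffer>
// and Arc<[Buffer]> (among others) fit, so a turbofish is now required.
fn rebuild(views: ScalarBuffer<u128>, buffers: &[Buffer]) -> StringViewArray {
    // Safety: assumes the views are valid for the given buffers.
    unsafe {
        StringViewArray::new_unchecked(
            views,
            buffers.iter().cloned().collect::<Vec<Buffer>>(),
            None,
        )
    }
}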

@tustvold tustvold added the next-major-release (the PR has API changes and is waiting on the next major version) label Sep 20, 2024
@@ -234,7 +234,7 @@ impl<T: ByteViewType + ?Sized> GenericByteViewArray<T> {
}

/// Deconstruct this array into its constituent parts
- pub fn into_parts(self) -> (ScalarBuffer<u128>, Vec<Buffer>, Option<NullBuffer>) {
+ pub fn into_parts(self) -> (ScalarBuffer<u128>, Arc<[Buffer]>, Option<NullBuffer>) {
Contributor:

This is also a breaking change
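A hypothetical caller illustrating the break: code that pushed to the returned Vec<Buffer> must now copy the handles into a new Vec first, since Arc<[Buffer]> is not growable in place.

use arrow_array::StringViewArray;
use arrow_buffer::Buffer;

// Hypothetical downstream code. Before this PR the returned Vec<Buffer> could be
// pushed to directly; now the Arc<[Buffer]> has to be converted (cheap clones of
// the Buffer handles, but an extra allocation) before mutation.
fn take_buffers_and_append(array: StringViewArray, extra: Buffer) -> Vec<Buffer> {
    let (_views, buffers, _nulls) = array.into_parts();
    let mut buffers = buffers.to_vec();
    buffers.push(extra);
    buffers
}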

@alamb (Contributor) commented Sep 20, 2024

FWIW the reason "breaking change" is important is that it restricts when we can merge this PR:

https://github.com/apache/arrow-rs/blob/master/CONTRIBUTING.md#breaking-changes

@@ -114,7 +114,7 @@ use super::ByteArrayType;
pub struct GenericByteViewArray<T: ByteViewType + ?Sized> {
data_type: DataType,
views: ScalarBuffer<u128>,
- buffers: Vec<Buffer>,
+ buffers: Arc<[Buffer]>,
Member:

What's the rationale for Arc<[Buffer]> vs Vec<Arc<Buffer>>?

Contributor:

Cloning an Arc is relatively cheap (no allocation), cloning a Vec isn't.

Member:

I get it. However, if I understand correctly, Arc<[Buffer]> means the buffers can be passed around and shared only when they are within a single slice, which can be limiting. For example, can I merge two arrays, combining their Arc<Buffer>s without moving or cloning the buffers?

@alamb (Contributor) commented Oct 15, 2024

Can I merge two arrays, combining their Arc<Buffer>s without moving or cloning the buffers?

No -- you would have to create a new Vec<Buffer> (or some other way to get Arc<[Buffer]>)

So while there are some cases where new allocations are required, slicing / cloning is faster
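A minimal sketch of that merge path (hypothetical helper, not part of this PR): the bytes are not copied, since each Buffer clone is a cheap reference-count bump, but one new allocation is needed for the combined Arc<[Buffer]>.

use std::sync::Arc;

use arrow_buffer::Buffer;

// Hypothetical helper: combine the data buffers of two view arrays into one
// shared list. A new slice is allocated to hold the merged buffer handles.
fn merge_buffers(a: &Arc<[Buffer]>, b: &Arc<[Buffer]>) -> Arc<[Buffer]> {
    a.iter().chain(b.iter()).cloned().collect::<Vec<Buffer>>().into()
}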

Member:

you would have to create a new Vec<Buffer>

but that would prevent buffer sharing between two arrays, right?

slicing / cloning is faster

cloning yes

slicing -- I didn't see it

Contributor:

I think the observation is that during StringViewArray::slice, the slice actually happens on the views -- the buffers (that the views can point at) must be copied

Here is the clone of buffers: https://docs.rs/arrow-array/53.1.0/src/arrow_array/array/byte_view_array.rs.html#385
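A simplified, self-contained sketch of where the work goes (a stand-in struct, not the real GenericByteViewArray, which has additional fields):

use std::sync::Arc;

use arrow_buffer::{Buffer, NullBuffer, ScalarBuffer};

// Stand-in for a view array: slice() range-slices the views and nulls, while the
// buffer list is shared via a cheap Arc clone rather than copied.
struct ViewArraySketch {
    views: ScalarBuffer<u128>,
    buffers: Arc<[Buffer]>,
    nulls: Option<NullBuffer>,
}

impl ViewArraySketch {
    fn slice(&self, offset: usize, length: usize) -> Self {
        Self {
            views: self.views.slice(offset, length),
            buffers: self.buffers.clone(), // Arc clone: no allocation
            nulls: self.nulls.as_ref().map(|n| n.slice(offset, length)),
        }
    }
}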

@tustvold (Contributor)

Coming back to this now, I think we could merge this, although I am curious if there is something funny going on here, which is why this is showing up in benchmarks. Someone popped up on discord the other day reporting a StringViewArray with ~10k buffers; this would suggest something is likely off somewhere.

I wonder if kernels are blindly concatenating identical buffers together, instead of using something like Buffer::ptr_eq to avoid a new entry for the exact same buffer allocation?
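A hedged sketch of that idea (hypothetical helper, not existing kernel code); a real kernel would also have to remap the buffer index stored in each view when duplicates are dropped:

use arrow_buffer::Buffer;

// Hypothetical dedup step for a concat-style kernel: keep one entry per distinct
// allocation, comparing by pointer identity rather than by bytes.
fn dedup_buffers(inputs: impl IntoIterator<Item = Buffer>) -> Vec<Buffer> {
    let mut out: Vec<Buffer> = Vec::new();
    for buffer in inputs {
        if !out.iter().any(|existing| existing.ptr_eq(&buffer)) {
            out.push(buffer);
        }
    }
    out
}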

@alamb (Contributor) commented Nov 22, 2024

I wonder if kernels are blindly concatenating identical buffers together, instead of using something like Buffer::ptr_eq to avoid a new entry for the exact same buffer allocation?

What was happening in DataFusion was that we had a Filter --> Coalesce chain and thus were basically calling concat on a few thousand different input RecordBatches, each with a few rows.

However, to your point, it may well be the case that the input RecordBatches shared the same underlying buffer, so maybe the same buffer was being appended multiple times

@XiangpengHao do you remember if you looked for this? (related to "Section 3.5: Buffer size tuning" in https://www.influxdata.com/blog/faster-queries-with-stringview-part-two-influxdb/)

@XiangpengHao (Contributor)

I wonder if kernels are blindly concatenating identical buffers together, instead of using something like Buffer::ptr_eq to avoid a new entry for the exact same buffer allocation?

I think so:

let variadic_data_buffers = match &data_type {
    DataType::BinaryView | DataType::Utf8View => arrays
        .iter()
        .flat_map(|x| x.buffers().iter().skip(1))
        .map(Buffer::clone)
        .collect(),
    _ => vec![],
};

Someone popped up on discord the other day reporting a StringViewArray with ~10k buffers, this would suggest something is likely off somewhere.

I have experienced this when loading string views from Parquet. If the Parquet data has 10k buffers of string data, the StringViewArray will just hold them. Typically we should run a filter and then gc it. This is handled in DF, but if used outside DF, users might need to do something similar, like this: https://github.com/apache/datafusion/blob/c0ca4b4e449e07c3bcd6f3593fa31dd31ed5e0c5/datafusion/physical-plan/src/coalesce/mod.rs#L201-L221
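A hedged sketch of that filter-then-gc pattern (the buffer-count threshold here is an arbitrary illustration, not what DataFusion does):

use arrow_array::StringViewArray;

// Hypothetical post-filter step: if the surviving rows still reference many data
// buffers, gc() copies the referenced bytes into fresh, compact buffers.
fn maybe_compact(filtered: StringViewArray) -> StringViewArray {
    if filtered.data_buffers().len() > 32 {
        filtered.gc()
    } else {
        filtered
    }
}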

In other words, if the StringViewArray is constructed by us, it's very unlikely to have 10k buffers, as we exponentially grow the buffers until 2MB; so 10k buffers means ~20GB of data

@alamb (Contributor) commented Nov 23, 2024

In other words, if the StringViewArray is constructed by us, it's very unlikely to have 10k buffers, as we exponentially grow the buffers until 2MB; so 10k buffers means ~20GB of data

Good call -- @onursatici has tracked down a related (perhaps the same) issue:

@alamb (Contributor) commented Nov 23, 2024

All the discussion above notwithstanding, I think the fact that this PR improves slice speed (by avoiding allocations) for GenericByteViewArray means it is still worth merging

@@ -178,7 +178,7 @@ impl<T: ByteViewType + ?Sized> GenericByteViewArray<T> {
Ok(Self {
data_type: T::DATA_TYPE,
views,
- buffers,
+ buffers: buffers.into(),
Contributor:

I think we should take impl Into<Arc<[Buffer]>> in this method as well

@tustvold (Contributor)

means it is still worth merging

Agreed


What do people think about extracting a newtype ViewBuffers or similar, much like we have Fields. This would allow us to change the internal representation down the line, whilst also potentially supporting member functions, etc...
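A hypothetical sketch of what such a newtype might look like (the name and API are illustrative only):

use std::ops::Deref;
use std::sync::Arc;

use arrow_buffer::Buffer;

// Hypothetical newtype: the internal representation stays private, so it could
// change again later without another breaking change to the array's public API.
#[derive(Clone, Debug)]
pub struct ViewBuffers(Arc<[Buffer]>);

impl Deref for ViewBuffers {
    type Target = [Buffer];

    fn deref(&self) -> &[Buffer] {
        &self.0
    }
}

impl From<Vec<Buffer>> for ViewBuffers {
    fn from(buffers: Vec<Buffer>) -> Self {
        Self(buffers.into())
    }
}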

@alamb (Contributor) commented Nov 26, 2024

means it is still worth merging

Agreed

What do people think about extracting a newtype ViewBuffers or similar, much like we have Fields. This would allow us to change the internal representation down the line, whilst also potentially supporting member functions, etc...

I think it would be a great idea. @ShiKaiWi is that something you would be willing to try?

@ShiKaiWi (Member, Author) commented Dec 10, 2024

means it is still worth merging

Agreed
What do people think about extracting a newtype ViewBuffers or similar, much like we have Fields. This would allow us to change the internal representation down the line, whilst also potentially supporting member functions, etc...

I think it would be a great idea. @ShiKaiWi is that something you would be willing to try?

Sorry for the late reply (I was a bit busy with work in the last few months). I'd love to try it this weekend.

@alamb (Contributor) commented Dec 10, 2024

Sorry for the late reply (I was a bit busy with work in the last few months). I'd love to try it this weekend.

Thanks @ShiKaiWi -- I don't think there is any huge rush to get this feature in, so it would be great if you could do so ❤️

@alamb alamb marked this pull request as draft December 16, 2024 18:33
@alamb (Contributor) commented Dec 16, 2024

Converting to draft as I think there is some new planned work here.

Labels: api-change (Changes to the arrow API), arrow (Changes to the arrow crate), next-major-release (the PR has API changes and is waiting on the next major version)

Successfully merging this pull request may close these issues.

Make StringViewArray::slice() and BinaryViewArray::slice() faster / non allocating