Draft POC: Push batch with filter without copy #8103


Draft · wants to merge 6 commits into main

Conversation

zhuqi-lucas (Contributor)

Which issue does this PR close?

We generally require a GitHub issue to be filed for all bug fixes and enhancements and this helps us generate change logs for our releases. You can link an issue to this PR using the GitHub syntax.

  • Closes #NNN.

Rationale for this change

Why are you proposing this change? If this is already explained clearly in the issue then this section is not needed.
Explaining clearly why changes are proposed helps reviewers understand your changes and offer better suggestions for fixes.

What changes are included in this PR?

There is no need to duplicate the description in the issue here but it is sometimes worth providing a summary of the individual changes in this PR.

Are these changes tested?

We typically require tests for all PRs in order to:

  1. Prevent the code from being accidentally broken by subsequent changes
  2. Serve as another way to document the expected behavior of the code

If tests are not included in your PR, please explain why (for example, are they covered by existing tests)?

Are there any user-facing changes?

If there are user-facing changes then we may require documentation to be updated before approving the PR.

If there are any breaking changes to public APIs, please call them out.

github-actions bot added the arrow (Changes to the arrow crate) label on Aug 10, 2025
zhuqi-lucas marked this pull request as draft on August 10, 2025 13:24
@alamb (Contributor) left a comment:

Thank you @zhuqi-lucas -- this is very cool -- I left some ideas

IterationStrategy::Slices(s) => FilterPlan::Slices(s),   // moved directly
IterationStrategy::Indices(i) => FilterPlan::Indices(i), // moved directly
IterationStrategy::SlicesIterator => {
    FilterPlan::Slices(SlicesIterator::new(&pred.filter).collect())
@alamb (Contributor):

avoiding this allocation will likely help
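
For context, a minimal sketch of consuming the iterator lazily instead of collecting it (for_each_slice is a hypothetical helper, not code from this PR, and the import paths are assumed):

    use arrow::array::BooleanArray;
    use arrow::compute::SlicesIterator;

    // Hypothetical helper: visit each selected range without first
    // collecting the ranges into an intermediate Vec<(usize, usize)>.
    fn for_each_slice(filter: &BooleanArray, mut f: impl FnMut(usize, usize)) {
        // SlicesIterator yields half-open (start, end) ranges of set bits
        for (start, end) in SlicesIterator::new(filter) {
            f(start, end);
        }
    }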

@zhuqi-lucas (Contributor, Author) · Aug 11, 2025:

Thank you @alamb, I tried that now, but it did not improve the regression. compute_filter_plan costs almost nothing in the benchmark profile.

});

// For each contiguous slice, copy rows in chunks fitting target_batch_size
for (mut start, end) in slices {
@alamb (Contributor):

I suspect that to really make this fast it will need specialized implementations for the different array types (not using MutableArrayData)

I think we could yoink / reuse some of the existing code from the filter kernel:

_ => downcast_primitive_array! {
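
As a rough illustration of that kind of specialization, a sketch for primitive arrays (gather_slices is a hypothetical helper; null buffers would need the same slice-wise treatment):

    use arrow_array::{ArrowPrimitiveType, PrimitiveArray};

    // Hypothetical specialization: copy the selected value ranges of a
    // primitive array directly, instead of going through MutableArrayData.
    fn gather_slices<T: ArrowPrimitiveType>(
        array: &PrimitiveArray<T>,
        slices: &[(usize, usize)],
    ) -> Vec<T::Native> {
        let values = array.values(); // derefs to &[T::Native]
        let mut out = Vec::with_capacity(slices.iter().map(|&(s, e)| e - s).sum());
        for &(start, end) in slices {
            // each contiguous range is a simple memcpy-style extend
            out.extend_from_slice(&values[start..end]);
        }
        out
    }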

@zhuqi-lucas (Contributor, Author) · Aug 11, 2025:

Thank you @alamb for the review and the good suggestion.

I found that the hot path in the profile is copy_rows, especially the null-handling code:

        // add nulls if necessary
        if let Some(nulls) = s.nulls().as_ref() {
            let nulls = nulls.slice(offset, len);
            self.nulls.append_buffer(&nulls);
        } else {
            self.nulls.append_n_non_nulls(len);
        };

This may be because we call copy_rows many more times here, while the original logic just filters and concats into one batch before push_batch, which keeps each copy large and SIMD friendly. Bigger batches also make copy_rows itself more SIMD friendly, so the original path is quite a bit faster in most cases.

Maybe we should only switch to this logic for very selective filters (for example, selectivity < 0.005), but it's hard for me to decide on a threshold.
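
One possible shape for that dispatch, sketched with the 0.005 example number from above (use_slice_path and the constant name are hypothetical, not from this PR):

    use arrow_array::BooleanArray;

    // Hypothetical dispatch: take the no-copy slice path only for very
    // selective filters, and fall back to filter + concat otherwise.
    const SLICE_PATH_MAX_SELECTIVITY: f64 = 0.005; // example cutoff from the discussion

    fn use_slice_path(filter: &BooleanArray) -> bool {
        // true_count() is the number of rows the filter keeps; the multiply
        // form avoids dividing by zero for an empty filter
        (filter.true_count() as f64) < SLICE_PATH_MAX_SELECTIVITY * filter.len() as f64
    }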
