persist: count compaction fast-path eligible reqs #18380

Merged · pH14 merged 1 commit into MaterializeInc:main on Mar 24, 2023

Conversation

@pH14 (Contributor) commented on Mar 24, 2023

Seeing how much compaction work / S3 PUTs we could shave off if we reintroduce the compaction fast path for single non-empty batches (i.e. batches with a single run that were themselves written by compaction).
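As an illustration only (not this PR's code), the eligibility condition boils down to: the request's inputs contain exactly one non-empty batch, that batch already consists of a single run, and its since has been downgraded away from the minimum timestamp, which serves as a proxy for "was produced by compaction". The types and field names below are simplified stand-ins, not persist's real batch metadata API:

    use timely::progress::{Antichain, Timestamp};

    // Simplified stand-ins for the batch metadata persist tracks; the real
    // types carry much more detail.
    struct SketchDesc<T> {
        since: Antichain<T>,
    }

    struct SketchBatch<T> {
        len: usize,       // number of updates in the batch
        runs: Vec<usize>, // run split points; empty is treated as a single run here
        desc: SketchDesc<T>,
    }

    // Hypothetical helper mirroring the condition this PR counts: exactly one
    // non-empty input batch, already a single run, whose since has been
    // downgraded past the minimum timestamp.
    fn is_fast_path_eligible<T: Timestamp>(inputs: &[SketchBatch<T>]) -> bool {
        let mut nonempty = inputs.iter().filter(|b| b.len > 0);
        match (nonempty.next(), nonempty.next()) {
            (Some(batch), None) => {
                batch.runs.is_empty()
                    && batch.desc.since != Antichain::from_elem(T::minimum())
            }
            _ => false,
        }
    }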

Motivation

Tips for reviewer

Checklist

  • This PR has adequate test coverage / QA involvement has been duly considered.
  • This PR has an associated up-to-date design doc, is a design doc (template), or is sufficiently small to not require a design.
  • This PR evolves an existing $T ⇔ Proto$T mapping (possibly in a backwards-incompatible way) and therefore is tagged with a T-proto label.
  • If this PR will require changes to cloud orchestration, there is a companion cloud PR to account for those changes that is tagged with the release-blocker label (example).
  • This PR includes the following user-facing behavior changes:

@pH14 requested a review from a team as a code owner on March 24, 2023, 14:03
    }
    if let Some(single_nonempty_batch) = single_nonempty_batch {
        if single_nonempty_batch.runs.len() == 0
            && single_nonempty_batch.desc.since() != &Antichain::from_elem(T::minimum())
Contributor (reviewer):

technically we can only stop compacting it when the since is past the upper (otherwise we might get some consolidation from forwarding the timestamps), but this is probably close enough to give us a signal
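
A minimal sketch of that stricter check, assuming timely-style Antichain frontiers (the helper name is hypothetical, not something in persist):

    use timely::PartialOrder;
    use timely::progress::{Antichain, Timestamp};

    // Hypothetical helper: returns true once the batch's since frontier has
    // advanced to (or beyond) its upper, at which point forwarding timestamps
    // to the since can no longer produce any additional consolidation, so the
    // lone batch could safely be skipped for good.
    fn since_past_upper<T: Timestamp>(since: &Antichain<T>, upper: &Antichain<T>) -> bool {
        // Frontier comparison: `upper <= since`, i.e. every element of `since`
        // is greater than or equal to some element of `upper`.
        PartialOrder::less_equal(upper, since)
    }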

@pH14 (author):

I'm not sure we'd see that in practice, because the upper of the output will also get bumped each time through, right?

Contributor (reviewer):

not necessarily. like the upper of the shard could be at 7 and we could end up compacting [0,2) and [2,4)

@pH14 (author):

IIRC the most common case of this was a mostly-empty shard that looks like [0, 1) (with data), followed by empty [1, 2), [2, 3), ... progress batches that didn't get the empty-batch shortcut. so each time compaction fires, it gets an input of all the batches, and so in practice the upper would advance each time

Contributor (reviewer):

yeah, good point. I don't think it would be correct to use the logic you have here to trigger the optimization, but it's certainly safe to use as an upper bound on the potential benefit of the technique

@pH14 (author):

that's fair -- I mostly want a ballpark on whether fast-path compactions are like, 1%, 10%, 50% of our writes. I'm related-ly curious about TimelyDataflow/differential-dataflow#277 which seems like it'd help address your point here without having to compact a batch that will never benefit from logical compaction

Contributor (reviewer):

ha, I was about to point you at that PR! last time I took Frank's temperature on it, he was pretty hesitant to make any scary changes to the DD Spine, but I think it's a pretty straightforward cherry-pick to apply it to our fork

@pH14 merged commit a776344 into MaterializeInc:main on Mar 24, 2023
@pH14 deleted the persist-count-compaction-fast-path branch on March 24, 2023, 15:33