Delay block in place #454

Merged · rkuris merged 6 commits into main from rkuris/delay-block-in-place on Jan 2, 2024
Conversation

@rkuris (Collaborator) commented on Dec 21, 2023:

We can delay calling block_in_place until we actually block. This results in a small performance gain, probably more noticeable when we start doing reads in parallel.

Before:

Your branch is up to date with 'origin/main'.
❯ cargo run --quiet --release --example insert -- -i 1000 -s 1
Generated and inserted 1000 batches of size 1 in 5.022415705s
❯ cargo run --quiet --release --example insert -- -i 1000 -s 1
Generated and inserted 1000 batches of size 1 in 5.410185903s
❯ cargo run --quiet --release --example insert -- -i 1000 -s 1
Generated and inserted 1000 batches of size 1 in 5.059761131s

After:

❯ cargo run --quiet --release --example insert -- -i 1000 -s 1
Generated and inserted 1000 batches of size 1 in 4.911608567s
❯ cargo run --quiet --release --example insert -- -i 1000 -s 1
Generated and inserted 1000 batches of size 1 in 4.906468629s
❯ cargo run --quiet --release --example insert -- -i 1000 -s 1
Generated and inserted 1000 batches of size 1 in 4.918876902s

@xinifinity (Contributor) left a comment:

Seems there are tests failing?

@@ -569,7 +570,7 @@ impl DiskBufferRequester {
             .send(BufferCmd::GetPage((space_id, page_id), resp_tx))
             .map_err(StoreError::Send)
             .ok();
-        resp_rx.blocking_recv().unwrap()
+        block_in_place(move || resp_rx.blocking_recv().unwrap())
A contributor commented on this line:
Just guessing that you don't need this here because you're not in an async function

Another contributor commented:

+1, will it work if not in an async call?

@rkuris (Collaborator, Author) replied:
Without this, all the tests fail. This code does nothing if you're not in an async context. If you are, then it moves you to a blockable thread to prevent interference with the async runtime.
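To illustrate the behavior described here, a minimal sketch (an editor's illustration, not firewood code; `blocking_helper` is a hypothetical name, and it assumes tokio 1.x with the `rt-multi-thread` feature): the same synchronous function can be called both from a plain thread and from inside a multi-threaded runtime.

```rust
use std::time::Duration;

fn blocking_helper() -> u32 {
    // Outside a runtime this just runs the closure directly; inside a
    // multi-threaded runtime it first hands the worker's core to another
    // thread so other tasks keep running. Per the tokio docs, the one case
    // it cannot handle is a current_thread runtime, where it panics.
    tokio::task::block_in_place(|| {
        std::thread::sleep(Duration::from_millis(10));
        42
    })
}

fn main() {
    // Not in any async context: block_in_place is effectively a no-op wrapper.
    assert_eq!(blocking_helper(), 42);

    // Inside a task on the multi-threaded runtime: still fine to call.
    let rt = tokio::runtime::Runtime::new().unwrap();
    let answer = rt.block_on(async {
        tokio::task::spawn(async { blocking_helper() }).await.unwrap()
    });
    assert_eq!(answer, 42);
}
```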

@rkuris enabled auto-merge (squash) on January 2, 2024 17:59

@rkuris (Collaborator, Author) commented on Jan 2, 2024:

> Seems there are tests failing?

Should be good now. I found another instance where we were making a blocking call in an async context.

rkuris added 3 commits January 2, 2024 10:16
It's safe to call `block_in_place` if we're not in an async runtime, or
even to do it recursively. The docs suggest you defer calling it until
you know you're about to actually block. This removes the calls from
several locations and puts them all in the one blocking call we have --
the one that reads from the cache.

This didn't make a big difference, but it does eliminate the extra
overhead of moving things over to another thread prematurely.

Will attach flamegraphs of 1,000 inserts.

Added a block_in_place to resolve the last test issue
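For context, the shape of the change described in the commit message above can be sketched as follows (hypothetical stand-ins for firewood's `DiskBufferRequester` and `BufferCmd`, assuming tokio's mpsc/oneshot channels with the `sync` feature): the command is sent as usual, and only the single blocking wait on the response is wrapped in `block_in_place`.

```rust
use tokio::sync::{mpsc, oneshot};
use tokio::task::block_in_place;

enum Cmd {
    // hypothetical stand-in for BufferCmd::GetPage
    GetPage(u64, oneshot::Sender<Vec<u8>>),
}

struct Requester {
    sender: mpsc::UnboundedSender<Cmd>,
}

impl Requester {
    // Synchronous API that may be called from inside or outside a tokio runtime.
    fn get_page(&self, page_id: u64) -> Vec<u8> {
        let (resp_tx, resp_rx) = oneshot::channel();
        // Send errors are ignored here to keep the sketch short, mirroring the
        // `.ok()` in the diff above.
        self.sender.send(Cmd::GetPage(page_id, resp_tx)).ok();
        // Only here, where we actually block, do we tell the runtime about it.
        block_in_place(move || resp_rx.blocking_recv().unwrap())
    }
}
```

Callers higher up the stack then no longer need their own `block_in_place` wrappers, avoiding the premature hand-off to another thread mentioned in the commit message.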
@rkuris force-pushed the rkuris/delay-block-in-place branch from 82b34ec to f733f08 on January 2, 2024 18:16
@rkuris merged commit fab15d7 into main on Jan 2, 2024
5 checks passed
@rkuris deleted the rkuris/delay-block-in-place branch on January 2, 2024 19:34