
Simplify future module, add context object for execute #12

Merged · 2 commits · Jan 13, 2025
Conversation

jlizen (Owner) commented Jan 9, 2025

Tying up the loose ends from: #11

Add ExecuteContext object

Good suggestion by rcoh@: this way we at least mostly avoid breaking application authors who have custom closures using this metadata if we add new fields. It also avoids parameter soup.

I considered labeling the ExecuteContext struct as non-exhaustive, to avoid breaking strategies that do something like:

let ExecuteContext { chance_of_blocking, namespace } = context;

... but that would have made things really obnoxious for library authors, since then they couldn't directly construct the struct, and we'd be back to input parameter soup. I did add #[non_exhaustive] to ChanceOfBlocking, though.

I also considered some sort of default impl to make it truly non-breaking to library authors as well to add new fields, but I think we kind of do want to force them to specify this stuff if we consider it important enough to include in context.
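To make the tradeoff concrete, here is a minimal sketch of the shape described above. The chance_of_blocking and namespace field names come from the snippet earlier; the derives, the namespace type, and everything else are assumptions for illustration, not the crate's actual API:

```rust
// Sketch only: real field/variant definitions in the crate may differ.
#[non_exhaustive] // adding a variant later is not a breaking change
#[derive(Debug, Clone, Copy, PartialEq)]
enum ChanceOfBlocking {
    High,
    Frequent,
}

// Deliberately NOT #[non_exhaustive]: library authors must be able to
// construct this directly with struct-literal syntax.
struct ExecuteContext {
    chance_of_blocking: ChanceOfBlocking,
    namespace: &'static str,
}

fn main() {
    let context = ExecuteContext {
        chance_of_blocking: ChanceOfBlocking::High,
        namespace: "tls-handshake",
    };
    // Exhaustive destructuring in a custom strategy closure; this is
    // exactly the pattern that #[non_exhaustive] on the struct would forbid.
    let ExecuteContext { chance_of_blocking, namespace } = context;
    println!("{namespace}: {chance_of_blocking:?}");
}
```

With the struct left exhaustive, a library author can build it with a plain struct literal, and an application author's exhaustive destructuring keeps compiling; adding a field later breaks both, which is the tradeoff weighed above.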

Simplify future module

I removed:

  • OffloadFirst - added an example in the module docs instead showing how to use futures_util
  • The PassThroughInnerPoll mode of OffloadWith - including it would require splitting the get_offload closure from the incorporate closure, which makes for an awkward API.

In addition to a simpler builder, OffloadWith now has a single get_offload() closure that takes an owned inner future rather than a mutable reference to it. This is actually preferable for our current primary use case (tokio-rustls), since otherwise we would need to add new transitional states to their inner handshake future, and we're trying to keep the footprint small.
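The ownership handoff can be sketched with a plain (non-async) state machine. All names here are illustrative, the real future's poll logic is more involved, and the closure signature is an assumption:

```rust
use std::mem;

// Illustrative states; the crate's real types differ.
enum State<F, O> {
    // still driving the inner (future-like) value
    Inner(F),
    // offloaded work in flight
    Offload(O),
    // transient hole while ownership is inside the closure
    Empty,
}

// get_offload receives the inner value BY VALUE (no mutable borrow), and
// either starts offloaded work (Ok) or hands the inner value straight back (Err).
fn step<F, O>(state: &mut State<F, O>, get_offload: impl FnOnce(F) -> Result<O, F>) {
    // Take ownership out of the enum, leaving Empty behind temporarily.
    match mem::replace(state, State::Empty) {
        State::Inner(inner) => {
            *state = match get_offload(inner) {
                Ok(offload) => State::Offload(offload),
                Err(inner) => State::Inner(inner),
            };
        }
        // Not our case: put whatever we took back untouched.
        other => *state = other,
    }
}

fn main() {
    let mut state: State<i32, String> = State::Inner(7);
    // The closure consumes the inner value and decides to offload.
    step(&mut state, |inner| Ok(format!("offloading after {inner}")));
    assert!(matches!(state, State::Offload(_)));
    println!("offload started");
}
```

Because the closure owns the inner value, the wrapped future (e.g. tokio-rustls's handshake) never needs new transitional states of its own; the wrapper's Empty hole absorbs that complexity instead.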

If somebody does want the ability to continue driving the inner future while offloading work (or driving multiple pieces of offloaded work), we can add it in, I pushed my old impl to a branch.

I touched on this in some new module docs in a Q&A section.

@jlizen jlizen changed the title Simplify future model, add context object for execute Simplify future module, add context object for execute Jan 10, 2025
…and shift to owned inner future in closure while offloading work
jlizen (Owner, Author) commented Jan 10, 2025

Addressed feedback from rcoh@ in a new commit, thanks Russell!

/// Very likely to block, use primary sync strategy
High,
/// Very likely to block for significant amount of time(~50μs+), use primary strategy
Frequent,
jlizen (Owner, Author):
@rcoh what do you think about this name change? any other suggestions?

it's awkward because there are really a few dimensions we might care about:

  • proportion of polls to a given future that will block
  • proportion of instances of that future that will have polls that block
  • absolute number of blocking futures created compared to other futures in this library
  • absolute number of blocking futures created compared to other futures in the typical calling application

The last two, I think we can punt on; type or namespace addresses those better.

I had intended to address the first two with this enum. Do the names capture that / could descriptions be clearer?

rcoh:
I think we should work backwards directly from what someone might do with this. I suppose this is block_in_place vs. spawn_blocking?

rcoh:
Or perhaps, eventually some sort of runtime signal?

jlizen (Owner, Author):

One facet would be block_in_place vs spawn_blocking (or just execute directly, depending on overall application throughput).

Another might be prioritization within a custom closure.
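Working backwards from the caller, a dispatch on the signal might look like the following sketch. The enum, the mapping of variants to strategies, and the function name are all illustrative assumptions drawn from the discussion, not the crate's actual behavior:

```rust
// Illustrative only: which variant maps to which strategy is an assumption.
enum ChanceOfBlocking {
    High,
    Frequent,
}

// Pick a sync strategy name from the blocking signal; a runtime signal
// (e.g. worker saturation) could eventually feed into the same decision.
fn choose_strategy(chance: ChanceOfBlocking) -> &'static str {
    match chance {
        // most polls block: take over the current worker thread
        ChanceOfBlocking::High => "block_in_place",
        // occasional ~50μs+ blocking: ship it to the blocking pool
        ChanceOfBlocking::Frequent => "spawn_blocking",
    }
}

fn main() {
    println!("High -> {}", choose_strategy(ChanceOfBlocking::High));
    println!("Frequent -> {}", choose_strategy(ChanceOfBlocking::Frequent));
}
```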

src/future.rs Outdated
// 2. If yes, we are done, return Poll::Pending
// 2. If no, split apart inner to get owned inner future
// 3. Send inner future into get_offload() closure
// 4. Put inner back together with returned offload owrk or inner future
rcoh:
Typo: "owrk" should be "work".

Comment on lines +617 to +618
OffloadWithInner::InnerFut(_) => self.as_mut().poll_inner(cx),
OffloadWithInner::OffloadActive(_) => self.as_mut().poll_offload_and_then_inner(cx),
rcoh:
Non-blocking, but should we impl Future for OffloadWithInner, or does that just add pointless indirection?

jlizen (Owner, Author):

IMHO pointless indirection, given that the type isn't exposed and it is only polled when the primary type is polled. Unless you think this is still not readable, in which case it might be a bit simpler.


… ::new(), refactor OffloadWith future impl for readability
2 participants