Simplify future module, add context object for execute #12
Conversation
…and shift to owned inner future in closure while offloading work
Addressed feedback from rcoh@ in a new commit, thanks Russell!
/// Very likely to block, use primary sync strategy
High,
/// Very likely to block for significant amount of time(~50μs+), use primary strategy
Frequent,
@rcoh what do you think about this name change? any other suggestions?
it's awkward because there are really a few dimensions we might care about:
- proportion of polls to a given future that will block
- proportion of instances of that future that will have polls that block
- absolute number of blocking futures created compared to other futures in this library
- absolute number of blocking futures created compared to other futures in the typical calling application
The last two I think we can punt on; type or namespace addresses those better.
I had intended to address the first two with this enum. Do the names capture that / could descriptions be clearer?
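For concreteness, here is one way the first two dimensions could be spelled out on the current variant names. This is only a sketch of possible doc wording, and the mapping of dimensions to names is one interpretation, not text from this PR:

```rust
/// Hint from the library to the execute strategy about how this future blocks.
/// (Sketch only; the mapping of dimensions to names is one interpretation.)
#[non_exhaustive]
pub enum ChanceOfBlocking {
    /// A high proportion of polls of this future are expected to block
    /// (dimension 1: proportion of polls to a given future that block).
    High,
    /// Most instances of this future will block on at least one poll, typically
    /// for a significant amount of time (~50µs+)
    /// (dimension 2: proportion of instances that ever have a blocking poll).
    Frequent,
}
```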
I think we should work backwards directly from what someone might do with this. I suppose this is block_in_place vs. spawn_blocking?
Or perhaps, eventually some sort of runtime signal?
One facet would be block_in_place vs spawn_blocking (or just execute directly, depending on overall application throughput).
Another might be prioritization within a custom closure.
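A minimal sketch of that first facet, assuming a hypothetical execute-style hook and a `chance_of_blocking` field on the context; none of these signatures are the crate's actual API, and the High/Frequent mapping is just an example:

```rust
use tokio::task;

// Stand-ins for the crate's types; the field name and exact shape are assumptions.
enum ChanceOfBlocking {
    High,
    Frequent,
}

struct ExecuteContext {
    chance_of_blocking: ChanceOfBlocking,
}

// One possible custom strategy: choose block_in_place vs spawn_blocking from the hint.
async fn execute<F, T>(ctx: ExecuteContext, work: F) -> T
where
    F: FnOnce() -> T + Send + 'static,
    T: Send + 'static,
{
    match ctx.chance_of_blocking {
        // Expected to block on most polls: stay on this worker thread and let
        // tokio migrate other tasks away (requires the multi-threaded runtime).
        ChanceOfBlocking::High => task::block_in_place(work),
        // Blocks less often but for a long time (~50µs+): hand it to the
        // dedicated blocking thread pool.
        ChanceOfBlocking::Frequent => task::spawn_blocking(work)
            .await
            .expect("blocking work panicked"),
    }
}
```

The point is just that the enum only needs to carry enough signal to make this kind of choice (plus, perhaps, a priority for a custom pool).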
src/future.rs (outdated)
// 2. If yes, we are done, return Poll::Pending
// 2. If no, split apart inner to get owned inner future
// 3. Send inner future into get_offload() closure
// 4. Put inner back together with returned offload owrk or inner future
owrk
OffloadWithInner::InnerFut(_) => self.as_mut().poll_inner(cx),
OffloadWithInner::OffloadActive(_) => self.as_mut().poll_offload_and_then_inner(cx),
Non-blocking, but should we impl Future for OffloadWithInner, or is that just adding pointless indirection?
IMHO pointless indirection, given that the type isn't exposed and it is only polled when the primary type is polled. Unless you think this is still not readable, in which case it might be a bit simpler.
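For readers following along, a minimal sketch of the shape being discussed, with simplified stand-ins and `Unpin` bounds for brevity; the real code in src/future.rs dispatches to `poll_inner` / `poll_offload_and_then_inner` helpers rather than inlining everything like this:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// Simplified stand-ins for the types discussed in this thread.
enum OffloadWithInner<I, O> {
    /// No offloaded work in flight; just poll the inner future.
    InnerFut(I),
    /// Offloaded work in flight; it hands the owned inner future back when done.
    OffloadActive(O),
}

struct OffloadWith<I, O> {
    inner: OffloadWithInner<I, O>,
}

impl<I, O> Future for OffloadWith<I, O>
where
    I: Future + Unpin,
    O: Future<Output = I> + Unpin,
{
    type Output = I::Output;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        // `OffloadWithInner` is private and only ever polled from here, so the
        // enum is matched directly instead of getting its own `Future` impl.
        let this = self.get_mut();
        loop {
            match &mut this.inner {
                OffloadWithInner::InnerFut(inner) => return Pin::new(inner).poll(cx),
                OffloadWithInner::OffloadActive(offload) => match Pin::new(offload).poll(cx) {
                    // Offload finished: put the returned inner future back and
                    // poll it on the next loop iteration.
                    Poll::Ready(inner) => this.inner = OffloadWithInner::InnerFut(inner),
                    Poll::Pending => return Poll::Pending,
                },
            }
        }
    }
}
```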
… ::new(), refactor OffloadWith future impl for readability
Tying up the loose ends from: #11
Add ExecuteContext object

Good suggestion by rcoh@: this way we at least mostly don't break application authors that have custom closures using this metadata if we add new fields. Also avoids parameter soup.
I considered labeling the `ExecuteContext` struct as non-exhaustive, to avoid breaking strategies that do something like: ... but that would have made things really obnoxious for library authors, since then they can't directly construct the struct, and we're then back to input parameter soup. I did add `#[non_exhaustive]` to `ChanceOfBlocking` though.

I also considered some sort of default impl to make adding new fields truly non-breaking for library authors as well, but I think we kind of do want to force them to specify this stuff if we consider it important enough to include in the context.
Simplify future module

I removed:

- `OffloadFirst` - added an example in the module docs instead showing how to use `futures_util` (a rough sketch of that approach is at the end of this description).
- `PassThroughInnerPoll` mode of `OffloadWith` - including this requires that we split the `get_offload` closure from the `incorporate` closure, which is an awkward API.

In addition to a simpler builder, `OffloadWith` now has a single `get_offload()` closure that takes an owned inner future rather than a mutable reference to it. This is actually preferable for our current primary use case (tokio-rustls), since otherwise we would need to add new transitional states to their inner handshake future, and we're trying to keep the footprint small.

If somebody does want the ability to continue driving the inner future while offloading work (or driving multiple pieces of offloaded work), we can add it in; I pushed my old impl to a branch.
I touched on this in some new module docs in a Q&A section.
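As referenced in the `OffloadFirst` bullet above, here is a rough sketch of the kind of `futures_util`-based replacement the module docs describe; it is not necessarily the exact example that landed:

```rust
use futures_util::FutureExt;
use std::future::Future;

/// "Offload first": run the offloaded work to completion, then drive the inner
/// future. `then` from `futures_util` covers this without a dedicated
/// `OffloadFirst` type.
fn offload_first<O, I>(offload: O, inner: I) -> impl Future<Output = I::Output>
where
    O: Future,
    I: Future,
{
    offload.then(|_offload_result| inner)
}
```

If the offloaded work produces something the inner future needs, the closure is the natural place to incorporate it before returning `inner`.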