Remove timeout in awaitable #651
Conversation
Now that we have eliminated timeouts, I really do hate the fact that we are doing a busy wait. Would it make sense to do a very slow backoff? Maybe call … For synchronous calls, I was wondering if it makes sense for us to use … |
It's probably more efficient to invest our time and energy into other things until upstream wgpu-native implements callbacks/futures/promises or something of that nature. Then we can build our polling-free implementation on top. |
@Korijn I agree that we will (hopefully) eventually get a polling-free implementation, but in the meantime, adding a bit of logic to sleep a bit longer than zero seconds sounds like a good idea to me. Apparently in some cases the future may take over 5 s to resolve, and that's a much longer time to busy-wait than we first anticipated. @fyellin do you have time to create a PR for that? Maybe something like: sleep_time = min(0.01, sleep_time + 0.001) |
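For reference, a minimal sketch of the kind of incremental back-off being suggested. `poll` is a hypothetical placeholder for whatever check resolves the awaitable, not the actual wgpu-py internals; the only real part is the sleep formula from the comment above.

```python
import time

def wait_for_result(poll, max_sleep=0.010, step=0.001):
    """Poll until `poll()` returns a non-None result, backing off gradually.

    The sleep grows by `step` (1 ms) on each iteration and is capped at
    `max_sleep` (10 ms), so a fast result still returns almost immediately,
    while a slow one (5 s or more) no longer busy-waits the CPU.
    """
    sleep_time = 0.0
    while True:
        result = poll()
        if result is not None:
            return result
        sleep_time = min(max_sleep, sleep_time + step)
        time.sleep(sleep_time)
```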
Yes. Those were pretty much my thoughts exactly. I didn't mind constant polling when we believed the results would always be returned quickly. But now that we realize that polling may take 5 s or more, that's not reasonable. I'll come up with something reasonable (and easy to modify if you don't like it) and submit a PR. |
It is clear that the use case in #650 is compute shaders, right? For real-time rendering use cases, results should always be returned quickly. Regardless, I think both of you are probably right that some kind of back-off mechanism is appropriate. |
Probably, but not necessarily. I could imagine someone attempting an extremely complicated rendering, with multiple light sources and complicated shadows, and rendering it to a texture rather than the screen. The only way to get a texture from the GPU to the CPU is via a buffer. |
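As an illustration of that readback path, here is a minimal sketch assuming a recent wgpu-py API (keyword names may differ between versions). `queue.read_buffer` is the convenience helper that blocks until the buffer mapping completes, which is exactly the kind of wait being discussed in this thread.

```python
import wgpu
from wgpu.utils import get_default_device

device = get_default_device()
w = h = 64  # 64 * 4 bytes per row = 256, which satisfies the bytes_per_row alignment rule

# A texture that a render pass would draw into (left uninitialized here).
texture = device.create_texture(
    size=(w, h, 1),
    format=wgpu.TextureFormat.rgba8unorm,
    usage=wgpu.TextureUsage.COPY_SRC | wgpu.TextureUsage.RENDER_ATTACHMENT,
)
# The intermediate buffer: textures cannot be mapped, so readback goes through this.
buffer = device.create_buffer(
    size=w * h * 4,
    usage=wgpu.BufferUsage.COPY_DST | wgpu.BufferUsage.MAP_READ,
)

# Copy texture -> buffer on the GPU.
encoder = device.create_command_encoder()
encoder.copy_texture_to_buffer(
    {"texture": texture, "mip_level": 0, "origin": (0, 0, 0)},
    {"buffer": buffer, "offset": 0, "bytes_per_row": w * 4, "rows_per_image": h},
    (w, h, 1),
)
device.queue.submit([encoder.finish()])

# Blocks until the GPU work is done and the buffer is mapped for reading.
data = device.queue.read_buffer(buffer)
print(len(data))  # w * h * 4 bytes
```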
I guess you are right! 👍🏻 |
Closed #650
cc @fyellin