fix: Remove performance limitation when using Java Virtual Threads #2307
Conversation
Thank you for raising this issue. Virtual threads are something we're going to look at in 2024. Unfortunately, I think this PR is not comprehensive enough to cover the threading and synchronization expectations much of the library's internals assume today. I've created #2308 as a public tracking issue for the feature request of improving Virtual Thread compatibility. Apologies, we can't accept this contribution at this time. PS: If you are running into contention issues on ReadChannel specifically, you could try
Hey @BenWhitehead
I reviewed the Oracle article linked in the issue. While superficially things should be okay, historically we have found issues in the JDK itself and had to work on getting them fixed and/or working around the constraints they present (for example, we've found cases where the JDK was erroneously sending multiple HTTP requests instead of just one, and cases where decompression would stop partway through without an error). This also applies to the intermediary libraries our clients depend on. With the level of existing threading, synchronization, native-code cross-library compatibility layers, and other complexity in the library, we need to investigate our client and its dependencies more deeply to ensure there is not anything adverse lurking in the shadows. This is something that is on our radar for next year. There is some work already happening around virtual threads and Java 21 compatibility that we don't yet have publicly shareable dates on. I apologize for not being able to accept this contribution right now, but there is other due diligence we are still doing around virtual threads.
"historically we have found issues in the JDK itself" This level of - I can only call it - paranoia is disappointing. Every contribution is rejectable on those grounds, so you might as well remove "Contributions to this library are always welcome" from the README. No change is provably safe if you doubt the ability of the platform to execute it according to its own specification. 1 + 1 might one day be 3. If this is your opinion, then you should never change any line ever again. |
I echo @michaelboyles' sentiment. Disappointing to say the least, especially since the changes are trivial enough to inspect for correctness, not to mention the unit and integration tests, which I ran with positive results. Just FUD on Google's part IMO.
Hi @PhilBoutros and @michaelboyles, Thank you for raising this request, but we aren't ready for this change just yet. @BenWhitehead has created a tracking issue calling out that it's on our team's radar (#2308). I'm closing this pull request and our team will discuss prioritization. Stay tuned.
Multiple, potentially lengthy `read` and `write` methods use the `synchronized` keyword. This will pin virtual threads, preventing them from being detached from carrier platform threads during these operations. This prevents virtual threads from providing the concurrency and throughput they were designed for when using this library.

Per Oracle: "Pinning does not make an application incorrect, but it might hinder its scalability. Try avoiding frequent and long-lived pinning by revising synchronized blocks or methods that run frequently and guarding potentially long I/O operations with java.util.concurrent.locks.ReentrantLock."
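To illustrate the pinning behavior described above, here is a minimal, self-contained sketch (not part of the PR; the class name `PinningDemo` is made up) that can be run on JDK 21. A virtual thread that blocks while holding a monitor stays pinned to its carrier platform thread; running with `-Djdk.tracePinnedThreads=full` should report the pinned stack.

```java
// Minimal demonstration of virtual-thread pinning on JDK 21 (illustrative only).
// Run with: java -Djdk.tracePinnedThreads=full PinningDemo
public class PinningDemo {
    private static final Object MONITOR = new Object();

    public static void main(String[] args) throws InterruptedException {
        Thread vt = Thread.ofVirtual().start(() -> {
            synchronized (MONITOR) {
                try {
                    // Blocking while holding a monitor pins the virtual thread
                    // to its carrier platform thread, so the carrier cannot be
                    // reused to run other virtual threads in the meantime.
                    Thread.sleep(1_000);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        vt.join();
    }
}
```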
This commit replaces the `synchronized` keyword with a `ReentrantLock` on select methods. Not all `synchronized` keywords are gone, only the ones on methods that obviously might take a while. I was conservative as I'm new to the codebase.

A more complete explanation is available in this article.