EGL Backend: Fail to run with CUDA OpenGL interop #209
I can reproduce the error, but unfortunately I cannot readily figure out why it is occurring, because nVidia's libraries are closed-source. This may be similar to the issues with their Vulkan drivers (https://forums.developer.nvidia.com/t/headless-vulkan-with-multiple-gpus/222832/15), in that the CUDA driver may make assumptions about the X display that are not valid in a remote display environment.
Note that OpenCL/OpenGL interop also doesn't work with the EGL back end, because nVidia doesn't support the …
This is still an issue, unfortunately, and it doesn't appear to be something that can be fixed in VirtualGL. I can only guess that CUDA is somehow complaining about the fact that VirtualGL is sneaking in an EGL context behind the scenes when CUDA expects a GLX context. (Perhaps CUDA OpenGL interop is tied to the …
I'm also hitting this issue. I guess one workaround is for the application to open its own EGL context, separate from the VirtualGL one, since that works for interop, and then share it with the VirtualGL context for presentation/rendering? I can confirm this issue persists whether the host application creates a GL context through EGL or GLX; only with GLX, the error is …
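For reference, here is a minimal sketch of the first half of that workaround: creating a headless, device-backed EGL context directly on the GPU, bypassing any interposed X display. This assumes the `EGL_EXT_device_base` and `EGL_EXT_platform_device` extensions are available (they are on recent nVidia drivers); the helper name and structure are illustrative, not from any of the projects discussed here.

```cpp
// Sketch: create a headless EGL context directly on the first GPU device,
// with no X server or VirtualGL interposition involved.
// Assumes EGL_EXT_device_base / EGL_EXT_platform_device support.
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <cstdio>

EGLContext createHeadlessContext(EGLDisplay *outDisplay) {
    auto queryDevices = (PFNEGLQUERYDEVICESEXTPROC)
        eglGetProcAddress("eglQueryDevicesEXT");
    auto getPlatformDisplay = (PFNEGLGETPLATFORMDISPLAYEXTPROC)
        eglGetProcAddress("eglGetPlatformDisplayEXT");
    if (!queryDevices || !getPlatformDisplay)
        return EGL_NO_CONTEXT;  // extensions not available

    EGLDeviceEXT device;
    EGLint numDevices = 0;
    if (!queryDevices(1, &device, &numDevices) || numDevices < 1)
        return EGL_NO_CONTEXT;

    // Open a display on the GPU device itself, not on an X display.
    EGLDisplay dpy = getPlatformDisplay(EGL_PLATFORM_DEVICE_EXT, device, nullptr);
    if (dpy == EGL_NO_DISPLAY || !eglInitialize(dpy, nullptr, nullptr))
        return EGL_NO_CONTEXT;

    static const EGLint configAttribs[] = {
        EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
        EGL_NONE
    };
    EGLConfig config;
    EGLint numConfigs = 0;
    eglChooseConfig(dpy, configAttribs, &config, 1, &numConfigs);

    eglBindAPI(EGL_OPENGL_API);
    *outDisplay = dpy;
    // Make current later with a pbuffer surface (or EGL_KHR_surfaceless_context).
    return eglCreateContext(dpy, config, EGL_NO_CONTEXT, nullptr);
}
```

The unresolved half is the second step: results rendered in this context would still have to be shared with the VirtualGL-managed context (e.g. via shared GL objects) for presentation.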
Here's what I observe on my Rocky Linux 8.5 machine with CUDA Toolkit 12.6, nVidia 550.90.07, and a Quadro P620:
Unfortunately, since CUDA is closed-source, I am completely clueless as to how to diagnose the issue. I used APITrace to obtain a trace of …
NOTE: Since the resources being passed to CUDA are OpenGL resources, not GLX or EGL resources, it shouldn't really matter how the context was created (but apparently it does, which is the fundamental mystery behind this issue).
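That point is visible in the registration API itself: the CUDA runtime's interop entry points take plain OpenGL object names, with no GLX or EGL handle anywhere in the signature. A minimal sketch (the helper name is illustrative; `cudaGraphicsGLRegisterBuffer` is the real runtime call used by samples such as nVidia's simpleGL):

```cuda
// Sketch: CUDA-OpenGL interop registers a plain GL buffer name (GLuint).
// No GLX or EGL handle appears in the API, which is why the context
// creation path should, in principle, not matter.
#include <cuda_gl_interop.h>
#include <cstdio>

bool registerWithCuda(GLuint vbo) {
    cudaGraphicsResource_t res = nullptr;
    // Under the VirtualGL EGL back end, a registration call like this is
    // where the interop samples reportedly fail.
    cudaError_t err = cudaGraphicsGLRegisterBuffer(
        &res, vbo, cudaGraphicsRegisterFlagsNone);
    if (err != cudaSuccess) {
        std::fprintf(stderr, "cudaGraphicsGLRegisterBuffer: %s\n",
                     cudaGetErrorString(err));
        return false;
    }
    cudaGraphicsUnregisterResource(res);
    return true;
}
```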
Tried to run VirtualGL using the EGL back end with the official CUDA OpenGL interop sample code, but it keeps failing with the following error messages: