Allocating NVISII to specific / multiple GPUs #158
Hello! For your first question, I recommend setting the CUDA_VISIBLE_DEVICES environment variable on your system to control which GPUs NVISII uses. For your second question, multiple GPUs only really make sense for very complex scenes where it's difficult to achieve more than one or two frames per second on a single card. For simpler scenes where the framerate is higher, there is more overhead synchronizing the cards, and multi-GPU doesn't make as much sense. Instead, I recommend rendering different frames on different cards using multiple processes. And for your final question, we unfortunately do not currently support data distributed rendering. We only support data replicated rendering (i.e., copying the same data to each card). Data distributed ray tracing is an open problem that we are researching, though!
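As a minimal sketch of the multi-process approach mentioned above (not an NVISII API, just standard Python process launching): each child process gets its own CUDA_VISIBLE_DEVICES value before the interpreter starts, so each one only sees a single GPU. The worker script name `render_worker.py` and its `--start`/`--end` frame arguments are hypothetical placeholders.

```python
import os
import subprocess

# Hypothetical worker script that imports nvisii and renders the frames it is given.
WORKER = "render_worker.py"

gpu_ids = [0, 1, 2]      # GPUs to use, one process per GPU
frames_per_gpu = 100     # example workload split

procs = []
for i, gpu in enumerate(gpu_ids):
    env = os.environ.copy()
    # Restrict this child process to a single GPU. The variable is set in the
    # child's environment before its Python interpreter (and NVISII) starts.
    env["CUDA_VISIBLE_DEVICES"] = str(gpu)
    start = i * frames_per_gpu
    end = start + frames_per_gpu
    procs.append(subprocess.Popen(
        ["python", WORKER, "--start", str(start), "--end", str(end)],
        env=env,
    ))

for p in procs:
    p.wait()
```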
Hi @natevm, I tried CUDA_VISIBLE_DEVICES and it doesn't seem to have any effect, though. Could you please provide some guidance here? Thanks.
Hi @natevm, In particular, I tried setting CUDA_VISIBLE_DEVICES=-1, CUDA_VISIBLE_DEVICES=0, CUDA_VISIBLE_DEVICES=1, CUDA_VISIBLE_DEVICES=2, etc. In all of these cases, NVISII still seems to use all GPUs. Thank you, and please advise.
How are you setting your environment variable? We use CUDA_VISIBLE_DEVICES and know that it does work, so it sounds like an issue with the environment variable not "sticking". Note, you must set this variable before running Python; setting it from within Python itself doesn't work, iirc. In your NVISII Python script, you could try reading CUDA_VISIBLE_DEVICES to check whether the interpreter has properly picked up your environment.
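As a quick sanity check (a minimal sketch, not NVISII-specific), you can launch Python with the variable already exported, e.g. `CUDA_VISIBLE_DEVICES=0 python your_script.py`, and print what the interpreter sees at the top of the script:

```python
import os

# If this prints "<not set>" or the wrong value, the variable was not
# exported before Python started, and NVISII will still see every GPU.
print("CUDA_VISIBLE_DEVICES =", os.environ.get("CUDA_VISIBLE_DEVICES", "<not set>"))
```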
I have been able to run NVISII on 8 GPUs on a single machine.
I am not saying you are doing it wrong, just that we were able to test this without any issues in the past, and this is the mechanism we decided to go with for selecting a single GPU vs. multiple GPUs.
Thank you! Problem solved as you suggested! Thanks again.
Hi NVISII team,
Thank you for this great repo!
Quick question: suppose you have three GPUs in a computer, but you only want NVISII to use one of the three GPUs, not the other two - is there a way to do that?
Another question: by default, NVISII occupies all three GPUs, and the amount of memory allocated per GPU appears to be the same no matter how many GPUs are available in total. Rendering doesn't appear to be much faster with multiple GPUs either.
In this case, what is the benefit of using multiple GPUs for NVISII, if both the memory usage and the rendering speed are the same regardless?
Finally, is it possible to reduce the GPU memory allocation per GPU by providing multiple GPUs to NVISII? For example, if NVISII needs a total of 24 GB of GPU memory, is it possible to run it on 3 GPUs, each with 11 GB of memory?
Thank you!