How to avoid suboptimal performance when using multiple GPUs #480

Open
mingxuancai opened this issue Dec 6, 2024 · 1 comment

Comments

@mingxuancai

Hi,

My server has 3 different GPUs (A6000 / Titan Xp / 3090). I successfully installed tinycudann, but when I import it, I get the following warning:

"UserWarning: System has multiple GPUs with different compute capabilities: [86, 61, 86]. Using compute capability 61 for best compatibility. This may result in suboptimal performance."

I tried to use only the A6000 for the FullyFusedMLP, but it still reports falling back to CutlassMLP. Is there any way to avoid the suboptimal performance?
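
(Roughly what I mean by "only use the A6000", as a minimal sketch: hiding the other GPUs via `CUDA_VISIBLE_DEVICES` before importing tinycudann. The device index `0` is a guess on my part; it depends on how the GPUs are enumerated on the machine.)

```python
import os

# Hide the Titan Xp (compute capability 61) from this process so that only
# the A6000 is visible. The index "0" is an assumption -- check `nvidia-smi`
# for the actual index of the A6000 on your machine.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# The variable must be set before any CUDA context is created,
# i.e. before importing torch / tinycudann.
import torch
import tinycudann as tcnn

print(torch.cuda.get_device_name(0))  # should report the A6000
```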

Thank you!

@mingxuancai
Author

"UserWarning: System has multiple GPUs with different compute capabilities: [86, 61, 86]. Using compute capability 61 for best compatibility. This may result in suboptimal performance."

"tiny-cuda-nn warning: FullyFusedMLP is not supported for the selected architecture 61. Falling back to CutlassMLP. For maximum performance, raise the target GPU architecture to 75+."

From my experience, if one of the GPUs is below compute capability 75, tinycudann simply uses the lowest capability. However, even if I choose to use only the 75+ GPUs, that does not change the situation. So how can I raise the target GPU architecture to 75+?
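
For reference, the approach I am considering (a sketch, not verified in this thread): rebuilding the torch bindings with the target architecture pinned. As far as I can tell from the tiny-cuda-nn README, the bindings' setup.py reads a `TCNN_CUDA_ARCHITECTURES` environment variable at build time; the variable name and the install URL below come from that README, not from this issue, and the device indices are machine-specific assumptions.

```python
import os
import subprocess
import sys

# Sketch: rebuild the tinycudann torch bindings for compute capability 86 only.
# TCNN_CUDA_ARCHITECTURES is read by the bindings' setup.py at build time
# (assumption based on the tiny-cuda-nn README; verify against your version).
env = dict(os.environ, TCNN_CUDA_ARCHITECTURES="86")

subprocess.check_call(
    [
        sys.executable, "-m", "pip", "install",
        "--force-reinstall", "--no-cache-dir",
        "git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch",
    ],
    env=env,
)

# At runtime, keep the compute-capability-61 GPU hidden so the 86 build is used,
# e.g. CUDA_VISIBLE_DEVICES=0,2 python train.py  (indices are machine-specific).
```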
