Replies: 3 comments
-
That should be fine. AFAIK we even build both the Serial and CUDA backends by default. But it's definitely not a very common use case, so I could see that there are some gotchas. (And it probably depends a bit on the solver. I could imagine that certain solvers do different things for CPU and GPU, and in particular expect different communication patterns.)
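At the Kokkos level, "both backends in one executable" looks roughly like this minimal sketch, assuming a build configured with both `Kokkos_ENABLE_SERIAL=ON` and `Kokkos_ENABLE_CUDA=ON` (the kernel names below are just illustrative):

```cpp
#include <Kokkos_Core.hpp>
#include <cstdio>

int main(int argc, char* argv[]) {
  Kokkos::initialize(argc, argv);
  {
    const int n = 1000;

    // Same reduction body, launched once on the host Serial backend...
    double serial_sum = 0.0;
    Kokkos::parallel_reduce(
        "sum_serial", Kokkos::RangePolicy<Kokkos::Serial>(0, n),
        KOKKOS_LAMBDA(const int i, double& acc) { acc += double(i); },
        serial_sum);

    // ...and once on the CUDA backend, from the same executable.
    double cuda_sum = 0.0;
    Kokkos::parallel_reduce(
        "sum_cuda", Kokkos::RangePolicy<Kokkos::Cuda>(0, n),
        KOKKOS_LAMBDA(const int i, double& acc) { acc += double(i); },
        cuda_sum);

    std::printf("serial=%g cuda=%g\n", serial_sum, cuda_sum);
  }
  Kokkos::finalize();
  return 0;
}
```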
-
Another use case comes from binary distributability, with the desire to choose whether to use a GPU at runtime. Or, more to the point: if one starts up a run on a system without a CUDA card, one should not invoke the CUDA solvers. Are there any Trilinos users out there who do this?
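A hedged sketch of that runtime check, using only the standard CUDA runtime call `cudaGetDeviceCount`. Whether a CUDA-enabled build will otherwise initialize cleanly on a GPU-less node is exactly the kind of gotcha mentioned above, so treat this as an assumption to verify, not a guarantee:

```cpp
#include <cuda_runtime.h>
#include <cstdio>

// Illustrative helper (not part of any Trilinos API): true only if the
// CUDA runtime reports at least one usable device.
bool gpu_available() {
  int count = 0;
  const cudaError_t err = cudaGetDeviceCount(&count);
  return err == cudaSuccess && count > 0;
}

int main() {
  if (gpu_available()) {
    std::printf("CUDA device found: use the GPU solver path.\n");
    // construct GPU-backed solvers here
  } else {
    std::printf("No CUDA device: fall back to the CPU solver path.\n");
    // construct host-only solvers here
  }
  return 0;
}
```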
-
We're testing this.
-
We have an application that builds both CUDA and non-CUDA (pure CPU) implementations of its updates (for fields and particles in a PIC code). At runtime, the code decomposes the computational domain into subdomains; some subdomains are assigned to a GPU, some to a CPU. So the same executable already does, and has to, contain both the CPU and CUDA implementations.
Is this possible with Trilinos? I.e., can one tell Trilinos at build time to build both CUDA and CPU solvers?
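For context, the kind of per-subdomain dispatch meant here looks roughly like the following Kokkos-level sketch (`update_fields` and `update_subdomain` are hypothetical names, not Trilinos API):

```cpp
#include <Kokkos_Core.hpp>

// Hypothetical field update, templated on the execution space so the same
// source compiles to both a CPU and a CUDA version.
template <class ExecSpace>
void update_fields(const int subdomain_size) {
  using MemSpace = typename ExecSpace::memory_space;
  Kokkos::View<double*, MemSpace> field("field", subdomain_size);

  Kokkos::parallel_for(
      "update", Kokkos::RangePolicy<ExecSpace>(0, subdomain_size),
      KOKKOS_LAMBDA(const int i) { field(i) += 1.0; });
  Kokkos::fence();
}

// Per-subdomain dispatch: some subdomains run on the GPU, others on the CPU,
// within the same executable.
void update_subdomain(const int subdomain_size, const bool on_gpu) {
#ifdef KOKKOS_ENABLE_CUDA
  if (on_gpu) {
    update_fields<Kokkos::Cuda>(subdomain_size);
    return;
  }
#endif
  update_fields<Kokkos::DefaultHostExecutionSpace>(subdomain_size);
}
```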