Yeah, we can certainly support other backends. The entire CUDA support is here.
Regarding KernelAbstractions, that's an interesting question, I hadn't thought about how that would look, but I suppose it's possible.
I personally only have access to NVIDIA gpus, so I can't easily test on other hardware. Since the implementations are pretty easily extendable, I'm happy to review PRs if you'd like to take a shot at implementing one.
As for the pieces that need to be implemented: we should probably rename GPU to CUDA here, and then we could define a KernelAbstractionsKernel type here. Then we can add a dispatch like this one, but for something like function fused_copyto!(fmb::MBF.FusedMultiBroadcast, ::MBF.KernelAbstractionsKernel), and then add an implementation based on which broadcast objects live in fmb (which can range from very simple/naive to highly sophisticated). A rough sketch is below.
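To make that concrete, here is a minimal, naive sketch of what a KernelAbstractions-based fused_copyto! could look like. It assumes a MBF.KernelAbstractionsKernel device type has been defined as described above, that FusedMultiBroadcast stores its (destination, broadcasted) pairs in a pairs field, and it introduces a helper kernel ka_copyto_kernel! — the field and helper names are placeholders for illustration, not the package's current API.

```julia
import MultiBroadcastFusion as MBF
using KernelAbstractions
import Base.Broadcast: instantiate

# Element-wise kernel: each work item materializes one element of the
# broadcast expression into the destination array.
@kernel function ka_copyto_kernel!(dest, bc)
    I = @index(Global, Cartesian)
    @inbounds dest[I] = bc[I]
end

# Naive backend: one kernel launch per (dest, bc) pair. A truly fused
# version would pack all pairs into a single launch, like the CUDA path.
function MBF.fused_copyto!(fmb::MBF.FusedMultiBroadcast, ::MBF.KernelAbstractionsKernel)
    for (dest, bc) in fmb.pairs   # `pairs` field name is assumed here
        backend = get_backend(dest)
        ka_copyto_kernel!(backend)(dest, instantiate(bc); ndrange = size(dest))
    end
    return fmb
end
```

The naive loop above gives correctness on any KernelAbstractions backend but no fusion benefit; the interesting work would be collapsing the loop into a single kernel that unrolls over the tuple of pairs, mirroring what the CUDA implementation does.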
Perhaps @vchuravy may have a vision of how this might look?
I am trying out the package in LuxLib.jl (the backend for Lux.jl) to fuse GPU operations, but currently it seems only CUDA is supported (at least, that's what I understand from this error: https://buildkite.com/julialang/luxlib-dot-jl/builds/786#0190c951-2861-484d-b998-e7bf87b0732b/280-562).
How difficult would it be to support other backends, maybe via KernelAbstractions?