I have a very hard time understanding which AT_DISPATCH macro from here needs to be used in order to pass tensors of different dtypes to a kernel.

While I understand how

#define AT_DISPATCH_FLOATING_TYPES(TYPE, NAME, ...)

or

#define AT_DISPATCH_INTEGRAL_TYPES(TYPE, NAME, ...)

can be used when all tensors have the same dtype (here in a torch environment), all my attempts to dispatch fail when the two tensors have different dtypes.
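For reference, here is a minimal sketch of the kind of thing I am trying to do, using the nested-dispatch pattern (the names copy_cast and copy_cast_impl are just placeholders for illustration). The idea is to alias scalar_t from the outer dispatch before the inner dispatch shadows it:

#include <ATen/ATen.h>
#include <ATen/Dispatch.h>

// Hypothetical kernel template over two element types.
template <typename in_t, typename out_t>
void copy_cast_impl(const in_t* in, out_t* out, int64_t n) {
  for (int64_t i = 0; i < n; ++i) {
    out[i] = static_cast<out_t>(in[i]);
  }
}

void copy_cast(const at::Tensor& input, at::Tensor& output) {
  AT_DISPATCH_FLOATING_TYPES(input.scalar_type(), "copy_cast_in", [&] {
    // Alias the outer scalar_t before the inner dispatch shadows it.
    using in_t = scalar_t;
    AT_DISPATCH_FLOATING_TYPES(output.scalar_type(), "copy_cast_out", [&] {
      using out_t = scalar_t;
      copy_cast_impl<in_t, out_t>(
          input.data_ptr<in_t>(), output.data_ptr<out_t>(), input.numel());
    });
  });
}

Is nesting two AT_DISPATCH calls like this the intended way to handle two different dtypes, or is there a dedicated macro for it?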