[Kernel][Hardware][AMD][ROCm] Fix rocm/attention.cu compilation on ROCm 6.0.3

In ROCm 6.0.3, __hip_bfloat16 doesn't support the + operator, while hip_bfloat16 does. Refer to ROCm/ROCm#2534

ERROR LOG:
vllm/build/temp.linux-x86_64-cpython-311/csrc/rocm/attention.hip:168:20: error: invalid operands to binary expression ('__hip_bfloat16' and '__hip_bfloat16')
    res.b = t1.b + t2.b;
            ~~~~ ^ ~~~~
vllm/build/temp.linux-x86_64-cpython-311/csrc/rocm/attention.hip:683:15: note: in instantiation of function template specialization 'addx4<__hip_bfloat16>' requested here
    addx4<scalar_t>(vout[qh][vh], vout_shared[qh][vh][laneid][w]);
    ^

Signed-off-by: Hollow Man <[email protected]>
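For illustration, a minimal sketch of the kind of workaround the message describes: route the addition through hip_bfloat16, whose + operator does compile on ROCm 6.0.3. The bf16_add helper below is hypothetical (not the actual patch in this commit), and it assumes both types wrap the same 16-bit storage, as the ROCm headers hip/hip_bf16.h and hip/hip_bfloat16.h do.

    #include <hip/hip_runtime.h>
    #include <hip/hip_bf16.h>      // __hip_bfloat16 (no operator+ on ROCm 6.0.3)
    #include <hip/hip_bfloat16.h>  // hip_bfloat16 (operator+ available)

    // Hypothetical helper: add two __hip_bfloat16 values by reinterpreting
    // their bits as hip_bfloat16, which defines operator+ on ROCm 6.0.3.
    __device__ inline __hip_bfloat16 bf16_add(__hip_bfloat16 t1,
                                              __hip_bfloat16 t2) {
      hip_bfloat16 a = *reinterpret_cast<hip_bfloat16*>(&t1);
      hip_bfloat16 b = *reinterpret_cast<hip_bfloat16*>(&t2);
      hip_bfloat16 r = a + b;  // compiles on ROCm 6.0.3
      return *reinterpret_cast<__hip_bfloat16*>(&r);
    }

Equivalently, switching the kernel's bfloat16 scalar type from __hip_bfloat16 to hip_bfloat16 lets expressions like t1.b + t2.b in addx4 compile unchanged.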