Hi, there.
I looked through the README and didn't find out-of-the-box support for these models. Although they have a structure similar to GPT-2, it is still fairly hard for an LLM engineer to write CUDA by hand. I tried FasterTransformer to speed up MOSS, which is extremely fast, and I look forward to using LightSeq.
Also, I think you should update the README, since I saw that LLaMA is supported. This matters because Baichuan has almost the same structure as LLaMA, which is especially important for Chinese open-source models.
A question here: is it possible to implement FlashAttention in a way that supports more NVIDIA cards, such as the V100? I saw a collaborator comment saying the V100 is not supported by the original implementation. From my naive understanding, FlashAttention is mainly an engineering problem, and the key is shared memory?
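To make concrete what I mean by shared memory being the key, here is a rough NumPy sketch (my own illustration, not LightSeq or FlashAttention code; all names are made up) of the tiled, online-softmax attention that FlashAttention computes. In a real kernel, each K/V tile would be staged into on-chip shared memory / SRAM instead of re-reading HBM, which is where the speedup comes from:

```python
import numpy as np

def tiled_attention(Q, K, V, block_size=64):
    """Single-head attention computed block by block, never materializing
    the full (seq_len x seq_len) score matrix. This is the tiling +
    online-softmax trick that a FlashAttention-style kernel maps onto
    shared memory."""
    seq_len, d = Q.shape
    scale = 1.0 / np.sqrt(d)
    O = np.zeros_like(Q)              # output accumulator
    m = np.full(seq_len, -np.inf)     # running row-wise max (numerically stable softmax)
    l = np.zeros(seq_len)             # running softmax denominator

    for start in range(0, seq_len, block_size):
        Kb = K[start:start + block_size]   # in a real kernel this K/V tile sits in shared memory
        Vb = V[start:start + block_size]
        S = (Q @ Kb.T) * scale             # scores for this tile only
        m_new = np.maximum(m, S.max(axis=1))
        P = np.exp(S - m_new[:, None])     # partial softmax numerator, re-based to the new max
        correction = np.exp(m - m_new)     # rescale everything accumulated so far
        l = l * correction + P.sum(axis=1)
        O = O * correction[:, None] + P @ Vb
        m = m_new

    return O / l[:, None]
```

So algorithmically nothing seems Ampere-specific, which is why I'm wondering whether a V100 port is mostly kernel engineering work.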
Thanks for your great work.