xformers error when fine-tuning open_llama_3B with memory_efficient_attention #88
Comments
By the way, I think the problem may be the dtype I'm using (bf16). But the dtype in your config is fp16, and it still doesn't work?
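One way to isolate the dtype question is to call memory_efficient_attention directly with the same tensor shapes under both bf16 and fp16. The sketch below assumes a CUDA GPU with xformers installed; the head_dim of 100 is an assumption taken from OpenLLaMA 3B's config (hidden size 3200 with 32 heads), not a value stated in this thread.

```python
# Minimal dtype probe for xformers memory_efficient_attention (sketch, not the
# project's code). head_dim=100 is an assumption based on the 3B config.
import torch
import xformers.ops as xops

def try_dtype(dtype, head_dim=100, n_heads=32, seq_len=128):
    # xformers expects (batch, seq_len, n_heads, head_dim)
    q = torch.randn(1, seq_len, n_heads, head_dim, device="cuda", dtype=dtype)
    k, v = torch.randn_like(q), torch.randn_like(q)
    try:
        xops.memory_efficient_attention(q, k, v)
        print(f"{dtype}: OK")
    except Exception as e:
        print(f"{dtype}: failed -> {e}")

for dt in (torch.bfloat16, torch.float16):
    try_dtype(dt)
```

If both dtypes fail with the same shapes, the dtype is unlikely to be the cause; if only bf16 fails, it is.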
For the 3B model, since there's no official LLaMA 3B, we defined the model size ourselves, and it might not agree with the 3B model sizes in other implementations.
But I just used the HF code and checkpoint you released and didn't modify anything.
Hmm, then that might be a bug on the HF side. We've tested it in HF transformers without memory_efficient_attention and it works as expected.
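For reference, a plain HF Transformers run (no memory_efficient_attention) looks roughly like the sketch below, which can confirm that the checkpoint itself loads and generates. The repo id "openlm-research/open_llama_3b" is an assumption; substitute whichever checkpoint you are actually fine-tuning.

```python
# Sketch of the plain HF path, assuming the public open_llama_3b checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openlm-research/open_llama_3b"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)  # slow tokenizer to be safe
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).cuda()

inputs = tokenizer("Q: What is the largest animal?\nA:", return_tensors="pt").to("cuda")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```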
Thank you very much! Perhaps I've been using the code incorrectly all along.
Hi, I'm confused about this bug when using memory_efficient_attention. It seems that the embedding dimension per head you chose isn't compatible with xformers?
I'd appreciate it if you could help me.
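A small probe like the one below can show which per-head dimensions the locally installed xformers kernels accept; the supported set depends on the xformers version, GPU, and dtype. The value 100 is an assumption: OpenLLaMA 3B's config uses hidden size 3200 with 32 heads, which gives 3200 // 32 = 100, a size that is not a multiple of 8 and that some xformers kernels reportedly do not handle.

```python
# Sketch: probe which head dims the installed xformers kernels accept on this GPU.
# head_dim=100 is an assumed value derived from the 3B config (3200 / 32 heads).
import torch
import xformers.ops as xops

def probe(head_dim, dtype=torch.float16, n_heads=32, seq_len=64):
    q = torch.randn(1, seq_len, n_heads, head_dim, device="cuda", dtype=dtype)
    k, v = torch.randn_like(q), torch.randn_like(q)
    try:
        xops.memory_efficient_attention(q, k, v)
        return "OK"
    except Exception as e:
        return f"failed ({type(e).__name__})"

for d in (64, 96, 100, 128):
    print(f"head_dim={d}: {probe(d)}")
```

If 100 fails while the multiple-of-8 sizes pass, the per-head dimension, rather than the dtype or the checkpoint, would be the likely mismatch.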