[Help][BUG] KeyError: 'lm_head.weight' on loading llama 3.2 #1920
Comments
Hi @steveepreston, I was able to execute the code using the Gemma model, and it worked without any issues. For the Llama model, however, could you please reach out to the Llama team for further assistance? Please refer to the Gist file for more details. Thank you.
Thank you for the attention @Gopi-Uppari. Yes, OK, I will try to create another issue there as well.
Could you please confirm whether this issue is resolved for you with the above comment? Please feel free to close the issue if it is resolved. Thank you.
The problem is not resolved, and I've moved to PyTorch.
Variable paths in Llama are different from Gemma, so the layout map that works for Gemma doesn't work for Llama (see here). Recently, we haven't added
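To illustrate the point above: a Keras `LayoutMap` uses its keys as regex patterns matched against each variable's path, so rules written for one model's paths silently fail to match another's, leaving those weights unsharded. The paths and rules below are illustrative assumptions for the sketch, not copied from KerasHub source.

```python
import re

# Hypothetical Gemma-style layout-map rules (regexes over variable paths).
gemma_rules = [
    r"token_embedding/embeddings",
    r"decoder_block.*attention.*kernel",
]

# Hypothetical Llama-style variable paths; the layer naming differs.
llama_paths = [
    "token_embedding/embeddings",
    "transformer_layer_0/self_attention/query/kernel",
]

for path in llama_paths:
    matched = any(re.search(rule, path) for rule in gemma_rules)
    # A path with no matching rule gets no sharding layout applied.
    print(path, "->", "sharded" if matched else "no rule matched")
```

Only the embedding path matches; the attention kernel rule, written against Gemma's `decoder_block` naming, never fires for Llama's `transformer_layer` naming.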
Trying to load llama-3.2 on a TPU VM v3-8 via this:

but it throws this error:

Note: I got the `layout_map` code from This Example. I don't know whether the problem comes from `layout_map` or `Llama3CausalLM`.
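One hedged guess at the failure mode in the reported `KeyError: 'lm_head.weight'`: some Llama 3.2 checkpoints tie the output projection to the token embeddings, so the checkpoint contains no separate `lm_head.weight` entry, and any converter that indexes that key directly raises `KeyError`. The dictionary and helper below are a hypothetical sketch of that mechanism, not the actual KerasHub loading code.

```python
# Hypothetical checkpoint dict with tied embeddings: no "lm_head.weight".
checkpoint = {
    "model.embed_tokens.weight": "embedding matrix",
    "model.layers.0.self_attn.q_proj.weight": "...",
}

def load_output_head(weights):
    # A naive loader would do weights["lm_head.weight"] and raise KeyError.
    # A robust one falls back to the tied embedding matrix instead.
    if "lm_head.weight" in weights:
        return weights["lm_head.weight"]
    return weights["model.embed_tokens.weight"]

print(load_output_head(checkpoint))
```

Under this assumption, the fix belongs in the conversion path rather than in `layout_map`: the loader must detect the missing key and reuse the embedding weights.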