System Info

PyTorch 2.4.0, CUDA 12.1, 7x H100

Information

🐛 Describe the bug

The import of MllamaForConditionalGeneration at https://github.com/meta-llama/llama-recipes/blob/main/src/llama_recipes/finetuning.py#L24 should be on line 28 instead: the class is part of the mllama module, not the main transformers library.

Error logs

Throws an ImportError.

Expected behavior

Have fixed this in https://github.com/AAndersn/llama-recipes/blob/main/src/llama_recipes/finetuning.py#L27. Will make a PR with this fix, along with the int4 -> 4-bit fix in the README.
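For context, a minimal sketch of what a guarded import could look like; the fallback path transformers.models.mllama.modeling_mllama is an assumption here and may differ from what the linked branch actually does:

```python
# Sketch of a defensive import, assuming MllamaForConditionalGeneration is only
# exported from the top-level package in newer transformers releases.
try:
    # Newer transformers releases export the class at the top level.
    from transformers import MllamaForConditionalGeneration
except ImportError:
    # Fallback: import from the mllama model module directly (assumed path).
    from transformers.models.mllama.modeling_mllama import (
        MllamaForConditionalGeneration,
    )
```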
The import `from transformers import MllamaForConditionalGeneration` is fixed in transformers 4.45.0.
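If it helps, a quick sanity check of the installed version and the top-level import (treating 4.45.0 as the threshold, per the comment above):

```python
# Print the installed transformers version and try the top-level import.
import importlib.metadata

print("transformers", importlib.metadata.version("transformers"))

try:
    from transformers import MllamaForConditionalGeneration  # noqa: F401
    print("top-level MllamaForConditionalGeneration import: OK")
except ImportError as exc:
    print(f"top-level import failed: {exc}")
```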