
ImportError: cannot import name 'LLaMAForCausalLM' from 'transformers' #1

Open
SeekPoint opened this issue May 10, 2023 · 1 comment
Labels
bug Something isn't working

Comments

@SeekPoint

(gh_alpaca-lora-for-huggingface) ub2004@ub2004-B85M-A0:~/llm_dev/alpaca-lora-for-huggingface$ accelerate launch --config_file peft_config.yaml finetune.py
[09:06:17] WARNING The following values were not passed to accelerate launch and had defaults used instead: launch.py:887
--dynamo_backend was set to a value of 'no'
To avoid this warning pass in values for each of the problematic parameters or run accelerate config.
Traceback (most recent call last):
File "finetune.py", line 8, in
from transformers import AutoTokenizer, AutoConfig, LLaMAForCausalLM, LLaMATokenizer
ImportError: cannot import name 'LLaMAForCausalLM' from 'transformers' (/home/ub2004/.local/lib/python3.8/site-packages/transformers/__init__.py)
[09:06:22] ERROR failed (exitcode: 1) local_rank: 0 (pid: 17154) of binary: /usr/bin/python3 api.py:673
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /home/ub2004/.local/bin/accelerate:8 in <module> │
│ │
│ 5 from accelerate.commands.accelerate_cli import main │
│ 6 if __name__ == '__main__': │
│ 7 │ sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) │
│ ❱ 8 │ sys.exit(main()) │
│ 9 │
│ │
│ /home/ub2004/.local/lib/python3.8/site-packages/accelerate/commands/accelerate_cli.py:45 in main │
│ │
│ 42 │ │ exit(1) │
│ 43 │ │
│ 44 │ # Run │
│ ❱ 45 │ args.func(args) │
│ 46 │
│ 47 │
│ 48 if name == "main": │
│ │
│ /home/ub2004/.local/lib/python3.8/site-packages/accelerate/commands/launch.py:900 in │
│ launch_command │
│ │
│ 897 │ │ if mp_from_config_flag: │
│ 898 │ │ │ args.deepspeed_fields_from_accelerate_config.append("mixed_precision") │
│ 899 │ │ args.deepspeed_fields_from_accelerate_config = ",".join(args.deepspeed_fields_fr │
│ ❱ 900 │ │ deepspeed_launcher(args) │
│ 901 │ elif args.use_fsdp and not args.cpu: │
│ 902 │ │ multi_gpu_launcher(args) │
│ 903 │ elif args.use_megatron_lm and not args.cpu: │
│ │
│ /home/ub2004/.local/lib/python3.8/site-packages/accelerate/commands/launch.py:643 in │
│ deepspeed_launcher │
│ │
│ 640 │ │ ) │
│ 641 │ │ with patch_environment(**current_env): │
│ 642 │ │ │ try: │
│ ❱ 643 │ │ │ │ distrib_run.run(args) │
│ 644 │ │ │ except Exception: │
│ 645 │ │ │ │ if is_rich_available() and debug: │
│ 646 │ │ │ │ │ console = get_console() │
│ │
│ /home/ub2004/.local/lib/python3.8/site-packages/torch/distributed/run.py:753 in run │
│ │
│ 750 │ │ ) │
│ 751 │ │
│ 752 │ config, cmd, cmd_args = config_from_args(args) │
│ ❱ 753 │ elastic_launch( │
│ 754 │ │ config=config, │
│ 755 │ │ entrypoint=cmd, │
│ 756 │ )(*cmd_args) │
│ │
│ /home/ub2004/.local/lib/python3.8/site-packages/torch/distributed/launcher/api.py:132 in │
__call__
│ │
│ 129 │ │ self._entrypoint = entrypoint │
│ 130 │ │
│ 131 │ def __call__(self, *args): │
│ ❱ 132 │ │ return launch_agent(self._config, self._entrypoint, list(args)) │
│ 133 │
│ 134 │
│ 135 def _get_entrypoint_name( │
│ │
│ /home/ub2004/.local/lib/python3.8/site-packages/torch/distributed/launcher/api.py:246 in │
│ launch_agent │
│ │
│ 243 │ │ │ # if the error files for the failed children exist │
│ 244 │ │ │ # @record will copy the first error (root cause) │
│ 245 │ │ │ # to the error file of the launcher process. │
│ ❱ 246 │ │ │ raise ChildFailedError( │
│ 247 │ │ │ │ name=entrypoint_name, │
│ 248 │ │ │ │ failures=result.failures, │
│ 249 │ │ │ ) │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
ChildFailedError:

finetune.py FAILED

Failures:
<NO_OTHER_FAILURES>

Root Cause (first observed failure):
[0]:
time : 2023-05-10_09:06:22
host : ub2004-B85M-A0
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 17154)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

(gh_alpaca-lora-for-huggingface) ub2004@ub2004-B85M-A0:~/llm_dev/alpaca-lora-for-huggingface$

@naem1023
Owner

naem1023 commented May 12, 2023

Hi, I think the version of transformers is the reason.

Main reason

  • Current version's LLaMA implementation name: LlamaForCausalLM
  • This repository's LLaMA implementation name: LLaMAForCausalLM (see the import sketch below)
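
If you are on a recent official transformers release (the LLaMA classes shipped around v4.28), the import in finetune.py needs the new casing. A minimal sketch of the adjusted import; "path/to/llama-checkpoint" is a placeholder, not a real model id:

```python
# Sketch for recent transformers releases, where the official class
# names are LlamaForCausalLM / LlamaTokenizer (lowercase "lama").
from transformers import AutoConfig, AutoTokenizer, LlamaForCausalLM, LlamaTokenizer

# Placeholder checkpoint path for illustration only.
tokenizer = LlamaTokenizer.from_pretrained("path/to/llama-checkpoint")
model = LlamaForCausalLM.from_pretrained("path/to/llama-checkpoint")
```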

Conclusion

I'm sorry for the trouble, but there was no official tokenizer or CausalLM class for LLaMA in transformers when I worked on this, so I used a custom version of transformers instead. I will update the work soon.
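
Until the repository is updated, one possible stopgap (a sketch, not part of this repo) is a try/except shim at the top of finetune.py that binds a single name to whichever spelling the installed transformers provides:

```python
# Compatibility sketch: resolve the class names regardless of
# which transformers build is installed.
try:
    # Official spelling in recent transformers releases
    from transformers import LlamaForCausalLM, LlamaTokenizer
except ImportError:
    # Older custom/forked builds exposed the LLaMA casing
    from transformers import LLaMAForCausalLM as LlamaForCausalLM
    from transformers import LLaMATokenizer as LlamaTokenizer
```

Alternatively, upgrading transformers (e.g. pip install -U transformers) and renaming the imports avoids the shim entirely.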

Thank you very much for your interest!!

@naem1023 naem1023 added the bug Something isn't working label May 12, 2023