SyntaxError when patching SFTTrainer in unsloth/tokenizer_utils.py #1698
Comments
Same error, I opened an issue too.
I added a print to see the value of trainer_text. Note the 4th line, where a parameter with no default is rendered as <class 'inspect._empty'> (the str() of inspect.Parameter.empty), which is not valid Python:

class UnslothSFTTrainer(SFTTrainer):
    def __init__(
        self,
        model = <class 'inspect._empty'>, # 4th line
        args = None,
        data_collator = None,
        train_dataset = None,
        eval_dataset = None,
        processing_class = None,
        compute_loss_func = None,
        compute_metrics = None,
        callbacks = None,
        optimizers = (None, None),
        optimizer_cls_and_kwargs = None,
        preprocess_logits_for_metrics = None,
        peft_config = None,
        formatting_func = None,
        tokenizer = None):
        super().__init__(
            model = model,
            args = args,
            data_collator = data_collator,
            train_dataset = train_dataset,
            eval_dataset = eval_dataset,
            compute_loss_func = compute_loss_func,
            compute_metrics = compute_metrics,
            callbacks = callbacks,
            optimizers = optimizers,
            optimizer_cls_and_kwargs = optimizer_cls_and_kwargs,
            preprocess_logits_for_metrics = preprocess_logits_for_metrics,
            peft_config = peft_config,
            formatting_func = formatting_func,
            processing_class = tokenizer if tokenizer else processing_class
        )
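If you want to reproduce this dump yourself without editing unsloth's installed files, here is a rough debugging sketch. It assumes only what the traceback below shows (the patcher calls exec(trainer_text, globals()) at import time); the "UnslothSFTTrainer" substring it matches on is taken from the dump above:

import builtins

# Debugging sketch: wrap builtins.exec so the generated trainer source is
# printed before it is compiled. Install the wrapper BEFORE importing
# unsloth, since the patching runs at module import time.
_real_exec = builtins.exec

def _dumping_exec(source, *args, **kwargs):
    # Only dump string sources that look like the generated trainer class.
    if isinstance(source, str) and "UnslothSFTTrainer" in source:
        print(source)
    return _real_exec(source, *args, **kwargs)

builtins.exec = _dumping_exec

import unsloth  # triggers the patching; the generated source prints first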
This seems to be related to the new trl version (0.15.0), which has just been released. I was able to fix it by downgrading:
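For reference, the pin is the same command the maintainer gives further down the thread:

pip uninstall trl -y
pip install --no-cache-dir --force-reinstall --no-deps "trl<0.15.0"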
Thanks! The solution works for me.
Same error when trying to run it.
Working on a fix ASAP - sorry about the issue!
@danielhanchen I was really surprised to discover that unsloth patches some classes via string replacement and exec().
Just fixed - for now please use:

pip uninstall trl -y && pip install --no-cache-dir --force-reinstall --no-deps "trl<0.15.0"

I'm still working on supporting the latest TRL 0.15.0, so it'll take a bit more time. For GRPO runs, please use …
@brunodoamaral The approach was taken since it's the best option for patching - rewriting entire swathes of code (or even copy-pasting and then overwriting it) was one of the options, but we decided it was extremely time-consuming to maintain. We normally collaborate directly with the Hugging Face team on launches, so this one sadly got out of whack schedule-wise.
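For readers wondering how a string-replacement patcher can emit the invalid model = <class 'inspect._empty'> line shown earlier: this is not unsloth's actual code, just a minimal sketch of the failure mode. When source is generated from inspect.signature() and a parameter has no default, the sentinel inspect.Parameter.empty stringifies to <class 'inspect._empty'>, which is not valid Python once interpolated into the generated text:

import inspect

class SFTTrainer:  # minimal stand-in for trl's SFTTrainer
    def __init__(self, model, args=None):
        pass

# Render each parent parameter as "name = default". Parameters with no
# default carry the sentinel inspect.Parameter.empty, whose str() is
# "<class 'inspect._empty'>" -- invalid once pasted into generated source.
params = [
    f"        {name} = {p.default},"
    for name, p in inspect.signature(SFTTrainer.__init__).parameters.items()
    if name != "self"
]

trainer_text = "\n".join(
    ["class UnslothSFTTrainer(SFTTrainer):",
     "    def __init__(",
     "        self,"]
    + params
    + ["    ):",
       "        super().__init__(model = model, args = args)"]
)

print(trainer_text)            # line 4 reads: model = <class 'inspect._empty'>,
exec(trainer_text, globals())  # raises SyntaxError: invalid syntax

A generator has to special-case p.default is inspect.Parameter.empty (emitting the name with no default) rather than interpolating str(p.default) blindly.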
Description:
I encountered the following error while running FastLanguageModel.from_pretrained with unsloth/Meta-Llama-3.1-8B:
First code block:
%%capture
!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
import torch
from packaging.version import Version as V
xformers = "xformers==0.0.27" if V(torch.__version__) < V("2.4.0") else "xformers"
!pip install --no-deps {xformers} trl peft accelerate bitsandbytes triton
Second code block:
from unsloth import FastLanguageModel
import torch

max_seq_length = 2048
dtype = None         # None = auto-detect (float16 on T4/V100, bfloat16 on Ampere+)
load_in_4bit = True  # load weights in 4-bit to reduce memory use

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Meta-Llama-3.1-8B",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)
Traceback:
File "unsloth/tokenizer_utils.py", line 1061, in
exec(trainer_text, globals())
File "", line 4
[invalid syntax here]
RuntimeError: Unsloth: Please file a bug report! Error patching SFTTrainer
Environment:
Steps to reproduce: