Hi everybody!
I used Unsloth for the first time a couple of weeks ago and was very happy with it! In particular, I used it to fine-tune Gemma2:9B to specialise the model in Latin-to-Italian translation. Now I would like to train the same model for translation from ancient Greek into Italian, but unfortunately I can no longer use the library because of the error I report below. I have also tried installing older versions, with no luck, and I get the same error when trying to run your test notebook.
I have tried both Colab and Vast, and I still get the same error, even when installing Unsloth with the CUDA and PyTorch versions pinned, roughly as in the command below.
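For example, on Colab I tried a pinned install along these lines (the exact cu...-torch... extra is taken from the Unsloth README as I remember it, so treat it as illustrative rather than the precise command I ran):

# Illustrative only: pin the build to a specific CUDA / PyTorch combination
!pip install "unsloth[cu121-torch240] @ git+https://github.com/unslothai/unsloth.git"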
I would be very happy if you could explain how I can solve it, thank you very much!
Here is my code:
from unsloth import FastLanguageModel  # FastVisionModel for LLMs
import torch

max_seq_length = 8112  # Choose any! We auto support RoPE Scaling internally!
load_in_4bit = True    # Use 4bit quantization to reduce memory usage. Can be False.

# 4bit pre-quantized models we support for 4x faster downloading + no OOMs.
fourbit_models = [
    "unsloth/Meta-Llama-3.1-8B-bnb-4bit",     # Llama-3.1 2x faster
    "unsloth/Mistral-Small-Instruct-2409",    # Mistral 22b 2x faster!
    "unsloth/Phi-4",                          # Phi-4 2x faster!
    "unsloth/Phi-4-unsloth-bnb-4bit",         # Phi-4 Unsloth Dynamic 4-bit Quant
    "unsloth/gemma-2-9b-bnb-4bit",            # Gemma 2x faster!
    "unsloth/Qwen2.5-7B-Instruct-bnb-4bit",   # Qwen 2.5 2x faster!
    "unsloth/Llama-3.2-1B-bnb-4bit",          # NEW! Llama 3.2 models
    "unsloth/Llama-3.2-1B-Instruct-bnb-4bit",
    "unsloth/Llama-3.2-3B-bnb-4bit",
    "unsloth/Llama-3.2-3B-Instruct-bnb-4bit",
]  # More models at https://docs.unsloth.ai/get-started/all-our-models

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-2-9b",
    max_seq_length=max_seq_length,
    dtype=None,                 # None for auto detection
    load_in_4bit=load_in_4bit,
    token="hf_...",             # use one if using gated models like meta-llama/Llama-2-7b-hf
)
The error:
🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.
🦥 Unsloth Zoo will now patch everything to make training faster!
SyntaxError Traceback (most recent call last)
/usr/local/lib/python3.11/dist-packages/unsloth/tokenizer_utils.py in
1060 try:
-> 1061 exec(trainer_text, globals())
1062 except:
SyntaxError: invalid syntax (<string>, line 4)
During handling of the above exception, another exception occurred:
RuntimeError Traceback (most recent call last)
5 frames
in <cell line: 0>()
----> 1 from unsloth import FastLanguageModel # FastVisionModel for LLMs
2 import torch
3 max_seq_length = 8112 # Choose any! We auto support RoPE Scaling internally!
4 load_in_4bit = True # Use 4bit quantization to reduce memory usage. Can be False.
5
/usr/local/lib/python3.11/dist-packages/unsloth/__init__.py in
210 pass
211
--> 212 from .models import *
213 from .save import *
214 from .chat_templates import *
/usr/local/lib/python3.11/dist-packages/unsloth/models/__init__.py in
14
15
---> 16 from .granite import FastGraniteModel
17 from .loader import FastLanguageModel, FastVisionModel
18 from .llama import FastLlamaModel
/usr/local/lib/python3.11/dist-packages/unsloth/models/granite.py in
13 # limitations under the License.
14
---> 15 from .llama import *
16 import os
17 from ._utils import version
/usr/local/lib/python3.11/dist-packages/unsloth/models/llama.py in
34 )
35 from ..kernels import *
---> 36 from ..tokenizer_utils import *
37 if HAS_FLASH_ATTENTION:
38 from flash_attn import flash_attn_func
/usr/local/lib/python3.11/dist-packages/unsloth/tokenizer_utils.py in
1061 exec(trainer_text, globals())
1062 except:
-> 1063 raise RuntimeError(f"Unsloth: Please file a bug report! Error patching {trainer_name}")
1064 exec(f"trl.trainer.{trainer_name} = Unsloth{trainer_name}", globals())
1065 pass
RuntimeError: Unsloth: Please file a bug report! Error patching SFTTrainer
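In case it helps with reproducing this, below is a small snippet I can run to report the versions of the packages that, judging from the traceback, are involved in the patching step (the package list is my guess at what matters here):

# Report installed versions of the packages touched by the failing import
from importlib.metadata import version, PackageNotFoundError

for pkg in ("unsloth", "unsloth_zoo", "trl", "transformers", "torch"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")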