{% if add_generation_prompt %} [FIXED] #1284

Open

giuliabaldini opened this issue Nov 13, 2024 · 6 comments
Labels
fixed - pending confirmation (Fixed, waiting for confirmation from poster)

Comments

@giuliabaldini
Contributor

giuliabaldini commented Nov 13, 2024

Hi there,

If I run my usual code after the Qwen 2.5 commit, I get multiple errors. The first one is the following:

jinja2.exceptions.TemplateSyntaxError: Encountered unknown tag 'endfor'. Jinja was looking for the following tags: 'elif' or 'else' or 'endif'. The innermost block that needs to be closed is 'if'.

which is probably because of the change in this line. Once I fix that, I still get:

RuntimeError: Unsloth: The tokenizer `OpenMeditron/Meditron3-8B`
does not have a {% if add_generation_prompt %} for generation purposes.
Please file a bug report immediately - thanks!

Any ideas?

Best,
Giulia
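
For anyone else hitting this: the message means a {% for %} block was closed while an inner {% if %} was still open, so Jinja sees 'endfor' where only 'elif', 'else', or 'endif' would be legal. A minimal sketch of the failure mode (illustrative templates, not the actual Qwen 2.5 chat template):

from jinja2 import Environment

# Illustrative only - the inner {% if %} is never closed in `broken`.
broken = "{% for m in messages %}{% if m['role'] == 'user' %}{{ m['content'] }}{% endfor %}"
fixed = "{% for m in messages %}{% if m['role'] == 'user' %}{{ m['content'] }}{% endif %}{% endfor %}"

env = Environment()
try:
    env.from_string(broken)
except Exception as e:
    print(e)  # Encountered unknown tag 'endfor'. Jinja was looking for ... 'endif' ...

print(env.from_string(fixed).render(messages=[{"role": "user", "content": "hi"}]))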

@giuliabaldini giuliabaldini changed the title from "Cannot run old code after Qwen 2.5 update" to "Jinja error after Qwen 2.5 update" Nov 13, 2024
@xizhangmable

I have a similar issue when running:

train_ds = train_ds.map(lambda x: {"training_prompt": tokenizer.apply_chat_template(x["chat"], tokenize=False, add_generation_prompt=False)})

TemplateSyntaxError: Encountered unknown tag 'endfor'. Jinja was looking for the following tags: 'elif' or 'else' or 'endif'. The innermost block that needs to be closed is 'if'.
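
For reference, a minimal standalone version of that call (the model name and chat contents here are illustrative, not from the report). tokenizer.apply_chat_template renders the Jinja template in tokenizer.chat_template before any tokenization, so a malformed template fails right at this call:

from transformers import AutoTokenizer

# Illustrative model; any tokenizer whose chat_template has the bad
# if/endfor nesting raises the same TemplateSyntaxError on this call.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
chat = [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hi! How can I help?"},
]
text = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=False)
print(text)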

@danielhanchen danielhanchen changed the title from "Jinja error after Qwen 2.5 update" to "[FIXED] {% if add_generation_prompt %} Error" Nov 14, 2024
@danielhanchen danielhanchen changed the title from "[FIXED] {% if add_generation_prompt %} Error" to "[FIXED] {% if add_generation_prompt %}" Nov 14, 2024
@danielhanchen
Contributor

danielhanchen commented Nov 14, 2024

Apologies, just fixed @giuliabaldini @xizhangmable - thanks for reporting! Please update Unsloth on local machines via pip install --upgrade --no-cache-dir --no-deps unsloth

For Colab and Kaggle, just refresh!
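
To confirm the upgrade took effect, a quick check (assuming the distribution is named "unsloth", matching the pip command above):

from importlib.metadata import version

# Should print the newly released version after the upgrade command above.
print(version("unsloth"))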

@danielhanchen danielhanchen added the "fixed - pending confirmation" label Nov 14, 2024
@danielhanchen danielhanchen pinned this issue Nov 14, 2024
@danielhanchen danielhanchen changed the title from "[FIXED] {% if add_generation_prompt %}" to "{% if add_generation_prompt %} [FIXED]" Nov 14, 2024
@GreenBogDes

I am still getting this error in Google Colab in a new session. Rolling back to a2f8db3 helped me.

RuntimeError Traceback (most recent call last)
in <cell line: 19>()
17 # ] # More models at https://huggingface.co/unsloth
18
---> 19 model, tokenizer = FastLanguageModel.from_pretrained(
20 model_name = "mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated",
21 max_seq_length = max_seq_length,

3 frames
/usr/local/lib/python3.10/dist-packages/unsloth/tokenizer_utils.py in fix_chat_template(tokenizer)
656 if "{% if add_generation_prompt %}" not in new_chat_template and \
657 "{%- if add_generation_prompt %}" not in new_chat_template:
--> 658 raise RuntimeError(
659 f"Unsloth: The tokenizer {tokenizer.name_or_path}\n"
660 "does not have a {% if add_generation_prompt %} for generation purposes.\n"\

RuntimeError: Unsloth: The tokenizer mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated
does not have a {% if add_generation_prompt %} for generation purposes.
Please file a bug report immediately - thanks!
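
For what it's worth, the guard that fix_chat_template looks for can be inspected ahead of time with plain transformers, without loading model weights - a sketch mirroring the two substring tests shown in the traceback above:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated")
template = tokenizer.chat_template or ""  # chat_template can be None
has_guard = ("{% if add_generation_prompt %}" in template
             or "{%- if add_generation_prompt %}" in template)
print("add_generation_prompt guard present:", has_guard)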

@scoliono

Getting this issue on Kaggle with the same Meta-Llama-3.1-8B-Instruct-abliterated model.

@GreenBogDes mind sharing how you installed? I've tried pip install git+https://github.com/unslothai/unsloth.git@a2f8db3e7341f983af5814a2c56f54fa29ee548d (and several variations), but then I get errors when trying to import unsloth:

---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
File /opt/conda/lib/python3.10/site-packages/unsloth/__init__.py:32
     31 try:
---> 32     import unsloth_zoo
     33 except:

File /opt/conda/lib/python3.10/site-packages/unsloth_zoo/__init__.py:27
     26 if not ("UNSLOTH_IS_PRESENT" in os.environ):
---> 27     raise ImportError("Please install Unsloth via `pip install unsloth`!")
     28 pass

ImportError: Please install Unsloth via `pip install unsloth`!

During handling of the above exception, another exception occurred:

ImportError                               Traceback (most recent call last)
Cell In[3], line 1
----> 1 from unsloth import FastLanguageModel
      2 import torch
      3 max_seq_length = 2048 # Choose any! We auto support RoPE Scaling internally!

File /opt/conda/lib/python3.10/site-packages/unsloth/__init__.py:34
     32     import unsloth_zoo
     33 except:
---> 34     raise ImportError("Unsloth: Please install unsloth_zoo via `pip install unsloth-zoo`")
     35 pass
     37 # Unsloth currently does not work on multi GPU setups - sadly we are a 2 brother team so
     38 # enabling it will require much more work, so we have to prioritize. Please understand!
     39 # We do have a beta version, which you can contact us about!
     40 # Thank you for your understanding and we appreciate it immensely!

ImportError: Unsloth: Please install unsloth_zoo via `pip install unsloth-zoo`

@GreenBogDes

GreenBogDes commented Nov 19, 2024

(Quoting @scoliono above:)

Getting this issue on Kaggle with the same Meta-Llama-3.1-8B-Instruct-abliterated model. @GreenBogDes mind sharing how you installed? I've tried pip install git+https://github.com/unslothai/unsloth.git@a2f8db3e7341f983af5814a2c56f54fa29ee548d (and several variations), but then I get errors when trying to import unsloth.

What helped me was adding these lines:

!pip install git+https://github.com/unslothai/unsloth-zoo.git
import os
os.environ["UNSLOTH_IS_PRESENT"] = "1"  # bypasses the presence check in unsloth_zoo/__init__.py

But it still doesn't work in the new version. Here's my notebook: https://colab.research.google.com/drive/1MGwKUq1O46IFkR5R6MmPn1ZtonaDhSN9?usp=sharing

@scoliono

This actually worked for me. It's very hacky, but here's what I have so far, in case it helps anyone else. Some commands are taken from the Colab starter notebook.

%%capture
!pip install pip3-autoremove
!pip-autoremove torch torchvision torchaudio -y
!pip install torch torchvision torchaudio xformers --index-url https://download.pytorch.org/whl/cu121
!pip install unsloth[kaggle-new]
# Pin unsloth to the pre-regression commit a2f8db3
!pip uninstall unsloth -y && pip install git+https://github.com/unslothai/unsloth.git@a2f8db3e7341f983af5814a2c56f54fa29ee548d
!pip install git+https://github.com/unslothai/unsloth-zoo.git
import os
os.environ["UNSLOTH_IS_PRESENT"] = "1"  # bypasses the presence check in unsloth_zoo/__init__.py

Then, after loading the model:

Unsloth: We successfully patched the tokenizer to add a {% if add_generation_prompt %} to the chat_template.
This is not a bug, but please notify the Unsloth maintainers - thanks!
mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated does not have a padding token! Will use pad_token = <|finetune_right_pad_id|>.

It appears to be training now.
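
Side note: the pad-token patch from that log can also be applied by hand. A sketch (the token name is taken from the log line above; whether it suits your setup is worth double-checking):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated")
if tokenizer.pad_token is None:
    # <|finetune_right_pad_id|> is the token Unsloth picked in the log above.
    tokenizer.pad_token = "<|finetune_right_pad_id|>"
print(tokenizer.pad_token, tokenizer.pad_token_id)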
