add generation prompt enforcement is too severe #1330

Open
RonanKMcGovern opened this issue Nov 24, 2024 · 3 comments
Labels: currently fixing (Am fixing now!)

Comments

@RonanKMcGovern

Problem:

  • When loading a model whose chat template does not contain an add_generation_prompt block, Unsloth raises a runtime error rather than just a warning. So even a user who does not want a generation prompt is forced to add one.

Specific error:

Unsloth: The tokenizer `Llama-3.2-1B-lora-model`
has a {% if add_generation_prompt %} for generation purposes, but wasn't provided correctly.
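
For context, a minimal repro sketch (the adapter name is taken from the error above; the loader is Unsloth's standard FastLanguageModel.from_pretrained, and the exact arguments are illustrative):

    # Hedged repro sketch: loading an adapter whose tokenizer chat template
    # lacks the add_generation_prompt conditional fails at load time with the
    # error quoted above, instead of emitting a warning.
    from unsloth import FastLanguageModel

    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name = "Llama-3.2-1B-lora-model",  # adapter name from the error message
        max_seq_length = 2048,
    )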

Recommendations:

  1. Make this a warning, not a runtime error that stops loading.
  2. Provide an example of the exact syntax that Unsloth expects (see the sketch below).
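
On recommendation 2, here is a minimal sketch of the conditional in question, written as a Llama-3-style Jinja chat template. The special tokens are the standard Llama 3 ones; whether this is the exact form Unsloth's check accepts is an assumption:

    # Hedged sketch: a Llama-3-style chat template whose tail contains the
    # {% if add_generation_prompt %} block that the error message refers to.
    CHAT_TEMPLATE = (
        "{% for message in messages %}"
        "{{ '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n' }}"
        "{{ message['content'] + '<|eot_id|>' }}"
        "{% endfor %}"
        "{% if add_generation_prompt %}"
        "{{ '<|start_header_id|>assistant<|end_header_id|>\n\n' }}"
        "{% endif %}"
    )

    # Assigning the template and rendering with the flag exercises the
    # conditional; apply_chat_template is the standard Hugging Face API.
    from transformers import AutoTokenizer

    tok = AutoTokenizer.from_pretrained("unsloth/Llama-3.2-1B-Instruct")  # any Llama-3-family tokenizer
    tok.chat_template = CHAT_TEMPLATE
    text = tok.apply_chat_template(
        [{"role": "user", "content": "Hi"}],
        tokenize=False,
        add_generation_prompt=True,  # appends the assistant header via the block above
    )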
@Cirr0e

This comment was marked as spam.

@danielhanchen added the currently fixing (Am fixing now!) label on Nov 25, 2024
@danielhanchen
Contributor

@RonanKMcGovern Weirdly, I had auto-fixed this earlier, but it seems to fail now - I'll check it out

@RonanKMcGovern
Author

RonanKMcGovern commented Nov 25, 2024 via email
