Models use special tokens to designate boundaries within prompts. Most models have a special "end of generation" token that they use when returning the results.
In Llama models this is `<|eot_id|>`, and it must be specified on the model class as follows:

```ruby
model = Llamero::BaseModel.new(model_name: "meta-llama-3-8b-instruct-Q6_K.gguf", chat_template_end_of_generation_token: "<|eot_id|>")
```
Without this specification, the structured response will fail to parse. It's also inconvenient to have to remember to find this and configure it on the model.
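One way to reduce the manual configuration burden would be to infer the token from the model filename. The sketch below is purely illustrative, not part of Llamero's API: the `EOG_TOKENS` table and `end_of_generation_token_for` helper are hypothetical names, and only the Llama 3 token value comes from this issue.

```ruby
# Hypothetical sketch: map model-name patterns to their end-of-generation
# tokens so callers don't have to look them up manually. Only the Llama 3
# entry is confirmed by this issue; extend the table for other families.
EOG_TOKENS = {
  /llama-3/i => "<|eot_id|>",
}.freeze

def end_of_generation_token_for(model_name)
  EOG_TOKENS.each { |pattern, token| return token if model_name.match?(pattern) }
  nil # unknown model family: the caller must configure the token explicitly
end

end_of_generation_token_for("meta-llama-3-8b-instruct-Q6_K.gguf") # => "<|eot_id|>"
```

A default like this could be overridden by the existing `chat_template_end_of_generation_token:` keyword, preserving the current explicit behavior while removing the need to remember it for common model families.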