[FT] Add System Prompt field in LightevalTaskConfig that can be used by model clients #410
Comments
@clefourrier Happy to open a PR for this. Please let me know if you see a different solution.
Hi! I'm adding auto-scaling for inference endpoints at the moment, and will edit the generation mechanisms after that, possibly to get a single homogeneous system - if you need it super fast feel free to add it before, but I'll probably edit the system next week :)
Ok, I see. Thanks for letting me know. I will use a hack in the meantime in that case.
Just briefly asking to double check: will this edit to the generation mechanism also affect the openai/litellm model loaders?
Yes, the aim is to have one better centralised system.
Need to add to #428.
Hi @JoelNiklaus, from what I can see, it's already available for all models with the latest CLI refactor: there's a system prompt param, which is then used when creating the requests from the samples. Do you need something else with regard to this?
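For context, a minimal sketch of what that looks like at the pipeline level. The `system_prompt` field on `PipelineParameters` is taken from this discussion and may not match the current lighteval API exactly; check the docs for the exact names:

```python
# Hedged sketch, not a verified lighteval API reference: assumes
# PipelineParameters accepts a `system_prompt` field (per the comment
# above) that is later attached to every request built from the samples.
from lighteval.pipeline import ParallelismManager, PipelineParameters

params = PipelineParameters(
    launcher_type=ParallelismManager.ACCELERATE,
    # Set once here; individual model clients then need no changes,
    # since the prompt is injected when requests are created.
    system_prompt="Answer with only the letter of the correct choice.",
)
```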
I saw that, thanks. However, it is still not clear to me how to use the system prompt in the model loader (e.g., litellm). The system prompt is not part of GreedyUntilRequest, and it does not come through with the model_config either.
Because you should not use it in the model_loader ^^
The system prompt will appear in the details if you want to check this.
Haha ok. Sorry if this is a stupid question, but how can I access the system prompt before I make a call to the API?
As discussed on Slack, the PromptManager might need to be adapted to make it possible to pass the system prompt through.
As of now there is no way to grab only the system prompt. One solution would be to add a
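A hedged sketch of that accessor idea: expose the system prompt on its own so a model client can read it before calling the API. The class and method below are hypothetical stand-ins, not lighteval's real PromptManager:

```python
# Illustrative only: `get_system_prompt` is a hypothetical helper,
# not part of lighteval's actual PromptManager interface.
from dataclasses import dataclass

@dataclass
class PromptManager:
    system_prompt: str | None = None

    def get_system_prompt(self) -> str | None:
        """Return only the system prompt, without building a full request."""
        return self.system_prompt
```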
Sounds good. I enabled it in PR #385.
Issue encountered
My models currently don't follow the template I give them. I want to provide a system prompt that nudges the models to produce output in the format I want.
Solution/Feature
We could add a new field in LightevalTaskConfig that model clients like LiteLLMClient can consume and send to the API, as sketched below.
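A minimal sketch of the proposal, assuming OpenAI-style chat messages. The `system_prompt` field and the `build_messages` helper are illustrative, not lighteval's current implementation:

```python
# Sketch of the proposed field and how an API-backed client could consume
# it. Everything below is simplified: the real LightevalTaskConfig and
# LiteLLMClient have larger signatures.
from dataclasses import dataclass

@dataclass
class LightevalTaskConfig:
    name: str
    system_prompt: str | None = None  # the proposed new field

def build_messages(config: LightevalTaskConfig, user_prompt: str) -> list[dict]:
    """Assemble chat messages, prepending the system prompt when it is set."""
    messages = []
    if config.system_prompt is not None:
        messages.append({"role": "system", "content": config.system_prompt})
    messages.append({"role": "user", "content": user_prompt})
    return messages

# Example: a client such as LiteLLMClient could pass these messages to the API.
cfg = LightevalTaskConfig(name="mmlu", system_prompt="Answer with a single letter.")
print(build_messages(cfg, "What is the capital of France? A) Paris B) Rome"))
```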
Possible alternatives
Alternatively, we could use the generation_grammar, but I think this would be a bit of an overkill and harder to implement across different API providers.