
[FT] Add System Prompt field in LightevalTaskConfig that can be used by model clients #410

Open
JoelNiklaus opened this issue Nov 28, 2024 · 14 comments
Labels
feature request New feature/request

Comments

@JoelNiklaus
Contributor

Issue encountered

My models currently don't follow the template I give them. I want to supply a system prompt that nudges the models to format their output the way I want.

Solution/Feature

We could add a new field in LightevalTaskConfig that model clients like LiteLLMClient can consume and send to the API.
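
For illustration, a minimal sketch of what such a field and its use in a client could look like; the class layout, field name, and helper below are assumptions for this proposal, not the current lighteval API.

```python
# Sketch only: the field name and class layout are assumptions for this proposal,
# not the current lighteval API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LightevalTaskConfig:
    name: str
    system_prompt: Optional[str] = None  # proposed new field
    # ... existing fields omitted ...

def build_messages(config: LightevalTaskConfig, user_prompt: str) -> list[dict]:
    """Prepend the task-level system prompt, if set, before the user turn."""
    messages = []
    if config.system_prompt:
        messages.append({"role": "system", "content": config.system_prompt})
    messages.append({"role": "user", "content": user_prompt})
    return messages
```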

Possible alternatives

Alternatively, we could use the generation_grammar, but I think this may be a bit overkill and may be more difficult to implement for different API providers.

@JoelNiklaus added the feature request (New feature/request) label on Nov 28, 2024
@JoelNiklaus
Contributor Author

@clefourrier Happy to open a PR for this. Please let me know if you see a different solution.

@clefourrier
Member

Hi! I'm adding auto-scaling for inference endpoints at the moment, and will rework the generation mechanisms after that, possibly to get a single homogeneous system. If you need it very soon, feel free to add it before then, but I'll probably edit the system next week :)

@JoelNiklaus
Contributor Author

Ok, I see. Thanks for letting me know. In that case I will use a hack in the meantime.

@JoelNiklaus
Contributor Author

Just briefly asking to double-check: will this edit to the generation mechanism also affect the openai/litellm model loaders?

@clefourrier
Member

Yes, the aim is to have one better centralised system.

@clefourrier
Member

Need to add this to #428

@clefourrier
Member

Hi @JoelNiklaus, from what I can see, it's already available for all models with the latest CLI refactor: there's a system prompt param, which is then used when creating the requests from the samples. Do you need something else with regard to this?

@JoelNiklaus
Contributor Author

I saw that, thanks. However, it is still not clear to me how to use the system prompt in the model loader (e.g., litellm). The system prompt is not part of GreedyUntilRequest, and it does not come through with the model_config either.

@clefourrier
Member

Because you should not use it in the model_loader ^^
The system prompt is added to the request by the PromptManager, which creates the request from the sample, the number of few-shot examples, and the system prompt if needed. (It's in lighteval/tasks/prompt_manager)
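
To make that flow concrete, a rough sketch of the idea (simplified; the actual PromptManager in lighteval/tasks/prompt_manager is more involved):

```python
# Simplified illustration of the described flow, not the real PromptManager code:
# the system prompt is injected when the request text is built from the sample,
# so the model loader never has to handle it.
from typing import Optional

def build_prompt(sample: str, fewshot_examples: list[str], system_prompt: Optional[str]) -> str:
    parts = []
    if system_prompt:
        parts.append(system_prompt)
    parts.extend(fewshot_examples)  # formatted few-shot demonstrations
    parts.append(sample)            # the actual query
    return "\n\n".join(parts)
```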

@clefourrier
Member

The system prompt will appear in the details if you want to check this.

@JoelNiklaus
Contributor Author

Haha ok. Sorry if this is a stupid question, but how can I access the system prompt before I make a call to the API?
https://github.com/huggingface/lighteval/blob/main/src/lighteval/models/openai_model.py#L82

@JoelNiklaus
Contributor Author

As discussed on Slack, the PromptManager might need to be adapted to make it possible to pass the system prompt through.

@NathanHB
Member

As of now there is no way to grab only the system prompt. One solution would be to add a system_prompt field to the requests when creating them in the PromptManager, so that you can access it in litellm_model.
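
A rough sketch of that option (the dataclass below is a simplified stand-in, not the real GreedyUntilRequest definition, and the field name is only a proposal):

```python
# Simplified stand-in for the real request class; only meant to show the idea of
# carrying a system_prompt on each request so the litellm client can forward it.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class GreedyUntilRequest:
    context: str
    stop_sequence: list = field(default_factory=list)
    system_prompt: Optional[str] = None  # proposed new field, set by the PromptManager

def to_chat_messages(request: GreedyUntilRequest) -> list[dict]:
    """Turn a request into chat messages, sending the system prompt as its own turn."""
    messages = []
    if request.system_prompt:
        messages.append({"role": "system", "content": request.system_prompt})
    messages.append({"role": "user", "content": request.context})
    return messages
```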

@JoelNiklaus
Contributor Author

Sounds good. I enabled it in PR #385.
