
Allow prompt customization per interpreter model #286

Open
med8bra opened this issue Dec 24, 2024 · 0 comments
Labels
enhancement New feature or request

Comments


med8bra commented Dec 24, 2024

Describe the solution you'd like
Currently, the prompt used by the interpreter doesn't work well with all models, especially small ones like llama3.2 (3b).

Adding an option to change the prompt, exposed as an advanced option, would let users customize the interpreter feature to fit their model's needs.

One concern: the interpreter expects a key/value JSON object in the model response, which the current prompt specifies, so customizing the prompt may break the feature. I still think it's worth it for power users, and those format requirements could be surfaced as hints/rules next to the prompt option.

@med8bra med8bra added the enhancement New feature or request label Dec 24, 2024