
Configurable model parameters #28

Open
wfjt opened this issue Apr 20, 2023 · 1 comment
Labels
enhancement New feature or request

Comments


wfjt commented Apr 20, 2023

Please make temperature, top_p, max_tokens, etc. configurable live in a settings window. Typical values differ by task: code generation usually wants a temperature of ~0 with top_p of 0.9-1.0, while 0.7-0.9 and 30-50 respectively are common ranges for summarisation. A simple table of these parameters, passed through to the API, would suffice. Presence and repeat penalty should also be configurable; they are less of an issue with GPT-3.5 Turbo, but were crucial for the Codex models, which now seem to be history. Still, for summarisation I'd tune them to make the model more likely to come up with new topics and ideas.
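A minimal sketch of the parameter table idea, assuming per-task presets merged into the API request body. The names `presets`, `build_request`, and the preset keys are illustrative, not part of the plugin's actual API:

```lua
-- Hypothetical sketch: per-task sampling presets merged into an OpenAI
-- chat-completion request body. All names here are illustrative.
local presets = {
  code = { temperature = 0.0, top_p = 1.0 },
  summarise = { temperature = 0.8, top_p = 0.95, presence_penalty = 0.6 },
}

local function build_request(messages, preset_name)
  local body = {
    model = "gpt-3.5-turbo",
    messages = messages,
    max_tokens = 1000,
  }
  -- overlay the chosen preset's parameters onto the defaults
  for k, v in pairs(presets[preset_name] or {}) do
    body[k] = v
  end
  return body
end
```

The point is that the UI only needs to edit one small table; the request builder stays unchanged.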

Lastly, it would be good to show token usage as in the Playground, and e.g. turn the prompt background red when prompt + history + completion tokens exceed the maximum, to avoid unnecessarily cut-off messages, lost prompts, and unexpected session refreshes. For example, I tend to set max_tokens initially to the prompt plus the remainder for the completion, in case I get a long response, and then tune it down as the chat progresses. I haven't tested whether this is already implemented, but it would also be good to have a high-water mark/trim level for the context/history sent to the model, automatically making room for a new prompt and the configured completion tokens by deleting old data. This is how the ChatGPT UI works: you notice it loses context, but it doesn't kill the session. The Playground is harsher in this respect, but I much prefer chat.openai.com to the Playground.
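The high-water-mark trimming described above could be sketched as follows. This is a hypothetical illustration: `count_tokens` is assumed to exist (a real implementation would need a proper tokenizer), and the function name and signature are made up for this example:

```lua
-- Hypothetical sketch: drop the oldest history entries until
-- prompt + history + reserved completion tokens fit under the
-- model's context limit. `count_tokens` is an assumed helper.
local function trim_history(history, prompt_tokens, max_context, completion_reserve)
  local budget = max_context - prompt_tokens - completion_reserve
  local total = 0
  -- walk from newest to oldest, keeping entries while they fit
  for i = #history, 1, -1 do
    total = total + count_tokens(history[i])
    if total > budget then
      -- everything at index i and older is discarded
      return { unpack(history, i + 1) }
    end
  end
  return history
end
```

Trimming silently, as chat.openai.com does, keeps the session alive at the cost of lost context, which is the trade-off requested here.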

@Bryley Bryley added the enhancement New feature or request label Apr 21, 2023
Bryley added a commit that referenced this issue Apr 21, 2023
Also added filetypes to windows

Fixes issue #15 and a little bit of #28

00sapo commented Jun 11, 2023

Hello, this plugin looks very nice, and its user interface is almost better than CodeGPT/ChatGPT's. One thing it lacks is an easy way to configure the API request parameters of the prompts. We should be able to use it like this, for instance:

```lua
require('neoai').setup{
  shortcuts = {
    {
      name = "a_user_custom_shortcut",
      system_message = "you are an expert {language} developer",
      prompt = "the prompt here",
      model_params = {
        temperature = 0.5,
        other_model_parameters = 0.2,
      },
      inject_mode = 'append', -- or 'replace'
      only_code_snippet = true, -- select only the first code snippet in the answer
    }
  }
}
```
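One way such a `model_params` field could be honored internally is by overlaying it onto plugin-wide defaults before each request. This is a sketch under that assumption; `resolve_params` and `defaults` are illustrative names, not the plugin's real internals:

```lua
-- Hypothetical sketch: merge a shortcut's `model_params` over
-- plugin-wide defaults before building the request.
local defaults = { temperature = 0.7, top_p = 1.0 }

local function resolve_params(shortcut)
  -- "force" makes the shortcut's values win over the defaults
  return vim.tbl_extend("force", defaults, shortcut.model_params or {})
end
```

Shortcuts that omit `model_params` would then simply fall back to the defaults.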
