Change model on the fly #415
Conversation
Great, thanks! Could you please reformat to satisfy stylua and make the PR more readable?
I have now refactored the PR to (hopefully) be more readable and have tried to fix the formatting. This has been quite troublesome, as I have been on holiday since the day after I raised the PR and don't have my laptop with me... Could you please approve the workflow so I can see if the reformatting worked?
Would love to see this feature merged.
It would also be great to have some kind of Lua API exposed that allows model parameters to be set. Something like: require('chatgpt').open_chat({ ... model params ... })
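To make the proposal concrete, the call would look roughly like the sketch below. Note that open_chat and its parameter are hypothetical here; this is only the shape the comment suggests, not an API the plugin actually exposes.

```lua
-- hypothetical API shape from the comment above; not implemented in ChatGPT.nvim
require("chatgpt").open_chat({
  model = "gpt-4-1106-preview", -- illustrative parameter
})
```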
Maybe, but I don't think there's much need for that. I don't see much utility in changing any settings other than the model and the endpoint at runtime. If you want to open a chat with a different model, you can already do that by changing what the function returns and then calling vim.cmd("ChatGPTRun").
By the way, you can already use this feature by specifying my fork instead of the main repo until it's merged :) |
For my use case, the function is unnecessarily complicated when you could pass a simple argument to the API.
I encourage you to look at the source and try implementing your feature. If you can, great: make a PR. If you can't, use my solution... I found my solution to be the most general and the simplest that both implements the feature and gets the job done.
Also, a function is not complicated at all; it is supposed to be a one-liner: function() return chatgpt_config.model end
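For readers following along, a minimal sketch of the pattern being discussed; chatgpt_config and the :ChatGPTModel user command are illustrative names from this thread, not part of ChatGPT.nvim itself:

```lua
-- sketch: keep the model in your own config table and return it from a one-liner
local chatgpt_config = { model = "gpt-3.5-turbo" }

require("chatgpt").setup({
  openai_params = {
    -- evaluated on every API request, so changing chatgpt_config.model
    -- takes effect on the next message
    model = function()
      return chatgpt_config.model
    end,
  },
})

-- switch models at runtime, e.g. :ChatGPTModel gpt-4-1106-preview
vim.api.nvim_create_user_command("ChatGPTModel", function(opts)
  chatgpt_config.model = opts.args
end, { nargs = 1 })
```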
In my case, I am developing a Telescope extension that offers interactive and chat modes through a picker. I'd rather not have the user of the extension declare a function in their ChatGPT.nvim config. It would be much nicer if the Telescope extension could use the API I suggested above.
Just a question: we can set the model via a runtime function, but not the API key, right? AFAIU, when a shell command is given as the API key, it is only executed at startup and can't be changed afterwards. But that would be great, as it could allow switching, for example, from gpt-4o to openrouter.ai's Claude 3.5 Sonnet. In case you don't know it, openrouter.ai is a website offering a common OpenAI-style API for pretty much any LLM, whereas, for example, Anthropic's own API is not compatible. Any idea how I can change the API key at runtime?
You can make your Neovim config write to a file and have the command read the file and echo its contents.
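A sketch of that workaround, assuming api_key_cmd is the config option that runs a shell command to obtain the key; as the following reply points out, the command is only evaluated once at startup:

```lua
-- file-based workaround; the path is illustrative
local key_file = vim.fn.stdpath("cache") .. "/chatgpt_api_key"

require("chatgpt").setup({
  -- runs the command and uses its stdout as the API key, but (as noted
  -- below) only once, when the plugin initializes
  api_key_cmd = "cat " .. key_file,
})

-- elsewhere in your config: swap providers by rewriting the file
vim.fn.writefile({ "sk-..." }, key_file)
```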
Yes, but that's just at startup and not at runtime, right?
Oh right, yeah, I forgot. That's the thing I tried for the model before making the plugin feature.
* added a feature where the model parameter in openai_params can now be a function to allow dynamically changing the model. the readme has been updated appropriately. I have also added an example configuration with gpt-4-1106-preview to the readme after the configuration section.
* on opening chat window, the model was not being collapsed for the settings panel. now doing this and testing if it works
* testing using debug output
* removed debug out again because it seems to now work magically?
* debugging
* debugging
* still debugging
* if the model is determined by a function, just display <dynamic> in settings menu
* debugging
* still debugging
* had value, key instead of key, value in a for loop because I dont know lua lmao now testing
* seems to be working, testing it now
* debug output for model
* typo in toMessages function in settings.lua, fixed now
* more debugging
* still debugging :(
* vim.inspect missing
* the plugin is tested and working, you can now switch models dynamically and if this is enabled, it will say <dynamic> in the chat settings. the completion features etc have not been tested with the changes, but they should be unaffected as I did not touch the openai_completion_params etc, only openai_params. if you want to see currently active model, add a shortcut to your config (because your config manages the model)
* reformatted the config sample to be more readable. you can now pass a function as model to change the model used on the fly
* finally, removed all debug notifications
* Update api.lua
* removed a goto statement by refactoring the code because goto does not work for some stylua versions and that's really annoying and unnecessary

Co-authored-by: Paper <[email protected]>
Earlier today, I raised an issue here: #414
Then, I decided to dive into the codebase and add the feature myself using my proposed change.
What's new:
openai_params.model can now be either a constant string (the only option so far) or a function that returns the model name as a string (new). The model is thus computed on the fly whenever an API request is made.
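As a hedged illustration, the two accepted forms would be configured along these lines; vim.g.chatgpt_model is just an example of external state, not a variable the plugin defines:

```lua
require("chatgpt").setup({
  openai_params = {
    -- existing behaviour (still supported): a constant string
    -- model = "gpt-4-1106-preview",

    -- new with this PR: a function, re-evaluated on every API request
    model = function()
      return vim.g.chatgpt_model or "gpt-3.5-turbo"
    end,
  },
})
```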
Motivation for the change
Without this change, it has not been possible to switch the model you are using on the fly. With the modifications this PR introduces, one can pass a function instead of a fixed model, and the function returns the model to be used. This function is evaluated on every API invocation.
Code changes:
Testing
I have tested every related command (ChatGPT, ChatGPTActAs, ChatGPTRun) and they're all still working perfectly.
Apology letter (lol)
Finally, I want to apologize for the many "debugging" commits. While working on the feature, I found it easiest to commit the changes and let lazy (my plugin manager) pull them into my editor so I could test them, then go back to working on the plugin.