
Change model on the fly #415

Merged · 23 commits into jackMort:main · Jun 22, 2024

Conversation

@barnii77 (Contributor) commented Mar 23, 2024

Earlier today, I raised an issue here: #414
Then I decided to dive into the codebase and add the feature myself, using the change I proposed there.

What's new:

openai_params.model can now be either a constant string (the only option so far) or a function that returns the model name as a string (new). When it is a function, the model is computed on the fly each time an API request is made.
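
For illustration, a minimal config sketch (the vim.g variable is just a hypothetical example name, not something the plugin defines):

```lua
-- Sketch only: openai_params.model as a function instead of a string.
require("chatgpt").setup({
  openai_params = {
    -- evaluated on every API request, so the return value can change
    -- between calls
    model = function()
      return vim.g.my_chatgpt_model or "gpt-3.5-turbo"
    end,
  },
})
```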

Motivation for the change

Without this change, it has not been possible to switch the model you are using on the fly. With the modifications this PR introduces, you can pass a function instead of a fixed model string; the function returns the model to be used and is evaluated on every API invocation.

Code changes:

  • add a Utils.collapse_openai_params function that, if the openai_params.model parameter is a function, evaluates it when an API request is made (see the sketch after this list)
  • add Utils.table_shallow_copy as a helper function for Utils.collapse_openai_params
  • if the model is determined on the fly, display <dynamic> in the settings menu
  • add documentation and a configuration example to the README.md
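
A rough sketch of what those two helpers could look like; the names come from the list above, but the bodies are my illustration and may differ from the merged code:

```lua
local Utils = {}

-- Copy the top-level keys of a table so the caller's table is never mutated.
function Utils.table_shallow_copy(tbl)
  local copy = {}
  for key, value in pairs(tbl) do
    copy[key] = value
  end
  return copy
end

-- If params.model is a function, replace it with its return value.
-- Called right before an API request is made.
function Utils.collapse_openai_params(params)
  local collapsed = Utils.table_shallow_copy(params)
  if type(collapsed.model) == "function" then
    collapsed.model = collapsed.model()
  end
  return collapsed
end

return Utils
```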

Testing

I have tested every related command (ChatGPT, ChatGPTActAs, ChatGPTRun) and they're all still working perfectly.

Apology letter (lol)

Finally, I want to apologize for the many "debugging" commits. While working on the feature, I found it easiest to commit the changes and let lazy pull them into my editor so I could test them, then go back to the plugin.

barnii77 added 20 commits March 23, 2024 18:08
… a function to allow dynamically changing the model. the readme has been updated appropriately. I have also added an example configuration with gpt-4-1106-preview to the readme after the configuration section.
…tings panel. now doing this and testing if it works
…ly and if this is enabled, it will say <dynamic> in the chat settings. the completion features etc have not been tested with the changes, but they should be unaffected as I did not touch the openai_completion_params etc, only openai_params. if you want to see currently active model, add a shortcut to your config (because your config manages the model)
… function as model to change the model used on the fly
@jackMort (Owner) commented:

Great, thanks! Could you please reformat to satisfy stylua and make the PR more readable?

@barnii77 (Contributor, Author) commented:

> Great, thanks! Could you please reformat to satisfy stylua and make the PR more readable?

I have now refactored the PR to (hopefully) be more readable and have tried to fix the formatting. This has been quite troublesome, as I have been on holiday since the day after I raised the PR and don't have my laptop with me... Could you please approve the workflow so I can see if the reformatting worked?

…t work for some stylua versions and that's really annoying and unnecessary
@HPRIOR commented Apr 24, 2024

Would love to see this feature merged

@HPRIOR commented Apr 24, 2024

It would also be great to have some kind of Lua API exposed that allows model parameters to be set. Something like:

require('chatgpt').open_chat({ ... model params ... })

@barnii77 (Contributor, Author) commented:

> It would also be great to have some kind of Lua API exposed that allows model parameters to be set. Something like:
>
> require('chatgpt').open_chat({ ... model params ... })

Maybe, though I don't see too much need for that. There's not much utility in being able to change settings other than the model and the endpoint at runtime. If you want to open a chat with a different model, you can already do that by changing what the function returns and then calling vim.cmd("ChatGPTRun").

@barnii77 (Contributor, Author) commented Apr 24, 2024

> Would love to see this feature merged

By the way, you can already use this feature by specifying my fork instead of the main repo until it's merged :)
I've been using it ever since I made the PR... it's 0 commits behind, so it shouldn't matter.
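
If you use lazy.nvim, a plugin spec for this could look roughly like the following (assuming the fork lives at barnii77/ChatGPT.nvim; double-check the actual path):

```lua
-- Hypothetical lazy.nvim spec pointing at the fork instead of upstream.
{
  "barnii77/ChatGPT.nvim",
  config = function()
    require("chatgpt").setup({
      -- your usual options
    })
  end,
}
```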

@HPRIOR commented Apr 25, 2024

> It would also be great to have some kind of Lua API exposed that allows model parameters to be set. Something like:
>
> require('chatgpt').open_chat({ ... model params ... })

> Maybe, though I don't see too much need for that. There's not much utility in being able to change settings other than the model and the endpoint at runtime. If you want to open a chat with a different model, you can already do that by changing what the function returns and then calling vim.cmd("ChatGPTRun").

For my use case, the function is unnecessarily complicated when you could pass a simple argument to the API.

@barnii77 (Contributor, Author) commented:

> It would also be great to have some kind of Lua API exposed that allows model parameters to be set. Something like:
>
> require('chatgpt').open_chat({ ... model params ... })

> Maybe, though I don't see too much need for that. There's not much utility in being able to change settings other than the model and the endpoint at runtime. If you want to open a chat with a different model, you can already do that by changing what the function returns and then calling vim.cmd("ChatGPTRun").

> For my use case, the function is unnecessarily complicated when you could pass a simple argument to the API.

I encourage you to look at the source and try implementing your feature. If you can, great: make a PR. If you can't, use my solution... I found it to be the most general approach and the simplest way to both implement it and get the job done.

@barnii77 (Contributor, Author) commented:

> It would also be great to have some kind of Lua API exposed that allows model parameters to be set. Something like:
>
> require('chatgpt').open_chat({ ... model params ... })

> Maybe, though I don't see too much need for that. There's not much utility in being able to change settings other than the model and the endpoint at runtime. If you want to open a chat with a different model, you can already do that by changing what the function returns and then calling vim.cmd("ChatGPTRun").

> For my use case, the function is unnecessarily complicated when you could pass a simple argument to the API.

Also, a function is not complicated at all; it is supposed to be a one-liner: function() return chatgpt_config.model end
And then, when you want to change models, you don't have to interact with the function, you just change chatgpt_config.model.
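
Put together, the pattern described here might look like this (chatgpt_config is just a placeholder table in your own config, not a plugin API):

```lua
-- Sketch of the pattern above: one shared table drives the model.
local chatgpt_config = { model = "gpt-3.5-turbo" }

require("chatgpt").setup({
  openai_params = {
    -- the one-liner: re-read the table on every request
    model = function()
      return chatgpt_config.model
    end,
  },
})

-- Later, to switch models, just reassign the field; the next request
-- picks it up automatically:
chatgpt_config.model = "gpt-4-1106-preview"
```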

@HPRIOR commented Apr 27, 2024

> It would also be great to have some kind of Lua API exposed that allows model parameters to be set. Something like:
>
> require('chatgpt').open_chat({ ... model params ... })

> Maybe, though I don't see too much need for that. There's not much utility in being able to change settings other than the model and the endpoint at runtime. If you want to open a chat with a different model, you can already do that by changing what the function returns and then calling vim.cmd("ChatGPTRun").

> For my use case, the function is unnecessarily complicated when you could pass a simple argument to the API.

> Also, a function is not complicated at all; it is supposed to be a one-liner: function() return chatgpt_config.model end. And then, when you want to change models, you don't have to interact with the function, you just change chatgpt_config.model.

In my case, I am developing a Telescope extension that offers interactive and chat modes through a picker. I'd rather not have users of the extension declare a function in their ChatGPT.nvim config. It would be much nicer if the Telescope extension could use the API I suggested above.

@jackMort merged commit a8b5520 into jackMort:main on Jun 22, 2024
@thiswillbeyourgithub (Contributor) commented:

Just a question: we can set the model via a runtime function, but not the API key, right? AFAIU, when a shell command is given as the API key, it is only executed at startup and can't be changed afterwards. Supporting this would be great, as it could allow switching, for example, from gpt-4o to openrouter.ai's Claude 3.5 Sonnet. (If you don't know openrouter.ai: it's a website offering a common OpenAI-compatible API for pretty much any LLM, whereas, for example, Anthropic's own API is not compatible.)

Any idea how I can change the API key at runtime?

@barnii77 (Contributor, Author) commented:

> Just a question: we can set the model via a runtime function, but not the API key, right? AFAIU, when a shell command is given as the API key, it is only executed at startup and can't be changed afterwards. Supporting this would be great, as it could allow switching, for example, from gpt-4o to openrouter.ai's Claude 3.5 Sonnet. (If you don't know openrouter.ai: it's a website offering a common OpenAI-compatible API for pretty much any LLM, whereas, for example, Anthropic's own API is not compatible.)
>
> Any idea how I can change the API key at runtime?

You can make your Neovim config write the key to a file, and make the command read the file and echo its contents.

@thiswillbeyourgithub (Contributor) commented:

Yes, but that's just at startup and not at runtime, right?

@barnii77 (Contributor, Author) commented Jun 22, 2024

> Yes, but that's just at startup and not at runtime, right?

Oh, right, I forgot.
I guess you could try changing internal variables of the plugin. That didn't work for the model, but maybe it does for the API key.

That's the thing I tried for the model before building the plugin feature.

qaptoR pushed a commit to qaptoR-nvim/ChatGPT.nvim that referenced this pull request Jul 6, 2024
* added a feature where the model parameter in openai_params can now be a function to allow dynamically changing the model. the readme has been updated appropriately. I have also added an example configuration with gpt-4-1106-preview to the readme after the configuration section.

* on opening chat window, the model was not being collapsed for the settings panel. now doing this and testing if it works

* testing using debug output

* removed debug out again because it seems to now work magically?

* debugging

* debugging

* still debugging

* if the model is determined by a function, just display <dynamic> in settings menu

* debugging

* still debugging

* had value, key instead of key, value in a for loop because I don't know Lua lmao, now testing

* seems to be working, testing it now

* debug output for model

* typo in toMessages function in settings.lua, fixed now

* more debugging

* still debugging :(

* vim.inspect missing

* the plugin is tested and working, you can now switch models dynamically and if this is enabled, it will say <dynamic> in the chat settings. the completion features etc have not been tested with the changes, but they should be unaffected as I did not touch the openai_completion_params etc, only openai_params. if you want to see currently active model, add a shortcut to your config (because your config manages the model)

* reformatted the config sample to be more readable. you can now pass a function as model to change the model used on the fly

* finally, removed all debug notifications

* Update api.lua

* removed a goto statement by refactoring the code because goto does not work for some stylua versions and that's really annoying and unnecessary

---------

Co-authored-by: Paper <[email protected]>