
💡 [Feature]: Add support for picking LLM model for GitHub Copilot chat participant #334

Open
Adam-it opened this issue Nov 1, 2024 · 3 comments

@Adam-it (Member) commented Nov 1, 2024

🎯 Aim of the feature

Currently in preview, GitHub Copilot has added the ability to select the LLM model you want to use for the response:

[screenshot: model picker dropdown in GitHub Copilot chat]

We should check whether we can adapt the SPFx Toolkit chat participant to use it as well. Currently we use GPT-4o.

📷 Images (if possible) with expected result

No response

🤔 Additional remarks or comments

No response

@Adam-it Adam-it added ⭐ enhancement New feature or request 🤔 research needs researching before I start hacking labels Nov 1, 2024
@Adam-it Adam-it added this to the v4.X milestone Nov 1, 2024
@Adam-it (Member, Author) commented Nov 18, 2024

It is described here:
https://code.visualstudio.com/api/extension-guides/language-model
"Once you've built the prompt for the language model, you first select the language model you want to use with the selectChatModels method. This method returns an array of language models that match the specified criteria. If you are implementing a chat participant, we recommend that you instead use the model that is passed as part of the request object in your chat request handler. This ensures that your extension respects the model that the user chose in the chat model dropdown. Then, you send the request to the language model by using the sendRequest method."

Example: https://github.com/microsoft/vscode-extension-samples/blob/main/chat-sample/src/simple.ts
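
Based on the guide and the linked sample, a minimal sketch of a handler that respects the user's model choice might look like this (prompt handling is simplified; this is not our actual participant code):

```typescript
import * as vscode from 'vscode';

// Minimal handler sketch: respect the model the user picked in the
// chat model dropdown (request.model) instead of hard-coding one
// via vscode.lm.selectChatModels.
const handler: vscode.ChatRequestHandler = async (request, context, stream, token) => {
    const messages = [vscode.LanguageModelChatMessage.User(request.prompt)];

    // request.model is the model currently selected in the dropdown.
    const response = await request.model.sendRequest(messages, {}, token);

    // Stream the model output back into the chat response.
    for await (const fragment of response.text) {
        stream.markdown(fragment);
    }
};
```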

@Adam-it Adam-it added ✏️prototype and removed 🤔 research needs researching before I start hacking labels Nov 18, 2024
@Adam-it (Member, Author) commented Nov 18, 2024

Prototype

We should:

  1. Remove this:
     const [model] = await vscode.lm.selectChatModels({ vendor: 'copilot', family: 'gpt-4o' });
     and take the model from the request object instead.
  2. For /manage we should still pin GPT-4o, as it has the largest token limit available and currently gives the best results (see the sketch after this list).
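
A hypothetical sketch of how both points could fit together. The `request.command === 'manage'` check and the fallback to the user's model when GPT-4o is unavailable are assumptions, not the final implementation:

```typescript
import * as vscode from 'vscode';

// Sketch of the proposed change: default to the user's chosen model,
// but keep pinning GPT-4o for the /manage command.
const handler: vscode.ChatRequestHandler = async (request, context, stream, token) => {
    // Default: the model the user selected in the chat dropdown.
    let model = request.model;

    if (request.command === 'manage') {
        // /manage still wants GPT-4o for its larger token limit.
        // Assumption: fall back to the user's model if GPT-4o is unavailable.
        const [gpt4o] = await vscode.lm.selectChatModels({ vendor: 'copilot', family: 'gpt-4o' });
        model = gpt4o ?? model;
    }

    const messages = [vscode.LanguageModelChatMessage.User(request.prompt)];
    const response = await model.sendRequest(messages, {}, token);
    for await (const fragment of response.text) {
        stream.markdown(fragment);
    }
};
```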

@Adam-it (Member, Author) commented Nov 18, 2024

Since I can't seem to make it work with the latest VS Code update, let's wait a few weeks until the feature moves out of Insiders and into the stable VS Code release before we pick this up.

@Adam-it Adam-it added the 🤚 on hold I need to wait for something else label Nov 18, 2024