
support Qwen/Qwen2.5-Coder-32B-Instruct model provided by siliconflow #2974

Merged
25 commits merged into continuedev:main on Nov 27, 2024

Conversation

AnoyiX (Contributor) commented Nov 18, 2024

Description

Support some models provided by SiliconFlow, such as:

  • Qwen/Qwen2.5-Coder-32B-Instruct
  • Qwen/Qwen2.5-Coder-7B-Instruct

Testing

custom config:

{
  "models": [
    {
      "title": "Qwen2.5 Coder",
      "provider": "openai",
      "model": "Qwen/Qwen2.5-Coder-32B-Instruct",
      "apiKey": "************************************",
      "apiBase": "https://api.siliconflow.cn/v1/"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Qwen2.5 Coder",
    "provider": "openai",
    "model": "Pro/Qwen/Qwen2.5-Coder-7B-Instruct",
    "apiKey": "************************************",
    "apiBase": "https://api.siliconflow.cn/v1/"
  },
  ...
}

AI Edit Example:
[screenshot]

Autocomplete Example:
[screenshot]


netlify bot commented Nov 18, 2024

Deploy Preview for continuedev ready!

🔨 Latest commit: 26e0017
🔍 Latest deploy log: https://app.netlify.com/sites/continuedev/deploys/6746b793dcf4c4000868435d
😎 Deploy Preview: https://deploy-preview-2974--continuedev.netlify.app

@@ -288,7 +288,8 @@ class OpenAI extends BaseLLM {
     suffix: string,
     options: CompletionOptions,
   ): AsyncGenerator<string> {
-    const endpoint = new URL("fim/completions", this.apiBase);
+    const url = this.apiBase?.includes("api.siliconflow.cn") ? "completions" : "fim/completions";
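The route selection in the hunk above can be sketched as a small helper. `resolveFimUrl` is a hypothetical function name for illustration; the actual patch inlines this logic in OpenAI.ts:

```typescript
// Sketch of the URL selection in the patch above (helper name is illustrative).
function resolveFimUrl(apiBase: string): string {
  const endpoint = apiBase.includes("api.siliconflow.cn")
    ? "completions"       // SiliconFlow serves FIM on its plain completions route
    : "fim/completions";  // default OpenAI-compatible FIM route
  return new URL(endpoint, apiBase).toString();
}
```

Note that `new URL(endpoint, apiBase)` resolves the endpoint relative to the base, so a trailing slash on `apiBase` (as in the config above) is what keeps the `/v1/` path segment intact.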
Contributor commented:
@AnoyiX Can we make a dedicated LLM class instead of putting the logic in OpenAI.ts? A good example would be in Deepseek.ts, where we subclass OpenAI and adjust the FIM logic

AnoyiX (Contributor, Author) commented Nov 19, 2024:

It might be better to allow a custom endpoint; there are so many providers that it's hard to support them all. What do you think?

Contributor commented:

I think that's fine as long as they have the exact same expected request body. I would just make it a function like getFimEndpoint so that it can be modified by subclasses, and then make a subclass for the endpoint. I'm not worried about having a large number of subclasses.

AnoyiX (Contributor, Author) commented:

Sounds good. It's convenient for subclasses to implement getFimEndpoint; this way we avoid duplicating a lot of code in *_streamFim.
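The hook pattern being discussed can be sketched as follows. The class names here are simplified stand-ins, not the actual Continue provider classes:

```typescript
// Illustrative sketch of the getFimEndpoint hook: the base class owns the
// shared FIM request logic, and subclasses override only the route.
class OpenAIProvider {
  constructor(protected apiBase: string) {}

  // Subclasses override this to change the FIM route without
  // duplicating the rest of the streaming logic.
  protected getFimEndpoint(): string {
    return "fim/completions";
  }

  fimUrl(): string {
    return new URL(this.getFimEndpoint(), this.apiBase).toString();
  }
}

class SiliconFlowProvider extends OpenAIProvider {
  protected override getFimEndpoint(): string {
    return "completions";
  }
}
```

With this shape, `new SiliconFlowProvider("https://api.siliconflow.cn/v1/").fimUrl()` yields `https://api.siliconflow.cn/v1/completions`, while every other provider inherits the default route.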

AnoyiX (Contributor, Author) commented:

You can review the latest code ~ 😊

[screenshot]

@AnoyiX AnoyiX requested a review from sestinj November 22, 2024 03:14
@AnoyiX AnoyiX requested a review from sestinj November 22, 2024 05:16
@sestinj sestinj merged commit fc3159d into continuedev:main Nov 27, 2024
5 checks passed