
Idea: Add option to use a local model like GPT4ALL #44

Open
dhazel opened this issue May 25, 2023 · 5 comments
@dhazel

dhazel commented May 25, 2023

Thank you for the great plugin!

An option to use a local model like GPT4ALL instead of GPT-4 could make playing with prompts more cost-effective.

See codeexplain.nvim for an example of a plugin that does this.

@gerazov

gerazov commented May 27, 2023

This would be a great addition for the plugin 👍

It would be better if the model were started externally and this plugin only communicated with it; codeexplain.nvim runs the model itself.

@walkabout21

hfcc.nvim has an interface for a hosted Open Assistant model on Hugging Face. It doesn't have as robust a feature set, so it would be great if Hugging Face chat could be leveraged with this plugin.

@thegatsbylofiexperience

So there is a way to use llama.cpp with the OpenAI API... if one could add a different URI for the OpenAI endpoint, we would be in business: https://www.reddit.com/r/LocalLLaMA/comments/15ak5k4/short_guide_to_hosting_your_own_llamacpp_openai/
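
Something along these lines is what I mean, just as a sketch; the option name, the module name, and the local address are placeholders (the plugin doesn't expose anything like this yet), and it assumes a llama.cpp OpenAI-compatible server from the guide above is already running locally:

    -- Hypothetical setup override: point the plugin at a local OpenAI-compatible
    -- endpoint (e.g. a llama.cpp server) instead of api.openai.com.
    -- `openai_api_url` and the module name "plugin" are placeholders, not real options.
    require("plugin").setup({
        openai_api_url = "http://localhost:8081/v1/chat/completions",
    })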

@shnee

shnee commented Aug 30, 2023

I came here looking to see if this plugin could be used with llama.cpp.

Perhaps making this URL in openai.lua configurable would just work?

utils.exec("curl", {
        "--silent",
        "--show-error",
        "--no-buffer",
        "https://api.openai.com/v1/chat/completions",
        "-H",
        "Content-Type: application/json",
        "-H",
        "Authorization: Bearer " .. api_key,
        "-d",
        vim.json.encode(data),
    }
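
For instance, the hard-coded URL could come from the plugin config, falling back to the current value. This is only a sketch and `opts.openai_api_url` is a hypothetical option, but the rest of the request would stay exactly the same:

    -- Sketch: same curl invocation, but the endpoint is read from user config.
    -- `opts.openai_api_url` is a hypothetical option; api.openai.com stays the default.
    local url = opts.openai_api_url or "https://api.openai.com/v1/chat/completions"

    utils.exec("curl", {
        "--silent",
        "--show-error",
        "--no-buffer",
        url,
        "-H",
        "Content-Type: application/json",
        "-H",
        "Authorization: Bearer " .. api_key,
        "-d",
        vim.json.encode(data),
    })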

@Budali11

Budali11 commented Jul 9, 2024

> This would be a great addition for the plugin 👍
>
> It would be better if the model were started externally and this plugin only communicated with it; codeexplain.nvim runs the model itself.

agree
