Connecting XAgent to a local LLM using LM Studio? #97
Closed
SmellyBones
started this conversation in
Ideas
Replies: 1 comment
+1 on this topic.
0 replies
-
It's kind of expensive to use GPT-4 for XAgent, so I'm wanting to use my own LLM through LM Studio. I understand I have to create a server with LM Studio; how would I go about connecting XAgent to my LM Studio server? Also, if I'm using a 7B model such as Vicuna, Mistral, or Llama, how much capability am I losing out on? Could I have it run off my local LLM but keep the GPT API key, so that if it needs GPT for a more advanced task it could optionally use it? Just an idea, but any advice would be appreciated :D
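One way this can work: LM Studio's local server speaks an OpenAI-compatible API (by default at `http://localhost:1234/v1`), so any client that lets you override the API base URL can be pointed at it instead of `api.openai.com`. Below is a minimal sketch of that idea, including the optional "escalate to GPT-4 for hard tasks" routing from the question. The model names, the `hard_task` flag, and the fallback logic are illustrative assumptions, not XAgent's actual configuration; in practice you would set XAgent's API base URL and model name in its config rather than writing this by hand.

```python
# Sketch: route OpenAI-style chat requests to a local LM Studio server,
# with an optional fallback to the real OpenAI API for harder tasks.
# Assumptions: LM Studio's default address (localhost:1234) and
# hypothetical model names / routing flag for illustration.
import json
import urllib.request

LOCAL_BASE = "http://localhost:1234/v1"    # LM Studio's default server URL
OPENAI_BASE = "https://api.openai.com/v1"  # optional fallback endpoint


def build_request(messages, base_url, model, api_key="lm-studio"):
    """Build an OpenAI-compatible /chat/completions HTTP request.

    LM Studio accepts any API key, so a placeholder is fine locally.
    """
    payload = {"model": model, "messages": messages, "temperature": 0.7}
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )


def route_chat(messages, hard_task=False):
    """Use the local model by default; escalate to GPT-4 when flagged."""
    if hard_task:
        # Real key would go here if you keep the GPT fallback.
        return build_request(messages, OPENAI_BASE, "gpt-4", api_key="YOUR_KEY")
    return build_request(messages, LOCAL_BASE, "local-model")
```

Sending the request is then just `urllib.request.urlopen(req)` (or the `openai` Python client with `base_url=LOCAL_BASE`). On the capability question: 7B models generally follow multi-step agent instructions much less reliably than GPT-4, which is exactly why a hybrid local-first / GPT-fallback setup like the routing above is appealing.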