Feature Request: Support for Multiple LLM AI API Endpoints for Self-Hosting and Model Selection #98
Comments
Thank you for your advice. At present we are focused on ongoing fixes and maintenance for some known issues, and we currently lack the manpower to take this on. If possible, we'd love for you to develop your ideas and send a pull request.
Hey, I use LM Studio to run local models and I just discovered a way to run multiple local models at the same time with it. I currently have four different local models running on my desktop simultaneously. I've also found a way to use LM Studio Server with ChatDev. The output is a bit funky right now, but I am running ChatDev locally, and once you know how it's done, the entire setup takes less than 5 minutes. I've "documented" it on the LM Studio Discord server.
Hello, regarding the use of other GPT models or local models, you can refer to the discussion on our GitHub page: #27. Some of these models have corresponding configurations in this Pull Request: #53. You may consider forking the project and giving them a try. While our team currently lacks the time to test every model, it's worth noting that they have received positive feedback and reviews. If you have any other questions, please don't hesitate to ask. We truly appreciate your support and suggestions. We are continuously working on more significant features, so please stay tuned. 😊
Could you share the process of setting up multiple model endpoints in LM Studio?
Just open another instance/window of LM Studio. Now you can query different models on different ports, running at the same time on your local machine.
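To illustrate the setup described above, here is a minimal sketch of querying two local OpenAI-compatible servers on different ports. The port numbers and model names are placeholders, and this assumes the local server exposes an OpenAI-style `/v1/chat/completions` route (as LM Studio's server mode does):

```python
import json
import urllib.request


def build_chat_request(base_url, model, prompt):
    """Build the URL and JSON body for an OpenAI-compatible chat call."""
    url = f"{base_url}/v1/chat/completions"
    body = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return url, body


def chat(base_url, model, prompt):
    """POST a chat completion to a local OpenAI-compatible server."""
    url, body = build_chat_request(base_url, model, prompt)
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]


# Two LM Studio windows, each serving its own model on its own port
# (ports and model names here are placeholders, not defaults):
# reply_a = chat("http://localhost:1234", "model-a", "Hello")
# reply_b = chat("http://localhost:1235", "model-b", "Hello")
```

Because both servers speak the same API shape, only the base URL (and model name) changes between them.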
Title: Support for Multiple LLM AI API Endpoints for Self-Hosting and Model Selection
Feature Request
Description:
We would like to propose the addition of a new feature to ChatDev that enables users to configure and utilize multiple Language Model (LLM) AI API endpoints for self-hosting and experimentation with different models. This feature would enhance the flexibility and versatility of ChatDev for developers and researchers working with LLMs.
Feature Details:
- Endpoint Configuration
- Custom Endpoint Names
- Chat Parameters
- Model Selection (if applicable)
- API Key Management (if applicable)
- Endpoint Address
- Optional: Endpoint Tagging
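One possible shape for such a configuration is sketched below. The field names, defaults, and registry layout are illustrative assumptions, not an existing ChatDev format:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class EndpointConfig:
    """Illustrative record for one named LLM endpoint (hypothetical schema)."""

    name: str                    # custom endpoint name
    base_url: str                # endpoint address
    model: str = ""              # model selection, if the server hosts several
    api_key: str = ""            # API key, if the endpoint requires one
    temperature: float = 0.7     # example chat parameter
    tags: List[str] = field(default_factory=list)  # optional endpoint tagging


# A registry keyed by endpoint name; entries here are examples only.
registry = {
    cfg.name: cfg
    for cfg in [
        EndpointConfig("openai", "https://api.openai.com", model="gpt-3.5-turbo"),
        EndpointConfig("local", "http://localhost:1234", tags=["self-hosted"]),
    ]
}
```

Keeping API keys in a structure like this (rather than scattered through code) is what makes centralized key management and endpoint tagging straightforward.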
Expected Benefits:
This feature will benefit developers, researchers, and users who work with LLMs by offering a centralized and user-friendly interface for managing multiple AI API endpoints. It enhances the ability to experiment with various models, configurations, and providers while maintaining security and simplicity. This could allow different characters to leverage specific fine-tuned models rather than the same model for each. It could also allow self-hosted users to experiment with expanding the number of repeated, looped calls without drastically increasing the bill.
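The per-character idea above could be sketched as a mapping from agent roles to endpoint names, with a fallback for roles that have no override. The role names, endpoint names, and routing function here are all hypothetical, not part of ChatDev today:

```python
# Hypothetical role-to-endpoint routing: each agent role resolves to the
# name of a configured endpoint, falling back to a default when no
# override exists.
ROLE_ENDPOINTS = {
    "Programmer": "local-code-model",
    "Code Reviewer": "local-code-model",
    "CEO": "openai",
}


def endpoint_for_role(role, default="openai"):
    """Return the endpoint name configured for a role, or the default."""
    return ROLE_ENDPOINTS.get(role, default)
```

With this shape, the inner loops that generate and review code could hit a cheap self-hosted model, while only a few roles call a paid API.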
Additional Notes:
Consider implementing an intuitive user interface for configuring and managing these endpoints within ChatDev, making it accessible to both novice and experienced users.
References:
Include any relevant resources or references that support the need for this feature, such as the growing popularity of LLMs in various fields and the demand for flexible API management solutions.
Related Issues/Pull Requests:
#27
#33
Azure OpenAI #55
Assignees:
If you permit this ticket to remain open, I will assemble some links and resources, and open another ticket for TextGenWebUI with the relevant links for implementing it there. I can try implementing this and submitting a PR if someone else doesn't get to it first.
Thank you for considering this feature request. I believe that this enhancement will greatly benefit the ChatDev community and its users working with Language Model AI API endpoints.