Feature request - allow model to "stick" to chat (Per-Chat Model Selection) #643
Comments
Yes, I vote for this also. A sticky box or something, so not all chats are affected.
Thanks @syberkitten @JJsilvera1 - by any chance are you guys on V2 already? The branch is still hot, but getting closer to a higher level of quality, and definitely night and day compared to V1. Question for you. I had requests for different models, and it's hard to support all of them with minimal UI/Code complexity:
Please let me know if any of these (or a combination thereof, or something else) would be a productive pattern for you. And also consider a future in which we have multiple branches per chat (split screen, or different computers, with persona-models being per-branch).
To be honest, I'm not exactly sure which version I'm on. Do I just refresh the browser and it updates, or do I have to do something more? No, I'm on 1.16.8.
@enricoros I think your #4 is the best option: allowing Personas to be configured with default models, and then allowing that model to be updated per-chat. Even allowing Personas to default to a saved Beam configuration would be awesome!
Agree with the flow, that's gonna be it. Can you expand on the persona as beam configuration? |
Sure - allowing Personas to use the Beam Model Presets (screenshot below) could let us jump straight into a beam thread, which, along with Auto-Merge enabled, could allow a more opaque multi-model response with (hopefully) smarter replies. Instead of a per-message beam pop-up, each message could be sent, then (similar to …)

As I wrote it out, I think my wish grew significantly in scope - originally I hadn't thought it through much. But even just having a per-persona default beam configuration (just like the per-persona model: if I start with the Developer persona, beam always defaults to Mistral Code, Claude, 4-Turbo, or some other selected Model Preset) would be a welcome addition - and hopefully require a little less work than the above?

P.S. I saw a HN thread which may interest you, in case you haven't heard of this company. The idea is they have a leaderboard of models that work best in certain situations, and they automatically select the model based on which category the user's query falls under in their leaderboard (e.g. if the query resembles code, and Sonnet 3.5 is currently the top-ranked code model, it is automatically selected as the model for that query).
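The leaderboard-based routing idea above could be sketched roughly as below. This is purely illustrative - the category names, model IDs, and the naive `classifyQuery` heuristic are assumptions for the sketch, not anything from the project or the company mentioned:

```typescript
// Hypothetical sketch: pick the top-ranked model for a query's category.
type Category = "code" | "general";

// Illustrative leaderboard: best-to-worst model IDs per category.
const leaderboard: Record<Category, string[]> = {
  code: ["claude-3-5-sonnet", "gpt-4-turbo"],
  general: ["gpt-4-turbo", "claude-3-5-sonnet"],
};

function classifyQuery(query: string): Category {
  // Naive stand-in for a real classifier: code-ish tokens => "code".
  return /[{};=]|function|def |class /.test(query) ? "code" : "general";
}

function pickModel(query: string): string {
  // The first entry in the category's ranking wins.
  return leaderboard[classifyQuery(query)][0];
}
```

A real implementation would presumably use a proper classifier and a live leaderboard, but the selection step itself stays this simple.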
Thanks for taking the time to explain this in great detail. This is a good suggestion; all I need to figure out are some automation details and how to handle the dependencies synchronously, but it's all doable - and I love the way you use the app. Thanks for the HN thread, it's an interesting feature we should provide. I was actually thinking of using the beam information (which models you select the most) as a signal for the leaderboard.
Why
Different conversations often require different models for optimal performance. The current global model selection creates inefficiency, requiring users to switch models frequently when changing between chats.
Description
I propose implementing a per-chat model selection feature. This would allow users to assign specific models to individual chats, eliminating the need to change models when switching conversations.
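Combined with the per-persona defaults discussed in the comments, the proposed resolution order could be sketched as below. The interfaces, field names, and model IDs are hypothetical, assuming a precedence of per-chat override, then persona default, then the global model:

```typescript
// Hypothetical sketch of model resolution for per-chat model selection.
interface Chat {
  personaId: string;
  modelOverride?: string; // set when the user "sticks" a model to this chat
}

interface Persona {
  id: string;
  defaultModel?: string; // optional per-persona default
}

const GLOBAL_MODEL = "gpt-4-turbo"; // illustrative app-wide fallback

function resolveModel(chat: Chat, personas: Map<string, Persona>): string {
  // Chat override beats persona default, which beats the global model.
  return chat.modelOverride
    ?? personas.get(chat.personaId)?.defaultModel
    ?? GLOBAL_MODEL;
}
```

Keeping the override on the chat object (rather than mutating a global setting) is what makes the selection "stick" when switching conversations.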
Requirements
Expected Benefits