
Feature request - allow model to "stick" to chat (Per-Chat Model Selection) #643

Open
syberkitten opened this issue Sep 19, 2024 · 7 comments

Comments

@syberkitten

Why

Different conversations often require different models for optimal performance. The current global model selection creates inefficiency, requiring users to switch models frequently when changing between chats.

Description

I propose implementing a per-chat model selection feature. This would allow users to assign specific models to individual chats, eliminating the need to change models when switching conversations.

Requirements

  1. Enable model selection for each individual chat.
  2. Persist model selection per chat across sessions.
  3. Allow changing a chat's model without affecting others.
  4. Display the current model for each chat in the interface.
  5. Set a default model for new chats, customizable by users.
  6. Provide an option for bulk model changes across multiple chats.
  7. Ensure the feature doesn't negatively impact app performance.
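Requirements 1–6 could be sketched as a small chat store where each chat record carries an optional model id and falls back to a user-configurable default. All names below (`ChatRecord`, `ChatStore`, the model ids) are hypothetical illustrations, not big-AGI's actual code:

```typescript
interface ChatRecord {
  id: string;
  title: string;
  modelId: string | null; // null → use the default model (req. 1, 2)
}

class ChatStore {
  private defaultModelId = 'gpt-4o'; // user-customizable default for new chats (req. 5)
  private chats = new Map<string, ChatRecord>();

  newChat(id: string, title: string): ChatRecord {
    const chat: ChatRecord = { id, title, modelId: null };
    this.chats.set(id, chat);
    return chat;
  }

  // Req. 3: change one chat's model without affecting others.
  setModel(chatId: string, modelId: string): void {
    const chat = this.chats.get(chatId);
    if (chat) chat.modelId = modelId;
  }

  // Req. 6: bulk model change across multiple chats.
  setModelBulk(chatIds: string[], modelId: string): void {
    chatIds.forEach((id) => this.setModel(id, modelId));
  }

  // Req. 4: the model the UI would display for a given chat.
  effectiveModel(chatId: string): string {
    return this.chats.get(chatId)?.modelId ?? this.defaultModelId;
  }
}
```

Persisting the `modelId` field alongside the chat (req. 2) is then just a matter of including it in whatever serialization the app already uses for chat history.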

Expected Benefits

  • Improved user experience with reduced model switching.
  • Enhanced productivity when working across multiple chats.
  • Greater flexibility in model usage for various conversation types.
@JJsilvera1

Yes, I vote for this also. A sticky box or something, so that not all chats are affected.

@enricoros
Owner

Thanks @syberkitten @JJsilvera1 - by any chance are you guys on V2 already? The branch is still hot, but getting closer to a higher level of quality, and definitely night and day compared to V1.

Question for you. I had requests for different models, and it's hard to support all of them with minimal UI/Code complexity:

  1. global model, works poorly (current)
  2. one model per chat
    • when creating a new chat, the current (or last used, or a smarter choice of) model and persona get copied to the new chat
  3. one model per Persona
    • this way changing personas can change both the system prompt and the model
  4. one model per persona, but with a per-chat override
    • i.e. a Persona has its own default model (assigned in the persona editor), but on a per-chat basis one can override it and choose a different model

Please let me know if any of these (or a combination thereof, or something else) would be a productive pattern for you. Also consider a future in which we have multiple branches per chat (split screen, or different computers), where the persona-models are per-branch.
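Option 4 above amounts to a simple resolution chain: a per-chat override wins, then the persona's default model, then a global fallback. A minimal sketch, with hypothetical names that are not big-AGI's actual API:

```typescript
interface Persona {
  id: string;
  defaultModelId?: string; // assigned in the persona editor
}

interface Chat {
  personaId: string;
  modelOverrideId?: string; // set by the user on this chat only
}

// Illustrative fallback; in the app this would be the global model setting.
const GLOBAL_FALLBACK_MODEL = 'gpt-4o';

// Precedence: chat override > persona default > global fallback.
function resolveModel(chat: Chat, personas: Map<string, Persona>): string {
  return (
    chat.modelOverrideId ??
    personas.get(chat.personaId)?.defaultModelId ??
    GLOBAL_FALLBACK_MODEL
  );
}
```

This shape also extends naturally to the per-branch future mentioned above: a branch would simply carry its own optional override at the front of the same chain.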

@JJsilvera1

JJsilvera1 commented Sep 20, 2024

To be honest, I'm not exactly sure what version I'm on. Do I just refresh the browser and it updates, or do I have to do something more?

No, I'm on 1.16.8

@noahkiss

@enricoros I think your #4 is the best option, allowing Personas to be configured with default models, and then allowing that model to be updated per-chat. Even allowing Personas to default to a saved Beam configuration would be awesome!

@enricoros
Owner

Agree with the flow, that's gonna be it. Can you expand on the persona as beam configuration?

@noahkiss

noahkiss commented Oct 25, 2024

Sure - allowing Personas to use the Beam Model Presets (screenshot below) could let us jump straight into a beam thread, which, with Auto-Merge enabled, could produce a more opaque multi-model response with (hopefully) smarter replies. Instead of a per-message beam pop-up, each message could be sent, then (similar to /draw now) there could be intermediary messages like "querying models X, Y, Z..." and "synthesizing results with model J". Or the auto-fused response could just stream directly if Auto-Merge is enabled, assuming we also want to Auto-Accept the merge (an Auto-Accept Merge option would be nice to have either way).

As I wrote it out, I think my wish grew significantly in scope. Originally I hadn't thought it through much, but even just having a per-persona default beam configuration (just like the per-persona model: if I start with the Developer persona, Beam always defaults to Mistral Code, Claude, 4-Turbo, or some other selected Model Preset) would be a welcome addition, and hopefully require a little less work than the above.
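The smaller wish (a per-persona default Beam preset) could be captured by extending the persona config with an optional preset. A rough sketch, all names hypothetical:

```typescript
interface BeamPreset {
  name: string;
  modelIds: string[];      // models queried in parallel
  mergeModelId: string;    // model used to synthesize/fuse the replies
  autoMerge: boolean;
  autoAcceptMerge: boolean;
}

interface PersonaConfig {
  id: string;
  defaultModelId?: string;
  defaultBeamPreset?: BeamPreset; // if set, new chats start in beam mode
}

// Picking the starting mode for a new chat under a persona:
function startMode(p: PersonaConfig): 'beam' | 'single' {
  return p.defaultBeamPreset ? 'beam' : 'single';
}
```

The larger wish (intermediary "querying models..." messages and auto-accepted fusion) would then be driven by the `autoMerge` / `autoAcceptMerge` flags on the same preset.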

[Screenshot of the Beam Model Presets menu]

p.s. I saw an HN thread which may interest you, in case you haven't heard of this company. The idea is they keep a leaderboard of which models work best in certain situations, and they automatically select the model based on which leaderboard category the user's query falls under (e.g. if the query resembles code and Sonnet 3.5 is currently the top-ranked code model, it is automatically selected for that query).

https://news.ycombinator.com/item?id=41937572

@enricoros
Owner

Thanks for taking the time to explain this in great detail. This is a good suggestion; all I need to figure out are some automation details and dependencies, handled in an asynchronous manner, but it's all doable, and I love the way you use the app. Thanks for the HN thread too; it's an interesting feature we should provide. I was actually thinking of using the beam information (which replies you select the most) as a signal for the leaderboard.
