This repository has been archived by the owner on Sep 16, 2024. It is now read-only.
This would probably be better if it used LangChain: it would give us more functionality (memory, chains, swapping models), and we could still default to ChatGPT, so I don't think we'd lose anything we have now.
What does everyone think?
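To illustrate, here's a minimal sketch of what "default to ChatGPT, but behind LangChain" could look like with LangChain's JS bindings. The package split, the `gpt-3.5-turbo` default, and the `reply` helper are my assumptions, not anything wired into this bot yet:

```ts
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

// Same ChatGPT model the bot talks to today, just behind LangChain's
// chat-model interface so memory/chains can be layered on later.
// Reads OPENAI_API_KEY from the environment.
const chat = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0.7 });

// Hypothetical helper showing the drop-in call path.
export async function reply(text: string): Promise<string> {
  const res = await chat.invoke([new HumanMessage(text)]);
  // res.content can be a string or structured parts; coerce for simplicity.
  return typeof res.content === "string"
    ? res.content
    : JSON.stringify(res.content);
}
```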
I know this was raised quite a while back, but I've recently been thinking it would be amazing to actually use LangChain properly.
Incorporating extra memory/context is an obvious application here, but even just running queries through a cheaper model to decide whether to spend the money on processing a response with GPT-4 or 4.5 could be really useful.
I'm thinking, for example, of a support room: run each incoming message through a cheap model to decide whether it's "a message that would be best answered by a technical engineer". I've tried something similar at work in non-Matrix settings, but I'd be very interested to try it in one or two public support rooms on Matrix, handling some of the simpler problems without paying for a full GPT-4 response to every message (rough sketch below).
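To make that concrete, here's a rough sketch of the cheap-model triage step in LangChain JS. The model names, the escalation prompt, and the yes/no protocol are all assumptions on my part; a real version would want a structured output parser rather than string matching:

```ts
import { ChatOpenAI } from "@langchain/openai";
import { SystemMessage, HumanMessage } from "@langchain/core/messages";

// Cheap classifier decides whether a message deserves an expensive reply.
const triage = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0 });
const expensive = new ChatOpenAI({ model: "gpt-4" });

async function needsEngineer(text: string): Promise<boolean> {
  const res = await triage.invoke([
    new SystemMessage(
      'Answer only "yes" or "no": would this support-room message be best answered by a technical engineer?'
    ),
    new HumanMessage(text),
  ]);
  return String(res.content).trim().toLowerCase().startsWith("yes");
}

// Hypothetical entry point: null means "not worth a GPT-4 response".
export async function handleRoomMessage(text: string): Promise<string | null> {
  if (!(await needsEngineer(text))) return null;
  const res = await expensive.invoke([new HumanMessage(text)]);
  return String(res.content);
}
```

The design point is just that the classifier call costs a fraction of the full response, so screening every message is still cheaper than answering every message with GPT-4.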
That looks interesting, especially the use of davinci and embeddings... though the changes would need to be cherry-picked, because there are quite a few references to their specific bot implementation 🤔