Is your feature request related to a problem? Please describe.
As a user I want to get a response that provides the best possible experience given the state of a conversation and type of query I perform.
Describe the solution you'd like
The current chatbot implementation is geared towards a proper conversation with a user about a topic. Whilst LLMs are themselves stateless, the guidance engine maintains the state of a conversation through memory and accesses a corpus of relevant information through a vector database. In addition, any follow-up question is rephrased to ensure a relevant response in the context of the whole conversation.
The chatbot also recognises when a sentiment ('wow cool!') is entered as a query rather than a question ('what about spaces?') and responds accordingly. There are a number of situations, however, where the response does not fully take the query and/or state into account. Examples are:

- even when a user enters a sentiment rather than a question and the chatbot responds accordingly, a number of potentially irrelevant reference URLs are returned;
- the performance of the overall response chain may be sub-optimal due to unnecessary retrieval of data from the database;
- specific instructions by the user ('tell me about spaces in about 50 words') may be ignored due to the current setup.
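The issues above stem from the chain being fixed: retrieval runs for every query, regardless of its type. A minimal sketch of such a fixed chain, using hypothetical stand-in functions (none of these names are from the actual codebase), shows why a sentiment still produces reference URLs:

```python
# Hypothetical sketch of the current fixed chain: every query, even a
# sentiment, passes through rephrasing and vector retrieval, so
# reference URLs are always attached to the response.

def rephrase(query, history):
    # Stand-in for the LLM call that rewrites a follow-up question
    # in the context of the whole conversation.
    return query if not history else f"{query} (in context of: {history[-1]})"

def retrieve(query):
    # Stand-in for the vector-database lookup; it always runs.
    return ["https://docs.example.org/spaces"]

def answer(query, docs):
    # Stand-in for the final LLM call that produces the response.
    return {"text": f"Answer to: {query}", "references": docs}

def fixed_chain(query, history):
    q = rephrase(query, history)
    docs = retrieve(q)  # unconditional retrieval, even for sentiments
    return answer(q, docs)

# A sentiment still triggers retrieval and gets references attached:
response = fixed_chain("wow cool!", ["what about spaces?"])
assert response["references"]  # potentially irrelevant URLs returned
```

The unconditional `retrieve` call is also the source of the performance concern: the database round-trip happens whether or not its output is used.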
A more flexible approach to improving the user experience may be the use of 'agents'. In the context of LLM chains, an agent makes the next action in the chain depend on the outcome of the previous action (e.g. on the result of an LLM call). Introducing agents would make it easier to improve the user experience and to implement new features.
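The agent idea can be sketched as follows (all names are hypothetical, and the classifier is a crude keyword heuristic standing in for an LLM call): a classification step decides the next action, so retrieval is skipped for sentiments and no reference URLs are attached to the acknowledgement.

```python
# Agent-style sketch: the outcome of a classification step determines
# the next action in the chain, rather than every query following the
# same fixed path.

def classify(query):
    # Stand-in for an LLM call that labels the query type; a simple
    # keyword check here, for illustration only.
    sentiments = {"wow cool!", "nice!", "thanks!"}
    return "sentiment" if query.lower().strip() in sentiments else "question"

def retrieve(query):
    # Stand-in for the vector-database lookup.
    return ["https://docs.example.org/spaces"]

def respond(query, docs):
    # Stand-in for the final LLM call for genuine questions.
    return {"text": f"Answer to: {query}", "references": docs}

def acknowledge(query):
    # Sentiments get a short reply, with no retrieval and no URLs.
    return {"text": "Glad you like it!", "references": []}

def agent(query):
    # The next action depends on the outcome of the previous step.
    if classify(query) == "sentiment":
        return acknowledge(query)
    return respond(query, retrieve(query))

assert agent("wow cool!")["references"] == []
assert agent("what about spaces?")["references"]
```

The same branching point could later route other cases, e.g. detecting a '50 words' instruction and passing it through to the response step.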