Retrieval is done on a per-query basis by design, so it's expected that your second search won't return documents relevant to your first search.
However, in an actual conversation the model does have knowledge of your previous queries and retrieval results, and will respond accordingly (how well it retains previous context also depends on how you've written your prompts and on the context window).
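One common way to bridge per-query retrieval and conversation context is to rewrite a follow-up question into a standalone query before searching (this is presumably what GPT-4 does well here). A minimal sketch, assuming a hypothetical `llm` callable that maps a prompt string to a completion; all names are illustrative, not Dify's actual API:

```python
# Sketch of "query rewriting" before vector search: condense a follow-up
# question plus chat history into a standalone query. `llm` is a hypothetical
# callable (prompt -> completion string); names here are illustrative only.

def build_rewrite_prompt(history, question):
    """Format chat history and the follow-up into a rewrite instruction."""
    lines = [f"{role}: {text}" for role, text in history]
    return (
        "Given the conversation below, rewrite the final question so it is "
        "self-contained for a vector search.\n\n"
        + "\n".join(lines)
        + f"\nFollow-up question: {question}\nStandalone query:"
    )

def condense_query(history, question, llm):
    """Return a standalone search query; fall back to the raw question."""
    rewritten = llm(build_rewrite_prompt(history, question)).strip()
    return rewritten or question
```

With this pattern, a follow-up like "what are its advantages?" after "BYD company" can be rewritten to mention BYD explicitly, so the second retrieval stays on topic even though retrieval itself remains per-query.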
We found that if we use GPT-4 in Dify, it rewrites the second search query automatically, which is very cool. But this doesn't work when using other LLM models.
Self Checks
Dify version
0.3.2
Cloud or Self Hosted
Self Hosted (Docker)
Steps to reproduce
In the first conversation turn I searched "BYD company"; in the second turn I searched "what are its advantages". The vector search results for the second query were not about "BYD".
✔️ Expected Behavior
The vector search results should reflect all of the user's input across the conversation, not just the latest query.
❌ Actual Behavior
The vector search results only matched the literal second query and missed the user's intent (the context about BYD).