Been experimenting with Gemini 2.0 Flash Thinking (awful name) lately, and it seems like the AnythingLLM frontend renders the API answer like this:
[Thought process][Answer]
It puts all the text in the same "space", as if it were one single answer, but it isn't.
This is an example of an answer as shown in AnythingLLM:
The highlighted part is where I think the thought and the answer get concatenated, not only because it clearly looks that way, but also because it's the only place in the whole answer where a period is missing its following space.
This is an example of the same prompt in Google AI Studio:
If I expand that Thought box, it shows the entire preceding thought process.
I'm not sure whether the API response body makes this possible to implement, but I sure hope it does! I also wonder whether the same mistake happens with OpenAI's o1.
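
For reference, here's a rough sketch (TypeScript, Node 18+, plain REST, no SDK) of how one could inspect the raw response body to check whether the thought and the answer actually come back as separate parts. The `v1alpha` path and the per-part `thought` flag are guesses on my part based on how AI Studio displays it, not confirmed API behaviour:

```typescript
// Rough sketch: probe the raw Gemini response to see whether thought and
// answer arrive as separate parts. The `thought` flag and the `v1alpha`
// version path are assumptions, not confirmed API behaviour.
const API_KEY = process.env.GEMINI_API_KEY; // your Google AI Studio key
const MODEL = "gemini-2.0-flash-thinking-exp";

async function inspectParts(prompt: string): Promise<void> {
  const res = await fetch(
    `https://generativelanguage.googleapis.com/v1alpha/models/${MODEL}:generateContent?key=${API_KEY}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ contents: [{ parts: [{ text: prompt }] }] }),
    },
  );
  const data = await res.json();
  const parts: any[] = data?.candidates?.[0]?.content?.parts ?? [];

  // If the API tags thought parts, the frontend could render them in a
  // collapsible "Thought" box instead of concatenating everything.
  const thoughts = parts.filter((p) => p.thought === true);
  const answer = parts.filter((p) => !p.thought);

  console.log("thought parts:", thoughts.map((p) => p.text));
  console.log("answer parts:", answer.map((p) => p.text));
}

inspectParts("Why is the sky blue?");
```

Even if the flag isn't there, logging `parts` should at least show whether the concatenation happens on Google's side or in the AnythingLLM frontend.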
Edit: One workaround I found is to specify in the workspace's system prompt that responses should be structured like so:
[Thought process]
[Enter]
[Response]
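
For anyone who wants to try it, the system-prompt instruction can be something along these lines (my wording is just an example, not anything AnythingLLM requires):

```
Always structure your replies in two sections:

[Thought process]
<your reasoning here>

[Response]
<your final answer here>

Leave a blank line between the two sections.
```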