Add support to ◁think▷...◁/think▷ format and DRY the thinking processing logic #16364
base: master
Conversation
… processing code
Why and how in hell did we end up getting the frontend to parse thinking tags? The backend returns thinking content inside a dedicated field.
Parsing thinking content on the frontend has been around since the previous version of the WebUI. It's necessary because there are cases where we get thinking content directly in the message instead of in the dedicated field; some models do return thinking content inline:
llama.cpp/tools/server/webui/src/lib/components/app/chat/ChatMessages/ChatMessage.svelte Lines 48 to 59 in 132d673
llama.cpp/tools/server/webui/src/lib/stores/chat.svelte.ts Lines 329 to 346 in 132d673
So it's some kind of "not implemented on the backend (C++ code), but faster to implement on the frontend" situation?
Yes.
content.includes('<think>') ||
content.includes('[THINK]') ||
THINKING_FORMATS.some((format) => content.includes(format.startTag)) ||
content.includes('<|channel|>analysis')
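For illustration, a minimal sketch (hypothetical helper, not the actual webui code) of how such a tag list can be used to split a message into visible and thinking parts; the format list mirrors the tags discussed in this thread:

```typescript
// Hypothetical helper: split a message into visible content and thinking
// content based on known start/end tag pairs.
interface ThinkingFormat {
  startTag: string;
  endTag: string;
}

const THINKING_FORMATS: ThinkingFormat[] = [
  { startTag: '<think>', endTag: '</think>' },
  { startTag: '[THINK]', endTag: '[/THINK]' },
  { startTag: '◁think▷', endTag: '◁/think▷' }
];

function splitThinking(content: string): { visible: string; thinking: string } {
  for (const { startTag, endTag } of THINKING_FORMATS) {
    const start = content.indexOf(startTag);
    if (start === -1) continue;

    const end = content.indexOf(endTag, start + startTag.length);
    // If the end tag has not arrived yet, everything after the start tag is
    // still part of the in-progress thinking block.
    const thinking =
      end === -1
        ? content.slice(start + startTag.length)
        : content.slice(start + startTag.length, end);
    const visible =
      content.slice(0, start) + (end === -1 ? '' : content.slice(end + endTag.length));

    return { visible: visible.trim(), thinking: thinking.trim() };
  }
  return { visible: content, thinking: '' };
}
```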
Maybe gpt-oss handling on the client is not needed anymore. At least when I test with curl, it seems the reasoning is correctly parsed into reasoning_content. So maybe this can be removed.
Hi Georgi: I checked the new Harmony/GPT-OSS path. The web UI now asks for reasoning_format: "auto" and consumes the streamed delta.reasoning_content field directly, persisting it as message.thinking; the backend parser routes the <|channel|>analysis blocks into that field for us. So the extra client-side check for <|channel|>analysis is redundant at this point and can be dropped. I'll run some tests tonight to confirm.
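For context, a small sketch of consuming that field on the client, assuming the server's OpenAI-compatible /v1/chat/completions endpoint; the URL, port, and helper name are illustrative, not the actual webui code:

```typescript
// Stream a chat completion and collect visible content and backend-parsed
// reasoning separately, using reasoning_format: "auto".
async function streamChat(prompt: string): Promise<{ content: string; thinking: string }> {
  const res = await fetch('http://localhost:8080/v1/chat/completions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      messages: [{ role: 'user', content: prompt }],
      stream: true,
      reasoning_format: 'auto'
    })
  });

  let content = '';
  let thinking = '';
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();

  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;

    // Each SSE line looks like "data: {...}" or "data: [DONE]".
    // (A real client would buffer partial lines across reads.)
    for (const line of decoder.decode(value, { stream: true }).split('\n')) {
      if (!line.startsWith('data: ') || line.includes('[DONE]')) continue;
      const delta = JSON.parse(line.slice(6)).choices?.[0]?.delta ?? {};
      // Reasoning arrives in its own field, separate from the visible answer.
      if (delta.reasoning_content) thinking += delta.reasoning_content;
      if (delta.content) content += delta.content;
    }
  }
  return { content, thinking };
}
```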
Thanks for the support. Yes, this is what I figured as well, though my understanding of the chat parsing logic is very rudimentary. Extra eyes on this are appreciated.
…ives
- Captured inline <think> segments during streaming, forwarding them to the reasoning UI while keeping the cleaned assistant message stream intact
- Tracked when explicit reasoning_content chunks arrive so inline capture is skipped once the server provides dedicated reasoning updates
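A rough sketch of that behaviour (class and method names are hypothetical, and tags split across chunk boundaries are not handled here): inline <think> capture that steps aside once dedicated reasoning_content deltas start arriving.

```typescript
// Route streamed deltas into visible and thinking buffers. Once the server
// sends reasoning_content, stop scanning the visible stream for inline tags.
class StreamingThinkingTracker {
  private sawReasoningField = false;
  private insideInlineThink = false;

  visible = '';
  thinking = '';

  push(delta: { content?: string; reasoning_content?: string }): void {
    if (delta.reasoning_content) {
      // The server already parsed the reasoning; trust it from now on.
      this.sawReasoningField = true;
      this.thinking += delta.reasoning_content;
    }
    if (!delta.content) return;

    if (this.sawReasoningField) {
      this.visible += delta.content;
      return;
    }

    // Inline capture path: text between <think> and </think> goes to the
    // reasoning buffer, everything else to the visible message.
    let chunk = delta.content;
    while (chunk.length > 0) {
      if (!this.insideInlineThink) {
        const start = chunk.indexOf('<think>');
        if (start === -1) {
          this.visible += chunk;
          return;
        }
        this.visible += chunk.slice(0, start);
        chunk = chunk.slice(start + '<think>'.length);
        this.insideInlineThink = true;
      } else {
        const end = chunk.indexOf('</think>');
        if (end === -1) {
          this.thinking += chunk;
          return;
        }
        this.thinking += chunk.slice(0, end);
        chunk = chunk.slice(end + '</think>'.length);
        this.insideInlineThink = false;
      }
    }
  }
}
```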
Your PR already improves the old solution.
Tested with GPT-OSS-120B, Qwen3 A3B Thinking, and GLM 4.5 Air.
There's still one tricky edge case that isn't handled: some models expect the <think> tag to already be opened in the system prompt to start the chain-of-thought, and they were only trained to close it. With SFT done that way, compatibility with other models wasn't really considered: on the very first chunk, how do you know whether it's reasoning or the final answer? Handling it would mean hooking into /props or the Jinja template again just to propagate extra info, which feels like a brittle workaround and doesn't really align with the spirit of OpenAI compatibility. But this is not really a regression: I have not seen any WebUI handle it correctly so far. Alternatively, upon detection of the </think>, we could retroactively render the preceding text as a "thinking block" at the start of the streamed final answer.
From memory, that's what the Qwen3 2507 Thinking model does: it doesn't open the tag, so we have to consider it already opened.
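A small sketch of that retroactive approach (hypothetical helper, not part of this PR): if a closing tag appears without a preceding opening tag, reclassify everything before it as thinking.

```typescript
// Treat a stray </think> as an implicitly opened thinking block: everything
// before it becomes reasoning, everything after it stays visible.
function splitImplicitThinking(content: string): { visible: string; thinking: string } {
  const openIdx = content.indexOf('<think>');
  const closeIdx = content.indexOf('</think>');

  if (closeIdx !== -1 && (openIdx === -1 || closeIdx < openIdx)) {
    return {
      thinking: content.slice(0, closeIdx).trim(),
      visible: content.slice(closeIdx + '</think>'.length).trim()
    };
  }
  return { visible: content, thinking: '' };
}
```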
thinking.ts