When using the reasoning option in combination with an Ollama backend that is reached through an endpoint other than localhost, the following error occurs:
```
ERROR Reasoning error: [WinError 10061] No connection could be made because
the target machine actively refused it
```
I tried the basic "plan an itinerary" example with Ollama:

```python
from phi.agent import Agent
from phi.model.ollama import Ollama

task = "Plan an itinerary from Los Angeles to Las Vegas"
reasoning_agent = Agent(
    model=Ollama(id="vanilj/Phi-4", host="192.168.1.2"),
    reasoning=True,
    markdown=True,
    structured_outputs=True,
    debug_mode=False,
)
reasoning_agent.print_response(task, stream=False, show_full_reasoning=True)
```
After some digging, I found that somewhere on the way to the _run() function the model loses its host setting when run in reasoning mode; without reasoning, everything works fine. I printed the self.model object from inside _run() in both cases.
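My guess at the mechanism (a hypothetical sketch, not the actual phi source): the reasoning path seems to build a fresh model instance from a handful of fields instead of reusing the configured one, which would silently drop connection settings such as host:

```python
# Hypothetical reconstruction of the suspected bug, NOT the real phi code:
# rebuilding the model from its id alone loses host, timeout, and
# client_params, so the new instance falls back to localhost.
reasoning_model = self.model.__class__(id=self.model.id)
reasoning_model.response_format = ReasoningSteps  # matches the dump below
reasoning_model.structured_outputs = True
# reasoning_model.host is now None -> WinError 10061 on non-local setups
```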
With reasoning (note host=None and response_format set to ReasoningSteps):

```
id='vanilj/Phi-4' name='Ollama' provider='Ollama' metrics={} response_format=<class 'phi.reasoning.step.ReasoningSteps'> tools=None tool_choice=None run_tools=True show_tool_calls=False tool_call_limit=None functions=None function_call_stack=None system_prompt=None instructions=None session_id='e35f4e16-4bfc-47ec-b978-52b4ac9810bf' structured_outputs=True supports_structured_outputs=True format=None options=None keep_alive=None request_params=None host=None timeout=None client_params=None client=None async_client=None
```
Without reasoning (host='192.168.1.2' is preserved):

```
id='vanilj/Phi-4' name='Ollama' provider='Ollama' metrics={} response_format=None tools=None tool_choice=None run_tools=True show_tool_calls=False tool_call_limit=None functions=None function_call_stack=None system_prompt=None instructions=None session_id='cb67467c-f9c6-474a-9334-bdc4c64ab34d' structured_outputs=False supports_structured_outputs=True format=None options=None keep_alive=None request_params=None host='192.168.1.2' timeout=None client_params=None client=None async_client=None
```
As a really dirty fix, I set the self.model.host parameter manually in the _run() function, and everything seems to work fine. So far, reasoning with Ollama models does a good job, and it would be great if this minor flaw could be fixed.
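For anyone hitting the same thing before a proper fix lands, this is roughly what the workaround looks like (a sketch only; the exact insertion point inside Agent._run() depends on your phi version, and the hard-coded host is my own setup):

```python
# Dirty workaround, patched into phi's Agent._run() before the
# reasoning step talks to Ollama: re-apply the lost host setting.
if getattr(self.model, "host", None) is None:
    self.model.host = "192.168.1.2"  # my Ollama endpoint; adjust as needed
```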