Reasoning with Ollama loses Host setting #1653

Open
turicumoperarius opened this issue Dec 29, 2024 · 1 comment

@turicumoperarius

When using the reasoning option with Ollama as a backend on an endpoint other than localhost, the following error occurs.

ERROR Reasoning error: [WinError 10061] No connection could be made because the target machine actively refused it

I tried the basic plan itinerary example with Ollama.

task = "Plan an itinerary from Los Angeles to Las Vegas" reasoning_agent = Agent(model=Ollama(id="vanilj/Phi-4", host='192.168.1.2'), reasoning=True, markdown=True, structured_outputs=True, debug_mode=False) reasoning_agent.print_response(task, stream=False, show_full_reasoning=True)

After some digging, I found out that somewhere on the way to the _run() function the model loses the host setting when it is run in reasoning mode. Without reasoning, everything works fine. I printed the self.model object from the _run() function with and without reasoning set.

With reasoning:
id='vanilj/Phi-4' name='Ollama' provider='Ollama' metrics={} response_format=<class 'phi.reasoning.step.ReasoningSteps'> tools=None tool_choice=None run_tools=True show_tool_calls=False tool_call_limit=None functions=None function_call_stack=None system_prompt=None instructions=None session_id='e35f4e16-4bfc-47ec-b978-52b4ac9810bf' structured_outputs=True supports_structured_outputs=True format=None options=None keep_alive=None request_params=None host=None timeout=None client_params=None client=None async_client=None

Without reasoning:
id='vanilj/Phi-4' name='Ollama' provider='Ollama' metrics={} response_format=None tools=None tool_choice=None run_tools=True show_tool_calls=False tool_call_limit=None functions=None function_call_stack=None system_prompt=None instructions=None session_id='cb67467c-f9c6-474a-9334-bdc4c64ab34d' structured_outputs=False supports_structured_outputs=True format=None options=None keep_alive=None request_params=None host='192.168.1.2' timeout=None client_params=None client=None async_client=None

With a really dirty fix, setting the self.model.host parameter manually in the _run() function, everything works again. So far, reasoning with Ollama models does a good job, and it would be great if this minor flaw could be fixed.
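
In the meantime, a user-level workaround may be possible without patching the library. This is only a sketch and assumes that phi's Ollama wrapper builds its client with the official ollama Python package, which falls back to the OLLAMA_HOST environment variable when no host is passed, so the dropped host attribute no longer matters (the exact URL form and port depend on your setup; 11434 is the Ollama default):

import os

from phi.agent import Agent
from phi.model.ollama import Ollama

# Assumption: when host ends up as None in reasoning mode, the underlying
# ollama client reads OLLAMA_HOST, so exporting it keeps requests pointed
# at the remote server.
os.environ["OLLAMA_HOST"] = "http://192.168.1.2:11434"

task = "Plan an itinerary from Los Angeles to Las Vegas"

reasoning_agent = Agent(
    model=Ollama(id="vanilj/Phi-4", host="192.168.1.2"),
    reasoning=True,
    markdown=True,
    structured_outputs=True,
)

reasoning_agent.print_response(task, stream=False, show_full_reasoning=True)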

@ysolanky
Contributor

@turicumoperarius that is an excellent find. Thank you so much! I will raise this issue and get a fix out.
