Improving Goose performance with local models #1403
ahau-square started this conversation in Ideas
Replies: 2 comments 1 reply
-
PR in progress for tool shim: #1448
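For context on what a tool shim does, here is a minimal sketch (hypothetical, not the actual #1448 implementation): a model without native tool calling is prompted to emit a JSON object describing the call, and the shim extracts and validates that object from the free-form reply.

```python
import json
import re

# Hypothetical tool-shim sketch: the model is instructed to answer with a JSON
# object like {"tool": "...", "arguments": {...}} whenever it wants to call a
# tool; the shim finds that object in the reply and parses it.
TOOL_CALL_RE = re.compile(r"\{.*\}", re.DOTALL)

def extract_tool_call(model_output: str):
    """Return (tool_name, arguments) if the reply contains a tool call, else None."""
    match = TOOL_CALL_RE.search(model_output)
    if not match:
        return None
    try:
        payload = json.loads(match.group(0))
    except json.JSONDecodeError:
        return None
    if "tool" not in payload:
        return None
    return payload["tool"], payload.get("arguments", {})

# Example: a non-tool-calling model answered in plain text with embedded JSON.
reply = 'Sure, I will list the files.\n{"tool": "shell", "arguments": {"command": "ls"}}'
call = extract_tool_call(reply)  # ("shell", {"command": "ls"})
```

A real shim would also need to handle malformed JSON retries and multiple calls per reply; this only shows the core extract-and-parse step.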
-
Hey! Thanks so much for doing this! Just wanted to share some of my experiences so far, running on an M1 MacBook:
I'm not very experienced with LLMs, so I'm just poking around. If there are any specific things I could provide, I'm happy to try and collect some proper data!
Improving Goose Performance with Local Models
We're working on measuring and improving Goose performance with local models (focused on those available through Ollama) to increase accessibility and enhance the fully open source experience.
Current Focus Areas
- Open models with native tool calling capabilities
- Open models without native tool calling
Our Approach
We're benchmarking open models against closed models (from Anthropic and OpenAI) on simple tasks to establish performance baselines. Our initial findings suggest that tool calling is a significant limitation for many open models today: even models with the technical capability often fail to use it reliably. Recent research supports this finding.
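To illustrate the kind of baseline scoring involved (hypothetical names and data, not the actual benchmark harness), tool-calling accuracy on simple tasks can be measured by comparing the call each model emits against the expected one:

```python
from dataclasses import dataclass

# Hypothetical scoring sketch: each benchmark task has one expected tool call,
# and a model passes a task only if it names the right tool with the right
# arguments. Frozen dataclasses give us value equality for free.
@dataclass(frozen=True)
class ToolCall:
    tool: str
    arguments: tuple  # (key, value) pairs, kept hashable for easy comparison

def score(expected: list, actual: list) -> float:
    """Fraction of tasks where the model produced exactly the expected call."""
    passed = sum(1 for e, a in zip(expected, actual) if e == a)
    return passed / len(expected)

# Made-up example: the model gets the first task right, the second wrong.
expected = [ToolCall("shell", (("command", "ls"),)),
            ToolCall("read_file", (("path", "README.md"),))]
actual = [ToolCall("shell", (("command", "ls"),)),
          ToolCall("shell", (("command", "cat README.md"),))]
rate = score(expected, actual)  # 0.5
```

Exact-match scoring like this is deliberately strict; a fuller harness would also credit semantically equivalent calls.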
Consumer Hardware Used for Benchmarking
We're running our benchmarks on:
We acknowledge that many Goose users may not have comparable hardware, but we believe providing these benchmarks will still be valuable as a reference point for consumer hardware performance.
Anticipated Improvements
Experimental Work
How You Can Help