No Ollama provider #40
Version 0.0.1 supports Ollama; Ollama was removed in version 0.0.2. Why? Refer to https://github.com/bytedance/UI-TARS-desktop#%EF%B8%8F-important-announcement-gguf-model-performance
At least let us give it a try 🤣
This is not true. 0.0.1 doesn't have Ollama either.
Just use the baseURL (
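For context on this suggestion: Ollama exposes an OpenAI-compatible API (by default at `http://localhost:11434/v1`), so a provider that accepts a custom base URL can usually be pointed at a local Ollama server. A hypothetical configuration sketch — the field names below are illustrative, not the app's actual settings:

```yaml
# Hypothetical provider settings; actual keys depend on the app's config schema.
provider: openai-compatible
baseURL: http://localhost:11434/v1   # Ollama's OpenAI-compatible endpoint
apiKey: ollama                       # Ollama ignores the key, but many clients require a non-empty value
model: my-local-model                # whatever model name you pulled into Ollama
```

Whether this works in practice depends on the model performing acceptably locally, which is the maintainers' stated concern.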
Hello, we have observed that the performance of the Ollama + GGUF approach is currently inferior to cloud deployment. At this stage, we recommend using Hugging Face inference endpoints for optimal results. We will consider uploading to Ollama once local deployment achieves performance parity with online inference.
I understand. But why actively disable a feature because of performance concerns? People might have a killer AI machine with actually good performance. Even more important: for prototyping, it would be amazingly helpful to use Ollama to run 100% locally. Think about building an early prototype in an enterprise scenario, where going the full way to use external services and share data with them might need weeks of discussions before you can even try it out.
I just downloaded the installer for 0.0.2 on Windows.
In issue #11 I saw that there should be a provider for Ollama but it's missing for me.
Any idea how to overcome this?