Auto-Start Ollama Serve #55

Open
rb81 opened this issue May 7, 2024 · 3 comments

Comments


rb81 commented May 7, 2024

It would be great to have an auto-start option to run Ollama if not already running.
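
For context, a minimal sketch of what such an auto-start could look like on macOS, using Ollama's default port (11434) and assuming the `ollama` binary is on PATH (the function names here are illustrative, not part of the app):

```swift
import Foundation

/// Returns true if an Ollama server responds on the default port (11434).
func isOllamaRunning() async -> Bool {
    guard let url = URL(string: "http://localhost:11434") else { return false }
    do {
        // Ollama's root endpoint replies "Ollama is running" with HTTP 200.
        let (_, response) = try await URLSession.shared.data(from: url)
        return (response as? HTTPURLResponse)?.statusCode == 200
    } catch {
        return false
    }
}

/// Launches `ollama serve` in the background if no server is detected.
func autoStartOllamaIfNeeded() async throws {
    let running = await isOllamaRunning()
    guard !running else { return }
    let process = Process()
    // Resolve the binary via /usr/bin/env, so `ollama` must be on PATH.
    process.executableURL = URL(fileURLWithPath: "/usr/bin/env")
    process.arguments = ["ollama", "serve"]
    try process.run()
    // Give the server a moment to come up before issuing requests.
    try await Task.sleep(nanoseconds: 2_000_000_000)
}
```

One caveat: a sandboxed Mac App Store build generally can't spawn external binaries this way, so a sketch like this assumes a non-sandboxed (direct-download) distribution.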

kevinhermawan (Owner) commented

I'm weighing the option of developing a standalone application that eliminates the need to install Ollama, allowing users to download models directly through the app. Still planning...

rb81 (Author) commented May 7, 2024

@kevinhermawan - Personally, I wouldn't switch from Ollama; I use it for many other things. A simple auto-start would be amazing. I'd suggest focusing on a few other basic features that would take your app a long way. Here's a wish list I recently suggested to another app, in case you're interested:

  • Integration with other providers (e.g., OpenAI, Claude, Gemini, etc.)
  • Ability to switch models during conversation
  • Ability to alter hyperparameters and system prompt per conversation (see the sketch after this list)
  • Ability to import/export conversations
  • Ability to share conversations (exports entire thread as HTML/CSS or PDF to share with others)
  • RAG features with both external (e.g., OpenAI) and Ollama-served embedding models
  • Transparency of system prompts (e.g., when using RAG)
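
As a rough illustration of the hyperparameters/system-prompt item: Ollama's /api/chat endpoint already accepts both per request, so a client only needs to store them per conversation and send them along. A minimal Swift sketch (the function name and model value are placeholders):

```swift
import Foundation

/// A chat message in Ollama's /api/chat format.
struct ChatMessage: Codable {
    let role: String      // "system", "user", or "assistant"
    let content: String
}

/// Request body for /api/chat. Options and the system message travel with
/// every request, so each conversation can keep its own settings.
struct ChatRequest: Codable {
    let model: String
    let messages: [ChatMessage]
    let options: [String: Double]
    let stream: Bool
}

func sendChat(model: String, system: String, user: String, temperature: Double) async throws -> Data {
    let body = ChatRequest(
        model: model,                                      // e.g. "llama3" (placeholder)
        messages: [
            ChatMessage(role: "system", content: system),  // per-conversation system prompt
            ChatMessage(role: "user", content: user),
        ],
        options: ["temperature": temperature],             // per-conversation sampling options
        stream: false
    )
    var request = URLRequest(url: URL(string: "http://localhost:11434/api/chat")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONEncoder().encode(body)
    let (data, _) = try await URLSession.shared.data(for: request)
    return data
}
```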

kevinhermawan (Owner) commented

Wow, that's a lot of interesting stuff, @rb81! Thank you for sharing these ideas. 😄
