Your friendly workspace buddy for Miles on Slack. Milo uses local AI models through Ollama to provide helpful responses while keeping your data private.
- 💬 Chat with Milo by mentioning `@milo`
- 🔄 Switch between different AI models on the fly
- 🚀 Fast responses using local models
- 🔒 Privacy-focused: all processing happens locally
Just mention `@milo` followed by your command (a routing sketch follows the list):

- `@milo help` - Show available commands
- `@milo list models` - Show available AI models
- `@milo use model <name>` - Switch to a different model
- `@milo reset model` - Reset to the default model (llama3.2:1b)
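A minimal sketch of how this command routing might look with Slack Bolt (illustrative only, not Milo's actual source; it assumes the `app` instance from the bootstrap sketch in the installation section below, plus the `listOllamaModels`/`askOllama` helpers from the Ollama API sketch further down):

```typescript
// Track the active model; Milo defaults to llama3.2:1b.
let currentModel = 'llama3.2:1b';

app.event('app_mention', async ({ event, say }) => {
  // Strip the leading "<@BOTID>" token to isolate the command text.
  const text = event.text.replace(/<@[^>]+>\s*/, '').trim();

  if (text === 'help') {
    await say('Commands: help, list models, use model <name>, reset model');
  } else if (text === 'list models') {
    await say(await listOllamaModels());
  } else if (text.startsWith('use model ')) {
    currentModel = text.slice('use model '.length).trim();
    await say(`Switched to ${currentModel}`);
  } else if (text === 'reset model') {
    currentModel = 'llama3.2:1b';
    await say('Reset to the default model (llama3.2:1b)');
  } else {
    // Anything else is treated as a prompt for the current model.
    await say(await askOllama(currentModel, text));
  }
});
```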
1. Install dependencies:

   ```bash
   npm install
   ```

2. Install Ollama:

   ```bash
   # macOS
   brew install ollama
   ```

3. Pull the default model:

   ```bash
   ollama pull llama3.2:1b
   ```

4. Create a `.env` file:

   ```env
   SLACK_BOT_TOKEN=xoxb-your-bot-token
   SLACK_SIGNING_SECRET=your-signing-secret
   SLACK_APP_TOKEN=xapp-your-app-token
   ```

5. Start Ollama:

   ```bash
   ollama serve
   ```

6. Start Milo:

   ```bash
   npm start
   ```
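The `xapp-` app token in the `.env` file implies Socket Mode. A minimal Socket Mode bootstrap consuming these variables could look like the sketch below (the standard Slack Bolt setup, not necessarily Milo's exact entry point):

```typescript
import 'dotenv/config';            // load the SLACK_* variables from .env
import { App } from '@slack/bolt';

const app = new App({
  token: process.env.SLACK_BOT_TOKEN,
  signingSecret: process.env.SLACK_SIGNING_SECRET,
  appToken: process.env.SLACK_APP_TOKEN, // xapp- token, required for Socket Mode
  socketMode: true,
});

(async () => {
  await app.start();
  console.log('⚡️ Milo is running');
})();
```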
Milo uses Ollama models. Install additional models with:

```bash
ollama pull <model-name>
```

Some recommended models:

- `llama3.2:1b` (default)
- `codellama`
- `mistral`
- `neural-chat`
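Under the hood, listing and querying models goes through Ollama's local HTTP API, which listens on `http://localhost:11434` by default. A sketch of the two calls a bot like this needs, assuming Node 18+ with built-in `fetch` (`listOllamaModels` and `askOllama` are illustrative helper names, not part of Ollama or Milo):

```typescript
const OLLAMA_URL = 'http://localhost:11434';

// List installed models via GET /api/tags.
async function listOllamaModels(): Promise<string> {
  const res = await fetch(`${OLLAMA_URL}/api/tags`);
  const data = (await res.json()) as { models: { name: string }[] };
  return data.models.map((m) => m.name).join(', ');
}

// Request a single non-streaming completion via POST /api/generate.
async function askOllama(model: string, prompt: string): Promise<string> {
  const res = await fetch(`${OLLAMA_URL}/api/generate`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model, prompt, stream: false }),
  });
  const data = (await res.json()) as { response: string };
  return data.response;
}
```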
Built with:
- TypeScript
- Slack Bolt Framework
- Ollama API
Milo processes all queries locally using Ollama, ensuring your conversations stay private and secure.
Issues and pull requests are welcome! Feel free to contribute to make Milo even better.