A web & desktop chat application built with React, Python FastAPI, and Electron, designed to interact with Ollama's local AI models.
- Modern chat interface
- Local AI model integration via Ollama
- Cross-platform desktop application
- Real-time chat responses
- Code syntax highlighting
- Markdown support
- Node.js 18+ and npm
- Python 3.7+
- Ollama installed and running locally
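
Before installing, you can sanity-check the Python requirement with a short script (a minimal sketch; the 3.7+ floor comes from the prerequisites above, and `meets_minimum` is just an illustrative helper):

```python
import sys

def meets_minimum(version, minimum=(3, 7)):
    """Return True if `version`, a (major, minor) tuple, satisfies `minimum`."""
    return version >= minimum

if __name__ == "__main__":
    ok = meets_minimum(sys.version_info[:2])
    print("Python %d.%d: %s" % (sys.version_info.major,
                                sys.version_info.minor,
                                "OK" if ok else "too old, need 3.7+"))
```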
- Clone the repository:

  ```bash
  git clone https://github.com/nikolliervin/chatai.git
  cd chatai
  ```
- Install backend dependencies:

  ```bash
  cd backend
  python -m venv venv
  # On Windows:
  venv\Scripts\activate
  # On macOS/Linux:
  source venv/bin/activate
  pip install -r requirements.txt
  ```
- Install frontend dependencies:

  ```bash
  cd ../frontend
  npm install
  ```
- Start Ollama on your machine
- Start the backend server:

  ```bash
  cd backend
  # Activate the virtual environment if it is not already active
  python -m uvicorn main:app --reload --host 0.0.0.0 --port 8000
  ```
- Start the frontend development server:

  ```bash
  cd frontend
  npm run electron:dev
  ```
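
The backend talks to Ollama's local REST API, which listens on port 11434 by default. If you want to verify that Ollama is responding independently of the app, here is a minimal standard-library sketch (the model name `llama3` is only an example — use any model you have pulled; this is not the repository's own code):

```python
import json
import urllib.request

# Ollama's default local generate endpoint
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_payload(model, prompt, stream=False):
    """Build the JSON body Ollama's /api/generate endpoint expects."""
    return {"model": model, "prompt": prompt, "stream": stream}

def ask_ollama(model, prompt):
    """Send a one-shot (non-streaming) prompt to a locally running Ollama."""
    body = json.dumps(build_generate_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Requires Ollama to be running and the model pulled (e.g. `ollama pull llama3`)
    print(ask_ollama("llama3", "Say hello in one sentence."))
```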
To create a standalone desktop application:
- After checking out `feature/electron-app`, make sure you're in the frontend directory:

  ```bash
  cd frontend
  ```

- Build the application:

  ```bash
  npm run electron:build
  ```
- Find the installer in:
  - Windows: `frontend/dist-electron/ChatAI-Setup-1.0.0.exe`
  - macOS: `frontend/dist-electron/ChatAI-1.0.0.dmg`
  - Linux: `frontend/dist-electron/ChatAI-1.0.0.AppImage`
- Launch the application
- Select your preferred AI model from the dropdown
- Start chatting!
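
The real-time chat responses come from Ollama's streaming mode: with `"stream": true`, the API returns newline-delimited JSON chunks, each carrying a fragment of the reply in its `response` field and `"done": true` on the final chunk. A sketch of how such a stream can be reassembled (field names follow Ollama's API; this is illustrative, not the app's own code):

```python
import json

def join_stream_chunks(ndjson_text):
    """Reassemble a reply from Ollama-style newline-delimited JSON chunks.

    Each non-empty line is one JSON object; its "response" field holds a
    fragment of the generated text, and "done": true marks the last chunk.
    """
    parts = []
    for line in ndjson_text.splitlines():
        line = line.strip()
        if not line:
            continue
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        if chunk.get("done"):
            break
    return "".join(parts)

if __name__ == "__main__":
    sample = (
        '{"response": "Hel", "done": false}\n'
        '{"response": "lo!", "done": true}\n'
    )
    print(join_stream_chunks(sample))  # prints "Hello!"
```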
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.