This application allows users to upload documents and chat with AI models about their content. It uses semantic search to find the most relevant passages in a document so that responses stay grounded in the source text (a sketch of the idea follows the feature list). Key features:
- Document upload and processing (PDF, TXT)
- Chat with AI about document contents
- Semantic search for accurate document retrieval
- Model selection with adjustable parameters
- Code highlighting and markdown support
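As a rough illustration of the semantic search step, the sketch below ranks document chunks by embedding similarity to a question. This is a minimal sketch, not this repository's actual code: `embed`, `top_chunks`, and the chunking itself are hypothetical stand-ins for whatever the backend really does (e.g., with text-embedding-bge-m3 vectors).

```python
# Illustrative sketch of semantic retrieval; all names here are hypothetical,
# not the application's real code.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def top_chunks(question, chunks, embed, k=3):
    """Rank document chunks by similarity to the question and keep the best k.

    `embed` is assumed to map a string to a vector (e.g., a bge-m3 embedding).
    """
    q_vec = embed(question)
    scored = [(cosine_similarity(q_vec, embed(c)), c) for c in chunks]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [c for _, c in scored[:k]]

# The selected chunks are then passed to the chat model as context.
```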
The application consists of:
- Backend: Flask API that processes documents and connects to LLM services
- Frontend: React application that provides the user interface
- LLM Service: Integration with Ollama for language model capabilities
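To make the wiring concrete, here is a hypothetical sketch of a backend route that forwards a question plus retrieved context to the LLM service. The `/api/chat` route name, the payload shape, and the `retrieve_relevant_chunks` helper are all assumptions for illustration, and Ollama's standard `/api/generate` endpoint is assumed for the forwarding; the real backend may differ.

```python
# Hypothetical sketch of the backend-to-LLM wiring; route name, payload,
# and helper below are illustrative, not this repository's actual code.
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
LLM_URL = "http://localhost:1234"  # LLM service address used in this README


def retrieve_relevant_chunks(question: str) -> str:
    # Placeholder: the real backend would run the semantic search shown above.
    return ""


@app.route("/api/chat", methods=["POST"])  # illustrative endpoint name
def chat():
    question = request.json["question"]
    context = retrieve_relevant_chunks(question)
    # Forward question + context to the language model (Ollama's standard
    # /api/generate endpoint is assumed here).
    resp = requests.post(
        f"{LLM_URL}/api/generate",
        json={
            "model": "deepseek-r1-distill-qwen-32b-mlx",
            "prompt": f"Context:\n{context}\n\nQuestion: {question}",
            "stream": False,
        },
        timeout=120,
    )
    return jsonify({"answer": resp.json()["response"]})
```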
Running with Docker requires:

- Docker
- Docker Compose (usually included with Docker Desktop)
- Clone the repository

  ```bash
  git clone <repository-url>
  cd arun-ai-lab
  ```
- Build and start the containers

  ```bash
  docker-compose up -d
  ```

  Note: The required Ollama models (text-embedding-bge-m3, deepseek-r1-distill-qwen-32b-mlx) are downloaded automatically on first startup. This may take some time depending on your internet connection.
- Access the application (see the reachability check after this list)

  - Frontend: http://localhost:3000
  - Backend API: http://localhost:5001/api
- Stop the application

  ```bash
  docker-compose down
  ```
- View logs

  ```bash
  docker-compose logs
  docker-compose logs -f   # Follow logs in real-time
  ```
- Rebuild after changes

  ```bash
  docker-compose build
  docker-compose up -d
  ```
For more detailed Docker instructions, see DOCKER_README.md.
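Once the containers are up, a quick check like the one below can confirm both services respond at the URLs above. This snippet is just an illustration, not part of the project; any HTTP status (even a 404 from the API root) means the container is serving.

```python
# Quick reachability check for the two services started by docker-compose.
import requests

for name, url in [("frontend", "http://localhost:3000"),
                  ("backend", "http://localhost:5001/api")]:
    try:
        status = requests.get(url, timeout=5).status_code
        print(f"{name}: reachable (HTTP {status})")
    except requests.ConnectionError:
        print(f"{name}: not reachable at {url}")
```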
To run the backend without Docker:

- Navigate to the backend directory

  ```bash
  cd backend
  ```
- Create and activate a virtual environment

  ```bash
  python -m venv venv
  source venv/bin/activate   # On Windows: venv\Scripts\activate
  ```
- Install dependencies

  ```bash
  pip install -r requirements.txt
  ```
- Run the server

  ```bash
  python app.py
  ```
The backend will be available at http://localhost:5001
To run the frontend without Docker:

- Navigate to the frontend directory

  ```bash
  cd frontend
  ```
- Install dependencies

  ```bash
  npm install
  ```
- Start the development server

  ```bash
  npm start
  ```
The frontend will be available at http://localhost:3000
This application requires an Ollama service running locally:
- Install Ollama

  Follow the installation instructions at https://ollama.ai
- Pull the required models

  ```bash
  ollama pull text-embedding-bge-m3
  ollama pull deepseek-r1-distill-qwen-32b-mlx
  ```
- Start Ollama

  The service should be available at http://localhost:1234
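To confirm the models from the previous step are actually installed, you can query the service's model list. The snippet below assumes the service exposes Ollama's standard `/api/tags` endpoint at the address above; if it does not, running `ollama list` on the command line gives the same information.

```python
# Check that the required models are installed, assuming the service exposes
# Ollama's standard /api/tags model listing at the address from this README.
import requests

required = ("text-embedding-bge-m3", "deepseek-r1-distill-qwen-32b-mlx")
tags = requests.get("http://localhost:1234/api/tags", timeout=5).json()
installed = {m["name"] for m in tags.get("models", [])}

for model in required:
    ok = any(model in name for name in installed)  # names may carry a ":latest" tag
    print(model, "OK" if ok else "MISSING")
```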