---
license: apache-2.0
pipeline_tag: translation
---

# WronAI
A comprehensive toolkit for creating, fine-tuning, and deploying large language models with support for both Polish and English.
## Features

- **Ready-to-use WronAI package** - All functionality available through the `wronai` package
- **Model Management** - Easy installation and management of LLM models
- **Multiple Model Support** - Works with various models via Ollama
- **Optimizations** - 4-bit quantization, LoRA, and FP16 support (a minimal sketch follows this list)
- **CLI Tools** - Command-line interface for all operations
- **Production Ready** - Easy deployment with Docker
- **Web Interface** - User-friendly Streamlit-based web UI
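
WronAI's own fine-tuning internals are not shown in this README. As a rough, hypothetical illustration of the optimization techniques named above, the sketch below shows how 4-bit quantization and LoRA are commonly combined using the Hugging Face `transformers`, `peft`, and `bitsandbytes` libraries; the model name and LoRA hyperparameters are assumptions for the example, not WronAI defaults.

```python
# Minimal sketch: 4-bit quantization + LoRA adapters via transformers/peft.
# Illustrates the general technique only; this is NOT WronAI's internal code.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Load the base model in 4-bit NF4 with FP16 compute to reduce GPU memory.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2",  # illustrative model choice
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach small trainable LoRA adapters instead of updating all weights.
lora_config = LoraConfig(
    r=16,                                  # adapter rank (assumed value)
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],   # typical attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```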
## Prerequisites

- Python 3.8+
- Ollama installed and running
- CUDA (optional, for GPU acceleration; a quick check is sketched below)
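
If you want to confirm that GPU acceleration is available before installing, a quick check with PyTorch looks like this (assuming `torch` is installed; the quantization features above imply it as a dependency, but that is an assumption, not a documented requirement):

```python
# Quick check that a CUDA device is visible to PyTorch (assumes torch is installed).
import torch

if torch.cuda.is_available():
    print(f"CUDA available: {torch.cuda.get_device_name(0)}")
else:
    print("No CUDA device found; everything will run on CPU.")
```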
## Installation

```bash
# Install the package
pip install wronai

# Start Ollama (if not already running)
ollama serve &

# Pull the required model (e.g., mistral:7b-instruct)
ollama pull mistral:7b-instruct
```
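
To verify that Ollama is up and the model was pulled, you can query Ollama's documented REST API directly (it listens on port 11434 by default); this check is independent of WronAI:

```python
# Verify the local Ollama server is reachable and list pulled models,
# using Ollama's documented /api/tags endpoint (default port 11434).
import requests

resp = requests.get("http://localhost:11434/api/tags", timeout=5)
resp.raise_for_status()
models = [m["name"] for m in resp.json().get("models", [])]
print("Available models:", models)  # expect e.g. 'mistral:7b-instruct'
```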
## Quick Start

```python
from wronai import WronAI

# Initialize with default settings
wron = WronAI()

# Chat with the model
response = wron.chat("Explain quantum computing in simple terms")
print(response)
```
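
A call like `wron.chat(...)` presumably delegates to the local Ollama server; the package internals are not documented here, but an equivalent direct call against Ollama's documented `/api/chat` endpoint looks like this:

```python
# Equivalent direct call to Ollama's /api/chat endpoint (documented Ollama API).
# A sketch of what a chat call maps to; WronAI's internals may differ.
import requests

payload = {
    "model": "mistral:7b-instruct",
    "messages": [{"role": "user", "content": "Explain quantum computing in simple terms"}],
    "stream": False,  # return one complete response instead of a token stream
}
resp = requests.post("http://localhost:11434/api/chat", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```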
## CLI Usage

```bash
# Start interactive chat
wronai chat

# Run a single query
wronai query "Explain quantum computing in simple terms"

# Start the web UI
wronai web
```
## Working with Models

List available models:

```bash
ollama list
```

Pull a model (if not already available):

```bash
ollama pull mistral:7b-instruct
```

Run with Docker:

```bash
docker run -p 8501:8501 wronai/wronai web
```
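
Once the container is running, the Streamlit web UI should answer on the mapped port. A minimal smoke test, assuming the default port mapping shown above, is:

```python
# Smoke test: confirm the containerized web UI is serving on the mapped port.
import requests

resp = requests.get("http://localhost:8501", timeout=10)
print("Web UI status:", resp.status_code)  # expect 200 once Streamlit is up
```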
## Development

```bash
# Clone the repository
git clone https://github.com/wronai/llm-demo.git
cd llm-demo

# Install in development mode
pip install -e ".[dev]"

# Install pre-commit hooks
pre-commit install
```
## Testing

```bash
# Run all tests
pytest

# Run with coverage
pytest --cov=wronai --cov-report=term-missing
```
## Contributing

Contributions are welcome! Please see our Contributing Guide for details.

## License

This project is licensed under the Apache 2.0 License - see the LICENSE file for details.

## Contact

For questions or support, please open an issue on GitHub or contact us at [email protected].