Tunix (Tune-in-JAX) is a JAX-based library designed to streamline the post-training of Large Language Models. It provides efficient and scalable support for:
- Supervised Fine-Tuning
- Reinforcement Learning (RL)
- Knowledge Distillation
Tunix leverages the power of JAX for accelerated computation and integrates seamlessly with the JAX-based modeling framework Flax NNX.
Current Status: Early Development
Tunix is in early development. We're actively working to expand its capabilities and improve its usability and performance. Stay tuned for upcoming updates and new features!
Tunix is still under development; here's a glimpse of the current features:
- Supervised Fine-Tuning:
  - Full-Weight Fine-Tuning
  - Parameter-Efficient Fine-Tuning (PEFT) with LoRA/Q-LoRA Layers (see the illustrative sketch after this list)
- Reinforcement Learning (RL):
  - Proximal Policy Optimization (PPO)
  - Group Relative Policy Optimization (GRPO) (loss sketch after this list)
  - Token-level Group Sequence Policy Optimization (GSPO-token)
- Preference Fine-Tuning:
  - Preference alignment with Direct Preference Optimization (DPO) (loss sketch after this list)
- Knowledge Distillation:
  - Logit Strategy: A classic approach where the student learns to match the teacher's output probability distribution.
  - Attention Transfer & Projection Strategies: Methods to align the attention mechanisms between the student and teacher models.
  - Feature Pooling & Projection Strategies: General techniques for matching intermediate feature representations, even between models of different architectures.
- Modularity:
  - Components are designed to be reusable and composable
  - Easy to customize and extend
- Efficiency:
  - Native support for common model sharding strategies such as DP, FSDP, and TP (sharding sketch after this list)
  - Designed for distributed training on accelerators (TPU)
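To make the PEFT item above concrete, here is a minimal sketch of the LoRA idea written directly in Flax NNX. It is purely illustrative: `LoRALinear` and its parameter names are hypothetical and do not reflect Tunix's actual API.

```python
import jax
import jax.numpy as jnp
from flax import nnx

class LoRALinear(nnx.Module):
  """A dense layer plus a trainable low-rank update: y = x @ (W + A @ B) + b."""

  def __init__(self, in_features, out_features, rank, *, rngs: nnx.Rngs):
    # Base projection; in PEFT its weights would stay frozen.
    self.base = nnx.Linear(in_features, out_features, rngs=rngs)
    # Low-rank adapters: A is small random, B starts at zero, so the adapted
    # layer initially behaves exactly like the base layer.
    self.lora_a = nnx.Param(
        0.01 * jax.random.normal(rngs.params(), (in_features, rank)))
    self.lora_b = nnx.Param(jnp.zeros((rank, out_features)))

  def __call__(self, x):
    return self.base(x) + x @ self.lora_a.value @ self.lora_b.value

layer = LoRALinear(16, 32, rank=4, rngs=nnx.Rngs(0))
y = layer(jnp.ones((2, 16)))  # shape (2, 32)
```

During parameter-efficient training only the adapter parameters are optimized; Q-LoRA additionally keeps the frozen base weights in a quantized format.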
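Similarly, the RL and preference-tuning items above boil down to a couple of small loss computations. The sketch below shows the group-relative advantage at the heart of GRPO and the pairwise DPO loss in plain JAX; the function names and shapes are illustrative, not Tunix's interfaces.

```python
import jax
import jax.numpy as jnp

def grpo_advantages(rewards, eps=1e-6):
  """Group-relative advantages used by GRPO.

  rewards: [num_prompts, group_size] scalar rewards for completions sampled
  from the same prompt; each reward is normalized against its own group.
  """
  mean = rewards.mean(axis=-1, keepdims=True)
  std = rewards.std(axis=-1, keepdims=True)
  return (rewards - mean) / (std + eps)

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
  """DPO: prefer chosen over rejected responses relative to a frozen reference."""
  chosen_margin = policy_chosen_logps - ref_chosen_logps
  rejected_margin = policy_rejected_logps - ref_rejected_logps
  return -jnp.mean(jax.nn.log_sigmoid(beta * (chosen_margin - rejected_margin)))
```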
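On the efficiency side, the sharding strategies above map onto standard JAX device meshes. The snippet below is plain `jax.sharding` usage (assuming an 8-device slice) and is shown only to illustrate what DP/FSDP/TP layouts mean; it is not Tunix's configuration API.

```python
import numpy as np
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# A 2D device mesh: one axis for data/FSDP sharding, one for tensor parallelism.
devices = np.array(jax.devices()).reshape(4, 2)  # assumes 8 devices
mesh = Mesh(devices, axis_names=("fsdp", "tp"))

weights = jnp.zeros((1024, 4096))
# FSDP-style: shard the weight's first dimension across the 'fsdp' axis.
w_fsdp = jax.device_put(weights, NamedSharding(mesh, P("fsdp", None)))
# TP-style: shard the output dimension across the 'tp' axis instead.
w_tp = jax.device_put(weights, NamedSharding(mesh, P(None, "tp")))
```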
Planned for upcoming releases:
- Agentic RL Training:
  - Async Rollout
  - Multi-turn & multi-step support
  - Tool usage
- Advanced Algorithms:
  - Additional state-of-the-art RL and distillation algorithms
- Scalability:
  - Multi-host distributed training
  - Optimized rollout with vLLM
- User Guides:
  - More advanced RL recipes
You can install Tunix in several ways:
- From PyPI (recommended):
```bash
pip install "tunix[prod]"
```
- Directly from GitHub (latest main branch):
```bash
pip install git+https://github.com/google/tunix
```
- From source (editable install), if you plan to modify the codebase and run it in development mode:
```bash
git clone https://github.com/google/tunix.git
cd tunix
pip install -e ".[dev]"
```
To get started, we provide a set of detailed examples and tutorials:
- PEFT Gemma with QLoRA
- Training Gemma on grade school Math problems using GRPO
- Logit Distillation using Gemma models (a sketch of the underlying loss follows below)
To set up a Jupyter notebook on a single-host GCP TPU VM, please refer to the setup script.
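The logit distillation example above follows the classic soft-target formulation: the student matches the teacher's temperature-scaled output distribution. Here is a minimal sketch of that loss in JAX; the temperature value and function name are illustrative and not taken from the tutorial.

```python
import jax
import jax.numpy as jnp

def logit_distillation_loss(student_logits, teacher_logits, temperature=2.0):
  """Soft-target KD: KL(teacher || student) on temperature-scaled logits."""
  t = temperature
  teacher_probs = jax.nn.softmax(teacher_logits / t, axis=-1)
  teacher_logps = jax.nn.log_softmax(teacher_logits / t, axis=-1)
  student_logps = jax.nn.log_softmax(student_logits / t, axis=-1)
  kl = jnp.sum(teacher_probs * (teacher_logps - student_logps), axis=-1)
  # Scaling by t**2 keeps gradient magnitudes comparable across temperatures.
  return (t ** 2) * jnp.mean(kl)
```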
We plan to provide clear, concise documentation and more examples in the near future.
We welcome contributions! As Tunix is in early development, the contribution process is still being formalized. A rough draft of the contribution process is available here. In the meantime, you can make feature requests, report issues, and ask questions in our Tunix GitHub discussion forum.
GRL (Game Reinforcement Learning), developed by Hao AI Lab at UCSD, is an open-source framework for post-training large language models through multi-turn RL on challenging games. In collaboration with Tunix, GRL integrates seamless TPU support, letting users quickly run scalable, reproducible RL experiments (such as PPO rollouts on Qwen2.5-0.5B-Instruct) on TPU v4 meshes with minimal setup. This partnership empowers the community to push LLM capabilities further, combining Tunix’s optimized TPU runtime with GRL’s flexible game RL pipeline for cutting-edge research and easy reproducibility.
Thank you for your interest in Tunix. We're working hard to bring you a powerful and efficient library for LLM post-training. Please follow our progress and check back for updates!
Thank you to all our wonderful contributors!