
RLHF-Reward-Modeling

Structure

The initial release of this project focused on Bradley-Terry reward modeling and the pairwise preference model. Since then, we have added more advanced techniques for constructing preference models, each organized in its own subfolder (e.g., pair-pm/, armo-rm/, math-rm/).
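To give a sense of what Bradley-Terry reward modeling optimizes, here is a minimal sketch (not the repo's training code): the reward model assigns a scalar score to each response, and the training loss is the negative log-likelihood that the chosen response beats the rejected one.

```python
import math

def bradley_terry_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Negative log-likelihood of the observed preference under the
    Bradley-Terry model: P(chosen > rejected) = sigmoid(r_chosen - r_rejected)."""
    margin = reward_chosen - reward_rejected
    # -log(sigmoid(margin)) computed stably as log(1 + exp(-margin))
    return math.log1p(math.exp(-margin))

# A larger reward margin in favour of the chosen response yields a smaller loss.
assert bradley_terry_loss(2.0, 0.0) < bradley_terry_loss(0.0, 2.0)
```

In practice the scores come from an LLM with a scalar head, and this loss is averaged over a batch of comparison pairs.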

News

🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥

🚀 [Nov 2024] PRM and ORM training code is released under the math-rm/ folder!

🚀 [Sep 2024] ArmoRM training code is released under the armo-rm/ folder!

🚀 [Sep 2024] Code for Semi-Supervised Reward Modeling via Iterative Self-Training is released under the pair-pm/ folder!

🚀 [Jun 2024] Our ArmoRM is the rank #1 8B model on RewardBench!

🚀 [May 2024] The top-3 open-source 8B reward models on RewardBench (ArmoRM, Pair Pref. Model, BT RM) are all trained with this repo!

🚀 [May 2024] The pairwise preference model training code is available (pair-pm/)!

🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥

TL;DR: this is a repo for training the reward/preference models used in DRL-based RLHF (PPO), iterative SFT (rejection-sampling fine-tuning), and iterative DPO.

  • 4 x A40 48G: we can train Gemma-7B-it with max_length 4096 using DeepSpeed ZeRO-3 and gradient checkpointing;
  • 4 x A100 80G: we can train Gemma-7B-it with max_length 4096 using gradient checkpointing;
  • The resulting reward models achieve state-of-the-art performance among open-source reward models on the RewardBench leaderboard.
  • Check out our blog post!

Installation instructions

We recommend creating separate environments for the Bradley-Terry reward model and the pairwise preference model. Installation instructions are provided in the corresponding folders.

Dataset Preparation

The dataset should be preprocessed into the standard format, where each sample consists of two conversations, 'chosen' and 'rejected', that share the same prompt. Here is an example of the rejected conversation in a comparison pair.

[
  { "content": "Please identify the top 5 rarest animals in the world.", "role": "user" },
  { "content": "Do you mean animals that are really rare, or rare relative to the size of the human population?", "role": "assistant" },
  { "content": "The ones that are really rare.", "role": "user" },
  { "content": "Alright, here's what I found:", "role": "assistant" }
]

We have preprocessed many open-source preference datasets into this standard format and uploaded them to the Hugging Face Hub. You can find them HERE. We have also found some mixtures of these preference datasets useful.
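The standard format above can be assembled programmatically. The sketch below uses a hypothetical helper (not part of this repo) to build one sample whose 'chosen' and 'rejected' conversations share the same user prompt:

```python
def make_preference_sample(prompt: str, chosen_reply: str, rejected_reply: str) -> dict:
    """Build one preference sample in the standard format: two conversations,
    'chosen' and 'rejected', that share the same user prompt."""
    shared = [{"role": "user", "content": prompt}]
    return {
        "chosen": shared + [{"role": "assistant", "content": chosen_reply}],
        "rejected": shared + [{"role": "assistant", "content": rejected_reply}],
    }

sample = make_preference_sample(
    "Please identify the top 5 rarest animals in the world.",
    "A complete, helpful answer listing five animals.",   # illustrative reply
    "Alright, here's what I found:",                      # truncated, unhelpful reply
)
# The two conversations start from the identical user turn.
assert sample["chosen"][0] == sample["rejected"][0]
```

Multi-turn prompts work the same way: the shared prefix simply contains more than one turn before the final assistant responses diverge.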

Evaluation Results

You can evaluate the resulting reward model on the RewardBench benchmark dataset with the following command.

CUDA_VISIBLE_DEVICES=1 python ./useful_code/eval_reward_bench_bt.py --reward_name_or_path ./models/gemma_2b_mixture2_last_checkpoint --record_dir ./bench_mark_eval.txt
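At its core, this style of evaluation checks, for each comparison pair, whether the reward model scores the chosen response above the rejected one. A minimal sketch of that accuracy computation (the scores below are hypothetical, not from a real model):

```python
def pairwise_accuracy(scored_pairs: list[tuple[float, float]]) -> float:
    """Fraction of pairs where the chosen response outscores the rejected one."""
    wins = sum(1 for chosen, rejected in scored_pairs if chosen > rejected)
    return wins / len(scored_pairs)

# Hypothetical (chosen_score, rejected_score) pairs from a reward model.
scores = [(1.3, -0.2), (0.8, 1.1), (2.4, 0.5), (0.1, -0.4)]
accuracy = pairwise_accuracy(scores)  # 3 of the 4 pairs are ranked correctly -> 0.75
```

The benchmark script aggregates this per-pair check across RewardBench's subsets and writes the results to the file given by --record_dir.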

To Do

  • Bradley-Terry Reward Model
  • Preference model
  • Multi-Objective Reward Model
  • LLM-as-a-judge

Our models and code have been used in many academic research projects, e.g.,

  1. Xu, Zhangchen, et al. "Magpie: Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing."
  2. Chen, Lichang, et al. "OPTune: Efficient Online Preference Tuning."
  3. Xie, Tengyang, et al. "Exploratory Preference Optimization: Harnessing Implicit Q*-Approximation for Sample-Efficient RLHF." arXiv preprint arXiv:2405.21046 (2024).
  4. Zhong, Han, et al. "DPO Meets PPO: Reinforced Token Optimization for RLHF." arXiv preprint arXiv:2404.18922 (2024).
  5. Zheng, Chujie, et al. "Weak-to-Strong Extrapolation Expedites Alignment." arXiv preprint arXiv:2404.16792 (2024).
  6. Ye, Chenlu, et al. "A Theoretical Analysis of Nash Learning from Human Feedback under General KL-Regularized Preference." arXiv preprint arXiv:2402.07314 (2024).
  7. Chen, Ruijun, et al. "Self-Evolution Fine-Tuning for Policy Optimization."
  8. Li, Bolian, et al. "Cascade Reward Sampling for Efficient Decoding-Time Alignment."
  9. Zhang, Yuheng, et al. "Iterative Nash Policy Optimization: Aligning LLMs with General Preferences via No-Regret Learning."
  10. Lin, Tzu-Han, et al. "DogeRM: Equipping Reward Models with Domain Knowledge through Model Merging."
  11. Yang, Rui, et al. "Regularizing Hidden States Enables Learning Generalizable Reward Model for LLMs."
  12. Park, Junsoo, et al. "OffsetBias: Leveraging Debiased Data for Tuning Evaluators."
  13. Meng, Yu, et al. "SimPO: Simple Preference Optimization with a Reference-Free Reward."
  14. Song, Yifan, et al. "The Good, The Bad, and The Greedy: Evaluation of LLMs Should Not Ignore Non-Determinism."
  15. Zhou, Wenxuan, et al. "WPO: Enhancing RLHF with Weighted Preference Optimization."
  16. Xia, Han, et al. "Inverse-Q*: Token Level Reinforcement Learning for Aligning Large Language Models Without Preference Data."
  17. Wang, Haoyu, et al. "Probing the Safety Response Boundary of Large Language Models via Unsafe Decoding Path Generation."
  18. He, Yifei, et al. "Semi-Supervised Reward Modeling via Iterative Self-Training."
  19. Tao, Leitian, et al. "Your Weak LLM is Secretly a Strong Teacher for Alignment."
  20. Son, Guijin, et al. "LLM-as-a-Judge & Reward Model: What They Can and Cannot Do."
  21. Dorka, Nicolai, et al. "Quantile Regression for Distributional Reward Models in RLHF."
  22. Gao, Zhaolin, et al. "REBEL: Reinforcement Learning via Regressing Relative Rewards."

Contributors

Thanks to all of our contributors to date (Made with contrib.rocks).

Citation

If you find the content of this repo useful in your work, please consider citing:

@article{dong2024rlhf,
  title={RLHF Workflow: From Reward Modeling to Online RLHF},
  author={Dong, Hanze and Xiong, Wei and Pang, Bo and Wang, Haoxiang and Zhao, Han and Zhou, Yingbo and Jiang, Nan and Sahoo, Doyen and Xiong, Caiming and Zhang, Tong},
  journal={arXiv preprint arXiv:2405.07863},
  year={2024}
}

@inproceedings{ArmoRM,
  title={Interpretable Preferences via Multi-Objective Reward Modeling and Mixture-of-Experts},
  author={Haoxiang Wang and Wei Xiong and Tengyang Xie and Han Zhao and Tong Zhang},
  booktitle={The 2024 Conference on Empirical Methods in Natural Language Processing},
  year={2024}
}

@inproceedings{xiong2024iterative,
  title={Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-Constraint},
  author={Wei Xiong and Hanze Dong and Chenlu Ye and Ziqi Wang and Han Zhong and Heng Ji and Nan Jiang and Tong Zhang},
  booktitle={ICML},
  year={2024}
}