This repository is a collection of awesome resources on Robust Finetuning, including papers, code, etc.
If you would like to contribute to our repository or have any questions/advice, see Contributing & Contact.
We list papers together with their implementation code where available.
- Robust fine-tuning of zero-shot models [CVPR 2022] [Code]
- Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution [ICLR 2022]
- Finetune Like You Pretrain: Improved Finetuning of Zero-Shot Vision Models [CVPR 2023] [Code]
- Masked Images Are Counterfactual Samples for Robust Fine-Tuning [CVPR 2023] [Code]
- Trainable Projected Gradient Method for Robust Fine-tuning [CVPR 2023] [Code]
- Fast Trainable Projection for Robust Fine-Tuning [NeurIPS 2023] [Code]
- Geodesic Multi-Modal Mixup for Robust Fine-Tuning [NeurIPS 2023] [Code]
- Towards Calibrated Robust Fine-Tuning of Vision-Language Models [NeurIPS 2023] [Code]
- Context-Aware Robust Fine-Tuning [IJCV 2023]
- AutoFT: Robust Fine-Tuning by Optimizing Hyperparameters on OOD Data [arXiv 2024]
- Anchor-based Robust Finetuning of Vision-Language Models [CVPR 2024]
- Lipsum-FT: Robust Fine-Tuning of Zero-Shot Models Using Random Text Guidance [ICLR 2024]
Feel free to contribute to our repository.
- If you would like to correct mistakes, please do so directly;
- If you would like to add/update papers, please follow the existing format;
- If you have any questions or advice, please contact us by email ([email protected]) or GitHub issues.
Thank you for your support!