Unify Efficient Fine-Tuning of 100+ LLMs (Python; updated Jul 4, 2024)
This repo contains a list of channels and sources from which to learn about LLMs
A one-stop data processing system to make data higher-quality, juicier, and more digestible for (multimodal) LLMs! 🍎 🍋 🌽 ➡️ 🍸 🍹 🍷
InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output
Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets.
Cambrian-1 is a family of multimodal LLMs with a vision-centric design.
LLaRA: Large Language and Robotics Assistant
✨✨Latest Advances on Multimodal Large Language Models
A summary of Prompt & LLM papers, open-source data & models, and AIGC applications
awesome-LLM-controlled-constrained-generation
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
SCAR: Efficient Instruction-Tuning for Large Language Models via Style Consistency-Aware Response Ranking
Docker image for LLaVA: Large Language and Vision Assistant
Generative Representational Instruction Tuning
[ICML2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation
Official implementation of the ECCV 2024 paper "ShareGPT4V: Improving Large Multi-modal Models with Better Captions"
[CVPR 2024] "LL3DA: Visual Interactive Instruction Tuning for Omni-3D Understanding, Reasoning, and Planning"; an interactive Large Language 3D Assistant.
An interpretable KBQA system that operates at the natural language level with the help of LLMs
[ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning
[ECCV2024] Video Foundation Models & Data for Multimodal Understanding
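Many of the instruction-tuning repositories above consume training data as simple (instruction, input, output) records that get rendered into a single prompt/response string. As a minimal illustrative sketch, loosely modeled on the widely used Alpaca-style template (the exact template wording and field names here are assumptions, not taken from any specific repo above):

```python
# Minimal sketch: render one (instruction, input, output) record into a flat
# training string, loosely following an Alpaca-style prompt template.
# Template wording and field names are illustrative assumptions.

def format_example(record: dict) -> str:
    """Render one instruction-tuning record as a prompt plus target response."""
    if record.get("input"):
        prompt = (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context.\n\n"
            f"### Instruction:\n{record['instruction']}\n\n"
            f"### Input:\n{record['input']}\n\n"
            "### Response:\n"
        )
    else:
        prompt = (
            "Below is an instruction that describes a task.\n\n"
            f"### Instruction:\n{record['instruction']}\n\n"
            "### Response:\n"
        )
    return prompt + record["output"]

sample = {
    "instruction": "Translate to French.",
    "input": "Hello, world!",
    "output": "Bonjour, le monde !",
}
print(format_example(sample))
```

In practice, frameworks tokenize such strings and often mask the prompt portion out of the loss so that only the response tokens are trained on.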