Releases · InternLM/xtuner
XTuner Release V0.1.13
What's Changed
- set dev version by @LZHgrla in #329
- [Docs] Add LLaVA-InternLM2 results by @LZHgrla in #332
- Update internlm2_chat template by @RangiLyu in #339
- [Fix] Fix examples demo_data configs by @LZHgrla in #334
- bump version to v0.1.13 by @LZHgrla in #340
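The internlm2_chat template update in #339 targets InternLM2's `<|im_start|>` / `<|im_end|>` chat format. Below is a minimal, illustrative sketch of a template of that shape and how it can be stitched into a single-turn prompt; the dictionary keys and the `build_prompt` helper are assumptions for illustration, not XTuner's actual `PROMPT_TEMPLATE` definition.

```python
# Illustrative only: a template in the spirit of internlm2_chat.
# The key names below are assumptions, not XTuner's actual fields.
internlm2_chat = dict(
    SYSTEM="<|im_start|>system\n{system}<|im_end|>\n",
    INSTRUCTION="<|im_start|>user\n{input}<|im_end|>\n<|im_start|>assistant\n",
    SUFFIX="<|im_end|>",
)

def build_prompt(user_input: str, system: str = "") -> str:
    """Assemble a single-turn prompt from the template pieces."""
    prompt = ""
    if system:
        prompt += internlm2_chat["SYSTEM"].format(system=system)
    prompt += internlm2_chat["INSTRUCTION"].format(input=user_input)
    return prompt

print(build_prompt("Give me a three-day travel plan for Shanghai."))
```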
New Contributors
Full Changelog: v0.1.12...v0.1.13
XTuner Release V0.1.12
What's Changed
- set dev version by @LZHgrla in #281
- [Fix] Update LLaVA results by @LZHgrla in #283
- [Fix] Update LLaVA results (based on VLMEvalKit) by @LZHgrla in #285
- [Fix] Fix filter bug for test data by @LZHgrla in #293
- [Fix] Fix `ConcatDataset` by @LZHgrla in #298
- [Improve] Redesign the `prompt_template` by @LZHgrla in #294
- [Fix] Fix errors about `stop_words` by @LZHgrla in #313
- [Fix] Fix Mixtral LoRA setting by @LZHgrla in #312
- [Feature] Support DeepSeek-MoE by @LZHgrla in #311
- [Fix] Set `torch.optim.AdamW` as the default optimizer by @LZHgrla in #318
- [Fix] Fix `pth_to_hf` for LLaVA model by @LZHgrla in #316
- [Improve] Add `demo_data` examples by @LZHgrla in #278
- [Feature] Support InternLM2 by @LZHgrla in #321
- [Fix] Fix the resume of seed by @LZHgrla in #309
- [Feature] Accelerate `xtuner xxx` by @pppppM in #307
- [Fix] Fix InternLM2 URL by @LZHgrla in #325
- [Fix] Limit the Python version to `>=3.8, <3.11` by @LZHgrla in #327
- [Fix] Add `trust_remote_code=True` for AutoModel by @LZHgrla in #328
- [Docs] Improve README by @LZHgrla in #326
- bump version to v0.1.12 by @pppppM in #323
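Regarding the `trust_remote_code=True` change in #328: checkpoints such as InternLM2 ship custom modeling and tokenization code on the Hugging Face Hub, so the `Auto*` classes must be told to trust it. A minimal sketch; the checkpoint name is only an example.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# trust_remote_code=True lets transformers execute the custom modeling/
# tokenization code bundled with the checkpoint (needed for e.g. InternLM2).
model_name = "internlm/internlm2-chat-7b"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    trust_remote_code=True,
    torch_dtype="auto",  # use the dtype stored in the checkpoint config
)
```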
Full Changelog: v0.1.11...v0.1.12
XTuner Release V0.1.11
What's Changed
- [Docs] Update Mixtral 8x7b docs by @LZHgrla in #265
- [Bug] Fix bugs when chatting with `--lagent` by @ooooo-create in #269
- [Feature] Support setting the random seed for `xtuner train` by @LZHgrla in #272
- [Fix] Update Mixtral-8x7b repo_id; Add mixtral template by @LZHgrla in #275
- [Feature] Add Qwen 72b config by @xiaohangguo in #254
- [Improve] Add notes for requirements; Improve badges by @LZHgrla in #277
- [Feature] Support LLaVA by @LZHgrla in #196
- [Feature] Add `warmup` for all configs by @LZHgrla in #274
- bump version to v0.1.11 by @LZHgrla in #280
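For the `warmup` addition in #274: XTuner configs follow MMEngine conventions, where a warmup stage is an extra `LinearLR` entry placed before the main scheduler in `param_scheduler`. The sketch below shows the general shape with illustrative values; the exact fields in XTuner's shipped configs may differ.

```python
from mmengine.optim import LinearLR

max_epochs = 3        # illustrative
warmup_ratio = 0.03   # illustrative: warm up for ~3% of training

# A warmup stage is a scheduler entry that runs before the main one
# and ramps the learning rate up from start_factor * lr to lr.
warmup_scheduler = dict(
    type=LinearLR,
    start_factor=1e-5,
    by_epoch=True,
    begin=0,
    end=warmup_ratio * max_epochs,
    convert_to_iter_based=True,
)
# param_scheduler = [warmup_scheduler, <main scheduler, e.g. cosine annealing>]
```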
New Contributors
- @ooooo-create made their first contribution in #269
Full Changelog: v0.1.10...v0.1.11
XTuner Release V0.1.10
What's Changed
- [Feature] Support full fine-tuning of large language models such as Llama2 70B by @HIT-cwh in #231
- [Feature] Support processing internlm-style datasets by @HIT-cwh in #232
- [Fix] Fix bugs of llama dispatch by @LZHgrla in #229
- [Bug] Resolve the bug introduced by higher versions of DeepSpeed. by @HIT-cwh in #240
- [Doc] Add internlm dataset doc by @HIT-cwh in #242
- add `wizardcoder` template by @xiaohangguo in #243
- [Feature] Filter negative labels by @xiaohangguo in #244
- [Bug] Support auto detect torch_dtype in chat.py by @HIT-cwh in #250
- [Feature] Add Qwen 1.8b config by @xiaohangguo in #252
- [Feature] Add DeepSeek-Coder config by @xiaohangguo in #253
- [Bug] Fix bugs when grad clip == 0 by @HIT-cwh in #262
- [Feature] Support Mixtral 8x7b by @pppppM in #263
- bump version to v0.1.10 by @pppppM in #264
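On the torch_dtype auto-detection in chat.py (#250): the usual idea is to prefer bf16 on GPUs that support it, fall back to fp16 otherwise, and stay in fp32 on CPU instead of hard-coding a dtype. A minimal sketch of that logic, not XTuner's actual implementation:

```python
import torch

def auto_torch_dtype() -> torch.dtype:
    """Pick a sensible dtype for inference based on the available hardware."""
    if torch.cuda.is_available():
        # bf16 where the GPU supports it, otherwise fp16.
        return torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16
    return torch.float32  # CPU: stick to full precision

print(auto_torch_dtype())
```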
New Contributors
- @xiaohangguo made their first contribution in #243
- @pppppM made their first contribution in #263
Full Changelog: v0.1.9...v0.1.10
XTuner Release V0.1.9
XTuner Release V0.1.8
What's Changed
- [Feature] Add mistral pretrain by @DumoeDss in #204
- [Feature] add yi-6b and yi-34b sft script by @amulil in #216
- [Docs] Add Introduction docs for config by @LZHgrla in #212
- [Fix] Fix MMLU evaluation by @LZHgrla in #208
- [Feature] Support ChatGLM3-6B by @LZHgrla in #222
- [Fix] Set default `eta_min` to 0 by @LZHgrla in #223
- bump version to 0.1.8 by @LZHgrla in #224
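On the `eta_min` default in #223: for cosine annealing, `eta_min` is the floor the learning rate decays toward, so defaulting it to 0 lets the schedule anneal all the way down instead of plateauing. In an MMEngine-style scheduler entry it is a single field; the values below are illustrative.

```python
from mmengine.optim import CosineAnnealingLR

# eta_min=0.0 means the learning rate decays to zero by the end of training.
param_scheduler = dict(
    type=CosineAnnealingLR,
    eta_min=0.0,
    by_epoch=True,
    begin=0,
    end=3,  # illustrative number of epochs
)
```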
New Contributors
Full Changelog: v0.1.7...v0.1.8
XTuner Release V0.1.7
What's Changed
- add zephyr config by @maxchiron in #188
- [Feature] Support "auto" fp16/bf16 for DeepSpeed by @LZHgrla in #195
- [Fix] Temporarily limit the version of `transformers` by @LZHgrla in #200
- bump version to 0.1.7 by @LZHgrla in #201
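On the "auto" fp16/bf16 support for DeepSpeed in #195: a DeepSpeed config can set `"enabled": "auto"` in its `fp16`/`bf16` sections and let the surrounding framework resolve the actual precision at launch time instead of hard-coding it. A sketch of such a fragment, written here as a Python dict; the exact fields XTuner ships may differ.

```python
# Fragment of a DeepSpeed config using "auto" mixed-precision flags.
# "auto" defers the fp16-vs-bf16 choice to the framework that fills in
# the config (e.g. based on the dtype requested by the training setup).
ds_config = {
    "fp16": {"enabled": "auto"},
    "bf16": {"enabled": "auto"},
    "zero_optimization": {"stage": 2},
}
```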
New Contributors
- @maxchiron made their first contribution in #188
Full Changelog: v0.1.6...v0.1.7
XTuner Release V0.1.6
What's Changed
Full Changelog: v0.1.5...v0.1.6
XTuner Release V0.1.5
What's Changed
- [Fix] Rename internlm-chat-20b by @LZHgrla in #131
- [Fix] Fix CPU OOM during the merge step by @LZHgrla in #133
- [Fix] Add `--offload-folder` for merge and chat by @LZHgrla in #140
- [Feature] Support removing history in the chat script by @LZHgrla in #144
- [Docs] Add conda env creation docs by @KevinNuNu in #147
- [Fix] Fix activation checkpointing bug by @LZHgrla in #159
- [Refactor] Refactor the preprocess of dataset by @LZHgrla in #163
- [Feature] Support deepspeed for HF trainer by @LZHgrla in #164
- [Feature] Support fine-tuning on the MSAgent dataset by @LZHgrla in #156
- [Fix] Fix bugs on `traverse_dict` by @LZHgrla in #141
- [Doc] Update `chat.md` by @LZHgrla in #168
- bump version to 0.1.5 by @LZHgrla in #171
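On `--offload-folder` for merge and chat (#140) and the merge-step CPU OOM fix (#133): when a merged model does not fit in memory, Hugging Face's `from_pretrained` can spill weights to disk via its `offload_folder` argument (backed by accelerate), which is the kind of mechanism such a flag typically maps onto. A minimal sketch; the checkpoint name and path are only examples.

```python
from transformers import AutoModelForCausalLM

# device_map="auto" shards the model across available devices, and
# offload_folder gives accelerate a place on disk to park weights that
# do not fit in GPU/CPU memory.
model = AutoModelForCausalLM.from_pretrained(
    "internlm/internlm-chat-20b",   # example checkpoint
    device_map="auto",
    offload_folder="./offload",     # spill location for oversized weights
)
```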
New Contributors
- @KevinNuNu made their first contribution in #147
Full Changelog: v0.1.4...v0.1.5