XTuner Release V0.1.12
What's Changed
- Set dev version by @LZHgrla in #281
- [Fix] Update LLaVA results by @LZHgrla in #283
- [Fix] Update LLaVA results (based on VLMEvalKit) by @LZHgrla in #285
- [Fix] Fix filter bug for test data by @LZHgrla in #293
- [Fix] Fix `ConcatDataset` by @LZHgrla in #298
- [Improve] Redesign the `prompt_template` by @LZHgrla in #294
- [Fix] Fix errors about `stop_words` by @LZHgrla in #313
- [Fix] Fix Mixtral LoRA setting by @LZHgrla in #312
- [Feature] Support DeepSeek-MoE by @LZHgrla in #311
- [Fix] Set `torch.optim.AdamW` as the default optimizer by @LZHgrla in #318
- [Fix] Fix `pth_to_hf` for LLaVA model by @LZHgrla in #316
- [Improve] Add `demo_data` examples by @LZHgrla in #278
- [Feature] Support InternLM2 by @LZHgrla in #321
- [Fix] Fix resuming of the random seed by @LZHgrla in #309
- [Feature] Accelerate `xtuner xxx` by @pppppM in #307
- [Fix] Fix InternLM2 URL by @LZHgrla in #325
- [Fix] Limit the Python version to `>=3.8, <3.11` by @LZHgrla in #327
- [Fix] Add `trust_remote_code=True` for AutoModel by @LZHgrla in #328
- [Docs] Improve README by @LZHgrla in #326
- Bump version to v0.1.12 by @pppppM in #323
Full Changelog: v0.1.11...v0.1.12