From 9cdce39e81a1f170993942d09729d8df5cfc1273 Mon Sep 17 00:00:00 2001
From: Lyu Han
Date: Tue, 16 Jul 2024 18:04:37 +0800
Subject: [PATCH] bump version to v0.5.1 (#2022)

* bump version to v0.5.1

* update readme

* update supported models

* update readme

* update supported models

* change to v0.5.1
---
 README.md                                        |  5 +-
 README_zh-CN.md                                  |  5 +-
 docs/en/get_started.md                           |  2 +-
 docs/en/multi_modal/cogvlm.md                    |  2 +-
 docs/en/supported_models/supported_models.md     | 55 ++++++++++---------
 docs/zh_cn/get_started.md                        |  2 +-
 docs/zh_cn/multi_modal/cogvlm.md                 |  2 +-
 .../supported_models/supported_models.md         | 55 ++++++++++---------
 lmdeploy/version.py                              |  2 +-
 9 files changed, 71 insertions(+), 59 deletions(-)

diff --git a/README.md b/README.md
index 70655c6a0..a9d8be302 100644
--- a/README.md
+++ b/README.md
@@ -26,6 +26,7 @@ ______________________________________________________________________
 2024
+- \[2024/07\] Support [InternVL2](https://huggingface.co/collections/OpenGVLab/internvl-20-667d3961ab5eb12c7ed1463e) full-series models, [InternLM-XComposer2.5](docs/en/multi_modal/xcomposer2d5.md) and [function call](docs/en/serving/api_server_tools.md) of InternLM2.5
 - \[2024/06\] PyTorch engine supports DeepSeek-V2 and several VLMs, such as CogVLM2, Mini-InternVL, LLaVA-Next
 - \[2024/05\] Balance the vision model across multiple GPUs when deploying VLMs
 - \[2024/05\] Support 4-bit weight-only quantization and inference on VLMs, such as InternVL v1.5, LLaVA, InternLM-XComposer2
@@ -138,9 +139,11 @@ For detailed inference benchmarks in more devices and more settings, please refe