Actions: InternLM/lmdeploy

Workflow: lint

5,445 workflow runs

read norm_type
lint #6146: Commit c746fd3 pushed by grimoire
December 4, 2024 08:49 4m 5s refactor-vl
Refactor turbomind attention by precomputing rotary embed
lint #6145: Pull request #2801 synchronize by irexyc
December 4, 2024 08:37 4m 7s irexyc:rope
Refactor VLM modules
lint #6144: Pull request #2810 synchronize by grimoire
December 4, 2024 08:08 4m 11s refactor-vl
fix docs
lint #6143: Commit ae7015a pushed by grimoire
December 4, 2024 08:08 4m 30s refactor-vl
Update pytorch engine w8a8 supported model list
lint #6142: Pull request #2854 opened by AllentDan
December 4, 2024 07:00 3m 47s AllentDan:update-doc
update supported models
lint #6139: Pull request #2849 synchronize by lvhan028
December 4, 2024 05:48 3m 28s lvhan028:update-supported-models
Supports W8A8 quantization for more models (#2850)
lint #6138: Commit 69a4306 pushed by lvhan028
December 4, 2024 05:39 3m 57s main
[ascend]feat: support kv int8
lint #6137: Pull request #2736 synchronize by jinminxi104
December 4, 2024 04:50 3m 43s DeepLink-org:ascend_kv_int8
[ascend]feat: support kv int8
lint #6136: Pull request #2736 synchronize by jinminxi104
December 4, 2024 04:47 4m 8s DeepLink-org:ascend_kv_int8
Refactor VLM modules
lint #6135: Pull request #2810 synchronize by lvhan028
December 4, 2024 04:14 4m 0s refactor-vl
fix
lint #6134: Commit e977361 pushed by lvhan028
December 4, 2024 04:14 4m 4s refactor-vl
[ascend]feat: support kv int8
lint #6133: Pull request #2736 synchronize by Reinerzhou
December 4, 2024 01:47 4m 25s DeepLink-org:ascend_kv_int8
[ascend]feat: support kv int8
lint #6132: Pull request #2736 synchronize by jinminxi104
December 3, 2024 16:27 3m 28s DeepLink-org:ascend_kv_int8
Refactor turbomind attention by precomputing rotary embed
lint #6131: Pull request #2801 synchronize by irexyc
December 3, 2024 12:58 5m 1s irexyc:rope
[maca] add env to support different mm layout on maca. (#2835)
lint #6130: Commit cc8cfb0 pushed by lvhan028
December 3, 2024 08:45 4m 10s main
[dlinfer] change dlinfer kv_cache layout and ajust paged_prefill_atte…
lint #6129: Commit a6645b2 pushed by lvhan028
December 3, 2024 08:44 3m 54s main
Refactor VLM modules
lint #6128: Pull request #2810 synchronize by grimoire
December 3, 2024 08:00 4m 22s refactor-vl
fix
lint #6127: Commit f7c167e pushed by grimoire
December 3, 2024 08:00 3m 57s refactor-vl
Supports W8A8 quantization for more models
lint #6126: Pull request #2850 opened by AllentDan
December 3, 2024 07:34 3m 58s AllentDan:w8a8-llm
Refactor turbomind attention by precomputing rotary embed
lint #6125: Pull request #2801 synchronize by irexyc
December 3, 2024 07:30 3m 55s irexyc:rope
check whether backend_config is None or not before accessing its attr…
lint #6124: Commit efa8ac0 pushed by lvhan028
December 3, 2024 06:46 4m 11s main
fix the logic to verify whether AutoAWQ has been successfully install…
lint #6123: Commit 0dedd73 pushed by lvhan028
December 3, 2024 06:44 4m 18s main
update supported models
lint #6122: Pull request #2849 opened by lvhan028
December 3, 2024 06:43 4m 1s lvhan028:update-supported-models