Actions: NVIDIA/TensorRT-LLM

Showing runs from all workflows
1,997 workflow runs

Build Qwen2-72B-Instruct model by INT4-AWQ quantization failed
Blossom-CI #142: Issue comment #2445 (comment) created by basujindal
December 12, 2024 00:15 5s
Close inactive issues
Close inactive issues #386: Scheduled
December 12, 2024 00:14 14s main
What does "weights_scaling_factor_2" mean in safetensor results of awq_w4a8
auto-assign #28: Issue #2561 labeled by nv-guomingz
December 12, 2024 00:14 40s
What does "weights_scaling_factor_2" mean in safetensor results of awq_w4a8
auto-assign #27: Issue #2561 labeled by nv-guomingz
December 12, 2024 00:08 42s
What does "weights_scaling_factor_2" mean in safetensor results of awq_w4a8
auto-assign #26: Issue #2561 labeled by nv-guomingz
December 12, 2024 00:07 50s
Close inactive issues
Close inactive issues #385: Scheduled
December 11, 2024 23:03 15s main
Blossom-CI
Blossom-CI #141: created by michaelfeil
December 11, 2024 22:11 5s
Close inactive issues
Close inactive issues #384: Scheduled
December 11, 2024 22:03 21s main
Close inactive issues
Close inactive issues #383: Scheduled
December 11, 2024 21:03 15s main
Close inactive issues
Close inactive issues #382: Scheduled
December 11, 2024 20:04 15s main
[feature request] Can we add H200 in infer_cluster_key() method?
Blossom-CI #140: Issue comment #2552 (comment) created by renjie0
December 11, 2024 19:28 5s
Close inactive issues
Close inactive issues #381: Scheduled
December 11, 2024 19:02 17s main
Close inactive issues
Close inactive issues #380: Scheduled
December 11, 2024 18:04 17s main
Upgrade transformers to 4.45.2
Blossom-CI #139: Issue comment #2465 (comment) created by VALLIS-NERIA
December 11, 2024 15:42 6s
Issue with converting custom encoder model
Blossom-CI #138: Issue comment #2535 (comment) created by AvivSham
December 11, 2024 15:33 5s
Upgrade transformers to 4.45.2
Blossom-CI #137: Issue comment #2465 (comment) created by Xarbirus
December 11, 2024 15:33 5s
Blossom-CI
Blossom-CI #136: created by niukuo
December 11, 2024 14:33 4m 33s
InternVL deploy
Blossom-CI #135: Issue comment #2565 (comment) created by ChenJian7578
December 11, 2024 12:25 5s
trust_remote_code argument ignored in load_calib_dataset()
Blossom-CI #134: Issue comment #2537 (comment) created by hiroshi-matsuda-rit
December 11, 2024 11:54 6s
Blossom-CI
Blossom-CI #133: created by jayakommuru
December 11, 2024 10:44 5s
[feature request] qwen model's query logn-scaling attn
Blossom-CI #132: Issue comment #836 (comment) created by Njuapp
December 11, 2024 10:04 4s
[feature request] qwen model's query logn-scaling attn
Blossom-CI #131: Issue comment #836 (comment) created by Njuapp
December 11, 2024 10:04 5s
Wrong output on Llama 3.2 1B, but 3B ok
Blossom-CI #130: Issue comment #2492 (comment) created by jayakommuru
December 11, 2024 09:54 5s
How to use greedy search correctly
Blossom-CI #129: Issue comment #2557 (comment) created by fan-niu
December 11, 2024 09:35 5s
support Qwen2-VL
Blossom-CI #128: Issue comment #2183 (comment) created by fan-niu
December 11, 2024 09:18 5s