Actions: NVIDIA/TensorRT-LLM

Blossom-CI

307 workflow runs

trtllm-serve does not support dynamic batching like tritonserver
Blossom-CI #207: Issue comment #2549 (comment) created by nv-guomingz
December 18, 2024 06:48 4s
Error in building llama with eagle for speculative decoding
Blossom-CI #206: Issue comment #2588 (comment) created by nv-guomingz
December 18, 2024 06:46 5s
TRT-LLM Support for Llama3.2
Blossom-CI #205: Issue comment #2320 (comment) created by JoJoLev
December 18, 2024 05:50 5s
Which version of InternVL does TensorRT-llm 1.5 support?
Blossom-CI #204: Issue comment #2578 (comment) created by spacegrass
December 18, 2024 05:45 4s
Error in building llama with eagle for speculative decoding
Blossom-CI #203: Issue comment #2588 (comment) created by JoJoLev
December 18, 2024 05:45 4s
Error in building llama with eagle for speculative decoding
Blossom-CI #202: Issue comment #2588 (comment) created by nv-guomingz
December 18, 2024 05:35 4s
OOM when building engine for meta-llama/Llama-3.1-405B-FP8 on 8 x A100 80G
Blossom-CI #201: Issue comment #2586 (comment) created by JoJoLev
December 18, 2024 04:45 4s
OOM when building engine for meta-llama/Llama-3.1-405B-FP8 on 8 x A100 80G
Blossom-CI #200: Issue comment #2586 (comment) created by HeyangQin
December 18, 2024 04:18 5s
llava-onevision convert bug
Blossom-CI #199: Issue comment #2585 (comment) created by liyi-xia
December 18, 2024 02:53 4s
llava-onevision convert bug
Blossom-CI #198: Issue comment #2585 (comment) created by liyi-xia
December 18, 2024 02:35 5s
What does "weights_scaling_factor_2" mean in safetensor results of awq_w4a8
Blossom-CI #197: Issue comment #2561 (comment) created by Barry-Delaney
December 18, 2024 02:26 5s
InternVL deploy
Blossom-CI #196: Issue comment #2565 (comment) created by nv-guomingz
December 18, 2024 02:13 4s
Support for LLaMa3.3
Blossom-CI #195: Issue comment #2567 (comment) created by nv-guomingz
December 18, 2024 02:04 4s
Code for libtensorrt_llm_batch_manager_static.a
Blossom-CI #194: Issue comment #2569 (comment) created by nv-guomingz
December 18, 2024 02:03 4s
TRT-LLM fails on GH200 node
Blossom-CI #193: Issue comment #2571 (comment) created by nv-guomingz
December 18, 2024 02:01 4s
What does "weights_scaling_factor_2" mean in safetensor results of awq_w4a8
Blossom-CI #192: Issue comment #2561 (comment) created by gujiewen
December 18, 2024 02:00 3s
tensorrtllm_backend Support for InternVL2
Blossom-CI #191: Issue comment #2568 (comment) created by nv-guomingz
December 18, 2024 02:00 4s
Which version of InternVL does TensorRT-llm 1.5 support?
Blossom-CI #190: Issue comment #2578 (comment) created by nv-guomingz
December 18, 2024 01:56 4s
Blossom-CI
Blossom-CI #189: created by liyi-xia
December 18, 2024 01:54 4s
llava-onevision convert bug
Blossom-CI #188: Issue comment #2585 (comment) created by nv-guomingz
December 18, 2024 01:32 4s
[InternVL 2.0] config of eof is INVALID, output token length is max_tokens
Blossom-CI #187: Issue comment #2580 (comment) created by nv-guomingz
December 18, 2024 01:23 4s
Does OpenAIServer currently support only LLMs and not VLMs?
Blossom-CI #186: Issue comment #2581 (comment) created by nv-guomingz
December 18, 2024 01:19 5s
OOM when building engine for meta-llama/Llama-3.1-405B-FP8 on 8 x A100 80G
Blossom-CI #185: Issue comment #2586 (comment) created by nv-guomingz
December 18, 2024 01:18 4s
Is it possible to load a quantized model from Hugging Face?
Blossom-CI #184: Issue comment #2458 (comment) created by pei0033
December 18, 2024 00:33 4s
support Qwen2-VL
Blossom-CI #183: Issue comment #2183 (comment) created by zhaocc1106
December 17, 2024 15:15 5s