2 files changed: +6 −3 lines changed

tests/integration/test_lists/test-db
@@ -35,7 +35,8 @@ l0_dgx_h100:
   - accuracy/test_disaggregated_serving.py::TestGemma3_1BInstruct::test_auto_dtype[False]
   - accuracy/test_disaggregated_serving.py::TestGemma3_1BInstruct::test_auto_dtype[True]
   - accuracy/test_disaggregated_serving.py::TestLlama3_1_8BInstruct::test_ngram
-  - accuracy/test_disaggregated_serving.py::TestLlama3_1_8BInstruct::test_eagle3
+  - accuracy/test_disaggregated_serving.py::TestLlama3_1_8BInstruct::test_eagle3[overlap_scheduler=False-eagle3_one_model=True]
+  - accuracy/test_disaggregated_serving.py::TestLlama3_1_8BInstruct::test_eagle3[overlap_scheduler=False-eagle3_one_model=False]
   - test_e2e.py::test_ptp_quickstart_advanced_bs1
 - condition:
     ranges:

@@ -34,14 +34,16 @@ l0_h100:
   - accuracy/test_llm_api_pytorch.py::TestLlama3_1_8BInstruct::test_fp8[fp8kv=False-attn_backend=TRTLLM-torch_compile=True]
   - accuracy/test_llm_api_pytorch.py::TestLlama3_1_8BInstruct::test_fp8[fp8kv=True-attn_backend=TRTLLM-torch_compile=False]
   - accuracy/test_llm_api_pytorch.py::TestLlama3_1_8BInstruct::test_fp8[fp8kv=True-attn_backend=TRTLLM-torch_compile=True]
-  - accuracy/test_llm_api_pytorch.py::TestLlama3_1_8BInstruct::test_eagle3
+  - accuracy/test_llm_api_pytorch.py::TestLlama3_1_8BInstruct::test_eagle3[overlap_scheduler=False-eagle3_one_model=False]
+  - accuracy/test_llm_api_pytorch.py::TestLlama3_1_8BInstruct::test_eagle3[overlap_scheduler=False-eagle3_one_model=True]
   - accuracy/test_llm_api_pytorch.py::TestDeepSeekV3Lite::test_fp8_block_scales[mtp=disable-fp8kv=True-attention_dp=False-cuda_graph=True-overlap_scheduler=True-torch_compile=True]
   - accuracy/test_llm_api_pytorch.py::TestDeepSeekV3Lite::test_fp8_block_scales[mtp=eagle-fp8kv=True-attention_dp=True-cuda_graph=True-overlap_scheduler=True-torch_compile=False]
   - accuracy/test_llm_api_pytorch.py::TestDeepSeekV3Lite::test_fp8_block_scales[mtp=vanilla-fp8kv=True-attention_dp=False-cuda_graph=True-overlap_scheduler=True-torch_compile=True]
   - accuracy/test_llm_api_pytorch.py::TestDeepSeekV3Lite::test_no_kv_cache_reuse[quant_dtype=fp8-mtp_nextn=2-fp8kv=True-attention_dp=True-cuda_graph=True-overlap_scheduler=True]
   - accuracy/test_llm_api_pytorch.py::TestQwen3_8B::test_fp8_block_scales[latency]
   - accuracy/test_llm_api_pytorch.py::TestQwen3_30B_A3B::test_fp8[latency]
-  - accuracy/test_llm_api_pytorch.py::TestQwen3_8B::test_eagle3
+  - accuracy/test_llm_api_pytorch.py::TestQwen3_8B::test_eagle3[overlap_scheduler=False-eagle3_one_model=False]
+  - accuracy/test_llm_api_pytorch.py::TestQwen3_8B::test_eagle3[overlap_scheduler=False-eagle3_one_model=True]
   - accuracy/test_llm_api_pytorch.py::TestDeepSeekV3Lite::test_fp8_block_scales_cuda_graph_padding[mtp_nextn=0]
   - accuracy/test_llm_api_pytorch.py::TestDeepSeekV3Lite::test_fp8_block_scales_cuda_graph_padding[mtp_nextn=2]
   - test_e2e.py::test_trtllm_bench_pytorch_backend_sanity[meta-llama/Llama-3.1-8B-llama-3.1-8b-False-False]
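
Note on the change: both hunks replace the bare node ID test_eagle3 with the fully parametrized IDs, pinning exactly which overlap_scheduler/eagle3_one_model variants each stage runs. A minimal sketch of how pytest derives IDs of this shape, assuming plain @pytest.mark.parametrize (the real suites may build the IDs through a project-specific helper):

import pytest

# Stacked parametrize marks: the decorator closest to the function
# contributes the first bracketed segment, and segments are joined
# with "-", yielding node IDs such as
# test_eagle3[overlap_scheduler=False-eagle3_one_model=True].
@pytest.mark.parametrize("eagle3_one_model", [True, False],
                         ids=lambda v: f"eagle3_one_model={v}")
@pytest.mark.parametrize("overlap_scheduler", [False],
                         ids=lambda v: f"overlap_scheduler={v}")
def test_eagle3(overlap_scheduler, eagle3_one_model):
    ...  # placeholder body; the real test runs the EAGLE3 accuracy check

Selecting a single variant then requires the bracketed suffix, e.g. pytest "accuracy/test_llm_api_pytorch.py::TestQwen3_8B::test_eagle3[overlap_scheduler=False-eagle3_one_model=True]", whereas a bare ::test_eagle3 selector collects every parametrization; presumably that is why the test-db entries now spell out the full IDs.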