q10/FBGEMM Actions: FBGEMM_GPU-CUDA CI

771 workflow runs

#771: Scheduled | February 15, 2025 12:58 | 3s | main
#770: Commit a4be13a pushed by q10 | Add cublas FP8 tensorwise GEMM in fbgemm quantize bench (#3693) | February 14, 2025 20:29 | 11s | main
#769: Scheduled | February 14, 2025 13:02 | 4s | main
#768: Commit 1b7789a pushed by q10 | Small modifications to quantize_bench script (#3684) | February 13, 2025 20:37 | 7s | main
#767: Scheduled | February 13, 2025 13:03 | 4s | main
#766: Commit f3424d6 pushed by q10 | refactor sweep_utils.py to test gemv kernel for different precisions … | February 12, 2025 21:23 | 8s | main
#765: Scheduled | February 12, 2025 13:02 | 4s | main
#764: Commit d8e07ce pushed by q10 | Support histogram_binning_calibration for export (#3657) | February 12, 2025 01:53 | 7s | main
#763: Scheduled | February 11, 2025 13:03 | 4s | main
#762: Commit fe80da4 pushed by q10 | Unifying TBE API using List (Backend) (#3563) | February 10, 2025 21:15 | 6s | main
#761: Scheduled | February 10, 2025 13:02 | 4s | main
#760: Scheduled | February 9, 2025 12:57 | 3s | main
#759: Scheduled | February 8, 2025 12:58 | 2s | main
#758: Commit a914871 pushed by q10 | Fix f8f8bf16_lite quantize op input in quantize_and_compute (#3667) | February 8, 2025 00:51 | 6s | main
#757: Scheduled | February 7, 2025 13:01 | 4s | main
#756: Scheduled | February 6, 2025 13:02 | 4s | main
#755: Commit 9a343a0 pushed by q10 | loose unit test atol rtol tolerance to eliminate ut flakiness (#3… | February 6, 2025 08:08 | 7s | main
#754: Scheduled | February 5, 2025 13:02 | 4s | main
#753: Scheduled | February 4, 2025 13:02 | 3s | main
#752: Commit bdcce9c pushed by q10 | fp8 rowwise regular gemm tuning for llm new shapes (#3654) | February 4, 2025 05:31 | 7s | main
#751: Commit 26eeef0 pushed by q10 | Fix zero_start_index_M argument for triton rowwise quantize (#3639) | February 3, 2025 19:01 | 8s | main
#750: Scheduled | February 3, 2025 13:02 | 4s | main
#749: Scheduled | February 2, 2025 12:57 | 3s | main
#748: Commit 0b2c24f pushed by q10 | Use custom copy of cutlass to enable FP8 Grouped Gemm. (#3649) | February 2, 2025 05:35 | 7s | main
#747: Scheduled | February 1, 2025 12:57 | 3s | main