Actions: NVIDIA/TransformerEngine

Build

3,651 workflow runs
[TE/JAX] XLA FFI calls for layer norm and RMS norm
Build #6077: Pull request #1290 synchronize by pre-commit-ci bot
October 31, 2024 16:40 · 1h 9m 48s · huanghua1994:xla-custom-call-ffi

[TE/JAX] XLA FFI calls for layer norm and RMS norm
Build #6076: Pull request #1290 synchronize by huanghua1994
October 31, 2024 16:37 · 1h 10m 23s · huanghua1994:xla-custom-call-ffi

[TE/JAX] Disable FusedAttn with FFI by default
Build #6074: Pull request #1298 synchronize by phu0ngng
October 31, 2024 14:32 · 1h 11m 47s · phu0ngng:fused_attn_ffi

Support using fp16 master weights and fp16/fp8 optimizer states in FusedAdam
Build #6072: Pull request #1078 synchronize by pre-commit-ci bot
October 31, 2024 09:21 · 1h 10m 56s · kunlunl:mx_fp16

Support using fp16 master weights and fp16/fp8 optimizer states in FusedAdam
Build #6071: Pull request #1078 synchronize by kunlunl
October 31, 2024 09:20 · Action required · kunlunl:mx_fp16

Support using fp16 master weights and fp16/fp8 optimizer states in FusedAdam
Build #6070: Pull request #1078 synchronize by pre-commit-ci bot
October 31, 2024 09:13 · 1h 9m 50s · kunlunl:mx_fp16

Support using fp16 master weights and fp16/fp8 optimizer states in FusedAdam
Build #6069: Pull request #1078 synchronize by kunlunl
October 31, 2024 09:12 · Action required · kunlunl:mx_fp16

Support using fp16 master weights and fp16/fp8 optimizer states in FusedAdam
Build #6068: Pull request #1078 synchronize by pre-commit-ci bot
October 31, 2024 09:02 · 1h 10m 37s · kunlunl:mx_fp16

Support using fp16 master weights and fp16/fp8 optimizer states in FusedAdam
Build #6067: Pull request #1078 synchronize by kunlunl
October 31, 2024 09:02 · Action required · kunlunl:mx_fp16

Support using fp16 master weights and fp16/fp8 optimizer states in FusedAdam
Build #6066: Pull request #1078 synchronize by kunlunl
October 31, 2024 06:29 · Action required · kunlunl:mx_fp16

Update cudnn-frontend to 1.8.0
Build #6065: Pull request #1302 opened by cyanguwa
October 30, 2024 21:52 · 1h 8m 46s · cyanguwa:update_FE_to_1.8

[JAX] Expose cp params to jax DPA api
Build #6064: Pull request #1292 synchronize by mgoldfarb-nvidia
October 30, 2024 20:43 · 1h 13m 17s · kocchop:faysal/expose-cp-to-jax-dpa

[JAX] Expose cp params to jax DPA api
Build #6063: Pull request #1292 synchronize by mgoldfarb-nvidia
October 30, 2024 20:35 · 1h 11m 13s · kocchop:faysal/expose-cp-to-jax-dpa

[TE/JAX] Custom call with FFI - lowering all attributes with bind all
Build #6062: Pull request #1289 synchronize by pre-commit-ci bot
October 30, 2024 18:12 · 1h 18m 55s · phu0ngng:ffi_dict_attrs

[TE/JAX] Custom call with FFI - lowering all attributes with bind all
Build #6061: Pull request #1289 synchronize by phu0ngng
October 30, 2024 18:12 · 1h 9m 40s · phu0ngng:ffi_dict_attrs

[TE/JAX] Custom call with FFI - lowering all attributes with bind all
Build #6060: Pull request #1289 synchronize by phu0ngng
October 30, 2024 17:54 · 1h 8m 28s · phu0ngng:ffi_dict_attrs

[TE/JAX] Custom call with FFI - lowering all attributes with bind all
Build #6059: Pull request #1289 synchronize by phu0ngng
October 30, 2024 17:43 · 1h 12m 12s · phu0ngng:ffi_dict_attrs

[PyTorch] Make FP8 MHA work with RoPE when CP is on
Build #6058: Pull request #1297 synchronize by pre-commit-ci bot
October 30, 2024 09:53 · 1h 10m 1s · yaox12:xiny/fp8_mha_with_rope_cp

[PyTorch] Make FP8 MHA work with RoPE when CP is on
Build #6057: Pull request #1297 synchronize by yaox12
October 30, 2024 09:52 · 1h 10m 5s · yaox12:xiny/fp8_mha_with_rope_cp

Support using fp16 master weights and fp16/fp8 optimizer states in FusedAdam
Build #6056: Pull request #1078 synchronize by pre-commit-ci bot
October 30, 2024 09:44 · 1h 12m 58s · kunlunl:mx_fp16