Issues: NVIDIA/TransformerEngine

Issues list

When will FlashAttention 2.7.3+ be supported?
#1503 opened Feb 24, 2025 by leo-ztjht
MXFP8 backward
#1490 opened Feb 18, 2025 by hanhanpp
qwen1.5-0.5B failed to save model with huggingface transformers [bug]
#1482 opened Feb 13, 2025 by xinpengzz
Questions about accuracy alignment between BF16 and FP8 [question]
#1419 opened Jan 22, 2025 by zigzagcai
Import fails when working from a TE directory [good first issue]
#1400 opened Jan 10, 2025 by ksivaman
Installation stuck at 97%
#1399 opened Jan 10, 2025 by lorenzbaraldi
support new flash_attn_interface
#1392 opened Jan 7, 2025 by rgtjf
FP8 GEMM Kernels
#1391 opened Jan 6, 2025 by xiaoxiao26
What about GroupedLinear support?
#1386 opened Dec 26, 2024 by south-ocean
_NoopCatFunc in transformer layer [bug]
#1384 opened Dec 22, 2024 by robot-transformer