As part of modernizing and simplifying the XLA:GPU runtime, we'll be working on a project to bring CommandBuffer support to XLA (mapped to CUDA graphs for the NVIDIA backend).
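For context, on the NVIDIA backend a command buffer corresponds to capturing a sequence of kernel launches into a CUDA graph once and replaying it cheaply, instead of re-issuing each launch eagerly. A minimal sketch of that capture/replay pattern using the CUDA runtime API (independent of XLA internals; the `AddOne` kernel is a hypothetical stand-in for work XLA would launch):

```cuda
#include <cuda_runtime.h>

// Hypothetical kernel standing in for work XLA would launch.
__global__ void AddOne(float* data, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) data[i] += 1.0f;
}

int main() {
  const int n = 1024;
  float* buf = nullptr;
  cudaMalloc(&buf, n * sizeof(float));

  cudaStream_t stream;
  cudaStreamCreate(&stream);

  // Capture a sequence of launches into a graph instead of
  // executing them eagerly on the stream.
  cudaGraph_t graph;
  cudaStreamBeginCapture(stream, cudaStreamCaptureModeGlobal);
  for (int step = 0; step < 4; ++step) {
    AddOne<<<n / 256, 256, 0, stream>>>(buf, n);
  }
  cudaStreamEndCapture(stream, &graph);

  // Instantiate once, then replay many times with a single
  // launch call per replay.
  cudaGraphExec_t exec;
  cudaGraphInstantiateWithFlags(&exec, graph, 0);
  for (int iter = 0; iter < 10; ++iter) {
    cudaGraphLaunch(exec, stream);
  }
  cudaStreamSynchronize(stream);

  cudaGraphExecDestroy(exec);
  cudaGraphDestroy(graph);
  cudaStreamDestroy(stream);
  cudaFree(buf);
  return 0;
}
```

The payoff is that per-launch CPU overhead is paid once at capture/instantiation time rather than on every replay, which is the main motivation for mapping CommandBuffers to CUDA graphs.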
Also as part of this project, we want to deprecate and remove the intermediate LMHLO step from the XLA:GPU compilation pipeline:

- Since we do not plan to use MLIR to lower from HLO to Thunks and CommandBuffers, this step is not required.
- We believe we'll simplify the overall stack by removing a non-load-bearing MLIR dialect from the compilation path.
- LMHLO was one of the first dialects designed with MLIR (while MLIR was very young), so it's not the best example of how to build a dialect.
Please let us know what you think and what concerns you have.
Link: XLA:GPU Runtime 2023-2024