## User story

As a user, I want to use `bfloat16` data types in `lava-dl`, with compatibility for PyTorch's `torch.amp` (automatic mixed precision), to accelerate inference and training while maintaining numerical accuracy. This would allow for efficient computation and memory savings, leveraging PyTorch's mixed-precision capabilities to optimize performance for large-scale spiking neural networks (SNNs).
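For reference, the workflow this request targets might look like the sketch below. This is not working `lava-dl` code today; it assumes the feature exists, i.e. that slayer's CUBA blocks run safely inside a `torch.autocast` region. The layer sizes, neuron parameters, input shape, and loss are illustrative. Note that, unlike `float16`, `bfloat16` keeps `float32`'s exponent range, so `torch.amp`'s `GradScaler` is typically unnecessary.

```python
import torch
import lava.lib.dl.slayer as slayer

# Illustrative two-layer CUBA-LIF network; sizes and neuron
# parameters are placeholders, not from the issue.
neuron_params = {'threshold': 1.25, 'current_decay': 0.25, 'voltage_decay': 0.03}
net = torch.nn.Sequential(
    slayer.block.cuba.Dense(neuron_params, 200, 256),
    slayer.block.cuba.Dense(neuron_params, 256, 10),
).to('cuda')

optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
spikes = torch.rand(8, 200, 100, device='cuda')  # (batch, neurons, time)

# Ops inside the autocast region run in bfloat16 where it is safe to
# do so; this is the requested behavior, not current lava-dl support.
with torch.autocast(device_type='cuda', dtype=torch.bfloat16):
    out = net(spikes)
    loss = out.float().mean()  # toy loss; reduce in float32

loss.backward()
optimizer.step()
```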
## Conditions of satisfaction
- The software should support `bfloat16` data types for all relevant operations, in both training and inference.
- Integration with `torch.amp` should be seamless, allowing users to switch between `float32` and `bfloat16`, or use automatic mixed precision, without significant code changes.
- The numerical stability and accuracy of `bfloat16` operations should be validated (see the sketch after this list), ensuring compatibility with PyTorch's mixed-precision training workflows.
- Documentation should include guidelines for using `bfloat16` with `torch.amp`, any limitations, and best practices for users.
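For the validation point above, a minimal parity check could compare a `float32` forward pass against the same pass under `bfloat16` autocast. The helper name and tolerances below are illustrative; spiking (binary) outputs may call for a looser, rate-based comparison instead of elementwise tolerances.

```python
import torch

def check_bf16_parity(net, x, rtol=1e-2, atol=1e-2):
    """Compare a float32 forward pass against the same pass under
    bfloat16 autocast. Tolerances are illustrative: bfloat16 keeps
    only ~8 mantissa bits, so bitwise agreement is not expected."""
    net.eval()
    with torch.no_grad():
        ref = net(x)  # float32 reference
        with torch.autocast(device_type='cuda', dtype=torch.bfloat16):
            out = net(x)  # mixed-precision run
    # Raises with a detailed mismatch report if the outputs diverge
    # beyond the given tolerances.
    torch.testing.assert_close(out.float(), ref.float(), rtol=rtol, atol=atol)
```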