PyTorch/XLA 1.12 release
Cloud TPUs now support the PyTorch 1.12 release, via PyTorch/XLA integration. The release has daily automated testing for the supported models: Torchvision ResNet, FairSeq Transformer and RoBERTa, HuggingFace GLUE and LM, and Facebook Research DLRM.
On top of the underlying improvements and bug fixes in the PyTorch 1.12 release, this release adds several features and PyTorch/XLA-specific bug fixes.
New features
- FSDP
- PyTorch/XLA gradient checkpointing API (#3524)
- Optimization_barrier op, which enables gradient checkpointing (#3482)
- Ongoing LTC migration
- Device lock position optimization to speed up tracing (#3457)
- Experimental support for PJRT TPU client (#3550)
- Send/Recv collective communication op support (#3494)
- Performance profiling tool enhancements (#3498)
- Official TPU v4 Pod support (#3440)
- Roll lowering (#3505)
- Celu, celu_, selu, selu_ lowering (#3547)
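The gradient checkpointing feature above trades compute for memory: instead of storing every intermediate activation for the backward pass, only periodic checkpoints are kept, and the activations in between are recomputed when needed. The following is a minimal, framework-free sketch of that idea for a chain of scalar functions; the function names and structure here are illustrative only and do not reflect the actual torch_xla API.

```python
# Illustrative sketch of gradient checkpointing (the concept behind the new
# PyTorch/XLA checkpointing API, #3524). All names here are hypothetical.

def forward_all(funcs, x):
    """Plain forward pass: stores every intermediate activation."""
    acts = [x]
    for f, _ in funcs:
        acts.append(f(acts[-1]))
    return acts

def backward_checkpointed(funcs, x, every=2):
    """Backward pass that stores only every `every`-th activation and
    recomputes the rest, trading extra compute for lower memory."""
    # Forward: keep only checkpointed activations, keyed by position.
    checkpoints = {0: x}
    a = x
    for i, (f, _) in enumerate(funcs):
        a = f(a)
        if (i + 1) % every == 0:
            checkpoints[i + 1] = a
    # Backward: recompute each layer's input from its nearest checkpoint.
    grad = 1.0  # d(output)/d(output)
    for i in reversed(range(len(funcs))):
        start = (i // every) * every
        a = checkpoints[start]
        for f, _ in funcs[start:i]:   # recompute up to layer i's input
            a = f(a)
        _, df = funcs[i]
        grad *= df(a)                 # chain rule: multiply local derivative
    return grad

# Chain of scalar layers as (f, f') pairs: y = 3 * (2x + 1)**2.
funcs = [
    (lambda v: v * 2, lambda v: 2.0),
    (lambda v: v + 1, lambda v: 1.0),
    (lambda v: v * v, lambda v: 2 * v),
    (lambda v: v * 3, lambda v: 3.0),
]

grad = backward_checkpointed(funcs, 5.0)  # dy/dx = 12 * (2x + 1) = 132.0
```

With `every=2`, only half of the activations are held in memory at once; in PyTorch/XLA the optimization_barrier op (#3482) additionally prevents the XLA compiler from fusing away the recomputation.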
Bug fixes and improvements
- Fixed a view bug that created unnecessary IR graph nodes (#3411)