
PyTorch/XLA 1.12 release

@wonjoolee95 released this 29 Jun 01:22

Cloud TPUs now support the PyTorch 1.12 release, via PyTorch/XLA integration. The release has daily automated testing for the supported models: Torchvision ResNet, FairSeq Transformer and RoBERTa, HuggingFace GLUE and LM, and Facebook Research DLRM.

On top of the underlying improvements and bug fixes in the PyTorch 1.12 release, this release adds several features and PyTorch/XLA-specific bug fixes.

New features

  • FSDP
    • See the instructions and example code here
    • FSDP support for PyTorch/XLA (#3431); a usage sketch follows this list
    • bfloat16 and float16 support in FSDP (#3617)
  • PyTorch/XLA gradient checkpointing API (#3524); see the sketch after this list
  • optimization_barrier, which enables gradient checkpointing (#3482)
  • Ongoing LTC migration
  • Device lock position optimization to speed up tracing (#3457)
  • Experimental support for the PJRT TPU client (#3550); see the note after this list
  • Send/Recv collective communication (CC) op support (#3494)
  • Performance profiling tool enhancement (#3498)
  • Official support for TPU v4 Pods (#3440)
  • Roll lowering (#3505)
  • celu, celu_, selu, selu_ lowering (#3547)
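
For the new FSDP support, here is a minimal usage sketch, assuming the `XlaFullyShardedDataParallel` class added in #3431; the toy model, shapes, and single training step are illustrative only, not the library's canonical example:

```python
import torch
import torch_xla.core.xla_model as xm
from torch_xla.distributed.fsdp import XlaFullyShardedDataParallel as FSDP

device = xm.xla_device()
# Wrap the model so its parameters are sharded across XLA devices.
model = FSDP(torch.nn.Linear(128, 10).to(device))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

inputs = torch.randn(8, 128, device=device)
labels = torch.randint(0, 10, (8,), device=device)

loss = torch.nn.functional.cross_entropy(model(inputs), labels)
loss.backward()
# FSDP reduces gradients during backward, so step the optimizer directly
# rather than going through xm.optimizer_step().
optimizer.step()
xm.mark_step()  # cut the graph and execute the pending XLA computation
```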
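
The gradient checkpointing API and optimization_barrier can be sketched similarly, assuming the `torch_xla.utils.checkpoint.checkpoint` wrapper from #3524 and the in-place `xm.optimization_barrier_` helper; the toy layer is illustrative:

```python
import torch
import torch_xla.core.xla_model as xm
from torch_xla.utils.checkpoint import checkpoint

device = xm.xla_device()
layer = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU()).to(device)
x = torch.randn(4, 64, device=device, requires_grad=True)

# Activations inside `layer` are recomputed during backward instead of
# being kept alive, trading compute for memory.
y = checkpoint(layer, x)
y.sum().backward()

# The underlying primitive can also be applied directly; it keeps the XLA
# compiler from fusing or reordering computation across the barrier.
xm.optimization_barrier_([x])
xm.mark_step()
```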
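
The PJRT TPU client is experimental and opt-in; the sketch below assumes it is selected with the `PJRT_DEVICE` environment variable, as later releases document for PJRT:

```python
import os

# Opt in to the experimental PJRT runtime before torch_xla is imported.
os.environ["PJRT_DEVICE"] = "TPU"

import torch_xla.core.xla_model as xm

print(xm.xla_device())  # e.g. xla:0 when a TPU is attached
```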

Bug fixes and improvements

  • Fixed a view bug that created unnecessary IR graphs (#3411)