
PyTorch/XLA 1.13 release

@vanbasten23 released this 29 Nov 01:07 · c62c5a5

Cloud TPUs now support the PyTorch 1.13 release, via PyTorch/XLA integration. The release has daily automated testing for the supported models: Torchvision ResNet, FairSeq Transformer and RoBERTa, HuggingFace GLUE and LM, and Facebook Research DLRM.

On top of the underlying improvements and bug fixes in PyTorch's 1.13 release, this release adds several features and PyTorch/XLA-specific bug fixes.

New Features

  • GPU enhancements
    • Add upsample_nearest/bilinear implementations for CPU and GPU (#3990)
    • Set three_fry as the default RNG for GPU (#3951)
  • FSDP enhancements
    • Allow FSDP wrapping and sharding over modules on CPU devices (#3992); see the sketch after this list
    • Support param sharding dim and pinning memory (#3830)
  • Lower torch::einsum using xla::einsum, which provides a significant speedup (#3843); a usage sketch follows this list
  • Support large models with >3200 graph inputs on TPU + PJRT (#3920)
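
A minimal sketch of the FSDP wrapping flow mentioned above, assuming the XlaFullyShardedDataParallel class exported by torch_xla.distributed.fsdp in this release; the toy model, shapes, and optimizer settings are illustrative only:

```python
import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm
from torch_xla.distributed.fsdp import XlaFullyShardedDataParallel as FSDP

device = xm.xla_device()

# Toy module; real workloads would wrap much larger models.
model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10))

# Move to the XLA device and wrap so parameters are sharded across devices.
# Per #3992 above, wrapping/sharding a module that is still on CPU is also
# supported in this release.
fsdp_model = FSDP(model.to(device))

optimizer = torch.optim.SGD(fsdp_model.parameters(), lr=1e-3)
inputs = torch.randn(8, 1024, device=device)
loss = fsdp_model(inputs).sum()
loss.backward()
optimizer.step()   # FSDP handles the gradient communication itself
xm.mark_step()     # execute the accumulated XLA graph
```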
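
And a brief illustration of the einsum lowering: nothing changes on the user side, a plain torch.einsum call on an XLA device now maps to a single xla::einsum op (the shapes below are arbitrary):

```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()
a = torch.randn(8, 16, device=device)
b = torch.randn(16, 32, device=device)

# With #3843 this is lowered to xla::einsum instead of being decomposed into
# reshape/transpose/matmul ops, which is where the speedup comes from.
c = torch.einsum('ij,jk->ik', a, b)
xm.mark_step()
```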

Experimental Features

  • PJRT experimental support on Cloud TPU v4
    • Check the instructions and example code here; a minimal sketch follows this list
  • DDP experimental support on Cloud TPU and GPU
    • Check the instructions, analysis, and example code here; a minimal sketch follows this list
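
A minimal sketch of opting into the experimental PJRT runtime on a Cloud TPU v4 host, assuming the PJRT_DEVICE environment variable described in the PJRT guide linked above:

```python
import os

# Select the experimental PJRT runtime instead of XRT (assumption: set before
# torch_xla is imported, as described in the PJRT guide).
os.environ.setdefault('PJRT_DEVICE', 'TPU')

import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()  # resolves to a TPU core through PJRT
x = torch.ones(2, 2, device=device)
print(x.device, x.cpu())
```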
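
And a sketch of the experimental DDP path on TPU, assuming the 'xla' process-group backend registered by torch_xla.distributed.xla_backend and the xmp.spawn launcher; the rendezvous settings and init arguments follow the DDP guide linked above and may need adjusting for your setup:

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

import torch_xla.core.xla_model as xm
import torch_xla.distributed.xla_backend  # registers the 'xla' dist backend
import torch_xla.distributed.xla_multiprocessing as xmp


def _mp_fn(index):
    # Rendezvous settings as in the DDP guide; adjust for your setup.
    os.environ.setdefault('MASTER_ADDR', 'localhost')
    os.environ.setdefault('MASTER_PORT', '12355')

    # One process per TPU core; the 'xla' backend handles the collectives.
    dist.init_process_group('xla',
                            rank=xm.get_ordinal(),
                            world_size=xm.xrt_world_size())

    device = xm.xla_device()
    model = nn.Linear(128, 10).to(device)
    # gradient_as_bucket_view=True is the setting recommended in the DDP guide.
    ddp_model = DDP(model, gradient_as_bucket_view=True)

    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=1e-3)
    inputs = torch.randn(32, 128, device=device)
    loss = ddp_model(inputs).sum()
    loss.backward()
    optimizer.step()
    xm.mark_step()


if __name__ == '__main__':
    xmp.spawn(_mp_fn, args=())
```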

Ongoing development

  • Ongoing Dynamic Shape implementation (POC completed)
  • Ongoing SPMD implementation (POC completed)
  • Ongoing LTC migration

Bug fixes and improvements

  • Make XLA_HLO_DEBUG populate the scope metadata (#3985)