Releases: pytorch/xla

PyTorch/XLA 1.10 release

25 Oct 17:10
8fb44f9

Cloud TPUs now support the PyTorch 1.10 release, via PyTorch/XLA integration. The release has daily automated testing for the supported models: Torchvision ResNet, FairSeq Transformer and RoBERTa, HuggingFace GLUE and LM, and Facebook Research DLRM.

On top of the underlying improvements and bug fixes in PyTorch's 1.10 release, this release adds several PyTorch/XLA-specific bug fixes.

PyTorch/XLA 1.8 release

04 Mar 23:45
f2f8f44

Summary

Cloud TPUs now support the PyTorch 1.8 release, via PyTorch/XLA integration. The release has daily automated testing for the supported models: Torchvision ResNet, FairSeq Transformer and RoBERTa, HuggingFace GLUE and LM, and Facebook Research DLRM.

This release focused on making PyTorch XLA easier to use and debug. See below for a list of new features.

New Features

  • Enhanced usability:
    • Profiler tools to help you pinpoint where you can reduce memory usage or improve the speed of your TPU models. The tools are ready to use; see our main README for upcoming tutorials.
    • Simpler error messages (#2771)
    • Less log spam when using TPU Pods (#2662)
    • Ability to view images in TensorBoard (#2679)
  • TriangularSolve (#2498) (example)
  • New ops supported by PyTorch/XLA

Bug Fixes

  • Crash when using dynamic shapes (#2602)
  • all_to_all crashing on TPU pods (#2601)
  • SiLU fix (#2721)
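The SiLU fix concerns the lowering of the SiLU activation, silu(x) = x * sigmoid(x). A minimal CPU sketch of the reference behavior via the standard torch.nn.functional API (on an XLA device the same call goes through the lowering this fix addresses):

```python
import torch
import torch.nn.functional as F

x = torch.linspace(-2.0, 2.0, steps=5)
y = F.silu(x)  # silu(x) = x * sigmoid(x)

# The definition holds elementwise; silu(0) is exactly 0.
reference = x * torch.sigmoid(x)
```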

PyTorch/XLA 1.6 Release (GA)

19 Aug 21:19
9703109

Highlights

Cloud TPUs now support the PyTorch 1.6 release, via PyTorch/XLA integration. With this release we mark our general availability (GA); models such as ResNet, FairSeq Transformer and RoBERTa, and HuggingFace GLUE task models have been rigorously tested and optimized.

In addition, with our PyTorch/XLA 1.6 release, you no longer need to run the env-setup.py script on Colab/Kaggle, as those environments are now compatible with native torch wheels. See here for an example of the new Colab/Kaggle install step. You can still use that script if you would like to run our latest unstable releases.

New Features

  • XLA RNG state checkpointing/loading (#2096)
  • Device Memory XRT API (#2295)
  • [Kaggle/Colab] Small host VM memory environment utility (#2025)
  • [Advanced User] XLA Builder Support (#2125)
  • New ops supported on PyTorch/XLA
  • Dynamic shape support on XLA:CPU and XLA:GPU (experimental)
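The RNG state checkpointing feature lets you save and restore the random-number-generator state on the XLA device, mirroring PyTorch's CPU-side torch.get_rng_state / torch.set_rng_state. A minimal CPU sketch of the save/replay pattern (torch_xla exposes analogous entry points on torch_xla.core.xla_model; the exact names are an assumption here):

```python
import torch

# Save the CPU RNG state before drawing random numbers.
state = torch.get_rng_state()
first = torch.rand(3)

# Restore the saved state; the same draws are replayed exactly.
torch.set_rng_state(state)
replayed = torch.rand(3)
```

This is the pattern that makes training checkpoints fully reproducible: persisting the RNG state alongside model and optimizer state means a resumed run sees the same random draws (e.g. dropout masks) it would have seen uninterrupted.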

Bug Fixes

  • RNG Fix (proper randomness with bernoulli and dropout) (#1932)
  • Manual all-reduce in backward pass (#2325)

PyTorch/XLA 1.5 release

21 Apr 15:25
60c4f79

Cloud TPUs and Cloud TPU Pods now support PyTorch 1.5 via the PyTorch/XLA integration. This integration aims to let PyTorch users do everything on Cloud TPUs that they can do on GPUs, while minimizing changes to the user experience. You can try out PyTorch on an 8-core Cloud TPU device for free via Google Colab, and you can use PyTorch on Cloud TPUs at a much larger scale on Google Cloud (all the way up to full Cloud TPU Pods).

Three PyTorch models have been added to our list of supported models, which are rigorously and continuously tested:

  • ResNet-50
  • Fairseq Transformer
  • Fairseq RoBERTa

Additional notes:

  • New operators added
  • Exposed APIs to enable different types of cross-replica reduce operations using the TPU interconnect link (#1709)
  • Exposed API to perform rendezvous operations among the different replica processes (#1669)
  • Added support for reading/writing GCS files (#1230)
  • Added support to read TFRecords (#1220)
  • Miscellaneous bug fixes

PyTorch/XLA 1.9 release

15 Jun 23:17
bcc59d6

Cloud TPUs now support the PyTorch 1.9 release, via PyTorch/XLA integration. The release has daily automated testing for the supported models: Torchvision ResNet, FairSeq Transformer and RoBERTa, HuggingFace GLUE and LM, and Facebook Research DLRM.

On top of the underlying improvements and bug fixes in PyTorch's 1.9 release, this release adds several PyTorch/XLA-specific bug fixes.

PyTorch/XLA 1.8.1 release

21 Apr 17:38
ef3cad0

Cloud TPUs now support the PyTorch 1.8.1 release, via PyTorch/XLA integration. The release has daily automated testing for the supported models: Torchvision ResNet, FairSeq Transformer and RoBERTa, HuggingFace GLUE and LM, and Facebook Research DLRM.

On top of the underlying bug fixes in PyTorch's 1.8.1 release, this release adds a few PyTorch/XLA-side fixes around the XRT server and TPU Pod training.

PyTorch/XLA 1.7 release

28 Oct 22:40
7231272

Summary

Cloud TPUs now support the PyTorch 1.7 release, via PyTorch/XLA integration. The release has daily automated testing for the supported models: Torchvision ResNet, FairSeq Transformer and RoBERTa, HuggingFace GLUE and LM, and Facebook Research DLRM.

New Features

  • TriangularSolve (#2498) (example)
  • New ops supported by PyTorch/XLA
  • Documentation on adding more supported ops (#2458)
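The TriangularSolve support means PyTorch's triangular solve now lowers to a native XLA op. A minimal CPU sketch of the operation itself, using the modern torch.linalg.solve_triangular spelling (the 1.7-era entry point was torch.triangular_solve):

```python
import torch

# Solve A @ x = b for an upper-triangular A by back substitution.
A = torch.tensor([[2.0, 1.0],
                  [0.0, 3.0]])
b = torch.tensor([[7.0],
                  [9.0]])
x = torch.linalg.solve_triangular(A, b, upper=True)
# Back substitution: 3 * x2 = 9 gives x2 = 3, then 2 * x1 + 1 * 3 = 7 gives x1 = 2.
```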

Bug Fixes

  • exponential_() returning 0 (#2562)
  • cross_entropy on inf input (#2553)