Releases: pytorch/xla
PyTorch/XLA 1.10 release
Cloud TPUs now support the PyTorch 1.10 release, via PyTorch/XLA integration. The release has daily automated testing for the supported models: Torchvision ResNet, FairSeq Transformer and RoBERTa, HuggingFace GLUE and LM, and Facebook Research DLRM.
On top of the underlying improvements and bug fixes in PyTorch's 1.10 release, this release adds several PyTorch/XLA-specific features and bug fixes:
- Add support for reduce_scatter
- Introduce the AMP Zero gradients optimization for XLA:GPU
- Introduce the environment variables XLA_DOWNCAST_BF16 and XLA_DOWNCAST_FP16 to downcast input tensors
- adaptive_max_pool2d lowering
- nan_to_num lowering
- sgn lowering
- logical_not/logical_xor/logical_or/logical_and lowering
- amax lowering
- amin lowering
- std_mean lowering
- var_mean lowering
- lerp lowering
- isnan lowering
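The new reduce_scatter support above refers to a collective operation; as a point of reference, here is a pure-Python sketch of the semantics it computes (an illustration only, not the torch_xla API, which lives in torch_xla.core.xla_model):

```python
# Sketch of reduce-scatter semantics: each of N replicas starts with a full
# vector; afterwards, replica i holds the element-wise sum of shard i
# (the i-th 1/N slice) across all replicas.
def reduce_scatter(inputs):
    n = len(inputs)
    shard = len(inputs[0]) // n
    # Element-wise sum across replicas (the "reduce" step).
    summed = [sum(vals) for vals in zip(*inputs)]
    # Each replica keeps only its own shard (the "scatter" step).
    return [summed[i * shard:(i + 1) * shard] for i in range(n)]
```

For example, with two replicas holding `[1, 2, 3, 4]` and `[10, 20, 30, 40]`, replica 0 ends up with `[11, 22]` and replica 1 with `[33, 44]`.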
PyTorch/XLA 1.8 release
Summary
Cloud TPUs now support the PyTorch 1.8 release, via PyTorch/XLA integration. The release has daily automated testing for the supported models: Torchvision ResNet, FairSeq Transformer and RoBERTa, HuggingFace GLUE and LM, and Facebook Research DLRM.
This release focused on making PyTorch XLA easier to use and debug. See below for a list of new features.
New Features
- Enhanced usability:
- Profiler tools to help you pinpoint the areas where you can improve the memory usage or speed of your TPU models. The tools are ready to use; check out our main README for some upcoming tutorials.
- Simpler error messages (#2771)
- Less log spam using TPU Pods (#2662)
- Able to view images in Tensorboard (#2679)
- New ops supported by PyTorch/XLA: TriangularSolve (#2498) (example)
Bug Fixes
PyTorch/XLA 1.6 Release (GA)
Highlights
Cloud TPUs now support the PyTorch 1.6 release, via PyTorch/XLA integration. With this release we reach general availability (GA): models such as ResNet, FairSeq Transformer and RoBERTa, and the HuggingFace GLUE task models have been rigorously tested and optimized.
In addition, with our PyTorch/XLA 1.6 release, you no longer need to run the env-setup.py script on Colab/Kaggle, as those environments are now compatible with native torch wheels. See here for an example of the new Colab/Kaggle install step. You can still use that script if you would like to run our latest unstable releases.
New Features
- XLA RNG state checkpointing/loading (#2096)
- Device Memory XRT API (#2295)
- [Kaggle/Colab] Small host VM memory environment utility (#2025)
- [Advanced User] XLA Builder Support (#2125)
- New ops supported by PyTorch/XLA
- Dynamic shape support on XLA:CPU and XLA:GPU (experimental)
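The XLA RNG state checkpointing/loading feature above (#2096) exposes getters and setters for the device RNG state (in torch_xla.core.xla_model), so a run can be made reproducible across checkpoint boundaries. Since torch_xla needs an XLA device to run, here is the same checkpoint/restore pattern sketched with the stdlib `random` module as a stand-in:

```python
import random

# Stand-in for an XLA device RNG; the real API would be
# xm.get_rng_state() / xm.set_rng_state() on an XLA device.
rng = random.Random(0)

state = rng.getstate()   # checkpoint the RNG state (analogue of get_rng_state)
a = rng.random()         # draw a sample

rng.setstate(state)      # restore the checkpointed state (analogue of set_rng_state)
b = rng.random()         # the same sample is reproduced

assert a == b
```

Saving this state alongside model weights lets a resumed training run continue the same random sequence (dropout masks, shuffles) it would have produced uninterrupted.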
Bug Fixes
PyTorch/XLA 1.5 release
Cloud TPUs and Cloud TPU Pods now support PyTorch 1.5 via the PyTorch/XLA integration. This integration aims to make it possible for PyTorch users to do everything they can do on GPUs on Cloud TPUs as well while minimizing changes to the user experience. You can try out PyTorch on an 8-core Cloud TPU device for free via Google Colab, and you can use PyTorch on Cloud TPUs at a much larger scale on Google Cloud (all the way up to full Cloud TPU Pods).
Three PyTorch models have been added to our list of supported models, which are rigorously and continuously tested:
- ResNet-50
- Fairseq Transformer
- Fairseq RoBERTa
Additional notes:
- New Operators added
- Exposed APIs to enable different types of cross-replica reduce operations using the TPU interconnect link (#1709)
- Exposed API to perform rendezvous operations among the different replica processes (#1669)
- Added support for reading/writing GCS files (#1230)
- Added support to read TFRecords (#1220)
- Miscellaneous bug fixes
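The cross-replica reduce APIs above (#1709) perform collectives over the TPU interconnect; the most common one is an all-reduce. As a pure-Python sketch of what that collective computes (an illustration of the semantics, not the torch_xla API itself):

```python
# Sketch of all-reduce semantics: after the collective, every replica holds
# the element-wise sum of all replicas' input vectors.
def all_reduce(inputs):
    summed = [sum(vals) for vals in zip(*inputs)]
    # Every replica receives an identical copy of the reduced result.
    return [list(summed) for _ in inputs]
```

For example, two replicas holding `[1, 2]` and `[3, 4]` both end up with `[4, 6]`, which is the pattern used to average gradients across TPU cores.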
PyTorch/XLA 1.9 release
Cloud TPUs now support the PyTorch 1.9 release, via PyTorch/XLA integration. The release has daily automated testing for the supported models: Torchvision ResNet, FairSeq Transformer and RoBERTa, HuggingFace GLUE and LM, and Facebook Research DLRM.
On top of the underlying improvements and bug fixes in PyTorch's 1.9 release, this release adds several PyTorch/XLA-specific bug fixes.
PyTorch/XLA 1.8.1 release
Cloud TPUs now support the PyTorch 1.8.1 release, via PyTorch/XLA integration. The release has daily automated testing for the supported models: Torchvision ResNet, FairSeq Transformer and RoBERTa, HuggingFace GLUE and LM, and Facebook Research DLRM.
On top of the underlying bug fixes in PyTorch's 1.8.1 release, this release adds a few bug fixes on the PyTorch XLA side around the XRT server and TPU Pods training.
PyTorch/XLA 1.7 release
Summary
Cloud TPUs now support the PyTorch 1.7 release, via PyTorch/XLA integration. The release has daily automated testing for the supported models: Torchvision ResNet, FairSeq Transformer and RoBERTa, HuggingFace GLUE and LM, and Facebook Research DLRM.
New Features
- New ops supported by PyTorch/XLA: TriangularSolve (#2498) (example)
- Documentation on adding more supported ops: (#2458)