Commit 72cbb0d

doc: improve readme
Fixes: #1731
Signed-off-by: Dmitry Rogozhkin <[email protected]>
Co-authored-by: Gajanan Choudhary <[email protected]>
1 parent aa5b3dc commit 72cbb0d

File tree

2 files changed: +84 -10 lines changed


README.md

Lines changed: 84 additions & 10 deletions

# Torch XPU Operators*

The Torch XPU Operators* project is an integral part of [PyTorch](https://github.com/pytorch/pytorch), supporting the XPU acceleration backend. The PyTorch build system automatically clones this repository at the pinned commit, branch, or tag specified in the following file of the [PyTorch repository](https://github.com/pytorch/pytorch):

* https://github.com/pytorch/pytorch/blob/main/third_party/xpu.txt

The cloned copy becomes available at `./third_party/torch-xpu-ops/` relative to the root of the checked-out PyTorch tree.
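As a small sketch (not part of the build flow), the pinned revision can simply be read from that file. A stand-in `third_party/xpu.txt` is created below so the snippet runs on its own; in a real PyTorch checkout the file already exists:

```python
from pathlib import Path

# Sketch: read the torch-xpu-ops revision pinned by a PyTorch checkout.
# A stand-in third_party/xpu.txt is created here so the snippet is
# self-contained; in a real PyTorch tree the file already exists.
pin_file = Path("third_party") / "xpu.txt"
pin_file.parent.mkdir(exist_ok=True)
pin_file.write_text("0123abc\n")  # stand-in revision, not a real pin

pinned = pin_file.read_text().strip()
print(f"torch-xpu-ops pinned at: {pinned}")
```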

Torch XPU Operators* implements some of the operators for Intel GPU devices accessible via the PyTorch XPU acceleration backend:

* PyTorch ATen operators
* Torchvision operators

<p align="center">
<img src="docs/torch_xpu_ops.jpg" width="100%">
</p>

Most operators are implemented as SYCL kernels, the sources of which are available in this repository. Some operators (linear algebra) are implemented through calls to the [Intel® oneAPI Math Kernel Library (oneMKL)](https://www.intel.com/content/www/us/en/developer/tools/oneapi/onemkl.html).

Note that a few operators (convolution and matrix-matrix multiplication (`gemm`)) for the PyTorch XPU backend are implemented directly in PyTorch sources through calls to the [oneAPI Deep Neural Network Library (oneDNN)](https://github.com/uxlfoundation/oneDNN). These sources can be found in the PyTorch repository at https://github.com/pytorch/pytorch/tree/main/aten/src/ATen/native/mkldnn/xpu.

## Requirements

For the hardware and software prerequisites, please refer to [PyTorch Prerequisite…]

* Intel GPU Driver: Install Intel GPU drivers along with compute and media runtimes and development packages.
* Intel® Deep Learning Essentials: Install a subset of Intel® oneAPI components needed for building and running PyTorch.
## Build and install

This project cannot be built or installed stand-alone; it is built automatically when PyTorch is built with XPU backend support.

**To install PyTorch with XPU backend from pre-built binary packages**, use one of the available distribution channels:

* For release builds:

  ```
  pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/xpu
  ```

* For nightly builds:

  ```
  pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/xpu
  ```

**To build PyTorch with XPU backend from sources**, refer to the [Intel GPU Support](https://github.com/pytorch/pytorch/blob/main/README.md#intel-gpu-support) section of the [PyTorch documentation](https://github.com/pytorch/pytorch/blob/main/README.md#from-source). In summary, the PyTorch build for the XPU backend can be triggered as follows:

```bash
git clone https://github.com/pytorch/pytorch.git && cd pytorch
pip install -r requirements.txt
python setup.py install >log.txt 2>&1
```

Look for the following lines in the log file, which indicate that the XPU backend is being built:

```
$ grep -E "(USE_XPU|USE_XCCL)\s*:" log.txt
-- USE_XPU : 1
-- USE_XCCL : ON
```
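The same check can be done programmatically. The following sketch uses a hypothetical helper with an inline sample string standing in for `log.txt`:

```python
import re

# Sketch: extract the XPU build flags from a PyTorch build log,
# equivalent to the grep above. `sample_log` stands in for log.txt.
sample_log = """\
-- USE_XPU              : 1
-- USE_XCCL             : ON
"""

def xpu_flags(log_text: str) -> dict:
    """Map each USE_XPU/USE_XCCL summary line to its reported value."""
    return dict(re.findall(r"--\s*(USE_XPU|USE_XCCL)\s*:\s*(\S+)", log_text))

print(xpu_flags(sample_log))  # → {'USE_XPU': '1', 'USE_XCCL': 'ON'}
```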

If building from sources, note the following environment variables, which control the PyTorch XPU backend build:

| Environment variable | Default | Notes |
| --- | --- | --- |
| `USE_XPU` | `ON` | Enables XPU backend support |
| `USE_XCCL` | `ON` (>= PT2.8) | Enables XCCL distributed backend support |
| `TORCH_XPU_ARCH_LIST` | depends on the PT and OS versions | Builds SYCL kernels for the specified platform(s) |

`TORCH_XPU_ARCH_LIST` accepts a comma-separated list of platforms for which SYCL kernels will be built, which helps reduce the build time. This option does not affect oneDNN- or oneMKL-based operators. Note that if PyTorch is executed on a platform that the SYCL kernels were not built for, the kernels are compiled just-in-time (JIT).
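For example, a hypothetical invocation restricting the kernel build to a single platform might look like the following; `pvc` is used here only as an illustrative platform identifier, so consult the PyTorch build documentation for the list valid on your PT version:

```shell
# Hypothetical example: build SYCL kernels only for one target platform.
# "pvc" is an illustrative identifier, not a recommendation.
export TORCH_XPU_ARCH_LIST=pvc
python setup.py install >log.txt 2>&1
```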

## Verification

Once PyTorch is built or installed, verify that the PyTorch XPU backend is available as follows:

```
$ python3 -c "import torch; print(torch.xpu.is_available())"
True
$ python3 -c "import torch; print(torch.distributed.distributed_c10d.is_xccl_available())"
True
```
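A hypothetical variant of the same check that degrades gracefully when torch is not installed or the backend is unavailable (e.g. on a CPU-only machine) could look like this; it assumes only the `torch.xpu.is_available()` API shown above:

```python
# Sketch: report XPU backend status without raising when torch is not
# installed or the XPU backend is unavailable.
def xpu_status() -> str:
    try:
        import torch
    except ImportError:
        return "torch not installed"
    xpu = getattr(torch, "xpu", None)
    if xpu is None or not xpu.is_available():
        return "xpu backend not available"
    return "xpu backend available"

print(xpu_status())
```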

# FAQ

Some confusion might arise about the relationship of this repository, and of the PyTorch XPU backend in general, with [Intel® Extension for PyTorch (IPEX)](https://github.com/intel/intel-extension-for-pytorch). See the answers below.

**Does the PyTorch XPU backend implementation use IPEX?**

No. The PyTorch XPU backend implementation does not use IPEX and can be used without IPEX, leveraging:

- The standard PyTorch API for eager mode operators, module compilation (including Triton kernels), profiling, etc.
- SYCL kernels to enhance applications with custom operators via the PyTorch CPP Extension API

**Does IPEX depend on this repository through the PyTorch XPU backend implementation?**

Yes. IPEX relies on the PyTorch XPU backend implementation (which includes this repository) and augments it with additional features and operators.
Moreover, IPEX implements select features and operators outside of the standard PyTorch API that are required in popular AI frameworks such as vLLM, Huggingface TGI, SGLang, and others.
The ultimate long-term goal is to upstream as much of the IPEX code as possible, or to substitute it with better upstream implementations.
Each subsequent IPEX release is a step toward that goal, as fewer features are implemented in IPEX and more are instead taken from other upstream projects.

## Security

See Intel's [Security Center](https://www.intel.com/content/www/us/en/security-center/default.html) for information on how to report a potential security issue or vulnerability.

docs/torch_xpu_ops.jpg

16.9 KB

0 commit comments