[Doc] Clean content related to removed/deprecated features (#5415)
Clean content related to removed/deprecated features.
Update the known issue describing the dependency conflict that prevents installing torch and intel-extension-for-pytorch with a single pip command.
---------
Co-authored-by: ZhaoqiongZ <[email protected]>
docs/tutorials/features/nhwc.md (+3 −1)
@@ -195,7 +195,9 @@ auto src_mem = memory(src_md, src_data_ptr, engine);
* **NCHW** - create `memory::desc` with the *any* format tag for 'input', 'output' and 'weight'; query the proposed `memory::desc` from the convolution primitive;
* **NHWC** - create `memory::desc` with `format_tag::nhwc` for 'input' and 'output', and use *any* for 'weight'; if we use `hwio` for 'weight', the convolution primitive will be created with gemm rather than jit avx512.
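The NCHW-vs-NHWC distinction above comes down to which dimension gets stride 1 in memory. A minimal Python sketch (the `dense_strides` helper is hypothetical, for illustration only) of how the two layouts map the same logical NCHW shape to element strides:

```python
def dense_strides(shape, order):
    """Strides (in elements) of a dense tensor whose dimensions are laid
    out in memory in the given order, innermost dimension last."""
    strides = {}
    step = 1
    for dim in reversed(order):
        strides[dim] = step
        step *= shape[dim]
    # Report strides in logical NCHW order, as frameworks do.
    return tuple(strides[d] for d in "NCHW")

shape = {"N": 2, "C": 3, "H": 4, "W": 5}
print(dense_strides(shape, "NCHW"))  # (60, 20, 5, 1): W is innermost
print(dense_strides(shape, "NHWC"))  # (60, 1, 15, 3): C is innermost
```

The `(60, 1, 15, 3)` result is exactly what PyTorch reports for a `channels_last` tensor of this shape: the logical shape is unchanged, only the strides differ.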
-## Channels Last 1D support on XPU
+## Channels Last 1D support on XPU (Deprecated)
+
+**Note:** Channels Last 1D support on XPU APIs `torch.xpu.to_channels_last_1d()` and `torch.xpu.is_contiguous_channels_last_1d()` will be deprecated in future releases.
Both stock PyTorch and Intel® Extension for PyTorch\* support Channels Last (2D) and Channels Last 3D; however, they differ for Channels Last 1D. Stock PyTorch doesn't support Channels Last 1D, while XPU provides limited support for it.
We only support Channels Last 1D memory format in these operators: Conv1D, BatchNorm1D, MaxPool1D, Concat, binary add, binary div, upsample linear and upsample nearest.
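For a 3-D `(N, C, L)` tensor, Channels Last 1D keeps the channel dimension innermost, by analogy with the 2D case. A small sketch of the resulting strides (the helper names are hypothetical; this only illustrates the layout, assuming the same channels-innermost convention that `torch.xpu.to_channels_last_1d()` produces):

```python
def contiguous_strides(n, c, l):
    """Strides (in elements, NCL order) for the default dense NCL layout:
    L is the innermost dimension."""
    return (c * l, l, 1)

def channels_last_1d_strides(n, c, l):
    """Strides (in elements, reported in NCL order) when the physical
    order is N, L, C: the channel dimension is innermost."""
    return (l * c, 1, c)

print(contiguous_strides(2, 3, 5))        # (15, 5, 1)
print(channels_last_1d_strides(2, 3, 5))  # (15, 1, 3)
```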
-The legacy profiler tool will be deprecated from Intel® Extension for PyTorch\* very soon. Please use [Kineto Supported Profiler Tool](./profiler_kineto.md) instead for profiling operators' executing time cost on Intel® GPU devices.
+The legacy profiler tool has been removed from Intel® Extension for PyTorch\*. Please use [Kineto Supported Profiler Tool](./profiler_kineto.md) instead for profiling operators' executing time cost on Intel® GPU devices.
docs/tutorials/known_issues.md (+15 −9)
@@ -26,6 +26,9 @@ Troubleshooting
- **Problem**: RuntimeError: Can't add devices across platforms to a single context. -33 (PI_ERROR_INVALID_DEVICE).
- **Cause**: You are running Intel® Extension for PyTorch\* in a Windows environment where an Intel® discrete GPU and an integrated GPU co-exist, and the integrated GPU, which is not supported by Intel® Extension for PyTorch\*, is wrongly identified as the first GPU platform.
- **Solution**: Disable the integrated GPU in your environment as a workaround. In the long term, the Intel® Graphics Driver will always enumerate the discrete GPU as the first device, so that Intel® Extension for PyTorch\* can provide the fastest device to framework users in such a co-existence scenario.
+- **Problem**: RuntimeError: Failed to load the backend extension: intel_extension_for_pytorch. You can disable extension auto-loading with TORCH_DEVICE_BACKEND_AUTOLOAD=0.
+- **Cause**: A third-party library such as Transformers is imported before `import torch`; the library depends on torch, which then implicitly autoloads intel_extension_for_pytorch and introduces a circular import.
+- **Solution**: Disable extension auto-loading with TORCH_DEVICE_BACKEND_AUTOLOAD=0.
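The environment variable can also be set from inside the script, as long as it happens before the first `import torch` anywhere in the process (a minimal sketch; once torch is imported, the autoload decision has already been made):

```python
import os

# Must run before the first `import torch` in the process:
# torch reads this variable at import time.
os.environ["TORCH_DEVICE_BACKEND_AUTOLOAD"] = "0"

# import torch                         # safe now: no implicit autoload of
# import intel_extension_for_pytorch   # backend extensions
```

Alternatively, simply import `torch` before any third-party library that depends on it.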
## Library Dependencies
@@ -89,15 +92,6 @@ Troubleshooting
source {dpcpproot}/env/vars.sh
```
-
-- **Problem**: RuntimeError: Cannot find a working triton installation. Either the package is not installed or it is too old. More information on installing Triton can be found at https://github.com/openai/triton
-
-- **Cause**: No pytorch-triton-xpu installed
-
-- **Solution**: Resolve the issue with following command:
+- **Problem**: ERROR: Cannot install dpcpp-cpp-rt and torch==2.6.0 because these package versions have conflicting dependencies.
+
+- **Cause**: The intel-extension-for-pytorch v2.6.10+xpu uses Intel DPC++ Compiler 2025.0.4 to get a crucial bug fix in unified runtime, while torch v2.6.0+xpu is pinned to 2025.0.2, so PyTorch and intel-extension-for-pytorch cannot be installed in one pip installation command.
+
+- **Solution**: Install PyTorch and intel-extension-for-pytorch with separate commands.
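For example, two separate pip invocations avoid the resolver conflict (the index URL placeholders below are assumptions; substitute the URLs from the official installation guide):

```shell
# Step 1: install torch first; its pinned dpcpp-cpp-rt 2025.0.2 resolves cleanly.
python -m pip install torch==2.6.0 --index-url <pytorch-xpu-index-url>
# Step 2: install the extension; pip then upgrades dpcpp-cpp-rt to 2025.0.4 for it.
python -m pip install intel-extension-for-pytorch==2.6.10+xpu --extra-index-url <ipex-xpu-index-url>
```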
- **Problem**: ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
```
@@ -123,6 +125,10 @@ Troubleshooting
- **Cause**: The intel-extension-for-pytorch v2.6.10+xpu uses Intel DPC++ Compiler 2025.0.4 to get a crucial bug fix in unified runtime, while torch v2.6.0+xpu is pinned to 2025.0.2.
- **Solution**: Ignore the error, since torch v2.6.0+xpu is actually compatible with Intel DPC++ Compiler 2025.0.4.
+- **Problem**: RuntimeError: oneCCL: ze_handle_manager.cpp:226 get_ptr: EXCEPTION: unknown memory type, when executing DLRMv2 BF16 training on a 4-card Intel® Data Center GPU Max platform.
+- **Cause**: The issue exists in the default sycl path of oneCCL 2021.14, which uses two IPC exchanges.
+- **Solution**: Use `export CCL_ATL_TRANSPORT=ofi` to work around it.
## Performance Issue
- **Problem**: Extended durations for data transfers from the host system to the device (H2D) and from the device back to the host system (D2H).