Issues: intel/intel-xpu-backend-for-triton
Merge OpenAI Triton till Dec 13th
Labels: enhancement, upstream: rebase
#2879 opened Nov 29, 2024 by whitneywhtsang
[GEMM] 2048X2048X2048 has big variance.
Labels: codegen: gemm, performance, research
#2873 opened Nov 29, 2024 by quintinwang5
Attach opencl.kernels metadata
Labels: enhancement, good first issue, performance
#2869 opened Nov 28, 2024 by victor-eds
[Pytorch Upstream] Triton build failed with docker image pytorch/manylinux2_28-builder:cpu
Labels: bug
#2868 opened Nov 28, 2024 by chuanqi129
'/debug:fastlink' and '/INCREMENTAL' aren't recognized in Windows build
Labels: ci, code quality, windows
#2855 opened Nov 27, 2024 by anmyachev
Drop -tritonintelgpu-optimize-elementwise-locality pass
Labels: code quality
#2850 opened Nov 27, 2024 by victor-eds
SPIRVRunner: Investigate and handle multi-kernel execution (benchmark: gemm_streamk_benchmark.py)
Labels: enhancement
#2831 opened Nov 26, 2024 by kballeda
Reduce changes in common files for Windows support
Labels: enhancement
#2824 opened Nov 26, 2024 by whitneywhtsang
Remove getElemsPerThreadForOperands from MmaEncodingTrait
Labels: enhancement
#2823 opened Nov 26, 2024 by whitneywhtsang
Reland upstream commit 340cbc6
Labels: upstream: rebase
#2811 opened Nov 23, 2024 by whitneywhtsang
Use llvm.func's reqd_work_group_size to specify static local size
Labels: enhancement, performance
#2798 opened Nov 22, 2024 by victor-eds
Fine tune sub-group transpose bank conflict prevention for PVC
Labels: codegen: attention, performance
#2797 opened Nov 22, 2024 by victor-eds
Consider reusing PyTorch solution for ext_oneapi_get_default_context
Labels: enhancement
Enable -tritonintelgpu-optimize-reduction-locality by default
Labels: enhancement, performance, codegen: attention
[CI] Use DLE for performance runners
Labels: ci, dependencies: ipex, enhancement, performance, profiling
Implement support for TritonGPU::UpcastMXFPOp for Intel XPU BE
Labels: codegen: mlir, enhancement, upstream: triton