
Add arm64 builder #999

Open · rengolin opened this issue Jan 9, 2025 · 3 comments

rengolin (Contributor) commented Jan 9, 2025

Adding a new ARM64 builder brings non-trivial changes:

  • The AWS Graviton instance needs to start/stop on demand to avoid the cost of running idle (see the sketch after this list)
  • check_llvm.sh needs to know about both the x86 and arm64 LLVM versions
  • Integration tests that check exact floating-point values fail, since results differ slightly across architectures
  • A Graviton node cheap enough to build TPP-MLIR isn't powerful enough to build LLVM
  • We may want to separate LLVM builds from TPP-MLIR builds and keep a zip of the install directory in cloud storage

(add more stuff here)
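
A minimal sketch of the start/stop lifecycle for the Graviton runner, assuming it is a dedicated EC2 instance controlled with boto3; the instance ID and region are placeholders, not values from this issue:

```python
import boto3

# Placeholder ID for the Graviton runner instance; not from this issue.
GRAVITON_INSTANCE_ID = "i-0123456789abcdef0"

ec2 = boto3.client("ec2", region_name="us-east-1")

def start_runner() -> None:
    # Start the instance before a CI job and block until it is running.
    ec2.start_instances(InstanceIds=[GRAVITON_INSTANCE_ID])
    ec2.get_waiter("instance_running").wait(InstanceIds=[GRAVITON_INSTANCE_ID])

def stop_runner() -> None:
    # Stop (not terminate) after the job: the disk and the runner
    # registration survive, but idle compute costs stop accruing.
    ec2.stop_instances(InstanceIds=[GRAVITON_INSTANCE_ID])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[GRAVITON_INSTANCE_ID])
```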

We may need to simplify execution, for example:

  • only run clang in release mode on Arm
  • use the same builder for the build and benchmark jobs
  • have only benchmark jobs, since those build release clang and run the tests anyway

See the arm64 branch: https://github.com/plaidml/tpp-mlir/tree/arm64

rengolin (Contributor, Author) commented Jan 9, 2025

https://github.com/plaidml/tpp-mlir/actions/runs/12691058703/job/35373272875

Failed Tests (28):
  TPP_OPT :: BF16/Integration/matmul-pbf16.mlir
  TPP_OPT :: BF16/Integration/mlir-gen-bf16.mlir
  TPP_OPT :: BF16/Integration/mlp-all-bf16-tpprun.mlir
  TPP_OPT :: BF16/Integration/tpp-run-splat-shape.mlir
  TPP_OPT :: BF16/Integration/vnni-packing-chain.mlir
  TPP_OPT :: BF16/Integration/xsmm-brgemm-bf16.mlir
  TPP_OPT :: BF16/Integration/xsmm-gemm-bf16.mlir
  TPP_OPT :: BF16/Integration/xsmm-quarternary-bf16.mlir
  TPP_OPT :: BF16/Integration/xsmm-ternary-bf16.mlir
  TPP_OPT :: BF16/brgemm-tpp.mlir
  TPP_OPT :: BF16/brgemm-vnni.mlir
  TPP_OPT :: BF16/matmul-tiled-vnni.mlir
  TPP_OPT :: BF16/matmul-untiled-vnni.mlir
  TPP_OPT :: BF16/matmul-vnni.mlir
  TPP_OPT :: Conversion/LinalgToXsmm/linalg-to-brgemm.mlir
  TPP_OPT :: Conversion/LinalgToXsmm/linalg-to-gemm.mlir
  TPP_OPT :: Conversion/LinalgToXsmm/linalg-to-unary.mlir
  TPP_OPT :: Conversion/VectorToXsmm/vector-to-transpose.mlir
  TPP_OPT :: Conversion/XsmmToFunc/xsmm-to-func.mlir
  TPP_OPT :: Dialect/Xsmm/xsmm-dispatch-invoke.mlir
  TPP_OPT :: Integration/hoist-vector-transfer-brgemm.mlir
  TPP_OPT :: Integration/lower-pack-unpack-without-transpose.mlir
  TPP_OPT :: Integration/transpose-bf16.mlir
  TPP_OPT :: Integration/vector-contract-to-fma.mlir
  TPP_OPT :: Passes/DefaultPipeline/linalg.mlir
  TPP_OPT :: Passes/DefaultPipeline/vnni.mlir
  TPP_OPT :: Passes/DefaultPipeline/xsmm.mlir
  TPP_OPT :: Passes/xsmm-combine.mlir

Some errors I found:

  • vector/FMA tests fail the diff by a large delta; this could be a tiling assumption that doesn't hold on Arm (see the tolerance sketch after this list)
  • VNNI pass tests fail verification because the IR shape doesn't hold (error: 'xsmm.brgemm' op expect VNNI layout for operand B or invalid VNNI_B flags)
  • the pack/unpack integration test fails fpcmp because the output contains text instead of floats
  • the linalg-to-xsmm conversion tests don't lower VNNI operations
  • all BF16 tests fail with no xsmm calls found or a wrong VNNI layout
  • the BF16 MLP example fails with error: 'tensor.pack' op invalid tile factor or output size provided. Only full tiles are supported when padding_value is not set
  • the BF16 GEMM example fails with error: 'xsmm.gemm' op expect VNNI layout for operand: 2
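
For the large-delta failures, comparing with a tolerance instead of an exact textual diff is one option. A hypothetical Python helper sketching that idea (the real harness uses fpcmp; the tolerances below are made up):

```python
import math

def outputs_match(expected_line: str, actual_line: str,
                  rel_tol: float = 1e-3, abs_tol: float = 1e-6) -> bool:
    """Compare two whitespace-separated lines of numbers with a tolerance.

    Hypothetical helper, not part of the test harness; it mirrors what a
    tolerance-based fpcmp-style check does instead of an exact diff.
    """
    expected = expected_line.split()
    actual = actual_line.split()
    if len(expected) != len(actual):
        return False
    for e, a in zip(expected, actual):
        try:
            if not math.isclose(float(e), float(a),
                                rel_tol=rel_tol, abs_tol=abs_tol):
                return False
        except ValueError:
            # Non-numeric tokens (e.g. the stray text the pack/unpack test
            # produced) must match exactly.
            if e != a:
                return False
    return True
```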

adam-smnk (Contributor) commented

Interesting failures, considering the benchmarks run fine. It could indicate that some tests are iffy(?)

adam-smnk (Contributor) commented

Looks like the main culprit is a different VNNI blocking factor.
We usually run with factor 2, but ARM uses 4, which is not compatible with the (randomly) picked test sizes, hardcoded LIT checks, and/or prepacked arguments (see the packing sketch below).
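
To make the mismatch concrete, a numpy sketch of VNNI-style packing of the B operand; the helper is hypothetical and uses float32 as a stand-in for bf16:

```python
import numpy as np

def vnni_pack_b(b: np.ndarray, factor: int) -> np.ndarray:
    """Pack B[K, N] into B[K/factor, N, factor] (VNNI-style layout).

    Illustrative only; it follows the common VNNI convention where
    consecutive K elements are interleaved along the innermost dim.
    """
    k, n = b.shape
    assert k % factor == 0, "K must be divisible by the VNNI factor"
    return b.reshape(k // factor, factor, n).transpose(0, 2, 1)

b = np.arange(8 * 4, dtype=np.float32).reshape(8, 4)
print(vnni_pack_b(b, 2).shape)  # (4, 4, 2): the shape factor-2 tests expect
print(vnni_pack_b(b, 4).shape)  # (2, 4, 4): the shape ARM produces
# A LIT check or prepacked argument written for (4, 4, 2) cannot match a
# pipeline that produces (2, 4, 4), hence the failures above.
```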
