Commit d1cd117

[torch-mlir] remove trailing whitespace from md documentation (llvm#2853)

aartbik authored Feb 2, 2024
1 parent 24b8c86 commit d1cd117
Showing 6 changed files with 26 additions and 26 deletions.
6 changes: 3 additions & 3 deletions docs/add_ops.md
@@ -1,17 +1,17 @@
# How to Add Ops to Torch-Mlir

Collected links and contacts for how to add ops to torch-mlir.


<details>
<summary>Turbine Camp: Start Here</summary>
This document was previously known as `turbine-camp.md` at Nod.ai; "Turbine Camp" is part of Nod.ai's onboarding process, where new Nod.ai folks learn about the architecture of our work by adding support for 2 ops to torch-mlir. Welcome to Turbine Camp. I decided to put this document into torch-mlir because most of it is about torch-mlir.

Written & maintained by @renxida

Guides by other folks that were used during the creation of this document:
- [Chi Liu](https://gist.github.com/AmosLewis/dd31ab37517977b1c499d06495b4adc2)
- [Sunsoon](https://docs.google.com/document/d/1H79DwW_wnVzUU81EogwY5ueXgnl-QzKet1p2lnqPar4/edit?pli=1)

## Before you begin...

10 changes: 5 additions & 5 deletions docs/adding_abstract_interpretation_functions.md
@@ -4,7 +4,7 @@

As part of adding support for a Torch operator in Torch-MLIR, it is usually
necessary to define a shape and dtype function so that the compiler can infer
the shapes and dtypes of result tensors for the operator. We use the
[abstract interpretation library](abstract_interp_lib.md) for this process.

## Step-by-step guide
@@ -19,7 +19,7 @@ We will use the example of adding support for the `torch.aten.tanh` op.
file is the "rosetta stone" that allows translating between
e.g. `torch.aten.tanh`, `AtenTanhOp`, and the corresponding shape and
dtype functions. The function signatures are:

- `def aten〇tanh〡shape(self: List[int]) -> List[int]:`
- `def aten〇tanh〡dtype(self_rank_dtype: Tuple[int, int]) -> int:`

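For an elementwise op like `torch.aten.tanh`, both functions are
one-liners, since the result has the same shape and dtype as the input.
A minimal sketch (not the exact library code, which routes shapes
through upstream helper functions):

```python
from typing import List, Tuple

# Sketch only: an elementwise op preserves both the shape and the
# dtype of its input.
def aten〇tanh〡shape(self: List[int]) -> List[int]:
    return self  # result shape equals input shape

def aten〇tanh〡dtype(self_rank_dtype: Tuple[int, int]) -> int:
    self_rank, self_dtype = self_rank_dtype  # (rank, dtype) of the input
    return self_dtype  # tanh preserves the input dtype
```
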
@@ -39,10 +39,10 @@ We will use the example of adding support for the `torch.aten.tanh` op.
But in general, you will need to write the function and test it
(see the comments about "Shape, dtype, and decomposition function
testing infrastructure" in `testing_framework.py`). New shape
functions should be added upstream following the example of [this PR](https://github.com/pytorch/pytorch/pull/76889),
though it can be useful to iterate locally in `abstract_interp_lib_gen.py`
first.
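
As a sketch of the testing pattern, example invocations are attached to
the function itself via decorators. The helper names below
(`check_shape_function`, `Invocation`, `TensorOfShape`,
`upstream_shape_functions`) follow the conventions of
`abstract_interp_lib_gen.py` and `testing_framework.py`; treat this as
an illustration rather than exact library code:

```python
@check_shape_function([
    Invocation(TensorOfShape(2, 3, 4)),  # example rank-3 input
    Invocation(TensorOfShape(5)),        # example rank-1 input
])
def aten〇tanh〡shape(self: List[int]) -> List[int]:
    return upstream_shape_functions.unary(self)  # unary ops pass the shape through
```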

Similarly, dtype functions should ideally just be a call to the helper
`promote_dtypes` defined in `library_generator.py`. However, some ops will
require some extra logic to calculate the right result types. While dtypes
4 changes: 2 additions & 2 deletions docs/architecture.md
@@ -442,5 +442,5 @@ characteristics.

### Presentations and Talks

* 2021-10-07: MLIR ODM: Introduction to Torch-MLIR. ([recording](https://www.youtube.com/watch?v=QbNkex-gizs) and [slides](https://docs.google.com/presentation/d/1ZhzfE4EK6XV7AdQTYicrsE_OYjkER_yiB0vBeszRfzY/edit#slide=id.gf56404f79c_1_55))
* 2022-08-20: Overview of Torch-MLIR passes. ([recording](https://www.youtube.com/watch?v=ZpwlVxsD9_U) and [slides](https://drive.google.com/file/d/1ZSlk1HGttRuVhJSxtP6spWt_hxClit2T/view))
22 changes: 11 additions & 11 deletions docs/importers/onnx_importer.md
@@ -11,8 +11,8 @@ for the reference importer which complies with the rules below.
With the exception of certain special or complicated ONNX operators, most
are relatively straightforward to map, following this general procedure:

* Plan the ops you wish to support by consulting the
[ONNX operator database](https://onnx.ai/onnx/operators/).
* This database has detailed diffs with respect to different supported versions,
but at the level of detail at which we operate, most version diffs are
inconsequential and just require a bit more pattern support.
@@ -24,7 +24,7 @@ are relatively straight-forward to map, following this general procedure:
corresponding with the alphabetic sort of the op and add a conversion.
* Generate successful test cases (a sketch of hand-building a small model appears after this list):
* All `onnx_importer.py` tests are dumped to the test temp dir (success
or failure). This is typically located under
`tools/torch-mlir/test/python/onnx_importer/Output`. The `.mlir` files
under there should provide good variants to drive lit test coverage of
conversion.
@@ -34,25 +34,25 @@ are relatively straight-forward to map, following this general procedure:
* There are often many variants of tests for checking conformance of
different historic ONNX encodings, but these are often not load-bearing
at the MLIR level.
* Pick a handful of test cases and add them to
`test/Conversion/TorchOnnxToTorch/simple_ops_x_to_y.mlir` corresponding to
an alphabetic breakdown. At this time, ignore tests that are not exercising
useful differences in the pattern implementations.
* (Optionally) Use `torch-mlir-opt` to validate the outputs of the new op.
First, build the project using
`cmake --build build --target tools/torch-mlir/all`. This will generate
the conversion binary, `torch-mlir-opt`. Then call `torch-mlir-opt` with
the MLIR pass `convert-torch-onnx-to-torch`:
```
build/bin/torch-mlir-opt -convert-torch-onnx-to-torch \
-split-input-file [DESIRED_ONNX_FILE].mlir
```
* Generate failure test cases:
* Some ops have forms that do not (easily) map to torch-mlir. If you leave
an op under-implemented, add a failing test case to
`test/Conversion/TorchOnnxToTorch/unsupported_simple_ops.mlir`.
* Optional but recommended: Use your test case files to fuzz against the
torch-mlir backend of your choice by running a backend conversion pipeline
and fixing any crashes/issues.
* Send a patch with your changes.
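
When none of the upstream conformance cases fit, hand-building a tiny
model is an easy way to drive the importer (this is the sketch referenced
in the test-case step above). The op, names, shapes, and opset below are
arbitrary examples:

```python
import onnx
from onnx import TensorProto, helper

# Build a one-op ONNX model to feed through the importer;
# everything here is an arbitrary example.
node = helper.make_node("Relu", inputs=["x"], outputs=["y"])
graph = helper.make_graph(
    [node],
    "relu_example",
    inputs=[helper.make_tensor_value_info("x", TensorProto.FLOAT, [3, 4])],
    outputs=[helper.make_tensor_value_info("y", TensorProto.FLOAT, [3, 4])],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 17)])
onnx.checker.check_model(model)  # sanity-check the model before use
onnx.save(model, "relu_example.onnx")
```
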
@@ -115,7 +115,7 @@ not yet implemented.
The `IsolatedFromAbove` parent of the ops can contain the following
metadata:
* `torch.onnx_meta.ir_version`: 64bit `IntegerAttr` corresponding to
`ModelProto.ir_version`.
* `torch.onnx_meta.producer_name`: `StringAttr` corresponding to
`ModelProto.producer_name`.
@@ -135,7 +135,7 @@ are only minor variations of an op. Major variations should use
### Special op forms
Certain ONNX operators map to different structural components of
torch-mlir's representation:
* `ConstantOfShape`: Mapped to `torch.vtensor.literal` with
4 changes: 2 additions & 2 deletions docs/ltc_backend.md
@@ -103,7 +103,7 @@ At some point, the tensors will be synced in order to execute the computation --
>>> torch._lazy.mark_step()
```

This triggers a call to `LazyGraphExecutor::SyncLiveTensorsGraph` somewhere in the guts of LTC, which collects all the `TorchMlirNode`s (technically `torch::lazy::Node`s at this point) from the current trace and
creates an instance of `TorchMlirLoweringContext`. Here, the `TorchMlirNode`s are lowered to JIT via `mlir_node_lowering.cpp` and inserted into a `jit::Graph`.

Next, `TorchMlirLoweringContext::Build` is executed and the final `jit::Graph` is sent to `torch_mlir::importJitFunctionAsFuncOp` to generate MLIR using the existing infrastructure from Torch-MLIR.
@@ -121,7 +121,7 @@ Finally, the compiled computation is sent to `TorchMlirBackendImpl::ExecuteCompu

## Implementing a custom backend

A reference implementation of a custom backend is available [here](../python/torch_mlir/csrc/reference_lazy_backend/).
All the work involved with generating MLIR is handled in the base LTC backend, so vendors only need to worry about implementing `Compile`, `ExecuteComputation`, and some other minor methods to interface with the device.

A pybind is needed to invoke C++ code to register the autogen PyTorch kernels and the custom backend itself.
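
For illustration, the vendor-side flow usually amounts to importing that
pybind module and invoking its registration hook before any lazy tensors
are created. A hypothetical sketch (the module and function names are
invented placeholders, not a real API):

```python
import torch

# Hypothetical pybind11 extension; importing it and calling its init
# hook registers the autogen PyTorch kernels and the custom backend.
import my_vendor_lazy_backend  # invented placeholder name

my_vendor_lazy_backend._initialize()  # hypothetical registration hook

# After registration, tensors on the "lazy" device route through the
# custom backend.
x = torch.ones(2, 3, device="lazy")
```
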
6 changes: 3 additions & 3 deletions docs/ltc_examples.md
@@ -33,18 +33,18 @@ Received 1 arguments, and returned 2 results during ExecuteCompile!
Results: tensor([[0.7616, 0.9640, 0.9951, 0.9993, 0.9999]], device='lazy:0')
JIT Graph:
graph(%p0 : Float(1, 5)):
%1 : Float(1, 5) = aten::tanh(%p0)
return (%p0, %1)
MLIR:
func.func @graph(%arg0: !torch.vtensor<[1,5],f32>) -> (!torch.vtensor<[1,5],f32>, !torch.vtensor<[1,5],f32>) {
%0 = torch.aten.tanh %arg0 : !torch.vtensor<[1,5],f32> -> !torch.vtensor<[1,5],f32>
return %arg0, %0 : !torch.vtensor<[1,5],f32>, !torch.vtensor<[1,5],f32>
}
Input/Output Alias Mapping:
Output: 0 -> Input param: 0
In Mark Step: true
