Introduce spec.md (#766)
Earlier today, concluding the Q3/Q4 speccing marathon, we finished
speccing HLO semantics for the StableHLO ops.

This was a huge effort that involved writing 93 specs, including digging
deep into the intricate semantics of ops like batch_norm_grad, convolution,
dot_general, and more. Congratulations to everyone who contributed to
this important milestone!

The idea of this project was to create a baseline from which the
StableHLO opset will evolve. Our immediate next steps are to write a
dynamism RFC (#8) and to spec quantization (#588) on top of this
baseline.

This speccing marathon also uncovered a lot of future work, both in
cleaning up the opset and in improving the implementation to fully
conform to the spec. We are aiming to address this work in the next
year.
Eugene Burmako authored Dec 14, 2022
1 parent 59b68c5 commit 8007c55
Showing 13 changed files with 106 additions and 106 deletions.
File renamed without changes
File renamed without changes
File renamed without changes
File renamed without changes
File renamed without changes
File renamed without changes
File renamed without changes
2 changes: 1 addition & 1 deletion docs/reference.md
@@ -3,7 +3,7 @@

## Data Model

-[StableHLO programs](spec_draft.md#programs) are computations over tensors
+[StableHLO programs](spec.md#programs) are computations over tensors
(n-dimensional arrays), which, in the current model, are implemented using class
`Tensor`. The underlying storage class for a `Tensor` object, `detail::Buffer`,
stores the `mlir::ShapedType` of the tensor along with a
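As a rough analogue of this data model, here is a minimal Python sketch. The field names `shape`, `element_type`, and `data` are illustrative stand-ins; the actual implementation is the C++ `Tensor` class backed by `detail::Buffer` as described above.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Tensor:
    # Illustrative stand-in for the C++ `Tensor`/`detail::Buffer` pair:
    # the shaped type (shape + element type) alongside flat storage.
    shape: Tuple[int, ...]
    element_type: str
    data: List[float]

    def num_elements(self) -> int:
        # Product of all dimension sizes.
        n = 1
        for dim in self.shape:
            n *= dim
        return n
```

For example, a `2x3` tensor of `f32` would carry `shape=(2, 3)`, `element_type="f32"`, and six values in `data`.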
16 changes: 8 additions & 8 deletions docs/spec_draft.md → docs/spec.md
@@ -1,4 +1,4 @@
-# StableHLO Specification Draft
+# StableHLO Specification

## Types

@@ -753,7 +753,7 @@ Afterwards, within each `process_group`:

### Semantics

-![](images/spec_draft/all_to_all.svg)
+![](images/spec/all_to_all.svg)

Within each process group in the StableHLO grid, splits the values of the
`operand` tensor along `split_dimension` into parts, scatters the split parts
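A minimal pure-Python sketch of this behavior for 1-dimensional per-process operands, assuming `split_dimension = concat_dimension = 0` and one split part per process (these simplifications are mine, not the spec's):

```python
def all_to_all(operands):
    # operands[i] is the 1-d "tensor" held by process i.
    num_procs = len(operands)

    def split(op):
        # Split op along dimension 0 into num_procs equal parts.
        n = len(op) // num_procs
        return [op[j * n:(j + 1) * n] for j in range(num_procs)]

    parts = [split(op) for op in operands]
    # Process j receives part j from every process and concatenates the
    # received parts along the concat dimension.
    return [sum((parts[i][j] for i in range(num_procs)), [])
            for j in range(num_procs)]
```

With two processes holding `[1, 2, 3, 4]` and `[5, 6, 7, 8]`, process 0 ends up with `[1, 2, 5, 6]` and process 1 with `[3, 4, 7, 8]`.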
@@ -1895,7 +1895,7 @@ Computes dot products between windows of `lhs` and slices of `rhs` and produces
`result`. The following diagram shows how elements in `result` are computed from
`lhs` and `rhs` using a concrete example.

-![](images/spec_draft/convolution.svg)
+![](images/spec/convolution.svg)

More formally, we start with reframing the inputs to the operation in terms
of `lhs` in order to be able to express windows of `lhs`:
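The windowed dot products can be illustrated with a 1-d toy version (stride 1, no padding, no dilation, single feature; this reduction of the general op is mine, not from the spec):

```python
def conv1d(lhs, rhs):
    # Each result element is the dot product of one window of lhs with rhs.
    w = len(rhs)
    return [sum(lhs[i + k] * rhs[k] for k in range(w))
            for i in range(len(lhs) - w + 1)]
```

For instance, `conv1d([1, 2, 3, 4], [1, 1])` slides a length-2 window over `lhs` and yields `[3, 5, 7]`.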
@@ -2709,7 +2709,7 @@ The following diagram shows how elements in `result` map on elements in
`operand` using a concrete example. The diagram picks a few example `result`
indices and explains in detail which `operand` indices they correspond to.

-![](images/spec_draft/gather.svg)
+![](images/spec/gather.svg)

More formally, `result[result_index] = operand[operand_index]` where:

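In the simplest 1-d case, where each result element is picked out by a single start index and full-slice machinery is elided, the mapping can be sketched as (a toy simplification, not the general op):

```python
def gather_1d(operand, start_indices):
    # result[result_index] = operand[operand_index], where in this toy
    # version the operand index is taken directly from start_indices.
    return [operand[i] for i in start_indices]
```

For example, `gather_1d([10, 20, 30], [2, 0])` produces `[30, 10]`.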
@@ -4000,7 +4000,7 @@ More formally:

### Semantics

-![](images/spec_draft/reduce_scatter.svg)
+![](images/spec/reduce_scatter.svg)

Within each process group in the StableHLO grid, performs reduction, using
`computations`, over the values of the `operand` tensor from each process,
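A pure-Python sketch of this for 1-d per-process operands, assuming `scatter_dimension = 0` and a reduction computation passed in as a callable (a simplified model, not the full op):

```python
def reduce_scatter(operands, computation):
    # Reduce the per-process operands elementwise...
    reduced = [computation(*vals) for vals in zip(*operands)]
    # ...then split the reduced tensor and scatter part i to process i.
    num_procs = len(operands)
    n = len(reduced) // num_procs
    return [reduced[i * n:(i + 1) * n] for i in range(num_procs)]
```

With addition as the computation and operands `[1, 2, 3, 4]` and `[10, 20, 30, 40]`, the reduced tensor `[11, 22, 33, 44]` is split so process 0 receives `[11, 22]` and process 1 receives `[33, 44]`.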
@@ -4104,7 +4104,7 @@ and produces `results`.
The following diagram shows how elements in `results[k]` are computed from
`inputs[k]` using a concrete example.

-![](images/spec_draft/reduce_window.svg)
+![](images/spec/reduce_window.svg)

More formally, `results[:][result_index] = reduce(windows, init_values, axes(inputs[:]), body)` where:
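A 1-d sketch of this formula, with window size and stride as explicit parameters and padding/dilation omitted (simplifications are mine):

```python
def reduce_window_1d(operand, init_value, window_size, stride, body):
    # Reduce each window with `body`, seeded by `init_value`.
    results = []
    start = 0
    while start + window_size <= len(operand):
        acc = init_value
        for x in operand[start:start + window_size]:
            acc = body(acc, x)
        results.append(acc)
        start += stride
    return results
```

For example, summing non-overlapping windows of size 2 over `[1, 2, 3, 4]` gives `[3, 7]`.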

@@ -4574,7 +4574,7 @@ The following diagram shows how elements in `updates[k]` map on elements in
`updates[k]` indices and explains in detail which `results[k]` indices they
correspond to.

-![](images/spec_draft/scatter.svg)
+![](images/spec/scatter.svg)

More formally, for all `update_index` from the index space of `updates[0]`:

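A toy 1-d version of this mapping, with a single result tensor, one scatter index per update, and the update computation passed as `body` (the general op is considerably richer):

```python
def scatter_1d(operand, scatter_indices, updates, body):
    # Start from the operand and fold each update into the selected element.
    result = list(operand)
    for index, update in zip(scatter_indices, updates):
        result[index] = body(result[index], update)
    return result
```

For instance, scattering `[10, 20]` onto indices `[0, 2]` of `[1, 2, 3]` with addition yields `[11, 2, 23]`.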
@@ -4766,7 +4766,7 @@ a `result` tensor.
The following diagram shows how elements in `result` are computed from
`operand` and `source` using a concrete example.

-![](images/spec_draft/select_and_scatter.svg)
+![](images/spec/select_and_scatter.svg)

More formally:

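As a rough 1-d illustration of these semantics, assuming window size 2, stride 2, a `select` predicate that favors its first argument when true, and addition for scattering (all of these simplifications are mine):

```python
def select_and_scatter_1d(operand, source, init_value, select):
    # For each window of `operand`, pick one element with `select` and
    # scatter the corresponding `source` value onto that position.
    result = [init_value] * len(operand)
    for w, s in enumerate(source):
        i = 2 * w
        j = i if select(operand[i], operand[i + 1]) else i + 1
        result[j] += s
    return result
```

With `operand = [1, 5, 4, 2]`, `source = [10, 20]`, and `select` picking the larger element, the values land on positions 1 and 2, giving `[0, 10, 20, 0]`.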
2 changes: 1 addition & 1 deletion docs/status.md
@@ -24,7 +24,7 @@ one of the following tracking labels.
implements some verification.
- Customized labels for Verifier and Type Inference
- **yes**: there is an implementation, and it's in sync with
-[StableHLO semantics](https://github.com/openxla/stablehlo/blob/main/docs/spec_draft.md).
+[StableHLO semantics](https://github.com/openxla/stablehlo/blob/main/docs/spec.md).
- **yes\***: there is an implementation, and it's in sync with
[XLA semantics](https://www.tensorflow.org/xla/operation_semantics).
Since XLA semantics is oftentimes underdocumented, we are using
2 changes: 1 addition & 1 deletion docs/type_inference.md
@@ -10,7 +10,7 @@ These proposals apply to both revisiting existing implementations, and achieving

## (P1) Use the StableHLO spec as the source of truth.

-The [spec](https://github.com/openxla/stablehlo/blob/main/docs/spec_draft.md) is the source of truth for all verifiers and shape functions of the StableHLO ops. The existing verifiers and shape functions of every op need revisited to be fully aligned with the specification. Note that the specification document keeps evolving, in cases that the spec for an op is not available, the XLA implementation should be used as the source of truth instead: including [xla/service/shape\_inference.cc](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/service/shape_inference.cc) and [xla/service/hlo\_verifier.cc](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/service/hlo_verifier.cc). XLA implementation doesn't cover unbounded dynamism, so for unbounded dynamism we'll apply common sense until the dynamism RFC is available.
+The [spec](https://github.com/openxla/stablehlo/blob/main/docs/spec.md) is the source of truth for all verifiers and shape functions of the StableHLO ops. The existing verifiers and shape functions of every op need revisited to be fully aligned with the specification. Note that the specification document keeps evolving, in cases that the spec for an op is not available, the XLA implementation should be used as the source of truth instead: including [xla/service/shape\_inference.cc](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/service/shape_inference.cc) and [xla/service/hlo\_verifier.cc](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/service/hlo_verifier.cc). XLA implementation doesn't cover unbounded dynamism, so for unbounded dynamism we'll apply common sense until the dynamism RFC is available.


## (P2) Make the most of ODS
4 changes: 2 additions & 2 deletions rfcs/20220912-compatibility.md
@@ -47,14 +47,14 @@ StableHLO (opset, libStablehlo and serialization format) will version in complia

**(G4) Version 0.x.x:** There will be some stability guarantees while in major version 0. There is not stability guaranteed within the major version, but we will provide 1 month of forward and backward compatibility between minor versions. This approach is chosen to allow dialect evolution and cleanup in the early days, as well as refine compatibility procedures while meeting the requirements of early adopters. Stability within major version will begin with version `1.x.x` and will happen in 2023.

-_*StableHLO semantics is defined by the StableHLO specification and can be tested using the StableHLO interpreter. Refer to the [StableHLO Specification](https://github.com/openxla/stablehlo/blob/main/docs/spec_draft.md) for reference semantics._
+_*StableHLO semantics is defined by the StableHLO specification and can be tested using the StableHLO interpreter. Refer to the [StableHLO Specification](https://github.com/openxla/stablehlo/blob/main/docs/spec.md) for reference semantics._

**Proposal 3:** Provide forward / backward compatibility within a major release, with major releases happening at least 5 years apart. Additionally provide backward compatibility for serialized artifacts across 1 major release.

## What's not covered?
**Bugs:** We may make backward incompatible changes if the current implementation is clearly broken, that is, if it contradicts the Operation's spec.

-**Unspecced features:** We may make backward incompatible changes to features which haven't been specced (see [StableHLO Specification](https://github.com/openxla/stablehlo/blob/main/docs/spec_draft.md)).
+**Unspecced features:** We may make backward incompatible changes to features which haven't been specced (see [StableHLO Specification](https://github.com/openxla/stablehlo/blob/main/docs/spec.md)).

**Numerical accuracy:** StableHLO has multiple ops that have implementation-defined accuracy across consumers and even within the same consumer across versions. As a result, StableHLO doesn't aim to make any guarantees about numerical accuracy, although this may change in a future RFC.

