
General improvements
Jegp committed Nov 17, 2024
1 parent ee1bd08 commit 7b74438
Showing 6 changed files with 142 additions and 27 deletions.
11 changes: 10 additions & 1 deletion docs/source/_config.yml
@@ -10,7 +10,16 @@ repository:
execute:
execute_notebooks: off

parse:
myst_enable_extensions:
- amsmath

launch_buttons:
notebook_interface: "jupyterlab"
binderhub_url: "https://mybinder.org/v2/gh/neuromorphs/nir/main?urlpath=lab"
colab_url: "https://colab.research.google.com"

sphinx:
extra_extensions:
- 'sphinx.ext.autodoc'

10 changes: 6 additions & 4 deletions docs/source/_toc.yml
@@ -1,8 +1,5 @@
format: jb-book
root: index
options:
html:
show_navbar_depth: 2
parts:
- caption: Introduction
chapters:
@@ -31,5 +28,10 @@ parts:
- caption: Developer guide
chapters:
- file: porting_nir
- file: api_design
- file: dev_pytorch
- file: dev_jax
- file: contributing
- caption: API documentation
chapters:
- file: api_design
- file: doctrees
6 changes: 6 additions & 0 deletions docs/source/dev_jax.md
@@ -0,0 +1,6 @@
# Developing JAX extensions

JAX is a popular deep learning framework on which a growing number of NIR-supported libraries are built.
For PyTorch, we have built the [`nirtorch` package](https://github.com/neuromorphs/nirtorch), but *no such package exists for JAX*.
If you're interested in developing one, please reach out to us, either on [Discord](https://discord.gg/JRMRGP9h3c) or by [opening an issue](https://github.com/neuromorphs/NIR/issues).
69 changes: 69 additions & 0 deletions docs/source/dev_pytorch.md
@@ -0,0 +1,69 @@
# Developing PyTorch extensions

PyTorch is a popular deep learning framework that many of the NIR-supported libraries are built on.
We have built the [`nirtorch` package](https://github.com/neuromorphs/nirtorch) to make it easier to develop PyTorch extensions for the NIR-supported libraries.
`nirtorch` helps you write PyTorch code that (1) exports NIR models from PyTorch and (2) imports NIR models into PyTorch.

## Exporting NIR models from PyTorch
Exporting a NIR model requires two things: exporting the model's nodes and exporting its edges.

### Exporting edges
Exporting edges is slightly more involved, because PyTorch modules can have multiple inputs and outputs, and because modules are connected via function calls, which only happen at runtime.
Therefore, we need to trace the PyTorch module with some sample input to recover the edges.
Luckily, the `nirtorch` package does exactly that for you.
It works behind the scenes, but you can read about it in the [`to_nir.py` file in `nirtorch`](https://github.com/neuromorphs/NIRTorch/blob/main/nirtorch/to_nir.py#L11).

### Exporting nodes
The only thing we really have to do to use `nirtorch` is to export modules.
Since all PyTorch modules inherit from the `torch.nn.Module` class, exporting the nodes is straightforward: we simply need a function that looks at a PyTorch module and returns the corresponding NIR node.
Assume this is done in a function called `export_node`.

```python
from typing import Optional

import nir
import torch

class MyModule(torch.nn.Module):
    weight: torch.Tensor
    bias: torch.Tensor


def export_node(module: torch.nn.Module) -> Optional[nir.NIRNode]:
    # Export the module to a NIR node
    if isinstance(module, MyModule):
        # A weight plus a bias corresponds to the NIR Affine node
        return nir.Affine(module.weight, module.bias)
    ...
```
This example converts a custom linear module (with weight and bias) to a NIR Affine node.
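For illustration, here is a hedged usage sketch of `export_node` on a single module instance; the way `MyModule` is constructed and filled with tensors here is made up for the example.

```python
# Hypothetical usage: MyModule above only declares its attributes,
# so we attach example tensors before exporting.
# (Depending on the NIR version, weights may need to be numpy arrays.)
module = MyModule()
module.weight = torch.randn(3, 2)
module.bias = torch.randn(3)

node = export_node(module)
print(node)  # a nir.Affine node carrying the module's weight and bias
```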

### Putting it all together
The following code is adapted from the [Norse library](https://github.com/norse/norse) and demonstrates how to export custom PyTorch models to NIR using the `nirtorch` package.
Note that we only have to declare the `export_node` function for each custom module we want to export.
The edges are traced automatically by the `nirtorch` package.

```python
# Imports added for completeness; they are not part of the original snippet
from typing import Optional

import nir
import torch
from nirtorch import extract_nir_graph
from norse.torch import LIFBoxCell


def _extract_norse_module(module: torch.nn.Module) -> Optional[nir.NIRNode]:
    if isinstance(module, LIFBoxCell):
        return nir.LIF(
            tau=module.p.tau_mem_inv,
            v_th=module.p.v_th,
            v_leak=module.p.v_leak,
            r=torch.ones_like(module.p.v_leak),
        )
    elif isinstance(module, torch.nn.Linear):
        # torch.nn.Linear carries a bias, so it maps to the NIR Affine node
        return nir.Affine(module.weight, module.bias)
    # ... further module types are handled in the full Norse source
    return None


def to_nir(
    module: torch.nn.Module, sample_data: torch.Tensor, model_name: str = "norse"
) -> nir.NIRNode:
    return extract_nir_graph(
        module, _extract_norse_module, sample_data, model_name=model_name
    )
```
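As a usage sketch, you could then export a small Norse network like the one below; the toy network and its shapes are made up for illustration.

```python
import norse.torch as norse
import torch

# A toy network: a linear projection followed by a LIF population
network = norse.SequentialState(
    torch.nn.Linear(10, 5),
    norse.LIFBoxCell(),
)
sample_data = torch.randn(1, 10)

# Trace the network with the sample data and export it to a NIR graph
nir_graph = to_nir(network, sample_data, model_name="my_network")
```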

## Importing NIR models into PyTorch
Importing NIR models into PyTorch with `nirtorch` is also straightforward.
Assuming you have a NIR graph in the Python object `nir_graph` (see [Usage](#usage)), you can turn it into an executable PyTorch module by providing a mapping from NIR nodes back to PyTorch modules.
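A minimal sketch of what this can look like; we assume here that `nirtorch` exposes a `load` function taking the graph and a node-to-module mapping (check the `nirtorch` documentation for the exact API).

```python
import nir
import nirtorch
import torch

def node_to_module(node: nir.NIRNode) -> torch.nn.Module:
    # Map each NIR node back to an equivalent PyTorch module
    if isinstance(node, nir.Affine):
        out_features, in_features = node.weight.shape
        linear = torch.nn.Linear(in_features, out_features)
        linear.weight.data = torch.as_tensor(node.weight, dtype=torch.float32)
        linear.bias.data = torch.as_tensor(node.bias, dtype=torch.float32)
        return linear
    ...  # handle the remaining node types in your graph

# Convert the NIR graph into an executable PyTorch module
torch_module = nirtorch.load(nir_graph, node_to_module)
```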
37 changes: 19 additions & 18 deletions docs/source/primitives.md
@@ -10,24 +10,25 @@ But, if you plan to execute the graph on restricted neuromorphic hardware, pleas

NIR defines 16 fundamental primitives listed in the table below, which backends are free to implement as they want, leading to varying outputs across platforms. While discrepancies could be minimized by constraining implementations or making backends aware of each other's discretization choices, NIR does not do this since it is declarative, specifying only the necessary inputs and outputs. Constraining implementations would cause hardware incompatibilities and making backends aware of each other could create large O(N^2) overhead for N backends. The primitives are already computationally expressive and able to solve complex PDEs.

| Primitive | Parameters | Computation | Reset |
|-|-|-|-|
| **Input** | Input shape | - | - |
| **Output** | Output shape | - | - |
| **Affine** | $W, b$ | $ W*I + b$ | - |
| **Convolution** | $W$, Stride, Padding, Dilation, Groups, Bias | $f \star g$ | - |
| **Current-based leaky integrate-and-fire** | $\tau_\text{syn}$, $\tau_\text{mem}$, R, $v_\text{leak}$, $v_\text{thr}$, $w_\text{in}$ | **LI**_1_; **Linear**; **LIF**_2_ | $\begin{cases} v_\text{LI\_2}-v_\text{thr} & \text{Spike} \\ v & \text{else} \end{cases}$ |
| **Delay** | $\tau$ | $I(t - \tau)$ | - |
| **Flatten** | Input shape, Start dim., End dim. | - | - |
| **Integrator** | $\text{R}$ | $\dot{v} = R I$ | - |
| **Integrate-and-fire** | $\text{R}, v_\text{thr}$ | **Integrator**; **Threshold** | $\begin{cases} v-v_\text{thr} & \text{Spike} \\ v & \text{else} \end{cases}$ |
| **Leaky integrator (LI)** | $\tau, \text{R}, v_\text{leak}$ | $\tau \dot{v} = (v_\text{leak} - v) + R I$ | - |
| **Linear** | $W$ | $W I$ | - |
| **Leaky integrate-fire (LIF)** | $\tau, \text{R}, v_\text{leak}, v_\text{thr}$ | **LI**; **Threshold** | $\begin{cases} v-v_\text{thr} & \text{Spike} \\ v & \text{else} \end{cases}$ |
| **Scale** | $s$ | $s I$ | - |
| **SumPooling** | $p$ | $\sum_{j} x_j$ | |
| **AvgPooling** | $p$ | **SumPooling**; **Scale** | - |
| **Threshold** | $\theta_\text{thr}$ | $H(I - \theta_\text{thr})$ | - |
| Primitive | Parameters | Computation | Reset |
|------------------------------------|---------------------------------------------------------------------------|----------------------------------------------------------|----------------------------------------------------------------------------------------|
| **Input** | Input shape | - | - |
| **Output** | Output shape | - | - |
| **Affine** | $W, b$ | $W \cdot I + b$ | - |
| **Convolution** | $W$, Stride, Padding, Dilation, Groups, Bias | $f \star g$ | - |
| **Current-based leaky integrate-and-fire** | $\tau_\text{syn}, \tau_\text{mem}, R, v_\text{leak}, v_\text{thr}, w_\text{in}$ | **LI**; **Linear**; **LIF** | $\begin{cases} v_\text{LIF} - v_\text{thr} & \text{Spike} \\ v_\text{LIF} & \text{else} \end{cases}$ |
| **Delay** | $\tau$ | $I(t - \tau)$ | - |
| **Flatten** | Input shape, Start dim., End dim. | - | - |
| **Integrator** | $R$ | $\dot{v} = R I$ | - |
| **Integrate-and-fire** | $R, v_\text{thr}$ | **Integrator**; **Threshold** | $\begin{cases} v - v_\text{thr} & \text{Spike} \\ v & \text{else} \end{cases}$ |
| **Leaky integrator (LI)** | $\tau, R, v_\text{leak}$ | $\tau \dot{v} = (v_\text{leak} - v) + R I$ | - |
| **Linear** | $W$ | $W I$ | - |
| **Leaky integrate-fire (LIF)** | $\tau, R, v_\text{leak}, v_\text{thr}$ | **LI**; **Threshold** | $\begin{cases} v - v_\text{thr} & \text{Spike} \\ v & \text{else} \end{cases}$ |
| **Scale** | $s$ | $s I$ | - |
| **SumPooling** | $p$ | $\sum_{j} x_j$ | - |
| **AvgPooling** | $p$ | **SumPooling**; **Scale** | - |
| **Spike**                          | $\theta_\text{thr}$                                                        | $\delta(I - \theta_\text{thr})$                            | -                                                                                        |


Each primitive is defined by its own dynamical equation, specified in the [API docs](https://nnir.readthedocs.io/en/latest/).
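As an illustration, reading the LIF row above as the composition of a leaky integrator and a threshold gives (our rendering of the table; the API docs remain authoritative):

$$
\tau \dot{v} = (v_\text{leak} - v) + R I, \qquad
z = H(v - v_\text{thr}), \qquad
v \leftarrow \begin{cases} v - v_\text{thr} & \text{if } z = 1 \\ v & \text{otherwise.} \end{cases}
$$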

36 changes: 32 additions & 4 deletions docs/source/usage.md
@@ -13,17 +13,18 @@ Please refer to the **Examples** section in the sidebar for code for each suppor
More code examples are available [in the repository for our paper](https://github.com/neuromorphs/NIR/tree/main/paper/).

## Example: Norse model to Sinabs Speck
This example demonstrates how to convert a Norse model to a Sinabs model and then to a Speck chip.
Note that Norse is based on PyTorch and uses [NIRTorch](#dev_pytorch) to convert PyTorch models to NIR.
You can also do this manually, by constructing your own NIR graphs as shown in our [API design documentation](#api_design).

### Part 1: Convert Norse model to NIR
```python
import torch
import norse.torch as norse

# Define our neural network model
model = norse.SequentialState(
    norse.LIFCell(),
    ...
)
model = ...

# Convert model to NIR
# Note that we use some sample data to "trace" the graph in PyTorch.
# You need to ensure that the shape of the data fits your model
Expand All @@ -44,3 +45,30 @@ dynapcnn_model = DynapcnnNetwork(sinabs_model, input_shape=sample_data.shape[-1]
# Move model to chip!
dynapcnn_model.to("speck2fdevkit")
```

## Example: Manually writing and reading NIR files
You can also manually write and read NIR files.
This is useful if you want to save a model to disk and use it later, or if you want to load a model that someone else has created.

### Writing a NIR file
[NIR consists of graphs](#primitives) that describe the structure of a neural network.
Our reference implementation uses Python to describe these graphs, so you can imagine having a graph in an object, say `nir_model`.
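If you don't already have such a graph at hand, here is a hedged sketch of building a tiny one by hand; the shapes are arbitrary and the field names follow our reading of the reference implementation, so see the [API design documentation](#api_design) for the authoritative version.

```python
import numpy as np
import nir

# A minimal graph: input -> affine map -> output
nir_model = nir.NIRGraph(
    nodes={
        "input": nir.Input(input_type=np.array([2])),
        "affine": nir.Affine(weight=np.random.randn(3, 2), bias=np.zeros(3)),
        "output": nir.Output(output_type=np.array([3])),
    },
    edges=[("input", "affine"), ("affine", "output")],
)
```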
To write this graph to file, you can use

```python
import nir
nir.write(nir_model, "my_model.nir")
```

### Reading a NIR file
Reading a NIR file is similarly easy and will give you a graph object that you can use in your code.

```python
import nir
nir_model = nir.read("my_model.nir")
```
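Once loaded, you can inspect the graph directly; a small sketch, assuming the file contains a `nir.NIRGraph`:

```python
# Nodes are stored in a dict (name -> primitive), edges as (source, target) name pairs
print(nir_model.nodes)
print(nir_model.edges)
```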

Note that the graph object (`nir_model`) doesn't do anything by itself.
You still need to convert it to a format that your hardware or simulator can understand.
Read more about this in the [Using NIR in hardware guide](#porting_nir).
