All notable changes between Rockpool releases will be documented in this file
- `AFESamna` module for XyloAudio 3: allows recording of input spikes from a live microphone
- Remove dependence on the `imp` module, which was removed in Python 3.12
- Fixed bug in multiplying `TSContinuous` objects with differing numbers of channels
- Support for Xylo™Audio 3 development kit
- Hardware interface via `samna`
- Digital microphone input and simulation package
- Cycles model
- Simulation support for audio front-end: `AFESimExternal`, `AFESimAGC`, and `AFESimPDM`, with all the necessary sub-modules
- Tutorial and documentation for the `SynNet` architecture, to improve visibility
- Update release notes for developers in documentation
- Add check for version
- Add check for copyright
- Update dependency version of Jax to >=0.4.28
- Move instructions to build documentation inside the `Contributing` section
- Add cycles model for Xylo Audio and Xylo IMU, enabling users to calculate the required master clock frequency for Xylo
- Add support for NIR, for importing and exporting Rockpool torch networks
- `LIFExodus` now supports vectors as threshold parameter
- Standard `LIF` modules now have `w_rec` as a simulation parameter when in non-recurrent mode
- Fixed `TypeError` when using `LIFExodus`
- Update `jax.config` usage
- Power measurement for `xyloA2` was not considering AFE channels
- Remove `check_grads` from Jax tests, since this will fail for LIF neurons due to surrogate gradients
- Fix a bug in `AFESim` on Windows, where the maximum int32 value would be exceeded when seeding the AFE simulation
- Fix stochasticity in some unit tests
- Fix a bug in `channel_quantize`, where quantization would be incorrectly applied for Xylo IMU networks with Nien < Nhid
- Fix a bug in `channel_quantize`, where hidden unit biases would be incorrectly used in place of output unit biases
- Fix a non-handled buffer bug in `LIFJax`, where non-recurrent modules would sometimes have garbage in `w_rec` instead of all zeros
- Fix a bug in `TorchSequential.as_graph()`, where torch module functions would be called instead of rockpool modules, leading to a failing call to `.as_graph()`
- Brian2 tests are not running -- the Brian2 backend will soon be removed
- Bug in Xylo IMU mapper, where networks with more than 128 hidden neurons could not be mapped
- Dependency on `pytest-random-order` v1.1.0 for test order randomization
- New HowTo tutorial for performing constrained optimisation with torch and jax
- Xylo IMU application software support:
  - `mapper`, `config_from_specification` and graph mapping support
  - `XyloSim` module: SNN core simulation for Xylo IMU
  - `IMUIFSim` module: Simulation of the input encoding interface, with sub-modules: `BandPassFilter`, `FilterBank`, `RotationRemoval`, `IAFSpikeEncoder`, `ScaleSpikeEncoder`
  - `XyloIMUMonitor` module: Real-time hardware monitoring for Xylo IMU
  - `XyloSamna` module: Interface to the SNN core
  - `IMUIFSamna` module: Interface to `IMUIF`, utilizing neurons in the SNN core
  - `IMUData` module: Collection of sensor data from the onboard IMU sensor
  - Utility functions for network mapping to the Xylo IMU HDK, interfacing, and data processing
- Introductory documentation providing an overview of Xylo IMU and instructions on configuring preprocessing.
- New losses, with structure similar to PyTorch (see the usage sketch below)
  - PeakLoss, which can be imported as `peak_loss = rockpool.nn.losses.PeakLoss()`
  - MSELoss, which can be imported as `mse_loss = rockpool.nn.losses.MSELoss()`
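A minimal usage sketch of these losses, assuming they follow the PyTorch-style callable pattern described above; the tensor shapes are illustrative only:

```python
import torch
from rockpool.nn.losses import MSELoss, PeakLoss

mse_loss = MSELoss()
peak_loss = PeakLoss()

# Illustrative tensors: 4 trials, 100 time steps, 2 output channels
output = torch.rand(4, 100, 2, requires_grad=True)
target = torch.rand(4, 100, 2)

# Call the loss like a PyTorch criterion; the scalar result supports .backward()
loss = mse_loss(output, target)
loss.backward()
```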
- Update dependency version of pytest-xdist to >=3.2.1.
- Update to the `Sequential` API. `Sequential` now permits instantiation with an `OrderedDict` to specify module names, and now supports an `.append()` method to append new modules, optionally specifying a module name (see the sketch below)
- Cleaned up tree manipulation libraries and added to documentation. Implemented unit tests
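A brief sketch of the updated `Sequential` API described above; the module shapes, names and the exact `.append()` argument order are illustrative assumptions:

```python
from collections import OrderedDict

from rockpool.nn.combinators import Sequential
from rockpool.nn.modules import LIF, Linear

# Instantiate with an OrderedDict to give each sub-module an explicit name
net = Sequential(
    OrderedDict(
        [
            ("input_weights", Linear((16, 32))),
            ("hidden_neurons", LIF(32)),
        ]
    )
)

# Append a further module, optionally giving it a name
net.append(Linear((32, 4)), "readout")
```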
- Removed obsolete unit tests
- Changed semantics of transformation configurations for QAT, to only include attributes which will be transformed, rather than all attributes. This fixes an incompatibility with torch >= 2.0.
- Added support for latest `torch` versions
- New fine-grained installation options
- Renamed power measurement dict keys returned by the Xylo Audio 2 (`syns61201`) `XyloSamna` module, to be more descriptive
- Upgrade minimum Python version supported to 3.8
- Upgrade minimum JAX version supported to 0.4.10
- Rearranged Xylo documentation to separate overview, Xylo Audio and Xylo IMU
- Fixed bug in initialising access to the MC3620 IMU sensor on the Xylo IMU HDK, where it would fail with an error on the second initialisation
- NEST backend completely removed
- Removed spiking output surrogate "U" from LIF modules
- Dynap-SE2 Application Software Support (jax-backend)
  - jax backend `DynapSim` neuron model with its own custom surrogate gradient implementation
  - `DynapSamna` module handling the low-level HDK interface under the hood
  - rockpool network <-> hardware configuration bi-directional conversion utilities
    - Network mapping: `mapper()` and `config_from_specification()`
    - sequentially combined `LinearJax` + `DynapSim` network getters: `dynapsim_net_from_config()` and `dynapsim_net_from_spec()`
  - transistor lookup tables to ease high-level parameters <-> currents <-> DAC (coarse, fine values) conversions
  - Dynap-SE2 specific auto-encoder quantization: `autoencoder_quantization()`
    - Custom `DigitalAutoEncoder` implementation and training pipeline
  - Custom `samna` alias classes compensating for the missing documentation support
  - unit tests + tutorials + developer docs
  - `DynapseNeuron` graph module which supports conversion from and to the `LIFNeuronWithSynsRealValue` graph
  - hardcoded frozen and dynamic mismatch prototypes
    - mismatch transformation (jax)
- `LIFExodus` now supports training time constants, and multiple time constants
- Improved API for `LIFTorch`
- Implemented `ExpSynExodus` for accelerated training of exponential synapse modules
- Added initial developer documentation
- Added MNIST tutorial
- Fixed notebook links to MyBinder.org
- Updated Samna version requirement to >=0.19.0
- User explicitly defines the CUDA device for `LIFExodus`, `ExpDynExodus` and `LIFMembraneExodus`
- Improved error message when a backend is missing
- Improved transient removal in `syns61201.AFESim`
- Weight scaling was too different for output layers and hidden layers
- Hotfix: Regression in `LIFExodus`
- Added support for Xylo-Audio v2 (SYNS61201) devices and HDK
- Added hardware versioning for Xylo devices
- Added a beta implementation of Quantisation-Aware Training for the Torch backend in `rockpool.transform.torch_transform`
- Added support for parameter boundary constraints in `rockpool.training.torch_loss`
- Added tutorial for Spiking Heidelberg Digits audio classification
- Added tutorial and documentation for WaveSense network architecture
- Added support to `LIFTorch` for training decays and bitshift parameters
- Added a new utility package `rockpool.utilities.tree_utils`
- Updated support for Exodus v1.1
- Updated `XyloSim.from_specification` to handle NIEN ≠ NRSN ≠ NOEN for Xylo devices
- Updated `LIFTorch` to provide proper `tau`s for `.as_graph()` in case of decay and bitshift training
- Improved backend management, to test torch version requirements
- Fixed usage of Jax optimisers in tutorial notebooks to reflect Jax API changes
- Fixed issues with `LIFTorch` and `aLIFTorch`, preventing the `deepcopy` protocol
- Fixed bug in `tree_utils`, where `Tree` was used instead of `dict` in `isinstance` check
- Replaced outdated reference from `FFRateEuler` to `Rate` module in high-level API tutorial
- Fixed seeds in torch and numpy to avoid `nan` loss problem while training in tutorial
- Fixed bug in `TorchModule` where assigning to an existing registered attribute would clear the family of the attribute
- Fixed a bug in Constant handling for `torch.Tensor`s, which would raise errors in torch 1.12
- Fixed bug in `LIFTorch`, which would cause recorded state to hang around post-evolution, causing errors from `deepcopy`
- Fixed bug in `Module._register_module()`, where replacing an existing submodule would cause the string representation to be incorrect
- Improved handling of weights when using `XyloSim.from_specification`
- `Linear...` modules now do not have a bias parameter, by default
- Support for Xylo SNN core v2, via XyloSim. Including biases and quantisation support; mapping and deployment for Xylo SNN core v2 (SYNS61201)
- Added support for Xylo-A2 test board, with audio recording support from Xylo AFE (`AFESamna` and `XyloSamna`)
- Support for an LIF neuron including a trainable adaptive threshold (`aLIFTorch`). Deployable to Xylo
- New module `BooleanState`, which maintains a boolean state
- Support for membrane potential training using `LIFExodus`
- Xylo package support for HW versioning (SYNS61300; SYNS61201)
- Ability to return events, membrane potentials or synaptic currents as output from `XyloSim` and `XyloSamna`
- Enhanced Xylo `mapper` to be more lenient about weight matrix size --- now assumes missing weights are zero
- Xylo `mapper` is now more lenient about HW constraints, permitting larger numbers of input and output channels than supported by existing HDKs
- Xylo `mapper` supports a configurable maximum number of hidden and output neurons
- Running `black` is enforced by the CI pipeline
- `Linear...` modules now export bias parameters, if they are present
- `Linear...` modules now do not include bias parameters by default
- Xylo `mapper` now raises a warning if any linear weights have biases
- `LIFSlayer` renamed to `LIFExodus`, corresponding to the `sinabs.exodus` library name change
- Periodic exponential surrogate function now supports training thresholds
- Fixes related to torch modules moved to simulation devices
- Fixed issue in `dropout.py`, where an ImportError was raised if jax was missing
- Fixed an issue with `Constant` `torch` parameters, where `deepcopy` would raise an error
- Fixed issue with newer versions of torch; torch v1.12 is now supported
- Updated to support changes in latest jax api
- Fixed bug in `WavesenseNet`, where neuron class would not be checked properly
- Fixed bug in `channel_quantize`, where unquantized weights were returned instead of quantized weights
- `LIFSlayer` is now deprecated
- Improved CI pipeline such that the pipeline is not blocked when sinabs.exodus cannot be installed
- Fixed UserWarning raised by some torch-backed modules
- Improved some unit tests
- Standard dynamics introduced for LIF, Rate, Linear, Instant, ExpSyn. These are standardised across Jax, Torch and Numpy backends. We make efforts to guarantee identical dynamics for the standard modules across these backends, down to numerical precision
- LIF modules can now train thresholds and biases as well as time constants
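As a rough illustration of the standardisation in the two items above, the same LIF configuration can be instantiated against the Numpy, Jax and Torch back-ends; the parameter values below are arbitrary examples:

```python
from rockpool.nn.modules import LIF, LIFJax, LIFTorch

# One configuration, three back-ends with matching standard dynamics.
# Time constants, thresholds and biases are exposed as trainable parameters.
kwargs = dict(tau_mem=20e-3, tau_syn=10e-3, threshold=1.0, dt=1e-3)

mod_numpy = LIF(8, **kwargs)
mod_jax = LIFJax(8, **kwargs)
mod_torch = LIFTorch(8, **kwargs)
```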
- New `JaxODELIF` module, which implements a trainable LIF neuron following common dynamical equations for LIF neurons
- New addition of the WaveSense network architecture, for temporal signal processing with SNNs. This is available in `rockpool.networks`, and is documented with a tutorial
- A new system for managing computational graphs, and mapping these graphs onto hardware architectures was introduced. These are documented in the Xylo quick-start tutorial, and in more detail in tutorials covering Computational Graphs and Graph Mapping. The mapping system performs design-rule checks for the Xylo HDK
- Included methods for post-training quantisation for Xylo, in `rockpool.transform`
- Added simulation of a divisive normalisation block for Xylo audio applications
- Added a `Residual` combinator, for convenient generation of networks with residual blocks
- Support for `sinabs` layers and Exodus
- `Module`, `JaxModule` and `TorchModule` provide a facility for auto-batching of input data. Input data shape is `(B, T, Nin)`, or `(T, Nin)` when only a single batch is provided (see the sketch below)
- Expanded documentation on parameters and type-hinting
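A small sketch of the auto-batching convention above, assuming a Rockpool module returns the usual `(output, state, record)` tuple on evolution; the module and shapes are illustrative:

```python
import numpy as np
from rockpool.nn.modules import LIF

mod = LIF(8)  # 8 LIF neurons

# A single trial with shape (T, Nin) ...
single_trial = np.random.rand(100, 8)
out, state, record = mod(single_trial)

# ... or several trials at once with shape (B, T, Nin)
batch = np.random.rand(5, 100, 8)
out, state, record = mod(batch)
```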
- Python > 3.6 is now required
- Improved import handling, when various computational back-ends are missing
- Updated for new versions of `samna`
- Renamed Cimulator -> XyloSim
- Better parameter handling and rockpool/torch parameter registration for Torch modules
- (Most) modules can accept batched input data
- Improved / additional documentation for Xylo
- Improved type casting and device handling for Torch modules
- Fixed bug in `Module`, where `modules()` would return a non-ordered dict. This caused issues with `JaxModule`
- Removed several obsolete `Layer`s and `Network`s from Rockpool v1
- Added support for the Xylo development kit in `.devices.xylo`, including several tutorials
- Added CTC loss implementations in `.training.ctc_loss`
- New trainable `torch` modules: `LIFTorch` and others in `.nn.modules.torch`, including an asynchronous delta modulator `UpDownTorch`
- Added `torch` training utilities and loss functions in `.training.torch_loss`
- New `TorchSequential` class to support the `Sequential` combinator for `torch` modules
- Added a `FFwdStackTorch` class to support the `FFwdStack` combinator for `torch` modules
- Existing `LIFTorch` module renamed to `LIFBitshiftTorch`; updated module to align better with the Rockpool API
- Improvements to the `.typehints` package
- `TorchModule` now raises an error if submodules are not `TorchModule`s
- Updated LIF torch training tutorial to use the new `LIFBitshiftTorch` module
- Improved installation instructions for `zsh`
- 👹 Adversarial training of parameters using the Jax back-end, including a tutorial
- 🐰 "Easter" tutorial demonstrating an SNN trained to generate images
- 🔥 Torch tutorials for training non-spiking and spiking networks with Torch back-ends
- Added new method `nn.Module.timed()`, to automatically convert a module to a `TimedModule` (see the sketch below)
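A one-line sketch of this conversion, using `LIF` as a stand-in low-level module:

```python
from rockpool.nn.modules import LIF

mod = LIF(4)        # low-level module operating on rasterised arrays
tmod = mod.timed()  # wrapped as a TimedModule that accepts TimeSeries input
```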
- New `LIFTorch` module that permits training of neuron and synaptic time constants in addition to other network parameters
- New `ExpSynTorch` module: exponential leak synapses with Torch back-end
- New `LinearTorch` module: linear model with Torch back-end
- New `LowPass` module: exponential smoothing with Torch back-end
- New `ExpSmoothJax` module: single time-constant exponential smoothing layer, supporting arbitrary transfer functions on output
- New `softmax` and `log_softmax` losses in `jax_loss` package
- New `utilities.jax_tree_utils` package containing useful parameter tree handling functions
- New `TSContinuous.to_clocked()` convenience method, to easily rasterise a continuous time series
- Alpha: Optional `_wrap_recorded_state()` method added to the `nn.Module` base class, which supports wrapping recorded state dictionaries as `TimeSeries` objects, when using the high-level `TimeSeries` API
- Support for `add_events` flag for time-series wrapper class
- New Parameter dictionary classes to simplify conversion and handling of Torch and Jax module parameters
- Added `astorch()` method to parameter dictionaries returned from `TorchModule`
- Improved type hinting
- Old `LIFTorch` module renamed to `LIFBitshiftTorch`
- Kaiming and Xavier initialisation support for `Linear` modules
- `Linear` modules provide a bias by default
- Moved `filter_bank` package from V1 layers into `nn.modules`
- Update Jax requirement to > v2.13
- Fixed binder links for tutorial notebooks
- Fixed bug in `Module` for multiple inheritance, where the incorrect `__repr__()` method would be called
- Fixed `TimedModuleWrapper.reset_state()` method
- Fixed axis limit bug in `TSEvent.plot()` method
- Removed page width constraint for docs
- Enable `FFExpSyn` module by making it independent of old `RRTrainedLayer`
- Removed `rpyc` dependency
- New Rockpool API. Breaking change from v1.x
- Documentation for new API
- Native support for Jax and Torch backends
- Many v1 Layers transferred
- Hotfix to remove references to ctxctl and aiCTX
- Hotfix to include NEST documentation in CI-built docs
- Hotfix to include change log in build docs
- Considerably expanded support for Denève-Machens spike-timing networks, including training arbitrary dynamical systems in a new `RecFSSpikeADS` layer. Added tutorials for standard D-M networks for linear dynamical systems, as well as a tutorial for training ADS networks
- Added a new "Intro to SNNs" getting-started guide
- A new "sharp points of Rockpool" tutorial collects the tricks and traps for new users and old
- A new `Network` class, `JaxStack`, supports stacking and end-to-end gradient-based training of all Jax-based layers. A new tutorial has been added for this functionality
- `TimeSeries` classes now support best-practices creation from clock or rasterised data. `TSContinuous` provides a `.from_clocked()` method, and `TSEvent` provides a `.from_raster()` method for this purpose. `.from_clocked()` uses a sample-and-hold interpolation, for intuitive generation of time series from periodically-sampled data (see the sketch below)
- `TSContinuous` now supports a `.fill_value` property, which permits extrapolation using `scipy.interpolate`
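A short sketch of the creation methods mentioned above; the argument names follow the description and are assumptions here:

```python
import numpy as np
from rockpool import TSContinuous, TSEvent

dt = 1e-3

# Continuous series from periodically-sampled data (sample-and-hold interpolation)
samples = np.random.rand(100, 2)        # (T, channels)
ts_cont = TSContinuous.from_clocked(samples, dt=dt)

# Event series from a spike raster
raster = np.random.rand(100, 2) < 0.05  # boolean (T, channels) raster
ts_evt = TSEvent.from_raster(raster, dt=dt)
```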
- New `TSDictOnDisk` class for storing `TimeSeries` objects transparently on disk
- Allow ignoring data points for specific readout units in ridge regression Fisher relabelling. To be used, for example, with all-vs-all classification
- Added exponential synapse Jax layers
- Added `RecLIFCurrentIn_SO` layer
- `TSEvent` time series no longer support creation without explicitly setting `t_stop`. The previous default of taking the final event time as `t_stop` was causing too much confusion. For related reasons, `TSEvent` now forbids events to occur at `t_stop`
- `TimeSeries` classes by default no longer permit sampling outside of the time range they are defined for, raising a `ValueError` exception if this occurs. This renders safe several traps that new users were falling into. This behaviour is selectable per time series, and can be switched to a warning instead of an exception using the `beyond_range_exception` flag
- Jax trainable layers now import from a new mixin class `JaxTrainer`. The class provides a default loss function, which can be overridden in each sub-class to provide suitable regularisation. The training interface now returns loss value and gradients directly, rather than requiring an extra function call and additional evolution
- Improved training method for JAX rate layers, to permit parameterisation of loss function and optimiser
- Improved the `._prepare_input...()` methods in the `Layer` class, such that all `Layer`s that inherit from this superclass are consistent in the number of time steps returned from evolution
- The `Network.load()` method is now a class method
- Test suite now uses multiple cores for faster testing
- Changed company branding from aiCTX -> SynSense
- Documentation is now hosted at https://rockpool.ai
- Fixed bugs in precise spike-timing layer `RecSpikeBT`
- Fixed behavior of `Layer` class when passing weights in wrong format
- Stability improvements in `DynapseControl`
- Fix faulty z_score_standardization and Fisher relabelling in `RidgeRegrTrainer`. Fisher relabelling now has better handling of differently sized batches
- Fixed bugs in saving and loading several layers
- More sensible default values for `VirtualDynapse` baseweights
- Fix handling of empty `channels` argument in `TSEvent._matching_channels()` method
- Fixed bug in `Layer._prepare_input`, where it would raise an AssertionError when no input TS was provided
- Fixed a bug in `train_output_target`, where the gradient would be incorrectly handled if no batching was performed
- Fixed `to_dict` method for `FFExpSynJax` classes
- Removed redundant `_prepare_input()` method from Torch layer
- Many small documentation improvements
- Introduced new `TimeSeries` class method `concatenate_t()`, which permits construction of a new time series by concatenating a set of existing time series, in the time dimension
- `Network` class now provides a `to_dict()` method for export. `Network` now also can treat sub-`Network`s as layers
- Training methods for spiking LIF Jax-backed layers in `rockpool.layers.training`. Tutorial demonstrating SGD training of a feed-forward LIF network. Improvements in JAX LIF layers
- Added `filter_bank` layers, providing `layer` subclasses which act as filter banks with spike-based output
- Added a `filter_width` parameter for butterworth filters
- Added a convenience function `start_at_zero()` to delay `TimeSeries` so that it starts at 0
- Added a change log in `CHANGELOG.md`
- Improved `TSEvent.raster()` to make it more intuitive. Rasters are now produced in line with time bases that can be created easily with `numpy.arange()`
- Updated `conda_merge_request.sh` to work for conda feedstock
- `TimeSeries.concatenate()` renamed to `concatenate_t()`
- `RecRateEuler` warns if `tau` is too small instead of silently changing `dt`
- Fixed issue in `Layer`, where an internal property was used when accessing `._dt`. This causes issues with layers that have an unusual internal type for `._dt` (e.g. if data is stored in a JAX variable on GPU)
- Reduce memory footprint of `.TSContinuous` by approximately half
- Reverted regression in layer class `.RecLIFJax_IO`, where `dt` was by default set to `1.0`, instead of being determined by `tau_...`
- Fixed incorrect use of `Optional[]` type hints
- Allow for small numerical differences in comparison between weights in NEST test `test_setWeightsRec`
- Improvements in inline documentation
- Increasing memory efficiency of `FFExpSyn._filter_data` by reducing kernel size
- Implemented numerically stable timestep count for TSEvent rasterisation
- Fixed bugs in `RidgeRegrTrainer`
- Fix plotting issue in time series
- Fix bug of `RecRateEuler` not handling `dt` argument in `__init__()`
- Fixed scaling between torch and nest weight parameters
- Move `contains()` method from `TSContinuous` to `TimeSeries` parent class
- Fix warning in `RRTrainedLayer._prepare_training_data()` when times of target and input are not aligned
- Brian layers: Replace `np.asscalar` with `float`
- New `.Layer` superclass `.RRTrainedLayer`. This superclass implements ridge regression for layers that support ridge regression training
- `.TimeSeries` subclasses now add axes labels on plotting
- New spiking LIF JAX layers, with documentation and tutorials: `.RecLIFJax`, `.RecLIFJax_IO`, `.RecLIFCurrentInJax`, `.RecLIFCurrentInJAX_IO`
- Added `save` and `load` facilities to `.Network` objects
- `._matching_channels()` now accepts an arbitrary list of event channels, which is used when analysing a periodic time series
- Documentation improvements
- :py:meth:`.TSContinuous.plot` method now supports `stagger` and `skip` arguments
- `.Layer` and `.Network` now deal with a `.Layer.size_out` attribute. This is used to determine whether two layers are compatible to connect, rather than using `.size`
- Extended unit test for periodic event time series to check non-periodic time series as well
- Fixed bug in `TSEvent.plot()`, where stop times were not correctly handled
- Fix bug in `Layer._prepare_input_events()`, where if only a duration was provided, the method would return an input raster with an incorrect number of time steps
- Fixed bugs in handling of periodic event time series `.TSEvent`
- Bug fix: `.Layer._prepare_input_events` was failing for `.Layer`s with spiking input
- `TSEvent.__call__()` now correctly handles periodic event time series
- CI build and deployment improvements
- CI Build and deployment improvements
- Remove deployment dependency on docs
- Hotfix: Fix link to `Black`
- Add links to gitlab docs
- Hotfix for incorrect license text
- Updated installation instructions
- Included some status indicators in readme and docs
- Improved CI
- Extra meta-data detail in
setup.py
- Added more detail for contributing
- Update README.md
- First public release