Releases: lava-nc/lava-dl
Lava DL 0.6.0
What's Changed
- Add missing license information by @PhilippPlank in #266
- changing dependencies back to dev by @PhilippPlank in #267
- Bump cryptography from 41.0.5 to 41.0.6 by @dependabot in #268
- Bump jinja2 from 3.1.2 to 3.1.3 by @dependabot in #279
- Bump pillow from 10.1.0 to 10.2.0 by @dependabot in #281
- Bump gitpython from 3.1.40 to 3.1.41 by @dependabot in #278
- Bump cryptography from 41.0.6 to 42.0.0 by @dependabot in #284
- fixing sign of surrogate gradient for graded spikes by @ccaccavella in #286
- Bump cryptography from 42.0.0 to 42.0.2 by @dependabot in #289
- Bump cryptography from 42.0.2 to 42.0.4 by @dependabot in #290
- Fixed event_rate function by @elrond91 in #294
- Bump black from 22.12.0 to 24.3.0 by @dependabot in #295
- Bump pillow from 10.2.0 to 10.3.0 by @dependabot in #297
- Bump idna from 3.6 to 3.7 by @dependabot in #299
- MNIST Inference Tutorial using netx by @R-Gaurav in #302
- Bump jinja2 from 3.1.3 to 3.1.4 by @dependabot in #307
- fixed a small bug in the adrf neuron forward function by @felixthoe in #308
- Bump requests from 2.31.0 to 2.32.0 by @dependabot in #317
- Bump tornado from 6.3.3 to 6.4.1 by @dependabot in #323
- Recurrent netx save and load by @timcheck in #324
- Accelerated BDD100K by @bamsumit in #341
- Fix weight_exp is None for netx DelaySynapse by @Michaeljurado42 in #316
- Demo merge by @mgkwill in #343
- Bump urllib3 from 2.1.0 to 2.2.2 by @dependabot in #325
- Pytorch 2.3.1 by @mgkwill in #351
- Release 0.6.0 by @mgkwill in #352
New Contributors
- @ccaccavella made their first contribution in #286
- @elrond91 made their first contribution in #294
- @R-Gaurav made their first contribution in #302
- @felixthoe made their first contribution in #308
Full Changelog: v0.5.0...v0.6.0
Lava Deep Learning 0.5.0
Lava Deep Learning v0.5.0 Release Notes
November 9, 2023
What's Changed
- Ensure clamping of delay values during network export and import by @bamsumit in #215
- Bump tornado from 6.3.2 to 6.3.3 by @dependabot in #228
- Bump cryptography from 41.0.2 to 41.0.3 by @dependabot in #229
- Affine hdf5 export (#221) by @ahenkes1 in #222
- Added XOR-Regression tutorial. by @ahenkes1 in #227
- Spikemoid pr by @Michaeljurado42 in #231
- Bump gitpython from 3.1.32 to 3.1.34 by @dependabot in #232
- Bump gitpython from 3.1.34 to 3.1.35 by @dependabot in #234
- Bump cryptography from 41.0.3 to 41.0.4 by @dependabot in #240
- Device parameter for Sigma Dendrite by @bamsumit in #241
- Sparsity netx pr by @Michaeljurado42 in #238
- Bump urllib3 from 1.26.16 to 1.26.17 by @dependabot in #245
- Bump pillow from 9.5.0 to 10.0.1 by @dependabot in #246
- Update pillow version in pyproject.toml by @PhilippPlank in #247
- Set user defined spike_exp level globally when creating netx network by @bamsumit in #249
- Bump gitpython from 3.1.35 to 3.1.37 by @dependabot in #251
- Dev/feature yolo by @bamsumit in #243
- Bump torch requirements by @bamsumit in #250
- Updated readme by @bamsumit in #254
- Reduce the file size of yolo notebook using mp4 export by @bamsumit in #256
- Bump urllib3 from 2.0.6 to 2.0.7 by @dependabot in #257
- YOLO-KP inference by @bamsumit in #262
- Yolo kp part II by @bamsumit in #263
- Fix pypi publish in cd.yml by @mgkwill in #264
New Features and Improvements
- Lava-dl SLAYER now has extended support for training and inference of video object detection networks, along with the associated pre- and post-processing utilities used for object detection. The object detection module is available as `lava.lib.dl.slayer.obd`. The modules are described below:

  | Module | Description |
  |---|---|
  | `obd.yolo_base` | the foundational model for YOLO object detection training, which can be used to build a variety of YOLO models |
  | `obd.models` | selected pre-trained YOLO SDNN models which can be fine-tuned for user-specific applications |
  | `obd.dataset` | object detection dataset library (will be progressively extended) |
  | `obd.bbox.metrics` | modules to evaluate object detection models |
  | `obd.{bbox, dataset}.utils` | utilities to manipulate bounding boxes and process datasets, including frame visualization and video export |

  Extensive tutorials for
  - YOLO SDNN training for video object detection,
  - YOLO SDNN inference on GPU, and
  - YOLO SDNN inference on Lava and Loihi

  are also available.
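Evaluation utilities such as `obd.bbox.metrics` compare predicted boxes against ground truth; the core quantity in such object detection metrics is the intersection-over-union (IoU) of two boxes. A minimal plain-Python sketch of IoU, illustrative only and not the lava-dl API:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x_min, y_min, x_max, y_max)."""
    # Intersection rectangle corners
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # two 2x2 boxes overlapping in a 1x1 square
```

Detection metrics such as mean average precision are built on exactly this kind of pairwise box overlap test.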
In addition, the lava-dl SLAYER tutorials now include an XOR regression tutorial as a basic example to get started with lava-dl training.
Finally, lava-dl SLAYER now supports the SpikeMoid loss, the official implementation of the spike-based loss introduced in Jurado et al., "Spikemoid: Updated Spike-based Loss Methods for Classification," which enables more advanced tuning of SNNs for classification.
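Conceptually, a spike-based sigmoid loss of this kind treats a neuron's spike rate as a logit: the rate is shifted and scaled, squashed through a sigmoid, and compared to the target with binary cross-entropy. A rough conceptual sketch in plain Python; the `theta`/`alpha` parameter names are assumptions for illustration, not lava-dl's actual interface:

```python
import math

def spike_sigmoid_loss(spike_count, num_steps, target, theta=0.5, alpha=0.1):
    """Binary cross-entropy on a sigmoid of the shifted, scaled spike rate.

    theta (threshold) and alpha (temperature) are illustrative parameters.
    """
    rate = spike_count / num_steps                       # empirical spike rate in [0, 1]
    p = 1.0 / (1.0 + math.exp(-(rate - theta) / alpha))  # sigmoid "probability"
    eps = 1e-12                                          # avoid log(0)
    return -(target * math.log(p + eps) + (1 - target) * math.log(1 - p + eps))

# A high spike rate should give low loss for target 1 and high loss for target 0.
assert spike_sigmoid_loss(90, 100, target=1) < spike_sigmoid_loss(90, 100, target=0)
```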
- Lava-dl NetX now lets users configure inference of fully connected layers using sparse synapses instead of the default dense synapses. This allows the network to leverage the compression offered by sparse synapses when the fully connected weights are sparse enough. It is as simple as setting `sparse_fc_layer=True` when initializing a `netx.hdf5.Network`.

  `netx.hdf5.Network` also supports global control of the spike exponent (the fractional portion of the spike message) via the `spike_exp` keyword. This allows users to control the network behavior in a more fine-grained manner and potentially avoid data overflow on Loihi hardware.

  In addition, lava-dl NetX now includes sequential modules in `netx.modules`. These modules allow the creation of PyTorch-style callable constructs whose behavior is described in their `forward` function. They also allow non-critical but expensive management between calls to execute in a parallel thread, so that the execution flow is not blocked. `netx.modules.Quantize` and `netx.modules.Dequantize` are pre-built to provide consistent quantization and dequantization to/from the fixed-precision representation of the NetX network. Their usage can be seen in the YOLO SDNN inference on Lava and Loihi tutorial.
Bug Fixes and Other Changes
- Lava-dl SLAYER is now Torch 2.0 compatible, allowing users to take advantage of Torch 2.0+ features.
- Fixes have been included in lava-dl SLAYER that enable hdf5 export of affine blocks and proper handling of out-of-bound delays during hdf5 export.
Breaking Changes
- No breaking changes in this release.
Known Issues
- No known issues in this release.
Full Changelog: v0.4.0...v0.5.0
Lava Deep Learning 0.4.0
What's Changed
- Add git-lfs instructions to the README.md by @mgkwill in #188
- Paolo gcd branch by @PaoloGCD in #193
- File renamed to match source structure. by @bamsumit in #201
- Bump requests from 2.28.1 to 2.31.0 by @dependabot in #203
- Bump tornado from 6.3.1 to 6.3.2 by @dependabot in #204
- Bump cryptography from 39.0.1 to 41.0.0 by @dependabot in #207
- Update license-metadata in pyproject.toml for pypi compatibility by @mgkwill in #190
- Bump cryptography from 41.0.0 to 41.0.2 by @dependabot in #209
Full Changelog: v0.3.3...v0.4.0
Lava Deep Learning 0.3.3
What's Changed
- Changing dependency of lava-nc back to main for development by @PhilippPlank in #124
- Link to SDN tutorial in Pilotnet training tutorial by @weidel-p in #126
- Cancel old CI run if new one is queued by @PhilippPlank in #128
- Add netx support for rf.Dense and rf_iz.Dense by @Michaeljurado42 in #110
- Stats printout by @bamsumit in #131
- Small fix to check for complex synapse in create_dense by @Michaeljurado42 in #130
- Link lava-dl decolle in readme by @bamsumit in #133
- Changed pre-/post hooks by @PhilippPlank in #138
- version change due to lava release by @PhilippPlank in #139
- Version change back to main branch after release by @PhilippPlank in #141
- updating Bandit to resolve dependabot alert by @michaelbeale-IL in #145
- Dependabot fixes by @michaelbeale-IL in #146
- Dependabot fixes by @michaelbeale-IL in #147
- Benchmark notebooks updated to new callback_fx api in Loihi2HwCfg by @bamsumit in #152
- Fixed Spikemax for moving window case by @Michaeljurado42 in #144
- Fix for netx mishandling scalar input dimension by @bamsumit in #156
- Update templates by @PhilippPlank in #157
- Bump cryptography from 39.0.0 to 39.0.1 by @dependabot in #161
- Persistent delay buffer to correctly run slayer blocks one timestep at a time by @bamsumit in #169
- truncate weight matrix if min/max out of bounds by @stevenabreu7 in #171
- Axonal delay as synaptic delay in NetX by @bamsumit in #173
- Release 0.3.3 by @mgkwill in #184
New Contributors
- @PhilippPlank made their first contribution in #124
- @weidel-p made their first contribution in #126
- @Michaeljurado42 made their first contribution in #110
- @michaelbeale-IL made their first contribution in #145
- @stevenabreu7 made their first contribution in #171
Full Changelog: v0.3.2...v0.3.3
Lava Deep Learning 0.3.2
Lava Deep Learning v0.3.2 Release Notes
New Features and Improvements
- No new features or improvements in this release.
Bug Fixes and Other Changes
- Updated dependency on lava-nc from main to version 0.5.1.
Breaking Changes
- No breaking changes in this release.
Known Issues
- No known issues in this release.
Thanks to our Contributors
- Intel Labs Lava Developers
Full Changelog: v0.3.1...v0.3.2
Lava Deep Learning 0.3.1
Lava Deep Learning v0.3.1 Release Notes
October 31, 2022
The lava-dl library version 0.3.1 now includes additional deep SNN inference and benchmarking tutorials.
New Features and Improvements
- Merged a PilotNet LIF inference tutorial (#119)
- Merged benchmarking tutorials for PilotNet SDNN and PilotNet LIF (#119)[^1]
Bug Fixes and Other Changes
- Fixed issue with imports for recurrent tests (#112)
- Fixed a bug causing improper device configuration in `lava.lib.dl.slayer` neuron normalization (#116)
Breaking Changes
- No breaking changes in this release.
Known Issues
- Issue training with GPU for lava-dl-slayer on Windows machines.
Thanks to our Contributors
- Intel Labs Lava Developers
- Tobias Fischer
- fangwei123456
Full Changelog: v0.3.0...v0.3.1
[^1]: Intel Core i5-5257U with 32GB RAM, running Ubuntu 20.04.2 LTS with lava v0.5.1. Performance results are based on testing as of November 2022 and may not reflect all publicly available security updates. Results may vary.
Lava Deep Learning 0.3.0
The lava-dl library version 0.3.0 now enables inference for trained spiking networks seamlessly on CPU or Loihi 2 backends and can leverage Loihi 2’s convolutional network compression and graded spike features for improved memory usage and performance.
New Features and Improvements
- Added Loihi 2 support in lava-dl NetX utilizing Loihi 2 convolution support and graded spikes (PR #88, #107).
- Added a tutorial demonstrating PilotNet application running on Intel Loihi 2 (PR #107).
- Added accelerated training of recurrent topologies in lava-dl SLAYER (PR #103).
- Added Transposed Convolution and Unpool support in lava-dl SLAYER (PR #80).
Bug Fixes and Other Changes
Breaking Changes
- No breaking changes in this release
Known Issues
- Issue training with GPU for lava-dl-slayer on Windows machines.
What's Changed
- Remove unnecessary imports by @Tobias-Fischer in #56
- Update mnist.py by @uslumt in #71
- Updated notebooks with new hyperparameters and typo fixes by @bamsumit in #76
- Add pilotnet integration tests by @mgkwill in #79
- Slayer fixes by @bamsumit in #78
- Changes to lava-dl to reflect api changes in lava 0.4.0 by @bamsumit in #88
- Added ConvT and Unpool block for neurons by @alexggener in #80
- Update ci-build.yml, Remove redundant poetry updates by @mgkwill in #89
- Recurrent mechanic by @timcheck in #103
- Bump nbconvert from 6.5.0 to 6.5.1 by @dependabot in #93
- fix bug of `block.AbstractInput` by @fangwei123456 in #105
- Loihi Tutorials by @bamsumit in #107
- Add conda install instructions with intel-numpy by @mgkwill in #91
- Version 0.3.0 by @mgkwill in #109
New Contributors
- @Tobias-Fischer made their first contribution in #56
- @uslumt made their first contribution in #71
- @alexggener made their first contribution in #80
- @timcheck made their first contribution in #103
- @dependabot made their first contribution in #93
- @fangwei123456 made their first contribution in #105
Full Changelog: v0.2.0...v0.3.0
Lava Deep Learning 0.2.0
The lava-dl library version 0.2.0 now supports automated generation of Lava processes for a trained network described by an hdf5 network configuration, using our Network Exchange (NetX) library.
New Features and Improvements
- Released the Network Exchange (NetX) library to support automated creation of Lava processes for a deep network. We support the hdf5 network exchange format; support for more formats will be introduced in the future. (PR #30, Issue #29)
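The idea behind NetX is that the trained network is stored as a per-layer description from which runtime objects are created mechanically. A toy sketch of that pattern with invented field names; the real hdf5 schema and layer types live in `lava.lib.dl.netx`:

```python
# Toy layer descriptions standing in for what an hdf5 network file encodes.
# Field names here are illustrative, not the actual NetX schema.
network_description = [
    {"type": "dense", "in": 4, "out": 8},
    {"type": "dense", "in": 8, "out": 2},
]

class DenseLayer:
    def __init__(self, in_features, out_features):
        # Weight matrix shape follows the (out, in) convention.
        self.shape = (out_features, in_features)

def build(description):
    """Instantiate one runtime layer object per stored layer description."""
    factory = {"dense": lambda d: DenseLayer(d["in"], d["out"])}
    return [factory[d["type"]](d) for d in description]

layers = build(network_description)
print([layer.shape for layer in layers])  # [(8, 4), (2, 8)]
```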
Bug Fixes and Other Changes
- Fixed bug with pre-hook quantization function on conv blocks (PR#13)
Breaking Changes
- No breaking changes in this release
Known Issues
- Issue training with GPU for lava-dl-slayer on Windows machines.
What's Changed
- Create PULL_REQUEST_TEMPLATE.md & ISSUE_TEMPLATE.md by @mgkwill in #27
- Hardware neuron parameters exchange and fixed precision instruction precision compatibility by @bamsumit in #25
- Pilotnet link fix by @bamsumit in #31
- Bugfix: CUBA neuron normalization applied to current state by @bamsumit in #35
- Netx by @bamsumit in #30
- Streamline PilotNet SNN notebook with RefPorts by @bamsumit in #37
- Fix for failing tests/lava/lib/dl/netx/test_hdf5.py by @bamsumit in #44
- Update ci-build.yml by @mgkwill in #42
- Install by @mgkwill in #45
- Lava Deep Learning 0.2.0 by @mgkwill in #46
- Lava Deep Learning 0.2.0 - update lock by @mgkwill in #47
Full Changelog: v0.1.1...v0.2.0
Lava Deep Learning v0.1.1
Lava Deep Learning 0.1.1 is a bugfix dot release.
Features and Improvements
- Added more content to tutorial_01, including tuning guidelines for the learning rates α and β of the QP solver.
Bug Fixes and Other Changes
- Fixed bug with pre-hook quantization function on conv blocks. (PR#13)
Known Issues
- No known issues at this point
What's Changed
- Adding `__init__.py` to lava-dl/lava by @awintel in #10
- Clean up of explicit namespace declaration by @bamsumit in #11
- Fix Pool layer when pre_hook function is not None by @valmat07 in #13
Full Changelog: v0.1.0...v0.1.1
Lava Deep Learning 0.1.0
Lava Deep Learning Library
This first release of lava-dl under the BSD-3 license provides two new modes of training deep event-based neural networks: directly with SLAYER 2.0, or through hybrid ANN/SNN training using the Bootstrap module.
SLAYER 2.0 (lava.lib.dl.slayer) provides direct training of heterogeneous event-based computational blocks, with support for a variety of learnable neuron models, complex synaptic computation, arbitrary recurrent connections, and many more new features. The API provides high-level building blocks that are fully autograd enabled, along with training utilities that make getting started with SNN training extremely simple.
Bootstrap (lava.lib.dl.bootstrap) is a new training method for rate-coded SNNs. In contrast to prior ANN-to-SNN conversion schemes, it relies on an equivalent "shadow" ANN during training to maintain fast training speed while also dramatically accelerating SNN inference post-training with only a few spikes. Although Bootstrap is currently separate from SLAYER, its API mirrors the familiar SLAYER API, enabling fast hybrid ANN-SNN training with minimal performance loss in ANN-to-SNN conversion.
At this point in time, Lava processes cannot be trained directly with backpropagation. Therefore, we will soon release the Network Exchange (lava.lib.dl.netx) module for automatic generation of Lava processes from SLAYER or Bootstrap-trained networks. At that point, networks trained with SLAYER or Bootstrap can be executed in Lava.
Open-source contributions to these libraries are highly welcome. You are invited to extend the collection of neuron models supported by both SLAYER and Bootstrap. Check out the Neurons and Dynamics tutorial to learn how to create custom neuron models from the fundamental linear dynamics API.
New Features and Improvements
- lava.lib.dl.slayer is an extension of SLAYER for natively training a combination of different neuron models and architectures, including arbitrary recurrent connections. The library is fully autograd compatible, with custom CUDA acceleration when supported by the hardware.
- lava.lib.dl.bootstrap is a new training method for accelerated training of rate-based SNNs using a dynamically estimated equivalent ANN, as well as hybrid training with fully spiking layers for low-latency rate-coded SNNs.
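Bootstrap's premise is that a rate-coded SNN approximates its shadow ANN: an activation value maps to an expected spike rate, and averaging spikes over a window recovers the activation. A self-contained illustration of that rate-coding idea in plain Python, not the bootstrap API:

```python
import random

def rate_encode(activation, num_steps, rng):
    """Emit a Bernoulli spike train whose mean rate matches an activation in [0, 1]."""
    return [1 if rng.random() < activation else 0 for _ in range(num_steps)]

rng = random.Random(0)
spikes = rate_encode(0.3, num_steps=10_000, rng=rng)
estimate = sum(spikes) / len(spikes)  # decoded activation = mean spike rate
assert abs(estimate - 0.3) < 0.05     # converges to the activation as steps grow
```

The long averaging window needed for an accurate rate estimate is precisely the inference latency that Bootstrap's hybrid training aims to cut down.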
Bug Fixes and Other Changes
- This is the first release of lava-dl. No bug fixes or other changes.
Breaking Changes
- This is the first release of lava-dl. No breaking changes.
Known Issues
- No known issues at this point.
What's Changed
New Contributors
- @bamsumit made their first contribution in #5
- @mgkwill made their first contribution in #1
- @mathisrichter made their first contribution in #6
Full Changelog: https://github.com/lava-nc/lava-dl/commits/v0.1.0