Releases: lava-nc/lava
Lava 0.10.0
What's Changed
- Handle exceptions in runtime service so unittest can catch them by @weidel-p in #813
- Bump cryptography from 41.0.4 to 41.0.6 by @dependabot in #816
- LearningDense bit-accurate by @gkarray in #812
- Bump fonttools from 4.41.1 to 4.43.0 by @dependabot in #824
- Bump gitpython from 3.1.37 to 3.1.41 by @dependabot in #825
- SigmaS4Delta Neuronmodel and Layer with Unittests by @smm-ncl in #830
- Bump pillow from 10.0.1 to 10.2.0 by @dependabot in #832
- [QUBO] Solution readout via spikeIO for multi-chip support by @phstratmann in #820
- Bump cryptography from 41.0.6 to 42.0.0 by @dependabot in #834
- Bump cryptography from 42.0.0 to 42.0.2 by @dependabot in #836
- Bump cryptography from 42.0.2 to 42.0.4 by @dependabot in #837
- Spiker with 32bit by @phstratmann in #839
- Alternative to Injector/Extractor Processes by @gkarray in #835
- Fix workflows by @PhilippPlank in #844
- Bump pillow from 10.2.0 to 10.3.0 by @dependabot in #847
- Bump idna from 3.6 to 3.7 by @dependabot in #848
- CI: Use macos-13 instead of latest by @PhilippPlank in #853
- ATRLIF neuron model by @jlubo in #846
- Add the models and process of conv_in_time in src/lava/proc/conv_in_time by @zeyuliu1037 in #833
- Bump jinja2 from 3.1.3 to 3.1.4 by @dependabot in #855
- Bump requests from 2.31.0 to 2.32.0 by @dependabot in #858
- Bump tornado from 6.4 to 6.4.1 by @dependabot in #863
- Fix: subthreshold dynamics equation of refractory lif by @monkin77 in #842
- Bump urllib3 from 2.2.1 to 2.2.2 by @dependabot in #865
- Lava va by @epaxon in #740
- Add pure S4D by @smm-ncl in #868
- Update S4D to consider bit precision of input signal by @smm-ncl in #870
- Merge demo to main by @mgkwill in #869
- Enable manual partitioning by @PhilippPlank in #876
- Graded relu by @epaxon in #860
- Extractor and Injector async models fixed by @bamsumit in #881
- Release 0.10.0 by @mgkwill in #882
New Contributors
- @smm-ncl made their first contribution in #830
- @zeyuliu1037 made their first contribution in #833
- @monkin77 made their first contribution in #842
Full Changelog: v0.9.0...v0.10.0
Lava 0.9.0
Lava v0.9.0 Release Notes
November 9, 2023
What's Changed
- Fix conv python model to send() before recv() by @Gavinator98 in #751
- Adds support for Monitor a Port to observe if it is blocked by @joyeshmishra in #755
- Issue: #757 - Update install from source info by @ahenkes1 in #758
- Fix DelayDense buffer issue by @bamsumit in #767
- Allow np.array as input weights for Sparse by @SveaMeyer13 in #772
- Bump tornado from 6.3.2 to 6.3.3 by @dependabot in #778
- Bump cryptography from 41.0.2 to 41.0.3 by @dependabot in #779
- Bump gitpython from 3.1.32 to 3.1.35 by @dependabot in #785
- Merge Spike IO by @joyeshmishra in #786
- CLP tutorial 1 small patch by @elvinhajizada in #773
- CLP Tutorial 02: COIL-100 by @elvinhajizada in #721
- Bump cryptography from 41.0.3 to 41.0.4 by @dependabot in #790
- Generalize int shape check in injector and extractor to take numpy ints by @bamsumit in #792
- Prod neuron by @epaxon in #783
- Resfire by @epaxon in #787
- Graded by @epaxon in #734
- Bump pillow from 10.0.0 to 10.0.1 by @dependabot in #794
- Bump urllib3 from 1.26.16 to 1.26.17 by @dependabot in #793
- Bump gitpython from 3.1.35 to 3.1.37 by @dependabot in #795
- Folded view by @ymeng-git in #774
- Add sender and receiver information in watchdog logs by @joyeshmishra in #796
- Bump urllib3 from 1.26.17 to 1.26.18 by @dependabot in #800
- Update process.py to fix typo. by @tim-shea in #763
- Add BitCheck Process by @mgkwill in #802
- Lava fixes to enable large convolutional networks by @bamsumit in #808
- Fix poetry config for publish to pypi by @mgkwill in #782
- Change how poetry uploads to pypi in cd.yml by @mgkwill in #810
New Features and Improvements
- `VarWire` process added in `lava.proc.io.injector`. It works similarly to `Injector`, but uses `RefPorts`.
- Added watchdog which supports monitoring a port to observe if it is blocked
- `GradedVec` process enabling a graded spike vector layer, which transmits accumulated input as graded spikes with no dynamics
- `ProdNeuron` process, enabling the product of two graded inputs and outputting the result as a graded spike
- `RFZero` process, enabling a resonate-and-fire neuron with a spike trigger on threshold and 0-phase crossing
- Added `BitCheck` process, allowing a quick check of hardware overflow for bit-accurate process vars
- Added support for multi-instance compilation through `compile_option {'folded_view': ['templateName']}`
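The `GradedVec` and `ProdNeuron` behaviors described above can be sketched in a few lines of NumPy. This is an illustrative model of the stated dynamics only, not the Lava process API; function names and the accumulator-reset choice are assumptions.

```python
import numpy as np

def graded_vec_step(acc, a_in):
    """GradedVec-style sketch: accumulate input and emit the accumulated
    value as a graded (multi-bit) spike payload; no neuron dynamics."""
    acc = acc + a_in
    s_out = acc.copy()                  # payload of the graded spike
    return np.zeros_like(acc), s_out    # reset accumulator (illustrative choice)

def prod_neuron_step(a_in1, a_in2):
    """ProdNeuron-style sketch: output the elementwise product of two
    graded inputs as a graded spike."""
    return a_in1 * a_in2

acc = np.zeros(3)
acc, s = graded_vec_step(acc, np.array([1.0, -2.0, 0.5]))
p = prod_neuron_step(np.array([2.0, 3.0, 4.0]), np.array([0.5, 1.0, -1.0]))
```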
Bug Fixes and Other Changes
- Fixed buffer issue in synaptic delay.
- Added support for numpy array types to use as input weights for Sparse connection process.
Breaking Changes
- No known breaking changes in this release.
Known Issues
- No known issues in this release.
New Contributors
- @ahenkes1 made their first contribution in #758
- @epaxon made their first contribution in #783
- @ymeng-git made their first contribution in #774
Full Changelog: v0.8.0...v0.9.0
Lava 0.8.0
What's Changed
- Bugfix to enable monitoring aliased vars by @AlessandroPierro in #675
- Enable Sparse processes by @weidel-p in #672
- Update pyproject.toml with PyPI compatible classifiers by @tim-shea in #678
- Added process.create_runtime method by @tim-shea in #674
- Dev/graded spike pre trace by @weidel-p in #679
- Truncating weights to 0 for Sparse lead to wrong shape fixed by @weidel-p in #683
- Minor refactoring for DelayDense/Sparse process by @PhilippPlank in #693
- Minor change to sparse to handle all 0 delays by @PhilippPlank in #696
- Coverage reports for Codacy by @mathisrichter in #701
- Codacy coverage upload by @mathisrichter in #702
- Fixed pylint errors by @mathisrichter in #644
- Bump requests from 2.28.2 to 2.31.0 by @dependabot in #703
- Bump tornado from 6.3.1 to 6.3.2 by @dependabot in #704
- LIF refractory floating point by @ssgier in #655
- Bump cryptography from 40.0.2 to 41.0.0 by @dependabot in #708
- Executable explicitly contains a flat list of all Processes after ProcGroups are created by @srrisbud in #714
- Iterator callback function by @bamsumit in #726
- Diff-clean PR, no changes by @srrisbud in #728
- IO bridge Processes by @gkarray in #686
- Fixing create_runtime unit test. by @tim-shea in #718
- Bump cryptography from 41.0.0 to 41.0.2 by @dependabot in #735
- Fixing tuple type hints in Injector/Extractor tests by @gkarray in #736
- The Initial version of CLP in CPU backend by @elvinhajizada in #707
- Refactor methods without arguments into functions by @mathisrichter in #685
- Fix tuple to list error by @hexu33 in #729
- Re-adding deprecated lava.utils.system. by @tim-shea in #739
- Serialization by @PhilippPlank in #732
- Fix partition parse bug for internal vLab by @tim-shea in #741
- Add linting tutorials folder by @PhilippPlank in #742
- Iterator callback fx signature fix by @bamsumit in #743
- Bugfix to pass the args by keyword by @joyeshmishra in #744
- CLP Tutorial 01 Only by @drager-intel in #746
- Update release job, add pypi upload, github release creation by @mgkwill in #737
- Use github pypi auth in release job by @mgkwill in #747
Full Changelog: v0.7.0...v0.8.0
Lava 0.7.0
What's Changed
- Dependabot fixes by @michaelbeale-IL in #553
- Add var port polling in host phase to flush out any pending reads by @joyeshmishra in #563
- Expose channel slack parameter to user interface by @mathisrichter in #529
- Initialization of Dense with all-zero weights by @PhilippPlank in #585
- Lp with ap by @ysingh7 in #587
- Added num_steps for AsyncProcess as well as ability to GET/SET Var wh… by @ysingh7 in #591
- Factory function that converts PyLoihi process models to PyAsync models on the fly by @bamsumit in #595
- Add compile config to rs_builder by @joyeshmishra in #608
- Bump cryptography from 39.0.0 to 39.0.1 by @dependabot in #609
- Bump ipython from 8.8.0 to 8.10.0 by @dependabot in #617
- Update runtime and process to be "with" context managers. by @Gavinator98 in #605
- Enable get/set on learning rule parameters by @weidel-p in #622
- Synaptic delays for Dense connections by @PhilippPlank in #624
- Graded spike payload in pre-synaptic trace updates - `floating-pt` and `bit-approximate-loihi` by @gkarray in #607
- Integrate raster plot by @ssgier in #623
- License header cleanup by @mathisrichter in #641
- Fixed Pause for Async Process by @ysingh7 in #646
- Plug file descriptor leaks by @ssgier in #643
- Fixed mapping of payload to multiple port pairs by @ysingh7 in #657
- Changing order of checks when advancing phase in LoihiPyRuntimeService by @gkarray in #662
- Fixed automatic append of py_ports and var_ports by @tim-shea in #669
- Release 0.6.1 by @mgkwill in #670
New Contributors
- @Gavinator98 made their first contribution in #605
- @ssgier made their first contribution in #623
Full Changelog: v0.6.0...v0.7.0
Lava 0.6.0
Lava v0.6.0 Release Notes
December 14, 2022
New Features and Improvements
- Enabled 2 factor learning on Loihi 2 and in Lava simulation with the LearningLIF and LearningLIFFloat processes. (PR #528 & PR #535)
- Resonate and Fire and Resonate and Fire Izhikevich neurons now available in Lava simulation. (PR #378)
- New tutorial on sigma-delta networks in Lava. (PR #470)
- Enabled state probes for Loihi 2 and added an in-depth tutorial (lava-loihi extension).
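The resonate-and-fire dynamics mentioned above can be sketched as a decaying, rotating complex oscillator driven by input. The parameters and the spike condition below are illustrative assumptions, not the Lava implementation from PR #378.

```python
import math

def rf_step(re, im, inp, freq=0.1, decay=0.05, vth=1.0):
    """One step of a resonate-and-fire neuron sketch. The state (re, im)
    decays by `decay` and rotates by 2*pi*freq each step; input drives the
    real part. A spike fires when the imaginary part crosses the threshold
    from below while the real part is positive (illustrative condition)."""
    theta = 2 * math.pi * freq
    scale = 1.0 - decay
    new_re = scale * (math.cos(theta) * re - math.sin(theta) * im) + inp
    new_im = scale * (math.sin(theta) * re + math.cos(theta) * im)
    spike = new_im >= vth and im < vth and new_re > 0
    return new_re, new_im, spike

# drive the neuron with constant input; it spikes once the oscillation's
# imaginary part first crosses the threshold
re = im = 0.0
spikes = []
for _ in range(10):
    re, im, s = rf_step(re, im, 1.0)
    spikes.append(s)
```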
Bug Fixes and Other Changes
- RF neurons with variable periods now work. (PR #487)
- Automatically cancel older CI runs of a PR if a newer one was started due to a push. (PR #488)
- Improved learning API, related tutorials and tests, and fixed a bug in the Loihi STDP implementation. (PR #500)
- Generalisation of the pre- and post hooks into the runtime service. (PR #521)
- Improved RSTDP learning tutorial. (PR #536)
Breaking Changes
- No breaking changes in this release.
Known Issues
- Direct channel connections between Processes using a PyProcessModel and NcProcessModel are not supported.
- Channel communication between PyProcessModels is slow.
- Lava networks throw errors if run is invoked too many times due to a leak in shared memory descriptors in the CPython implementation.
- Virtual ports are only supported between Processes using PyProcModels, and between Processes using NcProcModels. Virtual ports are not supported when Processes with CProcModels are involved or between pairs of Processes that have different types of ProcModels. In addition, VirtualPorts do not support concatenation yet.
- Joining and forking of virtual ports is not supported.
- The Monitor Process only supports probing a single Var per Process implemented via a PyProcessModel. Probing states on Loihi 2 is currently available using StateProbes (tutorial available in lava-loihi extension).
- Some modules, classes, or functions lack proper docstrings and type annotations. Please raise an issue on the GitHub issue tracker in such a case.
Thanks to our Contributors
- Intel Labs Lava Developers
- @Michaeljurado42 made their first contribution in #378
Full Changelog: v0.5.1...v0.6.0
Lava 0.5.1
Lava v0.5.1 Release Notes
October 31, 2022
New Features and Improvements
- Lava now supports LIF reset models with CPU backend. (PR #415)
- Lava now supports three-factor learning rules. This release introduces a base class for plastic neurons as well as differentiation between `Loihi2FLearningRule` and `Loihi3FLearningRule`. (PR #400)
- New tutorial shows how to implement and use a three-factor learning rule in Lava with an example of reward-modulated STDP. (PR #400)
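A reward-modulated (three-factor) STDP update of the kind the tutorial covers can be sketched as follows. All names and constants here are illustrative assumptions, not the `Loihi3FLearningRule` API: pair-based STDP feeds an eligibility trace, and the third factor (reward) converts that trace into an actual weight change.

```python
def rstdp_step(w, elig, pre_spike, post_spike, x_pre, y_post, reward,
               tau_e=0.9, a_plus=0.1, a_minus=0.12, lr=0.5):
    """Reward-modulated (three-factor) STDP sketch.
    x_pre/y_post are pre-/post-synaptic spike traces maintained by the
    caller; the STDP term updates a decaying eligibility trace instead of
    the weight, and reward gates the weight change."""
    stdp = a_plus * x_pre * post_spike - a_minus * y_post * pre_spike
    elig = tau_e * elig + stdp      # decaying eligibility trace
    w = w + lr * reward * elig      # third factor gates plasticity
    return w, elig

w, elig = 0.0, 0.0
# post spike while the pre trace is high, but no reward yet -> trace only
w, elig = rstdp_step(w, elig, pre_spike=0, post_spike=1,
                     x_pre=1.0, y_post=0.0, reward=0.0)
# reward arrives later and converts the eligibility trace into weight change
w, elig = rstdp_step(w, elig, pre_spike=0, post_spike=0,
                     x_pre=0.0, y_post=0.0, reward=1.0)
```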
Bug Fixes and Other Changes
- Fixes a bug in network compilation for branching/forking of CProcess and NC Process Models. (PR #391)
- Fixes a bug to support multiple CPorts to PyPorts connectivity in a single process model. (PR #391)
- Fixed issues with the `uk` conditional in the learning engine. (PR #400)
- Fixed the explicit ordering of subcompilers in the compilation stack: C-first-Nc-second heuristic. (PR #408)
- Fixed the incorrect use of np.logical_and and np.logical_or discovered in learning-related code in Connection ProcessModels. (PR #412)
- Fixed a warning in Compiler process model discovery and selection due to importing sub process model classes. (PR #418)
- Fixed a bug in Compiler to select correct CProcessModel based on tag specified in run config. (PR #421)
- Disabled overwriting of user set environment variables in systems.Loihi2. (PR #428)
- Process Model selection now works in the Jupyter Colab environment. (#435)
- Added instructions to download dataset for MNIST tutorial (PR #439)
- Fixed a bug in run config with respect to initializing pre- and post-execution hooks during multiple runs (PR #440)
- Added an interface for Lava profiler to enable future implementations on different hardware or chip generations. (PR #444)
- Updated PyTest and NBConvert dependencies to newer versions in poetry for installation. (PR #447)
Breaking Changes
- QUBO related processes and process models have now moved to lava-optimization (PR #449)
Known Issues
- Direct channel connections between Processes using a PyProcessModel and NcProcessModel are not supported.
- Channel communication between PyProcessModels is slow.
- Lava networks throw errors if run is invoked too many times due to a leak in shared memory descriptors in CPython implementation.
- Virtual ports are only supported between Processes using PyProcModels, and between Processes using NcProcModels. Virtual ports are not supported when Processes with CProcModels are involved or between pairs of Processes that have different types of ProcModels. In addition, VirtualPorts do not support concatenation yet.
- Joining and forking of virtual ports is not supported.
- The Monitor Process only supports probing a single Var per Process implemented via a PyProcessModel. The Monitor Process does not support probing Vars on Loihi NeuroCores.
- Some modules, classes, or functions lack proper docstrings and type annotations. Please raise an issue on the GitHub issue tracker in such a case.
Thanks to our Contributors
- Intel Labs Lava Developers
- @AlessandroPierro made their first contribution in #439
- @michaelbeale-IL made their first contribution in #447
- @bala-git9 made their first contribution in #400
- @a-t-0 made their first contribution in #453
Lava 0.5.0
The release of Lava v0.5.0 includes major updates to the Lava Deep Learning (Lava-DL) and Lava Optimization (Lava-Optim) libraries and offers the first update to the core Lava framework following the first release of the Lava extension for Loihi in July 2022.
- Lava offers a new learning API on CPU based on the Loihi on-chip learning engine. In addition, various functional and performance issues have been fixed since the last release.
- Several high-level application tutorials on QUBO (maximum independent set), deep learning (PilotNet, Oxford Radcliffe spike training), 2-factor STDP-based learning, and design of an E/I network model as well as a comprehensive API reference documentation make this version more accessible to new and experienced users.
New Features and Improvements
- Added support for convolutional neural networks (lava-nc PR #344, lava-loihi PR #343).
- Added NcL2ModelConv ProcessModel supporting Loihi 2 convolutional connection sharing (lava-loihi PR #343).
- Added NcL1ModelConvAsSparse ProcessModel supporting convolutional connections implemented as sparse connections (compatible with both Loihi 1 and Loihi 2).
- Added ability to represent convolution inferred connection to represent shared connection to and from Loihi 2 convolution synapse (lava-loihi PR #343).
- Added Convolution Manager to manage the resource allocation for utilizing Loihi 2 convolution feature (lava-loihi PR #343).
- Added convolution connection strategy to partition convolution layers to Loihi2 neurocores (lava-loihi PR #343).
- Added support for convolution spike generation (lava-loihi PR #343).
- Added support for Convolution specific varmodels (ConvNeuronVarModel and ConvInVarModel) for interacting with the Loihi 2 convolution configured neuron as well as Loihi 2 convolution input from a C process.
- Added embedded IO processes and C-models to bridge the interaction between Python and Loihi 2 processes in the form of spikes as well as state read/write including convolution specific support. (lava-nc PR #344, lava-loihi PR #343)
- Added support for compressed message passing from Python to Loihi 2 using Loihi 2’s embedded processors (lava-nc PR #344, lava-loihi PR #343).
- Added support for resource cost sharing between Loihi 2 to allow for flexible memory allocation in neurocore (lava-loihi PR #343).
- Added support for sharing axon instructions for output spike generation from a Loihi 2 neurocore (lava-loihi PR #287).
- Added support for learning in simulation (CPU) according to Loihi’s learning engine (PR #332):
- STDPLoihi class is a 2-Factor STDP learning algorithm added to the Lava Process Library based on the Loihi learning engine.
- LoihiLearningRule class provides the ability to create custom learning rules based on the Loihi learning engine.
- Implemented a LearningDense Process which takes the same arguments as Dense, plus an optional LearningRule argument to enable learning in its ProcessModels.
- Implemented floating-point and bit-approximate PyLoihi ProcessModel, named PyLearningDenseModelFloat and PyLearningDenseModelBitApproximate, respectively.
- Also implemented bit-accurate PyLoihi ProcessModel named PyLearningDenseModelBitAcc.
- Added a tutorial to show the usage of STDPLoihi and how to create custom learning rules.
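The trace-based two-factor STDP that STDPLoihi implements in simulation can be sketched in pure Python. This is an illustrative model of pair-based STDP with exponentially decaying traces, not the STDPLoihi/LearningDense API; the layout `w[i][j]` (pre j to post i) mirrors a Dense weight matrix, and all constants are assumptions.

```python
def stdp_dense_step(w, x, y, pre_spikes, post_spikes,
                    tau_pre=0.8, tau_post=0.8, a_plus=1.0, a_minus=1.0):
    """Pair-based two-factor STDP sketch for a dense connection.
    x/y are per-neuron pre-/post-synaptic traces (decay each step, bump
    on spike); weights potentiate on a post spike in proportion to the
    pre trace and depress on a pre spike in proportion to the post trace."""
    x = [tau_pre * xi + si for xi, si in zip(x, pre_spikes)]
    y = [tau_post * yi + si for yi, si in zip(y, post_spikes)]
    for i, ps in enumerate(post_spikes):
        for j, qs in enumerate(pre_spikes):
            w[i][j] += a_plus * x[j] * ps - a_minus * y[i] * qs
    return w, x, y

w, x, y = [[0.0]], [0.0], [0.0]
w, x, y = stdp_dense_step(w, x, y, [1], [0])  # pre neuron fires
w, x, y = stdp_dense_step(w, x, y, [0], [1])  # post fires one step later -> LTP
```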
Bug Fixes and Other Changes
- The fixed-point PyProcessModel of the Dense Process now has the same behavior as the NcProcessModel for Loihi 2 (PR #328)
- The Dense NcProcModel now correctly represents purely inhibitory weight matrices on Loihi 2 (PR #376).
- The neuron current overflow behavior of the fixed-point LIF model was fixed so that the neuron current wraps to the opposite side of the integer range rather than to 0. (PR #364)
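The wraparound behavior described in that fix is ordinary two's-complement overflow, which can be illustrated directly; the 24-bit register width used here is an assumption for illustration, not taken from these notes.

```python
def wrap24(u):
    """Wrap an integer into the signed 24-bit range [-2**23, 2**23 - 1],
    mimicking two's-complement hardware overflow: a current that exceeds
    the positive limit reappears at the negative end of the range rather
    than saturating or resetting to 0."""
    return ((u + (1 << 23)) % (1 << 24)) - (1 << 23)

# one past the positive limit wraps to the most negative value
u = wrap24((1 << 23) - 1 + 1)
```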
Breaking Changes
- Function signatures of node allocate() methods in Net-API have been updated to use explicit arguments. In addition, some function argument names have been changed to abstract away Loihi register details.
- Removed bit-level parameters and Vars from Dense Process API.
Known Issues
- Only one instance of a Process targeting an embedded processor (using CProcessModel) can currently be created. Creating multiple instances in a network results in an error. As a workaround, the behavior of multiple Processes can be fused into a single CProcessModel.
- Direct channel connections between Processes using a PyProcessModel and NcProcessModel are not supported.
- Channel communication between PyProcessModels is slow.
- The Lava Compiler is still inefficient and in need of improvement to performance and memory utilization.
- Virtual ports are only supported between Processes using PyProcModels, and between Processes using NcProcModels. Virtual ports are not supported when Processes with CProcModels are involved or between pairs of Processes that have different types of ProcModels. In addition, VirtualPorts do not support concatenation yet.
- Joining and forking of virtual ports is not supported.
- The Monitor Process does currently only support probing of a single Var per Process implemented via a PyProcessModel. The Monitor Process does currently not support probing of Vars mapped to NeuroCores.
- Some modules, classes, or functions lack proper docstrings and type annotations. Please raise an issue on the GitHub issue tracker in such a case.
- Learning API does not support 3-Factor learning rules yet.
Thanks to our Contributors
- Alejandro Garcia Gener (@alexggener)
- @fangwei123456
- Julia A (@JuliaA369)
- Maryam Parsa
- @Michaeljurado24
- Intel Corporation: All contributing members of the Intel Neuromorphic Computing Lab
What's Changed
- Update RELEASE.md by @mgkwill in #270
- Add C Builder Index to the channel name to make it unique in case of … by @joyeshmishra in #271
- Changes in DiGraphBase class to enable recurrence by @srrisbud in #273
- Fix for dangling ports by @bamsumit in #274
- Unique process models in process models discovery by @bamsumit in #277
- Added a compilation order heuristic for compiling C before Nc processes by @srrisbud in #275
- Process module search fix by @bamsumit in #285
- Fix default sync domain not splitting processes according to the Node by @ysingh7 in #286
- Add type to isinstance call by @mgkwill in #287
- Add intel numpy to conda install instructions by @mgkwill in #298
- Modified mapper to handle disconnected lif components connected to sa… by @ysingh7 in #294
- Bump nbconvert from 6.5.0 to 6.5.1 by @dependabot in #317
- Make Pre Post Functions execution on board by @joyeshmishra in #323
- Doc/auto api by @weidel-p in #318
- changed heading to improve rendering website by @weidel-p in #338
- Input compression features for large dimension inputs and infrastructure for convolution feature by @bamsumit in #344
- Stochastic Constraint Integrate and Fire (SCIF) neuron model for constraint satisfaction problems by @srrisbud in #335
- Update tutorials to newest version by @weidel-p in #340
- Transfer dev deps to dev section: Update pyproject.toml by @mgkwill in #355
- Ability to get/set synaptic weights by @srrisbud in #359
- Fixed pt lif precision by @ackurth-nc in #330
- Use poetry 1.1.15 explicitly by @srrisbud in #365
- Dev/learning rc 0.5 by @weidel-p in #332
- SCIF neuron model: Minor fixes by @srrisbud in #367
- Ei network tutorial by @ackurth-nc in #309
- Fix issue 334 by @PhilippPlank in #364
- Add Missing Variables in Conv Model by @SveaMeyer13 in #354
- Weight bit-accuracy of Dense (Python vs. Loihi 2) by @mathisrichter in #328
- Public processes for optimization solver by @phstratmann in #374
- Fixing Dense inhibitory sign_mode by @mathisrichter in #376
- Eliminate design issues in learning-related code by @mathisrichter in #371
- Enable Requesting Pause from Host. by @GaboFGuerra in #373
- Update poetry version in CI by @mgkwill in #380
- Enable exception proc_map, working with dataclasses, etc. by @GaboFGuerra in #372
- Expose noise amplitude for SCIF by @GaboFGuerra in #383
- Update ReadGate API according to NC model. by @GaboFGuerra in #384
- Add output messages for ReadGate's send_req_pause port. by @GaboFGuerra in #385
- Version 0.5.0 by @mgkwill in #375
New Contributors
- @dependabot made their first contribution in #317
- @weidel-p made their first contribution in #318
- @ackurth-nc made their first...
Lava 0.4.0
The release of Lava v0.4.0 brings initial support to compile and run models on Loihi 2 via Intel’s cloud hosted Oheo Gulch and Kapoho Point systems. In addition, new tutorials and documentation explain how to build Lava Processes written in Python or C for CPU and Loihi backends.
While this release offers few high-level application examples, Lava v0.4.0 provides major enhancements to the overall Lava architecture. It forms the basis for the open-source community to enable the full Loihi feature set, such as on-chip learning, convolutional connectivity, or accelerated spike IO. The Lava Compiler and Runtime architecture has also been generalized allowing extension to other backends or neuromorphic processors. Subsequent releases will improve compiler performance and provide more in-depth documentation as well as several high-level coding examples for Loihi, such as real-world applications spanning multiple chips.
The public Lava GitHub repository (https://github.com/lava-nc/lava) continues to provide all the features necessary to run Lava applications on a CPU backend. In addition, it now also includes enhancements to enable Intel Loihi support. To run Lava applications on Loihi, users need to install the proprietary Lava extension for Loihi. This extension contains the Loihi-compatible Compiler and Runtime features as well as additional tutorials. While this extension is currently released as a tar file, it will be made available as a private GitHub repo in the future.
Please help us fix any problems you encounter with the release by filing an issue on Github for the public code or sending a ticket to the team for the Lava extension for Loihi.
New Features and Improvements
Features marked with * are available as part of the Loihi 2 extension available to INRC members.
- *Extended Process library including new ProcessModels and additional improvements:
- LIF, Sigma-Delta, and Dense Processes execute on Loihi NeuroCores.
- Prototype Convolutional Process added.
- Sending and receiving spikes to NeuroCores via embedded processes that can be programmed in C with examples included.
- All Lava Processes now list all constructor arguments explicitly with type annotations.
- *Added high-level API to develop custom ProcessModels that use Loihi 2 features:
- Loihi NeuroCores can be programmed in Python by allocating neural network resources like Axons, Synapses or Neurons. In particular, Loihi 2 NeuroCore Neurons can be configured by writing highly flexible assembly programs.
- Loihi embedded processors can be programmed in C. But unlike the prior NxSDK, no knowledge of low-level registers details is required anymore. Instead, the C API mirrors the high-level Python API to interact with other processes via channels.
- Compiler and Runtime support for Loihi 2:
- General redesign of Compiler and Runtime architecture to support compilation of Processes that execute across a heterogenous backend of different compute resources. CPU and Loihi are supported via separate sub compilers.
- *The Loihi NeuroCore sub compiler automatically distributes neural network resources across multiple cores.
- *The Runtime supports direct channel-based communication between Processes running on Loihi NeuroCores, embedded CPUs or host CPUs written in Python or C. Of all combinations, only Python<->C and C<->NeuroCore are currently supported.
- *Added support to access Process Variables on Loihi NeuroCores at runtime via Var.set and Var.get().
- New tutorials and improved class and method docstrings explain how new Lava features can be used such as *NeuroCore and *embedded processor programming.
- An extended suite of unit tests and new *integration tests validate the correctness of the Lava framework.
Bug Fixes and Other Changes
- Support for virtual ports on multiple incoming connections (Python Processes only) (Issue #223, PR #224)
- Added conda install instructions (PR #225)
- Var.set/get() works when RunContinuous RunMode is used (Issue #255, PR #256)
- Successful execution of tutorials now covered by unit tests (Issue #243, PR #244)
- Fixed PYTHONPATH in tutorial_01 (Issue #45, PR #239)
- Fixed output of tutorial_07 (Issue #249, PR #253)
Breaking Changes
- Process constructors for standard library processes now require explicit keyword/value pairs and do not accept arbitrary input arguments via **kwargs anymore. This might break some workloads.
- use_graded_spike kwarg has been changed to num_message_bits for all the built-in processes.
- shape kwarg has been removed from Dense process. It is automatically inferred from the weight parameter’s shape.
- Conv Process has additional arguments weight_exp and num_weight_bits that are relevant for fixed-point implementations.
- The sign_mode argument in the Dense Process is now an enum rather than an integer.
- New parameters u and v in the LIF Process enable setting initial values for current and voltage.
- The bias parameter in the LIF Process has been renamed to bias_mant.
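The `bias_mant` rename above reflects Loihi's mantissa/exponent representation of neuron bias. A minimal sketch of the usual decomposition follows; that an accompanying `bias_exp` parameter exists, and the formula `bias = bias_mant * 2**bias_exp`, are assumptions based on Loihi conventions rather than statements in these notes.

```python
def effective_bias(bias_mant, bias_exp):
    """Sketch of a Loihi-style fixed-point bias: the effective bias is the
    mantissa scaled by a power-of-two exponent (assumed convention)."""
    return bias_mant * (2 ** bias_exp)

b = effective_bias(3, 6)  # mantissa 3, exponent 6
```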
Known Issues
- Lava does not currently support on-chip learning, Loihi 1, or a variety of connectivity compression features such as convolutional encoding.
- All Processes in a network must currently be connected via channels. Running unconnected Processes using NcProcessModels in parallel currently gives incorrect results.
- Only one instance of a Process targeting an embedded processor (using CProcessModel) can currently be created. Creating multiple instances in a network results in an error. As a workaround, the behavior of multiple Processes can be fused into a single CProcessModel.
- Direct channel connections between Processes using a PyProcessModel and NcProcessModel are not supported.
- If InputAxons are duplicated across multiple cores and users expect to inject spikes based on the declared port size, the current implementation leads to buffer overflows and memory corruption.
- Channel communication between PyProcessModels is slow.
- The Lava Compiler is still inefficient and in need of improvement to performance and memory utilization.
- Virtual ports are only supported between Processes using PyProcModels, but not between Processes when CProcModels or NcProcModels are involved. In addition, VirtualPorts do not support concatenation yet.
- Joining and forking of virtual ports is not supported.
- The Monitor Process does currently only support probing of a single Var per Process implemented via a PyProcessModel. The Monitor Process does currently not support probing of Vars mapped to NeuroCores.
- Despite new docstrings, type annotations, and parameter descriptions to most of the public user-facing API, some parts of the code still have limited documentation and are missing type annotations.
What's Changed
- Virtual ports on multiple incoming connections by @mathisrichter in #224
- Add conda install to README by @Tobias-Fischer in #225
- PYTHONPATH fix in tutorial by @jlubo in #239
- Fix tutorial04_execution.ipynb by @mgkwill in #241
- Tutorial tests by @mgkwill in #244
- Update README.md remove vlab instructions by @mgkwill in #248
- Tutorial bug fix by @PhilippPlank in #253
- Fix get set var by @PhilippPlank in #256
- Update runtime_service.py by @PhilippPlank in #258
- Release/v0.4.0 by @mgkwill in #265
Thanks to our Contributors
- Intel Corporation: All contributing members of the Intel Neuromorphic Computing Lab
Open-source community:
- Tobias-Fischer, Tobias Fischer
- jlubo, Jannik Luboeinski
New Contributors
Full Changelog: v0.3.0...v0.4.0
Lava 0.3.0
Lava 0.3.0 includes bug fixes, updated documentation, improved error handling, refactoring of the Lava Runtime, and support for sigma-delta neuron encoding and decoding.
New Features and Improvements
- Added sigma delta neuron encoding and decoding support (PR #180, Issue #179)
- Implementation of ReadVar and ResetVar IO process (PR #156, Issue #155)
- Added Runtime handling of exceptions occurring in ProcessModels; the Runtime now returns exception stack traces (PR #135, Issue #83)
- Virtual ports for reshaping and transposing (permuting) are now supported. (PR #187, Issue #185, PR #195, Issue #194)
- A Ternary-LIF neuron model was added to the process library. This new variant supports both positive and negative threshold for processing of signed signals (PR #151, Issue #150)
- Refactored runtime to reduce the number of channels used for communication (PR #157, Issue #86)
- Refactored Runtime to follow a state machine model and refactored ProcessModels to use command design pattern, implemented PAUSE and RUN CONTINOUS (PR #180, Issue #86, Issue #52)
- Refactored builder to its own package (PR #170, Issue #169)
- Refactored PyPorts implementation to fix incomplete PyPort hierarchy (PR #131, Issue #84)
- Added improvements to the MNIST tutorial (PR #147, Issue #146)
- A standardized template is now in use on new Pull Requests and Issues (PR #140)
- Support added for editable install (PR #93, Issue #19)
- Improved runtime documentation (PR #167)
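To illustrate the sigma delta encoding and decoding support listed above, here is a minimal, hypothetical sketch of the underlying idea: the encoder transmits a value only when the signal changes by more than a threshold, and the decoder reconstructs the signal by accumulating the transmitted changes. The function names and structure are illustrative only and do not reflect Lava's actual sigma delta Process implementation (PR #180).

```python
import numpy as np

def delta_encode(signal, threshold=0.0):
    """Transmit only changes in the signal larger than a threshold.

    Illustrative sketch of delta encoding; not Lava's actual implementation.
    """
    spikes = np.zeros_like(signal)
    last_sent = 0.0
    for t, x in enumerate(signal):
        residual = x - last_sent  # change not yet transmitted
        if abs(residual) > threshold:
            spikes[t] = residual  # send only the change
            last_sent = x
    return spikes

def sigma_decode(spikes):
    """Reconstruct the signal by accumulating transmitted changes."""
    return np.cumsum(spikes)
```

With a threshold of zero the decoded signal matches the original exactly; a positive threshold trades reconstruction accuracy for sparser communication, which is the motivation for sigma delta coding on neuromorphic hardware.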
Bug Fixes and Other Changes
- Fixed multiple Monitor related issues (PR #128, Issue #103, Issue #104, Issue #116, Issue #127)
- Fixed packaging issue regarding the dataloader for MNIST (PR #133)
- Fixed multiprocessing bug by checking process lineage before join (PR #177, Issue #176)
- Fixed priority of channel commands in model (PR #190, Issue #186)
- Fixed RefPort time step handling (PR #205, Issue #204)
Breaking Changes
- No breaking changes in this release
Known Issues
- No support for Intel Loihi
- Process communication via CSP channels, implemented with Python multiprocessing, still carries significant inter-process communication overhead; further improvement is needed to approach the native execution speed of similar implementations without CSP channels
- Virtual ports for concatenation are not supported
- Joining and forking of virtual ports is not supported
- A Monitor process cannot monitor more than one Var/InPort of a process; as a result, multi-var probing with a single Monitor process is not supported
- Limited API documentation
What's Changed
- Fixing multiple small issues of the Monitor proc by @elvinhajizada in #128
- GitHub Issue/Pull request template by @mgkwill in #140
- Fixing MNIST dataloader by @tihbe in #133
- Runtime error handling by @PhilippPlank in #135
- Reduced the number of channels between service and process (#1) by @ysingh7 in #157
- TernaryLIF and refactoring of LIF to inherit from AbstractLIF by @srrisbud in #151
- Proc_params for communicating arbitrary object between process and process model by @bamsumit in #162
- Support editable install by @matham in #93
- Implementation of ReadVar and ResetVar IO process and bugfixes for LIF, Dense and Conv processes by @bamsumit in #156
- Refactor builder to module by @mgkwill in #170
- Use unittest ci by @mgkwill in #173
- Improve mnist tutorial by @srrisbud in #147
- Multiproc bug by @mgkwill in #177
- Refactoring py/ports by @PhilippPlank in #131
- Adds runtime documentation by @joyeshmishra in #167
- Implementation of Pause and Run Continuous with refactoring of Runtime by @ysingh7 in #171
- Ref port debug by @PhilippPlank in #183
- Sigma delta neuron, encoding and decoding support by @bamsumit in #180
- Add NxSDKRuntimeService by @mgkwill in #182
- Partial implementation of virtual ports for PyProcModels by @mathisrichter in #187
- Remove old runtime_service.py by @mgkwill in #192
- Fixing priority of channel commands in model by @PhilippPlank in #190
- Virtual ports between RefPorts and VarPorts by @mathisrichter in #195
- RefPort's sometimes handled a time step late by @PhilippPlank in #205
- Fixed reset timing offset by @bamsumit in #207
- Update README.md by @mgkwill in #202
- Virtual ports no longer block Process discovery in compiler by @mathisrichter in #211
- Remove pybuilder, Add poetry by @mgkwill in #215
- Added wait() to refvar unittests by @bamsumit in #220
- Update Install Instructions by @mgkwill in #218
Thanks to our Contributors
Intel Corporation: All contributing members of the Intel Neuromorphic Computing Lab
Open-source community: Ismael Balafrej, Matt Einhorn
New Contributors
- @tihbe made their first contribution in #133
- @ysingh7 made their first contribution in #157
- @matham made their first contribution in #93
Full Changelog: v0.2.0...v0.3.0
Lava 0.2.0
Lava 0.2.0 includes several improvements to the Lava Runtime. One of them improves the performance of the underlying message passing framework by over 10x on CPU. We also added new floating-point and Loihi fixed-point PyProcessModels for LIF and DENSE Processes as well as a new CONV Process. In addition, Lava now supports remote memory access between Processes via RefPorts, which allows Processes to reconfigure other Processes. Finally, we added and updated several tutorials to cover all these new features.
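The RefPort idea mentioned above can be sketched conceptually in plain Python: one Process holds a state variable (Var), and another Process holds a handle through which it can read or overwrite that state at runtime. The class names below (`Var`, `RefPortHandle`) are hypothetical stand-ins, not Lava's API; in Lava the access actually happens through channel-based message passing between ProcessModels.

```python
class Var:
    """Minimal stand-in for a Process state variable."""
    def __init__(self, init):
        self.value = init

class RefPortHandle:
    """Hypothetical handle giving one process read/write access to
    another process's Var, mimicking the RefPort idea."""
    def __init__(self, var):
        self._var = var

    def read(self):
        return self._var.value

    def write(self, value):
        # Writing mutates state owned by another process: a side effect,
        # which is why RefPorts should be used with caution.
        self._var.value = value

# One process's state, reconfigured through the handle:
weights = Var(0.5)
ref = RefPortHandle(weights)
ref.write(ref.read() * 2)  # e.g., double a weight at runtime
```

The side-effect nature of this pattern is exactly why the release notes advise using RefPorts with caution: the owning Process can observe its state change without having acted itself.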
Features and Improvements
- Refactored the Runtime and RuntimeService to move the MessagePassingBackend into its own standalone module, separate from the Runtime and RuntimeService themselves. This will allow implementing and comparing the performance of other implementations of channel-based communication and will also enable true multi-node scaling beyond the capabilities of the Python multiprocessing module (PR #29)
- Enhanced execution performance by removing busy waits in the Runtime and RuntimeService (Issue #36 & PR #87)
- Enabled compiler and runtime support for RefPorts, which allow remote memory access between Lava Processes so that one Process can reconfigure another Process at runtime. Remote memory access is based on channel-based message passing but can lead to side effects and should therefore be used with caution. See the Remote Memory Access tutorial for how RefPorts can be used (Issue #43 & PR #46).
- Implemented a first prototype of a Monitor Process. A Monitor provides a user interface to probe Vars and OutPorts of other Processes and records their evolution over time in a time series for post-processing. The current Monitor prototype is limited in that it can only probe a single Var or OutPort per Process (Issue #74 & PR #80). This limitation will be addressed in the next release.
- Added floating-point and Loihi fixed-point PyProcessModels for the LIF Process and connection Processes like DENSE and CONV. See Issue #40 for more details.
- Added an in-depth tutorial on connecting processes (PR #105)
- Added an in-depth tutorial on remote memory access (PR #99)
- Added an in-depth tutorial on hierarchical Processes and SubProcessModels
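The new floating-point LIF PyProcessModels mentioned above follow standard leaky integrate-and-fire dynamics. The sketch below is a simplified, hypothetical version of one time step (variable names `u`, `du`, `dv`, `vth` follow common LIF conventions, but the exact equations, bias terms, and the bit-accurate fixed-point variant live in Lava's LIF ProcessModels, not here).

```python
import numpy as np

def lif_step(u, v, a_in, du=0.1, dv=0.1, vth=1.0):
    """One time step of simplified floating-point LIF dynamics.

    u: synaptic current, v: membrane voltage, a_in: input activation.
    This is an illustrative sketch, not Lava's actual ProcessModel code.
    """
    u = u * (1 - du) + a_in      # leaky integration of input current
    v = v * (1 - dv) + u         # leaky integration of voltage
    s_out = v >= vth             # spike where the threshold is crossed
    v = np.where(s_out, 0.0, v)  # reset voltage after a spike
    return u, v, s_out
```

A fixed-point variant of the same dynamics replaces the floating-point decays with bit-shift arithmetic so that results match Loihi's hardware behavior bit-accurately.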
Bug Fixes and Other Changes
- Fixed a bug in get/set Var to enable get/set of floating-point values (Issue #44)
- Fixed install instructions (setting PYTHONPATH) (Issue #45)
- Fixed code example in documentation (Issue #62)
- Fixed and added missing license information (Issue #41 & Issue #63)
- Added unit tests for merging and branching In-/OutPorts (PR #106)
Known Issues
- No support for Intel Loihi yet.
- Channel-based Process communication via CSP channels implemented with Python multiprocessing has improved significantly (by >30x). However, further improvement is still needed to reduce the inter-process communication overhead of implementing CSP channels in software and to get closer to the native execution speed of similar implementations without CSP channels.
- Errors from remote system processes like PyProcessModels or the PyRuntimeService are currently not thrown to the user system process. This makes debugging of parallel processes hard. We are working on propagating exceptions thrown in remote processes to the user.
- Virtual ports for reshaping and concatenation are not supported yet.
- A single Monitor process cannot monitor more than one Var/InPort of a single process, i.e., multi-var probing with a single Monitor process is not supported yet.
- Still limited API documentation.
- Non-blocking execution mode is not yet supported; thus, Runtime.pause() and Runtime.wait() do not work yet.
What's Changed
- Remove unused channel_utils by @mgkwill in #37
- Refactor Message Infrastructure by @joyeshmishra in #29
- Fixed copyright in BSD-3 LICENSE files by @mathisrichter in #42
- Fixed PYTHONPATH installation instructions after directory restructure of core lava repo by @drager-intel in #48
- Add missing license in utils folder by @Tobias-Fischer in #58
- Add auto Runtime.stop() by @mgkwill in #38
- Enablement of RefPort to Var/VarPort connections by @PhilippPlank in #46
- Support float data type for get/set value of Var by @PhilippPlank in #69
- Disable non-blocking execution by @PhilippPlank in #67
- LIF ProcessModels: Floating and fixed point: PR attempt #2 by @srrisbud in #70
- Fixed bug in README.md example code by @mathisrichter in #61
- PyInPort: probe() implementation by @gkarray in #77
- Performance improvements by @harryliu-intel in #87
- Clean up of explicit namespace declaration by @bamsumit in #98
- Enabling monitoring/probing of Vars and OutPorts of processes with Monitor Process by @elvinhajizada in #80
- Conv Process Implementation by @bamsumit in #73
- Move tutorials to root directory of the repo by @bamsumit in #102
- Tutorial for shared memory access (RefPorts) by @PhilippPlank in #99
- Move tutorial07 by @PhilippPlank in #107
- Added Unit tests for branching/merging of IO ports by @PhilippPlank in #106
- Connection tutorial finished by @PhilippPlank in #105
- Fix for issue #109, Monitor unit test failing non-deterministically by @mathisrichter in #110
- Created floating pt and bit accurate Dense ProcModels + unit tests. Fixes issues #100 and #111. by @drager-intel in #112
- Update test_io_ports.py by @PhilippPlank in #113
- Fix README.md Example Code by @mgkwill in #94
- Added empty list attribute tags to AbstractProcessModel by @srrisbud in #96
- Lava 0.2.0 by @mgkwill in #117
New Contributors
- @joyeshmishra made their first contribution in #29
- @drager-intel made their first contribution in #48
- @Tobias-Fischer made their first contribution in #58
- @PhilippPlank made their first contribution in #46
- @gkarray made their first contribution in #77
- @harryliu-intel made their first contribution in #87
- @bamsumit made their first contribution in #98
- @elvinhajizada made their first contribution in #80
Full Changelog: v0.1.1...v0.2.0