Memory consolidation in recurrent spiking neural networks

Outline

This package serves to simulate recurrent spiking neural networks with calcium-based synaptic plasticity and synaptic tagging and capture. The generic C++ program code and build scripts for specific simulations are located in the directory simulation-code/. Building will create binaries in the directory simulation-bin/, which contains scripts to run the specific simulations. Building has most recently been tested with g++ 10.5.0 and boost 1.77.0. Using a Linux system is recommended.

The directory analysis/ contains Python scripts serving to analyze the data produced by the simulations.

The directory notebooks/ contains Jupyter notebooks serving to reproduce data with a graphical user interface.

The package provided here has been developed and used for a number of publications (see the list here). Please cite accordingly if you use parts of the code or the model for your research (BibTeX code can be found here). Also note that the simulation code provided here contains some features that have not been used in any publication yet. Please feel free to contact us with any questions or comments!

Simulation code

The code for running the main simulation is located in the directory simulation-code/. It comes with build scripts for specific cases.

Files

  • NetworkMain.cpp - contains the main function initializing network simulations
  • NetworkSimulation.cpp - class performing network simulations
  • Network.cpp - class describing the network
  • Neuron.cpp - class describing one neuron
  • Stimulus.cpp - class describing a stimulus
  • StimulusProtocols.cpp - class to define specific stimulus protocols
  • Definitions.hpp - general definitions
  • SpecialCases.hpp - definitions for special simulations (see this file to reproduce the results of the studies mentioned above)
  • Tools.cpp - collection of utility functions
  • Plots.cpp - collection of plotting functions employing gnuplot
  • plotFunctions.py - collection of plotting functions employing Matplotlib

Compiling and linking

The simulation code comes with shell scripts (in addition to the included Makefile) to build it for different purposes related to the studies mentioned above. The subdirectories of simulation-code/ contain the following scripts (an example invocation is sketched after the list):

  • build_scripts_paper1/:

    • compile_2N1S - compiles the code for simulating the effect of basic plasticity induction protocols at a single synapse
    • compile_IRS - compiles the code for network simulations of learning, consolidation, recall, and the effect of intermediate stimulation on a memory representation
    • compile_recall_varied_size - compiles the code for network simulations of learning, consolidation, and recall of a memory representation of certain size
  • build_scripts_paper2/:

    • compile_activation - compiles the code for network simulations to investigate the spontaneous activation of assemblies in the absence of plasticity and in the presence of background noise
    • compile_organization - compiles the code for network simulations of learning and consolidating three memory representations in different organizational paradigms
    • compile_organization_IC - compiles the code for network simulations of learning and consolidating three memory representations in different organizational paradigms, with intermediate consolidation
    • compile_organization_noLTD - compiles the code for network simulations of learning and consolidating three memory representations in different organizational paradigms, without LTD
    • compile_organization_randweight - compiles the code for network simulations of learning and consolidating three memory representations in different organizational paradigms, with randomly initialized weights
    • compile_recall - compiles the code for network simulations to investigate the recall of different assemblies in the absence of plasticity
  • build_scripts_paper3/:

    • compile_nm_psth_c - compiles the code for network simulations of learning, neuromodulator-dependent consolidation, and recall of a memory representation of 150 neurons
  • build_scripts_misc/:

    • compile - compiles the code as it is (without setting any specific preprocessor definition for a particular simulation)
    • compile_2N1S - compiles the code for simulating the effect of basic plasticity induction protocols at a single synapse
    • compile_2N1S_conv - compiles the code for testing the convergence of the membrane potential
    • compile_2N1S_Li2016 - compiles the code for simulating the effect of basic plasticity induction protocols at a single synapse, with the same model as in Li, Kulvicius, Tetzlaff, PLOS ONE, 2016
    • compile_2N1S_basic_early - compiles the code for a simple example of the induction of early-phase plasticity by a few pre-defined spikes
    • compile_2N1S_basic_late - compiles the code for a simple example of the induction of late-phase plasticity by prolonged substantial stimulation of one neuron
    • compile_activation_attractors - compiles the code for network simulations to investigate the spontaneous activation of assemblies in the absence of plasticity and in the presence of background noise or in the presence of 1 Hz/5 Hz oscillatory input to the inhibitory population
    • compile_CA200 - compiles the code for network simulations of learning, consolidation, and recall of a memory representation of 200 neurons
    • compile_max_activity - compiles the code for a network that has one neuron spiking at maximal activity
    • compile_onespike - compiles the code for a network that is stimulated with a single pulse to evoke one spike in one neuron
    • compile_organization_attractors - compiles the code for a network that learns and consolidates three attractor memory representations in different organizational paradigms
    • compile_PFreq - compiles the code for simulations serving to characterize plasticity regimes depending on pre- and post-synaptic firing rate
    • compile_recall_varied_size - compiles the code for network simulations of learning, consolidation, and recall of a memory representation of certain size
    • compile_smallnet_ou - compiles the code for a small network that is stimulated with an Ornstein-Uhlenbeck current
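
As a minimal sketch, one of these scripts might be invoked as follows (the assumption here is that each script is run from within its own subdirectory; adjust the path to the build script of your choice):

    # build the generic simulation binary without special preprocessor definitions
    # (assumes g++ and the boost libraries are installed)
    cd simulation-code/build_scripts_misc
    ./compile

    # alternatively, the included Makefile can be used:
    # cd simulation-code && make

The resulting binaries are placed in simulation-bin/, as noted in the outline above.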

Running a simulation

To run a simulation, execute the binary file with or without command-line options (these are defined in NetworkMain.cpp), for example, via one of the shell scripts listed below. Please note that, in addition, there are preprocessor options that can be set before compiling (see, for example, NetworkSimulation.cpp or one of the compile* scripts) but cannot be changed at runtime.

The binaries and run scripts for the studies mentioned above are located in subdirectories of simulation-bin/. Please note that some of these scripts trigger a cascade of many simulations by using the screen command (which has to be installed). On less powerful machines, this may take a very long time or run into memory issues; in those cases, you might consider running the simulations separately. An example invocation is sketched after the list below.

  • run_binary_paper1/:

    • run_2N1S - reproduce single-synapse data resulting from basic induction protocols for synaptic plasticity (see, for example, Sajikumar et al., J Neurosci, 2005)
    • run_IRS - learn a memory representation, save the network state, and recall after 10 seconds; load the network state, apply intermediate stimulation, let the memory representation consolidate, and recall after 8 hours
    • run_recall_150_full - learn a memory representation, let it consolidate, and recall after 8 hours (no fast-forwarding, takes very long)
    • run_recall_varied_inhibition - learn a memory representation, save the network state, and recall after 10 seconds; load the network state, let the memory representation consolidate, and recall after 8 hours; do this for varied inhibition parameters
    • run_recall_varied_size - learn a memory representation, save the network state, and recall after 10 seconds; load the network state, let the memory representation consolidate, and recall after 8 hours; do this for varied pattern sizes
    • connections.txt - the default connectivity matrix used in this paper; if this file is absent, the simulation program will automatically generate a new connectivity structure
  • run_binary_paper2/:

    • run_activation* - simulate the activity in a previously consolidated network for 3 minutes without plasticity (it is required to run the corresponding run_learn_cons* script beforehand)
    • run_learn_cons - subsequently learn 3 memory representations and let them consolidate for 8 hours
    • run_learn_cons_interleaved - learn 3 memory representations in an interleaved manner and let them consolidate for 8 hours (it is required to run run_learn_cons beforehand)
    • run_learn_cons_IC - subsequently learn 3 memory representations; intermediate consolidation for 8 hours after learning each individual assembly
    • run_learn_cons_noLTD - subsequently learn 3 memory representations and let them consolidate for 8 hours; without LTD
    • run_learn_cons_randweight - subsequently learn 3 memory representations and let them consolidate for 8 hours; with randomly initialized weights
    • run_priming_and_activation - prime one of the assemblies in a previously consolidated network at a certain time and then simulate the activity for 3 minutes without plasticity (it is required to run run_learn_cons_interleaved beforehand)
    • run_recall - apply recall stimuli to the assemblies in a previously consolidated network and in a control network (it is required to run run_learn_cons beforehand)
  • run_binary_paper3/:

    • run_raster_10s - learn a memory representation (with varied stimulation strength) and recall after 10 seconds
    • run_raster_8h - learn a memory representation (with varied stimulation strength), let it consolidate (with varied level of neuromodulation), and recall after 8 hours
    • run_nm_timing - learn a memory representation, let it consolidate (with low or high level of neuromodulation and varied timing), and recall after 8 hours
  • run_binary_misc/:

    • run_2N1S_basic_early - simple example of the induction of early-phase plasticity by a few pre-defined spikes, with stochastic plasticity dynamics (the interactive notebook in notebooks/simulator_comparison_basic/ can be used for the same purpose)
    • run_2N1S_basic_early_det - simple example of the induction of early-phase plasticity by a few pre-defined spikes, with deterministic plasticity dynamics
    • run_2N1S_basic_late - simple example of the induction of late-phase plasticity by prolonged substantial stimulation of one neuron (the interactive notebook in notebooks/simulator_comparison_basic/ can be used for the same purpose)
    • run_2N1S_conv - tests the convergence of the neuronal membrane potential following current stimulation
    • run_2N1S_facilitated_spiking - reproduce single-synapse data resulting from basic induction protocols for synaptic plasticity, with facilitated spiking (hence, with more postsynaptic spikes)
    • run_2N1S_Li2016 - reproduce single-synapse data resulting from basic induction protocols for synaptic plasticity, with the same model as in Li, Kulvicius, Tetzlaff, PLOS ONE, 2016
    • run_activation_attractors - simulate the activity in a previously consolidated network for 3 minutes without plasticity (it is required to run run_learn_cons_attractors beforehand)
    • run_benchmark - pipeline for benchmarks of runtime and memory usage; can be used with different paradigms ('CA200', '2N1S_basic_late', ...)
    • run_learn_cons_attractors - subsequently learn 3 attractor memory representations; consolidate for 8 hours after learning each assembly
    • run_defaultnet_onespike_exc - test case to study the transmission of a single spike of an excitatory neuron in a network
    • run_defaultnet_onespike_inh - test case to study the transmission of a single spike of an inhibitory neuron in a network
    • run_PFreq - network simulations to characterize plasticity regimes depending on pre- and post-synaptic firing rate
    • run_smallnet2_det_max_activity - simulate a small network of 4 excitatory neurons with deterministic dynamics, where one neuron fires at maximum rate
    • run_smallnet3_8h-recall - simulate a small network of 4 excitatory neurons and 1 inhibitory neuron, where one excitatory neuron receives typical "learning" stimulation, and "recall" stimulation after 8 hours (the interactive notebook in notebooks/simulator_comparison_basic/ can be used for the same purpose)
    • run_smallnet3_10s-recall - simulate a small network of 4 excitatory neurons and 1 inhibitory neuron, where one excitatory neuron receives typical "learning" stimulation, and "recall" stimulation after 10 seconds (the interactive notebook in notebooks/simulator_comparison_basic/ can be used for the same purpose)
    • run_recall_varied_size - learn a memory representation and recall after 10 seconds; learn a memory representation, let it consolidate, and recall after 8 hours; do this for different pattern sizes (the interactive notebook in notebooks/simulator_comparison_memory_recall/ can be used for the same purpose)
    • runner_recall_varied_size_desktop - run a number of trials of run_recall_varied_size via screen
    • _runner_recall_varied_size_gwdg-medium - run a number of trials of run_recall_varied_size on the SLURM-managed GWDG SCC cluster
    • track_allocated_memory - script to track the memory usage of a given process
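
For illustration, a minimal sketch of running one of the simulations of paper 1, assuming the run script is executed from within its subdirectory:

    # run the single-synapse plasticity simulation (paper 1);
    # command-line options, where applicable, are defined in NetworkMain.cpp
    cd simulation-bin/run_binary_paper1
    ./run_2N1S

For scripts that launch simulations via screen, the running sessions can be listed with screen -ls.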

Analysis scripts

The following scripts, located in analysis/, serve to process and analyze the data produced by the simulation code. They were tested with Python 3.7.3, NumPy 1.20.1, SciPy 1.6.0, and pandas 1.0.3. Please note that some of the script files are interdependent, and that not all of them are needed to reproduce the results of any single study mentioned above (also see the section "Interactive scripts" below). An example invocation is sketched after the file list.

Files

  • adjacencyFunctions.py - functions to analyze the connectivity and weights in a network (used to compute mean and standard deviation of early- and late-phase weights)
  • analyzeWeights.py - routine that runs functions to investigate the synaptic weight structure of networks (reads from [timestamp]_net_[time].txt files produced by the simulation program)
  • assemblyAttractorStatistics.py - determines the statistics of the activation of attractor cell assemblies (considering exclusive activation and transitions between attractors)
  • assemblyAvalancheStatistics.py - determines the statistics of avalanche occurrence within cell assemblies
  • averageFileColumnsAdvanced.py - averages data columns across files (for example, average over multiple weight traces or probability distributions)
  • averageWeights.py - averages across multiple weight matrices
  • calculateMIa.py - calculates the mutual information from two firing rate distributions (given either by [timestamp]_net_[time].txt files or by arrays)
  • calculateQ.py - calculates the pattern completion coefficient Q for an input-defined cell assembly (either from the firing rate distribution given by a [timestamp]_net_[time].txt file or from mean firing rates)
  • computeRateFromSpikes.py - computes the firing rate over time via fixed time windows from spike raster data
  • extractAverageQMI.py - averages over trials of Q and MI data
  • extractParamsFiringRates.py - recursively extracts the mean firing rates of neuronal subpopulations along with the simulation parameters from directories containing spike raster data (can be used to process many datasets)
  • extractParamsMeanWeights.py - recursively extracts the mean weights across neuronal subpopulations along with the simulation parameters from directories containing [timestamp]_net_[time].txt files (can be used to process many datasets)
  • extractParamsProteins.py - recursively extracts the mean protein amount across core and non-core neurons along with the simulation parameters from directories containing [timestamp]_mean_weight.txt files (can be used to process many datasets)
  • extractParamsQMI.py - recursively extracts the Q and MI measures along with the simulation parameters from directories containing [timestamp]_net_[time].txt files (can be used to process many datasets)
  • extractParamsQMIfromSpikes.py - extracts simulation parameters and firing rates and/or Q and MI measures from spike raster data; optionally recursively searches directories for suitable data files (can be used to process many datasets)
  • frequencyAnalysisSpikeRaster.py - computes the frequency spectrum of spike raster data
  • meanCorrelations.py - computes firing rate correlations for neuron pairs from spike raster data and averages over subpopulations of the network
  • nmAnalysisClass.py - generates temporal traces and analyzes their consolidation; produces most of the final data for paper 3 ("Neuromodulator-dependent synaptic tagging..."; an interactive frontend for this can be found in notebooks/)
  • numberOfSpikesInBins.py - computes the distribution of spikes per time bin from cell assembly time series data
  • overlapParadigms.py - defines paradigms of overlapping cell assemblies
  • plotQMICoreSize.py - functions to plot Q and MI values (from 10s- and 8h-recall) over assembly core size
  • plotSimResultsComparisonMeanSEM.py - plots traces for comparison across different simulators (adapted from here)
  • utilityFunctions.py - diverse utility functions, e.g., to read firing rate, early- and late-phase weight data from [timestamp]_net_[time].txt files produced by the simulation program
  • valueDistributions.py - functions to analyze and plot weight and firing rate distributions
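
As sketched below, an analysis script can be started with the Python interpreter from within analysis/; whether a given script expects command-line arguments or is configured by editing it directly should be checked in the script itself, so the invocation shown here is only an assumption:

    # hypothetical invocation: investigate the synaptic weight structure of simulated networks;
    # analyzeWeights.py reads [timestamp]_net_[time].txt files produced by the simulation program
    cd analysis
    python3 analyzeWeights.py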

Interactive scripts

The following subfolders of notebooks/ contain interactive Jupyter notebooks serving to reproduce figures for a related paper. They were tested to run with Jupyter Lab 3.2.8 and 3.4.3. A sketch of launching the notebooks is given after the list.

Files

  • lehr_luboeinski_tetzlaff_2022/ - reproduces figures of paper 3 ("Neuromodulator-dependent synaptic tagging..."); the raw data has to be taken from the provided sources or generated using the simulation code
  • simulator_comparison_basic/ - runs basic simulations (early- and late-phase plasticity, smallnet3 dynamics) to compare with other simulators, in particular, with Arbor or Brian 2
  • simulator_comparison_memory_recall/ - runs simulations of memory recall (after 10 seconds and after 8 hours) to compare with other simulators, in particular, with Arbor or Brian 2
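
To launch the notebooks, Jupyter Lab can be started from the respective subfolder, for example (assuming a working Jupyter Lab installation, cf. the versions given above):

    # open the simulator comparison notebooks in the browser
    cd notebooks/simulator_comparison_basic
    jupyter lab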
