
Simulator doesn't seem to work correctly when time step is changed #243

Open · 1 of 4 tasks

stiber opened this issue Oct 25, 2018 · 7 comments
Comments

@stiber
Contributor

stiber commented Oct 25, 2018

What kind of issue is this?

  • [x] Bug report
  • [ ] Feature request
  • [ ] Question not answered in documentation
  • [ ] Cleanup needed

Attempting to change the time step from 0.1 ms to 0.05 ms or 0.01 ms produces results that are completely different from the original.

What is affected by this?

Simulator

How do we replicate the issue/how would it work?

Change only the time step; keep all other parameters the same.

Expected behavior (i.e. solution or outline of what it would look like)

Should produce qualitatively the same results. Since we draw random numbers at each time step, changing the time step size will produce different noise realizations, even with identical seeds. But this should not make a large qualitative difference in the results after analysis. For example, if done with Izhikevich neurons, the raster plot should look basically the same, and the neuron spike rates and ISI distributions should be the same.

Other Comments

@fumik
Contributor

fumik commented Nov 2, 2018

Ran a 1 s simulation with a 1x1 LIF neuron (no input, no active neuron), changing the time step between 0.1 ms and 0.01 ms.

Results:

[fig1: membrane potential of the 0.1 ms time step simulation]

[fig2: membrane potential of the 0.01 ms time step simulation]

As the results show, the fluctuation of the membrane potential in the 0.01 ms time step simulation is smaller. If this neuron were an active neuron whose threshold is around 13.6e-3 V, the neuron would never fire.

Reason:

According to the formula below, the fluctuation of the membrane potential is proportional to C2·Inoise. Therefore, as the simulation time step becomes smaller, C2 also gets smaller, and eventually the fluctuation of the membrane potential becomes smaller.

Vm(t+Δt) = C1·Vm(t) + C2·(Isyn(t) + Iinject + Inoise + Vresting/Rm)
C2 = Rm·(1 − e^(−Δt/τm))
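
For a quick numeric check of this scaling (a sketch only; the Rm and τm values below are placeholders, not the simulator's actual defaults):

```cpp
#include <cmath>
#include <cstdio>
#include <initializer_list>

// C2 = Rm * (1 - exp(-dt / tau_m)); for dt << tau_m, C2 ~ Rm * dt / tau_m,
// so shrinking dt by 10x shrinks the per-step noise contribution C2 * Inoise by ~10x.
int main() {
    const double Rm = 1.0e8;    // placeholder membrane resistance [ohm]
    const double tauM = 10e-3;  // placeholder membrane time constant [s]
    for (double dt : {0.1e-3, 0.01e-3}) {
        double C2 = Rm * (1.0 - std::exp(-dt / tauM));
        std::printf("dt = %.2e s -> C2 = %.4e\n", dt, C2);
    }
    return 0;
}
```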

@stiber
Contributor Author

stiber commented Nov 2, 2018

EDITED

The most obvious cause is that C2 scales the noise amplitude down by a factor of 10 as the time step is reduced by a factor of 10. The simplest approach, a bit of a hack, would be to multiply Inoise by the ratio of 0.1 ms to Δt, undoing that reduction.
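
A minimal sketch of that hack, assuming a hypothetical scaledNoise helper and a reference time step of 0.1 ms (neither is from the actual codebase):

```cpp
// Hypothetical compensation: undo the ~10x reduction in C2 when dt shrinks.
// BASE_DT is the reference time step at which the noise amplitude was tuned.
constexpr double BASE_DT = 0.1e-3;  // 0.1 ms, in seconds

double scaledNoise(double Inoise, double dt) {
    return Inoise * (BASE_DT / dt);  // e.g., 10x larger at dt = 0.01 ms
}
```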

It also might be due to the noise's spectrum being shifted to higher frequencies, which are then low-pass filtered by the LIF neuron. One approach to addressing this would be to choose a "standard" noise (say, at a 0.1 ms time step), double-buffer the noise generation, and then wrap the RNG with a noise generator that interpolates so that, regardless of the time step, the neuron gets (approximately) the same waveform.

[Attached image: IMG_0143.JPG]

@fumik
Contributor

fumik commented Nov 5, 2018

To make the noise generator more general, we should introduce a parameter, the noise frequency, in addition to the noise scale factor. In the current implementation, the noise frequency is equal to 1/(simulation time step). The noise frequency is 1/(interval between noise samples), and it must satisfy (noise frequency <= 1/(simulation time step)). If (noise frequency < 1/(simulation time step)), we will interpolate between noise samples.
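
A rough sketch of what such an interpolating, double-buffered noise source could look like (class and member names are hypothetical, and std::mt19937 stands in for whatever RNG the simulator actually uses):

```cpp
#include <random>

// Hypothetical wrapper: draws a fresh noise sample only every noiseDt seconds
// and linearly interpolates in between, so the noise waveform is
// (approximately) independent of the simulation time step dt.
class InterpolatingNoise {
public:
    InterpolatingNoise(double noiseDt, unsigned seed)
        : noiseDt_(noiseDt), rng_(seed), gauss_(0.0, 1.0),
          prev_(gauss_(rng_)), next_(gauss_(rng_)) {}

    // Return the noise value at absolute simulation time t (t non-decreasing).
    double at(double t) {
        // Advance the double-buffered samples until t falls in [t0_, t0_ + noiseDt_).
        while (t >= t0_ + noiseDt_) {
            prev_ = next_;
            next_ = gauss_(rng_);
            t0_ += noiseDt_;
        }
        double frac = (t - t0_) / noiseDt_;  // position within the interval
        return prev_ + frac * (next_ - prev_);
    }

private:
    double noiseDt_;      // noise update interval ("noise time step")
    double t0_ = 0.0;     // start time of the current interval
    std::mt19937 rng_;
    std::normal_distribution<double> gauss_;
    double prev_, next_;  // double-buffered noise samples
};
```

Stepping this with dt = 0.1 ms or dt = 0.01 ms then samples the same underlying waveform, at the cost of the spectral cutoff discussed below.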

@fumik
Contributor

fumik commented Nov 6, 2018

Another restriction on the simulation time step comes from the length of the event queue buffer; the following inequality must be satisfied:

(maximum + minimum synapse transmission delay) / simulation time step + 1 < length of event queue buffer

where: maximum synapse transmission delay = 1.5e-3 s
minimum synapse transmission delay = 0.8e-3 s
length of event queue buffer = 64

Therefore, based on the current design of the event queue buffer, a simulation time step of 0.01 ms is not allowed.
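
A quick arithmetic check of this constraint, using the constants above (a sketch, not simulator code):

```cpp
#include <cstdio>
#include <initializer_list>

// Required buffer slots: (max + min synapse transmission delay) / dt + 1.
// With a 64-slot event queue, dt = 0.1 ms fits (24 < 64), but dt = 0.01 ms
// does not (231 > 64).
int main() {
    const double maxDelay = 1.5e-3, minDelay = 0.8e-3;  // seconds
    const int bufferLen = 64;
    for (double dt : {0.1e-3, 0.01e-3}) {
        double needed = (maxDelay + minDelay) / dt + 1.0;
        std::printf("dt = %.2e s: needs %.0f slots (%s)\n", dt, needed,
                    needed < bufferLen ? "OK" : "too small");
    }
    return 0;
}
```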

@stiber
Contributor Author

stiber commented Nov 7, 2018

OK, the decision about noise makes sense to me. We should probably avoid the term "frequency" for this, as it could be confused with the frequency content of the noise, i.e., that it's not white noise. Maybe "noise time step"? Or "noise update interval"?

In thinking about the two alternatives (increase the noise current amplitude vs. keep the noise time step constant and interpolate), it strikes me that we're presenting the person running the simulation with a choice (and that is good, because they can make the decision):

  1. If you keep the noise time step (let's call it dtn) constant and interpolate, you generate a noise waveform that is independent of the simulation time step — as you decrease the simulation time step, the noise waveform still looks the same, with values interpolated. The tradeoff is that, technically, the noise is no longer white — it has a cutoff frequency of 1/(2 * dtn). This is like you're simulating a physical setup with a physical noise waveform; the simulation behavior shouldn't change as a result of the time step (other than any critical dependencies on discrete event timing, i.e., spikes).
  2. If you instead scale the noise current and keep dtn equal to the simulation time step, your noise is still white noise. However, the waveform is now very different and so your results may change greatly, depending on the particulars of the simulation, merely as a result of changing the simulation time step.

@fumik
Contributor

fumik commented Nov 9, 2018

I want to postpone this to a future issue.

Reasons:

  1. The 0.1 ms time step is an optimal value for the neuron and synapse time parameters.
  2. Changing the time step is restricted by the current simulator design, as mentioned above.
  3. Handling noise is closely related to the model formulas and integrator design.

@stiber
Contributor Author

stiber commented Nov 11, 2018

Agreed. While we can hack this, it's not entirely clear that we can make all of these things independent of time step changes so that we can be confident that we aren't introducing artifacts. I'll reference issue #182 that notes that we should likely separate the integration method from the models. Typically, this is done by having the model-specific code take in state information and return derivatives, which the integrator then uses. But this is a major change — as big as thinking of noise in terms of synthesizing a waveform rather than merely generating random numbers.
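
For concreteness, a sketch of the separation issue #182 points toward (the Model interface and eulerStep function here are hypothetical, not from the codebase): the model only reports derivatives, and the integrator owns the time step.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical interface: the model reports dState/dt given the current state,
// and knows nothing about the time step.
struct Model {
    virtual std::vector<double> derivatives(double t,
                                            const std::vector<double>& state) const = 0;
    virtual ~Model() = default;
};

// A forward-Euler integrator; swapping in RK4 etc. would not touch the model.
void eulerStep(const Model& model, double t, double dt, std::vector<double>& state) {
    std::vector<double> d = model.derivatives(t, state);
    for (std::size_t i = 0; i < state.size(); ++i)
        state[i] += dt * d[i];
}
```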

@stiber stiber assigned stiber and unassigned JewelYLee Feb 22, 2019
@stiber stiber removed this from the Fall 2018 milestone Oct 14, 2019
@stiber stiber removed the BG/C++ label Feb 15, 2020