I'm opening this issue to report a strange behavior in the quantization process that can leave quantized networks unable to produce any neural activity.

The behavior can be observed by running the following code:
```python
import numpy as np
import matplotlib.pyplot as plt

from rockpool import TSEvent, TSContinuous
from rockpool.nn.modules.torch import LinearTorch, LIFTorch
from rockpool.nn.combinators import Sequential
from rockpool.parameters import Constant
from rockpool.transform import quantize_methods as q
from rockpool.devices.xylo.syns63300 import config_from_specification, mapper, XyloSim  # The same happens for syns61201


def build_net(n_input_channels, n_population, n_output_channels, neuron_parameters):
    return Sequential(
        LinearTorch((n_input_channels, n_population)),
        LIFTorch(n_population, **neuron_parameters),
        LinearTorch((n_population, n_output_channels)),
        LIFTorch(n_output_channels, **neuron_parameters),
    )


neuron_parameters = {
    "tau_mem": 0.16,
    "tau_syn": 0.065,
    "bias": Constant(0.0),
    "threshold": Constant(1.0),
    "dt": 5e-2,
}

net = build_net(12, 128, 7, neuron_parameters)

# Convert network to spec
spec = mapper(
    net.as_graph(), weight_dtype="float", threshold_dtype="float", dash_dtype="float"
)

# Quantize the parameters
spec_Q = spec
spec_Q.update(q.global_quantize(**spec_Q))

print(f"dash_syn: {spec_Q['dash_syn']}")
print(f"dash_syn_out: {spec_Q['dash_syn_out']}")

# Convert spec to Xylo configuration
config, is_valid, m = config_from_specification(**spec_Q)
if not is_valid:
    raise ValueError(f"Error detected in spec:\n{m}")

# Build XyloSim from the config
mod = XyloSim.from_config(config, dt=neuron_parameters["dt"])

# - Generate some Poisson input
T = 100
f = 0.1
input_spikes = np.random.rand(T, 12) < f

TSEvent.from_raster(
    input_spikes, neuron_parameters["dt"], name="Poisson input events"
).plot()

# - Evolve the input over the network, in simulation
out, _, r_d = mod(input_spikes, record=True)

# - Plot some internal state variables
plt.figure()
plt.imshow(r_d["Spikes"].T, aspect="auto", origin="lower")
plt.title("Hidden spikes")
plt.ylabel("Channel")

plt.figure()
TSContinuous.from_clocked(
    r_d["Vmem"], neuron_parameters["dt"], name="Hidden membrane potentials"
).plot()

plt.figure()
TSContinuous.from_clocked(
    r_d["Isyn"], neuron_parameters["dt"], name="Hidden synaptic currents"
).plot()

plt.show()
```
The recorded membrane potentials and synaptic currents are flat lines, hence no spikes are generated.

This is probably because the quantized parameters that control the bit-shift decay, `dash_syn` and `dash_syn_out`, are both 0.

Since the decay is calculated as `new_v = v - (v >> dash)`, and `v >> 0 == v`, setting `dash` to 0 means that `new_v` will always be 0 (`v - v`).
This happens for both channel-wise and global quantization.
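As a sanity check, here is a minimal stand-alone sketch (not rockpool code; it only assumes the update rule above) showing how `dash = 0` collapses the state in a single step:

```python
# Stand-alone sketch of the assumed hardware update: new_v = v - (v >> dash)
v0 = 1024
for dash in (0, 4):
    v = v0
    trace = []
    for _ in range(5):
        v = v - (v >> dash)  # with dash == 0, v >> 0 == v, so v drops to 0
        trace.append(v)
    print(f"dash = {dash}: {trace}")

# dash = 0 gives [0, 0, 0, 0, 0]: all synaptic/membrane state is erased
# every step, so no input can ever accumulate towards the threshold.
```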
I know that time constants should generally be around 10 times the timestep, but specifically for my use case the best parameters, obtained through hyper-parameter optimization, are the ones used in the demo code:

`{dt: 0.05, tau_mem: ~0.160, tau_syn: ~0.065}`
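For reference, if the quantizer derives the bit-shift from the time constant roughly as `dash = round(log2(tau / dt))` (so that `tau ≈ dt * 2**dash`; this is my assumption about the conversion, not confirmed from the source), then these values land exactly on the degenerate case:

```python
import numpy as np

# Assumed relation: dash = round(log2(tau / dt))
dt = 0.05
for name, tau in [("tau_mem", 0.16), ("tau_syn", 0.065)]:
    dash = np.round(np.log2(tau / dt))
    print(f"{name}: log2({tau} / {dt}) = {np.log2(tau / dt):.3f} -> dash = {dash:.0f}")

# tau_mem = 0.16  -> log2(3.2) ≈ 1.68 -> dash = 2 (decay behaves sensibly)
# tau_syn = 0.065 -> log2(1.3) ≈ 0.38 -> dash = 0 (state is zeroed each step)
```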