[Xylo] The quantization process of some trained models results in no network activity #19

MarcoBramini opened this issue Oct 18, 2023
I'm opening this issue to report a strange behavior in the quantization process that can produce quantized networks unable to generate any neural activity.

The behavior can be observed by running the following code:

import numpy as np
import matplotlib.pyplot as plt
from rockpool import TSEvent, TSContinuous
from rockpool.nn.modules.torch import LinearTorch, LIFTorch
from rockpool.nn.combinators import Sequential
from rockpool.parameters import Constant
from rockpool.transform import quantize_methods as q
from rockpool.devices.xylo.syns63300 import config_from_specification, mapper, XyloSim # The same happens for syns61201


def build_net(n_input_channels, n_population, n_output_channels, neuron_parameters):
    return Sequential(
        LinearTorch((n_input_channels, n_population)),
        LIFTorch(n_population, **neuron_parameters),
        LinearTorch((n_population, n_output_channels)),
        LIFTorch(n_output_channels, **neuron_parameters),
    )


neuron_parameters = {
    "tau_mem": 0.16,
    "tau_syn": 0.065,
    "bias": Constant(0.0),
    "threshold": Constant(1.0),
    "dt": 5e-2,
}
net = build_net(12, 128, 7, neuron_parameters)

# Convert network to spec
spec = mapper(
    net.as_graph(), weight_dtype="float", threshold_dtype="float", dash_dtype="float"
)

# Quantize the parameters
spec_Q = spec
spec_Q.update(q.global_quantize(**spec_Q))
print(f"dash_syn: {spec_Q['dash_syn']}")
print(f"dash_syn_out: {spec_Q['dash_syn_out']}")

# Convert spec to Xylo configuration
config, is_valid, m = config_from_specification(**spec_Q)
if not is_valid:
    raise ValueError(f"Error detected in spec:\n{m}")

# Build XyloSim from the config
mod = XyloSim.from_config(config, dt=neuron_parameters["dt"])

# - Generate some Poisson input
T = 100
f = 0.1
input_spikes = np.random.rand(T, 12) < f
TSEvent.from_raster(
    input_spikes, neuron_parameters["dt"], name="Poisson input events"
).plot()

# - Evolve the input over the network, in simulation
out, _, r_d = mod(input_spikes, record=True)

# - Plot some internal state variables
plt.figure()
plt.imshow(r_d["Spikes"].T, aspect="auto", origin="lower")
plt.title("Hidden spikes")
plt.ylabel("Channel")

plt.figure()
TSContinuous.from_clocked(
    r_d["Vmem"], neuron_parameters["dt"], name="Hidden membrane potentials"
).plot()

plt.figure()
TSContinuous.from_clocked(
    r_d["Isyn"], neuron_parameters["dt"], name="Hidden synaptic currents"
).plot()

plt.show()

The recorded membrane potentials and synaptic currents are flat lines, and consequently no spikes are generated.
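A quick sanity check can be appended to the script above to verify this programmatically rather than by eyeballing the plots (np.ptp gives the value range, so 0.0 means a flat line):

# Sanity check, appended to the script above: everything prints 0 if the
# quantized network is completely silent
print(out.sum())                                  # output spike count
print(r_d["Spikes"].sum())                        # hidden spike count
print(np.ptp(r_d["Vmem"]), np.ptp(r_d["Isyn"]))   # 0.0 means the state never moves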

This is most likely because the quantized parameters that control the bit-shift decay, dash_syn and dash_syn_out, are both 0.
Since the decay is computed as new_v = v - (v >> dash), setting dash to 0 means new_v is always 0: v >> 0 equals v, so the subtraction yields v - v.
This happens with both channel-wise and global quantization.
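For illustration, here is a minimal plain-NumPy sketch of the bit-shift decay as described above (bitshift_decay is a hypothetical helper, not the actual XyloSim implementation):

import numpy as np

def bitshift_decay(v, dash, steps=5):
    # Apply new_v = v - (v >> dash) repeatedly, as in the Xylo state decay
    trace = [int(v)]
    for _ in range(steps):
        v = v - (v >> dash)
        trace.append(int(v))
    return trace

print(bitshift_decay(np.int32(1024), dash=4))  # gradual exponential-like decay
print(bitshift_decay(np.int32(1024), dash=0))  # state collapses to 0 after one step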

I know that time constants should generally be around 10 times the timestep, but for my use case the best parameters, obtained through hyperparameter optimization, are the ones used in the demo code:

{dt: 0.05, tau_mem: ~0.160, tau_syn: ~0.065}
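If I read the quantization code correctly, the dash exponents are derived roughly as dash ≈ round(log2(tau / dt)), which would explain why these parameters break: tau_mem survives, but tau_syn rounds down to 0. A sketch under that assumption (approx_dash is a hypothetical helper; the real derivation lives in Rockpool's quantization code):

import numpy as np

def approx_dash(tau, dt):
    # Assumed relation between time constant and bit-shift decay exponent
    return int(np.round(np.log2(tau / dt)))

print(approx_dash(0.160, 0.05))  # -> 2: dash_mem is fine
print(approx_dash(0.065, 0.05))  # -> 0: dash_syn wipes the synaptic state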