
[Xylo IMU] Rockpool returns an error generating the hardware configuration for networks with more than 128 hidden neurons #18


MarcoBramini commented Oct 18, 2023

I'm opening this issue to report a possible bug in the generation of the hardware configuration, specifically for Xylo IMU (syns63300).

When converting any compatible network with more than 128 hidden neurons, the function config_from_specification(...) emits odd warnings and an internal call to Samna fails.

The following code reproduces the problem:

from rockpool.nn.modules.torch import LinearTorch, LIFTorch
from rockpool.nn.combinators import Sequential
from rockpool.transform import quantize_methods as q
from rockpool.devices.xylo.syns63300 import config_from_specification, mapper, XyloSim


def build_net(n_input_channels, n_population, n_output_channels):
    return Sequential(
        LinearTorch((n_input_channels, 64)),
        LIFTorch(64),
        LinearTorch((64, n_population)),
        LIFTorch(n_population),
        LinearTorch((n_population, 64)),
        LIFTorch(64),
        LinearTorch((64, n_output_channels)),
        LIFTorch(n_output_channels),
    )


# Let's create a feed-forward network with the following shape:
# - Input layer with 12 neurons
# - Linear weights (12,64)
# - Hidden layer with 64 neurons
# - Linear weights (64,196)
# - Hidden layer with 196 neurons
# - Linear weights (196,64)
# - Hidden layer with 64 neurons
# - Linear weights (64,7)
# - Output layer with 7 neurons
#
# Total hidden neuron count: 324 (max Xylo IMU: 496)
# Max output connections per neuron: 196 (max Xylo IMU: 512)
# Total reservoir connections under the Xylo IMU limit of 31744
net = build_net(12, 196, 7)

# Convert network to spec
spec = mapper(
    net.as_graph(), weight_dtype="float", threshold_dtype="float", dash_dtype="float"
)

# Quantise the parameters (copy the spec first, so the float spec isn't mutated)
spec_Q = dict(spec)
spec_Q.update(q.global_quantize(**spec_Q))

# Convert spec to Xylo configuration
config, is_valid, m = config_from_specification(**spec_Q)
if not is_valid:
    raise ValueError(f"Error detected in spec:\n{m}")

Running it produces the following warnings and error:

1) WARNING .../rockpool/devices/xylo/syns63300/xylo_samna.py:221: UserWarning: More than 128 input expansion neurons (IEN) detected. Only the first 128 will be used.

This warning is controlled by the condition if weights_in.shape[1] > 128, which also counts zero-valued weights. weights_in is a (12, 324, 1) matrix, but the first hidden layer holds only 64 neurons, so only 64 of its columns are non-zero. Shouldn't the IEN limit of 128 already be satisfied?
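For illustration, here is a minimal NumPy sketch (not Rockpool's internal code) contrasting the shape-based count with a count of non-zero columns. The (12, 324, 1) shape is the one reported above, and the assumption that only the first 64 columns are non-zero follows from the network structure:

import numpy as np

# Hypothetical reconstruction of weights_in for the network above:
# 12 input channels onto 324 hidden neurons, 1 synapse each, with only
# the first hidden layer (64 neurons) actually connected.
weights_in = np.zeros((12, 324, 1))
weights_in[:, :64, :] = np.random.standard_normal((12, 64, 1))

n_ien_by_shape = weights_in.shape[1]                               # 324 -> triggers the warning
n_ien_nonzero = np.count_nonzero(np.any(weights_in, axis=(0, 2)))  # 64  -> within the 128 limit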

2) WARNING .../rockpool/devices/xylo/syns63300/xylo_samna.py:221: UserWarning: More than 128 input expansion neurons (IEN) detected. Only the first 128 will be used.

Same behavior as in point 1). The weights_out matrix is (324, 7) and only 64 of its rows are non-zero (the last hidden layer holds 64 neurons), but the full matrix is considered.

3) ValueError: Error detected in spec:
Active output expansion neurons must be in [1,128]. Actual: 324

This error seems related to point 2): the full weights matrix is used when counting the output expansion neurons, but shouldn't there be only 64 of them?
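Again for illustration, the same counting sketch for the output side, assuming only the last 64 hidden neurons project to the 7 outputs:

import numpy as np

# Hypothetical reconstruction of weights_out for the network above:
# 324 hidden neurons onto 7 output neurons, with only the last hidden
# layer (64 neurons) actually connected to the readout.
weights_out = np.zeros((324, 7))
weights_out[-64:, :] = np.random.standard_normal((64, 7))

n_oen_by_shape = weights_out.shape[0]                          # 324 -> fails the [1, 128] check
n_oen_nonzero = np.count_nonzero(np.any(weights_out, axis=1))  # 64  -> would satisfy it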

The same issues appear for the following network:

Sequential(
    LinearTorch((12, 128)),
    LIFTorch(128, **neuron_parameters),
    LinearTorch((128, 128)),
    LIFTorch(128, **neuron_parameters),
    LinearTorch((128, 7)),
    LIFTorch(7, **neuron_parameters),
)
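If mapper flattens the hidden layers in the same way as above, weights_in here would have shape (12, 256, 1) and weights_out shape (256, 7), so the same shape-based checks would fire even though no single layer exceeds 128 neurons. (This is a presumption based on the shapes reported above, not on inspecting the mapper output.)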