I'm opening this issue to report a possible bug during generation of the hardware configuration, specifically for Xylo IMU (syns63300).
When trying to convert any compatible network with more than 128 hidden neurons, the function `config_from_specification(...)` emits odd warnings and an internal call to Samna fails.
The following code shows the described problem:
```python
from rockpool.nn.modules.torch import LinearTorch, LIFTorch
from rockpool.nn.combinators import Sequential
from rockpool.transform import quantize_methods as q
from rockpool.devices.xylo.syns63300 import config_from_specification, mapper, XyloSim


def build_net(n_input_channels, n_population, n_output_channels):
    return Sequential(
        LinearTorch((n_input_channels, 64)),
        LIFTorch(64),
        LinearTorch((64, n_population)),
        LIFTorch(n_population),
        LinearTorch((n_population, 64)),
        LIFTorch(64),
        LinearTorch((64, n_output_channels)),
        LIFTorch(n_output_channels),
    )


# Let's create a feed-forward network with the following shape:
# - Input layer with 12 neurons
# - Linear weights (12, 64)
# - Hidden layer with 64 neurons
# - Linear weights (64, 196)
# - Hidden layer with 196 neurons
# - Linear weights (196, 64)
# - Hidden layer with 64 neurons
# - Linear weights (64, 7)
# - Output layer with 7 neurons
#
# Total hidden neuron count: 324 (max Xylo IMU: 496)
# Max output connections per neuron: 196 (max Xylo IMU: 512)
# Total reservoir connections under the Xylo IMU limit of 31744
net = build_net(12, 196, 7)

# Convert the network to a spec
spec = mapper(
    net.as_graph(), weight_dtype="float", threshold_dtype="float", dash_dtype="float"
)

# Quantise the parameters
spec_Q = spec
spec_Q.update(q.global_quantize(**spec_Q))

# Convert the spec to a Xylo configuration
config, is_valid, m = config_from_specification(**spec_Q)

if not is_valid:
    raise ValueError(f"Error detected in spec:\n{m}")
```
Running it results in:
```
WARNING .../rockpool/devices/xylo/syns63300/xylo_samna.py:221: UserWarning: More than 128 input expansion neurons (IEN) detected. Only the first 128 will be used.
```

1) This warning is controlled by the condition `if weights_in.shape[1] > 128`, which also counts weights with value 0. `weights_in` is a (12, 324, 1) matrix, but the first hidden layer only holds 64 neurons, so only 64 values are != 0. Shouldn't the IEN limit of 128 already be met?
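To illustrate the difference between the two counts, here is a minimal NumPy sketch (not rockpool's actual internals, just the shapes described above): the raw `shape[1]` of the input weight tensor is 324, while the number of input columns that actually carry a non-zero connection is 64.

```python
import numpy as np

# Hypothetical (12, 324, 1) input weight tensor where only the first
# 64 hidden neurons receive non-zero input weights, as in the network above.
weights_in = np.zeros((12, 324, 1))
weights_in[:, :64, :] = 1.0

# The warning fires on the raw shape...
shape_based_count = weights_in.shape[1]  # 324

# ...but counting hidden-neuron columns with at least one non-zero
# input weight gives the number of neurons actually driven by the input.
active_ien = int(np.sum(np.any(weights_in != 0, axis=(0, 2))))  # 64

print(shape_based_count, active_ien)  # 324 64
```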
```
WARNING .../rockpool/devices/xylo/syns63300/xylo_samna.py:221: UserWarning: More than 128 input expansion neurons (IEN) detected. Only the first 128 will be used.
```

2) Same behaviour as in the previous point: the `weights_out` matrix (324, 7) only has 64 non-zero values, but it is considered in full.
```
ValueError: Error detected in spec:
Active output expansion neurons must be in [1,128]. Actual: 324
```

3) This error seems related to point 2): the full weights matrix is used to count the output expansion neurons, but shouldn't the count only be 64?
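The same shape-vs-content mismatch can be sketched for the output side (again hypothetical NumPy, not rockpool's code): only the 64 neurons of the last hidden layer project to the output, so only 64 rows of the (324, 7) output weight matrix are non-zero, yet the reported count is the full row dimension.

```python
import numpy as np

# Hypothetical (324, 7) output weight matrix where only the 64 neurons
# of the last hidden layer have non-zero output weights.
weights_out = np.zeros((324, 7))
weights_out[:64, :] = 1.0

# Counting every hidden neuron yields the reported 324...
all_rows = weights_out.shape[0]  # 324

# ...but only rows with at least one non-zero output weight correspond
# to neurons that actually project to the output layer.
active_oen = int(np.sum(np.any(weights_out != 0, axis=1)))  # 64

print(all_rows, active_oen)  # 324 64
```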
The same issues appear for the following network: