Huge drop of classification accuracy on Loihi board/Loihi simulation compared to SLAYER classification #360
Unanswered · gwgknudayanga asked this question in Q&A
Hi,
I trained the following network in SLAYER for NMNIST event-data classification. I use the last layer's neuron voltages as the logits, as in the training code. This gave about 96% test accuracy. I then exported the network to Lava and evaluated the same test set on the Loihi board and in the Loihi CPU simulation, where the test accuracy was only about 55%, a drop of roughly 40 percentage points compared to the SLAYER result.
When I checked the output layer's voltages after 32 time steps, they differ from what SLAYER produces.
Could you please help me resolve this issue?
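For completeness, this is roughly how I read the voltage logits on the SLAYER side (a simplified sketch, not my exact training code; it assumes the output block's CUBA neuron keeps its membrane potential in the persistent voltage_state buffer after the forward pass, and the spatial pooling is only illustrative):

def voltage_logits(net, spike_input):
    # Run all time steps; with persistent_state=True the output layer's
    # neuron keeps its membrane potential afterwards.
    net(spike_input)
    v = net[-1].neuron.voltage_state            # membrane voltage after the last step
    return v.flatten(start_dim=2).mean(dim=-1)  # pool remaining spatial dims -> (batch, num_output)

# prediction = voltage_logits(net_ladl, spike_tensor).argmax(dim=1)

The network definition and export code are below.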
import os

import torch
import torch.nn as nn
from torch.utils.data import DataLoader

import lava.lib.dl.slayer as slayer

# Hidden (conv) layers: no persistent state between samples.
neuron_conv_params = {
    'threshold'        : 1.0,
    'current_decay'    : 1.0,
    'voltage_decay'    : 0.03,
    'requires_grad'    : False,
    'persistent_state' : False,
}

# Output layer: high threshold so it rarely spikes; persistent state so the
# membrane voltage can be read out as the logits.
neuron_output_layer_params = {
    'threshold'        : 2048.0,
    'current_decay'    : 1.0,
    'voltage_decay'    : 0.03,
    'requires_grad'    : False,
    'persistent_state' : True,
}

neuron_conv_kwargs = {**neuron_conv_params,
                      'norm': slayer.neuron.norm.MeanOnlyBatchNorm}

quantizer = _quantize_8bit  # alternatives: netx_quantizer, _quantize_8bit
block_kwargs   = dict(weight_norm=False, delay_shift=False, pre_hook_fx=quantizer)
synapse_kwargs = dict(weight_norm=False, pre_hook_fx=quantizer)
net_ladl = nn.Sequential(
    slayer.block.cuba.Conv(neuron_conv_kwargs,          2, 16, kernel_size=3, stride=2, padding=1, **block_kwargs),
    slayer.block.cuba.Conv(neuron_conv_kwargs,         16, 32, kernel_size=3, stride=2, padding=1, **block_kwargs),
    slayer.block.cuba.Conv(neuron_conv_kwargs,         32, 64, kernel_size=3, stride=2, padding=1, **block_kwargs),
    slayer.block.cuba.Conv(neuron_conv_kwargs,         64, 16, kernel_size=3, stride=1, padding=1, **block_kwargs),
    slayer.block.cuba.Conv(neuron_output_layer_params, 16, num_output, kernel_size=1, stride=1, padding=0, **block_kwargs),
)
net_ladl = net_ladl.to(device)
# Dummy forward pass with the NMNIST input shape (N, C, H, W, T).
H, W = (34, 34)
N, C, T = 32, 2, 31
input = torch.rand(N, C, H, W, T)
net_ladl(input.to(device=device))
test_loader2 = DataLoader(dataset=testing_set, batch_size=1, shuffle=False)

with torch.no_grad():
    print("export hdf5")
    device_cpu = torch.device('cpu')
    net_ladl = net_ladl.to(device_cpu)
    output_path = os.path.join(
        "/media/atiye/Data/Udaya_Research_stuff/to_lava_code/nmnist/hdf5_net_outputs/",
        "nmnist_mean_only_final_voltage_state_2048.net")
    export_hdf5(net_ladl, output_path, skip_last_synapse_only_layer=False)
    print("finished exporting to", output_path)
Best regards,
Udayanga