Ring Buffer left shift preserves smallest weight #745
base: master
Conversation
I will try to have a look at the suggested changes
@@ -44,7 +44,7 @@ static inline input_t* input_type_get_input_value(
         input_t* value, input_type_pointer_t input_type, uint16_t num_receptors) {
     use(input_type);
     for (int i = 0; i < num_receptors; i++) {
-        value[i] = value[i] >> 10;
+        value[i] = value[i] >> 5;
This might be a better value; I guess it depends on what the user provides as weight values. The idea here is for the weight to be more precise when using conductance values. For reference, conductance weight is in micro-Siemens but current weight is in nano-Amps, so a scale of ~1000 has some justification. However, the effects of conductance are non-linear, since they depend upon the membrane voltage, so weights that give similar outputs tend to be relatively bigger than a simple division by 1000 would suggest. Perhaps this scaling should be dynamic depending on the expected sum of weights, in the same way as is done for the weight scaling elsewhere.
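As a rough illustration of the dynamic scaling idea, a shift could be derived from the expected maximum sum of weights rather than hard-coded. This is a hypothetical sketch, not code from the PR; the helper name and the assumption that the shift equals the number of integer bits needed for the expected sum are mine:

```python
import math

def suggested_left_shift(expected_max_weight_sum):
    """Hypothetical helper: pick a ring-buffer left shift large enough
    that the expected maximum summed weight fits without overflow,
    assuming a power-of-two fixed-point scaling.
    """
    if expected_max_weight_sum <= 0:
        return 0
    # Number of integer bits needed to hold the expected sum
    return max(0, math.ceil(math.log2(expected_max_weight_sum)))
```

For example, an expected sum of 1000 nA would suggest a shift of 10 (2 ** 10 = 1024), while conductance sums well below 1 uS would suggest a much smaller shift, keeping more fractional precision.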
    rb_ls = list(max_weight_powers)
    print("=" * 60)
    print("RB left shifts for {:20}".format(application_vertex.label),
          "=", rb_ls)
    print("-" * 60)
    return rb_ls
As mentioned, this printing should be removed. Instead it would make sense to have a report that tells the user what scaling was used and why (i.e. expected maximum weights and scaling values).
This should also possibly warn the user up-front when weight scaling cannot be achieved due to minimum weight representation issues. This would then explain the overflow messages that are likely to be received later.
Additionally, this code could detect and warn the user when it is impossible to represent both the minimum and maximum weight, suggesting the 16-bit weight is not enough to represent the dynamic range of values requested.
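A check along these lines could compare the dynamic range of the requested weights against what a shared fixed-point scale can hold. This is a sketch of the suggested warning, not existing code; the function name and the assumption that a 16-bit value spans a ratio of at most 2 ** 16 between the smallest non-zero and largest weight are mine:

```python
def check_weight_range(min_weight, max_weight, n_bits=16):
    """Hypothetical check: warn when a single power-of-two scale in
    n_bits cannot represent both the minimum non-zero weight and the
    maximum weight at once.
    """
    if min_weight <= 0:
        raise ValueError("min_weight must be the minimum non-zero weight")
    dynamic_range = max_weight / min_weight
    if dynamic_range > 2 ** n_bits:
        print("Warning: {}-bit weights cannot represent both {} and {}"
              " (ratio {:.0f} exceeds {})".format(
                  n_bits, min_weight, max_weight,
                  dynamic_range, 2 ** n_bits))
        return False
    return True
```

Run up-front, this would turn the later per-timestep overflow messages into a single explanation of why saturation or underflow is inevitable for the given weights.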
As mentioned elsewhere, we don't have the minimum weight here yet, so this might be needed if this is to report the range between the true minimum and the maximum, rather than the minimum of the maximums.
@@ -446,6 +446,7 @@ def _get_ring_buffer_to_input_left_shifts(
         weights_signed = False
         rate_stats = [RunningStats() for _ in range(n_synapse_types)]
         steps_per_second = 1000000.0 / machine_timestep
+        min_max_weight = numpy.ones(n_synapse_types) * 2 ** 32
Note that this stores the "minimum of the max weights" rather than the "minimum weight". This is because the connector code does not include this. This could be added, but it should be noted that:
- This would have to be the minimum absolute weight as the maximum is the maximum absolute weight
- This would have to account for the fact that the minimum might be 0, which is always representable! Thus it should be the minimum non-zero weight.
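The two points above can be captured in a small helper: take absolute values, then exclude zeros before taking the minimum. This is a sketch of what a connector-side method might look like, not code from the PR (the function name is mine):

```python
import numpy

def get_min_nonzero_abs_weight(weights):
    """Hypothetical helper: return the minimum non-zero absolute
    weight, or None if every weight is zero.

    Zero is always representable, so only non-zero magnitudes
    constrain the ring-buffer scaling.
    """
    abs_weights = numpy.abs(numpy.asarray(weights, dtype=float))
    non_zero = abs_weights[abs_weights > 0]
    if non_zero.size == 0:
        return None
    return float(numpy.min(non_zero))
```

With such a method on each connector, `_get_ring_buffer_to_input_left_shifts` could track the true minimum weight instead of the minimum of the maximums stored in `min_max_weight`.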
        else:
            return numpy.mean(numpy.abs(self.__weights))

    @overrides(AbstractConnector.get_weight_maximum)
    def get_weight_maximum(self, synapse_info):
        # pylint: disable=too-many-arguments
        if self.__weights is None:
            return numpy.amax(synapse_info.weights)
        return self._get_weight_maximum(self.__weights, len(self.__conn_list))
Line too long (flake8)
Use case: cerebellum model -- though it might affect other people without them realising.

Context:

Symptom (on master): weights for pf_pc are 0, while some other weights (not mentioned here) are way off, although not 0.

Outcome (on PR):

pf_pc -> 0.00001526 uS c.f. 0.00002000 uS (76.29%)

pf_pc weights are now representable and match more closely.

(Potential) downside: more weight saturations occur.
This PR needs more thought + removing of some debug print statements + probably reverting IF_COND_EXP default left shift back to 10 rather than 5.
This implementation was done with @rowleya next to me.