Network decay should be spread evenly over all the synapses that did not fire during the learning period, keeping the overall network weight constant.
This is an elegant way around the problem of network overload through positive reinforcement.
This way the network's strength still increases over time (more synapses end up over the threshold), while the average weight always stays constant. (Exponential weight decrease might be a bit tricky, though.)
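A rough sketch of the idea, purely illustrative: the `Synapse` class, its field names, and `reinforce_with_compensatory_decay` are assumptions for the example, not the project's actual API.

```python
from dataclasses import dataclass

@dataclass
class Synapse:
    weight: float
    fired: bool = False  # did this synapse fire during the learning period?

def reinforce_with_compensatory_decay(synapses, reward):
    fired = [s for s in synapses if s.fired]
    idle = [s for s in synapses if not s.fired]
    if not fired or not idle:
        return
    # Strengthen every synapse that fired.
    for s in fired:
        s.weight += reward
    # Remove exactly the same total amount, spread evenly over the idle synapses.
    total_added = reward * len(fired)
    decay_per_idle = total_added / len(idle)
    for s in idle:
        s.weight -= decay_per_idle
```

Because the amount added to the fired synapses equals the amount removed from the idle ones, the total (and hence the average) network weight is unchanged. An exponential decay, being proportional to each current weight, would not cancel out so neatly, which is why it is the tricky case noted above.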
Negative reinforcement is perhaps not handled well at the moment. At present it inhibits recently fired synapses, but it also introduces some randomness by increasing the weight of synapses that did not fire recently. If compensatory decay is used instead (as per point 1 above), that may take care of the need to allow alternate pathways... do the random increases actually help?
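As a hypothetical illustration (reusing the sketch above, not the current implementation), negative reinforcement could simply be a negative reward: the fired synapses are weakened, and exactly that weight is redistributed evenly over the idle ones, which boosts alternate pathways without any randomness.

```python
# Illustrative only: values and names are made up for the example.
synapses = [Synapse(0.6, fired=True), Synapse(0.4), Synapse(0.5)]
reinforce_with_compensatory_decay(synapses, reward=-0.1)
# fired synapse: 0.6 -> 0.5; idle synapses: 0.4 -> 0.45, 0.5 -> 0.55
# total weight stays at 1.5
```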
Negatively weighted (inhibitory) synapses are a bit of a conundrum. If the reinforcement is positive, one would assume they need to provide further inhibition, but at the moment they swing the other way.
SignalFireThreshold is another tricky one. When synapses decay, it makes sense for them to decay towards a value somewhere below the threshold, perhaps the average network weight, but what should that be? A calculated value derived from the fire threshold (e.g. half of it)?
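A minimal sketch of that last option, assuming a hypothetical `signal_fire_threshold` parameter and a resting value of half the threshold; each weight is nudged a small fraction of the way towards the resting value on every decay step.

```python
def decay_toward_resting(synapses, signal_fire_threshold, decay_rate=0.01):
    # Candidate "average network weight": half the fire threshold (an assumption).
    resting = signal_fire_threshold / 2
    for s in synapses:
        # Move each weight a small fraction of the way towards the resting value;
        # applied repeatedly this approaches the resting value exponentially.
        s.weight += (resting - s.weight) * decay_rate
```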