If I'm understanding correctly, the goal is to compute a likelihood to combine with a prior formed from the exponentially smoothed history of actions. The likelihood needed is specifically p(observation | stimulus side = right), so that we end up with a posterior p(stimulus side = right | observations). However, this code seems to compute p(observation < 0 | signed stimulus contrast strength). Are the two distributions equivalent? I would think not, and if they aren't interchangeable, why is p(observation < 0 | signed stimulus contrast strength) the correct likelihood?
I would think that if the model assumes the mouse knows the true signed stimulus contrast strengths and their variances, then the mouse should compute \sum_{c} p(o | c) p(c | stimulus side = right), where c ranges over the signed stimulus contrast strengths.
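To make the comparison concrete, here is a minimal sketch of the marginalization I have in mind. Everything here is hypothetical: the contrast set, the uniform p(c | side = right), and the Gaussian sensory-noise model with std `sigma` are my assumptions, not values taken from the repo.

```python
import math

# Assumed right-side signed contrast strengths and their (uniform) probability
# given stimulus side = right; the real values live in the task/model code.
contrasts = [0.0625, 0.125, 0.25, 1.0]
p_contrast_given_right = [0.25] * len(contrasts)
sigma = 0.2  # assumed sensory noise standard deviation

def gauss_pdf(x, mu, sd):
    """Density of a Gaussian observation o ~ N(mu, sd^2)."""
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def lik_right(o):
    """p(o | side = right) = sum_c p(o | c) p(c | side = right)."""
    return sum(gauss_pdf(o, c, sigma) * w
               for c, w in zip(contrasts, p_contrast_given_right))
```

Note this returns a density over the observation o, whereas p(observation < 0 | c) is a CDF value (the probability of a sign), so the two are different mathematical objects, which is the heart of my question.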
Why don't the minimum and maximum truncations introduce truncation errors?
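For concreteness, here is my reading of the truncation, which is an assumption on my part: if the values are clamped to an interval like [eps, 1 - eps] (e.g. to keep logs finite), then the error introduced at any single point is bounded by eps, though it is still a bias rather than an exact computation.

```python
# Hypothetical illustration: clamping a probability to [eps, 1 - eps].
# 'eps' and the clamping itself are my assumptions about what the
# min/max truncation in the code is doing.
eps = 1e-4

def clip_prob(p, lo=eps, hi=1 - eps):
    """Clamp p into [lo, hi]; changes p by at most eps at the boundaries."""
    return max(lo, min(hi, p))

p_raw = 0.999999
p_clipped = clip_prob(p_raw)
# The pointwise error is bounded by eps, but it is nonzero,
# which is why I would still call it a truncation error.
```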
I have two questions about this line of code:
https://github.com/csmfindling/behavior_models/blob/master/models/expSmoothing_prevAction.py#L52