Clarification of Computing Likelihood? #4

Open
RylanSchaeffer opened this issue May 9, 2021 · 0 comments
I have two questions about this line of code:

https://github.com/csmfindling/behavior_models/blob/master/models/expSmoothing_prevAction.py#L52

  1. If I'm understanding correctly, the goal is to compute a likelihood to combine with the prior formed from the exponentially smoothed history of actions. The likelihood needed is specifically p(observation | stimulus side = right), so that we end up with the posterior p(stimulus side = right | observations). However, this code seems to compute p(observation < 0 | signed stimulus contrast strength). Are the two distributions equivalent? I would think not, and if they aren't interchangeable, why is p(observation < 0 | signed stimulus contrast strength) the correct likelihood?

I would think that if the model assumes the mouse knows the true signed stimulus contrast strengths and their variances, then the mouse should compute

  p(o | stimulus side = right) = \sum_{c} p(o | c) \, p(c | stimulus side = right),

where c ranges over the signed stimulus contrast strengths (see the first sketch below).

  2. Why don't the minimum and maximum truncations introduce truncation errors?
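To make question 1 concrete, here is a minimal sketch of the marginalisation I have in mind. Everything in it is hypothetical: the contrast set `contrasts_right`, the prior `p_contrast_given_right`, and the noise level `sigma` are placeholder assumptions, not values from this repository; only the structure of the computation is the point.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical signed contrast strengths the mouse might assume when the
# stimulus is on the right, and their (assumed uniform) probabilities.
contrasts_right = np.array([0.0625, 0.125, 0.25, 1.0])
p_contrast_given_right = np.full(len(contrasts_right), 1 / len(contrasts_right))

sigma = 0.3  # assumed standard deviation of the sensory noise


def likelihood(observation, contrasts):
    """p(o | side) = sum_c p(o | c) p(c | side), marginalising over contrasts."""
    p_o_given_c = norm.pdf(observation, loc=contrasts, scale=sigma)
    return float(np.sum(p_o_given_c * p_contrast_given_right))


def posterior_right(observation, prior_right):
    """Combine the marginalised likelihood with the action-history prior."""
    lik_right = likelihood(observation, contrasts_right)
    lik_left = likelihood(observation, -contrasts_right)  # assumed mirror symmetry
    evidence = lik_right * prior_right + lik_left * (1 - prior_right)
    return lik_right * prior_right / evidence
```

For instance, `posterior_right(0.2, 0.6)` would combine a noisy observation of 0.2 with a 0.6 action-history prior on "right". This is the computation I would expect, as opposed to evaluating p(observation < 0 | signed stimulus contrast strength) directly.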
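For question 2, here is a hedged illustration of the concern. I am assuming the minimum/maximum on that line clamp a probability to [eps, 1 - eps] for numerical stability (a common trick); if so, values pushed against the bounds are distorted in the log-likelihood, which is the error I'm asking about:

```python
import numpy as np

eps = 1e-4  # assumed clamp bound, not necessarily the repository's value
p = np.array([1e-7, 0.5, 1 - 1e-9])
p_clamped = np.clip(p, eps, 1 - eps)

# Per-trial log-likelihood distortion introduced by the clamp: zero for
# probabilities well inside (eps, 1 - eps), nonzero at the bounds.
print(np.log(p_clamped) - np.log(p))  # -> [ 6.9078  0.     -0.0001]
```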