
Getting voxelwise maps for logL (and other attributes of LikelihoodModelResults) #2417

Open
jdkent opened this issue Apr 8, 2020 · 7 comments
Labels
Effort: medium The issue is likely to require a decent amount of work (in between a few hours and a couple days). GLM Issues/PRs related to the nilearn.glm module. Impact: medium Solving this issue will have a decent impact on the project. Priority: low The task is not urgent and can be delayed.

Comments

@jdkent
Contributor

jdkent commented Apr 8, 2020

To get residual maps (nilearn/nistats#410), we added a method called _get_voxelwise_model_attribute, which has broader uses than returning residuals: it could potentially return the log-likelihood and other attributes of RegressionResults and LikelihoodModelResults.

However, there was some difficulty getting a voxelwise map of the logL results; the discussion/investigation is outlined here, and a potential solution is outlined here.

Let me know if the solution looks reasonable, and I can open a pull request.
We could also outline which attributes we would want to make explicit methods for in the FirstLevelModel class, in addition to residuals, predicted, and r_square. For example:

  • logL
  • SSE
  • MSE
  • ???
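For concreteness, here is a rough sketch of what a per-voxel logL would contain under a plain OLS/Gaussian noise model — numpy only, with a hypothetical function name; this is not nilearn's actual implementation, which works through the regression results objects:

```python
import numpy as np

def voxelwise_logL(Y, X):
    """Per-voxel Gaussian log-likelihood under OLS (illustrative sketch).

    Y : (n_timepoints, n_voxels) data array.
    X : (n_timepoints, n_regressors) design matrix.
    Returns a 1d array of length n_voxels.
    """
    n = Y.shape[0]
    # Fit all voxels at once; lstsq broadcasts over the columns of Y.
    beta = np.linalg.lstsq(X, Y, rcond=None)[0]
    resid = Y - X @ beta
    sse = (resid ** 2).sum(axis=0)        # per-voxel sum of squared errors
    sigma2_mle = sse / n                  # ML estimate of the noise variance
    # Closed form of the maximized Gaussian log-likelihood.
    return -0.5 * n * (np.log(2 * np.pi) + np.log(sigma2_mle) + 1)
```

Each attribute in the list above (logL, SSE, MSE) would similarly be one value per voxel, hence a natural candidate for a voxelwise map.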
@adelavega
Contributor

I'd also like to see a degrees-of-freedom map.

@jdkent
Contributor Author

jdkent commented Apr 8, 2020

Do degrees of freedom vary by voxel? My understanding is that every voxel has the same number of time points and that the same model is applied to every voxel. (Here is the calculation.) I could be looking in the wrong place...

Would a single-value attribute representing degrees of freedom, available from the FirstLevelModel class, work if degrees of freedom do not vary between voxels?
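For reference, a minimal numpy sketch of that calculation (the function name is made up for illustration): residual degrees of freedom depend only on the design matrix, which every voxel shares, so the result is a single scalar rather than a map:

```python
import numpy as np

def residual_dof(design_matrix):
    """Residual degrees of freedom for OLS: n_timepoints - rank(X).

    Every voxel shares the same design matrix, so this is one scalar
    for the whole volume, not a voxelwise quantity.
    """
    n_timepoints = design_matrix.shape[0]
    return n_timepoints - np.linalg.matrix_rank(design_matrix)
```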

@bthirion
Member

bthirion commented Apr 8, 2020

Indeed, degrees of freedom are constant across voxels.

@bthirion
Member

bthirion commented Apr 8, 2020

@jdkent
Yes, please open a PR.

@adelavega
Contributor

adelavega commented Apr 8, 2020

Great. In some other software it can vary by voxel if it's estimated, so no need for that, then.

@tyarkoni

tyarkoni commented Apr 8, 2020

@bthirion, is there a principled reason to store voxels under separate AR1 models, as opposed to the other way around? It seems more intuitive to me to store all the attributes (R^2, logL, etc.) as 1d arrays of length n_voxels than to have variable length arrays under each of the AR1 bins. To ensure the mapping is preserved, one could add an ar_model attribute that provides each voxel's index into the list of ARModels. That seems like a cleaner API and would eliminate the _get_voxelwise_model_attribute step. But I may be missing something about how the computations are performed internally that would make this approach less practical.
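As a toy sketch of the proposed layout (all names hypothetical, not nilearn's internals): variable-length per-bin arrays get collapsed into one flat array of length n_voxels, plus an ar_model index recording which AR(1) bin each voxel belongs to, so the mapping is preserved:

```python
import numpy as np

def flatten_ar_bins(bin_voxel_indices, bin_values, n_voxels):
    """Collapse per-bin, variable-length attribute arrays into a single
    flat per-voxel array, plus an index mapping voxels to AR bins.

    bin_voxel_indices : list of int arrays, the voxels in each AR(1) bin.
    bin_values : list of float arrays, the attribute values (e.g. logL)
        for those voxels, in matching order.
    Returns (values, ar_model), both of length n_voxels.
    """
    values = np.empty(n_voxels)
    ar_model = np.empty(n_voxels, dtype=int)
    for bin_id, (idx, vals) in enumerate(zip(bin_voxel_indices, bin_values)):
        values[idx] = vals      # scatter this bin's values into place
        ar_model[idx] = bin_id  # remember each voxel's AR-model index
    return values, ar_model
```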

@bthirion
Member

bthirion commented Apr 9, 2020

Thanks for your suggestion.

  • First, by default I really don't want to store R^2, logL, and all these quantities, because in all the use cases I know of, people don't need them, and they consume memory, which is ultimately an issue. We should compute and store them if needed, but not as default behavior.
  • From what I understand, you would like to store all the statistics in arrays of size (feature, dimension), which is indeed clearer to introspect.
    This is, however, a bit clumsy when computing contrasts, because we still rely on the underlying regression objects.
    See e.g.
    https://github.com/nilearn/nistats/blob/master/nistats/contrasts.py l.73
    So I think that this may be changed (and probably should be), but it requires some refactoring.
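The "compute only when requested" point could look like this in miniature — a hypothetical sketch, not nilearn code: nothing is stored at fit time, and a property derives and caches the voxelwise attribute on first access:

```python
import numpy as np

class LazyGLMResults:
    """Sketch: derive voxelwise attributes on demand and cache them,
    storing nothing by default (hypothetical, not nilearn's API)."""

    def __init__(self, Y, X):
        # Y: (n_timepoints, n_voxels) data; X: (n_timepoints, n_regressors) design.
        self._Y, self._X = Y, X
        self._r_square = None

    @property
    def r_square(self):
        if self._r_square is None:  # computed lazily, only on first access
            beta = np.linalg.lstsq(self._X, self._Y, rcond=None)[0]
            resid = self._Y - self._X @ beta
            ss_res = (resid ** 2).sum(axis=0)
            ss_tot = ((self._Y - self._Y.mean(axis=0)) ** 2).sum(axis=0)
            self._r_square = 1.0 - ss_res / ss_tot
        return self._r_square
```

Users who never touch r_square pay no memory cost, which matches the "not as default behavior" requirement above.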

@NicolasGensollen NicolasGensollen added Effort: medium The issue is likely to require a decent amount of work (in between a few hours and a couple days). Enhancement for feature requests GLM Issues/PRs related to the nilearn.glm module. Impact: medium Solving this issue will have a decent impact on the project. Priority: low The task is not urgent and can be delayed. labels Oct 25, 2021
@Remi-Gau Remi-Gau added this to GLM Jan 9, 2024
@github-project-automation github-project-automation bot moved this to Backlog in GLM Jan 9, 2024
@Remi-Gau Remi-Gau removed the Enhancement for feature requests label Oct 3, 2024