
ENH: Add condition SNR estimation #273

Open
larsoner opened this issue Dec 2, 2019 · 7 comments

larsoner commented Dec 2, 2019

One way to get a handle on single-subject SNR is to compute the sliding estimator as a function of time between two conditions, as in:

https://mne.tools/dev/auto_tutorials/machine-learning/plot_sensors_decoding.html#temporal-decoding

I propose adding a new report argument, time_decoding, that takes a dict. For funloc this could be, for example:

report:
  time_decoding: {analysis: AV, contrast_pair: ['Auditory/Standard', 'Visual/Standard']}

This would add a single plot to the report which is the time-resolved decoding for the first condition versus the second using basically the same code as in the MNE example. Eventually (or from the start) this could be made into a list of dicts.
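As a rough sketch of what the time-resolved decoding behind such a plot could look like, here is a minimal standalone version using scikit-learn directly on synthetic data (rather than MNE's `SlidingEstimator` and real epochs); the data shapes and condition labels are illustrative only:

```python
# Hypothetical sketch of time-resolved decoding between two conditions.
# Fits one cross-validated classifier per time point, as in the linked
# MNE temporal-decoding example, but with made-up data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for epochs data: (n_epochs, n_channels, n_times).
n_epochs, n_channels, n_times = 60, 8, 20
X = rng.standard_normal((n_epochs, n_channels, n_times))
y = rng.integers(0, 2, n_epochs)  # 0 = first condition, 1 = second condition
# Inject a class difference in the second half of the epoch so the
# decoding score rises there (this is where the "SNR" becomes visible).
X[y == 1, :, n_times // 2:] += 1.0

clf = make_pipeline(StandardScaler(), LogisticRegression())
# One cross-validated ROC-AUC score per time point; chance level is 0.5.
scores = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=5, scoring="roc_auc").mean()
    for t in range(n_times)
])
print(scores.shape)  # (20,)
```

Plotting `scores` against the epoch time axis would give the single time-resolved decoding curve proposed above.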

@drammock @ktavabi would you find this useful? Happy with the proposed YAML API?

This would be a good first mnefun PR for @NeuroLaunch in the coming weeks; feel free to comment as well.

EDIT: Edited to incorporate @drammock's suggestions


drammock commented Dec 2, 2019

Am I correct that what you've called analysis will only be used to label the plot? If so, I'd call that key title instead. I think the other key would more naturally be called conditions, although if anyone can think of an alternate name that makes it even more clear that the values passed should be drawn from the event dict (something like event_dict_conditions_to_compare?) that might be even better. I originally thought contrast but that is probably too vague.


drammock commented Dec 2, 2019

I think it would be pretty useful, but here are a couple questions:

  1. could you pass multiple contrast pairs in the YAML and get multiple plots?
  2. Is the run time of the example you linked to (~25 seconds) a realistic estimate of how much time this will add for typical data?
  3. WDYT about contrast_pair as the name of the second key?


larsoner commented Dec 2, 2019

Am I correct that what you've called analysis will only be used to label the plot?

It's also used to determine which `-ave.fif` file to load.

could you pass multiple contrast pairs in the YAML and get multiple plots?

Yes; after the basic case (a single dict) is done, iterating over a list should be an easy next step.
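A small normalization helper is one way to support both forms; this is just a sketch, and the function name and error handling are hypothetical:

```python
# Hypothetical helper: accept either a single dict or a list of dicts for
# the proposed `time_decoding` report entry, yielding one plot spec per dict.
def normalize_time_decoding(value):
    """Return a list of time-decoding specs from dict-or-list YAML input."""
    if value is None:
        return []
    if isinstance(value, dict):
        value = [value]
    for spec in value:
        missing = {'analysis', 'contrast_pair'} - set(spec)
        if missing:
            raise ValueError(
                f'time_decoding entry missing keys: {sorted(missing)}')
    return value

single = {'analysis': 'AV',
          'contrast_pair': ['Auditory/Standard', 'Visual/Standard']}
print(normalize_time_decoding(single))              # one-element list
print(len(normalize_time_decoding([single, single])))
```

The report code would then just loop over the returned list and add one plot per spec.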

Is the run time of the example you linked to (~25 seconds) a realistic estimate of how much time this will add for typical data?

That example also does temporal generalization to produce the bottom image plot, which is slower. Timing just the time-resolved decoding from that example on my machine, I get about 1.5 sec; the temporal-generalization computation (which we wouldn't do here) takes about 8 sec.

WDYT about contrast_pair as the name of the second key?

Nice, that makes it clear it needs to be a pair; I've changed the top comment.


drammock commented Dec 2, 2019

Am I correct that what you've called analysis will only be used to label the plot?

It's also used to know which -ave.fif file to load

OK, maybe leave as-is then.


larsoner commented Apr 2, 2020

@NeuroLaunch any interest in trying this one next? :)


NeuroLaunch commented Apr 9, 2020

@larsoner this slipped past my email alerts from GitHub, so I'm just seeing your request now. I'm definitely up for it!


larsoner commented Apr 9, 2020

Okay, feel free to give it a shot. If the API or implementation is not clear in your mind (after some thinking on it), let me know and we can hash it out here or on Slack, probably with @drammock, because he has a good eye for API design.
