
Commit

Merge pull request #231 from ejolly/master
Big Design Matrix overhaul to make working with multiple runs much easier and more intuitive.

Former-commit-id: 8d5ab1a
ljchang authored May 9, 2018
2 parents 3d284fa + 774a3a6 commit d2a3833
Showing 177 changed files with 1,020 additions and 544 deletions.
2 changes: 1 addition & 1 deletion .gitignore
@@ -11,7 +11,7 @@ dist/
.cache/
htmlcov
.pytest_cache/*

dev/
# Logs and databases #
######################
*.log
78 changes: 44 additions & 34 deletions docs/auto_examples/01_DataOperations/plot_design_matrix.ipynb

Large diffs are not rendered by default.

201 changes: 146 additions & 55 deletions docs/auto_examples/01_DataOperations/plot_design_matrix.py

Large diffs are not rendered by default.

@@ -1 +1 @@
674c1c7e55385d8ebc2e109812f5c3b8
d25dc9c6791965dca7df05760d6549bf
284 changes: 181 additions & 103 deletions docs/auto_examples/01_DataOperations/plot_design_matrix.rst

Large diffs are not rendered by default.

Binary file not shown.
Binary file modified docs/auto_examples/01_DataOperations/plot_mask_codeobj.pickle
Binary file not shown.
Binary file not shown.
Binary file modified docs/auto_examples/02_Analysis/plot_decomposition_codeobj.pickle
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file modified docs/auto_examples/auto_examples_jupyter.zip
Binary file not shown.
Binary file modified docs/auto_examples/auto_examples_python.zip
Binary file not shown.
4 changes: 2 additions & 2 deletions docs/auto_examples/index.rst
@@ -235,13 +235,13 @@ Neuroimaging Analysis Examples
.. container:: sphx-glr-download
:download:`Download all examples in Python source code: auto_examples_python.zip <//Users/lukechang/Github/nltools/docs/auto_examples/auto_examples_python.zip>`
:download:`Download all examples in Python source code: auto_examples_python.zip <//Users/Esh/Documents/Python/Cosan/nltools/docs/auto_examples/auto_examples_python.zip>`
.. container:: sphx-glr-download
:download:`Download all examples in Jupyter notebooks: auto_examples_jupyter.zip <//Users/lukechang/Github/nltools/docs/auto_examples/auto_examples_jupyter.zip>`
:download:`Download all examples in Jupyter notebooks: auto_examples_jupyter.zip <//Users/Esh/Documents/Python/Cosan/nltools/docs/auto_examples/auto_examples_jupyter.zip>`
.. only:: html
Empty file.
201 changes: 146 additions & 55 deletions examples/01_DataOperations/plot_design_matrix.py

Large diffs are not rendered by default.

47 changes: 33 additions & 14 deletions nltools/data/brain_data.py
@@ -327,6 +327,19 @@ def write(self, file_name=None):

self.to_nifti().to_filename(file_name)

def scale(self, scale_val=100.):
""" Scale all values such that they are on the range 0 - scale_val, via grand-mean scaling. This is NOT global scaling/intensity normalization. It is useful for putting data on a common scale (e.g. across multiple runs or participants); with the default value of 100, the result can be interpreted as something akin to (but not exactly) "percent signal change." This is consistent with the default behavior in AFNI and SPM. Change the value to 10000 for consistency with FSL.
Args:
scale_val (int/float): what value to send the grand-mean to; default 100
"""

out = deepcopy(self)
out.data = out.data / out.data.mean() * scale_val

return out
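The grand-mean scaling that `scale` performs can be sketched in plain NumPy; the array below is hypothetical stand-in data, not output from the library:

```python
import numpy as np

# Hypothetical data: 4 "images" x 6 "voxels"
data = np.random.default_rng(0).uniform(50.0, 150.0, size=(4, 6))

scale_val = 100.0
# Grand-mean scaling: divide by the single mean over ALL values,
# then multiply so that the grand mean lands exactly on scale_val
scaled = data / data.mean() * scale_val
```

After scaling, `scaled.mean()` equals `scale_val` (up to floating-point error), which is what makes values comparable across runs or participants.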

def plot(self, limit=5, anatomical=None, **kwargs):
""" Create a quick plot of self.data. Will plot each image separately
@@ -399,18 +412,23 @@ def regress(self,mode='ols',**kwargs):
b,t,p,df,res = regress(self.X,self.data,mode=mode,**kwargs)
sigma = np.std(res,axis=0,ddof=self.X.shape[1])

b_out = deepcopy(self)
b_out.data = b
t_out = deepcopy(self)
# Avoid copying all of self's data multiple times; instead start from an empty instance, copy over only the attributes we need, and use it as a template for the other outputs
b_out = self.__class__()
b_out.mask = deepcopy(self.mask)
b_out.nifti_masker = deepcopy(self.nifti_masker)

# Use this as template for other outputs before setting data
t_out = b_out.copy()
t_out.data = t
p_out = deepcopy(self)
p_out = b_out.copy()
p_out.data = p
df_out = deepcopy(self)
df_out = b_out.copy()
df_out.data = df
sigma_out = deepcopy(self)
sigma_out = b_out.copy()
sigma_out.data = sigma
res_out = deepcopy(self)
res_out = b_out.copy()
res_out.data = res
b_out.data = b

return {'beta': b_out, 't': t_out, 'p': p_out, 'df': df_out,
'sigma': sigma_out, 'residual': res_out}
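The template pattern introduced in this hunk is worth spelling out: copying `self` six times with `deepcopy` would duplicate the large voxel array on every copy. A minimal sketch of the idea, using a hypothetical toy class rather than `Brain_Data` itself:

```python
from copy import deepcopy

class Image:
    """Toy stand-in for Brain_Data: a heavy `data` payload plus light metadata."""
    def __init__(self):
        self.mask = None
        self.nifti_masker = None
        self.data = None

    def copy(self):
        return deepcopy(self)

src = Image()
src.mask = {"shape": (91, 109, 91)}
src.data = list(range(100_000))  # expensive payload we do NOT want to re-copy six times

# Template pattern: start from an empty instance and copy only the metadata once...
template = Image()
template.mask = deepcopy(src.mask)
template.nifti_masker = deepcopy(src.nifti_masker)

# ...then stamp out each output from the cheap template
b_out = template.copy()
b_out.data = [0.1, 0.2]
t_out = template.copy()  # copies metadata only, never the large payload
t_out.data = [2.5, 3.1]
```

Each `template.copy()` only duplicates the small metadata, so the cost no longer scales with the number of returned outputs.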
@@ -497,11 +515,12 @@ def ttest(self, threshold_dict=None):

return out

def append(self, data):
def append(self, data, **kwargs):
""" Append data to Brain_Data instance
Args:
data: Brain_Data instance to append
kwargs: optional inputs to Design_Matrix append
Returns:
out: new appended Brain_Data instance
@@ -532,7 +551,7 @@ def append(self, data):
out.Y = self.Y.append(data.Y)
if self.X.size:
if isinstance(self.X, pd.DataFrame):
out.X = self.X.append(data.X)
out.X = self.X.append(data.X,**kwargs)
else:
out.X = np.vstack([self.X, data.X])
return out
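The `append` change forwards `**kwargs` only along the DataFrame branch; the NumPy branch still stacks rows. A sketch of that dispatch, with a hypothetical helper name (`append_X` is not part of nltools):

```python
import numpy as np

def append_X(X_self, X_other, **kwargs):
    """Sketch of the dispatch in Brain_Data.append: forward kwargs only
    when X has its own append method (e.g. a Design_Matrix / DataFrame);
    plain arrays are simply stacked row-wise."""
    if hasattr(X_self, "append"):
        return X_self.append(X_other, **kwargs)
    return np.vstack([X_self, X_other])

a = np.ones((2, 3))
b = np.zeros((2, 3))
out = append_X(a, b)  # ndarray has no .append, so this takes the vstack branch
```

In the real method, the kwargs are whatever keyword arguments `Design_Matrix.append` accepts, which is what lets multi-run design matrices be combined without leaving `Brain_Data`.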
@@ -1155,11 +1174,11 @@ def r_to_z(self):
out.data = fisher_r_to_z(out.data)
return out

def filter(self,sampling_rate=None, high_pass=None,low_pass=None,**kwargs):
def filter(self,sampling_freq=None, high_pass=None,low_pass=None,**kwargs):
''' Apply a 5th-order Butterworth filter to data. Wraps nilearn functionality. Does not default to detrending and standardizing like the nilearn implementation, but this can be overridden using kwargs.
Args:
sampling_rate: sampling rate in seconds (i.e. TR)
sampling_freq: sampling freq in hertz (i.e. 1 / TR)
high_pass: high pass cutoff frequency
low_pass: low pass cutoff frequency
kwargs: other keyword arguments to nilearn.signal.clean
@@ -1168,18 +1187,18 @@ def filter(self,sampling_rate=None, high_pass=None,low_pass=None,**kwargs):
Brain_Data: Filtered Brain_Data instance
'''

if sampling_rate is None:
if sampling_freq is None:
raise ValueError("Need to provide sampling frequency (1 / TR)!")
if high_pass is None and low_pass is None:
raise ValueError("high_pass and/or low_pass cutoff must be "
"provided!")
if sampling_rate is None:
if sampling_freq is None:
raise ValueError("Need to provide TR!")

standardize = kwargs.get('standardize',False)
detrend = kwargs.get('detrend',False)
out = self.copy()
out.data = clean(out.data,t_r=sampling_rate,detrend=detrend,standardize=standardize,high_pass=high_pass,low_pass=low_pass,**kwargs)
out.data = clean(out.data,t_r= 1. / sampling_freq,detrend=detrend,standardize=standardize,high_pass=high_pass,low_pass=low_pass,**kwargs)
return out
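The rename from `sampling_rate` (TR in seconds) to `sampling_freq` (hertz) matters because `nilearn.signal.clean` expects `t_r` in seconds, hence the `1. / sampling_freq` conversion above. A minimal sketch of the same kind of filtering using SciPy directly (hypothetical frequencies; not the library's own code path):

```python
import numpy as np
from scipy import signal

tr = 2.0                      # hypothetical repetition time in seconds
sampling_freq = 1.0 / tr      # 0.5 Hz -- what filter() now expects
nyquist = sampling_freq / 2.0

# 5th-order Butterworth low-pass at 0.1 Hz, analogous to the filtering
# nilearn.signal.clean performs internally
b, a = signal.butter(5, 0.1 / nyquist, btype="low")

t = np.arange(0, 200, tr)                 # 100 samples over 200 s
slow = np.sin(2 * np.pi * 0.02 * t)       # component to keep (below cutoff)
fast = np.sin(2 * np.pi * 0.2 * t)        # component to remove (above cutoff)
filtered = signal.filtfilt(b, a, slow + fast)  # zero-phase filtering
```

After filtering, the output closely tracks the slow component, since the fast one is strongly attenuated by the low-pass.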

def dtype(self):