codespell: config, workflow, typo fixes #147

Open: wants to merge 5 commits into base: master
6 changes: 6 additions & 0 deletions .codespellrc
@@ -0,0 +1,6 @@
[codespell]
skip = .git,*.pdf,*.svg,versioneer.py,*.psyexp
# TE -- echo time
# whos -- some command in debug workflow
# Sepulcre -- name
ignore-words-list = te,whos,sepulcre
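For reference, the `skip` globs and `ignore-words-list` above behave roughly like the following Python sketch (an approximation using `fnmatch`; codespell itself also walks directories and applies additional matching rules):

```python
from fnmatch import fnmatch

# globs and words mirrored from the .codespellrc hunk above
SKIP = [".git", "*.pdf", "*.svg", "versioneer.py", "*.psyexp"]
IGNORE_WORDS = {"te", "whos", "sepulcre"}

def is_skipped(name):
    """Return True if a file or directory name matches any skip glob."""
    return any(fnmatch(name, pattern) for pattern in SKIP)

def is_ignored(word):
    """Return True if a flagged word is on the ignore list (case-insensitive)."""
    return word.lower() in IGNORE_WORDS
```

Running `codespell` from the repository root should pick this config up automatically, since codespell looks for a `.codespellrc` file in the working directory.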
19 changes: 19 additions & 0 deletions .github/workflows/codespell.yml
@@ -0,0 +1,19 @@
---
name: Codespell

on:
push:
branches: [master]
pull_request:
branches: [master]

jobs:
codespell:
name: Check for spelling errors
runs-on: ubuntu-latest

steps:
- name: Checkout
uses: actions/checkout@v3
- name: Codespell
uses: codespell-project/actions-codespell@v1
2 changes: 1 addition & 1 deletion dataladhandbook_support/_version.py
@@ -268,7 +268,7 @@ def git_pieces_from_vcs(tag_prefix, root, verbose, run_command=run_command):
# TAG-NUM-gHEX
mo = re.search(r'^(.+)-(\d+)-g([0-9a-f]+)$', git_describe)
if not mo:
- # unparseable. Maybe git-describe is misbehaving?
+ # unparsable. Maybe git-describe is misbehaving?
pieces["error"] = ("unable to parse git-describe output: '%s'"
% describe_out)
return pieces
2 changes: 1 addition & 1 deletion dataladhandbook_support/directives.py
@@ -51,7 +51,7 @@ class FindOutMore(BaseAdmonition):
node_class = nodes.admonition
# empty is no allowed
has_content = True
- # needs at least a one word titel
+ # needs at least a one word title
required_arguments = 1

def run(self):
2 changes: 1 addition & 1 deletion docs/content_pages/01-04-faq.rst
@@ -184,7 +184,7 @@ PNI's `reference protocols <https://pni-facilities.princeton.edu/index.php/Refer

I try to avoid in-plane acceleration for fMRI, and instead opt for multi band acceleration. In-plane acceleration is more susceptible to movement (according to practicalMRI blog).

- GRAPPA: For skyra, I wanted to avoid SMS for reasons above. I also was having a subject talk in the scanner, and worried that SMS was more sensitive to motion than inplane acceleration (see pratical MRI blog post on this - https://practicalfmri.blogspot.com/2012/03/grappa-and-multi-band-imaging-and.html). To get whole-brain coverage with even a large voxel/long TR (3mm voxel, 2 sec TR) you have to use inplane acceleration = 2. And this still results in quite a small slab.
+ GRAPPA: For skyra, I wanted to avoid SMS for reasons above. I also was having a subject talk in the scanner, and worried that SMS was more sensitive to motion than inplane acceleration (see practical MRI blog post on this - https://practicalfmri.blogspot.com/2012/03/grappa-and-multi-band-imaging-and.html). To get whole-brain coverage with even a large voxel/long TR (3mm voxel, 2 sec TR) you have to use inplane acceleration = 2. And this still results in quite a small slab.

.. findoutmore:: "Do you use multiband acceleration (e.g., simultaneous multi-slice; SMS)? What acceleration factor do you use? Why do you or why do you not use multiband? Are there tradeoffs you are aware of?"

4 changes: 2 additions & 2 deletions docs/content_pages/01-05-overview.rst
@@ -16,11 +16,11 @@ Below, you will find a flowchart we have created to guide you through your own f
.. <area shape="rect" coords="68,270,143,319" alt="Design Experiment" href="01-05-designExp.html">
.. <area shape="rect" coords="150,270,256,291" alt="Best Practices FAQ" href="01-04-faq.html">
.. <area shape="rect" coords="150,298,256,320" alt="Glossary of Terms" href="glossary.html">
- .. <area shape="rect" coords="340,273,420,310" alt="Aquisition" href="02-01-reproin.html">
+ .. <area shape="rect" coords="340,273,420,310" alt="Acquisition" href="02-01-reproin.html">
.. <area shape="rect" coords="436,273,560,310" alt="ReproIn" href="02-01-reproin.html#id1">
.. <area shape="rect" coords="67,440,134,475" alt="PNI Pre-Scan Checklist" href="02-02-prescan.html">
.. <area shape="rect" coords="215,440,278,475" alt="Scanning Checklist" href="02-03-forms.html">
.. <area shape="rect" coords="363,454,473,478" alt="Best Practices FAQ" href="01-04-faq.html">
.. </map>`

- .. :raw-html:`<img src="../_static/Finalized_Timeline.png" width="675" height="1844" alt="Overview of conducting fMRI research" usemap="#imagemap"> <map name="imagemap"> <area shape="rect" coords="65,248,135,297" alt="Design Experiment" href="02-04-designExp.html"> <area shape="rect" coords="340,273,420,310" alt="Aquisition" href="02-05-reproin.html"> </map>`
+ .. :raw-html:`<img src="../_static/Finalized_Timeline.png" width="675" height="1844" alt="Overview of conducting fMRI research" usemap="#imagemap"> <map name="imagemap"> <area shape="rect" coords="65,248,135,297" alt="Design Experiment" href="02-04-designExp.html"> <area shape="rect" coords="340,273,420,310" alt="Acquisition" href="02-05-reproin.html"> </map>`
4 changes: 2 additions & 2 deletions docs/content_pages/03-02-converting.rst
Original file line number Diff line number Diff line change
Expand Up @@ -164,7 +164,7 @@ SessionID (second input) should match how your runs were named on the scanner (e
.. TIP::
If you need to, run :blue:`step1_preproc.sh` line by line to check that the correct paths will go into :blue:`run_heudiconv.py`. If there is a problem with your paths, check your :blue:`globals.sh` file.

- We recommended running :blue:`step1_preproc.sh` in a tmux window so you don’t run into issues with losing connection to the server, etc. After ssh-ing into the server, create a new tmux window OR attach to an exisiting tmux window. After creating a new window, you can attach to that specific window/session in the future. In other words, you don't have to create a new window every time you run :blue:`step1_preproc.sh`.
+ We recommended running :blue:`step1_preproc.sh` in a tmux window so you don’t run into issues with losing connection to the server, etc. After ssh-ing into the server, create a new tmux window OR attach to an existing tmux window. After creating a new window, you can attach to that specific window/session in the future. In other words, you don't have to create a new window every time you run :blue:`step1_preproc.sh`.
* Create a new tmux window: ``tmux new -s [name]``
* Attach to an existing window: ``tmux a -t [name]``
* NOTE: replace ``[name]`` with whatever you want to name your tmux window -- we recommend naming it *step1*.
@@ -219,7 +219,7 @@ The script takes one input:
.. NOTE::
* This script will need to be customized for your study! Edit this script once at the beginning of your project so that all the filenames match your naming scheme, and so the fieldmaps are being applied to the correct functional runs. If you did not collect fieldmaps, then you can ignore the steps specific to fieldmaps.

- * If an individual subject deviates from your standard (e.g., has an extra set of fieldmaps or is missing functional runs), then you will need to edit :blue:`step2_preproc.sh` again to accomodate these differences.
+ * If an individual subject deviates from your standard (e.g., has an extra set of fieldmaps or is missing functional runs), then you will need to edit :blue:`step2_preproc.sh` again to accommodate these differences.

* **Sample project**: The sample dataset does NOT include fieldmaps. Therefore, when you edit the :blue:`step2_preproc.sh` for the sample project, you can comment out the lines of code dealing with the fieldmaps. You should still run :blue:`step2_preproc.sh` to delete the extra (scout and dup) files.

2 changes: 1 addition & 1 deletion docs/content_pages/03-06-sampleProjectWithDatalad.rst
@@ -366,7 +366,7 @@ Now we will add the :blue:`/data/bids` directory as its own (sub)dataset. Ultima
# create a new dataset
(datalad) [sample_project]$ datalad create -c text2git --description "Princeton pygers sample dataset raw BIDS files" -f -d^ ./data/bids

- # edit the unuseful commit message
+ # edit the useless commit message
(datalad) [sample_project]$ git commit --amend

# edit commit message to say: [DATALAD] Add BIDS dataset
2 changes: 1 addition & 1 deletion docs/content_pages/04-03-registration.rst
@@ -104,7 +104,7 @@ For volumetric transformations, fmriprep uses a combination of Freesurfer, FSL,
* Transformations: this is really a two step transformation so you need two different matrices

* MNI → T1w .h5 transformation as described above
- * T1w → EPI transformation: fmriprep saves this transformation in the :blue:`derivatives/work/fmriprep_wf` directory. Specifically, for subject 001 and task-story run 01, thie file would be located in
+ * T1w → EPI transformation: fmriprep saves this transformation in the :blue:`derivatives/work/fmriprep_wf` directory. Specifically, for subject 001 and task-story run 01, this file would be located in

.. code-block:: bash

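As a toy illustration of the two-step composition described in the hunk above (the real transforms are ITK `.h5` files applied with ANTs tooling, not plain 4×4 matrices; the values here are hypothetical):

```python
def matmul4(a, b):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# hypothetical MNI -> T1w affine: translate 2 mm along x
mni_to_t1 = [[1, 0, 0, 2], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
# hypothetical T1w -> EPI affine: translate 3 mm along y
t1_to_epi = [[1, 0, 0, 0], [0, 1, 0, 3], [0, 0, 1, 0], [0, 0, 0, 1]]

# MNI -> T1w is applied first, then T1w -> EPI, so the T1w -> EPI
# matrix goes on the left of the product
mni_to_epi = matmul4(t1_to_epi, mni_to_t1)
```

The point is only the ordering: chaining "MNI → T1w" then "T1w → EPI" yields a single "MNI → EPI" mapping.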
2 changes: 1 addition & 1 deletion docs/content_pages/05-02-mvpa.rst
@@ -38,7 +38,7 @@ In this case, you would set up a GLM with the motion confounds (or whichever oth

**(2b) - Regressing out motion and setting up a GLM with task stimuli**

- In this case, you want to do MVPA on *beta coefficients* instead of BOLD activation. You need to set up your design matrix so that each trial/TR is separate. However many stimulus labels you set (plus constrasts) will determine how many betas your model will calculate. So if you want to train on faces vs. scenes, do not only make 1 face label and 1 scene label. If you did this, you will only get 1 beta value/voxel for face and 1 beta value/voxel for scene. This is usually not enough samples to train for MVPA, unless you’re doing some across subject analysis. Thus, you should probably make face_TR1, face_TR2, etc. separate labels so you get an abbreviate time series out of your results to use for MVPA.
+ In this case, you want to do MVPA on *beta coefficients* instead of BOLD activation. You need to set up your design matrix so that each trial/TR is separate. However many stimulus labels you set (plus contrasts) will determine how many betas your model will calculate. So if you want to train on faces vs. scenes, do not only make 1 face label and 1 scene label. If you did this, you will only get 1 beta value/voxel for face and 1 beta value/voxel for scene. This is usually not enough samples to train for MVPA, unless you’re doing some across subject analysis. Thus, you should probably make face_TR1, face_TR2, etc. separate labels so you get an abbreviate time series out of your results to use for MVPA.

Scikit-learn
------------
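The beta-series point in the hunk above — one regressor per trial yields one training sample per trial, while one regressor per condition yields a single beta map per class — can be sketched as follows (the trial counts are hypothetical):

```python
# hypothetical design: 10 face trials and 10 scene trials
n_trials = 10

# one label per condition: the GLM returns one beta map per label,
# so only 2 samples total -- too few to train a classifier
coarse_labels = ["face", "scene"]

# one label per trial: one beta map per trial, so 20 samples total,
# each usable as a training example for MVPA
trial_labels = ([f"face_TR{i}" for i in range(1, n_trials + 1)] +
                [f"scene_TR{i}" for i in range(1, n_trials + 1)])
```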
2 changes: 1 addition & 1 deletion docs/content_pages/05-03-surfaceBased.rst
@@ -93,7 +93,7 @@ Group Level Analysis

This part is the same for surfaces and volumes. Which AFNI command you use will depend on your specific analysis of interest. Two straightforward options are:

- * 3dttest++: `See documentation here <https://afni.nimh.nih.gov/pub/dist/doc/program_help/3dttest++.html>`_. AFNI’s command for performing ttests on imaging data.
+ * 3dttest++: `See documentation here <https://afni.nimh.nih.gov/pub/dist/doc/program_help/3dttest++.html>`_. AFNI’s command for performing t-tests on imaging data.
* 3dANOVA2: See `documentation here <https://afni.nimh.nih.gov/pub/dist/doc/program_help/3dttest++.html>`_. Performs a two-factor ANOVA. Usually one factor is your IV of interest and the other is subject as a random variable.

Regardless of what option you choose, make sure you output the results as a .gii file. This will store your stats on the same surfaces that we’ve been using up until now.
2 changes: 1 addition & 1 deletion docs/content_pages/06-03-usefulLicenses.rst
@@ -12,7 +12,7 @@ Research tools for Princeton researchers
.. role:: blue
.. role:: red

- The `Princeton University Library <https://library.princeton.edu/>`_ and `Princeton Research Data Service <https://researchdata.princeton.edu/>`_ have purchsaed institutional memberships to a number of reearch tools which can help researchers open their workflows. Below you will find a list of research tools that are currently supported. To recieve the benefits of a Princeton affiliation, simply create an account with your Princeton email address.
+ The `Princeton University Library <https://library.princeton.edu/>`_ and `Princeton Research Data Service <https://researchdata.princeton.edu/>`_ have purchsaed institutional memberships to a number of reearch tools which can help researchers open their workflows. Below you will find a list of research tools that are currently supported. To receive the benefits of a Princeton affiliation, simply create an account with your Princeton email address.

* `Code Ocean <research_tools/code_ocean.html>`_
* `Open Science Framework <research_tools/osf.html>`_
2 changes: 1 addition & 1 deletion docs/content_pages/06-04-contributing.rst
@@ -330,7 +330,7 @@ Now that you have a general sense of how the repo is organized (and how you will

* However, this sometimes doesn't happen right away. The moderators might want you to make some changes before accepting your merge request. If this is the case, they will contact you.

- 10. If your pull request gets approved and is merged to the offical handbook, **delete the branch** for the feature that was just approved. You can do this from the terminal or from the GitHub page for your forked repo.
+ 10. If your pull request gets approved and is merged to the official handbook, **delete the branch** for the feature that was just approved. You can do this from the terminal or from the GitHub page for your forked repo.

*On terminal*:

2 changes: 1 addition & 1 deletion docs/content_pages/06-06-resources.rst
@@ -53,7 +53,7 @@ Useful Toolboxes
* `PySurfer <http://pysurfer.github.io/>`_
Python library for visualizing cortical surface representations of neuroimaging data.
* `scikit-learn <https://scikit-learn.org/stable/>`_
- Software machine learning library for Python that featurs various classification, regression, and clustering algorithms.
+ Software machine learning library for Python that features various classification, regression, and clustering algorithms.
* `SciPy <https://www.scipy.org/>`_
Python-based ecosystem of open-source software for mathematics, science, and engineering. Its core packages include NumPy, SciPy, Matplotlib, iPython, and pandas.

2 changes: 1 addition & 1 deletion docs/content_pages/hack_pages/git.rst
@@ -141,7 +141,7 @@ As one last step, examine ``git log`` in both your original and second computer
General Git workflow
====================

- Initialize Git in an empty or exisiting code directory on computer1 (``git init``). Setup a corresponding new repository on GitHub. Add GitHub as the remote to your computer1 repo (``git remote add origin <url>``). Clone the GitHub repo to computer2 (``git clone <url>``).
+ Initialize Git in an empty or existing code directory on computer1 (``git init``). Setup a corresponding new repository on GitHub. Add GitHub as the remote to your computer1 repo (``git remote add origin <url>``). Clone the GitHub repo to computer2 (``git clone <url>``).

Keep everything synchronized! Follow this workflow when working from computer1 or computer2 (or computer3, etc.).

@@ -168,7 +168,7 @@ In the following, we'll work through a simple exercise of version control with G

# note that floats break this script
# check your python version then try division with remainder
- # if you're using python 3, the ouput should be a fractional float
+ # if you're using python 3, the output should be a fractional float
# if you're using python 2, the default is integer division (yikes)

# let's make an adjustment to our script
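The division comments in the hunk above can be checked directly under Python 3 semantics:

```python
# Python 3: "/" is true division and returns a float
assert 7 / 2 == 3.5
# "//" is floor division, matching Python 2's default integer behavior
assert 7 // 2 == 3
# divmod gives the quotient and remainder together
assert divmod(7, 2) == (3, 1)
```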
@@ -352,7 +352,7 @@ Refresh your GitHub page online. You should be able to see the contents inside :
- Comment out `module load anacondapy/5.3.1`
- Set the **scanner_dir** to the location where the data lives:
- for PNI folks, the location is :blue:`/jukebox/norman/pygers/conquest`
- - fot non-PNI folks, this should be the path to where you downloaded and saved the sample data
+ - for non-PNI folks, this should be the path to where you downloaded and saved the sample data
- Edit **project_dir** to where you have the sample_study directory (e.g., /jukebox/LAB/YOURNAME/pygers_workshop/sample_study)

Now, you need to commit and push the changes to your globals.sh file:
2 changes: 1 addition & 1 deletion docs/content_pages/research_tools/osf.rst
@@ -40,7 +40,7 @@ Create an account
=========================================
1. Go to `OSF <https://osf.io/dashboard>`_
2. Click 'Sign in through institution'
- 3. Create an accout with your Princeton email address
+ 3. Create an account with your Princeton email address

Getting started
===============
4 changes: 2 additions & 2 deletions docs/extra_files/sampleMaterials/RealTimeGreenEyesDisplay.m
@@ -77,12 +77,12 @@ function RealTimeGreenEyesDisplay(debug, useButtonBox, fmri, rtData, subjectNum,
fprintf(['* debug: ' num2str(debug) '\n']);
fprintf('*********************************************\n\n');

- %% Initalizing scanner parameters
+ %% Initializing scanner parameters

disdaqs = 15; % how many seconds to drop at the beginning of the run
TR = 1.5; % seconds per volume
% story duration is 11 minutes 52 seconds
- % sotry ends at 11 minutes 36.5 seconds
+ % story ends at 11 minutes 36.5 seconds
audioDur = 712; % seconds how long the entire autioclip is
runDur = audioDur;
nTRs_run = ceil(runDur/TR);
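As a quick sanity check on the scanner-parameter arithmetic in the hunk above (mirroring the MATLAB `ceil` call in Python):

```python
import math

TR = 1.5          # seconds per volume
audioDur = 712    # seconds; length of the full audio clip
runDur = audioDur
nTRs_run = math.ceil(runDur / TR)  # 712 / 1.5 = 474.67 -> 475 volumes
```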