add AMICO noddi fitting step #20

Open. Wants to merge 1 commit into main (changes from all commits are shown below).
35 changes: 25 additions & 10 deletions README.md
@@ -21,18 +21,13 @@ That last item on rigid body registration and resampling might be irrelevant in

## Setup

Set up a python environment as follows:
```sh
python3.8 -m venv .venv
. .venv/bin/activate
pip install git+https://github.com/dipy/dipy.git@871b498c2bab8ad1c4961e2dbd594063bd877440
pip install -r requirements.txt
deactivate
```

@@ -108,15 +103,35 @@ python fit_dti.py extracted_images/ hdbet_output/ dti_output/

## Perform NODDI fit

There are two implementations to choose from: [dmipy](https://github.com/AthenaEPI/dmipy) and [amico](https://github.com/daducci/AMICO). The dmipy approach requires a separate python environment and is very slow. The amico approach is fast, but it fits a linearized approximation of the model (a convex fit over a dictionary of precomputed responses), so its solution won't necessarily land where a full nonlinear fit would converge.

For now we must pick one approach or the other. Ideally we'd combine them: use amico for a fast initial guess, then refine it with an iterative optimizer. That combined approach is not implemented.
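
For illustration only, the per-voxel shape of that combination might look like the sketch below. `forward_model` is a hypothetical callable mapping (NDI, ODI, FWF) to a predicted signal (e.g. one built with dmipy); nothing in this sketch exists in the repo.

```python
# Rough sketch of the unimplemented amico-then-refine idea: start an
# iterative least-squares fit from AMICO's fast estimate for one voxel.
# `forward_model` is a hypothetical NODDI signal predictor, not a real API.
import numpy as np
from scipy.optimize import minimize

def refine_from_amico(signal, amico_estimate, forward_model):
    """Refine one voxel's (NDI, ODI, FWF) starting from AMICO's estimate."""
    def loss(params):
        return np.sum((signal - forward_model(params)) ** 2)
    # All three NODDI parameters are fractions/indices constrained to [0, 1].
    result = minimize(loss, x0=amico_estimate, method='L-BFGS-B',
                      bounds=[(0.0, 1.0)] * 3)
    return result.x
```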

### AMICO approach

```sh
mkdir noddi_output/
python fit_noddi_amico.py extracted_images/ hdbet_output/ noddi_output/
```
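
Each image's fitted maps land in a per-subject folder `noddi_output/<basename>/`; see `fit_noddi_amico.py` below for the details.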

### DMIPY approach

DMIPY requires a separate python environment. The following setup worked for me:

```sh
python3.8 -m venv .venv_dmipy
. .venv_dmipy/bin/activate
pip install -r requirements_dmipy.txt
```

The DMIPY NODDI fit takes a while to run: almost 3 hours per image with parallel processing enabled on my 12-core machine.

The DMIPY approach follows [this tutorial](https://nbviewer.org/github/AthenaEPI/dmipy/blob/master/examples/tutorial_setting_up_acquisition_scheme.ipynb) for creating the acquisition scheme object and [this tutorial](https://nbviewer.org/github/AthenaEPI/dmipy/blob/master/examples/example_noddi_watson.ipynb) for constructing a Watson NODDI model.
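
For orientation, here is a condensed sketch of what those two tutorials build. The bval/bvec filenames and the `delta`/`Delta` timings are placeholders; the real values must come from the actual ABCD acquisition protocol.

```python
# Condensed sketch following the two dmipy tutorials linked above.
# File names and delta/Delta values are placeholders, not ABCD's real ones.
import numpy as np
from dmipy.core.acquisition_scheme import acquisition_scheme_from_bvalues
from dmipy.core.modeling_framework import MultiCompartmentModel
from dmipy.distributions.distribute_models import SD1WatsonDistributed
from dmipy.signal_models import cylinder_models, gaussian_models

# Acquisition scheme: b-values in SI units (s/m^2), unit gradient directions.
bvalues = np.loadtxt('dwi.bval') * 1e6  # s/mm^2 -> s/m^2
gradient_directions = np.loadtxt('dwi.bvec').T  # shape (N, 3)
scheme = acquisition_scheme_from_bvalues(
    bvalues, gradient_directions, delta=0.0129, Delta=0.0218)

# Watson-dispersed stick + zeppelin bundle (intra- and extra-axonal signal).
stick = cylinder_models.C1Stick()
zeppelin = gaussian_models.G2Zeppelin()
bundle = SD1WatsonDistributed(models=[stick, zeppelin])
bundle.set_tortuous_parameter(
    'G2Zeppelin_1_lambda_perp', 'C1Stick_1_lambda_par', 'partial_volume_0')
bundle.set_equal_parameter('G2Zeppelin_1_lambda_par', 'C1Stick_1_lambda_par')
bundle.set_fixed_parameter('G2Zeppelin_1_lambda_par', 1.7e-9)

# Add an isotropic ball compartment for free water to complete Watson NODDI.
ball = gaussian_models.G1Ball()
noddi = MultiCompartmentModel(models=[ball, bundle])
noddi.set_fixed_parameter('G1Ball_1_lambda_iso', 3e-9)

# Fitting would then be: fitted = noddi.fit(scheme, dwi_data, mask=brain_mask)
```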

```sh
mkdir noddi_output/
python fit_watson_noddi_dmipy.py extracted_images/ hdbet_output/ noddi_output/
```

## Estimate FODs
71 changes: 71 additions & 0 deletions fit_noddi_amico.py
@@ -0,0 +1,71 @@
from pathlib import Path
import shutil
import argparse
import amico
from common import get_unique_file_with_extension



# === Parse args ===

parser = argparse.ArgumentParser(description='fits NODDI model to ABCD DWI images using AMICO')
parser.add_argument('extracted_images_path', type=str, help='path to folder in which downloaded ABCD images were extracted')
parser.add_argument('masks_path', type=str, help='path to folder containing brain masks. the mask of x.nii.gz is expected to be named x_mask.nii.gz')
parser.add_argument('output_dir', type=str, help='path to folder in which to save the fitted NODDI results')
args = parser.parse_args()
extracted_images_path = Path(args.extracted_images_path)
masks_path = Path(args.masks_path)
output_dir = Path(args.output_dir)

# === Set up AMICO ===
# following https://github.com/daducci/AMICO/wiki/NODDI

amico.setup()
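# (setup() above precomputes the rotation matrices AMICO needs for kernel
# generation; it only has to do real work once per machine.)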
kernels_generated = False


# === Iterate through images, performing NODDI fit and saving results ===

for dwi_nii_directory in extracted_images_path.glob('*/*/*/dwi/'):

    nii_path = get_unique_file_with_extension(dwi_nii_directory, 'nii.gz')
    # (Note that getting a unique file like this wouldn't work in general on an ABCD download if someone extracted everything to the same
    # target folder instead of creating one folder for each archive as I did.)
    basename = nii_path.name.split('.')[0]

    subject_output_dir = output_dir/basename
    if subject_output_dir.exists():
        print(f"Skipping {basename} and assuming it was already processed since the following output directory exists:\n{subject_output_dir}")
        continue
    subject_output_dir.mkdir()

    try:
        amico_workspace = subject_output_dir/'amico_workspace'
        amico_workspace.mkdir(exist_ok=True)

        bval_path = get_unique_file_with_extension(dwi_nii_directory, 'bval')
        bvec_path = get_unique_file_with_extension(dwi_nii_directory, 'bvec')

        # fsl2scheme combines the FSL-style bval/bvec pair into the single .scheme file that AMICO expects.
        scheme_path = amico_workspace/f'{basename}.scheme'
        amico.util.fsl2scheme(bval_path, bvec_path, schemeFilename=scheme_path)

        mask_path = masks_path/(basename + '_mask.nii.gz')

        # Evaluation() defaults to the current working directory as its study path,
        # which is why the fitted maps are collected from ./AMICO further down.
        ae = amico.Evaluation()
        ae.load_data(nii_path, scheme_path, mask_path, replace_bad_voxels=0)
> **Review comment (Member Author):** still not sure about `replace_bad_voxels=0`. I need to look at minimally processed ABCD images and understand why there are "bad voxels" in the first place.

        ae.set_model('NODDI')
        ae.generate_kernels(regenerate=(not kernels_generated)) # We only need to generate kernels once because the b-values are the same for all of ABCD.
        kernels_generated = True
        ae.load_kernels()
        ae.fit()
        ae.save_results(path_suffix=basename)
        amico_results_dir = Path('.')/'AMICO'/f'NODDI_{basename}'

        for amico_output_file in amico_results_dir.iterdir():
            filename = amico_output_file.name
            filename = filename.replace('fit', basename)
            shutil.move(amico_output_file, subject_output_dir/filename)
        shutil.rmtree(Path('.')/'AMICO')
    except Exception:
        shutil.rmtree(subject_output_dir) # Remove subject output dir so that upon re-run it doesn't think this was already processed successfully.
        raise  # Bare raise preserves the original traceback.
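
The script imports `get_unique_file_with_extension` from the repo's `common` module, which is not part of this diff. Judging by the name and how it is used, it is presumably something like this sketch (the real implementation may differ):

```python
# Hypothetical sketch of common.get_unique_file_with_extension; the actual
# helper lives in common.py, which this PR does not touch.
from pathlib import Path

def get_unique_file_with_extension(directory: Path, extension: str) -> Path:
    matches = list(directory.glob(f'*.{extension}'))
    if len(matches) != 1:
        raise RuntimeError(
            f'expected exactly one *.{extension} file in {directory}, '
            f'found {len(matches)}')
    return matches[0]
```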
File renamed without changes: `fit_watson_noddi.py` → `fit_watson_noddi_dmipy.py` (matching the updated README command).
2 changes: 1 addition & 1 deletion requirements.txt
@@ -31,7 +31,7 @@ defusedxml==0.7.1
Deprecated==1.2.14
dill==0.3.7
dipy==1.9.0.dev0
dmri-amico==2.0.1
ecos==2.0.12
exceptiongroup==1.1.3
executing==2.0.1