Showcase
The current version of Brainchop supports T1-weighted MRI volume segmentation in the browser. The input must be a T1 brain volume in NIfTI format.
This example shows best practice for using Brainchop with sample data from public human-brain MRI datasets such as Mindboggle101, from which we can download the brain MRI sample:
NKI-RS-22_volumes/NKI-RS-22-1/t1weighted.nii.gz
The T1 sample has a shape of [192, 256, 256] with 32-bit voxel values.
For proper results, the T1 image must be conformed to a 256^3 volumetric shape, scaled and resampled to 1 mm isotropic voxels as a preprocessing step. This preprocessing can be done automatically in the Brainchop UI by calling mri_convert.js.
resampling.mp4
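The conform step above (reshape to a 256^3 grid) can be sketched with a simple nearest-neighbor resampler over a flat voxel array. This is an illustrative sketch, not mri_convert.js's actual API; the function name, signature, and nearest-neighbor choice are assumptions (mri_convert-style conforming typically also handles orientation and intensity rescaling):

```javascript
// Sketch: conform a 3D volume of shape [dx, dy, dz] to a cubic
// targetDim^3 grid with nearest-neighbor resampling.
// Voxels are stored flat in z-major order: index = z*dx*dy + y*dx + x.
function conformVolume(data, shape, targetDim) {
  const [dx, dy, dz] = shape;
  const out = new Float32Array(targetDim * targetDim * targetDim);
  for (let z = 0; z < targetDim; z++) {
    for (let y = 0; y < targetDim; y++) {
      for (let x = 0; x < targetDim; x++) {
        // Map each target voxel back to the nearest source voxel.
        const sx = Math.min(dx - 1, Math.round((x * dx) / targetDim));
        const sy = Math.min(dy - 1, Math.round((y * dy) / targetDim));
        const sz = Math.min(dz - 1, Math.round((z * dz) / targetDim));
        out[z * targetDim * targetDim + y * targetDim + x] =
          data[sz * dx * dy + sy * dx + sx];
      }
    }
  }
  return out;
}
```

For the Mindboggle101 sample, this would map the [192, 256, 256] input onto the 256^3 grid the models expect.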
After reshaping, scaling, and resampling the input MRI, Brainchop can run basic volumetric enhancement operations to improve inference results and input-data visualization: thresholding voxel values, removing noisy voxels around the brain with a 3D connected-components algorithm, and increasing global brain contrast with volumetric histogram equalization.
Preprocessing.mp4
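Two of the enhancement operations described above, thresholding and global histogram equalization, can be sketched on a flat array of 8-bit voxel intensities. Function names here are illustrative, not Brainchop's actual API:

```javascript
// Sketch: zero out voxels below a threshold (background/noise suppression).
function thresholdVoxels(voxels, minValue) {
  return voxels.map((v) => (v < minValue ? 0 : v));
}

// Sketch: global histogram equalization, spreading 8-bit intensities
// over the full 0..255 range via the cumulative distribution function.
function equalizeHistogram(voxels) {
  const hist = new Array(256).fill(0);
  voxels.forEach((v) => hist[v]++);
  const cdf = [];
  let sum = 0;
  for (let i = 0; i < 256; i++) {
    sum += hist[i];
    cdf.push(sum);
  }
  const cdfMin = cdf.find((c) => c > 0); // first non-empty bin
  const n = voxels.length;
  return voxels.map((v) =>
    Math.round(((cdf[v] - cdfMin) / (n - cdfMin)) * 255)
  );
}
```

Thresholding suppresses low-intensity background, while equalization stretches the remaining brain intensities for better contrast.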
Multiple pre-trained models are available in Brainchop for full-volume and sub-volume inference, including brain masking and gray matter/white matter (GMWM) segmentation models, as well as brain-atlas models for 50 cortical regions and for 104 cortical and subcortical structures.
Inference50.mp4
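Sub-volume inference works by tiling the conformed volume into patches, running the model on each patch, and stitching predictions back by origin. The sketch below shows only the non-overlapping tiling step; it is an assumption for illustration, and Brainchop's actual sub-volume scheme may differ (e.g., overlapping patches or padding):

```javascript
// Sketch: split a cubic dim^3 volume (flat, z-major order) into
// non-overlapping sub^3 patches for patch-based inference.
// Simplification: dim is assumed divisible by sub.
function extractSubVolumes(data, dim, sub) {
  const patches = [];
  for (let z = 0; z < dim; z += sub)
    for (let y = 0; y < dim; y += sub)
      for (let x = 0; x < dim; x += sub) {
        const patch = new Float32Array(sub * sub * sub);
        for (let pz = 0; pz < sub; pz++)
          for (let py = 0; py < sub; py++)
            for (let px = 0; px < sub; px++)
              patch[pz * sub * sub + py * sub + px] =
                data[(z + pz) * dim * dim + (y + py) * dim + (x + px)];
        // Keep the origin so predictions can be stitched back in place.
        patches.push({ origin: [x, y, z], patch });
      }
  return patches;
}
```

Full-volume models skip this step and consume the whole 256^3 grid at once, trading memory for simplicity.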
Brainchop supports real-time 3D rendering of the input and output volumes using Three.js, with Region of Interest (ROI) selection capability.
Noisy 3D regions may appear in the inference output due to bias, variance, and irreducible error (e.g., noise in the data). To remove them, we designed a 3D connected-components algorithm that filters out these noisy regions. After verifying the output, it can be saved locally in NIfTI format.
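The connected-components filter above can be sketched as a flood fill that labels each component of a binary mask and keeps only the largest one. This is an illustrative implementation with 6-connectivity, not Brainchop's actual code:

```javascript
// Sketch: keep only the largest 3D connected component (6-connectivity)
// of a binary mask stored flat in z-major order.
function largestComponent(mask, [dx, dy, dz]) {
  const idx = (x, y, z) => z * dx * dy + y * dx + x;
  const label = new Int32Array(mask.length).fill(-1);
  const sizes = [];
  for (let start = 0; start < mask.length; start++) {
    if (!mask[start] || label[start] !== -1) continue;
    const id = sizes.length;
    const queue = [start];
    label[start] = id;
    let size = 0;
    while (queue.length) {
      const cur = queue.pop();
      size++;
      const x = cur % dx;
      const y = Math.floor(cur / dx) % dy;
      const z = Math.floor(cur / (dx * dy));
      // Visit the six face-adjacent neighbors.
      for (const [nx, ny, nz] of [
        [x - 1, y, z], [x + 1, y, z],
        [x, y - 1, z], [x, y + 1, z],
        [x, y, z - 1], [x, y, z + 1],
      ]) {
        if (nx < 0 || ny < 0 || nz < 0 || nx >= dx || ny >= dy || nz >= dz)
          continue;
        const ni = idx(nx, ny, nz);
        if (mask[ni] && label[ni] === -1) {
          label[ni] = id;
          queue.push(ni);
        }
      }
    }
    sizes.push(size);
  }
  const biggest = sizes.indexOf(Math.max(...sizes));
  return mask.map((v, i) => (label[i] === biggest ? 1 : 0));
}
```

Applied to a brain mask, this drops small disconnected islands of misclassified voxels while leaving the main brain component intact.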