MUsculo-Skeleton-Aware (MUSA) Deep Learning for Anatomically Guided Head-and-Neck CT Deformable Registration
This is the official PyTorch implementation of the paper:
Liu, H., McKenzie, E., Xu, D., Xu, Q., Chin, R. K., Ruan, D., & Sheng, K. (2025). MUsculo-Skeleton-Aware (MUSA) deep learning for anatomically guided head-and-neck CT deformable registration. Medical Image Analysis, 99, 103351. https://doi.org/10.1016/j.media.2024.103351
- Upload MUSA code [Dec 2024]
- Upload training code [Mar 2025]
- Finalize README.md [Mar 2025]
MUSA is a two-stage deformable image registration framework for head-and-neck CT. It decomposes the complex head-and-neck deformation into a bulk posture change and a residual fine deformation by applying spatially variant regularization to bony structures and soft tissue. We highlight the importance of explicit multiresolution modeling and anatomical constraints for achieving anatomically plausible deformations.
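To illustrate the spatially variant regularization idea, here is a minimal, hypothetical sketch (the function name, weight values, and tensor layout are assumptions, not the repository's actual implementation): gradients of the displacement field are penalized more heavily inside bony structures, keeping them near-rigid while soft tissue is allowed to deform more freely.

```python
import torch

def spatially_variant_smoothness(flow, bone_mask, w_bone=10.0, w_soft=1.0):
    """Illustrative spatially variant smoothness penalty (a sketch, not MUSA's exact loss).

    flow:      (N, 3, D, H, W) displacement field in voxels
    bone_mask: (N, 1, D, H, W) binary mask of bony structures
    """
    # Forward differences of the field along each spatial axis
    dz = flow[:, :, 1:, :, :] - flow[:, :, :-1, :, :]
    dy = flow[:, :, :, 1:, :] - flow[:, :, :, :-1, :]
    dx = flow[:, :, :, :, 1:] - flow[:, :, :, :, :-1]
    # Spatially variant weight map: large on bone, small on soft tissue
    w = w_soft + (w_bone - w_soft) * bone_mask
    return (w[:, :, 1:, :, :] * dz.pow(2)).mean() \
         + (w[:, :, :, 1:, :] * dy.pow(2)).mean() \
         + (w[:, :, :, :, 1:] * dx.pow(2)).mean()
```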
[work in progress]
We cannot share the processed dataset. However, the raw inter-subject datasets used in this study can be obtained from The Cancer Imaging Archive (TCIA).
The preprocessing steps include the following (a code sketch of these steps follows the list):
- Background removal: Remove the background, including the scanning bed and patient immobilization devices.
- Standardizing orientation: Reorient all images to follow the convention:
- i: Right-to-Left (R → L)
- j: Anterior-to-Posterior (A → P)
- k: Inferior-to-Superior (I → S)
- Centering: Rigidly align all images to a common template.
- Intensity clipping and normalization: Clip image intensity values to the range [-1024, 3000] Hounsfield Units (HU) and normalize them to the range [0, 1].
- Spatial interpolation and cropping: Resample all images to an isotropic voxel spacing of 2 mm using trilinear interpolation, then crop them to a matrix size of 160x160x192. The half-resolution images used in the first stage of the two-stage approaches are downsampled to a 4 mm spacing and a matrix size of 80x80x96.
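The following is a minimal sketch of the orientation, intensity, and resampling steps using SimpleITK, assuming the "LPS" orientation code matches the axis convention above and a hypothetical input file name; background removal and template alignment are omitted:

```python
import SimpleITK as sitk

def reorient(img):
    # i: R->L, j: A->P, k: I->S corresponds to the "LPS" orientation code in ITK
    return sitk.DICOMOrient(img, "LPS")

def clip_and_normalize(img):
    # Clip to [-1024, 3000] HU, then map that range linearly to [0, 1]
    img = sitk.Clamp(img, sitk.sitkFloat32, -1024.0, 3000.0)
    return sitk.ShiftScale(img, 1024.0, 1.0 / 4024.0)

def resample_isotropic(img, spacing=2.0):
    # Trilinear resampling to an isotropic voxel spacing (mm)
    old_size, old_spacing = img.GetSize(), img.GetSpacing()
    new_size = [int(round(sz * sp / spacing)) for sz, sp in zip(old_size, old_spacing)]
    return sitk.Resample(img, new_size, sitk.Transform(), sitk.sitkLinear,
                         img.GetOrigin(), [spacing] * 3, img.GetDirection(),
                         0.0, img.GetPixelID())

def center_crop_or_pad(img, target):
    # Pad with zeros where the image is smaller than the target ...
    size = img.GetSize()
    pad_lo = [max((t - s) // 2, 0) for s, t in zip(size, target)]
    pad_hi = [max(t - s - lo, 0) for s, t, lo in zip(size, target, pad_lo)]
    img = sitk.ConstantPad(img, pad_lo, pad_hi, 0.0)
    # ... then crop symmetrically where it is larger
    size = img.GetSize()
    crop_lo = [max((s - t) // 2, 0) for s, t in zip(size, target)]
    crop_hi = [max(s - t - lo, 0) for s, t, lo in zip(size, target, crop_lo)]
    return sitk.Crop(img, crop_lo, crop_hi)

img = clip_and_normalize(reorient(sitk.ReadImage("ct.nii.gz", sitk.sitkFloat32)))
full = center_crop_or_pad(resample_isotropic(img, 2.0), (160, 160, 192))  # stage 2 input
half = center_crop_or_pad(resample_isotropic(img, 4.0), (80, 80, 96))    # stage 1 input
```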
Segmentations of bony structures and related soft tissue organs at risk (OARs) can be obtained using existing deep learning-based autosegmentation methods, for example (a sketch of turning such label maps into registration masks follows the list):
- Vertebrae segmentation: challenge, example repo
- Head and Neck (HN) OAR segmentation: challenge, example repo
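As a hedged sketch, assuming hypothetical file names and the convention that foreground labels are positive integers (adapt this to whichever autosegmentation tool you use), the resulting label maps can be merged into the binary masks consumed by the spatially variant regularizer sketched above:

```python
import SimpleITK as sitk
import torch

# Hypothetical label maps: vertebrae labels > 0 in one volume, OAR labels > 0
# in the other; file names and label conventions depend on the tool used.
vertebrae = sitk.GetArrayFromImage(sitk.ReadImage("vertebrae_labels.nii.gz"))
oars = sitk.GetArrayFromImage(sitk.ReadImage("hn_oar_labels.nii.gz"))

bone_mask = torch.from_numpy((vertebrae > 0).astype("float32"))  # (D, H, W)
soft_mask = torch.from_numpy((oars > 0).astype("float32"))

# Add batch/channel dims to match the (N, 1, D, H, W) layout used above
bone_mask = bone_mask[None, None]
soft_mask = soft_mask[None, None]
```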
[work in progress]
If you find this repository useful in your research, please consider citing:
```bibtex
@article{liu2025musa,
  title   = {MUsculo-Skeleton-Aware (MUSA) deep learning for anatomically guided head-and-neck CT deformable registration},
  journal = {Medical Image Analysis},
  volume  = {99},
  pages   = {103351},
  year    = {2025},
  issn    = {1361-8415},
  doi     = {10.1016/j.media.2024.103351},
  url     = {https://www.sciencedirect.com/science/article/pii/S1361841524002767},
  author  = {Hengjie Liu and Elizabeth McKenzie and Di Xu and Qifan Xu and Robert K. Chin and Dan Ruan and Ke Sheng},
}
```
The implementation of MUSA is based on the following open-source code: