Lesion segmentation analysis #38
I worked on a segmentation inferred by nnU-Net (the model trained for 1000 epochs) on image
Output: a file in which each segmented voxel is indexed per lesion (1 for lesion 1, 2 for lesion 2, etc.).
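A minimal sketch of that per-lesion indexing step (an assumption about the implementation, not the actual script): connected-component labeling with SciPy assigns a distinct integer to each lesion in a binary segmentation.

```python
import numpy as np
from scipy import ndimage

def index_lesions(binary_seg: np.ndarray) -> np.ndarray:
    """Assign a distinct integer label to each connected lesion."""
    # Default structuring element gives 6-connectivity in 3D;
    # pass a custom `structure` for 26-connectivity if needed.
    labeled, n_lesions = ndimage.label(binary_seg)
    print(f"Lesion count = {n_lesions}")
    return labeled

# Toy 3D volume with two separate lesions (shapes are illustrative)
seg = np.zeros((10, 10, 10), dtype=np.uint8)
seg[1:3, 1:3, 1:3] = 1  # lesion 1
seg[6:8, 6:8, 6:8] = 1  # lesion 2
labeled = index_lesions(seg)  # voxels of lesion 1 -> 1, lesion 2 -> 2
```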
---
Cool! For this, you are using the `lesion_seg_analysis.py` script from the `plb/nnunet` branch, right? Be aware that SCT contains the `sct_analyze_lesion` function, which does pretty similar things:

```
$ sct_analyze_lesion -m sub-cal080_ses-M0_STIR_lesion-manual.nii.gz
...
Lesion count = 1
Measures on lesion #1...
Volume : 52.92 mm^3
...
Averaged measures...
Volume [mm^3] = 52.92 +/- 0.0
(S-I) Length [mm] = nan +/- nan
Equivalent Diameter [mm] = nan +/- nan
Total volume = 52.92 mm^3
Lesion count = 1
...
```

---
Currently performing registration between the M0 and M12 images using: The registration is not of great quality because of the very limited number of slices along the z-axis (9 slices in this case). This will surely have a clear impact on the study of the evolution of spinal cord lesions.

---
What is the purpose of the registration? If it is to identify the same lesions across time points, then it should be fine (given that lesions span several voxels).

---
The objective is to register the M0 and M12 images to compare the spinal cord lesions. Yes, it means identifying the lesions across time points.

---
Another possible approach might be using

---
From @valosekj:

> That's a good idea, although it might be overkill (straightening takes time). But for visualisation purposes, that could be interesting. In any case, if @plbenveniste needs some reasonable level of alignment between time points, he will definitely need disc levels (otherwise, how can he make sure that the coverage is exactly the same across time points?).

From @plbenveniste:

> In that case, you don't need sub-millimetric precision, given that lesions are 'big blobs'. A 2-3 mm precision in the registration will make it possible to identify the same lesion across time.
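If the registration brings the time points into roughly the same space, the "same lesion across time points" identification can be sketched as an overlap test between the two labeled lesion maps. This is an illustration of the reasoning above, not the project's actual code:

```python
import numpy as np

def match_lesions(labels_m0: np.ndarray, labels_m12: np.ndarray):
    """Return pairs (label_m0, label_m12) whose voxels overlap."""
    pairs = set()
    overlap = (labels_m0 > 0) & (labels_m12 > 0)
    for a, b in zip(labels_m0[overlap], labels_m12[overlap]):
        pairs.add((int(a), int(b)))
    return sorted(pairs)

# Toy 2D example: the same lesion, shifted by ~1 voxel between sessions,
# still overlaps and is therefore matched (big-blob assumption).
m0 = np.zeros((8, 8), dtype=int); m0[1:4, 1:4] = 1
m12 = np.zeros((8, 8), dtype=int); m12[2:5, 2:5] = 1
print(match_lesions(m0, m12))  # [(1, 1)]
```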
I'm not sure I fully understand #38 (comment). A few comments:
---
The code is here.

---
Additional comments:

---
I can't manage to use vertebral labels in the following command line.

---
Do the

---
Thanks for the tip. One of the vertebral label files had one extra label. Now I keep only the labels common to both files before using them for registration. The registration is much better when using vertebral levels: we can see the lesions clearly overlapping each other. Now, moving on to lesion comparison and the temporal evolution of the lesions.
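The "keep only common labels" step could look like this (a hypothetical sketch; the real files are NIfTI label volumes, here reduced to plain integer arrays):

```python
import numpy as np

def keep_common_labels(labels_a: np.ndarray, labels_b: np.ndarray):
    """Zero out any vertebral label that is not present in both volumes."""
    common = (set(np.unique(labels_a)) & set(np.unique(labels_b))) - {0}
    a = np.where(np.isin(labels_a, list(common)), labels_a, 0)
    b = np.where(np.isin(labels_b, list(common)), labels_b, 0)
    return a, b

# Toy example: file B has an extra label (6) and is missing label 5
a = np.array([0, 3, 4, 5, 0])
b = np.array([0, 3, 4, 0, 6])
a2, b2 = keep_common_labels(a, b)
print(a2.tolist(), b2.tolist())  # [0, 3, 4, 0, 0] [0, 3, 4, 0, 0]
```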
Now that looks like a much better registration indeed!

---

Can you also try with the 'dl' algo (without segmentation)? I'm curious to see how it performs.

---
Registration performed with the 'syn' algo, without the spinal cord segmentation but with vertebral levels:

Registration performed with the 'dl' algo, with vertebral levels:

Registration performed with the 'dl' algo, without vertebral levels:

Conclusion: it seems that, overall, the best results are obtained with the 'dl' algorithm with vertebral levels. Even though 'dl' doesn't require a spinal cord segmentation, computing vertebral levels does: so in terms of computing time, 'syn' and 'dl' seem similar. Next objectives:
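One way to go beyond visual inspection when comparing 'syn' and 'dl' would be a Dice score between the warped and fixed lesion masks. This metric is an illustrative suggestion, not part of the original pipeline:

```python
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice coefficient between two binary masks (1.0 = perfect overlap)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy example: two 16-voxel squares overlapping on 9 voxels
a = np.zeros((10, 10), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((10, 10), dtype=bool); b[3:7, 3:7] = True
print(dice(a, b))  # 2*9/(16+16) = 0.5625
```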
---
I believe that when

---
With nearest-neighbour interpolation, the intensity of the voxels should remain the same.
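This can be checked directly: order-0 (nearest-neighbour) resampling only copies existing values, while linear interpolation creates new intermediate intensities. A toy example with SciPy:

```python
import numpy as np
from scipy import ndimage

# Small label map with values 1, 2, 3
labels = np.array([[1, 1, 2],
                   [1, 2, 2],
                   [3, 3, 2]], dtype=float)

nn = ndimage.zoom(labels, 2, order=0)   # nearest neighbour: labels preserved
lin = ndimage.zoom(labels, 2, order=1)  # linear: fractional values appear

print(sorted(np.unique(nn)))   # still only [1.0, 2.0, 3.0]
print(len(np.unique(lin)))     # more than 3 distinct values
```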
After quite some trouble with graph matching, I changed my strategy to the following:

The result is the following:

The output file is the following:

The code can be found here.

---
Very good progress @plbenveniste! 👏

Please avoid using t1/t2 for time points 1-2, because in MRI we often use T1 and T2 for the contrasts. Instead, you could use ses-1, ses-2 (inspired by BIDS).

---
Or maybe,

---
As suggested in this comment, I tried using

However, I am surprised by the very long computation time for the images, which is much longer than with the previous method (loading the data, removing the outliers, and saving it). For the first image: 147 seconds; for the second image: 161 seconds.
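For reference, the outlier-removal step itself should take milliseconds when vectorized with NumPy, which makes the runtimes above surprising. The threshold and array shape below are purely illustrative:

```python
import time
import numpy as np

# Synthetic volume roughly the size of a 9-slice acquisition
data = np.random.default_rng(0).normal(size=(320, 320, 9)).astype(np.float32)

t0 = time.perf_counter()
cleaned = np.where(np.abs(data) > 3.0, 0.0, data)  # zero out outlier voxels
elapsed = time.perf_counter() - t0
print(f"{elapsed:.4f} s")  # typically milliseconds, not minutes
```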
About #38 (comment): it is surprising indeed, given the triviality of the task. Could you please open an issue on the SCT repository? Thanks!

---
In this issue, I use the segmentation of the lesions on the spinal cord after cropping the images (issue #37).

The analysis of the segmentations accounts for the following: