
DG surface clean #357 (Open)

wants to merge 10 commits into base: dev-v2.0.0
Conversation

jordandekraker (Collaborator)

Instead of trimming surfaces (contours) with a NaN mask, we keep only vertices that overlap with good coord values. We then apply some surface-based morphological opening and closing to keep vertices along holes in the DG.

I still want to do some testing with this, and it could possibly be split up into separate surface-generation and post-processing steps, but tentatively this could be a good way to clean up our DG surfaces.

TODO: also apply largest connected component?

Note: if we trim these surfaces too much, there is a risk that in Laplace-Beltrami no vertex is assigned APsrc or APsink, with all vertices instead being assigned PDsrc or PDsink.
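
For illustration, a minimal sketch of the vertex-filtering plus mesh opening/closing idea described above (this is not the code in this PR; `surf`, `coords`, the validity test, and the iteration counts are assumed placeholders, and the adjacency-based dilate/erode is just one possible implementation):

```python
import numpy as np
import pyvista as pv
from scipy.sparse import csr_matrix


def vertex_adjacency(surf: pv.PolyData) -> csr_matrix:
    """Sparse vertex-vertex adjacency built from triangle edges (assumes an all-triangle mesh)."""
    faces = surf.faces.reshape(-1, 4)[:, 1:]  # drop the leading "3" per triangle
    edges = np.vstack([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
    n = surf.n_points
    adj = csr_matrix((np.ones(len(edges), dtype=np.int8), (edges[:, 0], edges[:, 1])), shape=(n, n))
    return adj + adj.T


def dilate(mask, adj):
    # a vertex is "on" if it is on already or has an "on" neighbour
    return mask | (adj @ mask.astype(np.int8) > 0)


def erode(mask, adj):
    return ~dilate(~mask, adj)


def good_vertex_mask(coords, adj, n_iter=2):
    # keep vertices with valid (non-NaN, in-range) coord values
    good = np.isfinite(coords) & (coords >= 0) & (coords <= 1)
    for _ in range(n_iter):  # opening: erode then dilate (removes small islands)
        good = erode(good, adj)
    for _ in range(n_iter):
        good = dilate(good, adj)
    for _ in range(n_iter):  # closing: dilate then erode (fills small holes)
        good = dilate(good, adj)
    for _ in range(n_iter):
        good = erode(good, adj)
    return good


# usage (hypothetical inputs):
# good = good_vertex_mask(coords, vertex_adjacency(surf))
# cleaned = surf.extract_points(good, adjacent_cells=False).extract_surface()
```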

@jordandekraker jordandekraker requested a review from akhanf January 27, 2025 20:17
@jordandekraker jordandekraker marked this pull request as ready for review January 27, 2025 20:39
akhanf (Member) commented Jan 27, 2025

any luck with getting cleaner DG surfaces here? I tried this branch after the first commit but didn't really see much difference..

jordandekraker (Collaborator, Author)

yup, seems to be working in my cases now!

akhanf (Member) commented Jan 28, 2025

I'm getting some syntax errors from your gen_isosurface.py script -- are you sure the latest is committed?

jordandekraker (Collaborator, Author)

You're right: with my last commit, which I never tested, there was a wrong variable name. Should be good now, I believe!

jordandekraker (Collaborator, Author)

Looks like we're good to go now; sorry I was so loosey-goosey with my commits!

akhanf (Member) commented Jan 28, 2025

thanks, running the test again now..

akhanf (Member) commented Jan 28, 2025

What test datasets have you been looking at, and what atlas/surface template? I'm using the ds002168 dataset with modality T1w and getting the attached surfaces. Also, it's segfaulting when resampling the surfaces, though I still have to see why that's the case.

[attached screenshot: Screenshot_20250128_180906_Horizon.jpg]

jordandekraker (Collaborator, Author)

I'm running on these: https://osf.io/k2nme/

Yeah, I agree the surface shown is pretty choppy; I was seeing some of that, but not quite so bad. I think this will be inevitable given the voxelization of the DG compared to the actual surface size though, and we could still apply some smoothing to clean it up. Looking at the surface in the image planes (e.g. the coronal plane), it's definitely in the right place and the behaviour matches the segmentation (which is also, unfortunately, usually a bit messy). So I believe the topology and boundary edges are correct, and I'm also pretty sure this will be good quality in high-res images & multihist7 where the DG is thicker, but maybe we should apply smoothing on lower-res images that are prone to voxelization?
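
If we do end up adding smoothing for the low-res/voxelized cases, a minimal sketch could look like the following (not part of this PR; the filenames, filter choice, and parameters are placeholders, and `smooth_taubin` needs a reasonably recent pyvista):

```python
import pyvista as pv

dg_surf = pv.read("dg_surface.surf.vtk")  # hypothetical DG surface file

# Taubin smoothing reduces staircase/voxelization artifacts while limiting the
# shrinkage that plain Laplacian smoothing (dg_surf.smooth) would cause on a
# thin structure like the DG
smoothed = dg_surf.smooth_taubin(n_iter=30, pass_band=0.05)
smoothed.save("dg_surface_smoothed.surf.vtk")
```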

For some of my tests I've been using --force-nnunet-model synthseg_v0.2, which tends to show a bit more DG and fill out the tip of the tail and the uncus a bit better too.

jordandekraker (Collaborator, Author)

I was also seeing the segfaults; I was wondering if it might be related to this: 38a3e89

akhanf (Member) commented Jan 29, 2025

No, it looks like the AP and PD coords for the dentate are completely flat. Does the Laplace-Beltrami code actually generalize to the dentate? I don't think we've tackled that yet, as far as I can tell.

akhanf (Member) commented Jan 29, 2025

But yes, as for the smoothness, I have some ideas of things we can do in image space before making the surface that can improve things a fair bit, especially now that we aren't really constrained by the image resolution that much. E.g. we could use an even higher-res ref space and then do some smoothing (e.g. with astropy, to smooth only within the coords), and/or use template-injected src/sink to then run laynii.
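
A minimal sketch of the NaN-aware "smooth only within the coords" idea, assuming a coord map that is NaN outside the label (filenames, kernel size, and sigma are placeholders; astropy only ships 1D/2D Gaussian kernels, so a 3D kernel is built by hand):

```python
import numpy as np
import nibabel as nib
from astropy.convolution import convolve

img = nib.load("coords_AP.nii.gz")  # hypothetical coord map, NaN outside the label
data = img.get_fdata()

# small 3D Gaussian kernel built by hand
sigma = 1.0
ax = np.arange(-3, 4)
xx, yy, zz = np.meshgrid(ax, ax, ax, indexing="ij")
kernel = np.exp(-(xx**2 + yy**2 + zz**2) / (2 * sigma**2))
kernel /= kernel.sum()

# astropy's convolve treats NaNs as missing data, so smoothing stays within the
# coord-defined region instead of bleeding zeros in from outside the label
smoothed = convolve(data, kernel, boundary="extend", preserve_nan=True)
nib.save(nib.Nifti1Image(smoothed, img.affine), "coords_AP_smoothed.nii.gz")
```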

Re: the code changes in this PR, the new additions make surface generation pretty slow (e.g. from a few seconds to ~5 minutes). I think it's because it's implemented with numpy and for loops; if we reuse existing functions from pyvista this could be much faster, e.g. pyvista can already do connected components and compute geodesic distances. But I'm wondering whether these particular clean-up steps will still be needed if we do the image-based changes. I didn't actually see any issues with the hippocampus surfaces generated before (in my test cases), and the code here doesn't seem to alter them further. Did you have issues with your hippocampus surfaces that motivated these changes? Or was it mainly geared towards the DG?
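
For reference, a sketch of leaning on those pyvista built-ins (illustrative only; the filename and vertex indices are placeholders, and the exact connectivity API depends on the pyvista version):

```python
import pyvista as pv

surf = pv.read("dg_surface.surf.vtk")  # hypothetical input

# keep only the largest connected component instead of looping in numpy
largest = surf.connectivity("largest")

# geodesic distance between two vertices (indices here are arbitrary examples)
dist = surf.geodesic_distance(0, surf.n_points - 1)
print(f"geodesic distance between endpoints: {dist:.2f}")
```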

jordandekraker (Collaborator, Author)

If speed is an issue we can vectorize or use a library for common operations.

The previous best volumetric cleanup of DG was to dilate the SRLM over the grey matter label where its PD coord was >0.9 (or 0.95 or something like that). Specifically, we were using that to heuristically define a DG label before we started training it into nnunet and template shapes. We were also doing it on a rougher PD coords method: geodesic distance from MTL cortex in discrete AP bins. This was an OK way to approximate a DG without holes in it, but it's still a bit ugly in my opinion, and it doesn't deal well with cases that are actually highly detailed. It would also ideally be done at a fixed resolution (e.g. setting DG thickness to 1 voxel, or 0.3mm). If you have an idea for a better volumetric solution then I'd love to see it.
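
For concreteness, a rough sketch of that earlier volumetric heuristic (not current hippunfold code; the filenames, label values, and threshold are placeholders):

```python
import numpy as np
import nibabel as nib
from scipy.ndimage import binary_dilation

lbl_img = nib.load("labels.nii.gz")    # hypothetical label volume
pd_img = nib.load("coords_PD.nii.gz")  # hypothetical PD coord volume
lbl = lbl_img.get_fdata().astype(int)
pd = pd_img.get_fdata()

SRLM, GM, DG = 2, 1, 8  # placeholder label values

# dilate SRLM, then call it DG wherever it lands on grey matter with high PD coord
srlm_dilated = binary_dilation(lbl == SRLM, iterations=1)
dg_mask = srlm_dilated & (lbl == GM) & (pd > 0.9)

out = lbl.copy()
out[dg_mask] = DG
nib.save(nib.Nifti1Image(out.astype(np.int16), lbl_img.affine), "labels_with_dg.nii.gz")
```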

The issue I wanted to address here was "holes" in the DG label, which is solved just by removing some of the NaN-masking where the src and sink meet. Instead, those unwanted areas are now masked out using a different method.

The PD coord src/sink are not yet available for the DG, and even the AP coords may not solve if the DG label doesn't reach into the tip of the tail and the uncus. This will still need to be addressed.

Ideally I would like to reduce reliance on template shape injection.

akhanf (Member) commented Jan 29, 2025

> If speed is an issue we can vectorize or use a library for common operations.
>
> The previous best volumetric cleanup of DG was to dilate the SRLM over the grey matter label where its PD coord was >0.9 (or 0.95 or something like that). Specifically, we were using that to heuristically define a DG label before we started training it into nnunet and template shapes. We were also doing it on a rougher PD coords method: geodesic distance from MTL cortex in discrete AP bins. This was an OK way to approximate a DG without holes in it, but it's still a bit ugly in my opinion, and it doesn't deal well with cases that are actually highly detailed. It would also ideally be done at a fixed resolution (e.g. setting DG thickness to 1 voxel, or 0.3mm). If you have an idea for a better volumetric solution then I'd love to see it.

The volumetric solution is the one I outlined in my previous comment: use a (much) higher-res space, and either smooth the injected coordinates, or use the injected coordinates to infer src/sink and then run laynii. Anyway, I'm going to be testing that out in the laynii PR.

> The issue I wanted to address here was "holes" in the DG label, which is solved just by removing some of the NaN-masking where the src and sink meet. Instead, those unwanted areas are now masked out using a different method.

I guess my point was that if this PR is solely to deal with holes in the DG, I'm not sure it will ultimately be needed; I think we can just keep it on hand for now in case it still proves useful later.

> The PD coord src/sink are not yet available for the DG, and even the AP coords may not solve if the DG label doesn't reach into the tip of the tail and the uncus. This will still need to be addressed.
>
> Ideally I would like to reduce reliance on template shape injection.

Yes, it would be nice to not rely on the template injection for DG, but I think we're stuck with that until we train updated models. And to unfold the DG we do still need to get the coords from somewhere.

jordandekraker (Collaborator, Author)

It may actually be just as easy to train a new unet instead of jerry-rigging the template shape injection, and this way we can harmonize protocol for DG and hipp (#334). Just added an example (#359).

While we're at it, we can:

  • include DG PD src-sink (as in hippaverage atlas from multihist7 #359)
  • automatically import background labels (similar to Mahmoud)
  • split hipp gm into 2 or 4 IO layers so we don't need to worry about holes in the SRLM anymore, and retire template shape injection (and also remove some minor steps like this)
  • (optional) fast & loose synthsick augmentation by generating a warp from dilated CSF labels

It will be a shame to retire all the old nnunet models, since they will no longer have compatible labels. But synthseg+nnunet seems to perform better in all these cases anyway.

I'll look into how long this will take. Tentatively, I would do this using:

  • multihist7 (+ bigbrain2 (labels only, I can't share the raw data), so 9 samples) (this needs doing anyway for atlas generation)
  • agile12 (24 samples) (will need to semi-manually add labels 9,10)
  • 10 7T superres subjects from hippomaps (20 samples) (will need to manually add labels 9,10).

I wouldn't bother with any lower resolutions, since they show low variability and are easy to simulate via synthseg anyway.

For added variability, we can also import some background labels from fastsurfer subjects, maybe using a reduced set of ~3 different subject backgrounds x 53 manual hipp segmentations to have good combinatorial variability. The 3 fastsurfer backgrounds can be single | split | hybrid collateral sulcus, and the nnunet random warps should take care of the rest of the variability.

Another related nice-to-have is that if we use only new compatible nnunet labeling, we can run fully contrast-agnostic. In that case, we could generate multimodal superres images with a preproc.smk like this:

superres = T1w + 1/T1map + 1/T2w + 1/b500 + 1/FLAIR
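
For example, a minimal sketch of what that combination might look like (not an existing preproc.smk rule; filenames are placeholders, and it assumes all modalities are already coregistered, resampled to the same grid, and intensity-normalized):

```python
import numpy as np
import nibabel as nib

def load(path):
    img = nib.load(path)
    return img, img.get_fdata()

ref, t1w = load("T1w.nii.gz")
_, t1map = load("T1map.nii.gz")
_, t2w = load("T2w.nii.gz")
_, b500 = load("dwi_b500.nii.gz")
_, flair = load("FLAIR.nii.gz")

eps = 1e-6  # guard against division by zero
superres = t1w + 1/(t1map + eps) + 1/(t2w + eps) + 1/(b500 + eps) + 1/(flair + eps)

nib.save(nib.Nifti1Image(superres, ref.affine), "superres.nii.gz")
```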

That's a lot of changes, but it would simplify things quite a bit while also giving me a bit more control & detail in the training data manual segmentations. We have a shiny new GPU node in Boris' lab, so I'm not too worried about the training time, just the time to generate / QA these manual segs. I think I can use heuristics to get most of it done and then manually touch up the results. I'll try out a few things and give a time estimate in the next few days. This would also be truly worthy of the v2.0.0 tag.

Can always incrementally train the new nnunet too!

akhanf (Member) commented Jan 29, 2025

The main issue currently faced with the DG coords/surfaces is just that the discretization uses too-large voxels. That is still going to be an issue with a new model unless we drastically change the resolution we are using, which would make it much more computationally intensive. I think that's a good direction to head in, but I imagine it will take some time to get there. We would also need to validate once more (we know the synthseg approach has not been as robust in general MRI datasets).

I am actually seeing great results with DG template shape injection as long as we are using a high-enough upsampled voxel resolution, even using the existing upenn labels with an appropriate level of smoothing at the new resolution (e.g. 50 micron iso for DG).
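
A minimal sketch of that upsample-then-smooth step on a label image (resolution factor, sigma, and filenames are placeholders, not hippunfold defaults; the half-voxel origin shift in the affine is omitted for brevity):

```python
import numpy as np
import nibabel as nib
from scipy.ndimage import zoom, gaussian_filter

img = nib.load("dg_label.nii.gz")  # hypothetical binary DG label
lbl = img.get_fdata() > 0.5

factor = 4                                           # e.g. 0.2 mm -> 0.05 mm iso
up = zoom(lbl.astype(np.float32), factor, order=1)   # linear upsampling
up = gaussian_filter(up, sigma=factor)               # smooth at the new resolution
smoothed_lbl = up > 0.5                              # re-threshold back to a label

new_affine = img.affine.copy()
new_affine[:3, :3] /= factor                         # shrink voxel size accordingly
nib.save(nib.Nifti1Image(smoothed_lbl.astype(np.uint8), new_affine), "dg_label_hires.nii.gz")
```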
