
Perform active learning to correct problematic segmentations #29

Closed
jcohenadad opened this issue Nov 27, 2023 · 8 comments
jcohenadad commented Nov 27, 2023

Related to #25

Procedure:

@rohanbanerjee (Collaborator):

  • We have decided to use hard labels and nnUNet for our task.

@rohanbanerjee (Collaborator):

  • We need a streamlined way to select the images flagged as good in the QC and create a dataset from them. It should select those subjects and create a BIDS dataset accordingly.

Provide more details on this comment.
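As a rough illustration of what that selection step could look like, here is a minimal sketch; the list-file format, directory layout, and function name are hypothetical, not taken from the repository:

```python
# Hypothetical sketch: copy QC-passed subjects into a new BIDS dataset.
# File names and layout are assumptions, not the actual pipeline.
import shutil
from pathlib import Path


def build_bids_subset(qc_pass_list: str, source_bids: str, out_bids: str) -> list[str]:
    """Copy the subjects listed in a QC 'pass' file into a new BIDS dataset.

    The list file is assumed to hold one subject per line, optionally in
    simple YAML list form ("- sub-01"); a real script would use pyyaml.
    """
    subjects = [
        line.strip().lstrip("- ").strip()
        for line in Path(qc_pass_list).read_text().splitlines()
        if line.strip() and not line.strip().startswith("#")
    ]
    src, dst = Path(source_bids), Path(out_bids)
    dst.mkdir(parents=True, exist_ok=True)
    # dataset_description.json is required at the root of every BIDS dataset
    desc = src / "dataset_description.json"
    if desc.exists():
        shutil.copy(desc, dst / desc.name)
    for sub in subjects:
        # copy the whole subject directory (anat/, func/, ...) unchanged
        shutil.copytree(src / sub, dst / sub, dirs_exist_ok=True)
    return subjects
```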


rohanbanerjee commented Jan 19, 2024

The progress on this issue is as follows:

What is done

  1. A systematic review of the ground truths was performed by Julien in Systematic review of binary ground truth quality #25.
  2. Based on the subjects marked ✅, I made a separate dataset and trained a 3D nnUNet model. The dataset contained 124 subjects in total (the trained model reached a final Dice score of 0.93).
  3. After this training completed, I ran inference on the images that were marked as ❌. The QC for these images is below.

20240129_data_excluded_qc.zip

Next steps:

  1. @MerveKaptan and I will go through the QC reports that were marked as ❌ and manually correct the images with suboptimal segmentations.
  2. The corrected images will be QCed again.
  3. The corrected data will be added to the training corpus and a new nnUNet model will be trained.
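For context, one retraining round like the one outlined above typically chains three nnU-Net v2 CLI calls (the entry-point names come from the public nnU-Net documentation; the dataset id, fold, and folder names below are hypothetical placeholders, not this project's values). The sketch only assembles the command lines:

```python
# Sketch only: builds nnU-Net v2 command lines for one retraining round.
# Dataset id and directories are hypothetical placeholders.
def retraining_commands(dataset_id: int, fold: int,
                        images_dir: str, preds_dir: str) -> list[list[str]]:
    cfg = "3d_fullres"  # the 3D configuration mentioned in this issue
    return [
        # 1. fingerprint the dataset and create training plans
        ["nnUNetv2_plan_and_preprocess", "-d", str(dataset_id)],
        # 2. train one fold of the 3D full-resolution configuration
        ["nnUNetv2_train", str(dataset_id), cfg, str(fold)],
        # 3. run inference on the images still marked ❌
        ["nnUNetv2_predict", "-i", images_dir, "-o", preds_dir,
         "-d", str(dataset_id), "-c", cfg],
    ]


cmds = retraining_commands(501, 0, "imagesTs", "predsTs")
```

Each inner list could be passed to `subprocess.run` once nnU-Net is installed; keeping them as data first makes the round easy to log and re-run.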


rohanbanerjee commented Jan 29, 2024

Marked ✅ for the predicted segmentations that looked fine and ❌ for the subjects with over- or under-segmentation. In my view, a few other subjects have artifacts, and I have marked them as ⚠️. The .yml files can be found below (qc_fail.yml contains the ❌ subjects and qc_artifact.yml contains the ⚠️ subjects).

qc_inference_rb-20240129.zip. I am working on correcting the segmentations that were marked as ❌ in the QC.
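The split into the two lists described above could be scripted along these lines (a hypothetical sketch: the mark values, function name, and the simple one-line-per-subject YAML output are assumptions that mirror the file names mentioned in this comment):

```python
# Hypothetical sketch: split per-subject QC marks into the two lists described
# above and write them as simple YAML lists ("- sub-XX" per line).
from pathlib import Path


def split_qc_marks(marks: dict[str, str], out_dir: str) -> tuple[list[str], list[str]]:
    fail = sorted(s for s, m in marks.items() if m == "fail")          # ❌ subjects
    artifact = sorted(s for s, m in marks.items() if m == "artifact")  # ⚠️ subjects
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    (out / "qc_fail.yml").write_text("".join(f"- {s}\n" for s in fail))
    (out / "qc_artifact.yml").write_text("".join(f"- {s}\n" for s in artifact))
    return fail, artifact
```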

@rohanbanerjee (Collaborator):

Hello @jcohenadad, after a discussion with @MerveKaptan, we decided that it would be great if you QC'd these predictions. This would maintain rater consistency and also get your input on which subjects we should manually correct. The QC is below:

data_excluded_qc_all.zip

@jcohenadad (Member, Author):

overseg: [image]

overseg: [image]

overseg: [image]

etc.

How were those predictions generated? Can you point to the code, the version of the model, etc.?


rohanbanerjee commented Feb 23, 2024

Predictions were generated using the steps mentioned in the comment above.

  • The model version can be found at fmri_sc_seg (the README contains details on the model version and on the dataset of ✅-marked data, named data_included_bids_20240113).

@rohanbanerjee (Collaborator):

Closing this issue, as we are opening a separate issue for each round; see #38 and #35 for reference.
