---
layout: distill
title: Weakly Supervised MyoPS
description: Weakly Supervised Myocardial Pathology Segmentation
permalink: /weak_myops/
toc:
_styles: >
  d-article {
    contain: layout style;
    overflow-x: hidden;
    border-top: 1px solid rgba(0, 0, 0, 0.1);
    padding-top: 2rem;
    color: rgba(0, 0, 0, 0.8);
  }
  d-article > * {
    grid-column: text;
  }
---
{% include figure.liquid loading="eager" path="/assets/img/weak_myops.png" class="img-fluid" zoomable=true caption="Figure 1. Fully supervised and weakly supervised myocardial pathology segmentation." %}
Myocardial infarction is a common and serious cardiovascular disease that leads to necrosis and scar formation in cardiac tissues, significantly impacting patients' quality of life and health. Utilizing cardiac magnetic resonance (CMR) imaging techniques, particularly the late gadolinium enhancement (LGE) sequence, allows for the visualization of infarcted scar regions within the myocardium. However, manual segmentation of infarcted scars from LGE images is a time-consuming and labor-intensive task, while automated segmentation methods can enhance efficiency and reduce human error.
As shown in Figure 1, traditional fully supervised deep learning methods require a large amount of accurately labeled data for training, which is often challenging and expensive. Weakly supervised learning methods, which aim to train models with limited supervision, have emerged as a promising solution to these challenges. By utilizing only partial or noisy labels, weakly supervised methods enable the training of deep learning models with reduced reliance on expensive and labor-intensive annotation efforts.
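As a concrete illustration, one common weakly supervised strategy with sparse line (scribble) annotations is a partial cross-entropy loss that back-propagates only through the annotated pixels and ignores the rest. The sketch below is a minimal, hypothetical PyTorch example; it is not the official baseline, and the function name and the ignore value of 255 are assumptions.

```python
import torch
import torch.nn.functional as F

def partial_cross_entropy(logits, sparse_labels, ignore_index=255):
    """Cross-entropy computed only on the sparsely annotated pixels.

    logits:        (B, C, H, W) raw network outputs
    sparse_labels: (B, H, W) line/scribble annotations; un-annotated
                   pixels are filled with `ignore_index`
    """
    # F.cross_entropy skips every pixel marked with ignore_index,
    # so only the traced line pixels contribute to the gradient.
    return F.cross_entropy(logits, sparse_labels, ignore_index=ignore_index)

# Toy example: 2 classes (background / scar), one slice, mostly unlabeled.
logits = torch.randn(1, 2, 128, 128, requires_grad=True)
labels = torch.full((1, 128, 128), 255, dtype=torch.long)
labels[0, 60:62, 40:90] = 1  # a traced scar line
loss = partial_cross_entropy(logits, labels)
loss.backward()
```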
Through this challenge, we aim to encourage participants to explore and develop novel weakly supervised deep learning methods for accurate segmentation of myocardial infarcted scars from LGE images with limited supervision. Additionally, we encourage participants to tackle the challenge of incorporating multi-center data, recognizing the importance of addressing real-world complexities to enhance the robustness and applicability of their proposed solutions. The best works, following the precedent of MyoPS 2020, will be recognized with awards.
This challenge will provide LGE CMR images of 300 patients from 6 centers in China, France, and the United Kingdom. Scarred areas are indicated by lines traced on the LGE images.
All clinical data have received institutional ethical approval and have been anonymized to ensure privacy and compliance with ethical standards.
Center | Num. patients |
---|---|
A | 181 |
B | 50 |
C | 45 |
D | 7 |
E | 9 |
F | 8 |
The dataset is divided into three parts: training, validation, and test sets.
- Validation Set: 50 LGE images from Center A.
- Test Set: 50 LGE images from Center A.
- Training Set: the remaining LGE images from Centers A, B, C, D, E, and F.
The LGE images and line labels of scars will be provided in NIfTI format, named as follows (a loading example is sketched after this list):
- [Patient Identifier]_LGE.nii.gz
- [Patient Identifier]_line.nii.gz
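As a rough illustration of how these files might be read, the snippet below uses nibabel and NumPy; the patient identifier and directory layout are placeholders, not prescribed by the challenge.

```python
import nibabel as nib
import numpy as np

patient_id = "Case_001"  # placeholder identifier, not an actual case name

# Load the LGE image and the corresponding line (scribble) label.
lge = nib.load(f"{patient_id}_LGE.nii.gz")
line = nib.load(f"{patient_id}_line.nii.gz")

lge_data = lge.get_fdata()                                  # image intensities
line_data = np.asarray(line.get_fdata(), dtype=np.uint8)    # sparse scar labels

print("LGE shape:", lge_data.shape, "voxel spacing:", lge.header.get_zooms())
print("Annotated label values:", np.unique(line_data))
```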
The performance of the scar segmentation results will be evaluated with the following metrics (an illustrative sketch of their computation follows this list):
- Dice Similarity Coefficient (DSC)
- Accuracy (ACC)
- Sensitivity (SEN)
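For reference, the three metrics can be computed from binary masks as in the plain NumPy sketch below; it may differ in detail (e.g., handling of empty masks) from the official evaluation code.

```python
import numpy as np

def dsc(pred, gt):
    """Dice Similarity Coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0

def accuracy(pred, gt):
    """Fraction of voxels classified correctly (ACC)."""
    return (pred == gt).mean()

def sensitivity(pred, gt):
    """True-positive rate (SEN): fraction of ground-truth voxels recovered."""
    tp = np.logical_and(pred, gt).sum()
    pos = gt.sum()
    return tp / pos if pos > 0 else 1.0

# Example with two random binary masks.
rng = np.random.default_rng(0)
pred = rng.integers(0, 2, size=(128, 128)).astype(bool)
gt = rng.integers(0, 2, size=(128, 128)).astype(bool)
print(dsc(pred, gt), accuracy(pred, gt), sensitivity(pred, gt))
```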
Note that this track will provide an open platform for research groups to validate and test their methods. To ensure a fair comparison, the test dataset will remain unseen; participants need to submit their Docker containers to our platform for testing.
- Only automatic methods are acceptable. Participants must utilize algorithms that do not require manual intervention or human-assisted processes for the segmentation task.
- External datasets and pre-trained models are not allowed in this track. Solutions must be developed using only the data provided within the scope of this track and cannot leverage any external datasets or models for assistance.
Please sign up to join this track.
After registration, each participating team will be assigned an account to log into our evaluation platform. Participants can directly upload their predictions on the validation data (in NIfTI format) via the website; note that evaluation on the validation data is allowed up to 10 times for each task per team. For a fair comparison, the test dataset will remain unseen, and participants need to submit their Docker containers for testing.
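When preparing validation submissions, predictions typically need to be written back as NIfTI volumes aligned with the corresponding input images. The snippet below is a minimal sketch using nibabel, assuming the prediction is a NumPy array on the same voxel grid as the LGE image; the exact file-naming rules should follow the platform instructions.

```python
import nibabel as nib
import numpy as np

def save_prediction(pred, reference_lge_path, out_path):
    """Write a segmentation mask as NIfTI, reusing the reference image geometry."""
    ref = nib.load(reference_lge_path)  # e.g. "Case_001_LGE.nii.gz" (placeholder)
    pred = np.asarray(pred, dtype=np.uint8)
    # Copy affine and header so the prediction stays aligned with the LGE image.
    out = nib.Nifti1Image(pred, ref.affine, ref.header)
    out.set_data_dtype(np.uint8)
    nib.save(out, out_path)
```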
The schedule for this track is as follows. All deadlines (DDLs) are at 23:59 Pacific Standard Time.
Training Data Release | June 1, 2024 |
---|---|
Validation Phase | August 1, 2024 to September 1, 2024 (DDL) |
Test Phase | September 1, 2024 to October 1, 2024 (DDL) |
Notification | October 5, 2024 |
Workshop (Half-Day) | November 8, 2024 |
The organizers of this track are:
- Xiahai Zhuang, School of Data Science, Fudan University
- Wangbin Ding, School of Imaging, Fujian Medical University
- Liqin Huang, College of Physics and Information Engineering, Fuzhou University
If you have any questions regarding the challenge, please feel free to contact:
- Email1: [email protected]
- Email2: [email protected]
- Yibo Gao (contact person for IT support): [email protected]
- Zhen Zhang (contact person for IT support): [email protected]