QiaoSiBo/DPC-MSGATNet

DPC-MSGATNet

PyTorch code for the paper "DPC-MSGATNet: Dual-Path Chain Multi-Scale Gated Axial-Transformer Network for Four-chamber View Segmentation in Fetal Echocardiography", which has been accepted by Complex & Intelligent Systems.

Using the code:

The code is stable with Python 3.7.6, PyTorch 1.7.1, Torchvision 0.8.2, CUDA 11.0.221, and cuDNN 8.0.5.

DPC-MSGATNet is trained on an NVIDIA Tesla V100 (32 GB). For more details on the hardware and software setup, please refer to our manuscript. Thanks!

Links for downloading the Datasets:

  1. MoNuSeG Dataset - Link (Original)
  2. GLAS Dataset - Link (Original)
  3. The fetal US FC view dataset from the paper will be made public soon. In the meantime, we have provided several test samples and our pre-trained DPC-MSGATNet weights in this repo.

Links for downloading the pretrained weights (.pth) of DPC-MSGATNet:

The .pth checkpoint of DPC-MSGATNet for fetal US FC view segmentation can be found here: DPC-MSGATNet

Using the Code for your dataset

Dataset Preparation

Prepare the dataset in the following format for easy use of the code. The train, validation, and test folders should each contain two subfolders: img and labelcol. Make sure the images and their corresponding segmentation masks are placed under these folders and share the same filename for easy correspondence. If you prefer not to prepare the dataset in this format, please adapt the data loaders to your needs.

Train Folder/
    img/
        0001.png
        0002.png
        ...
    labelcol/
        0001.png
        0002.png
        ...
Validation Folder/
    img/
        0001.png
        0002.png
        ...
    labelcol/
        0001.png
        0002.png
        ...
Test Folder/
    img/
        0001.png
        0002.png
        ...
    labelcol/
        0001.png
        0002.png
        ...
  • The ground-truth images should have pixel values corresponding to the class labels. For example, in binary segmentation the GT pixels should be 0 or 255.
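As a quick sanity check before training, the pairing between images and masks in a split folder can be verified with a short script. This is a minimal sketch, not part of the repo: the folder names img/labelcol follow the layout above, while the function name check_split is hypothetical.

```python
import os

def check_split(split_dir):
    """Verify that a split folder pairs every image with a same-named mask.

    Returns two sorted lists: image files missing a mask, and mask files
    missing an image. Both lists are empty for a correctly prepared split.
    """
    imgs = set(os.listdir(os.path.join(split_dir, "img")))
    labels = set(os.listdir(os.path.join(split_dir, "labelcol")))
    missing_labels = sorted(imgs - labels)   # images without a mask
    missing_images = sorted(labels - imgs)   # masks without an image
    return missing_labels, missing_images
```

Running it on each of the Train/Validation/Test folders before launching train.py catches filename mismatches early, instead of failing mid-epoch inside the data loader.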

Training Command:

python train.py --train_dataset "enter train directory" --val_dataset "enter validation directory" --direc 'path for results to be saved' --batch_size 4 --epoch 405 --save_freq 5 --modelname "msgatnet" --learning_rate 0.001 --imgsize 224 --blocksize 32 --gray "no"

Testing Command:

python test.py --loaddirec "./saved_model_path/model_name.pth" --val_dataset "test dataset directory" --direc 'path for results to be saved' --batch_size 1 --modelname "msgatnet" --imgsize 224 --blocksize 32 --gray "no" --mode "multiple"
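After testing, the saved predictions can be compared against the ground-truth masks with a Dice score. The helper below is a minimal sketch assuming the binary 0/255 mask convention described above, with masks given as flat pixel sequences; it is not part of test.py.

```python
def dice_score(pred, gt, fg=255):
    """Dice coefficient between two equal-length pixel sequences.

    pred, gt: iterables of pixel values (0 or 255 for binary masks).
    fg: pixel value treated as foreground.
    """
    p = [v == fg for v in pred]
    g = [v == fg for v in gt]
    inter = sum(a and b for a, b in zip(p, g))  # foreground overlap
    total = sum(p) + sum(g)                     # foreground in each mask
    if total == 0:          # both masks empty: define Dice as 1.0
        return 1.0
    return 2.0 * inter / total
```

For example, dice_score([255, 255, 0, 0], [255, 0, 0, 0]) gives 2/3: one overlapping foreground pixel against three foreground pixels in total.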

Acknowledgement:

We refer to the code of Medical-Transformer. The axial attention code is adapted from axial-deeplab.

Citation:

@article{Qiaosibo,
  title={DPC-MSGATNet: dual-path chain multi-scale gated axial-transformer network for four-chamber view segmentation in fetal echocardiography},
  author={Qiao, Sibo and Pang, Shanchen and Luo, Gang and Sun, Yi and Yin, Wenjing and Pan, Silin and Lv, Zhihan},
  journal={Complex \& Intelligent Systems},
  year={2023},
  url={https://doi.org/10.1007/s40747-023-00968-x}
}

Open an issue or mail me directly in case of any queries or suggestions.
