
Commit 207cad9

Update google drive link (#404)
Signed-off-by: YunLiu <[email protected]>
KumoLiu authored Nov 6, 2024
1 parent 6ffd627 commit 207cad9
Showing 4 changed files with 7 additions and 7 deletions.
SwinUNETR/BRATS21/README.md (2 additions & 2 deletions)

@@ -28,7 +28,7 @@ Challenge: RSNA-ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge
"TrainingData/BraTS2021_01146/BraTS2021_01146_flair.nii.gz"


-- Download the json file from this [link](https://drive.google.com/file/d/1i-BXYe-wZ8R9Vp3GXoajGyqaJ65Jybg1/view?usp=sharing) and place it in the same folder as the dataset.
+- Download the json file from this [link](https://developer.download.nvidia.com/assets/Clara/monai/tutorials/brats21_folds.json) and place it in the same folder as the dataset.


The sub-regions considered for evaluation in BraTS 21 challenge are the "enhancing tumor" (ET), the "tumor core" (TC), and the "whole tumor" (WT). The ET is described by areas that show hyper-intensity in T1Gd when compared to T1, but also when compared to “healthy” white matter in T1Gd. The TC describes the bulk of the tumor, which is what is typically resected. The TC entails the ET, as well as the necrotic (NCR) parts of the tumor. The appearance of NCR is typically hypo-intense in T1-Gd when compared to T1. The WT describes the complete extent of the disease, as it entails the TC and the peritumoral edematous/invaded tissue (ED), which is typically depicted by hyper-intense signal in FLAIR [[BraTS 21]](http://braintumorsegmentation.org/).
@@ -41,7 +41,7 @@ Figure from [Baid et al.](https://arxiv.org/pdf/2107.02314v1.pdf) [3]

# Models
We provide Swin UNETR models which are pre-trained on the BraTS21 dataset as in the following. The folds
-correspond to the data split in the [json file](https://drive.google.com/file/d/1i-BXYe-wZ8R9Vp3GXoajGyqaJ65Jybg1/view?usp=sharing).
+correspond to the data split in the [json file](https://developer.download.nvidia.com/assets/Clara/monai/tutorials/brats21_folds.json).
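For reference, a minimal sketch of reading that split; it assumes the json uses the Decathlon-style "training" list with a per-entry "fold" key and a list of modality paths under "image" (as in brats21_folds.json), and `load_fold` is a hypothetical helper rather than part of this repository:

```python
import json
import os


def load_fold(datalist_path, fold, base_dir):
    """Split a Decathlon-style datalist into training/validation by its "fold" key."""
    with open(datalist_path) as f:
        entries = json.load(f)["training"]
    for item in entries:
        # Resolve the relative paths stored in the json against the dataset root.
        item["image"] = [os.path.join(base_dir, p) for p in item["image"]]
        item["label"] = os.path.join(base_dir, item["label"])
    train = [e for e in entries if e.get("fold") != fold]
    val = [e for e in entries if e.get("fold") == fold]
    return train, val


train_files, val_files = load_fold("brats21_folds.json", fold=0, base_dir=".")
```

Each returned list holds the usual `{"image": [...], "label": ...}` dicts that MONAI's dataset classes consume.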

<table>
<tr>
SwinUNETR/BTCV/README.md (1 addition & 1 deletion)

@@ -75,7 +75,7 @@ The training data is from the [BTCV challenge dataset](https://www.synapse.org/#

Please download the json file from this link.

-We provide the json file that is used to train our models in the following <a href="https://drive.google.com/file/d/1t4fIQQkONv7ArTSZe4Nucwkk1KfdUDvW/view?usp=sharing"> link</a>.
+We provide the json file that is used to train our models in the following <a href="https://developer.download.nvidia.com/assets/Clara/monai/tutorials/swin_unetr_btcv_dataset_0.json"> link</a>.

Once the json file is downloaded, please place it in the same folder as the dataset. Note that you need to provide the location of your dataset directory by using ```--data_dir```.
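As a quick check that the datalist and ```--data_dir``` resolve correctly, a minimal sketch using MONAI's datalist loader; the `data_dir` value is a placeholder, and the sketch assumes the json follows the standard Decathlon datalist layout:

```python
from monai.data import load_decathlon_datalist

data_dir = "/path/to/btcv"  # the directory passed via --data_dir
datalist = load_decathlon_datalist(
    f"{data_dir}/swin_unetr_btcv_dataset_0.json",
    is_segmentation=True,
    data_list_key="training",
    base_dir=data_dir,  # resolves the relative image/label paths in the json
)
print(len(datalist), datalist[0]["image"])
```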

UNETR/BTCV/README.md (3 additions & 3 deletions)

@@ -62,7 +62,7 @@ We provide state-of-the-art pre-trained checkpoints and TorchScript models of UN

For using the pre-trained checkpoint, please download the weights from the following link:

-https://drive.google.com/file/d/1kR5QuRAuooYcTNLMnMj80Z9IgSs8jtLO/view?usp=sharing
+https://developer.download.nvidia.com/assets/Clara/monai/research/UNETR_model_best_acc.pth

Once downloaded, please place the checkpoint in the following directory or use ```--pretrained_dir``` to provide the path to the directory where the checkpoint is stored:
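For a quick smoke test after the weights are in place, a minimal sketch of restoring them into MONAI's UNETR; the hyper-parameters and the `./pretrained_models` location are assumptions based on the BTCV setup, so check main.py for the exact values:

```python
import torch
from monai.networks.nets import UNETR

# Hyper-parameters assumed from the BTCV setup (single-channel CT, 14 classes,
# 96x96x96 patches); check main.py for the exact values used in training.
model = UNETR(
    in_channels=1,
    out_channels=14,
    img_size=(96, 96, 96),
    feature_size=16,
    hidden_size=768,
    mlp_dim=3072,
    num_heads=12,
)
state = torch.load("./pretrained_models/UNETR_model_best_acc.pth", map_location="cpu")
# The "state_dict" key is an assumption; some checkpoints store the weights directly.
model.load_state_dict(state.get("state_dict", state))
model.eval()
```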

@@ -86,7 +86,7 @@ python main.py

For using the pre-trained TorchScript model, please download the model from the following link:

-https://drive.google.com/file/d/1_YbUE0abQFJUR4Luwict6BB8S77yUaWN/view?usp=sharing
+https://developer.download.nvidia.com/assets/Clara/monai/research/UNETR_model_best_acc.pt

Once downloaded, please place the TorchScript model in the following directory or use ```--pretrained_dir``` to provide the path to the directory where the model is stored:
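For a quick smoke test of the TorchScript model, a minimal sketch; the file location and the single-channel 96x96x96 input shape are assumptions based on the BTCV setup:

```python
import torch

model = torch.jit.load("./pretrained_models/UNETR_model_best_acc.pt", map_location="cpu")
model.eval()
with torch.no_grad():
    # One single-channel 96x96x96 patch: (batch, channel, H, W, D).
    logits = model(torch.randn(1, 1, 96, 96, 96))
print(logits.shape)  # expect 14 output channels, one per BTCV class
```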

@@ -155,7 +155,7 @@ Under Institutional Review Board (IRB) supervision, 50 abdomen CT scans were

We provide the json file that is used to train our models in the following link:

-https://drive.google.com/file/d/1t4fIQQkONv7ArTSZe4Nucwkk1KfdUDvW/view?usp=sharing
+https://developer.download.nvidia.com/assets/Clara/monai/tutorials/swin_unetr_btcv_dataset_0.json

Once the json file is downloaded, please place it in the same folder as the dataset.

coplenet-pneumonia-lesion-segmentation/README.md (1 addition & 1 deletion)

@@ -27,7 +27,7 @@ pip install "monai[nibabel]==0.2.0"
```
The rest of the steps assume that this repo is cloned to your local file system and the current directory is the folder of this README file.
- download the input examples from [google drive folder](https://drive.google.com/drive/folders/1pIoSSc4Iq8R9_xXo0NzaOhIHZ3-PqqDC) to `./images`.
-- download the adapted pretrained model from [google drive folder](https://drive.google.com/drive/folders/1HXlYJGvTF3gNGOL0UFBeHVoA6Vh_GqEw) to `./model`.
+- download the adapted pretrained model from this [link](https://developer.download.nvidia.com/assets/Clara/monai/research/coplenet_pretrained_monai_dict.pt) to `./model`.
- run `python run_inference.py` and segmentation results will be saved at `./output`.
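For the model download step above, a minimal sketch using the standard library; the URL is the one linked above, and the local file name mirroring it is an assumption:

```python
import os
import urllib.request

# URL copied from the link above; the local file name mirrors it.
URL = "https://developer.download.nvidia.com/assets/Clara/monai/research/coplenet_pretrained_monai_dict.pt"

os.makedirs("./model", exist_ok=True)
urllib.request.urlretrieve(URL, "./model/coplenet_pretrained_monai_dict.pt")
```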

_(To segment COVID-19 pneumonia lesions from your own images, make sure that the images have been cropped into the lung region,
