The RadImageNet dataset is available by request at https://www.radimagenet.com/.
This code processes the original RadImageNet release and converts it into a refined, stratified organization.
You can find the preprint here: Policy Gradient-Driven Noise Mask
If you use this code in your research, please cite our paper:
@article{yavuz2024policy,
title={Policy Gradient-Driven Noise Mask},
author={Yavuz, Mehmet Can and Yang, Yang},
year={2024},
eprint={2406.14568},
archivePrefix={arXiv},
primaryClass={eess.IV}
}
This table compares the performance of ResNet models pretrained on 2D RadImageNet using regular and Two2Three convolution techniques across several metrics.
| Model | Precision (macro) | Recall (macro) | F1 (macro) | Accuracy (balanced) | Accuracy (average) |
|---|---|---|---|---|---|
| Resnet10t | 0.4720 | 0.3848 | 0.3998 | 0.3848 | 0.7981 |
| Resnet18 | 0.5150 | 0.4383 | 0.4545 | 0.4383 | 0.8177 |
| Resnet50 | 0.5563 | 0.4934 | 0.5097 | 0.4934 | 0.8352 |
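The repository's own evaluation lives in measure_acc_metrics.py; as an illustrative sketch only, metrics of these types can be computed with scikit-learn (not listed in the requirements below), assuming integer class labels and treating "Accuracy (average)" as overall accuracy:

```python
import numpy as np
from sklearn.metrics import (accuracy_score, balanced_accuracy_score,
                             f1_score, precision_score, recall_score)

# Toy labels and predictions; replace with real model outputs and ground truth.
y_true = np.array([0, 1, 2, 2, 1, 0])
y_pred = np.array([0, 1, 1, 2, 1, 0])

print("Precision (macro):  ", precision_score(y_true, y_pred, average="macro"))
print("Recall (macro):     ", recall_score(y_true, y_pred, average="macro"))
print("F1 (macro):         ", f1_score(y_true, y_pred, average="macro"))
print("Accuracy (balanced):", balanced_accuracy_score(y_true, y_pred))
print("Accuracy (average): ", accuracy_score(y_true, y_pred))
```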
We highly recommend adapting the torchvision reference classification code for benchmarking other models:
https://github.com/pytorch/vision/tree/main/references/classification
The model weights are shared through https://huggingface.co/ogrenenmakine/RadImagenet. The trained models are timm implementations:
import timm
model = timm.create_model('resnet10t', num_classes=165)
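A minimal sketch of loading one of the shared checkpoints (the checkpoint filename below is hypothetical; check the Hugging Face repository for the actual file names):

```python
import timm
import torch
from huggingface_hub import hf_hub_download

# Download a checkpoint from the Hugging Face repository.
# "resnet10t.pth" is a hypothetical filename; use the file actually listed in the repo.
ckpt_path = hf_hub_download(repo_id="ogrenenmakine/RadImagenet", filename="resnet10t.pth")

# Build the timm model (165 RadImageNet classes) and load the weights.
# Depending on how the checkpoint was saved, the weights may be nested under a
# key such as "state_dict".
model = timm.create_model('resnet10t', num_classes=165)
state_dict = torch.load(ckpt_path, map_location="cpu")
model.load_state_dict(state_dict)
model.eval()
```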
correction_masks/
data/
weights/
output/
source/
correction_masks.tar.gz
radimagenet.tar.gz
RadiologyAI_test.csv
RadiologyAI_train.csv
RadiologyAI_val.csv
process.py
measure_acc_metrics.py
- correction_masks/: Contains correction masks for the images.
- data/: Contains the extracted radiology images.
- weights/: Contains folders with the pretrained model weights.
- output/: Directory for output files.
- source/: Contains source files and datasets.
- correction_masks.tar.gz: Compressed archive containing the correction masks.
- radimagenet.tar.gz: The original compressed RadImageNet archive.
- RadiologyAI_test.csv: CSV file for test dataset.
- RadiologyAI_train.csv: CSV file for training dataset.
- RadiologyAI_val.csv: CSV file for validation dataset.
- process.py: Main script to process and organize the RadImagenet files.
- measure_acc_metrics.py: The script to measure accuracy metrics.
This repository pulls files from the Hugging Face repository ogrenenmakine/Refined-RadImagenet. Follow the instructions below to clone it using Git.
Make sure Git LFS is installed on your system, then initialize it with the following command:
git lfs install
To clone the entire repository to your local machine, use the following command:
git clone https://huggingface.co/ogrenenmakine/Refined-RadImagenet source/
This command clones all files from the repository into a directory named source/.
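Alternatively, if you prefer the Hugging Face Hub Python client over Git, a minimal sketch (this assumes the huggingface_hub package is installed and the repository is accessible to you; it is not part of the original instructions):

```python
from huggingface_hub import snapshot_download

# Download the full repository contents into the source/ directory.
# If the repo is published as a dataset rather than a model, pass repo_type="dataset".
snapshot_download(repo_id="ogrenenmakine/Refined-RadImagenet", local_dir="source")
```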
- Make sure you have sufficient storage space for large files.
- For more information about this dataset, visit the Hugging Face page.
Feel free to contribute or raise issues if you encounter any problems.
- Extract the Dataset: ensure the dataset tar file is located in source/, then run:
python process.py
The script will automatically extract the images to data/.
- Process the Images: The script reads the CSV files, refines the images, and organizes them accordingly (a rough sketch of this flow is shown below).
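The actual logic lives in process.py; the following is only an illustrative sketch of the flow described above, with hypothetical CSV column names ("filepath", "label") and output layout:

```python
import os
import shutil
import tarfile

import pandas as pd

# Extract the compressed RadImageNet archive into data/ (paths follow this repository's layout).
with tarfile.open("source/radimagenet.tar.gz", "r:gz") as tar:
    tar.extractall("data/")

# Organize images into per-split, per-class folders based on the stratified CSV files.
# The column names "filepath" and "label" are hypothetical; check the CSVs for the real ones.
for split in ["train", "val", "test"]:
    df = pd.read_csv(f"source/RadiologyAI_{split}.csv")
    for _, row in df.iterrows():
        dst_dir = os.path.join("output", split, str(row["label"]))
        os.makedirs(dst_dir, exist_ok=True)
        shutil.copy(os.path.join("data", row["filepath"]), dst_dir)
```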
- Python 3.9+
- pandas
- OpenCV
- tarfile (Python standard library)
- tqdm
- numpy
Install the required packages using pip:
pip install pandas opencv-python tqdm numpy
This project is licensed under the MIT License.