Yuki Kondo, Norimichi Ukita
Toyota Technological Institute (TTI-J)
in MVA2021 (Oral Presentation, Best Practical Paper Award)
Paper | Data | Project Page
🚀 CSBSR [IEEE TIM'24], an advanced version of CSSR, has been released! 🚀
- May 1, 2024 -> CSBSR [Y. Kondo and N. Ukita, IEEE TIM'24], an advanced version of CSSR, has been released! Click here for details!
- July 27, 2021 -> We received the Best Practical Paper Award 🏆 at MVA 2021!
We propose a method for high-resolution crack segmentation on low-resolution images. It enables automatic crack detection even when the crack region is captured at reduced resolution because the scene must be photographed from a distance (e.g., a drone inspecting a high-altitude chimney wall must keep its distance in order to fly safely). The proposed method consists of the following two approaches.
- Deep-learning-based super-resolution increases the resolution of low-resolution input images, and the super-resolved image enables fine-grained crack segmentation. On top of this, we propose CSSR (Crack Segmentation with Super Resolution), which uses end-to-end joint learning to optimize the super-resolution process for crack segmentation.
- To optimize the segmentation model, we propose a loss function, Boundary Combo loss, that simultaneously optimizes the global and local structures of cracks. This loss enables both the detection of thin, hard-to-detect cracks and the precise delineation of crack boundaries.
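The joint-learning idea above can be sketched as follows. This is a toy, hedged illustration (the function names, shapes, and stand-in operations are hypothetical, not the paper's actual networks): a super-resolution stage feeds a segmentation stage, so a loss defined on the final segmentation can, under end-to-end training, optimize the SR stage *for* segmentation rather than for pixel fidelity alone.

```python
import numpy as np

def super_resolve(lr_img, scale=4):
    """Stand-in for the SR network: nearest-neighbor upsampling."""
    return np.kron(lr_img, np.ones((scale, scale)))

def segment(hr_img, threshold=0.5):
    """Stand-in for the segmentation network: per-pixel sigmoid score."""
    return 1.0 / (1.0 + np.exp(-(hr_img - threshold)))

lr = np.random.rand(16, 16)        # low-resolution input patch
hr = super_resolve(lr)             # (64, 64) super-resolved image
crack_prob = segment(hr)           # (64, 64) per-pixel crack probability
```

In the actual method both stages are differentiable networks, so the segmentation loss backpropagates through the segmentation network into the SR network.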
Experimental results show that the proposed method outperforms conventional methods and, both quantitatively*1 and qualitatively, achieves segmentation nearly as precise as with high-resolution image inputs.
*1: In terms of IoU, the proposed method achieves 97.3% of the IoU obtained with high-resolution image inputs.
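The Boundary Combo loss described above can be sketched as below. This is a simplified, hedged illustration rather than the paper's exact formulation: a Combo-style loss sums a Dice term (global crack structure) and a cross-entropy term (per-pixel accuracy), with the cross-entropy re-weighted near ground-truth crack boundaries to emphasize local structure. The weight value and boundary test are illustrative assumptions.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-7):
    """Dice loss: penalizes poor global overlap with the GT mask."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def boundary_weights(target, w_boundary=5.0):
    """Weight map: pixels whose 4-neighborhood differs from them
    (boundary pixels of the GT mask) get weight w_boundary, else 1."""
    pad = np.pad(target, 1, mode="edge")
    neighbors = np.stack([pad[:-2, 1:-1], pad[2:, 1:-1],
                          pad[1:-1, :-2], pad[1:-1, 2:]])
    is_boundary = np.any(neighbors != target, axis=0)
    return np.where(is_boundary, w_boundary, 1.0)

def boundary_combo_loss(pred, target, alpha=0.5, eps=1e-7):
    """alpha blends boundary-weighted cross-entropy with Dice loss."""
    pred = np.clip(pred, eps, 1.0 - eps)
    bce = -(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    weighted_bce = np.mean(boundary_weights(target) * bce)
    return alpha * weighted_bce + (1 - alpha) * dice_loss(pred, target)

target = np.zeros((8, 8))
target[3:5, 1:7] = 1.0                    # toy thin-crack GT mask
pred = np.clip(target, 0.05, 0.95)        # confident, mostly correct prediction
loss = boundary_combo_loss(pred, target)
```

The Dice term keeps thin cracks from being ignored under class imbalance, while the boundary-weighted cross-entropy term sharpens the predicted crack edges.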
- Python >= 3.6
- PyTorch >= 1.8
- numpy >= 1.19
- Clone the repository:

```bash
git clone https://github.com/Yuki-11/CSSR.git
```
- Download the khanhha dataset:

```bash
cd $CSSR_ROOT
mkdir datasets
cd datasets
curl -sc /tmp/cookie "https://drive.google.com/uc?export=download&id=1xrOqv0-3uMHjZyEUrerOYiYXW_E8SUMP" > /dev/null
CODE="$(awk '/_warning_/ {print $NF}' /tmp/cookie)"
curl -Lb /tmp/cookie "https://drive.google.com/uc?export=download&confirm=${CODE}&id=1xrOqv0-3uMHjZyEUrerOYiYXW_E8SUMP" -o temp_dataset.zip
unzip temp_dataset.zip
rm temp_dataset.zip
```
- Download trained models:

```bash
cd $CSSR_ROOT
mkdir output
```

You can download trained models here. Then place the unzipped directory of the models you want to use under `$CSSR_ROOT/output/`.
- Install packages:

```bash
cd $CSSR_ROOT
pip install -r requirement.txt
```
- Training:

```bash
cd $CSSR_ROOT
python train.py --config_file <CONFIG FILE>
```

To resume training, run:

```bash
cd $CSSR_ROOT
python train.py --config_file output/<OUTPUT DIRECTORY (OUTPUT_DIR in config.yaml)>/config.yaml --resume_iter <SAVED ITERATION NUMBER>
```
- Test:

```bash
cd $CSSR_ROOT
python test.py output/<OUTPUT DIRECTORY (OUTPUT_DIR in config.yaml)> <ITERATION NUMBER>
```
If you find this work useful, please consider citing it:
@inproceedings{CSSR2021,
title={Crack Segmentation for Low-Resolution Images using Joint Learning with Super-Resolution},
author={Kondo, Yuki and Ukita, Norimichi},
booktitle={International Conference on Machine Vision Applications (MVA)},
year={2021}
}