Commit
Used for making bounding boxes around text

1 parent 329e219 · commit 1db8484 · 21 changed files with 911 additions and 0 deletions
@@ -0,0 +1,19 @@
Copyright (c) 2019-present NAVER Corp.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
@@ -0,0 +1,103 @@
## CRAFT: Character-Region Awareness For Text detection
Official PyTorch implementation of the CRAFT text detector | [Paper](https://arxiv.org/abs/1904.01941) | [Pretrained Model](https://drive.google.com/open?id=1Jk4eGD7crsqCCg9C9VjCLkMN3ze8kutZ) | [Supplementary](https://youtu.be/HI8MzpY8KMI)

**[Youngmin Baek](mailto:[email protected]), Bado Lee, Dongyoon Han, Sangdoo Yun, Hwalsuk Lee.**

Clova AI Research, NAVER Corp.

### Sample Results

### Overview
PyTorch implementation of the CRAFT text detector, which detects text areas by exploring each character region and the affinity between characters. Bounding boxes around text are obtained by thresholding the character-region and affinity scores and then fitting minimum bounding rectangles on the resulting binary map.

<img width="1000" alt="teaser" src="./figures/craft_example.gif">
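To make the box-extraction step above concrete, the snippet below is a rough sketch of the idea rather than the repository's actual post-processing (which is more elaborate): it thresholds the region/affinity score maps, groups pixels into connected components, and fits minimum-area rectangles with OpenCV. The threshold values and the minimum-blob cut-off are illustrative placeholders.

```
import cv2
import numpy as np

def boxes_from_score_maps(score_region, score_link,
                          region_thresh=0.7, link_thresh=0.4):
    """Sketch: threshold region/affinity maps and fit minimum bounding rectangles.

    score_region, score_link: float32 arrays in [0, 1] with the same shape.
    Thresholds here are illustrative, not the repository's tuned defaults.
    """
    # Combine character-region and affinity scores into one binary map.
    binary = (score_region > region_thresh) | (score_link > link_thresh)
    binary = binary.astype(np.uint8)

    # Group connected pixels into word candidates.
    num_labels, labels = cv2.connectedComponents(binary)

    boxes = []
    for label in range(1, num_labels):          # label 0 is background
        ys, xs = np.where(labels == label)
        if xs.size < 10:                        # drop tiny blobs (arbitrary cut-off)
            continue
        pts = np.stack([xs, ys], axis=1).astype(np.float32)
        rect = cv2.minAreaRect(pts)             # minimum bounding rectangle
        boxes.append(cv2.boxPoints(rect))       # 4 corner points per box
    return boxes
```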
## Updates
**13 Jun, 2019**: Initial update
**20 Jul, 2019**: Added post-processing for polygon results
**28 Sep, 2019**: Added the model trained on IC15 and the link refiner

## Getting started
### Install dependencies
#### Requirements
- PyTorch>=0.4.1
- torchvision>=0.2.1
- opencv-python>=3.4.2
- see requirements.txt
```
pip install -r requirements.txt
```
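For reference, a minimal requirements.txt consistent with the version constraints listed above would contain entries like the following; the repository's own file may pin additional packages:

```
torch>=0.4.1
torchvision>=0.2.1
opencv-python>=3.4.2
```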
### Training
The training code is not included in this repository, and we cannot release the full training code for IP reasons.

### Test instruction using pretrained model
- Download the trained models

| *Model name* | *Used datasets* | *Languages* | *Purpose* | *Model Link* |
| :--- | :--- | :--- | :--- | :--- |
| General | SynthText, IC13, IC17 | Eng + MLT | For general purpose | [Click](https://drive.google.com/open?id=1Jk4eGD7crsqCCg9C9VjCLkMN3ze8kutZ) |
| IC15 | SynthText, IC15 | Eng | For IC15 only | [Click](https://drive.google.com/open?id=1i2R7UIUqmkUtF0jv_3MXTqmQ_9wuAnLf) |
| LinkRefiner | CTW1500 | - | Used with the General model | [Click](https://drive.google.com/open?id=1XSaFwBkOaFOdtk4Ane3DFyJGPRw6v5bO) |

- Run with the pretrained model (tested with Python 3.7)
```
python test.py --trained_model=[weightfile] --test_folder=[folder path to test images]
```

The result images and score maps will be saved to `./result` by default.

### Arguments
* `--trained_model`: pretrained model
* `--text_threshold`: text confidence threshold
* `--low_text`: text low-bound score
* `--link_threshold`: link confidence threshold
* `--cuda`: use CUDA for inference (default: True)
* `--canvas_size`: max image size for inference
* `--mag_ratio`: image magnification ratio
* `--poly`: enable polygon-type results
* `--show_time`: show processing time
* `--test_folder`: folder path to input images
* `--refine`: use the link refiner for sentence-level datasets
* `--refiner_model`: pretrained refiner model
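For example, a run that combines several of these options might look like the following; the weight-file names are placeholders and the threshold values are illustrative rather than the script's documented defaults:

```
python test.py --trained_model=[weightfile] --test_folder=./test_images \
    --text_threshold=0.7 --link_threshold=0.4 --low_text=0.4 \
    --poly --refine --refiner_model=[refiner weightfile]
```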
## Links
- Web demo: https://demo.ocr.clova.ai/
- Text recognition repo: https://github.com/clovaai/deep-text-recognition-benchmark

## Citation
```
@inproceedings{baek2019character,
  title={Character Region Awareness for Text Detection},
  author={Baek, Youngmin and Lee, Bado and Han, Dongyoon and Yun, Sangdoo and Lee, Hwalsuk},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={9365--9374},
  year={2019}
}
```

## License
```
Copyright (c) 2019-present NAVER Corp.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
```
@@ -0,0 +1,73 @@
from collections import namedtuple

import torch
import torch.nn as nn
import torch.nn.init as init
from torchvision import models


def init_weights(modules):
    for m in modules:
        if isinstance(m, nn.Conv2d):
            init.xavier_uniform_(m.weight.data)
            if m.bias is not None:
                m.bias.data.zero_()
        elif isinstance(m, nn.BatchNorm2d):
            m.weight.data.fill_(1)
            m.bias.data.zero_()
        elif isinstance(m, nn.Linear):
            m.weight.data.normal_(0, 0.01)
            m.bias.data.zero_()


class vgg16_bn(torch.nn.Module):
    def __init__(self, pretrained=True, freeze=True):
        super(vgg16_bn, self).__init__()
        # Respect the `pretrained` flag; otherwise ImageNet weights are always downloaded.
        model = models.vgg16_bn(weights='DEFAULT' if pretrained else None)
        vgg_pretrained_features = model.features
        self.slice1 = torch.nn.Sequential()
        self.slice2 = torch.nn.Sequential()
        self.slice3 = torch.nn.Sequential()
        self.slice4 = torch.nn.Sequential()
        self.slice5 = torch.nn.Sequential()
        for x in range(12):         # conv2_2
            self.slice1.add_module(str(x), vgg_pretrained_features[x])
        for x in range(12, 19):     # conv3_3
            self.slice2.add_module(str(x), vgg_pretrained_features[x])
        for x in range(19, 29):     # conv4_3
            self.slice3.add_module(str(x), vgg_pretrained_features[x])
        for x in range(29, 39):     # conv5_3
            self.slice4.add_module(str(x), vgg_pretrained_features[x])

        # fc6, fc7 without atrous conv
        self.slice5 = torch.nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(512, 1024, kernel_size=3, padding=6, dilation=6),
            nn.Conv2d(1024, 1024, kernel_size=1)
        )

        if not pretrained:
            init_weights(self.slice1.modules())
            init_weights(self.slice2.modules())
            init_weights(self.slice3.modules())
            init_weights(self.slice4.modules())

        init_weights(self.slice5.modules())  # no pretrained model for fc6 and fc7

        if freeze:
            for param in self.slice1.parameters():  # freeze only the first conv block
                param.requires_grad = False

    def forward(self, X):
        h = self.slice1(X)
        h_relu2_2 = h
        h = self.slice2(h)
        h_relu3_2 = h
        h = self.slice3(h)
        h_relu4_3 = h
        h = self.slice4(h)
        h_relu5_3 = h
        h = self.slice5(h)
        h_fc7 = h
        # Return the intermediate feature maps consumed by the U-shaped decoder in CRAFT.
        vgg_outputs = namedtuple("VggOutputs", ['fc7', 'relu5_3', 'relu4_3', 'relu3_2', 'relu2_2'])
        out = vgg_outputs(h_fc7, h_relu5_3, h_relu4_3, h_relu3_2, h_relu2_2)
        return out
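To make the role of each returned feature map concrete, here is a minimal shape-check sketch. It assumes the import path used by the CRAFT module below and a 768×768 input; the printed shapes follow from VGG's stride-2 poolings and should be read as expected values, not guaranteed output.

```
import torch
from CRAFTpytorchmaster.basenet.vgg16_bn import vgg16_bn  # import path as used in the CRAFT module below

net = vgg16_bn(pretrained=False, freeze=False).eval()
with torch.no_grad():
    feats = net(torch.randn(1, 3, 768, 768))

# Named-tuple fields, coarsest to finest resolution:
print(feats.fc7.shape)      # expected: [1, 1024, 48, 48]
print(feats.relu5_3.shape)  # expected: [1, 512, 48, 48]
print(feats.relu4_3.shape)  # expected: [1, 512, 96, 96]
print(feats.relu3_2.shape)  # expected: [1, 256, 192, 192]
print(feats.relu2_2.shape)  # expected: [1, 128, 384, 384]
```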
@@ -0,0 +1,85 @@
""" | ||
Copyright (c) 2019-present NAVER Corp. | ||
MIT License | ||
""" | ||
|
||
# -*- coding: utf-8 -*- | ||
import torch | ||
import torch.nn as nn | ||
import torch.nn.functional as F | ||
|
||
from CRAFTpytorchmaster.basenet.vgg16_bn import vgg16_bn, init_weights | ||
|
||
class double_conv(nn.Module): | ||
def __init__(self, in_ch, mid_ch, out_ch): | ||
super(double_conv, self).__init__() | ||
self.conv = nn.Sequential( | ||
nn.Conv2d(in_ch + mid_ch, mid_ch, kernel_size=1), | ||
nn.BatchNorm2d(mid_ch), | ||
nn.ReLU(inplace=True), | ||
nn.Conv2d(mid_ch, out_ch, kernel_size=3, padding=1), | ||
nn.BatchNorm2d(out_ch), | ||
nn.ReLU(inplace=True) | ||
) | ||
|
||
def forward(self, x): | ||
x = self.conv(x) | ||
return x | ||
|
||
|
||
class CRAFT(nn.Module): | ||
def __init__(self, pretrained=False, freeze=False): | ||
super(CRAFT, self).__init__() | ||
|
||
""" Base network """ | ||
self.basenet = vgg16_bn(pretrained, freeze) | ||
|
||
""" U network """ | ||
self.upconv1 = double_conv(1024, 512, 256) | ||
self.upconv2 = double_conv(512, 256, 128) | ||
self.upconv3 = double_conv(256, 128, 64) | ||
self.upconv4 = double_conv(128, 64, 32) | ||
|
||
num_class = 2 | ||
self.conv_cls = nn.Sequential( | ||
nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True), | ||
nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True), | ||
nn.Conv2d(32, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True), | ||
nn.Conv2d(16, 16, kernel_size=1), nn.ReLU(inplace=True), | ||
nn.Conv2d(16, num_class, kernel_size=1), | ||
) | ||
|
||
init_weights(self.upconv1.modules()) | ||
init_weights(self.upconv2.modules()) | ||
init_weights(self.upconv3.modules()) | ||
init_weights(self.upconv4.modules()) | ||
init_weights(self.conv_cls.modules()) | ||
|
||
def forward(self, x): | ||
""" Base network """ | ||
sources = self.basenet(x) | ||
|
||
""" U network """ | ||
y = torch.cat([sources[0], sources[1]], dim=1) | ||
y = self.upconv1(y) | ||
|
||
y = F.interpolate(y, size=sources[2].size()[2:], mode='bilinear', align_corners=False) | ||
y = torch.cat([y, sources[2]], dim=1) | ||
y = self.upconv2(y) | ||
|
||
y = F.interpolate(y, size=sources[3].size()[2:], mode='bilinear', align_corners=False) | ||
y = torch.cat([y, sources[3]], dim=1) | ||
y = self.upconv3(y) | ||
|
||
y = F.interpolate(y, size=sources[4].size()[2:], mode='bilinear', align_corners=False) | ||
y = torch.cat([y, sources[4]], dim=1) | ||
feature = self.upconv4(y) | ||
|
||
y = self.conv_cls(feature) | ||
|
||
return y.permute(0,2,3,1), feature | ||
|
||
if __name__ == '__main__': | ||
model = CRAFT(pretrained=True).cuda() | ||
output, _ = model(torch.randn(1, 3, 768, 768).cuda()) | ||
print(output.shape) |
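A common stumbling block when using the pretrained weights linked in the README is that checkpoints saved from an nn.DataParallel wrapper prefix every state-dict key with `module.`. The loading helper below is a hedged sketch and not part of this commit; the import path of the CRAFT class and the CUDA handling are assumptions.

```
from collections import OrderedDict

import torch

from CRAFTpytorchmaster.craft import CRAFT  # module path is an assumption


def load_craft(weight_path, use_cuda=False):
    """Sketch: load a CRAFT checkpoint, stripping a 'module.' key prefix if present."""
    net = CRAFT(pretrained=False)
    state = torch.load(weight_path, map_location='cuda' if use_cuda else 'cpu')
    cleaned = OrderedDict(
        (k[len('module.'):] if k.startswith('module.') else k, v)
        for k, v in state.items()
    )
    net.load_state_dict(cleaned)
    if use_cuda:
        net = net.cuda()
    return net.eval()
```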