Run `train_prune.py`, calling something similar to:

```
python train_prune.py -c config_lp_seg_mobilenet_prune.json 2>&1 | tee pruned_models/logs.txt
```

The following steps are performed:

1. Load pretrained weights (based on `config['train']['pretrained_weights']` from the json file).
2. Create a mask (any weights that are initially 0 are counted as pruned from a previous iteration). This mask determines which weights to prune this iteration.
3. Recompile the YOLOv2 network with the loaded weights.
4. Train for `config['train']['nb_epochs']` epochs (no pruning, just training). The YOLOv2 network is trained with a custom Adam optimizer that takes the weight mask into account, 'freezing' the pruned weights in the mask (see the sketch after this list).
5. Perform pruning on the network using the `prune_layers` function. Any new weights that are zero after this step are added to the mask.
6. Save the new weights to a file.
7. Load the new weights.
8. Repeat steps 3 through 7 `config['train']['train_times']` times.
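The repository's mask-aware Adam optimizer and `prune_layers` function are not reproduced here; the NumPy sketch below only illustrates the general idea, assuming magnitude-based pruning. The names `build_mask`, `masked_adam_step`, and `prune_layer`, and the `fraction` parameter, are illustrative, not the repo's actual API.

```python
import numpy as np

def build_mask(weights):
    # Weights that are exactly zero are treated as pruned in a
    # previous iteration: mask is 0 there, 1 elsewhere.
    return (weights != 0).astype(np.float32)

def masked_adam_step(w, grad, mask, m, v, t,
                     lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # One Adam update with the gradient multiplied by the mask,
    # so pruned weights receive no update. t is the 1-based step count.
    grad = grad * mask
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w * mask, m, v   # re-apply the mask so pruned weights stay 0

def prune_layer(weights, mask, fraction=0.2):
    # Magnitude pruning: zero out the smallest `fraction` of the
    # still-unpruned weights, then fold the new zeros into the mask.
    alive = np.abs(weights[mask == 1])
    if alive.size == 0:
        return weights, mask
    threshold = np.quantile(alive, fraction)
    new_mask = mask * (np.abs(weights) > threshold).astype(np.float32)
    return weights * new_mask, new_mask
```

Masking both the gradient and the updated weights keeps pruned entries at exactly zero across iterations, which is what lets step 2 rebuild the mask from a saved weight file.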
Run `train_quantize.py`, calling something similar to:

```
python train_quantize.py -c config_lp_seg_mobilenet_quant.json 2>&1 | tee quant_models/logs.txt
```

The following steps are performed:

1. Load pretrained weights (based on `config['train']['pretrained_weights']` from the json file).
2. Create a mask (any weights that are initially 0 are counted as pruned from a previous iteration). Quantization does not use a mask, but one is loaded in case the network was pruned previously.
3. Recompile the YOLOv2 network with the loaded weights.
4. Train for `config['train']['nb_epochs']` epochs (no quantization, just training). The YOLOv2 network is trained with a custom Adam optimizer that takes the weight mask into account, 'freezing' the pruned weights in the mask.
5. Perform quantization on the network using the `quantize_layers` function (a sketch follows this list).
6. Save the new weights to a file.
7. Load the new quantized weights.
8. Repeat steps 3 through 7 `config['train']['train_times']` times.
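The `quantize_layers` function itself is not shown here; as a rough illustration, the sketch below performs uniform quantization of a weight array to `2**n_bits` levels. The `quantize_layer` name and the `n_bits` parameter are assumptions for this example; the repo's function may use a different scheme (e.g. k-means weight sharing).

```python
import numpy as np

def quantize_layer(weights, n_bits=8):
    # Snap each weight to the nearest of 2**n_bits evenly spaced
    # levels spanning [min, max] of the layer's weights.
    w_min, w_max = weights.min(), weights.max()
    if w_max == w_min:
        return weights.copy()
    levels = 2 ** n_bits - 1
    scale = (w_max - w_min) / levels
    codes = np.round((weights - w_min) / scale)  # integer codes 0..levels
    return codes * scale + w_min                 # dequantized weights
```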
- The dataset is generated by running `import_data.ipynb`. The LP datasets must be extracted into the `dataset` folder in order for `import_data.ipynb` to read the files. The script will generate the `saved datasets` directory, containing the imported LP data (see the layout sketch below).
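For reference, the folder layout described above would look roughly like this (the inner contents are placeholders, not actual file names):

```
dataset/           <- extract the LP datasets here
    ...
saved datasets/    <- generated by import_data.ipynb
    ...
```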
- Place the inception pretrained weights in the root folder: https://1drv.ms/f/s!ApLdDEW3ut5fec2OzK4S4RpT-SU
- Convert the provided dataset by running `convert_dataset.py`. This script will place all LP datasets into one directory and update their xml `<filename>` references accordingly (a sketch of that fix-up follows below).
- Generate anchors using the following command, then copy these anchors into the config file:

```
python gen_anchors.py -c config_lp_seg.json # should generate: [1.05,0.87, 1.99,1.46, 2.69,2.30, 2.78,1.82, 3.77,2.83]
```

- The bounding boxes are derived from the given dataset using the process found here: https://gurus.pyimagesearch.com/lesson-sample-segmenting-characters-from-license-plates/?fbclid=IwAR1djTQcAUV8Gyi6Oh-7PI-10bYdcFz0_EMmiE5ORpk6H2NVVXVkZ6RaANY
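The xml `<filename>` fix-up that `convert_dataset.py` performs can be done with ElementTree; the helper below is a minimal sketch (the `update_filename` name is hypothetical, not the script's actual code):

```python
import xml.etree.ElementTree as ET

def update_filename(xml_path, new_filename):
    # Rewrite the <filename> element of a VOC-style annotation so it
    # points at the image's location in the merged dataset directory.
    tree = ET.parse(xml_path)
    tree.getroot().find("filename").text = new_filename
    tree.write(xml_path)
```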
- Had to install the following due to updated package versions:

```
pip install -U git+https://github.com/apple/coremltools.git # required for using a newer keras version
```

- Had to install the following due to Windows OS:

```
pip install installation/Shapely-1.6.4.post1-cp35-cp35m-win_amd64.whl # inside project directory
```
- To train:

```
python train.py -c config_lp_seg.json
```

  To test:

```
python predict.py -c config_lp_seg.json -w lp_seg_inception.h5 -i images\lp_seg\AC_3.jpg
python predict.py -c config_lp_seg.json -w lp_seg_inception.h5 -i images\lp_seg\LE_37.jpg
python predict.py -c config_lp_seg.json -w lp_seg_inception.h5 -i images\lp_seg\RP_32.jpg
```

- To train:

```
python train.py -c config_char_seg.json
```

  To test:

```
python predict.py -c config_char_seg.json -w lp_char_inception.h5 -i images\char_seg\0.jpg
```
- Had to perform the fixes to `frontend.py` according to this post.
- Run the `download_data.sh` script in terminal to download the VOC 2007 dataset.
- To train and test:

```
python train.py --weights YOLO_small.ckpt --gpu 0
python test.py --weights YOLO_small.ckpt --gpu 0
```
- For help debugging, view their README instructions.
- Train YOLOv2*
- Initial work with Inception v3 backend
- Get great resulting bounding boxes
- Create cropped license plate images from output of YOLOv2 network
- Profile the YOLOv2/Inceptionv3 network
- Profile training phase
- Profile prediction phase
- Create a VOC-style dataset from original "only lp" dataset using conventional methods**
- Create initial work (minimum number of converted plates)
- Convert all license plates in this dataset
- Add image augmentations to improve training
- Train YOLOv2***
- Initial work with Inception v3 backend
- Get great resulting bounding boxes
- Profile the YOLOv2/Inceptionv3 network
- Profile training phase
- Profile prediction phase
- Use second network (***) to read plates from the cropped images generated by first network (*)
- Add bounding boxes from second network into the original image
- Optimize the pipeline
- Make improvements to bottlenecks found from profiling both networks
- Investigate combining both networks from the pipeline into a single YOLO network which will perform both
- Quantize weights
- Channel pruning using LASSO
- Huffman encoding
- License plate segmentation and reading
- Implement system for image-to-text using RNN and CTC
- Compare this method to performance of conventional methods (**)
- Write the report
- Proofread