This repository contains the code and hardware design files accompanying the paper Automated and Continuous Pollen Detection with an Affordable Technology. The data can be downloaded using the instructions provided below.
@inproceedings{namcao2020pollen,
  title     = {Automated Pollen Detection with an Affordable Technology},
  author    = {Nam Cao and Matthias Meyer and Lothar Thiele and Olga Saukh},
  booktitle = {Proceedings of the International Conference on Embedded Wireless Systems and Networks (EWSN)},
  pages     = {108--119},
  month     = {2},
  year      = {2020},
}
The ./hwdesign/ folder contains the design files for the automated pollen trap. For more information, please refer to the paper.
The paper's results can be reproduced by following the guide below.
To use the code, you need the following prerequisites:
- Python 3.7
- git
We recommend Anaconda or Miniconda, the latter being a minimal (but sufficient) version of the Anaconda distribution. The following instructions will be based on Miniconda. If you use another Python environment, the installation routine must be adapted.
Note: The code was only tested on Linux. Using Windows might lead to problems during the installation of packages. If you get it to run on Windows, please create a pull request with instructions and the required changes.
After installing Miniconda, open a terminal (on Windows, the Anaconda Prompt), clone the repository, and create a new environment:
git clone https://github.com/osaukh/pollenpub
cd pollenpub/code/
conda env create -f environment.yml
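Then activate the newly created environment. The environment name is defined in environment.yml; the name below is a placeholder, so substitute the actual name:
conda activate <environment-name>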
Note: The following commands are to be run from the code/ directory.
Download the pretrained weights and the data:
python utils/download.py -f weights.zip data.zip
Evaluate the provided pretrained weights on the test set:
python test.py --weights_path ../weights/pollen_20190526.pth --model_def config/yolov3-pollen.cfg --data_config config/test_20190523.data
Run pollen detection with the following command; the --image_folder argument points to a directory containing pollen images:
python detect.py --image_folder ../data/pollen_20190523/layers/ --output_folder ../tmp/output/
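For example, to run detection on your own images (the image directory below is a hypothetical placeholder; replace it with the path to your images):
python detect.py --image_folder /path/to/your/images/ --output_folder ../tmp/output/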
Training on the provided dataset can be done by issuing the following command:
python train.py --name fold0 --epochs=60 --model_def config/yolov3-pollen.cfg --data_config config/train_20190526fold0.data --pretrained_weights ../weights/darknet53.conv.74
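Other cross-validation folds can be trained in the same way by changing the --name and --data_config arguments. For example, assuming a config/train_20190526fold1.data file exists (the fold configuration files are produced by create_folds.py, described below):
python train.py --name fold1 --epochs=60 --model_def config/yolov3-pollen.cfg --data_config config/train_20190526fold1.data --pretrained_weights ../weights/darknet53.conv.74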
The create_folds.py script can be used to prepare the image folders for use with the training script. It creates one or multiple text files for training and testing; each file lists the names of the images belonging to the train or validation set. The script takes into account that each sample consists of multiple depth layers: all depth layers of a sample must end up either in the train set or in the validation set. This avoids information leakage between the train and validation sets (a minimal sketch of this grouping idea is given after the commands below).
Note: The following commands only need to be executed if you want to use a new labeled image folder for training. The files produced by these commands are already provided in ./config/.
After preparing the data, you can run the training procedure as described above, but with an updated --data_config parameter.
python create_folds.py -f ../data/pollen_20190526/ -o ../data/pollen_20190526/ -K 5 -n train_20190526
python create_folds.py -f ../data/pollen_20190523/ -o ../data/pollen_20190523/ -n test_20190523
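The key property of the split, keeping all depth layers of one sample together, can be sketched as follows. This is a minimal illustration of the idea, not the actual create_folds.py implementation, and it assumes that the layers of a sample share a common filename prefix (e.g. sample012_layer03.jpg):

import random
from collections import defaultdict
from pathlib import Path

def split_by_sample(image_dir, val_fraction=0.2, seed=0):
    # Group image files by sample ID; the "_layer" naming convention is an assumption.
    groups = defaultdict(list)
    for img in sorted(Path(image_dir).glob("*.jpg")):
        sample_id = img.stem.split("_layer")[0]
        groups[sample_id].append(img.name)
    # Shuffle whole samples, then assign each sample (with all of its layers)
    # to either the validation set or the training set.
    sample_ids = sorted(groups)
    random.Random(seed).shuffle(sample_ids)
    n_val = max(1, int(len(sample_ids) * val_fraction))
    val_ids = set(sample_ids[:n_val])
    train = [name for sid in sample_ids if sid not in val_ids for name in groups[sid]]
    val = [name for sid in val_ids for name in groups[sid]]
    return train, val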
Thanks to Erik Linder-Norén, who open-sourced the YOLOv3 implementation on which this code is based.
YOLOv3: An Incremental Improvement, by Joseph Redmon and Ali Farhadi.
[Paper] [Project Webpage] [Authors' Implementation]
@article{yolov3,
  title   = {YOLOv3: An Incremental Improvement},
  author  = {Redmon, Joseph and Farhadi, Ali},
  journal = {arXiv},
  year    = {2018},
}