Your images do not have to be glasses; they can be anything, such as cars, keys, or fruit. Just make sure you have enough data for training (100-200 images for each class of objects).
I divided the training code into three Colab Notebooks to make the steps easier to follow:
- get_model.ipynb: Run this notebook to download the SSD MobileNet V2 COCO model (you will need its model.ckpt and pipeline.config files for training)
- custom_train.ipynb: Run this notebook to train the object detection model. Train for as many steps as you wish, then download the latest model.ckpt files. You will need the following inputs:
- model.ckpt (not necessarily model.ckpt-0; it can be model.ckpt-50000 and so on. Just re-upload the latest model.ckpt files to resume training)
- pipeline.config
- two TFRecord files (one containing labeled training images, the other containing labeled test images)
- label_map.pbtxt
- export_tflite.ipynb: Run this notebook to convert the model.ckpt and pipeline.config files into TFLite format (you will need the resulting model.tflite file for the app)
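For reference, label_map.pbtxt is a plain-text protobuf that maps class IDs (starting at 1) to class names. For a single "glasses" class it would look like:

```
item {
  id: 1
  name: 'glasses'
}
```

Add one `item` entry per class, matching the labels used in your TFRecord files.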
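As a rough sketch of what the get_model step does, the pretrained checkpoint can be fetched from the TensorFlow detection model zoo. The exact release string below is an assumption; check the model zoo for the current SSD MobileNet V2 COCO archive name:

```python
import tarfile
import urllib.request

# Release string is an assumption -- check the TF1 detection model zoo
# for the current SSD MobileNet V2 COCO archive name.
MODEL = "ssd_mobilenet_v2_coco_2018_03_29"
URL = "http://download.tensorflow.org/models/object_detection/" + MODEL + ".tar.gz"

def fetch_model():
    """Download and unpack the pretrained model.ckpt.* and pipeline.config."""
    archive = MODEL + ".tar.gz"
    urllib.request.urlretrieve(URL, archive)
    with tarfile.open(archive) as tar:
        tar.extractall()  # extracted files land in a directory named after MODEL
```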
Follow the steps in the Object Detection Android Demo to create the app. Create the app bundle and publish the app on Google Play.
If you get stuck, check out this tutorial or Tony607's tutorial.
Most of the training code came from Tony607 under the MIT License, and I cleaned up some of it. I made the Android app using the Object Detection Android Demo under the Apache 2.0 License.