Mask-RCNN using Core ML, Metal 2 and Accelerate.
Mask-RCNN is a general framework for object instance segmentation: for each detected object it predicts a class label, a bounding box, and a pixel-level segmentation mask.
Mask-RCNN is not fast, especially with the current ResNet101 + FPN backbone.
If you only need object detection, there are much faster models such as SSDLite and YOLO.
This model will only be useful if instance segmentation is valuable for your use case.
- Xcode 10.1
- iOS 12 or macOS 10.14 device with Metal support
- Swift 4.2
- (More detailed requirements coming soon)
- Docker
- (More detailed requirements coming soon)
- Check out or download this repository
- Open a shell and navigate (cd) to the root of the repository
- Download the pre-trained model files using the command:
$ swift run maskrcnn download example
- Open Example/iOS Example.xcodeproj
- Build and run on an iOS 12 device with Metal support
Coming soon. See Roadmap. Install Manually instead.
- Import all of the Swift files in the Sources/Mask-RCNN-CoreML/ directory
- If you have your own data to train or fine-tune a model, or if you have your own model weights, see Converting or training your own model. Otherwise, see Using COCO pre-trained model.
- Download the pre-trained model files from the releases page. (instructions for conversion coming soon)
- Make sure the files you add are associated with your app target
- Drag the four files into your Xcode project (anchors.bin, MaskRCNN.mlmodel, Mask.mlmodel, Classifier.mlmodel)
If you have pre-trained model weights, or if you have data you want to train the model with, follow the instructions in this section.
At the moment, only models trained using Matterport's Mask-RCNN implementation are supported. If your model is trained differently, you may be able to get it to work by renaming your weights following Matterport's naming structure and exporting your model to the Keras HDF5 format.
You should also specify configuration options in a JSON file.
- architecture : The backbone architecture your model is trained with. "resnet101" or "resnet50". Defaults to "resnet101".
- input_image_shape : The shape of the input image as a list of numbers. Defaults to [1024,1024,3].
- num_classes : The number of classes, including the background class. Defaults to 81.
- pre_nms_max_proposals : The number of proposed regions to evaluate using non-maximum suppression (NMS). Only the top pre_nms_max_proposals proposals by score will be evaluated. Defaults to 6000.
- max_proposals : The number of proposals to classify. Only the top proposals by score, after NMS, will be evaluated. Defaults to 1000.
- More options to come
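For reference, a config.json that simply restates the defaults listed above would look like this:

```json
{
  "architecture": "resnet101",
  "input_image_shape": [1024, 1024, 3],
  "num_classes": 81,
  "pre_nms_max_proposals": 6000,
  "max_proposals": 1000
}
```

Omit any key you are happy to leave at its default value.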
To use the default directory structure, place your files as follows:
.maskrcnn/
  models/
    your_model_name/
      model/
        config.json
        weights.h5
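As a sketch, the input layout above can be created from the repository root with plain shell commands (the model name my_model and the source paths are placeholders; substitute your own):

```shell
# Create the expected input directory for a model named "my_model"
mkdir -p .maskrcnn/models/my_model/model

# Copy your configuration and Keras HDF5 weights into place
# (replace the source paths with your own files)
cp /path/to/config.json .maskrcnn/models/my_model/model/config.json
cp /path/to/weights.h5  .maskrcnn/models/my_model/model/weights.h5
```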
The products of the conversion will be placed as follows:
.maskrcnn/
  models/
    your_model_name/
      products/
        anchors.bin
        MaskRCNN.mlmodel
        Mask.mlmodel
        Classifier.mlmodel
Run:
$ swift run maskrcnn convert <your_model_name>
If you want to use input files located elsewhere, or to output the model to another directory, run:
$ swift run maskrcnn convert <your_model_name> --config=<path_to_config_file> --weights=<path_to_weights_file> --output_dir=<path_to_output_dir>
This is not supported at the moment, but it is the next item on my roadmap.
After conversion or training, you may want to evaluate the model's accuracy.
At the moment, only the COCO dataset can be used for evaluation.
To use the default directory structure, place your files as follows:
.maskrcnn/
  data/
    coco/
      the_coco_annotation_files.json
      type_year (ex: val2017)/
        the_images.jpg
  models/
    your_model_name/
      products/
        anchors.bin
        MaskRCNN.mlmodel
        Mask.mlmodel
        Classifier.mlmodel
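As a sketch, the evaluation layout for the COCO val2017 split could be prepared like this (my_model is a placeholder model name; the annotation and image source paths refer to your own COCO download):

```shell
# Create the directories for the COCO annotations and images (val2017 split)
mkdir -p .maskrcnn/data/coco/val2017
mkdir -p .maskrcnn/models/my_model/products

# Copy the COCO annotation files and images into place
# (replace the source paths with your COCO download)
cp /path/to/annotations/*.json .maskrcnn/data/coco/
cp /path/to/val2017/*.jpg .maskrcnn/data/coco/val2017/
# The conversion products (anchors.bin and the .mlmodel files)
# go into .maskrcnn/models/my_model/products/
```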
Run:
$ swift run maskrcnn eval coco <your_model_name> --year=<coco dataset year> --type=<coco dataset type, ex: val>
If you want to compare against the original TensorFlow model, place the model files as follows:
.maskrcnn/
  models/
    your_model_name/
      model/
        config.json
        weights.h5
Use the -c flag to run the comparison:
$ swift run maskrcnn eval coco <your_model_name> -c --year=<coco dataset year> --type=<coco dataset type, ex: val>
If your files are in custom locations, run:
$ swift run maskrcnn eval coco <your_model_name> -c --year=<coco dataset year> --type=<coco dataset type, ex: val> --config=<path_to_config_file> --weights=<path_to_weights_file> --products_dir=<path_to_products_dir>
- Training and fine-tuning support
- Cocoapods, Carthage, Swift Package manager and improved documentation
- Mobile-optimized backbone and other performance optimizations
- Easy training support
- Support for custom evaluation datasets
- Support for pose estimation
Édouard Lavery-Plante, [email protected]