A cemented carbide insert wear state classification project using instance segmentation for predictive maintenance
Monitoring tools is a fundamental process in modern factories: by replacing worn tools with new ones, economic losses can be prevented. By using well-known architectures able to solve instance segmentation and classification tasks at the same time, it is possible to automate this process. However, manually labeling the actual state of each worn region is time-consuming and requires highly qualified human operators. In this paper, an existing model, called the baseline, has been implemented and its results analysed. Then, an attempt to lighten the labeling phase is proposed by building a two-stage pipeline, where the first stage localises and extracts wear regions by means of an instance segmentation architecture, while the second stage performs the actual tool state classification. In addition, to evaluate the feasibility of a real-time monitoring application, two different models running at different fps were tested as Stage 1 models. Results show that the proposed two-stage pipeline significantly outperforms the analysed baseline model, and that real-time applications are viable whenever speed is preferred over prediction accuracy.
Two different kinds of architectures, with some variants that use different mask annotations (in COCO format), are available:
- The baseline model performs instance segmentation, predicting masks together with their associated labels. The insert state is then predicted as the label assigned to the mask with the largest area.
- The two-stage pipeline, instead, uses the first stage to predict wear masks with a single possible class (WEAR); then, for each sample, the mask with the largest area is passed as input to the second stage, which performs the wear state classification. If no mask is predicted in the first stage, the sample is automatically classified as belonging to the OK class. More info can be found in the report; a sketch of this decision logic is shown after this list.
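For clarity, the pipeline's decision logic can be sketched as follows. This is an illustrative sketch only: the `stage1_model` and `stage2_model` objects and their `predict` methods are hypothetical stand-ins for the actual notebook code.

```python
import numpy as np

def classify_insert(stage1_model, stage2_model, image):
    """Illustrative sketch of the two-stage decision logic (hypothetical API)."""
    # Stage 1: instance segmentation with a single possible class (WEAR).
    masks = stage1_model.predict(image)  # assumed to return a list of binary HxW masks
    if len(masks) == 0:
        # No wear mask predicted: the sample is classified as OK.
        return "OK"
    # Select the wear mask with the largest area (pixel count).
    largest = max(masks, key=lambda m: int(np.sum(m)))
    # Stage 2: classify the wear state from the extracted wear region.
    wear_region = image * largest[..., None]  # zero out everything outside the mask
    return stage2_model.predict(wear_region)
```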
- `notebooks/` contains all runnable notebooks, divided into directories by test type and architecture used.
- `src/` contains model code, utilities and requirements for each architecture, plus utils for COCO annotation manipulation and splitting (see the sketch after this list).
- `dataset/` contains the dataset class and mask annotations in COCO format.
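As an illustration of what an annotation split utility might look like (the actual helpers live in `src/` and may differ), a minimal COCO train/val split can be written as:

```python
import json
import random

def split_coco(ann_path, train_frac=0.8, seed=0):
    """Minimal sketch of a COCO train/val split; not the repo's actual util."""
    with open(ann_path) as f:
        coco = json.load(f)
    images = list(coco["images"])
    random.Random(seed).shuffle(images)
    cut = int(len(images) * train_frac)
    splits = {}
    for name, imgs in (("train", images[:cut]), ("val", images[cut:])):
        ids = {img["id"] for img in imgs}
        splits[name] = {
            "images": imgs,
            # Keep only the annotations whose image survived the split.
            "annotations": [a for a in coco["annotations"] if a["image_id"] in ids],
            "categories": coco["categories"],
        }
    return splits
```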
Different architectures and tests are ready to be run using the appropriate notebooks. The code was written to run on Google Colaboratory, relying on Google Drive to retrieve and save important intermediate outputs.
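The Drive integration is the standard Colab one; a typical mounting cell looks like the following (the output folder below is a hypothetical example, not the repo's actual path):

```python
# Standard Colab cell: mount Google Drive at the default location.
from google.colab import drive

drive.mount('/content/drive')

# Intermediate outputs can then be saved under a Drive folder, e.g.:
OUTPUT_DIR = '/content/drive/MyDrive/tool-wear-outputs'  # hypothetical path
```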
To set up the environment, the prerequisites for the chosen model can be found in its requirements file (./src/models/*/colab-requirements.txt) and installed with:

```sh
pip install -r colab-requirements.txt
```
In any case, this step is already handled by a dedicated cell in each notebook.
The dataset is private, as it was provided by manufacturing companies for research purposes.
Notebooks containing all tests can be easily uploaded and run on Google Colab. Baseline notebooks contain a Mask R-CNN architecture, while the proposed pipeline is composed of Stage_1 and Stage_2 files, one for each component. Stage_1 is available for instance segmentation using either the Mask R-CNN or the Yolact architecture. Some generalization tests have been conducted in the Stage_1_transfer_learning and Stage_2_transfer_learning notebooks.
This work was developed by Claudio Tancredi, Francesca Russo, Matteo Quarta, and Alessandro Versace.