This repository extends the DeepD3 project by adding panoptic segmentation capabilities for the detailed analysis and quantification of dendrites and dendritic spines. It utilizes Mask R-CNN for instance segmentation of dendritic spines and FCN-ResNet for semantic segmentation of dendrites.
Author
- 😀@sahilsharma
Report
- 📔Final Project Report
Panoptic-Segmentation-on-Dendrites-and-Dendritic-Spines/
├── assets/
├── notebooks/
├── report/
├── src/
│ ├── models/
│ │ ├── FCN_ResNet/
│ │ │ ├── checkpoint.py
│ │ │ ├── dataset.py
│ │ │ ├── inference.py
│ │ │ ├── main.py
│ │ │ ├── model.py
│ │ │ ├── train.py
│ │ │ ├── transforms.py
│ │ │ └── utils.py
│ │ ├── MaskRCNN/
│ │ │ ├── checkpoint.py
│ │ │ ├── dataset.py
│ │ │ ├── inference.py
│ │ │ ├── main.py
│ │ │ ├── model.py
│ │ │ ├── train.py
│ │ │ ├── transforms.py
│ │ │ └── utils.py
│ ├── panoptic_inference/
│ │ └── panoptic_inference.py
│ └── create_dataset.py
├── requirements.txt
└── README.md
Panoptic segmentation combines the strengths of semantic and instance segmentation by assigning both a semantic label and an instance ID to every pixel in the image. It assigns a unique label to each pixel, corresponding to either a “thing” (countable object instances like cars, people, or animals) or “stuff” (amorphous regions like grass, sky, or road).
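The merge described above can be sketched in a few lines: paint the semantic ("stuff") mask first, then let instance ("thing") masks override it. This is a minimal illustration, not the repository's actual implementation; the function name, label values, and the `instance_offset` scheme are assumptions.

```python
import numpy as np

def merge_panoptic(semantic_mask, instance_masks, instance_offset=1000):
    """Combine a binary dendrite mask ("stuff") with per-spine instance
    masks ("things") into a single panoptic label map.

    semantic_mask:  (H, W) bool array, True where dendrite.
    instance_masks: list of (H, W) bool arrays, one per spine.
    Returns (H, W) int array: 0 = background, 1 = dendrite,
    instance_offset + i = spine instance i.
    """
    panoptic = np.zeros(semantic_mask.shape, dtype=np.int32)
    panoptic[semantic_mask] = 1                    # dendrite ("stuff")
    for i, mask in enumerate(instance_masks):
        panoptic[mask] = instance_offset + i       # spines ("things") win ties
    return panoptic
```

Because the spine masks are written last, every pixel ends up with exactly one label, which is the defining property of a panoptic map.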
Follow the steps below to run panoptic inference with the pre-trained instance and semantic segmentation models on your dataset.
git clone https://github.com/sahil-sharma-50/Panoptic-Segmentation-on-Dendrites-and-Dendritic-Spines.git
- Instance Segmentation Model for Spines: MaskRCNN
- Semantic Segmentation Model for Dendrites: FCN_ResNet50
pip install -r requirements.txt
Arguments:
- --instance_model_path: Path to the instance model file (e.g., spines_model.pt).
- --semantic_model_path: Path to the semantic model file (e.g., dendrite_model.pt).
- --input_images_folder: Directory containing the input images (e.g., input_images).
- --output_folder: Directory where the output will be saved (e.g., output_folder).
python src/panoptic_inference/panoptic_inference.py --instance_model_path spines_model.pt --semantic_model_path dendrite_model.pt --input_images_folder input_images --output_folder output_folder
Download the DeepD3 dataset from Zenodo: DeepD3_Training_Validation_TIFF.zip
This script extracts the images from the downloaded zip archive of TIFF stacks and automatically converts the binary spine masks into instance masks.
Arguments:
- --zip_path: Path to the zip file (e.g., DeepD3_Training_Validation_TIFF.zip).
- --extract_path: Path where the zip file is extracted (e.g., DeepD3_Training_Validation_TIFF).
- --output_path: Directory where the output dataset will be saved (e.g., Dataset).
python src/create_dataset.py --zip_path DeepD3_Training_Validation_TIFF.zip --extract_path ./DeepD3_Training_Validation_TIFF --output_path ./Dataset
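Turning a binary spine mask into instance masks typically amounts to connected-component labelling: each isolated blob of foreground pixels becomes one instance. The sketch below uses scipy.ndimage.label to illustrate the idea; the function name is hypothetical and the actual create_dataset.py may differ in details.

```python
import numpy as np
from scipy import ndimage

def binary_to_instances(binary_mask):
    """Split a binary spine mask into per-instance boolean masks by
    connected-component labelling (4-connectivity by default)."""
    labels, n = ndimage.label(binary_mask)
    return [labels == i for i in range(1, n + 1)]
```

Each returned mask can then be saved as a separate channel or encoded with a distinct pixel value, which is the format Mask R-CNN training expects.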
Note:
Make sure the output directory (e.g., Dataset) is created in the repository root, next to src/, so the training scripts can find it:
├── Dataset/
│ ├── DeepD3_Training/
│ │ ├── dendrite_images/
│ │ │ ├── dendrite_0.png
│ │ │ ├── ...
│ │ │ ├── ...
│ │ ├── input_images/
│ │ │ ├── image_0.png
│ │ │ ├── ...
│ │ │ ├── ...
│ │ ├── spine_images/
│ │ │ ├── spine_0.png
│ │ │ ├── ...
│ │ │ ├── ...
│ ├── DeepD3_Validation/
│ │ ├── dendrite_images/
│ │ │ ├── dendrite_0.png
│ │ │ ├── ...
│ │ │ ├── ...
│ │ ├── input_images/
│ │ │ ├── image_0.png
│ │ │ ├── ...
│ │ │ ├── ...
│ │ ├── spine_images/
│ │ │ ├── spine_0.png
│ │ │ ├── ...
│ │ │ ├── ...
└── src/
Run the command below to start training the Mask R-CNN model.
python src/models/MaskRCNN/main.py
Inference
python src/models/MaskRCNN/inference.py --model_path spines_model.pt --Validation_Folder Dataset/DeepD3_Validation/ --output_path spine_predictions
Run the command below to start training the FCN-ResNet model.
python src/models/FCN_ResNet/main.py
Inference
python src/models/FCN_ResNet/inference.py --model_path dendrite_model.pt --Validation_Folder Dataset/DeepD3_Validation/ --output_path dendrite_predictions
The following metrics were used to evaluate both models.
| Metric | FCN | Mask R-CNN |
|---|---|---|
| IoU | 0.3996 | 0.2731 |
| Precision | 0.5808 | 0.3109 |
| Recall | 0.4350 | 0.6845 |
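These three metrics reduce to pixel counts of true positives, false positives, and false negatives on binary masks. A sketch of how such numbers are typically computed (the exact evaluation code and any thresholds used here are assumptions):

```python
import numpy as np

def binary_metrics(pred, target):
    """Pixel-wise IoU, precision, and recall for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()    # predicted and true
    fp = np.logical_and(pred, ~target).sum()   # predicted, not true
    fn = np.logical_and(~pred, target).sum()   # true, missed
    iou = tp / (tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return iou, precision, recall
```

The table is consistent with this decomposition: the FCN trades recall for precision on dendrites, while Mask R-CNN does the opposite on spines.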
If you have any feedback, please reach out to me at [email protected]