
Zero-shot panoptic segmentation using SAM


This is a proof of concept for zero-shot panoptic segmentation using the Segment Anything Model (SAM).

SAM cannot perform panoptic segmentation out of the box, due to two limitations:

  • The released version of SAM is not text-aware
  • The authors of Segment Anything mention that it is unclear how to design simple prompts that implement semantic and panoptic segmentation

To solve these challenges, we combine SAM with the following additional zero-shot models:

  • Grounding DINO, a text-prompted object detector, to find the "thing" categories
  • CLIPSeg, a text-prompted segmentation model, to roughly locate the "stuff" categories

You can try out the pipeline in the Gradio demo on Hugging Face Spaces.

The notebook also shows how the predictions from this pipeline can be uploaded to Segments.ai as pre-labels, where you can adjust them to obtain perfect labels for fine-tuning your segmentation model.
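
For reference, here is a minimal sketch of what that upload could look like with the segments-ai Python SDK. This is not the notebook's exact code: the API key, sample UUID, labelset name, and the annotations mapping are placeholders, and you should check the Segments.ai documentation for the attribute format your dataset's task type expects.

```python
import numpy as np
from segments import SegmentsClient
from segments.utils import bitmap2file

client = SegmentsClient("YOUR_SEGMENTS_API_KEY")  # placeholder API key

# `panoptic` is a (H, W) uint32 id map produced by the pipeline (0 = unlabeled),
# and `annotations` maps each segment id to a category_id from your dataset's ontology.
panoptic = np.zeros((480, 640), dtype=np.uint32)  # placeholder prediction
annotations = [{"id": 1, "category_id": 1}]       # placeholder mapping

# Encode the id map as a Segments.ai segmentation bitmap and upload it as an asset.
file = bitmap2file(panoptic, is_segmentation_bitmap=True)
asset = client.upload_asset(file, filename="prediction.png")

# Attach the bitmap to an existing sample as a PRELABELED label, ready for review.
attributes = {
    "format_version": "0.1",
    "annotations": annotations,
    "segmentation_bitmap": {"url": asset.url},
}
client.add_label("SAMPLE_UUID", "ground-truth", attributes, label_status="PRELABELED")
```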

🖼️ Results


🏗️ Pipeline

Our Frankenstein-ish pipeline looks as follows:

  1. Use Grounding DINO to detect the "thing" categories (categories with instances)
  2. Get instance segmentation masks for the detected boxes using SAM
  3. Use CLIPSeg to obtain rough segmentation masks of the "stuff" categories
  4. Sample points in these rough segmentation masks and feed them to SAM to get fine segmentation masks
  5. Combine the background "stuff" masks with the foreground "thing" masks into a single panoptic segmentation label
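
The sketch below walks through these five steps end to end. It is a minimal illustration under stated assumptions, not the notebook's exact code: it uses the Hugging Face transformers ports of Grounding DINO, SAM, and CLIPSeg (the notebook loads the models from their original repositories), and the image path, checkpoints, and category lists are placeholders.

```python
import numpy as np
import torch
from PIL import Image
from transformers import (
    CLIPSegForImageSegmentation,
    CLIPSegProcessor,
    GroundingDinoForObjectDetection,
    GroundingDinoProcessor,
    SamModel,
    SamProcessor,
)

image = Image.open("scene.jpg").convert("RGB")  # placeholder image
thing_classes = ["car", "person"]               # categories with instances
stuff_classes = ["road", "sky", "building"]     # amorphous background categories

# Step 1: Grounding DINO detects boxes for the "thing" categories.
gd_proc = GroundingDinoProcessor.from_pretrained("IDEA-Research/grounding-dino-tiny")
gd_model = GroundingDinoForObjectDetection.from_pretrained("IDEA-Research/grounding-dino-tiny")
prompt = ". ".join(thing_classes) + "."         # Grounding DINO expects dot-separated phrases
gd_inputs = gd_proc(images=image, text=prompt, return_tensors="pt")
with torch.no_grad():
    gd_out = gd_model(**gd_inputs)
boxes = gd_proc.post_process_grounded_object_detection(
    gd_out, gd_inputs.input_ids, target_sizes=[image.size[::-1]]
)[0]["boxes"]                                   # (num_detections, 4), xyxy pixel coords

# Step 2: SAM turns each detected box into a fine instance mask.
sam_proc = SamProcessor.from_pretrained("facebook/sam-vit-base")
sam_model = SamModel.from_pretrained("facebook/sam-vit-base")
sam_inputs = sam_proc(image, input_boxes=[boxes.tolist()], return_tensors="pt")
with torch.no_grad():
    sam_out = sam_model(**sam_inputs)
thing_masks = sam_proc.image_processor.post_process_masks(
    sam_out.pred_masks, sam_inputs["original_sizes"], sam_inputs["reshaped_input_sizes"]
)[0]                                            # (num_boxes, proposals, H, W) booleans

# Step 3: CLIPSeg produces rough low-resolution heatmaps for the "stuff" categories.
cs_proc = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
cs_model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")
cs_inputs = cs_proc(text=stuff_classes, images=[image] * len(stuff_classes),
                    padding=True, return_tensors="pt")
with torch.no_grad():
    heatmaps = torch.sigmoid(cs_model(**cs_inputs).logits)  # (num_stuff, 352, 352)

# Step 4: sample confident points from each heatmap and refine them with SAM.
def sample_points(heatmap, image_size, k=5, thresh=0.5):
    """Pick up to k (x, y) image coordinates where the heatmap is confident."""
    ys, xs = np.where(heatmap.numpy() > thresh)
    if len(xs) == 0:
        return []
    idx = np.random.choice(len(xs), size=min(k, len(xs)), replace=False)
    w, h = image_size                           # rescale from 352x352 to full resolution
    return [[float(xs[i]) * w / heatmap.shape[1],
             float(ys[i]) * h / heatmap.shape[0]] for i in idx]

stuff_masks = []
for hm in heatmaps:
    points = sample_points(hm, image.size)
    if not points:
        stuff_masks.append(None)
        continue
    pt_inputs = sam_proc(image, input_points=[[points]], return_tensors="pt")
    with torch.no_grad():
        pt_out = sam_model(**pt_inputs)
    mask = sam_proc.image_processor.post_process_masks(
        pt_out.pred_masks, pt_inputs["original_sizes"], pt_inputs["reshaped_input_sizes"]
    )[0][0, 0]                                  # first proposal for the single prompt
    stuff_masks.append(mask)

# Step 5: paint "stuff" first, then "thing" instances on top, into one id map.
H, W = np.array(image).shape[:2]
panoptic = np.zeros((H, W), dtype=np.uint32)    # 0 = unlabeled
segment_id = 0
for mask in stuff_masks:
    segment_id += 1
    if mask is not None:
        panoptic[mask.numpy()] = segment_id
for inst in thing_masks:
    segment_id += 1
    panoptic[inst[0].numpy()] = segment_id      # take SAM's first mask proposal
```

Painting the "stuff" masks first and the "thing" instances last mirrors step 5: instance masks overwrite overlapping background pixels, which is the usual precedence in panoptic labels.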

💘 Acknowledgements

This project builds on the Segment Anything Model, Grounding DINO, CLIPSeg, and the Segments.ai labeling platform.
