Our new release: Efficient Track Anything.
- Efficient Track Anything code: https://github.com/yformer/EfficientTAM
- Efficient Track Anything project (with gradio demo): https://yformer.github.io/efficient-track-anything/
- Efficient Track Anything paper: https://arxiv.org/pdf/2411.18933
Efficient Track Anything is an efficient foundation model for promptable unified image and video segmentation.
🤗 Efficient Track Anything for video segmentation
🤗 Efficient Track Anything for segment everything
EfficientSAM: Leveraged Masked Image Pretraining for Efficient Segment Anything
[Jan.12 2024] An ONNX version of EfficientSAM, with separate encoder and decoder, is available on the Hugging Face Space (thanks to @wkentaro Kentaro Wada for implementing the ONNX export); see the loading sketch after this news list.
[Dec.31 2023] EfficientSAM is integrated into the annotation tool Labelme (huge thanks to the Labelme team and @wkentaro Kentaro Wada).
[Dec.11 2023] The EfficientSAM model code with checkpoints is fully available in this repository. The example script shows how to instantiate the model with a checkpoint and run it with query points on an image.
[Dec.10 2023] A Grounded EfficientSAM demo is available in Grounded-Efficient-Segment-Anything (huge thanks to the IDEA-Research team and @rentainhe for supporting the grounded-efficient-sam demo under Grounded-Segment-Anything).
[Dec.6 2023] An EfficientSAM demo is available on the Hugging Face Space (huge thanks to the HF team for their support).
[Dec.5 2023] We release the TorchScript version of EfficientSAM and share a Colab notebook.
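Since the news items above mention TorchScript and ONNX exports, here is a minimal loading sketch. The file names below are hypothetical placeholders (substitute the paths from the Colab and the Hugging Face Space); only `torch.jit.load` and the standard `onnxruntime` session API are assumed.

```python
import torch
import onnxruntime as ort

# TorchScript: load the exported model in one call.
# The file name is a placeholder; use the path shared in the Colab.
ts_model = torch.jit.load("efficientsam_ti_torchscript.pt")
ts_model.eval()

# ONNX: the export ships the encoder and decoder separately (per the
# Jan.12 2024 news item), so each gets its own inference session.
# Paths are placeholders; use the files from the Hugging Face Space.
encoder = ort.InferenceSession("efficientsam_ti_encoder.onnx")
decoder = ort.InferenceSession("efficientsam_ti_decoder.onnx")

# Inspect the exported graphs for the actual input/output names before running.
print([inp.name for inp in encoder.get_inputs()])
print([inp.name for inp in decoder.get_inputs()])
```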
An online demo and examples can be found on the project page.
Example results (demo images on the project page): point prompt, box prompt, segment everything, and saliency.
EfficientSAM checkpoints are available under the weights folder of this GitHub repository. Example instantiation and usage of the models can be found in EfficientSAM_example.py.
| EfficientSAM-S | EfficientSAM-Ti |
| --- | --- |
| Download | Download |
You can directly use EfficientSAM with the checkpoints:

```python
from efficient_sam.build_efficient_sam import build_efficient_sam_vitt, build_efficient_sam_vits

# Build EfficientSAM-Ti; use build_efficient_sam_vits() for EfficientSAM-S.
efficientsam = build_efficient_sam_vitt()
```
The example notebook is shared here.
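Building on the snippet above, here is a minimal sketch of point-prompted inference. The forward signature (batched image tensor, batched point prompts, batched point labels, returning mask logits and predicted IoUs) is assumed from EfficientSAM_example.py; the image path and point coordinates are hypothetical.

```python
import torch
from PIL import Image
from torchvision.transforms import ToTensor

from efficient_sam.build_efficient_sam import build_efficient_sam_vitt

model = build_efficient_sam_vitt()
model.eval()

# Hypothetical input image; any RGB image works.
image = ToTensor()(Image.open("figs/examples/dogs.jpg"))

# One query with two positive point prompts (label 1 = foreground).
# Assumed shapes: points [batch, queries, points, 2], labels [batch, queries, points].
input_points = torch.tensor([[[[580, 350], [650, 350]]]])
input_labels = torch.tensor([[[1, 1]]])

with torch.no_grad():
    # Assumed call pattern, following EfficientSAM_example.py: returns
    # mask logits and a predicted IoU for each candidate mask.
    predicted_logits, predicted_iou = model(image[None, ...], input_points, input_labels)

# Threshold the first candidate mask's logits at 0 for a binary mask.
mask = (predicted_logits[0, 0, 0] >= 0).cpu().numpy()
print(mask.shape, float(predicted_iou[0, 0, 0]))
```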
If you're using EfficientSAM in your research or applications, please cite it using this BibTeX:

```
@article{xiong2023efficientsam,
  title={EfficientSAM: Leveraged Masked Image Pretraining for Efficient Segment Anything},
  author={Xiong, Yunyang and Varadarajan, Bala and Wu, Lemeng and Xiang, Xiaoyu and Xiao, Fanyi and Zhu, Chenchen and Dai, Xiaoliang and Wang, Dilin and Sun, Fei and Iandola, Forrest and Krishnamoorthi, Raghuraman and Chandra, Vikas},
  journal={arXiv preprint arXiv:2312.00863},
  year={2023}
}
```