
The labeling tool for dense video object segmentation


Salt-Video

An annotation tool for marine data labeling.

Setup

  1. Clone the repository.
  2. Create a conda environment from the environment.yaml file:
    conda env create -f environment.yaml
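Concretely, the two setup steps look like this (the clone URL is taken from the repository name hkust-vgd/Salt-Video):

```shell
# Clone the repository and enter it
git clone https://github.com/hkust-vgd/Salt-Video.git
cd Salt-Video

# Create and activate the conda environment from the provided file
conda env create -f environment.yaml
```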

Preparation

  1. Download the original SAM (ViT-H) model weights and the XMem weights, and put them in the saves folder.
  2. Install Segment Anything: pip install git+https://github.com/facebookresearch/segment-anything.git
  3. Generate the embeddings in advance: run python helpers/extract_embeddings.py --dataset-path ./video_seqs/shark
  4. Generate the ONNX models in advance: run python helpers/generate_onnx.py --dataset-path ./video_seqs/shark. We use multiple threads to generate the ONNX models; if your machine is less powerful, set --nthread to a smaller value (default: 8).
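The multi-threaded ONNX generation in step 4 can be pictured as a simple worker pool sized by --nthread. The sketch below is only an illustration of that pattern: the per-frame function is a hypothetical stand-in, not the real export code in helpers/generate_onnx.py.

```python
from concurrent.futures import ThreadPoolExecutor

def export_onnx_for_frame(frame_id):
    # Hypothetical stand-in for the real per-frame ONNX export;
    # here we just return the name the output file might have.
    return f"frame_{frame_id:04d}.onnx"

def export_all(frame_ids, nthread=8):
    # Mirrors the --nthread flag: a smaller pool for weaker machines.
    with ThreadPoolExecutor(max_workers=nthread) as pool:
        # pool.map preserves input order even with concurrent workers
        return list(pool.map(export_onnx_for_frame, frame_ids))

print(export_all(range(4), nthread=2))
# ['frame_0000.onnx', 'frame_0001.onnx', 'frame_0002.onnx', 'frame_0003.onnx']
```

Lowering nthread trades export speed for less CPU and memory pressure, which is why the tool exposes it as a flag.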

Labeling demo

labeling_demo.mp4

Usage

Activate the conda environment and run:
python segment_anything_annotator.py --dataset-path <your_data_path>

Advanced features

  • Multi-object tracking and segmentation (check the mots branch)
  • Keyframe caption and refinement (check the caption branch)

Acknowledgement

Our labeling tool is heavily based on:

  • SALT: the layout design and the basic logic.
  • XMem: the mask propagation algorithm.

Thanks for their contributions to the whole community.

Citing Salt-Video

If you find this labeling tool helpful, please consider citing:

@article{zhengcoralvos,
	title={CoralVOS: Dataset and Benchmark for Coral Video Segmentation},
	author={Zheng, Ziqiang and Xie, Yaofeng and Liang, Haixin and Yu, Zhibin and Yeung, Sai-Kit},
	journal={arXiv preprint arXiv:2310.01946},
	year={2023}
}
