# TAPIR-pytorch

The PyTorch training & inference pipeline of TAPIR, largely based on the implementation in EchoTracker. Thanks to the authors for their great work!

## Installation

After cloning the repository, install the required packages by running:

```shell
conda create -n tapir python=3.11
conda activate tapir
pip install -r requirements.txt
```

## Training

1. Replace the code in `dataset/train_dataset.py` with your own dataset and update the corresponding part of `train.py`.
2. Adjust the configs in `config/default.yaml` to your needs.
3. Start training:

```shell
python train.py --config config/default.yaml
```
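For step 1, a replacement dataset typically needs to yield a video clip together with ground-truth point tracks and visibility flags. Below is a minimal sketch of such a `torch.utils.data.Dataset`; the class name, dict keys, and tensor shapes are assumptions for illustration, not the repo's actual API — match whatever `train.py` expects.

```python
import torch
from torch.utils.data import Dataset


class MyPointTrackingDataset(Dataset):
    """Hypothetical dataset sketch: yields a video clip plus point tracks.

    Shapes follow the common TAP-Vid-style convention:
      video:    (T, C, H, W) float in [0, 1]
      trajs:    (T, N, 2)    xy pixel coordinates per frame
      visibles: (T, N)       bool, whether each point is visible
    """

    def __init__(self, num_samples=8, num_frames=24, size=256, num_points=64):
        self.num_samples = num_samples
        self.num_frames = num_frames
        self.size = size
        self.num_points = num_points

    def __len__(self):
        return self.num_samples

    def __getitem__(self, idx):
        # Placeholder random tensors -- replace with real frames and tracks
        # loaded from your own data source.
        video = torch.rand(self.num_frames, 3, self.size, self.size)
        trajs = torch.rand(self.num_frames, self.num_points, 2) * self.size
        visibles = torch.ones(self.num_frames, self.num_points, dtype=torch.bool)
        return {"video": video, "trajs": trajs, "visibles": visibles}
```

Wrapping this in a standard `DataLoader` then gives batched tensors of shape `(B, T, ...)` for the training loop.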

## Inference

Currently, only TAP-Vid-DAVIS and TAP-Vid-RGB-Stacking are supported (because they are easy to implement). You can follow the instructions to download them.

To run inference and get the results, run:

```shell
python inference.py --ckpt /path/to/your/checkpoint.pth --dataset /path/to/dataset --output_dir /path/to/output
```

The results are then written to the `output_dir`.

## Results

Using the checkpoint provided here, we obtained the following results at 256×256 inference resolution:

| Dataset | AJ | $<\delta^x_{avg}$ | OA | Survival | MTE |
| --- | --- | --- | --- | --- | --- |
| DAVIS | 57.4% | 69.5% | 86.9% | 96.7% | 4.31 |
| RGB-Stacking | 55.5% | 71.5% | 84.3% | 96.7% | 4.25 |
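For context, the $<\delta^x_{avg}$ metric in the table is the TAP-Vid position accuracy: the fraction of visible points predicted within a pixel threshold, averaged over thresholds of 1, 2, 4, 8, and 16 pixels. A minimal sketch of that computation (function name and tensor shapes are assumptions for illustration):

```python
import torch


def position_accuracy(pred, gt, visibles, thresholds=(1, 2, 4, 8, 16)):
    """Average position accuracy over several pixel thresholds.

    pred, gt:  (T, N, 2) xy pixel coordinates
    visibles:  (T, N)    bool mask of ground-truth visible points
    Returns a float in [0, 1].
    """
    # Euclidean pixel distance between prediction and ground truth: (T, N)
    dist = torch.linalg.norm(pred - gt, dim=-1)
    accs = []
    for thr in thresholds:
        # Count only visible points that fall within the threshold.
        correct = (dist < thr) & visibles
        accs.append(correct.sum() / visibles.sum())
    return torch.stack(accs).mean().item()
```

Perfect predictions score 1.0; predictions more than 16 px off on every point score 0.0.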
