Swift-Eye: Towards Anti-blink Pupil Tracking for Precise and Robust High-Frequency Near-Eye Movement Analysis with Event Cameras
Demo video: fast_forward_video.mp4
This is the implementation code for Swift-Eye. It is built upon MMRotate, a PyTorch-based rotated object detection benchmark.
After cloning our repository, you can configure the environment by following these steps:
conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.6 -c pytorch -c nvidia
pip install -U openmim
mim install mmcv-full
mim install "mmdet<3.0.0"
cd mmrotate
pip install -v -e .
To verify that the installation succeeded, check the output of pip list; you should see an entry like:
mmrotate 0.3.4 path/to/mmrotate
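Alternatively, you can check the installation from Python; this is a minimal sketch and assumes the environment configured above is active:
python -c "import mmrotate; print(mmrotate.__version__)"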
A test dataset is available for download here. After downloading, unzip it and place it in the Swift_Eye/mmrotate/train_swift_eye directory (a sketch of this step follows below). If you require additional data, consider checking EV-Eye and utilizing the code from timelens.
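As a sketch of the placement step, assuming you are in the repository root and the downloaded archive is named test_dataset.zip (the actual filename may differ):
unzip test_dataset.zip -d Swift_Eye/mmrotate/train_swift_eye/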
The repository provides three training entry points; a sketch of a typical training command follows the list:
train_with_temporal_fusion_component
train_without_temporal_fusion_component
train the occlusion-ratio estimator
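MMRotate launches training through tools/train.py with a config file. The command below is only a sketch: the config filename is illustrative, and you would substitute the config for the variant you want to train (with or without the temporal fusion component, or the occlusion-ratio estimator):
cd mmrotate
python tools/train.py train_swift_eye/configs/swift_eye_with_temporal_fusion.py --work-dir work_dirs/swift_eye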
You can access the model weights from this link. After downloading, place the weights in the Swift_Eye/mmrotate/train_swift_eye/swift_eye directory.
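To quickly confirm that a downloaded checkpoint loads, you can inspect it with PyTorch; the .pth filename below is a placeholder for whichever checkpoint you downloaded:
python -c "import torch; print(torch.load('Swift_Eye/mmrotate/train_swift_eye/swift_eye/swift_eye.pth', map_location='cpu').keys())"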
To generate results and the corresponding videos, execute /Swift-Eye/Swift-Eye/test_interpolated.py.
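If you want to run detection on individual frames directly through the MMRotate Python API instead of the script above, a minimal sketch looks as follows; the config, checkpoint, and image paths are illustrative placeholders, not the repository's actual files:
# Minimal rotated-detection inference sketch using the MMRotate 0.x API.
from mmdet.apis import init_detector, inference_detector
import mmrotate  # noqa: F401 -- importing registers the rotated detection modules

config_file = 'train_swift_eye/swift_eye_config.py'           # placeholder path
checkpoint_file = 'train_swift_eye/swift_eye/swift_eye.pth'   # placeholder path

model = init_detector(config_file, checkpoint_file, device='cuda:0')
result = inference_detector(model, 'sample_event_frame.png')  # placeholder image
print(result)  # per-class arrays of rotated boxes (cx, cy, w, h, angle, score)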