
L2CS-Net: Fine-Grained Gaze Estimation in Unconstrained Environments (PyTorch)

[Demo animation]

Paper details

Authors

Ahmed A. Abdelrahman, Thorsten Hempel, Aly Khalifa, Ayoub Al-Hamadi, ICIP 2022 (under review). [Arxiv]

Abstract

Human gaze is a crucial cue used in various applications such as human-robot interaction and virtual reality. Recently, convolutional neural network (CNN) approaches have made notable progress in predicting gaze direction. However, estimating gaze in-the-wild is still a challenging problem due to the uniqueness of eye appearance, lighting conditions, and the diversity of head poses and gaze directions. In this paper, we propose a robust CNN-based model for predicting gaze in unconstrained settings. We propose to regress each gaze angle separately to improve the per-angle prediction accuracy, which enhances the overall gaze performance. In addition, we use two identical losses, one for each angle, to improve network learning and increase its generalization. We evaluate our model on two popular datasets collected under unconstrained settings. Our proposed model achieves state-of-the-art accuracy of 3.92° and 10.41° on the MPIIGaze and Gaze360 datasets, respectively.
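The core idea, one combined classification and regression loss per gaze angle, can be sketched in a few lines of PyTorch. The snippet below is an illustrative sketch, not the repository's exact code; the 90-bin / 4° binning and the `alpha` regression coefficient are assumptions modeled on common bin-based gaze heads:

```python
import torch
import torch.nn.functional as F

# Illustrative sketch of the per-angle loss (not the repository's exact code).
# Assumptions: 90 bins of 4 degrees covering [-180, 180), and a regression
# coefficient `alpha` weighting the MSE term.
NUM_BINS = 90
BIN_WIDTH = 4.0
idx = torch.arange(NUM_BINS, dtype=torch.float32)

def angle_loss(logits, bin_label, cont_label_deg, alpha=1.0):
    """Cross-entropy on the binned angle plus alpha * MSE on the expected angle."""
    ce = F.cross_entropy(logits, bin_label)
    probs = F.softmax(logits, dim=1)
    # Softmax expectation turns the bin distribution into a continuous angle.
    expected = torch.sum(probs * idx, dim=1) * BIN_WIDTH - 180.0
    return ce + alpha * F.mse_loss(expected, cont_label_deg)

# Two identical losses of this form are used, one for yaw and one for pitch.
```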

Citation

If you use any part of our code or data, please cite our paper.

@misc{AAbdelrahman2022L2CSNetFG,
      title={L2CS-Net: Fine-Grained Gaze Estimation in Unconstrained Environments},
      author={Ahmed A. Abdelrahman and Thorsten Hempel and Aly Khalifa and Ayoub Al-Hamadi},
      year={2022},
      eprint={2203.03339},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Evaluation

Evaluation on MPIIGaze dataset

| Methods | MPIIFaceGaze (mean angular error, °) |
| --- | --- |
| iTracker (AlexNet) | 5.6 |
| MeNets | 4.9 |
| FullFace | 4.8 |
| Dilated-Net | 4.8 |
| RT-Gene (1 model) | 4.8 |
| GEDDNet | 4.5 |
| RT-Gene (4 ensemble) | 4.3 |
| FAR-Net | 4.3 |
| Bayesian Approach | 4.3 |
| CA-Net | 4.1 |
| AGE-Net | 4.09 |
| L2CS-Net (α=2) | 3.96 |
| L2CS-Net (α=1) | 3.92 |

Evaluation on Gaze360 dataset

| Methods | Front 180 (°) | Front Facing (°) |
| --- | --- | --- |
| FullFace | 14.99 | N/A |
| Dilated-Net | 13.73 | N/A |
| RT-Gene (4 ensemble) | 12.26 | N/A |
| CA-Net | 12.26 | N/A |
| Gaze360 (LSTM) | 11.4 | 11.1 |
| L2CS-Net (α=2) | 10.54 | 9.13 |
| L2CS-Net (α=1) | 10.41 | 9.01 |
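All numbers are mean angular errors in degrees. For reference, a common way to compute the angular error between a predicted and a ground-truth (yaw, pitch) pair is to convert both to 3D gaze vectors and take the angle between them. This is an illustrative sketch (our own, not the repository's evaluation code), and the axis convention is an assumption:

```python
import numpy as np

def gaze_to_vector(yaw, pitch):
    """Convert yaw/pitch (radians) to a 3D unit gaze vector.

    The axis convention here is an assumption for illustration;
    different datasets and codebases use different conventions."""
    x = -np.cos(pitch) * np.sin(yaw)
    y = -np.sin(pitch)
    z = -np.cos(pitch) * np.cos(yaw)
    return np.array([x, y, z])

def angular_error_deg(pred, gt):
    """Angle in degrees between predicted and ground-truth (yaw, pitch)."""
    v1, v2 = gaze_to_vector(*pred), gaze_to_vector(*gt)
    cos = np.clip(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

# Example: roughly 3 degrees of error
print(angular_error_deg((0.05, 0.02), (0.0, 0.0)))
```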

Installation

  • Set up a virtual environment:
python3 -m venv venv
source venv/bin/activate
  • Install required packages:
pip install -r requirements.txt --extra-index-url https://download.pytorch.org/whl/cu116
  • Install the code in editable mode:
pip install -e .
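Since the requirements pull CUDA 11.6 wheels, a quick sanity check that the install succeeded and PyTorch can see the GPU may save time later (a generic check, not part of the repo):

```python
# Quick post-install sanity check; run inside the activated venv.
import torch

print(torch.__version__)          # expect a +cu116 build given the index URL above
print(torch.cuda.is_available())  # True if a compatible GPU and driver are present
```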

Demo (not necessary for training)

  • Install the face detector:
pip install git+https://github.com/elliottzheng/face-detection.git@master
  • Download the pre-trained models from here and store them in models/.
  • Run:
python demo.py \
    --snapshot models/L2CSNet_gaze360.pkl \
    --gpu 0 \
    --cam 0

This runs the demo with the L2CSNet_gaze360.pkl pretrained model on camera 0, using GPU 0.
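The demo overlays a gaze arrow on each camera frame. For reference, projecting a predicted (yaw, pitch) pair into a 2D arrow can be done roughly as below; this is an illustrative sketch with an assumed axis convention, not the demo's exact drawing code:

```python
import cv2
import numpy as np

def draw_gaze(frame, origin, yaw, pitch, length=100):
    """Draw a 2D arrow for a gaze direction given in radians.

    The axis convention (x right, y down in image space) is an
    assumption for illustration; conventions vary across gaze codebases."""
    dx = -length * np.sin(yaw) * np.cos(pitch)
    dy = -length * np.sin(pitch)
    end = (int(origin[0] + dx), int(origin[1] + dy))
    cv2.arrowedLine(frame, origin, end, (0, 255, 0), 2, cv2.LINE_AA)
    return frame
```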

GazeCapture

Prepare datasets

TBD

Train

Example CLI command to run training for 50 epochs:

python clear_training.py \
    --dataset gazecapture \
    --snapshot output/snapshots \
    --gpu 0 \
    --num_epochs 50 \
    --batch_size 12 \
    --lr 0.00001 \
    --arch ResNet18 \
    --gazecapture-ann datasets/E2_DATASET_NORMALIZED/annotations.txt \
    --gazecapture-dir datasets/E2_DATASET_NORMALIZED/ \
    --tb e2_train-04_10_val_660_offset \
    --validation-dir /home/janek/software/L2CS-Net/datasets/DAC_VALIDATION_NORMALIZED_660_OFFSET \
    --validation-ann /home/janek/software/L2CS-Net/datasets/DAC_VALIDATION_NORMALIZED_660_OFFSET/annotations.txt
