Welcome to the official repository of EyeDentify: A Webcam-Based Pupil Diameter Estimation Dataset!
It contains code for:
- Creation of the Dataset: EyeDentify
- Training Pipelines for Pupil Diameter Estimation
- A link to the WebApp: PupilSense
Note: The project supports execution with the SLURM workload manager. You can modify the scripts in the `./scripts` folder to match your preferred environment.
Clone the repository:
git clone https://github.com/vijulshah/eyedentify.git
cd eyedentify
Our custom-built React app, Chemeleon View, facilitates video data collection through a simple interface where participants click a central button to start three-second recordings. Each recording session produces a timestamp file (`<session_id>.csv`, as shown in the figure below) that helps synchronize with Tobii eye-tracker data for precise pupil diameter measurements.
- Interface: A central button initiates a 3-second webcam recording.
- Synchronization: The timestamps are key to aligning webcam and Tobii eye-tracker data.
- Session Setup: Each participant completes 50 sessions, with screen colors varied to evoke different pupil reactions. The first ten recordings use a white background (⬜), followed by 30 recordings cycling through black (⬛), red (🟥), blue (🟦), yellow (🟨), green (🟩), and gray (🌫️), with each color shown five times consecutively. The last ten recordings return to a white background (⬜).
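For illustration, the background-color schedule described above can be written out as follows. This is only a sketch; the presentation order of the six colors is an assumption, since the actual order is determined by the app.

```python
# Sketch of the 50-session background schedule described above.
# Assumption: the six colors appear in the listed order, each shown 5 times in a row.
WHITE = "white"
COLORS = ["black", "red", "blue", "yellow", "green", "gray"]

schedule = (
    [WHITE] * 10                               # sessions 1-10: white background
    + [c for c in COLORS for _ in range(5)]    # sessions 11-40: each color 5 times consecutively
    + [WHITE] * 10                             # sessions 41-50: white background again
)
assert len(schedule) == 50
print(schedule[:12])
```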
Steps to run the app:
- Navigate to `./data_collection/`.
- Run `npm install` to install the dependencies.
- Start the app with `npm start`.
This stage includes two parts. The first part synchronizes the webcam and Tobii data and extracts the recording frames:
- Align the Tobii data and timestamp files (`<session_id>.csv`) to account for the frame-rate difference (Tobii at 90 Hz vs. webcam at 30 fps), as shown in the figure below, ensuring accurate pupil diameter measurements; a minimal alignment sketch follows the file list below.
- Extract the image frames corresponding to each synchronized data point.
Relevant files:
- Configuration: `./configs/tobii_and_webcam_data_alignment.yml`
- Python File: `./data_creation/eyedentify/run_data_alignment.py`
- Execution: `./scripts/data_creation/srun_tobii_and_webcam_data_processing.sh`
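For intuition, a nearest-timestamp join is one way to think about this alignment step. The sketch below is not the repository's implementation; the file names and column names (`timestamp`, `pupil_diameter`) are hypothetical.

```python
import pandas as pd

# Hypothetical inputs: one row per webcam frame (30 fps) and 90 Hz Tobii samples.
webcam = pd.read_csv("session_01.csv")       # assumed to contain a "timestamp" column
tobii = pd.read_csv("tobii_recording.csv")   # assumed "timestamp" and "pupil_diameter" columns

# merge_asof requires both frames to be sorted by the join key.
webcam = webcam.sort_values("timestamp")
tobii = tobii.sort_values("timestamp")

# For each 30 fps webcam frame, take the 90 Hz Tobii sample nearest in time.
aligned = pd.merge_asof(
    webcam,
    tobii[["timestamp", "pupil_diameter"]],
    on="timestamp",
    direction="nearest",
)
aligned.to_csv("aligned_session_01.csv", index=False)
```

Each aligned row then pairs a webcam frame with the Tobii pupil-diameter sample closest to it in time.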
The second part processes the extracted frames:
- Eye Cropping using MediaPipe landmarks for focused analysis.
- Blink Detection using the Eye Aspect Ratio (EAR) and a pretrained ViT to ensure data quality; an EAR sketch follows the file list below.
Relevant files:
- Configuration: `./configs/eyedentify_ds_creation.yml`
- Python File: `./data_creation/eyedentify/ds_creation.py`
- Execution: `./scripts/data_creation/eyedentify_ds_creation.sh`
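For reference, the standard Eye Aspect Ratio over six eye landmarks (p1..p6) looks like the minimal sketch below; the landmark ordering, the 0.2 blink threshold, and the placeholder points are assumptions, not the repository's exact implementation.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """EAR for six 2D eye landmarks ordered p1..p6
    (p1/p4 = horizontal corners, p2/p6 and p3/p5 = upper/lower lid pairs)."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

# A frame can be flagged as a blink when EAR drops below a threshold
# (0.2 is a common heuristic; the repository's threshold may differ).
landmarks = np.random.rand(6, 2)  # placeholder for MediaPipe-derived eye landmarks
is_blink = eye_aspect_ratio(landmarks) < 0.2
```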
This section covers training and evaluation. You can run the training scripts with either:
- PyTorch with DDP support
- PyTorch Lightning
The training strategies include: (A) Val/Test split cross-validation and (B) Leave-One-Participant-Out Cross-Validation (LOPOCV).
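To make LOPOCV concrete, here is a minimal sketch using scikit-learn's `LeaveOneGroupOut` with participants as groups. It is independent of this repository's training code; the toy data and shapes are placeholders.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

# Toy data: one feature row per image, with the participant ID as the group label.
X = np.random.rand(12, 4)                # e.g. image features
y = np.random.rand(12)                   # pupil diameters (regression targets)
participants = np.repeat([1, 2, 3], 4)   # 3 participants, 4 samples each

logo = LeaveOneGroupOut()
for fold, (train_idx, test_idx) in enumerate(logo.split(X, y, groups=participants)):
    held_out = np.unique(participants[test_idx])
    # Train on all other participants, evaluate on the held-out one.
    print(f"Fold {fold}: test participant {held_out}, {len(train_idx)} training samples")
```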
Relevant files:
- Configuration:
  - For PyTorch with DDP: `./configs/pt_train.yml`
  - For PyTorch Lightning: `./configs/pl_train.yml`
- Python File:
  - For PyTorch with DDP: `./training/pt_training/pt_train.py`
  - For PyTorch Lightning: `./training/pl_training/pl_train.py`
- Execution:
  - For PyTorch with DDP:
    - Single Run: `./scripts/training/pt_training/srun_train_single_exp.sh`
    - Multiple Runs with Val/Test Split CV: `./scripts/training/pt_training/srun_pt_training_5foldcv.sh`
    - Multiple Runs with LOPOCV: `./scripts/training/pt_training/srun_pt_training_loocv.sh`
  - For PyTorch Lightning:
    - Single Run: `./scripts/training/pl_training/srun_train_single_exp.sh`
    - Multiple Runs with Val/Test Split CV: `./scripts/training/pl_training/srun_pl_training_5foldcv.sh`
    - Multiple Runs with LOPOCV: `./scripts/training/pl_training/srun_pl_training_loocv.sh`
PupilSense is built with Streamlit and hosted on 🤗 Hugging Face Spaces. You can view the app 🤗 here and the source code here.
The following is a BibTeX reference (the entry requires the `url` LaTeX package). If EyeDentify helps your research or work, please cite it:
@article{shah2024eyedentify,
title={Eyedentify: A dataset for pupil diameter estimation based on webcam images},
author={Shah, Vijul and Watanabe, Ko and Moser, Brian B and Dengel, Andreas},
journal={arXiv preprint arXiv:2407.11204},
year={2024}
}
If you have any questions, please email [email protected]
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.