Poc #83

Open · wants to merge 5 commits into master
Changes from all commits
3 changes: 2 additions & 1 deletion .gitignore
@@ -2,12 +2,13 @@
 __pycache__/
 *.py[cod]
 *$py.class
-
+**.idea/**
 # C extensions
 *.so

 # Distribution / packaging
 .Python
+data/
 build/
 develop-eggs/
 dist/
13 changes: 0 additions & 13 deletions Dockerfile

This file was deleted.

21 changes: 0 additions & 21 deletions LICENSE

This file was deleted.

178 changes: 13 additions & 165 deletions README.md
@@ -1,172 +1,20 @@
-# Gaze Tracking
+# Gaze Tracking for Guzy

-![made-with-python](https://img.shields.io/badge/Made%20with-Python-1f425f.svg)
-![Open Source Love](https://badges.frapsoft.com/os/v1/open-source.svg?v=103)
-![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)
-[![GitHub stars](https://img.shields.io/github/stars/antoinelame/GazeTracking.svg?style=social)](https://github.com/antoinelame/GazeTracking/stargazers)
+Based on: github.com/antoinelame/GazeTracking

-This is a Python (2 and 3) library that provides a **webcam-based eye tracking system**. It gives you the exact position of the pupils and the gaze direction, in real time.
+Setting up:

-[![Demo](https://i.imgur.com/WNqgQkO.gif)](https://youtu.be/YEZMk1P0-yw)
+- Create a virtual environment with Python 3.8,
+  e.g. using conda: ``conda create -n your_name python=3.8``
+- Activate the virtual environment
+- ``git clone`` this project
+- Install the packages by running ``pip install -e .``
+- Run the solution from the command line: ``python video_analysis.py -p data/sample_input.mp4 -o sample_output.json``

-_🚀 Quick note: I'm looking for job opportunities as a software developer, for exciting projects in ambitious companies. Anywhere in the world. Send me an email!_
+Use ``python video_analysis.py -h`` to see the available settings.

-## Installation
+The output points are located under the key ``points`` in the output JSON.

-Clone this project:
+*Keep in mind that the output file needs to be a JSON file.*

-```shell
-git clone https://github.com/antoinelame/GazeTracking.git
-```
-
-### For Pip install
-Install these dependencies (NumPy, OpenCV, Dlib):
-
-```shell
-pip install -r requirements.txt
-```
-
-> The Dlib library has four primary prerequisites: Boost, Boost.Python, CMake and X11/XQuartz. If you don't have them, you can [read this article](https://www.pyimagesearch.com/2017/03/27/how-to-install-dlib/) to learn how to install them easily.
-
-
-### For Anaconda install
-Install these dependencies (NumPy, OpenCV, Dlib):
-
-```shell
-conda env create --file environment.yml
-#After creating environment, activate it
-conda activate GazeTracking
-```
-
-
-### Verify Installation
-
-Run the demo:
-
-```shell
-python example.py
-```
-
-## Simple Demo
-
-```python
-import cv2
-from gaze_tracking import GazeTracking
-
-gaze = GazeTracking()
-webcam = cv2.VideoCapture(0)
-
-while True:
-    _, frame = webcam.read()
-    gaze.refresh(frame)
-
-    new_frame = gaze.annotated_frame()
-    text = ""
-
-    if gaze.is_right():
-        text = "Looking right"
-    elif gaze.is_left():
-        text = "Looking left"
-    elif gaze.is_center():
-        text = "Looking center"
-
-    cv2.putText(new_frame, text, (60, 60), cv2.FONT_HERSHEY_DUPLEX, 2, (255, 0, 0), 2)
-    cv2.imshow("Demo", new_frame)
-
-    if cv2.waitKey(1) == 27:
-        break
-```
-
-## Documentation
-
-In the following examples, `gaze` refers to an instance of the `GazeTracking` class.
-
-### Refresh the frame
-
-```python
-gaze.refresh(frame)
-```
-
-Pass the frame to analyze (numpy.ndarray). If you want to work with a video stream, you need to put this instruction in a loop, like the example above.
-
-### Position of the left pupil
-
-```python
-gaze.pupil_left_coords()
-```
-
-Returns the coordinates (x,y) of the left pupil.
-
-### Position of the right pupil
-
-```python
-gaze.pupil_right_coords()
-```
-
-Returns the coordinates (x,y) of the right pupil.
-
-### Looking to the left
-
-```python
-gaze.is_left()
-```
-
-Returns `True` if the user is looking to the left.
-
-### Looking to the right
-
-```python
-gaze.is_right()
-```
-
-Returns `True` if the user is looking to the right.
-
-### Looking at the center
-
-```python
-gaze.is_center()
-```
-
-Returns `True` if the user is looking at the center.
-
-### Horizontal direction of the gaze
-
-```python
-ratio = gaze.horizontal_ratio()
-```
-
-Returns a number between 0.0 and 1.0 that indicates the horizontal direction of the gaze. The extreme right is 0.0, the center is 0.5 and the extreme left is 1.0.
-
-### Vertical direction of the gaze
-
-```python
-ratio = gaze.vertical_ratio()
-```
-
-Returns a number between 0.0 and 1.0 that indicates the vertical direction of the gaze. The extreme top is 0.0, the center is 0.5 and the extreme bottom is 1.0.
-
-### Blinking
-
-```python
-gaze.is_blinking()
-```
-
-Returns `True` if the user's eyes are closed.
-
-### Webcam frame
-
-```python
-frame = gaze.annotated_frame()
-```
-
-Returns the main frame with pupils highlighted.
-
-## You want to help?
-
-Your suggestions, bug reports and pull requests are welcome and appreciated. You can also star ⭐️ the project!
-
-If the detection of your pupils is not completely optimal, you can send me a video sample of you looking in different directions. I would use it to improve the algorithm.
-
-## Licensing
-
-This project is released by Antoine Lamé under the terms of the MIT Open Source License. View LICENSE for more information.
+**Keep in mind that the solution still needs calibration.**
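As a quick illustration of the new workflow, here is a minimal sketch of consuming the output file; only the ``points`` key is documented above, so the rest of the layout is an assumption:

```python
import json

# Load the file produced by:
#   python video_analysis.py -p data/sample_input.mp4 -o sample_output.json
with open("sample_output.json") as f:
    result = json.load(f)

# 'points' is the documented key; the structure of each element
# is not specified in this diff, so treat this as illustrative.
points = result["points"]
print(f"loaded {len(points)} gaze points")
```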
1 change: 1 addition & 0 deletions VERSION
@@ -0,0 +1 @@
+0.0.1
12 changes: 0 additions & 12 deletions build_and_run.sh

This file was deleted.

9 changes: 9 additions & 0 deletions config.py
@@ -0,0 +1,9 @@
+EYE_MARGIN = 10
+# FOURTH_PERCENTILE_WEIGHT = 5
+# THIRD_PERCENTILE_WEIGHT = 1
+# PERCENTILE_WEIGHT = FOURTH_PERCENTILE_WEIGHT * THIRD_PERCENTILE_WEIGHT
+SKEWNESS_WEIGHT = 10
+PUPIL_WEIGHT = 1
+ANGLE_TOTAL_WEIGHT = SKEWNESS_WEIGHT + PUPIL_WEIGHT
+MAX_RATIO = 0.6
+MIN_RATIO = 0.4
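These constants feed the weighted gaze ratio introduced in gaze_tracking/gaze_tracking.py below; MIN_RATIO and MAX_RATIO presumably bound the band treated as looking at the center, though their use is not shown in this diff. A quick sketch of the weighting arithmetic, with made-up inputs:

```python
SKEWNESS_WEIGHT = 10
PUPIL_WEIGHT = 1
ANGLE_TOTAL_WEIGHT = SKEWNESS_WEIGHT + PUPIL_WEIGHT  # 11

pupil_ratio = 0.45  # made-up averaged pupil position (0.0 = right, 1.0 = left)
skewness = 0.55     # made-up averaged dark-pixel skewness
ratio = (pupil_ratio * PUPIL_WEIGHT + skewness * SKEWNESS_WEIGHT) / ANGLE_TOTAL_WEIGHT
print(round(ratio, 3))  # 0.541: the skewness term dominates 10:1
```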
9 changes: 0 additions & 9 deletions environment.yml

This file was deleted.

44 changes: 0 additions & 44 deletions example.py

This file was deleted.

4 changes: 3 additions & 1 deletion gaze_tracking/eye.py
@@ -2,6 +2,7 @@
 import numpy as np
 import cv2
 from .pupil import Pupil
+from config import *


 class Eye(object):
@@ -54,7 +55,7 @@ def _isolate(self, frame, landmarks, points):
         eye = cv2.bitwise_not(black_frame, frame.copy(), mask=mask)

         # Cropping on the eye
-        margin = 5
+        margin = EYE_MARGIN
         min_x = np.min(region[:, 0]) - margin
         max_x = np.max(region[:, 0]) + margin
         min_y = np.min(region[:, 1]) - margin
@@ -117,3 +118,4 @@ def _analyze(self, original_frame, landmarks, side, calibration):

         threshold = calibration.threshold(side)
         self.pupil = Pupil(self.frame, threshold)
+
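The widened margin simply grows the eye bounding box before cropping; a toy illustration with made-up landmark coordinates:

```python
import numpy as np

region = np.array([[30, 40], [50, 38], [60, 45], [35, 50]])  # made-up eye landmarks (x, y)
margin = 10  # EYE_MARGIN from config.py

min_x = np.min(region[:, 0]) - margin  # 20
max_x = np.max(region[:, 0]) + margin  # 70
min_y = np.min(region[:, 1]) - margin  # 28
max_y = np.max(region[:, 1]) + margin  # 60
# The eye crop is then frame[min_y:max_y, min_x:max_x]
```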
26 changes: 23 additions & 3 deletions gaze_tracking/gaze_tracking.py
@@ -4,6 +4,9 @@
 import dlib
 from .eye import Eye
 from .calibration import Calibration
+import numpy as np
+from config import *
+from scipy.stats import skew


 class GazeTracking(object):
@@ -82,9 +85,26 @@ def horizontal_ratio(self):
         the center is 0.5 and the extreme left is 1.0
         """
         if self.pupils_located:
-            pupil_left = self.eye_left.pupil.x / (self.eye_left.center[0] * 2 - 10)
-            pupil_right = self.eye_right.pupil.x / (self.eye_right.center[0] * 2 - 10)
-            return (pupil_left + pupil_right) / 2
+            # Get the skew of each eye; it helps determine whether the user is looking right or left
+            count_right = np.sum(self.eye_right.frame < 255, axis=0)
+            # fourth_25_per = count_right[-int(len(count_right) / 4):]
+            # third_25_per = count_right[int(len(count_right) / 2):-int(len(count_right)/4)]
+            # right_skewness = ((fourth_25_per.sum() * FOURTH_PERCENTILE_WEIGHT +
+            #                    third_25_per.sum() * THIRD_PERCENTILE_WEIGHT)
+            #                   / PERCENTILE_WEIGHT) / count_right.sum()
+            right_skewness = count_right[int(len(count_right) / 2):].sum() / count_right.sum()
+            count_left = np.sum(self.eye_left.frame < 255, axis=0)
+            # fourth_25_per = count_left[-int(len(count_left) / 4):]
+            # third_25_per = count_left[int(len(count_left) / 2):-int(len(count_left)/4)]
+            # left_skewness = ((fourth_25_per.sum() * FOURTH_PERCENTILE_WEIGHT +
+            #                   third_25_per.sum() * THIRD_PERCENTILE_WEIGHT)
+            #                  / PERCENTILE_WEIGHT) / count_left.sum()
+            left_skewness = count_left[int(len(count_left) / 2):].sum() / count_left.sum()
+            skewness = (right_skewness + left_skewness) / 2
+            pupil_left = (self.eye_left.pupil.x - EYE_MARGIN) / (self.eye_left.center[0] * 2 - EYE_MARGIN)
+            pupil_right = (self.eye_right.pupil.x - EYE_MARGIN) / (self.eye_right.center[0] * 2 - EYE_MARGIN)
+            ratio = ((pupil_left + pupil_right) / 2 * PUPIL_WEIGHT + skewness * SKEWNESS_WEIGHT) / ANGLE_TOTAL_WEIGHT
+            return ratio

     def vertical_ratio(self):
         """Returns a number between 0.0 and 1.0 that indicates the
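A self-contained sketch of the new skewness measure on a toy thresholded eye crop, with made-up values; note that the ``scipy.stats.skew`` import added above is not used in the hunks shown:

```python
import numpy as np

# Toy eye crop: 255 is background, the dark blob sits in the right half,
# mimicking a gaze toward one side.
frame = np.full((6, 8), 255, dtype=np.uint8)
frame[2:5, 5:8] = 0

# Per-column count of non-background pixels, as in the new horizontal_ratio
count = np.sum(frame < 255, axis=0)
skewness = count[int(len(count) / 2):].sum() / count.sum()
print(skewness)  # 1.0: all dark pixels lie in the right half of the crop
```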
5 changes: 4 additions & 1 deletion requirements.txt
@@ -1,3 +1,6 @@
 numpy == 1.22.0
-opencv_python == 4.2.0.32
+opencv_python == 4.8.1.78
 dlib == 19.16.0
+matplotlib==3.7.3
+scipy==1.10.1
+tqdm==4.66.1