Used the MediaPipe library to extract the face landmarks (x and y coordinates only). Performed a preprocessing step to make the model independent of face position and scale by normalizing the landmarks relative to the nose position. Trained different regression models to estimate the pitch, yaw, and roll values. Used rotation, translation, and projection of the axes onto the image to visualize the direction the person is looking. Used the pitch, yaw, and roll values to define that direction.
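A minimal sketch of the normalization and regression steps described above, using random arrays in place of real MediaPipe landmarks and an arbitrarily chosen scikit-learn regressor (the normalization scheme, the nose landmark index, and the model choice are assumptions, not the project's exact code):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Stand-in data: MediaPipe Face Mesh yields 468 (x, y) landmarks per face;
# random values replace real detections here.
rng = np.random.default_rng(0)
n_samples, n_landmarks = 200, 468
landmarks = rng.random((n_samples, n_landmarks, 2))

def normalize(face):
    """Center on the nose landmark and divide by the landmark spread,
    removing dependence on face position and scale (assumed scheme)."""
    nose = face[1]  # index 1 is the nose tip in the Face Mesh topology
    centered = face - nose
    scale = np.linalg.norm(centered, axis=1).max()
    return (centered / scale).ravel()

X = np.array([normalize(f) for f in landmarks])
y = rng.uniform(-90, 90, size=(n_samples, 3))  # fake pitch, yaw, roll targets

# scikit-learn tree ensembles support multi-output regression directly.
model = RandomForestRegressor(n_estimators=20, random_state=0).fit(X, y)
pred = model.predict(X[:1])  # one (pitch, yaw, roll) triple per face
```

With real data, `y` would come from the AFLW2000 annotations rather than random values.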
For this project, the dataset used is the AFLW2000 dataset, which consists of 2,000 face images annotated with information such as the facial landmarks and the pitch, yaw, and roll values for each image.
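In the commonly used AFLW2000-3D release, each image ships with a MATLAB `.mat` annotation whose `Pose_Para` field starts with pitch, yaw, and roll in radians. A hedged sketch of extracting them (the key layout follows that release; a hand-built dict stands in for a real `scipy.io.loadmat` call):

```python
import numpy as np
# from scipy.io import loadmat  # used with the real dataset files

def pose_from_mat(mat):
    """Return (pitch, yaw, roll) in degrees from an AFLW2000-3D
    annotation dict, assuming the Pose_Para layout of that release."""
    pitch, yaw, roll = mat["Pose_Para"][0][:3]
    return np.degrees([pitch, yaw, roll])

# Stand-in for loadmat("imageXXXXX.mat"): first three entries are the
# pose angles in radians, remaining entries are other pose parameters.
fake_mat = {"Pose_Para": np.array([[0.1, -0.2, 0.05, 0.0, 0.0, 0.0, 0.0]])}
angles = pose_from_mat(fake_mat)
```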
Demo video: final_qD5HjuhD.mp4
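The visualization and direction steps from the overview can be sketched with NumPy alone; the rotation order, sign conventions, and direction thresholds below are assumptions for illustration, and the OpenCV drawing calls are omitted:

```python
import numpy as np

def rotation_matrix(pitch, yaw, roll):
    """Compose rotations about x (pitch), y (yaw), and z (roll);
    the axis/ordering convention is an assumption for illustration."""
    p, y, r = np.radians([pitch, yaw, roll])
    Rx = np.array([[1, 0, 0], [0, np.cos(p), -np.sin(p)], [0, np.sin(p), np.cos(p)]])
    Ry = np.array([[np.cos(y), 0, np.sin(y)], [0, 1, 0], [-np.sin(y), 0, np.cos(y)]])
    Rz = np.array([[np.cos(r), -np.sin(r), 0], [np.sin(r), np.cos(r), 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project_axes(pitch, yaw, roll, origin=(100, 100), length=50):
    """Rotate the unit x/y/z axes and drop the depth coordinate to get
    2D endpoints that could then be drawn on the frame with cv2.line."""
    R = rotation_matrix(pitch, yaw, roll)
    axes = R @ (np.eye(3) * length)        # columns: rotated x, y, z axes
    return axes[:2].T + np.array(origin)   # (3, 2) endpoints in image coords

def direction(pitch, yaw, thresh=15):
    """Map angles to a coarse label; thresholds and which sign means
    left vs. right are assumptions, not the project's exact rule."""
    if yaw > thresh:
        return "left"
    if yaw < -thresh:
        return "right"
    if pitch > thresh:
        return "up"
    if pitch < -thresh:
        return "down"
    return "forward"
```

With zero angles, `project_axes` leaves the axes unrotated and `direction` reports "forward".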
- MediaPipe
- NumPy
- OpenCV
- Matplotlib
- Pandas
- scikit-learn
- Mohamed Badr (Me)
- Mohamed El-feky