This project implements gesture detection using a model trained with Google's Teachable Machine. The application detects hand gestures (fist, palm, thumb up, and thumb down) via webcam input and uses them to drive an on-screen element: a red square that moves according to the detected gesture.
## Features

- Real-time gesture detection using a pre-trained model from Teachable Machine.
- Webcam-based interface to detect hand gestures.
- Moves an on-screen red square based on the detected gesture (see the sketch after this list):
  - Fist: move right.
  - Palm: move left.
  - Thumb up: move up.
  - Thumb down: move down.
- Simple and intuitive UI with webcam feed display.
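A minimal sketch of how that gesture-to-movement mapping might be wired up. The class names, the `STEP` size, and the `square` object are assumptions for illustration; the class names must match the labels used when training the Teachable Machine model.

```js
// Hypothetical gesture-to-movement mapping; class names must match the
// labels the Teachable Machine model was trained with (assumed here).
const STEP = 10; // pixels moved per recognized gesture (assumed value)

const GESTURE_DELTAS = {
  "Fist":       { dx:  STEP, dy: 0 },     // move right
  "Palm":       { dx: -STEP, dy: 0 },     // move left
  "Thumb Up":   { dx: 0,     dy: -STEP }, // move up (screen y grows downward)
  "Thumb Down": { dx: 0,     dy:  STEP }, // move down
};

// `square` is assumed to track its own position and wrap a DOM element,
// e.g. { x: 0, y: 0, element: document.getElementById("square") }.
function moveSquare(square, className) {
  const delta = GESTURE_DELTAS[className];
  if (!delta) return; // unrecognized class: leave the square where it is
  square.x += delta.dx;
  square.y += delta.dy;
  square.element.style.left = `${square.x}px`;
  square.element.style.top  = `${square.y}px`;
}
```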
## Technologies Used

- Teachable Machine by Google
- TensorFlow.js for pose estimation and gesture recognition
- HTML5 Canvas for rendering the webcam feed and the detected keypoints/skeleton
- JavaScript for real-time webcam control and DOM manipulation
## How It Works

- The model trained in Teachable Machine is loaded in the browser via TensorFlow.js.
- The webcam is initialized, and its feed is drawn onto a canvas element.
- Hand gestures are detected in real time, and the position of the red square is updated based on the detected gesture (a minimal sketch of this loop follows).
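A sketch of that load/detect/update loop using the standard Teachable Machine pose API (`tmPose`). The model path `./my_model/`, the canvas id, the webcam size, and the 0.9 confidence threshold are assumptions, not values taken from this repository; `moveSquare` refers to the hypothetical helper sketched under Features.

```js
// Assumes tf.js and the tmPose library are already loaded (see Installation).
const MODEL_DIR = "./my_model/"; // assumed location of the exported model

let model, webcam, ctx;

async function init() {
  model = await tmPose.load(MODEL_DIR + "model.json", MODEL_DIR + "metadata.json");
  webcam = new tmPose.Webcam(200, 200, true); // width, height, mirror
  await webcam.setup(); // triggers the browser's camera permission prompt
  await webcam.play();
  ctx = document.getElementById("canvas").getContext("2d");
  window.requestAnimationFrame(loop);
}

async function loop() {
  webcam.update(); // grab the latest webcam frame

  // Run pose estimation, then classify the pose into a gesture class.
  const { pose, posenetOutput } = await model.estimatePose(webcam.canvas);
  const predictions = await model.predict(posenetOutput);

  // Draw the frame plus the detected keypoints and skeleton.
  ctx.drawImage(webcam.canvas, 0, 0);
  if (pose) {
    tmPose.drawKeypoints(pose.keypoints, 0.5, ctx);
    tmPose.drawSkeleton(pose.keypoints, 0.5, ctx);
  }

  // Act on the most confident class if it clears the (assumed) threshold.
  const best = predictions.reduce((a, b) => (a.probability > b.probability ? a : b));
  if (best.probability > 0.9) {
    moveSquare(square, best.className); // hypothetical helper from the Features sketch
  }

  window.requestAnimationFrame(loop);
}

init();
```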
## Installation

- Clone the repository:

  ```bash
  git clone https://github.com/your-username/gesture-detection.git
  ```

- Open `index.html` in your preferred browser.

No additional installation steps are needed, as the project loads TensorFlow.js and the Teachable Machine library from a CDN.
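For reference, a Teachable Machine pose export typically includes script tags like the following in `index.html`; the exact versions pinned by this project may differ.

```html
<!-- Versions shown are from a typical Teachable Machine export and may differ here. -->
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@1.3.1/dist/tf.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/@teachablemachine/pose@0.8/dist/teachablemachine-pose.min.js"></script>
```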
## Usage

- Open the app.
- Allow access to the webcam when prompted.
- Use hand gestures (fist, palm, thumb up, thumb down) to move the red square on the screen.
## Future Improvements

- Add more gestures for additional functionality.
- Improve gesture recognition accuracy.
- Add sound or other feedback mechanisms for detected gestures (one possible shape is sketched below).
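One possible shape for the sound-feedback idea, assuming a hypothetical `beep.mp3` asset; nothing here exists in the repository yet.

```js
// Hypothetical audio feedback: play a short clip when the gesture changes.
const gestureSound = new Audio("beep.mp3"); // assumed asset, not in the repo
let lastGesture = null;

function onGestureDetected(className) {
  if (className === lastGesture) return; // only beep on a *new* gesture
  lastGesture = className;
  gestureSound.currentTime = 0;
  // Browsers may block playback before the first user interaction.
  gestureSound.play().catch(() => {});
}
```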