Welcome to the Sign Language Action Detection Project! This project combines keypoint detection with action recognition to interpret the language of sign gestures. By feeding MediaPipe keypoints through LSTM layers, we've built a neural network capable of interpreting sign language in real time.
- Keypoint Extraction with MediaPipe: Extract holistic keypoints (pose, face, and hand landmarks) from video frames using MediaPipe. This foundational step captures the hand and body movements essential for accurate sign language interpretation (a minimal extraction sketch follows this list).
- Action Detection with LSTM Layers: Build a model that uses stacked LSTM layers to process sequences of keypoints and classify the nuanced gestures they encode (see the model sketch below).
- Real-time Sign Language Prediction: Run the trained model on live video and watch it interpret sign gestures on the fly, a practical demonstration of the full pipeline (a prediction-loop sketch closes out the examples below).
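As a rough illustration of the keypoint extraction step, here is a minimal sketch using MediaPipe's Holistic solution. The feature layout (pose visibility values included, zero vectors for undetected landmarks) and the resulting 1662-dimensional vector are assumptions for illustration, not a prescription of how this repo lays out its features:

```python
import cv2
import mediapipe as mp
import numpy as np

mp_holistic = mp.solutions.holistic

def extract_keypoints(results):
    """Flatten holistic landmarks into one feature vector per frame.

    Pose: 33 landmarks x (x, y, z, visibility); face: 468 x (x, y, z);
    each hand: 21 x (x, y, z). Missing detections are zero-filled so
    every frame yields the same 1662-dimensional vector.
    """
    pose = (np.array([[p.x, p.y, p.z, p.visibility]
                      for p in results.pose_landmarks.landmark]).flatten()
            if results.pose_landmarks else np.zeros(33 * 4))
    face = (np.array([[p.x, p.y, p.z]
                      for p in results.face_landmarks.landmark]).flatten()
            if results.face_landmarks else np.zeros(468 * 3))
    lh = (np.array([[p.x, p.y, p.z]
                    for p in results.left_hand_landmarks.landmark]).flatten()
          if results.left_hand_landmarks else np.zeros(21 * 3))
    rh = (np.array([[p.x, p.y, p.z]
                    for p in results.right_hand_landmarks.landmark]).flatten()
          if results.right_hand_landmarks else np.zeros(21 * 3))
    return np.concatenate([pose, face, lh, rh])

with mp_holistic.Holistic(min_detection_confidence=0.5,
                          min_tracking_confidence=0.5) as holistic:
    frame = cv2.imread("frame.jpg")  # stand-in for a single BGR video frame
    results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    keypoints = extract_keypoints(results)  # shape: (1662,)
```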
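One plausible way to wire the LSTM classifier is shown below. The layer widths, the 30-frame sequence length, and the three example sign classes are all illustrative assumptions; adjust them to your dataset:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

SEQUENCE_LENGTH = 30   # frames per gesture clip (assumption)
NUM_KEYPOINTS = 1662   # features per frame, matching the extractor above
NUM_SIGNS = 3          # e.g. "hello", "thanks", "iloveyou" (placeholders)

model = Sequential([
    # Stacked LSTMs consume the keypoint sequence; the first two return
    # full sequences so the next recurrent layer sees every timestep.
    LSTM(64, return_sequences=True, activation='relu',
         input_shape=(SEQUENCE_LENGTH, NUM_KEYPOINTS)),
    LSTM(128, return_sequences=True, activation='relu'),
    LSTM(64, return_sequences=False, activation='relu'),
    Dense(64, activation='relu'),
    Dense(32, activation='relu'),
    Dense(NUM_SIGNS, activation='softmax'),  # one probability per sign
])
model.compile(optimizer='Adam', loss='categorical_crossentropy',
              metrics=['categorical_accuracy'])
model.summary()
```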
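Finally, a sketch of how the real-time loop might work: a rolling window of the most recent keypoint vectors is fed to the model on every frame. It assumes the `extract_keypoints` helper and trained `model` from the sketches above; the label list and confidence threshold are placeholders:

```python
import cv2
import mediapipe as mp
import numpy as np
from collections import deque

SIGNS = ["hello", "thanks", "iloveyou"]  # placeholder class labels
SEQUENCE_LENGTH = 30                     # must match the model's input shape
THRESHOLD = 0.8                          # ignore low-confidence predictions
window = deque(maxlen=SEQUENCE_LENGTH)   # rolling buffer of keypoint vectors

cap = cv2.VideoCapture(0)
with mp.solutions.holistic.Holistic(min_detection_confidence=0.5,
                                    min_tracking_confidence=0.5) as holistic:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Reuse extract_keypoints() and the trained `model` from above.
        results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        window.append(extract_keypoints(results))

        if len(window) == SEQUENCE_LENGTH:
            # Predict on the latest window; batch dimension of 1.
            probs = model.predict(np.expand_dims(np.array(window), axis=0))[0]
            if probs.max() > THRESHOLD:
                cv2.putText(frame, SIGNS[int(probs.argmax())], (10, 30),
                            cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)

        cv2.imshow("Sign Language Detection", frame)
        if cv2.waitKey(10) & 0xFF == ord('q'):
            break
cap.release()
cv2.destroyAllWindows()
```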
Feel free to explore and adapt the codebase to your needs. Whether you're working with a different sign language dataset or modifying the architecture for a specialized application, this project is meant to serve as a versatile foundation.
- Clone the Repository: Clone the repository to your local machine to create a dedicated space for exploration and experimentation.
- Install Dependencies: Install TensorFlow, Keras, and MediaPipe with:
```
pip install tensorflow keras mediapipe
```