This project's methodology uses MediaPipe Studio to develop and train an AI model for American Sign Language (ASL) recognition. In the initial phase, a comprehensive dataset of ASL letters was compiled, and the model was trained on these letters using MediaPipe Studio's built-in hand tracking and gesture recognition functionality. The project was developed with machine learning tools available in the Python ecosystem: Google MediaPipe for extracting key coordinate points (hand landmarks), the TensorFlow framework with Keras for model training, and OpenCV for video handling. The approach was to train the AI model on the extracted features and then classify videos with the trained model, as sketched below.
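The following is a minimal sketch of that pipeline, assuming the standard MediaPipe Hands solution for 21-point landmark extraction and a small Keras dense network over the flattened (x, y, z) coordinates. The class count, layer sizes, and function names here are illustrative placeholders, not the exact configuration used in the project.

```python
import cv2
import mediapipe as mp
import numpy as np
import tensorflow as tf

NUM_CLASSES = 26          # assumption: one class per ASL letter A-Z
NUM_FEATURES = 21 * 3     # 21 hand landmarks, each with (x, y, z)

mp_hands = mp.solutions.hands


def extract_landmarks(frame_bgr, hands):
    """Return a flat (63,) array of hand-landmark coordinates, or None if no hand is found."""
    results = hands.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None
    lm = results.multi_hand_landmarks[0].landmark
    return np.array([[p.x, p.y, p.z] for p in lm], dtype=np.float32).flatten()


def build_classifier():
    """Small dense network over the landmark feature vector (illustrative architecture)."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(NUM_FEATURES,)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


def classify_video(video_path, model, label_names):
    """Run the trained classifier frame by frame over a video file."""
    cap = cv2.VideoCapture(video_path)
    predictions = []
    with mp_hands.Hands(static_image_mode=False,
                        max_num_hands=1,
                        min_detection_confidence=0.5) as hands:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            features = extract_landmarks(frame, hands)
            if features is None:
                continue  # skip frames where no hand was detected
            probs = model.predict(features[np.newaxis, :], verbose=0)[0]
            predictions.append(label_names[int(np.argmax(probs))])
    cap.release()
    return predictions
```

In this layout, the same landmark-extraction step serves both training (building a feature dataset from labeled images or frames) and inference (classifying new videos frame by frame), which keeps the features consistent across both phases.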
The data collection process and the final testing are shown below.