Welcome to the ASL Recognition project! This project aims to recognize American Sign Language (ASL) gestures using OpenCV in Python.
American Sign Language (ASL) is a complete, complex language that employs signs made by moving the hands combined with facial expressions and postures of the body. This project utilizes computer vision techniques to recognize and interpret ASL gestures in real time.
- Real-time Recognition: Utilize OpenCV to perform real-time ASL gesture recognition (a minimal capture-loop sketch follows this feature list).
- Gesture Detection: Detect hand gestures from video input.
- Model Integration: Integrate machine learning models for gesture classification.
- Graphical User Interface (GUI): Develop a user-friendly interface for interaction.
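The actual detection and classification pipeline lives in the project's own modules; purely as an illustration of the real-time loop described by the features above, here is a minimal sketch that opens the default webcam with OpenCV, crops a hand region of interest, and overlays a label from a placeholder classifier. The `classify` function, the ROI coordinates, and the window name are hypothetical and not taken from this repository.

```python
# Minimal sketch (not the project's actual code) of a real-time OpenCV loop.
# classify(), the ROI coordinates, and the window title are illustrative only.
import cv2

def classify(roi):
    """Placeholder for a gesture classifier; a trained model would go here."""
    return "?"

cap = cv2.VideoCapture(0)                      # open the default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = frame[100:300, 100:300]              # fixed region where the hand is expected
    label = classify(roi)                      # predicted ASL letter (placeholder)
    cv2.putText(frame, label, (100, 90),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("ASL Recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):      # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```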
- Clone the repository: `git clone git@github.com:SINEdowskY/ASL-Recognition.git`
- Navigate to the project directory: `cd ASL-Recognition`
- Create a virtual environment (optional but recommended): `python3 -m venv env`
- Activate the virtual environment:
  - On Windows: `env\Scripts\activate`
  - On macOS and Linux: `source env/bin/activate`
- Install the dependencies: `pip install -r requirements.txt` (an optional sanity check follows this list)
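As an optional sanity check (not part of the original setup steps), you can confirm that the OpenCV bindings are importable from the activated environment before launching the app:

```python
# Quick check that OpenCV installed correctly in the active environment.
import cv2
print(cv2.__version__)
```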
- Run the Application: Execute the main script to start the ASL recognition application: `python front.py`
- Gesture Recognition: Position your hand in front of the camera and perform ASL gestures. The application will attempt to recognize and display the corresponding ASL letter or word.
- Customization: Modify the code to add new gestures, improve recognition accuracy, or integrate additional features (see the hedged label-map example after this list).
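How new gestures are added depends on how the project's classifier is structured; as a hedged illustration only, a common pattern is to keep a mapping from model output indices to letters and extend it, then retrain the model on samples of the new sign. The `LABELS` dictionary and `index_to_letter` helper below are hypothetical, not the repository's actual code.

```python
# Hypothetical index-to-letter map; extending the recognizer usually means
# adding an entry here and retraining the model on samples of the new sign.
LABELS = {0: "A", 1: "B", 2: "C"}  # illustrative subset only

def index_to_letter(class_index: int) -> str:
    """Translate a classifier output index into a display label."""
    return LABELS.get(class_index, "?")

print(index_to_letter(1))  # -> B
```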
Contributions are welcome! If you have any suggestions, enhancements, or bug fixes, feel free to open an issue or create a pull request.
- Fork the repository.
- Create your feature branch (`git checkout -b feature/YourFeature`).
- Commit your changes (`git commit -am 'Add some feature'`).
- Push to the branch (`git push origin feature/YourFeature`).
- Open a pull request.
This project is licensed under the MIT License.