Automatically caption ASL videos using deep neural networks, trained on the dataset provided in the paper "Word-level Deep Sign Language Recognition from Video: A New Large-scale Dataset and Methods Comparison". The project aims to develop a browser extension that provides live captioning for sign language within a video call.
- Live captioning for sign language during a video call.
- All participants in the Google Meet or Zoom call first join the socket room through our extension.
- Our model translates the sign language into text on the client server using sockets, and the text is then broadcast to the room.
- The broadcast text appears as subtitles for everyone present in the meeting.
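The join-and-broadcast flow above can be sketched as a simple in-memory room. This is a hypothetical illustration of the idea, not the actual extension or server code; all names here (`CaptionRoom`, `join`, `broadcast`) are made up for the example.

```python
class CaptionRoom:
    """Collects members and fans each predicted caption out to all of them."""

    def __init__(self, name):
        self.name = name
        self.members = {}  # member id -> list of captions received so far

    def join(self, member_id):
        # Every meeting participant joins the room through the extension.
        self.members[member_id] = []

    def broadcast(self, caption):
        # Text predicted from the signer's video is sent to every member,
        # who renders it as a subtitle.
        for inbox in self.members.values():
            inbox.append(caption)


room = CaptionRoom("demo-room")
room.join("alice")
room.join("bob")
room.broadcast("HELLO")
print(room.members["bob"])  # → ['HELLO']
```

In the real project this fan-out happens over sockets rather than in memory, but the room semantics are the same: one sender, every participant receives the same subtitle text.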
Let's see how to start the client server and start making predictions!
For Linux users, first cd into the client-server directory, install the requirements from requirements.txt inside a virtual environment, and then run:

```
sudo bash run.sh
```
Next, open another terminal in the same directory, make sure you're inside the virtual environment you created earlier, and run:

```
python3 charserver.py <INSERT A NAME FOR THE SOCKET ROOM>
```
Or, to use the word-level prediction server instead, run:

```
python3 wordserver.py <INSERT A NAME FOR THE SOCKET ROOM>
```
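The difference between the two servers is the unit of prediction: the char-level server emits one fingerspelled letter per prediction, which must then be assembled into readable text, while the word-level server emits whole words. As a hypothetical illustration (not necessarily how `charserver.py` decodes internally), a common CTC-style collapse merges repeated letters and drops a blank marker:

```python
def ctc_collapse(predictions, blank="-"):
    """Collapse a stream of per-frame letter predictions into text:
    drop consecutive duplicates, then discard the blank marker."""
    out = []
    prev = None
    for ch in predictions:
        if ch != prev and ch != blank:
            out.append(ch)
        prev = ch
    return "".join(out)


# Frame-by-frame predictions; the blank "-" separates the repeated L's.
char_stream = ["H", "H", "E", "L", "-", "L", "O"]
print(ctc_collapse(char_stream))  # → HELLO
```

The word-level server sidesteps this assembly step entirely, since each prediction is already a complete word from the dataset's vocabulary.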
Now, anyone can use our extension to simply join the room and receive all the subtitles. After joining the room, the person who will be signing must go to the host settings on Google Meet or Zoom and select the My Fake Webcam
option under camera, as shown below:
Sharanya Mukherjee
Made with ❤️ by DSC VIT