
A breakthrough Convolutional Neural Network (CNN) application that accurately interprets sign language, transforming gestures into speech and text, fostering seamless communication for the deaf community.


Sign-Language-Recognition-using-CNN

ABSTRACT

Many people in society lack hearing and speaking abilities and therefore rely on sign language to connect and communicate with others. Technology can simplify this process through real-time sign language recognition, in which sign gestures captured in a video feed are recognized by machine learning models: the edges and vertices of the hand are detected, and a deep learning model then evaluates the gesture. Such models can assist people who need help communicating with society. We have also taken steps to make gesture evaluation more efficient, producing quick and accurate output while keeping both computational complexity and evaluation time low. The goal of this project is to build a convolutional neural network (CNN) that identifies the signs captured in a video feed and returns accurate text output, and to improve the accuracy of real-time sign language recognition so that it better serves physically challenged users. Recognition runs continuously at runtime, producing results with very little delay while sign language is being performed. A CNN is needed to train on our data and predict the displayed sign; the main advantage of using such a model is that the application learns from each round of training and generalizes with better accuracy over time. The system can recognize single characters and can also construct sentences by appending recognized letters, with an accuracy of 97 percent, allowing sign language to be recognized precisely.
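The convolution, activation, and pooling operations a CNN applies to each video frame can be sketched in plain Python. This is an illustrative toy only: the 6x6 "frame", the edge-detecting kernel, and all values are assumptions for demonstration, not the project's trained weights or actual input resolution.

```python
# Illustrative sketch: one convolution + ReLU + max-pooling step, the core
# operations a CNN applies to a frame before its dense layers predict a
# letter. The frame and kernel below are toy values, not trained weights.

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a grayscale image with one kernel."""
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(image) - kh + 1, len(image[0]) - kw + 1
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(ow)] for i in range(oh)]

def relu(x):
    """Zero out negative responses, keeping only detected features."""
    return [[max(v, 0.0) for v in row] for row in x]

def max_pool(x, size=2):
    """Downsample by keeping the strongest response in each size x size tile."""
    return [[max(x[i + a][j + b] for a in range(size) for b in range(size))
             for j in range(0, len(x[0]) - size + 1, size)]
            for i in range(0, len(x) - size + 1, size)]

# A 6x6 "frame" with a vertical edge, and a 3x3 vertical-edge kernel.
frame = [[0.0, 0.0, 0.0, 1.0, 1.0, 1.0] for _ in range(6)]
kernel = [[-1, 0, 1]] * 3

features = max_pool(relu(conv2d(frame, kernel)))
print(features)  # [[3.0, 3.0], [3.0, 3.0]]
```

The strong responses mark where the edge lies in the frame; in the real model, stacks of such feature maps feed dense layers that score each candidate letter.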

INTRODUCTION

Sign language recognition is an application that detects the gestures for different characters and converts them into text. It has broad importance in science and technology, with applications in machine learning and even in virtual reality. There are various sign languages, such as ISL (Indian Sign Language), BSL (British Sign Language), ASL (American Sign Language), and many more, implemented differently in different parts of the world. American Sign Language is a full language in its own right, expressing itself through gestures such as hand and body movement. It shares many properties with other languages, but its grammar differs from English. It is the most widely used sign language in the world, implemented mainly in America and in parts of Africa and south-eastern Asia. American Sign Language acts as a bridge between the deaf and hearing communities, and this application helps users convey their signs as text. Similar work has been done before, through different methods and with different results, but few systems reach the criteria of excellence. Sign language is unique in that it is a non-verbal language known in many parts of the world and used by many people globally. The places where it has been adopted, such as schools, hospitals, police stations, and other learning institutions, have improved its overall spread.
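Converting per-frame letter predictions into readable text can be sketched as a simple debouncing step: a letter is appended only after the model has predicted it for several consecutive frames, filtering out the jitter of a live feed. This is a hypothetical sketch; the window size, the `None` convention for "no sign detected", and the function name are assumptions, not the project's actual pipeline.

```python
# Hypothetical sketch: assemble text from a stream of per-frame letter
# predictions. A letter is committed only once it has been predicted for
# `stable_frames` consecutive frames; None means no sign was detected.

def assemble_text(predictions, stable_frames=3):
    """Append each letter once its prediction run reaches stable_frames."""
    text = []
    run_char, run_len = None, 0
    for ch in predictions:
        if ch == run_char:
            run_len += 1
        else:
            run_char, run_len = ch, 1
        # Commit exactly once per stable run, skipping "no sign" frames.
        if run_len == stable_frames and ch is not None:
            text.append(ch)
    return "".join(text)

stream = ["H", "H", "H", "H", None, "I", "I", "I"]
print(assemble_text(stream))  # HI
```

Requiring a stable run before committing trades a few frames of latency for far fewer spurious letters, which matters when predictions arrive continuously from a webcam.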

Why American Sign Language?

Sign languages normally differ from one another in structure, grammar, and gestures. American Sign Language (ASL) uses a one-handed fingerspelling alphabet, unlike many other sign languages, which makes it easier to interpret and implement. Its gestures also draw on cultural conventions people are familiar with throughout their lives, which attracts a wider audience. British Sign Language (BSL) communicates through two-handed operations, and this makes the language harder for people to understand and interpret. ISL (Indian Sign Language) is the sign language known in India; however, since research and sources for correct translations are scarcer and it reaches a smaller audience than ASL, many people choose ASL over other languages. ISL also contains several identical gestures with different meanings, which leads to confusion when they are interpreted. Although the interpretation time for all of these languages is nearly the same for characters and words alike, ASL is the most globally known, which is why we chose it for the sign language converter.

Motivation

Communication is the most essential requirement in society. Life is very difficult for individuals who cannot communicate with other people, and this is a challenge that deaf and mute people face every day. This model is therefore needed so that people with deafness or muteness can take their rightful place in a social environment. Because of their primary impairment, they often also suffer from isolation and depression, and it would be better if they could integrate more socially and build more connections. People sometimes suggest an alternative: "Instead of using another language, why don't deaf people simply write things down as a means of communication?" From the perspective of a hearing person this may seem reasonable and inviting, but people facing these difficulties need humane solutions to their problems. They need to express their emotions and actions, and this cannot be done through writing alone. That is one more reason we decided to contribute to the field of sign language recognition. Producing results as written text helps these users communicate with people who are not able to sign. Such an application would help deaf and mute people and grant a little ease in their lives; the more such applications are developed and the more the technology improves, the better these users will be served by such a wide platform.
