STUDY OF SIGN LANGUAGE USING NEURAL NETWORKS

Authors

  • Amal Mathew, Shriya, S. Aravinda Prabhu

DOI:

10.25215/8119070771.14

Keywords:

Sign language, Feature Extraction and Representation, Artificial Neural Networks, Convolutional Neural Network, TensorFlow, Keras, OpenCV (Open Source Computer Vision).

Abstract

Sign language uses hand gestures, facial expressions, and body language to communicate, and consists of either word-level signs or fingerspelling. For the deaf and mute (D&M) community it is often the sole means of communication, yet hearing individuals rarely attempt to learn it. As a result, hearing-impaired people cannot converse with hearing people without a sign language interpreter, and many deaf persons are left socially isolated. A system that can decode sign language automatically is therefore required; such a system would give hearing-impaired people a platform for communication without anyone's help. Sign language is one of the most traditional and natural forms of communication. This work studies sign language recognition based on the fingerspelling technique, employing neural networks. D&M individuals can communicate only through sign languages, a form of nonverbal communication, because a communication-related disability prevents them from using spoken languages. Communication is the act of exchanging thoughts and messages through a variety of channels, such as speech, signals, behaviour, and visuals; D&M people use their hands to produce a range of gestures to communicate with others.
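The abstract describes a fingerspelling classifier built with a convolutional neural network. As a hedged illustration of the final classification step of such a network (not the authors' code; the letter list and score values below are invented for demonstration), the mapping from raw network outputs to a predicted letter can be sketched in pure Python as a softmax over per-letter scores:

```python
import math

def softmax(scores):
    """Convert raw network outputs (logits) into probabilities that sum to 1."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three fingerspelled letters (illustrative values only).
letters = ["A", "B", "C"]
logits = [2.0, 0.5, -1.0]

probs = softmax(logits)
# The highest-probability class is taken as the recognised letter.
prediction = letters[probs.index(max(probs))]
```

In a full pipeline of the kind the abstract implies, the logits would come from a CNN applied to a hand image captured via OpenCV, but the decision rule at the end is the same argmax over softmax probabilities.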


Published

2023-07-07

How to Cite

Amal Mathew, Shriya, S. Aravinda Prabhu. (2023). STUDY OF SIGN LANGUAGE USING NEURAL NETWORKS. Redshine Archive, 2. https://doi.org/10.25215/8119070771.14