LSTM DEEP LEARNING APPROACH FOR INTERPRETING SIGN LANGUAGE

Authors:

Mr. Md. Shakeel Ahmed, CH MSVN Lakshmi Dedeepya, Gudipalli Suhrudha, Ganji Kavya Sri, Gudeti Lavanya

Page No: 682-689

Abstract:

Interpreting sign language is a challenging task due to the complexity and variability of hand gestures. In recent years, deep learning models such as convolutional neural networks (CNNs) have shown promising results on this problem, but they are computationally expensive and better suited to recognizing alphabets than words. This paper proposes an LSTM deep learning model to recognize hand gestures. Our project's major goal is to eliminate communication barriers between people who are deaf or hard of hearing and hearing people by translating their gestures or actions into text (word form) that is clear to everyone. To accomplish this, we employ a camera-based computer vision system built with OpenCV that has been trained using a neural network to recognize and translate signs. In our study, MediaPipe Holistic is used to detect the facial, hand, and pose movements that make up a gesture. This information is then fed to Long Short-Term Memory (LSTM) layers, the part of the neural network that learns and identifies these signs. The trained network recognizes different movements, with the output appearing as text on the screen.
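The abstract describes the pipeline only at a high level, and no implementation accompanies this listing. The following is a minimal sketch of how such a pipeline is commonly assembled with OpenCV, MediaPipe Holistic, and stacked Keras LSTM layers. The 30-frame window, the layer sizes, and the three-word ACTIONS vocabulary are illustrative assumptions rather than the paper's actual configuration, and the model would need to be trained on recorded keypoint sequences before it produces meaningful predictions.

```python
import cv2
import numpy as np
import mediapipe as mp
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

mp_holistic = mp.solutions.holistic

SEQUENCE_LENGTH = 30                        # assumed frames per gesture clip
ACTIONS = ["hello", "thanks", "iloveyou"]   # hypothetical gesture vocabulary

def extract_keypoints(results):
    """Flatten pose, face, and hand landmarks into one 1662-value vector,
    zero-filling any component MediaPipe fails to detect in a frame."""
    pose = (np.array([[p.x, p.y, p.z, p.visibility]
                      for p in results.pose_landmarks.landmark]).flatten()
            if results.pose_landmarks else np.zeros(33 * 4))
    face = (np.array([[p.x, p.y, p.z]
                      for p in results.face_landmarks.landmark]).flatten()
            if results.face_landmarks else np.zeros(468 * 3))
    lh = (np.array([[p.x, p.y, p.z]
                    for p in results.left_hand_landmarks.landmark]).flatten()
          if results.left_hand_landmarks else np.zeros(21 * 3))
    rh = (np.array([[p.x, p.y, p.z]
                    for p in results.right_hand_landmarks.landmark]).flatten()
          if results.right_hand_landmarks else np.zeros(21 * 3))
    return np.concatenate([pose, face, lh, rh])

# Stacked LSTM classifier over keypoint sequences; layer sizes are assumed.
model = Sequential([
    LSTM(64, return_sequences=True, activation="relu",
         input_shape=(SEQUENCE_LENGTH, 1662)),
    LSTM(128, return_sequences=True, activation="relu"),
    LSTM(64, return_sequences=False, activation="relu"),
    Dense(64, activation="relu"),
    Dense(len(ACTIONS), activation="softmax"),  # one probability per word
])
model.compile(optimizer="Adam", loss="categorical_crossentropy",
              metrics=["categorical_accuracy"])
# In practice the model is trained (model.fit) on recorded keypoint
# sequences, or trained weights are loaded, before the live loop below.

cap = cv2.VideoCapture(0)
sequence = []
with mp_holistic.Holistic(min_detection_confidence=0.5,
                          min_tracking_confidence=0.5) as holistic:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures BGR.
        results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        sequence.append(extract_keypoints(results))
        sequence = sequence[-SEQUENCE_LENGTH:]  # sliding window of frames
        if len(sequence) == SEQUENCE_LENGTH:
            probs = model.predict(np.expand_dims(sequence, 0), verbose=0)[0]
            cv2.putText(frame, ACTIONS[int(np.argmax(probs))], (10, 30),
                        cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2)
        cv2.imshow("Sign recognition", frame)
        if cv2.waitKey(10) & 0xFF == ord("q"):
            break
cap.release()
cv2.destroyAllWindows()
```

The sliding window mirrors the abstract's word-level (rather than alphabet-level) framing: the classifier sees a short sequence of whole-body keypoints and emits one word label per window, which is then drawn on screen as text.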

Keywords:

LSTM, OpenCV, Neural Network, MediaPipe Holistic, Deep Learning

Volume & Issue

Volume 12, Issue 4
