ENHANCED REAL-TIME ASL-TO-TEXT CONVERSION AND VIDEO CONFERENCING INTEGRATION

Authors:

S. SELVI

Page No: 18-30

Abstract:

People with hearing impairments face numerous communication challenges and often need an interpreter to understand what others are saying. Sign language is a method of communication that employs hand gestures to convey meaning. Automatic conversion of sign language to text facilitates communication between deaf or hard-of-hearing individuals and the hearing community, while also helping people learn sign language. Despite ongoing scientific study, the models currently in use are unable to produce precise predictions. It is proposed to create a deep learning model trained on American Sign Language (ASL) to convert ASL gestures into text, implemented using a convolutional neural network and a transfer learning model built on this framework. The system identifies hand gestures using key point detection frameworks such as MediaPipe and translates them into corresponding text using a Long Short-Term Memory (LSTM) model. The pre-processed sign language data is used to train the LSTM neural network to achieve accurate gesture recognition and text generation. In addition, a video conferencing application that integrates the trained model for real-time text generation has been developed. The predictive model has been experimentally evaluated and achieves an accuracy of 95%.
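The recognition pipeline described above, per-frame hand keypoints fed through an LSTM that accumulates a sequence representation, can be sketched as follows. This is a minimal pure-Python illustration, not the paper's implementation; the sizes (63 input features, corresponding to 21 MediaPipe hand landmarks with 3 coordinates each, and a hidden size of 8) are assumptions chosen for brevity, and a real system would use a trained deep learning framework.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyLSTMCell:
    """Minimal LSTM cell over per-frame keypoint vectors.

    Each frame is a flat vector of hand landmarks (assumed here:
    21 MediaPipe landmarks x 3 coordinates = 63 features). Weights
    are random; a real model would be trained on gesture data."""

    def __init__(self, input_size, hidden_size, seed=0):
        rng = random.Random(seed)
        n = input_size + hidden_size
        # One weight matrix and bias per gate:
        # input (i), forget (f), candidate cell (g), output (o).
        self.W = {g: [[rng.uniform(-0.1, 0.1) for _ in range(n)]
                      for _ in range(hidden_size)] for g in "ifgo"}
        self.b = {g: [0.0] * hidden_size for g in "ifgo"}
        self.hidden_size = hidden_size

    def step(self, x, h, c):
        z = x + h  # concatenate frame features with previous hidden state
        gates = {}
        for g in "ifgo":
            pre = [sum(w * v for w, v in zip(row, z)) + b
                   for row, b in zip(self.W[g], self.b[g])]
            act = math.tanh if g == "g" else sigmoid
            gates[g] = [act(p) for p in pre]
        # Standard LSTM state update: forget old cell state, add new input.
        c_new = [f * cc + i * gg
                 for f, cc, i, gg in zip(gates["f"], c, gates["i"], gates["g"])]
        h_new = [o * math.tanh(cv) for o, cv in zip(gates["o"], c_new)]
        return h_new, c_new

def encode_sequence(cell, frames):
    """Run the LSTM over a sequence of keypoint frames.

    The final hidden state summarizes the gesture and would feed a
    classifier that maps it to the corresponding text label."""
    h = [0.0] * cell.hidden_size
    c = [0.0] * cell.hidden_size
    for x in frames:
        h, c = cell.step(x, h, c)
    return h

if __name__ == "__main__":
    cell = TinyLSTMCell(input_size=63, hidden_size=8)
    # Fake gesture: 30 frames of 63 keypoint coordinates each.
    rng = random.Random(1)
    frames = [[rng.random() for _ in range(63)] for _ in range(30)]
    h = encode_sequence(cell, frames)
    print(len(h))  # 8
```

In practice, the keypoint vectors would come from a hand-tracking library such as MediaPipe, and the LSTM and downstream classifier would be built and trained in a deep learning framework rather than written by hand.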


Volume & Issue

Volume-14, Issue-3

Keywords

American Sign Language; Sign recognition; CNN; Text conversion; Hand gesture recognition; MediaPipe.