Intelligent Gesture Recognition System for Translating Indian Sign Language to English

Anagha Kulkarni (Corresponding author), Yael Robert, Yashshree Nigudkar, Pranjali Barve, Namita Mutha
Department of Information Technology, Cummins College of Engineering for Women, Karvenagar, Pune, 411052, Maharashtra, India.
Abstract
Sign languages combine hand movements and facial gestures. Alphabets and digits form static signs, whereas words and sentences form dynamic signs. Owing to cultural differences and regional variations, each sign language has evolved its own sign for a given word; in effect, every sign language has its own vocabulary of signs. As a result, recognizing words and phrases in sign languages is difficult. This research focuses on recognizing the spatial and time-distributed features of Indian Sign Language. The main goal of this work is to identify gestures in Indian Sign Language using a multi-class classification technique. Various experiments were conducted using Convolutional Neural Networks, Long Short-Term Memory, and Gated Recurrent Units. Processing videos posed its own challenges. The proposed methodology achieved an accuracy of 87.5% on unseen test data. The most significant advantage of this system is that it requires no special device such as a depth-sensing camera, hand gloves, or special t-shirts.
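
The abstract does not specify the exact network configuration. As a rough illustration of the architecture family it names, where a per-frame CNN extracts spatial features and an LSTM (or GRU) models their temporal order, the following is a minimal Keras sketch. The frame count, image size, layer widths, and number of classes are assumptions chosen for illustration, not the authors' settings.

import tensorflow as tf
from tensorflow.keras import layers, models

# Assumed input shape and class count (hypothetical, for illustration only).
FRAMES, HEIGHT, WIDTH, CHANNELS = 30, 64, 64, 3
NUM_CLASSES = 10

def build_model():
    model = models.Sequential([
        layers.Input(shape=(FRAMES, HEIGHT, WIDTH, CHANNELS)),
        # TimeDistributed applies the same CNN to every frame,
        # extracting spatial features frame by frame.
        layers.TimeDistributed(layers.Conv2D(32, (3, 3), activation="relu")),
        layers.TimeDistributed(layers.MaxPooling2D((2, 2))),
        layers.TimeDistributed(layers.Conv2D(64, (3, 3), activation="relu")),
        layers.TimeDistributed(layers.MaxPooling2D((2, 2))),
        layers.TimeDistributed(layers.Flatten()),
        # The recurrent layer models the time-distributed features;
        # layers.GRU(64) is a drop-in alternative to the LSTM.
        layers.LSTM(64),
        # Multi-class classification over the gesture vocabulary.
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    build_model().summary()

Note that this pipeline operates on raw RGB video frames alone, which is consistent with the abstract's claim that no depth-sensing camera, gloves, or other special equipment is needed.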