Smart Voice Assistant For Visually Impaired People Using Deep Learning Paradigm

Authors

Dr. R. Amutha, Pranav S P
Department of Electronics and Communication Engineering, Sri Sivasubramaniya Nadar College of Engineering, Chennai, India.
Gokulram A
Department of Information Technology, Sri Sivasubramaniya Nadar College of Engineering, Chennai, India.

Abstract

The Smart Voice Assistant (SVA) combines deep learning-based object detection and text-to-speech synthesis into a solution tailored for individuals with visual impairments. A Raspberry Pi handles on-device processing, while a dedicated camera module captures real-time visuals of the environment. The Pi splits these visuals into frames, on which pre-trained deep learning models identify objects; the system then announces descriptions of the detected objects through a speaker. This gives visually impaired users a real-time, cohesive understanding of their environment, fostering greater independence and confidence in navigating their surroundings. The compact, portable setup and its audible output make it an impactful aid that enhances everyday experiences. To address latency on the Raspberry Pi, data processing has been offloaded to the cloud, and running image capture and text-to-speech synthesis in concurrent threads has significantly improved operational efficiency.
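The capture/detect/announce pipeline with concurrent threading described above can be sketched as a producer–consumer pattern. This is a minimal illustration only: `capture_frame`, `detect_objects`, and `speak` are hypothetical stand-ins for the Pi camera interface, the cloud-based detection call, and the text-to-speech engine, none of which are specified in the abstract.

```python
import queue
import threading

spoken = []          # record of announced descriptions (stands in for audio output)
frames = queue.Queue()  # hand-off buffer between the capture and speech threads

# Hypothetical stand-ins for the real camera, cloud inference, and TTS calls.
def capture_frame(i):
    return f"frame-{i}"

def detect_objects(frame):
    return [f"object-in-{frame}"]

def speak(description):
    spoken.append(description)

def capture_worker(n_frames):
    # Producer: capture frames and queue them for detection.
    for i in range(n_frames):
        frames.put(capture_frame(i))
    frames.put(None)  # sentinel: signal that capture has finished

def speech_worker():
    # Consumer: detect objects in each frame and announce them,
    # running concurrently with capture as the abstract describes.
    while True:
        frame = frames.get()
        if frame is None:
            break
        for description in detect_objects(frame):
            speak(description)

producer = threading.Thread(target=capture_worker, args=(3,))
consumer = threading.Thread(target=speech_worker)
producer.start()
consumer.start()
producer.join()
consumer.join()
```

Decoupling the two stages through a queue lets the camera keep capturing while a previous frame is still being described, which is the source of the efficiency gain the abstract reports.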