Sound Sense (Smart Visual Alert System for Deaf) / Muhammad Usman Ghani, Faheem Haider, Ahmed Bin Yasin, Muhammad Arsam Khalid

By: Ghani, Muhammad Usman
Contributor(s): Supervisor: Dr. Naima Iltaf
Material type: Text
Publisher: MCS, NUST Rawalpindi, 2023
Description: 76 p.
Subject(s): UG BESE | BESE-25
DDC classification: 005.1,GHA
Contents:
Deaf people face numerous challenges in their daily lives because they cannot hear sounds in their environment. To address this issue, we have developed a smart visual alert system for the deaf that detects and classifies sounds in real time and provides visual alerts using LED lights. The system is implemented on an edge computing device (a Raspberry Pi), bringing processing closer to the point of data collection to ensure fast and efficient processing and classification of sound data. The input sound is pre-processed to generate spectrograms, which are then classified by a Convolutional Neural Network into several categories, including "Baby Cry", "Doorbell", and "Talking". The LED lights are controlled through the Raspberry Pi's GPIO pins, with different patterns or colors indicating different types of sound. A mobile app is also provided that allows users to view the event history, adjust configurations, and access other assistive features such as reminders and speech-to-text. The system has the potential to improve the quality of life of deaf people by providing fast and reliable visual alerts for important sounds in their homes.
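The abstract describes a spectrogram-plus-CNN pipeline with GPIO-driven LED alerts. The sketch below illustrates how such a pipeline could be wired together in Python on a Raspberry Pi; the model file, class labels, pin numbers, and sample clip are assumptions for illustration and are not taken from the report itself.

import time
import numpy as np
import librosa                      # spectrogram generation
import tensorflow as tf             # CNN inference
import RPi.GPIO as GPIO             # LED control on the Raspberry Pi

# Assumed class labels and BCM pin assignments (illustrative only).
CLASSES = ["Baby Cry", "Doorbell", "Talking"]
LED_PINS = {"Baby Cry": 17, "Doorbell": 27, "Talking": 22}

# Hypothetical trained model file; the report's actual model is not available here.
model = tf.keras.models.load_model("sound_sense_cnn.h5")

def classify_clip(wav_path, sr=22050):
    """Load a short audio clip, convert it to a log-mel spectrogram,
    and return the predicted sound class."""
    y, _ = librosa.load(wav_path, sr=sr, duration=2.0)
    spec = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
    log_spec = librosa.power_to_db(spec, ref=np.max)
    # Shape to (batch, height, width, channels) as expected by a 2-D CNN.
    x = log_spec[np.newaxis, ..., np.newaxis].astype("float32")
    probs = model.predict(x, verbose=0)[0]
    return CLASSES[int(np.argmax(probs))]

def flash_alert(label, blinks=5, interval=0.3):
    """Blink the LED associated with the detected sound class."""
    pin = LED_PINS[label]
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(pin, GPIO.OUT)
    for _ in range(blinks):
        GPIO.output(pin, GPIO.HIGH)
        time.sleep(interval)
        GPIO.output(pin, GPIO.LOW)
        time.sleep(interval)
    GPIO.cleanup(pin)

if __name__ == "__main__":
    label = classify_clip("doorbell_sample.wav")   # hypothetical test clip
    flash_alert(label)

In practice the classification loop would run continuously on microphone input rather than on a saved file, but the stages shown (pre-processing, CNN classification, GPIO output) match the pipeline the abstract outlines.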
Item type: Project Report
Current location: Military College of Signals (MCS)
Home library: Military College of Signals (MCS)
Call number: 005.1,GHA
Status: Available
Barcode: MCSPCS-466
Total holds: 0
