Perception of Emotion in Human-Robot Interaction / (Record no. 607900)

000 -LEADER
fixed length control field 02692nam a22001577a 4500
082 ## - DEWEY DECIMAL CLASSIFICATION NUMBER
Classification number 629.8
100 ## - MAIN ENTRY--PERSONAL NAME
Personal name Zia, Muhammad Faisal
245 ## - TITLE STATEMENT
Title Perception of Emotion in Human-Robot Interaction /
Statement of responsibility, etc. Muhammad Faisal Zia
264 ## - PRODUCTION, PUBLICATION, DISTRIBUTION, MANUFACTURE, AND COPYRIGHT NOTICE
Place of production, publication, distribution, manufacture Islamabad :
Name of producer, publisher, distributor, manufacturer SMME-NUST;
Date of production, publication, distribution, manufacture, or copyright notice 2022.
300 ## - PHYSICAL DESCRIPTION
Extent 59 p.
Other physical details Soft Copy
Dimensions 30 cm
500 ## - GENERAL NOTE
General note Perception of emotion is an intuitive inference of a person's internal state without the need for verbal communication. Visual emotion recognition has been studied broadly, and several end-to-end deep neural network (DNN)-based and machine-learning-based models have been proposed, but they cannot be deployed on low-specification devices such as robots and vehicles. DNN-based facial emotion recognition (FER) approaches eliminate the drawbacks of conventional handcrafted-feature-based FER methods; even so, they suffer from high processing costs and exorbitant memory requirements, which constrains their application in fields such as Human-Robot Interaction (HRI) and Human-Computer Interaction (HCI) and ties them to hardware requirements. In this study, we present a computationally inexpensive and robust FER system for the perception of six basic emotions (disgust, surprise, fear, anger, happiness, and sadness) that is capable of running on embedded devices with constrained specifications. After pre-processing the input images, geometric features are extracted from detected facial landmarks, taking into account the spatial positions of influential landmarks. The extracted features are then used as input to train an SVM classifier. The proposed FER system was trained and evaluated experimentally on two databases, the Karolinska Directed Emotional Faces (KDEF) and the Extended Cohn-Kanade (CK+) database. Fusion of the KDEF and CK+ datasets at the training level was also employed to generalize the system's response to variations in ethnicity, race, and national and provincial backgrounds. The results show that the proposed FER system is suitable for real-time embedded applications with constrained specifications, yielding accuracies of 96.8%, 86.7%, and 86.4% on CK+, KDEF, and the fusion of CK+ and KDEF, respectively. As part of our future research objectives, the developed system will make a robotic agent capable of perceiving emotion and interacting naturally without the need for additional hardware during HRI.
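General note (illustrative) The pipeline described above (landmark detection, geometric feature extraction, SVM classification) can be sketched as follows. This is a minimal sketch, not the thesis's actual implementation: the landmark model (dlib's pretrained 68-point predictor), the pairwise-distance features, the inter-ocular normalization, and the SVM kernel are all assumptions made for illustration.

import itertools
import numpy as np
import dlib
from sklearn.svm import SVC

# dlib's frontal face detector plus its pretrained 68-point landmark model.
# The model file path is an assumption; the thesis does not name its detector.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def geometric_features(gray):
    """Return normalized pairwise landmark distances for the first detected face, or None."""
    faces = detector(gray)
    if not faces:
        return None
    pts = np.array([(p.x, p.y) for p in predictor(gray, faces[0]).parts()],
                   dtype=np.float64)
    # Scale-normalize by the inter-ocular distance (outer eye corners are
    # landmarks 36 and 45 in the 68-point scheme); the thesis's exact
    # normalization and landmark subset are not specified.
    iod = np.linalg.norm(pts[36] - pts[45])
    return np.array([np.linalg.norm(pts[i] - pts[j]) / iod
                     for i, j in itertools.combinations(range(len(pts)), 2)])

def train_fer(gray_images, emotion_labels):
    """Fit an SVM on geometric features; labels are the six basic emotions."""
    X, y = [], []
    for img, label in zip(gray_images, emotion_labels):
        feats = geometric_features(img)
        if feats is not None:  # skip images where no face was detected
            X.append(feats)
            y.append(label)
    clf = SVC(kernel="rbf")  # kernel choice is an assumption
    clf.fit(np.array(X), np.array(y))
    return clf

# Example usage (file name is hypothetical; images are loaded and converted
# to grayscale with a library such as OpenCV before feature extraction):
#   gray = cv2.cvtColor(cv2.imread("kdef_sample.jpg"), cv2.COLOR_BGR2GRAY)
#   feats = geometric_features(gray)

Under this reading, the training-level fusion reported in the abstract corresponds simply to concatenating the KDEF and CK+ samples and labels before calling train_fer.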
650 ## - SUBJECT ADDED ENTRY--TOPICAL TERM
Topical term or geographic name entry element MS Robotics and Intelligent Machine Engineering
700 ## - ADDED ENTRY--PERSONAL NAME
Personal name Supervisor: Dr. Sara Ali
856 ## - ELECTRONIC LOCATION AND ACCESS
Uniform Resource Identifier http://10.250.8.41:8080/xmlui/handle/123456789/31844
942 ## - ADDED ENTRY ELEMENTS (KOHA)
Source of classification or shelving scheme
Koha item type Thesis
Holdings
Permanent Location School of Mechanical & Manufacturing Engineering (SMME)
Current Location School of Mechanical & Manufacturing Engineering (SMME)
Shelving location E-Books
Date acquired 02/20/2024
Full call number 629.8 SMME-TH-808
Koha item type Thesis