000 02692nam a22001577a 4500
082 _a629.8
100 _aZia, Muhammad Faisal
_935408
245 _aPerception of Emotion in Human-Robot Interaction /
_cMuhammad Faisal Zia
264 _aIslamabad :
_bSMME-NUST,
_c2022.
300 _a59 p.
_bSoft Copy
_c30 cm
500 _aPerception of emotion is the intuitive inference of a person’s internal state without the need for verbal communication. Visual emotion recognition has been broadly studied, and several end-to-end deep neural network (DNN)-based and machine learning-based models have been proposed, but they cannot be deployed on low-specification devices such as robots and vehicles. DNN-based Facial Emotion Recognition (FER) approaches eliminate the drawbacks of conventional handcrafted feature-based FER methods. Even so, DNN-based FER techniques suffer from high processing costs and exorbitant memory requirements, so their application in fields such as Human-Robot Interaction (HRI) and Human-Computer Interaction (HCI) is constrained by hardware requirements. In this study, we present a computationally inexpensive and robust FER system for the perception of six basic emotions (disgust, surprise, fear, anger, happiness, and sadness) that is capable of running on embedded devices with constrained specifications. After pre-processing the input images, geometric features are extracted from detected facial landmarks, taking into account the spatial positions of influential landmarks. The extracted features are then used to train an SVM classifier. The proposed FER system was trained and evaluated on two databases, the Karolinska Directed Emotional Faces (KDEF) database and the Extended Cohn-Kanade (CK+) database. Fusion of the KDEF and CK+ datasets at the training level was also employed in order to generalize the FER system’s response to variations in ethnicity, race, and national and provincial background. The results show that the proposed FER system is suitable for real-time embedded applications with constrained specifications, yielding accuracies of 96.8%, 86.7%, and 86.4% on the CK+, KDEF, and fused CK+/KDEF databases, respectively.
As part of our future research objectives, the developed system will enable a robotic agent to perceive emotion and interact naturally during HRI without the need for additional hardware.
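The pipeline summarized in the abstract (landmark detection, geometric feature extraction, SVM classification) can be sketched as follows. This is an illustrative sketch, not the thesis code: the landmark names and coordinates below are hypothetical, and a real system would obtain landmarks from a face-landmark detector after pre-processing the image.

```python
import math
from itertools import combinations

def geometric_features(landmarks):
    """Pairwise Euclidean distances between influential landmarks,
    normalized by inter-ocular distance for scale invariance."""
    iod = math.dist(landmarks["eye_l"], landmarks["eye_r"])  # inter-ocular distance
    pts = list(landmarks.values())
    return [math.dist(a, b) / iod for a, b in combinations(pts, 2)]

# Hypothetical landmark positions (in pixels) for one detected face
face = {
    "eye_l":   (30.0, 40.0),
    "eye_r":   (70.0, 40.0),
    "nose":    (50.0, 60.0),
    "mouth_l": (38.0, 80.0),
    "mouth_r": (62.0, 80.0),
}
features = geometric_features(face)
# 5 landmarks yield C(5,2) = 10 pairwise-distance features; this vector
# would then be fed to an SVM classifier (e.g. sklearn.svm.SVC) trained
# on the six basic emotion classes.
```

Normalizing by inter-ocular distance is one common way to make such features robust to face scale; the exact landmark set and feature definitions used in the thesis may differ.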
650 _aMS Robotics and Intelligent Machine Engineering
_9119486
700 _aSupervisor: Dr. Sara Ali
_9119733
856 _uhttp://10.250.8.41:8080/xmlui/handle/123456789/31844
942 _2ddc
_cTHE
999 _c607900
_d607900