Publication:
A multimodal emotion detection system during human-robot interaction

dc.affiliation.dpto: UC3M. Departamento de Ingeniería de Sistemas y Automática
dc.affiliation.grupoinv: UC3M. Grupo de Investigación: Laboratorio de Robótica (Robotics Lab)
dc.contributor.author: Alonso Martín, Fernando
dc.contributor.author: Malfaz Vázquez, María Ángeles
dc.contributor.author: Sequeira, João
dc.contributor.author: Fernández de Gorostiza Luengo, Javier
dc.contributor.author: Salichs Sánchez-Caballero, Miguel
dc.date.accessioned: 2019-01-16T11:15:22Z
dc.date.available: 2019-01-16T11:15:22Z
dc.date.issued: 2013-11-14
dc.description.abstract: In this paper, a multimodal user-emotion detection system for social robots is presented. This system is intended to be used during human-robot interaction, and it is integrated as part of the overall interaction system of the robot: the Robotics Dialog System (RDS). Two modalities are used to detect emotions: voice and facial expression analysis. In order to analyze the user's voice, a new component has been developed: Gender and Emotion Voice Analysis (GEVA), written in the ChucK language. For emotion detection in facial expressions, the system Gender and Emotion Facial Analysis (GEFA) has also been developed. The latter integrates two third-party solutions: the Sophisticated High-speed Object Recognition Engine (SHORE) and the Computer Expression Recognition Toolbox (CERT). Once these new components (GEVA and GEFA) produce their results, a decision rule is applied to combine the information given by both of them. The result of this rule, the detected emotion, is integrated into the dialog system through communicative acts. Hence, each communicative act conveys, among other information, the user's detected emotion to the RDS, so that it can adapt its strategy to achieve a greater degree of satisfaction during the human-robot dialog. Each of the new components, GEVA and GEFA, can also be used individually. Moreover, they are integrated with the robotic control platform ROS (Robot Operating System). Several experiments with real users were performed to determine the accuracy of each component and to set the final decision rule. The results obtained from applying this decision rule in these experiments show a high success rate in automatic user-emotion recognition, improving on the results given by the two information channels (audio and visual) separately.
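The abstract does not specify the form of the decision rule that fuses the GEVA and GEFA outputs. The sketch below is a minimal, hypothetical Python illustration of such a fusion step; the type names, emotion labels, and the agreement/highest-confidence heuristic are assumptions for illustration, not the rule reported in the paper.

# Hypothetical fusion of voice (GEVA) and face (GEFA) emotion estimates.
# The paper's actual decision rule was set empirically from experiments
# with real users; this heuristic is an illustrative assumption only.
from dataclasses import dataclass

@dataclass
class EmotionEstimate:
    label: str         # e.g. "happiness", "sadness", "neutral"
    confidence: float  # detector confidence in [0, 1]

def fuse(voice: EmotionEstimate, face: EmotionEstimate) -> EmotionEstimate:
    """Combine the two channel estimates into one detected emotion."""
    if voice.label == face.label:
        # Channels agree: keep the label and reinforce the confidence.
        return EmotionEstimate(voice.label,
                               min(1.0, voice.confidence + face.confidence))
    # Channels disagree: fall back to the more confident channel.
    return max(voice, face, key=lambda e: e.confidence)

if __name__ == "__main__":
    geva = EmotionEstimate("happiness", 0.62)  # voice-analysis output
    gefa = EmotionEstimate("happiness", 0.71)  # facial-analysis output
    print(fuse(geva, gefa))                    # channels agree: fused estimate

Per the abstract, the fused result would then be attached to a communicative act so the RDS can adapt its dialog strategy.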
dc.description.sponsorship: The authors gratefully acknowledge the funds provided by the Spanish MICINN (Ministry of Science and Innovation) through the project “Aplicaciones de los robots sociales” (DPI2011-26980) of the Spanish Ministry of Economy and Competitiveness. Moreover, the research leading to these results has received funding from the RoboCity2030-II-CM project (S2009/DPI-1559), funded by Programas de Actividades I+D en la Comunidad de Madrid and co-funded by the Structural Funds of the EU.
dc.format.extent: 33
dc.format.mimetype: application/pdf
dc.identifier.bibliographicCitation: Alonso-Martín, F., Malfaz, M., Sequeira, J., Gorostiza, J. F., Salichs, M. A. (2013). A Multimodal Emotion Detection System during Human–Robot Interaction. Sensors, 13(11), pp. 15549-15581.
dc.identifier.doi: https://doi.org/10.3390/s131115549
dc.identifier.issn: 1424-8220
dc.identifier.publicationfirstpage: 15549
dc.identifier.publicationissue: 11
dc.identifier.publicationlastpage: 15581
dc.identifier.publicationtitle: Sensors
dc.identifier.publicationvolume: 13
dc.identifier.uri: https://hdl.handle.net/10016/27916
dc.identifier.uxxi: AR/0000014610
dc.language.iso: eng
dc.publisher: MDPI
dc.relation.projectID: Gobierno de España. DPI2011-26980
dc.relation.projectID: Comunidad de Madrid. S2009/DPI-1559/RoboCity2030-II-CM
dc.rights: © 2013 by the authors; licensee MDPI, Basel, Switzerland.
dc.rights: Atribución-NoComercial-SinDerivadas 3.0 España (Attribution-NonCommercial-NoDerivatives 3.0 Spain)
dc.rights.accessRights: open access
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/3.0/es/
dc.subject.eciencia: Robótica e Informática Industrial (Robotics and Industrial Informatics)
dc.subject.other: Emotion recognition
dc.subject.other: Affective computing
dc.subject.other: Human-robot interaction
dc.subject.other: Dialog systems
dc.subject.other: FACS
dc.subject.other: Facial expressions
dc.subject.other: Face detection
dc.subject.other: Recognition
dc.title: A multimodal emotion detection system during human-robot interaction
dc.type: research article
dc.type.hasVersion: VoR
dspace.entity.type: Publication
Files
Original bundle
Name: Multimodal_sensors_2013.pdf
Size: 12.15 MB
Format: Adobe Portable Document Format