Modelling Multimodal Dialogues for Social Robots Using Communicative Acts

dc.affiliation.dpto: UC3M. Departamento de Ingeniería de Sistemas y Automática
dc.affiliation.grupoinv: UC3M. Grupo de Investigación: Laboratorio de Robótica (Robotics Lab)
dc.contributor.author: Fernández Rodicio, Enrique
dc.contributor.author: Castro González, Álvaro
dc.contributor.author: Alonso Martín, Fernando
dc.contributor.author: Maroto Gómez, Marcos
dc.contributor.author: Salichs Sánchez-Caballero, Miguel
dc.contributor.funder: Comunidad de Madrid
dc.description.abstract: Social robots need to communicate in a way that feels natural to humans if they are to bond effectively with users and provide an engaging interaction. In line with this natural, effective communication, robots need to perceive and manage multimodal information, both as input and output, and respond accordingly. Consequently, dialogue design is a key factor in creating an engaging multimodal interaction. These dialogues need to be flexible enough to adapt to unforeseen circumstances that arise during the conversation, but they should also be easy to create, so that the development of new applications becomes simpler. In this work, we present our approach to dialogue modelling based on basic atomic interaction units called Communicative Acts. They manage basic interactions considering who has the initiative (the robot or the user) and what that participant's intention is. The two possible intentions are either to ask for information or to give information. In addition, because we focus on one-to-one interactions, the initiative can only be taken by the robot or the user. Communicative Acts can be parametrised and combined hierarchically to fulfil the needs of the robot's applications, and they are equipped with built-in functionalities that handle low-level communication tasks such as communication error handling, turn-taking, and user disengagement. This system has been integrated into Mini, a social robot created to assist older adults with cognitive impairment. In a use case, we demonstrate the operation of our system as well as its performance in real human–robot interactions.
dc.description.sponsorship: The research leading to these results has received funding from the projects Development of social robots to help seniors with cognitive impairment (ROBSEN), funded by the Ministerio de Economía y Competitividad; RoboCity2030-DIH-CM, Madrid Robotics Digital Innovation Hub, S2018/NMT-4331, funded by "Programas de Actividades I+D en la Comunidad de Madrid" and co-funded by Structural Funds of the EU; and Robots sociales para estimulación física, cognitiva y afectiva de mayores (ROSES), RTI2018-096338-B-I00, funded by the Agencia Estatal de Investigación (AEI), Ministerio de Ciencia, Innovación y Universidades
dc.identifier.bibliographicCitation: Fernández-Rodicio, E., Castro-González, Á., Alonso-Martín, F., Maroto-Gómez, M., & Salichs, M. Á. (2020). Modelling Multimodal Dialogues for Social Robots Using Communicative Acts. Sensors, 20(12), 3440
dc.relation.projectID: Comunidad de Madrid. S2018/NMT-4331
dc.relation.projectID: Gobierno de España. RTI2018-096338-B-I00
dc.rights© 2020 by the authors
dc.rightsAtribución 3.0 España
dc.rights.accessRightsopen access
dc.subject.eciencia: Robótica e Informática Industrial
dc.subject.other: Dialogue management
dc.subject.other: Dialogue modelling
dc.subject.other: Human-robot interaction
dc.subject.other: Multimodal interaction
dc.title: Modelling Multimodal Dialogues for Social Robots Using Communicative Acts
dc.type: research article