Modelling Multimodal Dialogues for Social Robots Using Communicative Acts

e-Archivo Repository

dc.contributor.author Fernández Rodicio, Enrique
dc.contributor.author Castro González, Álvaro
dc.contributor.author Alonso Martín, Fernando
dc.contributor.author Maroto Gómez, Marcos
dc.contributor.author Salichs Sánchez-Caballero, Miguel
dc.date.accessioned 2021-06-22T12:46:22Z
dc.date.available 2021-06-22T12:46:22Z
dc.date.issued 2020-06-18
dc.identifier.bibliographicCitation Fernández-Rodicio, E., Castro-González, Á., Alonso-Martín, F., Maroto-Gómez, M., & Salichs, M. Á. (2020). Modelling Multimodal Dialogues for Social Robots Using Communicative Acts. Sensors, 20(12), 3440
dc.identifier.issn 1424-8220
dc.description.abstract Social robots need to communicate in a way that feels natural to humans if they are to bond effectively with users and provide an engaging interaction. In line with this natural, effective communication, robots need to perceive and manage multimodal information, both as input and output, and respond accordingly. Consequently, dialogue design is a key factor in creating an engaging multimodal interaction. These dialogues need to be flexible enough to adapt to unforeseen circumstances that arise during the conversation, but they should also be easy to create, so that the development of new applications becomes simpler. In this work, we present our approach to dialogue modelling based on basic atomic interaction units called Communicative Acts. They manage basic interactions by considering who has the initiative (the robot or the user) and what their intention is. The two possible intentions are either to ask for information or to give information. In addition, because we focus on one-to-one interactions, the initiative can only be taken by the robot or by the user. Communicative Acts can be parametrised and combined hierarchically to fulfil the needs of the robot’s applications, and they are equipped with built-in functionalities that handle low-level communication tasks, including communication error handling, turn-taking, and user disengagement. This system has been integrated into Mini, a social robot created to assist older adults with cognitive impairment. In a use case, we demonstrate the operation of our system as well as its performance in real human–robot interactions.
dc.description.sponsorship The research leading to these results has received funding from the projects Development of social robots to help seniors with cognitive impairment (ROBSEN), funded by the Ministerio de Economía y Competitividad; RoboCity2030-DIH-CM, Madrid Robotics Digital Innovation Hub, S2018/NMT-4331, funded by “Programas de Actividades I+D en la Comunidad de Madrid” and cofunded by Structural Funds of the EU; and Robots sociales para estimulación física, cognitiva y afectiva de mayores (ROSES), RTI2018-096338-B-I00, funded by the Agencia Estatal de Investigación (AEI), Ministerio de Ciencia, Innovación y Universidades.
dc.language.iso eng
dc.publisher MDPI
dc.rights © 2020 by the authors
dc.rights Attribution 3.0 Spain
dc.subject.other Dialogue management
dc.subject.other Dialogue modelling
dc.subject.other Human-robot interaction
dc.subject.other Multimodal interaction
dc.title Modelling Multimodal Dialogues for Social Robots Using Communicative Acts
dc.type article
dc.subject.eciencia Robótica e Informática Industrial
dc.rights.accessRights openAccess
dc.relation.projectID Comunidad de Madrid. S2018/NMT-4331
dc.relation.projectID Gobierno de España. RTI2018-096338-B-I00
dc.type.version publishedVersion
dc.identifier.publicationfirstpage 1
dc.identifier.publicationissue 12
dc.identifier.publicationlastpage 30
dc.identifier.publicationtitle Sensors
dc.identifier.publicationvolume 20
dc.identifier.uxxi AR/0000026693
dc.contributor.funder Comunidad de Madrid