Publication:
A multi-agent architecture to combine heterogeneous inputs in multimodal interaction systems

Publication date
2013
Publisher
Conferencia de la Asociación Española para la Inteligencia Artificial
Abstract
In this paper we present a multi-agent architecture for the integration of visual sensor networks and speech-based interfaces. The proposed architecture combines techniques from Artificial Intelligence, Natural Language Processing and User Modeling to provide enhanced interaction with its users. First, the architecture integrates a Cooperative Surveillance Multi-Agent System (CS-MAS), which includes several types of autonomous agents working in a coalition to track targets and make inferences about their positions. Second, it incorporates enhanced conversational agents that facilitate human-computer interaction by means of speech. Third, a statistical methodology models the users' conversational behavior, which is learned from an initial corpus and subsequently improved with the knowledge acquired from successive interactions. Finally, a technique is proposed to fuse these multimodal information sources and use the result to decide the next system action.
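The statistical methodology described in the abstract can be illustrated with a minimal sketch. This is not the paper's actual model; it assumes a simple frequency-based scheme in which a fused state (a hypothetical pairing of a speech intent with a visual context from the sensor network) is mapped to the most probable next system action, learned first from a corpus and then refined incrementally from new interactions. All names (`StatisticalDialogManager`, `next_action`, etc.) are illustrative.

```python
from collections import Counter, defaultdict


class StatisticalDialogManager:
    """Toy frequency model: choose the next system action as the most
    common action observed for a given fused multimodal state."""

    def __init__(self):
        # counts[state] is a Counter of system actions seen in that state
        self.counts = defaultdict(Counter)

    def train(self, corpus):
        """Learn from an initial corpus of (fused_state, action) pairs."""
        for state, action in corpus:
            self.counts[state][action] += 1

    def update(self, state, action):
        """Refine the model incrementally from a successive interaction."""
        self.counts[state][action] += 1

    def next_action(self, speech_intent, visual_context,
                    default="ask_clarification"):
        """Fuse both input modalities into one state key and return the
        most frequent action for it, or a default for unseen states."""
        state = (speech_intent, visual_context)
        if not self.counts[state]:
            return default
        return self.counts[state].most_common(1)[0][0]


# Example: train on a tiny hypothetical corpus, then refine online.
dm = StatisticalDialogManager()
dm.train([
    (("greet", "entrance"), "welcome"),
    (("greet", "entrance"), "welcome"),
    (("greet", "entrance"), "offer_help"),
])
print(dm.next_action("greet", "entrance"))  # most frequent: "welcome"
dm.update(("goodbye", "exit"), "farewell")
print(dm.next_action("goodbye", "exit"))    # learned online: "farewell"
```

The fused state here is just a tuple of the two modality readings; the paper's fusion technique is more elaborate, but the corpus-then-incremental learning loop follows the same shape the abstract describes.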
Description
Proceedings of: CAEPIA 2013, federated congress Agentes y Sistemas Multi-Agente: de la Teoría a la Práctica (ASMas). Madrid, 17-20 September 2013.
Keywords
Software agents, Multimodal fusion, Visual sensor networks, Surveillance applications, Spoken interaction, Conversational Agents, User Modeling, Dialog Management
Bibliographic citation
Alonso-Betanzos, A. et al. (eds.) (2013). Multiconferencia CAEPIA 2013: 17-20 sep 2013. Madrid: Agentes y Sistemas Multi-Agente: de la Teoría a la Práctica (ASMas). (pp. 1513-1522)