Combining heterogeneous inputs for the development of adaptive and multimodal interaction systems

e-Archivo Repository


dc.contributor.author Griol, David
dc.contributor.author García, Jesús
dc.contributor.author Molina, José M.
dc.date.accessioned 2015-06-25T10:08:49Z
dc.date.available 2015-06-25T10:08:49Z
dc.date.issued 2013-12
dc.identifier.bibliographicCitation Advances in Distributed Computing and Artificial Intelligence Journal (2013). 2(6), 37-53.
dc.identifier.issn 2255-2863
dc.identifier.uri http://hdl.handle.net/10016/21188
dc.description.abstract In this paper we present a novel framework for the integration of visual sensor networks and speech-based interfaces. Our proposal follows the standard reference architecture for fusion systems (JDL) and combines techniques from Artificial Intelligence, Natural Language Processing, and User Modeling to provide enhanced interaction with its users. Firstly, the framework integrates a Cooperative Surveillance Multi-Agent System (CS-MAS), which includes several types of autonomous agents working in a coalition to track targets and make inferences about their positions. Secondly, enhanced conversational agents facilitate human-computer interaction by means of speech. Thirdly, a statistical methodology models the user's conversational behavior, which is learned from an initial corpus and improved with the knowledge acquired from successive interactions. Finally, a technique is proposed to fuse these multimodal information sources and take the result into account when deciding the next system action.
dc.description.sponsorship This work was supported in part by Projects MEyC TEC2012-37832-C02-01, CICYT TEC2011-28626-C02-02, and CAM CONTEXTS S2009/TIC-1485.
dc.format.extent 17
dc.format.mimetype application/pdf
dc.language.iso eng
dc.publisher Universidad de Salamanca
dc.rights Attribution-NonCommercial-NoDerivs 3.0 Spain
dc.rights.uri http://creativecommons.org/licenses/by-nc-nd/3.0/es/
dc.subject.other Software agents
dc.subject.other Multimodal fusion
dc.subject.other Visual sensor networks
dc.subject.other Surveillance applications
dc.subject.other Spoken interaction
dc.subject.other Conversational Agents
dc.subject.other User Modeling
dc.subject.other Dialog Management
dc.title Combining heterogeneous inputs for the development of adaptive and multimodal interaction systems
dc.type article
dc.description.status Published
dc.relation.publisherversion http://dx.doi.org/10.14201/ADCAIJ2014263753
dc.subject.eciencia Informática
dc.identifier.doi 10.14201/ADCAIJ2014263753
dc.rights.accessRights openAccess
dc.relation.projectID Gobierno de España. TEC2011-28626-C02-02
dc.relation.projectID Comunidad de Madrid. S2009/TIC-1485/CONTEXTS
dc.type.version acceptedVersion
dc.identifier.publicationfirstpage 37
dc.identifier.publicationissue 6
dc.identifier.publicationlastpage 53
dc.identifier.publicationtitle Advances in Distributed Computing and Artificial Intelligence Journal
dc.identifier.publicationvolume 2
dc.identifier.uxxi AR/0000015629
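
The abstract above describes a statistical methodology in which the user's conversational behavior is learned from an initial corpus, refined with each successive interaction, and fused with the output of the visual sensor network to decide the next system action. The following is a minimal, hypothetical sketch of that idea in Python; the class names, state fields, corpus format, and default action are illustrative assumptions, not the implementation reported in the paper.

```python
# Hypothetical sketch (not the paper's code): fusing a discretized position
# estimate from the visual sensor network with the latest spoken-intent
# hypothesis, and choosing the next system action from frequencies observed
# in a dialog corpus, updated after every new interaction.
from collections import Counter, defaultdict
from dataclasses import dataclass


@dataclass(frozen=True)
class FusedState:
    """Joint state combining tracker output and dialog information."""
    target_zone: str        # discretized position reported by the sensor network
    last_user_intent: str   # intent hypothesis produced by the speech interface


class StatisticalActionSelector:
    """Selects the next system action from counts learned over fused states."""

    def __init__(self):
        # For each fused state, count how often each system action followed it.
        self.counts = defaultdict(Counter)

    def train(self, corpus):
        """corpus: iterable of (FusedState, system_action) pairs from an initial corpus."""
        for state, action in corpus:
            self.counts[state][action] += 1

    def update(self, state, action):
        """Incremental learning from each new interaction."""
        self.counts[state][action] += 1

    def next_action(self, state, default="ask_clarification"):
        """Return the most frequent action for this state, or a fallback if unseen."""
        observed = self.counts.get(state)
        if not observed:
            return default
        return observed.most_common(1)[0][0]


if __name__ == "__main__":
    selector = StatisticalActionSelector()
    seed_corpus = [
        (FusedState("entrance", "request_directions"), "give_directions"),
        (FusedState("entrance", "request_directions"), "give_directions"),
        (FusedState("lab", "request_status"), "report_tracked_targets"),
    ]
    selector.train(seed_corpus)
    print(selector.next_action(FusedState("entrance", "request_directions")))
    # -> give_directions
```

In the framework described by the abstract, the fused state would come from the JDL-style combination of tracker and dialog information; here it is reduced to a zone label and an intent label purely for illustration.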