Publication:
Detecting deception from gaze and speech using a multimodal attention LSTM-based framework

dc.affiliation.dpto: UC3M. Departamento de Teoría de la Señal y Comunicaciones
dc.affiliation.grupoinv: UC3M. Grupo de Investigación: Procesado Multimedia
dc.contributor.author: Gallardo Antolín, Ascensión
dc.contributor.author: Montero, Juan Manuel
dc.contributor.funder: Comunidad de Madrid
dc.contributor.funder: Ministerio de Economía y Competitividad (España)
dc.date.accessioned: 2021-11-29T10:32:49Z
dc.date.available: 2021-11-29T10:32:49Z
dc.date.issued: 2021-07-02
dc.description: This article belongs to the Special Issue Computational Trust and Reputation Models.
dc.description.abstract: The automatic detection of deceptive behaviors has recently attracted the attention of the research community due to the variety of areas where it can play a crucial role, such as security or criminology. This work is focused on the development of an automatic deception detection system based on gaze and speech features. The first contribution of our research on this topic is the use of attention Long Short-Term Memory (LSTM) networks for single-modal systems with frame-level features as input. In the second contribution, we propose a multimodal system that combines the gaze and speech modalities into the LSTM architecture using two different combination strategies: Late Fusion and Attention-Pooling Fusion. The proposed models are evaluated over the Bag-of-Lies dataset, a multimodal database recorded in real conditions. On the one hand, results show that attentional LSTM networks are able to adequately model the gaze and speech feature sequences, outperforming a reference Support Vector Machine (SVM)-based system with compact features. On the other hand, both combination strategies produce better results than the single-modal systems and the multimodal reference system, suggesting that gaze and speech modalities carry complementary information for the task of deception detection that can be effectively exploited by using LSTMs.
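
To make the architecture described in the abstract concrete, the following is a minimal sketch of an attention-pooling LSTM classifier and of the Attention-Pooling Fusion strategy, written in PyTorch. All class names, hidden sizes, and the binary output head are illustrative assumptions, not the authors' actual implementation; consult the full text at the DOI below for the real system.

import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    """Learns a scalar weight per frame and collapses the LSTM
    output sequence into a single utterance-level vector."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, h):                        # h: (batch, time, hidden)
        w = torch.softmax(self.score(h), dim=1)  # attention weights over time
        return (w * h).sum(dim=1)                # (batch, hidden)

class SingleModalAttentionLSTM(nn.Module):
    """Attention LSTM over frame-level features of one modality
    (gaze or speech), as in the single-modal systems of the paper."""
    def __init__(self, feat_dim, hidden_dim=64):  # hidden size is an assumption
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.pool = AttentionPooling(hidden_dim)
        self.clf = nn.Linear(hidden_dim, 2)       # truthful vs. deceptive

    def forward(self, x):                         # x: (batch, time, feat_dim)
        h, _ = self.lstm(x)
        return self.clf(self.pool(h))

class AttentionPoolingFusion(nn.Module):
    """One plausible reading of Attention-Pooling Fusion: each
    modality gets its own LSTM and attention pooling, and the
    pooled vectors are concatenated before a joint classifier."""
    def __init__(self, gaze_dim, speech_dim, hidden_dim=64):
        super().__init__()
        self.gaze_lstm = nn.LSTM(gaze_dim, hidden_dim, batch_first=True)
        self.speech_lstm = nn.LSTM(speech_dim, hidden_dim, batch_first=True)
        self.gaze_pool = AttentionPooling(hidden_dim)
        self.speech_pool = AttentionPooling(hidden_dim)
        self.clf = nn.Linear(2 * hidden_dim, 2)

    def forward(self, gaze, speech):
        hg, _ = self.gaze_lstm(gaze)
        hs, _ = self.speech_lstm(speech)
        pooled = torch.cat([self.gaze_pool(hg), self.speech_pool(hs)], dim=1)
        return self.clf(pooled)

Under this reading, Late Fusion would instead train two complete single-modal classifiers and combine their output scores (for example, by averaging class posteriors), whereas Attention-Pooling Fusion merges the modalities at the pooled-embedding level as above.
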
dc.description.sponsorship: This research was partly funded by the Spanish Government-MinECo under Projects TEC2017-84395-P and TEC2017-84593-C2-1-R, and by Comunidad de Madrid and Universidad Carlos III de Madrid under Project SHARON-CM-UC3M.
dc.format.extent: 16
dc.identifier.bibliographicCitation: Gallardo-Antolín, A. & Montero, J. M. (2021). Detecting Deception from Gaze and Speech Using a Multimodal Attention LSTM-Based Framework. Applied Sciences, 11(14), 6393.
dc.identifier.doi: https://doi.org/10.3390/app11146393
dc.identifier.issn: 2076-3417
dc.identifier.publicationfirstpage: 6393
dc.identifier.publicationissue: 14
dc.identifier.publicationtitle: Applied Sciences
dc.identifier.publicationvolume: 11
dc.identifier.uri: https://hdl.handle.net/10016/33703
dc.identifier.uxxi: AR/0000028623
dc.language.iso: eng
dc.publisher: MDPI
dc.relation.projectID: Gobierno de España. TEC2017-84395-P
dc.relation.projectID: Gobierno de España. TEC2017-84593-C2-1-R
dc.relation.projectID: Comunidad de Madrid. SHARON-CM-UC3M
dc.rights: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
dc.rights: Atribución 3.0 España (Attribution 3.0 Spain)
dc.rights.accessRights: open access
dc.rights.uri: http://creativecommons.org/licenses/by/3.0/es/
dc.subject.eciencia: Telecomunicaciones (Telecommunications)
dc.subject.other: Deception detection
dc.subject.other: Multimodal
dc.subject.other: Gaze
dc.subject.other: Speech
dc.subject.other: LSTM
dc.subject.other: Attention
dc.subject.other: Fusion
dc.title: Detecting deception from gaze and speech using a multimodal attention LSTM-based framework
dc.type: research article
dc.type.hasVersion: VoR
dspace.entity.type: Publication
Files
Original bundle
Name: Detecting_APPLSCI_2021.pdf
Size: 1020.65 KB
Format: Adobe Portable Document Format