Publication:
Detecting deception from gaze and speech using a multimodal attention LSTM-based framework

Publication date
2021-07-02
Publisher
MDPI
Abstract
The automatic detection of deceptive behaviors has recently attracted the attention of the research community due to the variety of areas where it can play a crucial role, such as security or criminology. This work is focused on the development of an automatic deception detection system based on gaze and speech features. The first contribution of our research on this topic is the use of attention Long Short-Term Memory (LSTM) networks for single-modal systems with frame-level features as input. In the second contribution, we propose a multimodal system that combines the gaze and speech modalities into the LSTM architecture using two different combination strategies: Late Fusion and Attention-Pooling Fusion. The proposed models are evaluated over the Bag-of-Lies dataset, a multimodal database recorded in real conditions. On the one hand, results show that attentional LSTM networks are able to adequately model the gaze and speech feature sequences, outperforming a reference Support Vector Machine (SVM)-based system with compact features. On the other hand, both combination strategies produce better results than the single-modal systems and the multimodal reference system, suggesting that gaze and speech modalities carry complementary information for the task of deception detection that can be effectively exploited by using LSTMs.
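To make the two mechanisms named in the abstract concrete, the following is a minimal NumPy sketch of (a) attention pooling, which collapses a sequence of frame-level hidden states (such as LSTM outputs) into a single utterance-level vector via learned weights, and (b) a simple late-fusion rule that averages per-modality deception scores. The function names, the random toy inputs, and the averaging rule are illustrative assumptions, not the authors' exact implementation, which trains these components end to end inside the LSTM framework.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_pool(H, w):
    # H: (T, d) frame-level hidden states (e.g. LSTM outputs for T frames)
    # w: (d,) learned attention query (a stand-in for the trained parameters)
    scores = H @ w            # one relevance score per frame
    alpha = softmax(scores)   # attention weights: non-negative, sum to 1
    return alpha @ H, alpha   # weighted average over time, plus the weights

def late_fusion(p_gaze, p_speech):
    # illustrative late-fusion rule: average the per-modality scores
    return 0.5 * (p_gaze + p_speech)

# toy example: the two modalities may have different sequence lengths
rng = np.random.default_rng(0)
H_gaze = rng.standard_normal((5, 4))    # 5 frames, 4-dim hidden states
H_speech = rng.standard_normal((8, 4))  # 8 frames, 4-dim hidden states
w = rng.standard_normal(4)

pooled_gaze, alpha_gaze = attention_pool(H_gaze, w)
pooled_speech, alpha_speech = attention_pool(H_speech, w)
fused_score = late_fusion(0.4, 0.8)
```

In the Late Fusion strategy each single-modal system produces its own decision score before combination, whereas Attention-Pooling Fusion merges the modalities at the pooled-representation level inside the network.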
Description
This article belongs to the Special Issue Computational Trust and Reputation Models.
Keywords
Deception detection, Multimodal, Gaze, Speech, LSTM, Attention, Fusion
Bibliographic citation
Gallardo-Antolín, A. & Montero, J. M. (2021). Detecting Deception from Gaze and Speech Using a Multimodal Attention LSTM-Based Framework. Applied Sciences, 11(14), 6393.