Publication:
Emotion and attention: Audiovisual models for group-level skin response recognition in short movies

dc.affiliation.dpto: UC3M. Departamento de Teoría de la Señal y Comunicaciones
dc.affiliation.grupoinv: UC3M. Grupo de Investigación: Procesado Multimedia
dc.contributor.author: García Faura, Álvaro
dc.contributor.author: Hernández García, Alejandro
dc.contributor.author: Fernández Martínez, Fernando
dc.contributor.author: Díaz de María, Fernando
dc.contributor.author: San Segundo, Rubén
dc.date.accessioned: 2020-09-09T09:35:37Z
dc.date.available: 2020-09-09T09:35:37Z
dc.date.issued: 2019-02-22
dc.description.abstract: Electrodermal activity (EDA) is a psychophysiological indicator that can be considered a somatic marker of subjects' emotional and attentional reactions to stimuli. EDA measurements are not biased by the cognitive process of giving an opinion or a score to characterize subjective perception, and group-level EDA recordings integrate the reaction of the whole audience, thus reducing signal noise. This paper contributes to the field of affective video content analysis, extending previous work on the use of EDA as ground truth for prediction algorithms. Here, we label short video clips according to the audience's emotion (high vs. low) and attention (increasing vs. decreasing), derived from EDA records. We then propose a set of low-level audiovisual descriptors and train binary classifiers that predict emotion and attention with 75% and 80% accuracy, respectively. These results, together with those of previous works, reinforce the usefulness of such low-level audiovisual descriptors for modeling video in terms of the induced affective response.
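The abstract describes labeling clips with EDA-derived binary targets and training binary classifiers on low-level audiovisual descriptors. A minimal illustrative sketch of that pipeline, not the authors' code: the descriptor names, the synthetic data, and the scikit-learn SVM classifier are all assumptions standing in for the paper's actual features and models.

```python
# Illustrative sketch only: synthetic stand-ins for the paper's
# EDA-labeled clips and low-level audiovisual descriptors.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder feature matrix: one row per clip, columns are hypothetical
# low-level descriptors (e.g. luminance, motion activity, audio energy).
n_clips, n_descriptors = 120, 6
X = rng.normal(size=(n_clips, n_descriptors))

# Synthetic binary labels standing in for EDA-derived emotion labels
# (high = 1 vs. low = 0), made weakly dependent on the features.
y = (X[:, 0] + 0.5 * X[:, 1]
     + rng.normal(scale=0.5, size=n_clips) > 0).astype(int)

# Standardize descriptors, then fit an RBF-kernel SVM binary classifier;
# evaluate with 5-fold cross-validated accuracy.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

The same scaffold would apply to the attention labels (increasing vs. decreasing) by swapping in that target vector.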
dc.description.sponsorship: The work leading to these results has been supported by the ESITUR (MINECO, RTC-2016-5305-7), CAVIAR (MINECO, TEC2017-84593-C2-1-R), and AMIC (MINECO, TIN2017-85854-C4-4-R) projects (AEI/FEDER, UE). We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan X Pascal GPU used for this research. This project has received funding from the European Union's Horizon 2020 research and innovation programme under Marie Sklodowska-Curie grant agreement No. 641805.
dc.description.status: Published
dc.format.extent: 11
dc.identifier.bibliographicCitation: Web Intelligence (2019), 17(1), pp. 29-40.
dc.identifier.doi: https://doi.org/10.3233/WEB-190398
dc.identifier.issn: 2405-6456
dc.identifier.publicationfirstpage: 29
dc.identifier.publicationissue: 1
dc.identifier.publicationlastpage: 40
dc.identifier.publicationtitle: Web Intelligence
dc.identifier.publicationvolume: 17
dc.identifier.uri: https://hdl.handle.net/10016/30807
dc.identifier.uxxi: AR/0000023302
dc.language.iso: eng
dc.publisher: IOS
dc.relation.projectID: Gobierno de España. RTC-2016-5305-7/ESITUR
dc.relation.projectID: Gobierno de España. TEC2017-84593-C2-1-R/CAVIAR
dc.relation.projectID: Gobierno de España. TIN2017-85854-C4-4-R/AMIC
dc.relation.projectID: info:eu-repo/grantAgreement/EC/H2020/641805
dc.rights: © 2019 IOS Press and the authors. All rights reserved.
dc.subject.eciencia: Telecomunicaciones
dc.subject.other: Electrodermal activity
dc.subject.other: Emotion
dc.subject.other: Attention
dc.subject.other: Affective video content analysis
dc.subject.other: Audiovisual descriptors
dc.title: Emotion and attention: Audiovisual models for group-level skin response recognition in short movies
dc.type: research article
dc.type.hasVersion: AM
dspace.entity.type: Publication
Files
Original bundle
Name: emotion_WI_2019_ps.pdf
Size: 483.52 KB
Format: Adobe Portable Document Format