Combining speech-based and linguistic classifiers to recognize emotion in user spoken utterances

e-Archivo Repository


dc.contributor.author Griol Barres, David
dc.contributor.author Molina López, José Manuel
dc.contributor.author Callejas, Zoraida
dc.date.accessioned 2020-12-16T12:35:03Z
dc.date.available 2021-01-31T00:00:04Z
dc.date.issued 2019-01-31
dc.identifier.bibliographicCitation Griol, D., Molina, J.M., Callejas, Z. (2019). Combining speech-based and linguistic classifiers to recognize emotion in user spoken utterances. Neurocomputing, 326-327, pp. 132-140.
dc.identifier.issn 0925-2312
dc.identifier.uri http://hdl.handle.net/10016/31613
dc.description.abstract In this paper we propose to combine speech-based and linguistic classification in order to obtain better emotion recognition results for user spoken utterances. Usually these approaches are considered in isolation and are even developed by different communities working on emotion recognition and sentiment analysis. We propose modeling the user's emotional state by means of the fusion of the outputs generated with both approaches, taking into account information that is usually neglected in the individual approaches, such as the interaction context and errors, and the peculiarities of transcribed spoken utterances. The fusion approach makes it possible to employ different recognizers and can be integrated as an additional module in the architecture of a spoken conversational agent, using the information generated as an additional input for the dialog manager to decide the next system response. We have evaluated our proposal using three emotionally colored databases and obtained very positive results.
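
The abstract describes fusing the outputs of a speech-based and a linguistic emotion classifier into a single decision that the dialog manager can use. As a minimal illustration only, assuming a simple weighted-average (late) fusion of per-emotion probabilities rather than the authors' actual fusion module, a Python sketch could look like the following; the emotion labels, weights, and scores are hypothetical.

```python
# Illustrative late-fusion sketch (not the paper's exact method): combine the
# per-emotion posterior probabilities of a speech-based classifier and a
# linguistic (text-based) classifier with a weighted average, then return the
# highest-scoring label. Labels, weights, and probabilities are hypothetical.

EMOTIONS = ("angry", "bored", "doubtful", "neutral")

def fuse_predictions(speech_probs, text_probs, speech_weight=0.5):
    """Weighted average of two per-emotion probability mappings."""
    text_weight = 1.0 - speech_weight
    fused = {
        label: speech_weight * speech_probs.get(label, 0.0)
        + text_weight * text_probs.get(label, 0.0)
        for label in EMOTIONS
    }
    best_label = max(fused, key=fused.get)
    return best_label, fused

if __name__ == "__main__":
    # Hypothetical outputs of the two classifiers for one user utterance.
    speech_probs = {"angry": 0.55, "bored": 0.10, "doubtful": 0.15, "neutral": 0.20}
    text_probs = {"angry": 0.30, "bored": 0.05, "doubtful": 0.45, "neutral": 0.20}
    label, scores = fuse_predictions(speech_probs, text_probs, speech_weight=0.6)
    print(label, scores)  # the fused label could then be passed to the dialog manager
```

In such a scheme the fused label is simply an extra input to the dialog manager; any per-classifier weighting or confidence handling would depend on the recognizers actually plugged in.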
dc.description.sponsorship Work partially supported by Projects MINECO TEC2012-37832-C02-01, CICYT TEC2011-28626-C02-02, CAM CONTEXTS (S2009/TIC-1485).
dc.language.iso eng
dc.publisher Elsevier
dc.rights © 2017 Elsevier B.V. All rights reserved.
dc.rights Attribution-NonCommercial-NoDerivs 3.0 Spain (Atribución-NoComercial-SinDerivadas 3.0 España)
dc.rights.uri http://creativecommons.org/licenses/by-nc-nd/3.0/es/
dc.subject.other Context
dc.subject.other Sentiment analysis
dc.subject.other Emotion recognition
dc.subject.other Paralinguistics fusion
dc.subject.other Affective computing
dc.subject.other Spoken interaction
dc.subject.other Conversational interfaces
dc.title Combining speech-based and linguistic classifiers to recognize emotion in user spoken utterances
dc.type article
dc.subject.eciencia Informática (Computer Science)
dc.identifier.doi https://doi.org/10.1016/j.neucom.2017.01.120
dc.rights.accessRights openAccess
dc.relation.projectID Comunidad de Madrid. S2009/TIC-1485
dc.relation.projectID Gobierno de España. TEC2011-28626-C02-02
dc.relation.projectID Gobierno de España. TEC2012-37832-C02-01
dc.type.version acceptedVersion
dc.identifier.publicationfirstpage 132
dc.identifier.publicationlastpage 140
dc.identifier.publicationtitle Neurocomputing
dc.identifier.publicationvolume 326-327
dc.identifier.uxxi AR/0000022581