Publication:
Multimodal Fake News Detection

dc.affiliation.dpto: UC3M. Departamento de Informática
dc.affiliation.grupoinv: UC3M. Research Group: Human Language and Accessibility Technologies (HULAT)
dc.contributor.author: Segura-Bedmar, Isabel
dc.contributor.author: Alonso Bartolomé, Santiago
dc.contributor.funder: Comunidad de Madrid
dc.contributor.funder: Universidad Carlos III de Madrid
dc.date.accessioned: 2023-03-27T16:02:40Z
dc.date.available: 2023-03-27T16:02:40Z
dc.date.issued: 2022-06-02
dc.description.abstract: Over the last few years, there has been an unprecedented proliferation of fake news. As a consequence, we are more susceptible to the pernicious impact that the spread of misinformation and disinformation can have on different segments of our society. The development of tools for the automatic detection of fake news therefore plays an important role in preventing its negative effects. Most attempts to detect and classify false content focus only on textual information; multimodal approaches are less frequent, and they typically classify news only as true or fake. In this work, we perform a fine-grained classification of fake news on the Fakeddit dataset, using both unimodal and multimodal approaches. Our experiments show that the multimodal approach based on a Convolutional Neural Network (CNN) architecture combining text and image data achieves the best results, with an accuracy of 87%. Some fake news categories, such as Manipulated content, Satire, or False connection, strongly benefit from the use of images. Using images also improves the results of the other categories, but with less impact. Among the unimodal approaches using only text, Bidirectional Encoder Representations from Transformers (BERT) is the best model, with an accuracy of 78%. Exploiting both text and image data significantly improves the performance of fake news detection.
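To make the fusion strategy described in the abstract concrete, the following is a minimal sketch, not the authors' exact architecture: a CNN with a text branch and an image branch whose feature vectors are concatenated before classification into the six fine-grained Fakeddit classes. All layer sizes, the vocabulary size, and the late-fusion-by-concatenation design are illustrative assumptions; PyTorch is used.

```python
import torch
import torch.nn as nn

class MultimodalCNN(nn.Module):
    """Sketch of a text+image fusion CNN for 6-way Fakeddit classification.

    Layer widths and kernel sizes are illustrative, not taken from the paper.
    """
    def __init__(self, vocab_size, embed_dim=100, num_classes=6):
        super().__init__()
        # Text branch: token embeddings + 1D convolution over positions
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.text_conv = nn.Sequential(
            nn.Conv1d(embed_dim, 128, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),   # -> (batch, 128, 1)
        )
        # Image branch: small 2D CNN over RGB inputs
        self.image_conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # -> (batch, 64, 1, 1)
        )
        # Late fusion: concatenate both feature vectors, then classify
        self.classifier = nn.Linear(128 + 64, num_classes)

    def forward(self, token_ids, images):
        t = self.embedding(token_ids).transpose(1, 2)  # (batch, embed_dim, seq_len)
        t = self.text_conv(t).squeeze(-1)              # (batch, 128)
        v = self.image_conv(images).flatten(1)         # (batch, 64)
        return self.classifier(torch.cat([t, v], dim=1))

# Example forward pass with dummy data
model = MultimodalCNN(vocab_size=20000)
tokens = torch.randint(1, 20000, (8, 50))  # batch of 8 titles, 50 tokens each
imgs = torch.rand(8, 3, 64, 64)            # batch of 8 RGB thumbnails
logits = model(tokens, imgs)               # (8, 6) class scores
```

Concatenating the two branch outputs is the simplest late-fusion scheme consistent with the abstract; the 87% accuracy reported there refers to the authors' full setup on Fakeddit, not to this sketch.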
dc.description.sponsorship: This research was funded by the Madrid Government (Comunidad de Madrid) under the Multiannual Agreement with UC3M in the line of “Fostering Young Doctors Research” (NLP4RARE-CM-UC3M), in the context of the V PRICIT (Regional Programme of Research and Technological Innovation), and under the Multiannual Agreement with UC3M in the line of Excellence of University Professors (EPUC3M17).
dc.description.status: Published
dc.format.extent: 16
dc.identifier.bibliographicCitation: Information. 2022; 13(6):284
dc.identifier.doi: http://doi.org/10.3390/info13060284
dc.identifier.issn: 2078-2489
dc.identifier.publicationfirstpage: 1
dc.identifier.publicationissue: 6
dc.identifier.publicationlastpage: 16
dc.identifier.publicationtitle: Information (Switzerland)
dc.identifier.publicationvolume: 13
dc.identifier.uri: https://hdl.handle.net/10016/36984
dc.identifier.uxxi: AR/0000031329
dc.language.iso: eng
dc.publisher: MDPI
dc.relation.projectID: Comunidad de Madrid. NLP4RARE-CM-UC3M
dc.rights: Copyright © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
dc.rights: Atribución-NoComercial-SinDerivadas 3.0 España
dc.rights.accessRights: open access
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/3.0/es/
dc.subject.eciencia: Informática (Computer Science)
dc.subject.other: BERT
dc.subject.other: Deep learning
dc.subject.other: Multimodal fake news detection
dc.subject.other: Natural language processing
dc.title: Multimodal Fake News Detection
dc.type: research article
dc.type.hasVersion: VoR
dspace.entity.type: Publication
Files
Original bundle
Name: Multimodal_Information_2022.pdf
Size: 383.6 KB
Format: Adobe Portable Document Format
Description: Article