DeepFakes: The Next Challenge in Fake News Detection

Abstract

A deepfake is a hyper-realistic video that has been digitally manipulated to show people saying or doing things that never actually happened. As the techniques for producing these counterfeits grow more sophisticated, it is becoming increasingly difficult to tell whether public appearances or statements by influential people are genuine or are, on the contrary, fictitious representations. These synthetic documents, generated by computational techniques based on Artificial Intelligence (AI), pose serious threats to privacy in a new scenario in which the risks of identity theft are growing. This study aims to advance the state of the art through the analysis of academic news and an exhaustive literature review, seeking answers to the following questions, which we consider to be of general interest from both an economic and a social perspective and across various areas of research: What are deepfakes? Who produces them, and what technology supports them? What opportunities do they present? What risks are associated with them? What methods exist to combat them? And, framing the study in terms of information theory, do they represent a revolution or merely an evolution of fake news? As we know, fake news influences public opinion and is effective at appealing to emotions and modifying behaviour. We can assume that these new audiovisual texts will be tremendously effective at further undermining the credibility of digital media and at accelerating the already evident exhaustion of critical thinking.
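
To make the AI techniques mentioned in the abstract more concrete, the sketch below illustrates, in highly simplified form, the adversarial training loop of a generative adversarial network (Goodfellow et al., 2014; Radford et al., 2015), the family of models underlying most deepfake generators. It is an illustrative example only, not part of the study itself: it assumes the PyTorch library is available and uses toy two-dimensional data in place of face images, and all layer sizes and hyperparameters are arbitrary choices for demonstration.

# Minimal GAN training-loop sketch (assumes PyTorch); illustrative only,
# not the article's method. Toy 2-D points stand in for face images.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2

# Generator: maps random noise vectors to synthetic ("fake") samples.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
# Discriminator: scores how "real" a given sample looks.
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # Stand-in for a batch of genuine data (here: points on a unit circle).
    angle = torch.rand(n, 1) * 6.2832
    return torch.cat([angle.cos(), angle.sin()], dim=1)

for step in range(2000):
    real = real_batch()
    fake = G(torch.randn(real.size(0), latent_dim))

    # 1) Train the discriminator to separate real from generated samples.
    d_loss = bce(D(real), torch.ones(real.size(0), 1)) + \
             bce(D(fake.detach()), torch.zeros(real.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the (just-updated) discriminator.
    g_loss = bce(D(fake), torch.ones(real.size(0), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

The two networks improve in competition: the generator's only training signal is the discriminator's judgement, which is why the realism of the resulting forgeries scales with the detector's sophistication.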

Keywords

deepfakes, fake news, deep learning, artificial intelligence, disinformation

References

ALDWAIRI, M. and ALWAHEDI, A. (2018). «Detecting Fake News in Social Media Networks». Procedia Computer Science, 141, 215-222. https://doi.org/10.1016/j.procs.2018.10.171

ANDERSON, K. E. (2018). «Getting acquainted with social networks and apps: combating fake news on social media». Library Hi Tech News, 35 (3), 1-6. https://doi.org/10.1108/LHTN-02-2018-0010

ANWAR, S.; MILANOVA, M.; ANWER, M. and BANIHIRWE, A. (2019). «Perceptual Judgments to Detect Computer Generated Forged Faces in Social Media». In: SCHWENKER, F. and SCHERER, S. (eds.). Multimodal Pattern Recognition of Social Signals in Human-Computer-Interaction. MPRSS 2018. Lecture Notes in Computer Science, 11377. Springer, Cham. https://doi.org/10.1007/978-3-030-20984-1_4

ATANASOVA, P.; NAKOV, P.; MÀRQUEZ, L.; BARRÓN-CEDEÑO, A.; KARADZHOV, G.; MIHAYLOVA, T.; MOHTARAMI, M. and GLASS, J. (2019). «Automatic Fact-Checking Using Context and Discourse Information». Journal of Data and Information Quality, 11 (3), art. no. 12. https://doi.org/10.1145/3297722

BORGES, L.; MARTINS, B. and CALADO, P. (2019). «Combining Similarity Features and Deep Representation Learning for Stance Detection in the Context of Checking Fake News». Journal of Data and Information Quality, 11 (3), art. no. 14. https://doi.org/10.1145/3287763

BRITT, M. A.; ROUET, J.-F.; BLAUM, D. and MILLIS, K. (2019). «A Reasoned Approach to Dealing with Fake News». Policy Insights from the Behavioral and Brain Sciences, 6 (1), 94-101. https://doi.org/10.1177/2372732218814855

CHAWLA, R. (2019). «Deepfakes: How a pervert shook the world». International Journal of Advance Research and Development, 4 (6), 4-8.

CONSTINE, J. (2019). «Instagram hides false content behind warnings, except for politicians». TechCrunch. Retrieved from https://techcrunch.com/2019/12/16/instagram-fact-checking

CYBENKO, A. K. and CYBENKO, G. (2018). «AI and Fake News». IEEE Intelligent Systems, 33 (5), 3-7. https://doi.org/10.1109/MIS.2018.2877280

DAGDILELIS, V. (2018). «Preparing teachers for the use of digital technologies in their teaching practice». Research in Social Sciences and Technology, 3 (1), 109-121. https://doi.org/10.46303/ressat.03.01.7

DAY, C. (2019). «The Future of Misinformation». Computing in Science & Engineering, 21 (1), 108. https://doi.org/10.1109/MCSE.2018.2874117

FIGUEIRA, A. and OLIVEIRA, L. (2017). «The current state of fake news: challenges and opportunities». Procedia Computer Science, 121, 817-825. https://doi.org/10.1016/j.procs.2017.11.106

FLETCHER, J. (2018). «Deepfakes, Artificial Intelligence, and Some Kind of Dystopia: The New Faces of Online Post-Fact Performance». Theatre Journal, 70 (4), 455-471. Project MUSE. https://doi.org/10.1353/tj.2018.0097

FLORIDI, L. (2018). «Artificial Intelligence, Deepfakes and a Future of Ectypes». Philosophy & Technology, 31 (3), 317-321. https://doi.org/10.1007/s13347-018-0325-3

GOODFELLOW, I. J.; POUGET-ABADIE, J.; MIRZA, M.; XU, B.; WARDE-FARLEY, D.; OZAIR, S.; COURVILLE, A. and BENGIO, Y. (2014). «Generative Adversarial Networks». arXiv preprint arXiv:1406.2661.

HAMBORG, F.; DONNAY, K. and GIPP, B. (2018). «Automated identification of media bias in news articles: an interdisciplinary literature review». International Journal on Digital Libraries, 20, 391-415. https://doi.org/10.1007/s00799-018-0261-y

HARRISON, S. (2019). «Instagram Now Fact-Checks, but Who Will Do the Checking?». Wired. Retrieved from https://www.wired.com/story/instagram-fact-checks-who-will-do-checking/

HASAN, H. R. and SALAH, K. (2019). «Combating Deepfake Videos Using Blockchain and Smart Contracts». IEEE Access, 7, 41596-41606. https://doi.org/10.1109/ACCESS.2019.2905689

JANG, S. M. and KIM, J. K. (2018). «Third person effects of fake news: Fake news regulation and media literacy interventions». Computers in Human Behavior, 80, 295-302. https://doi.org/10.1016/j.chb.2017.11.034

KEERSMAECKER, J. de and ROETS, A. (2017). «Fake news: Incorrect, but hard to correct. The role of cognitive ability on the impact of false information on social impressions». Intelligence, 65, 107-110. https://doi.org/10.1016/j.intell.2017.10.005

KÖHN, M.; OLIVIER, M. S. and ELOFF, J. H. (2006). «Framework for a Digital Forensic Investigation». ISSA, 1-7.

KORSHUNOV, P. and MARCEL, S. (2019). «Vulnerability assessment and detection of deepfake videos». International Conference on Biometrics (ICB), 1-6. IEEE. https://doi.org/10.1109/ICB45273.2019.8987375

KWOK, A. O. and KOH, S. G. (2020). «Deepfake: a social construction of technology perspective». Current Issues in Tourism, 1-5. https://doi.org/10.1080/13683500.2020.1738357

LI, Y.; CHANG, M. C. and LYU, S. (2018). «In ictu oculi: Exposing AI created fake videos by detecting eye blinking». IEEE International Workshop on Information Forensics and Security (WIFS), 1-7. IEEE.

LIN, H. (2019). «The existential threat from cyber-enabled information warfare». Bulletin of the Atomic Scientists, 75 (4), 187-196. https://doi.org/10.1080/00963402.2019.1629574

LIV, N. and GREENBAUM, D. (2020). «Deep Fakes and Memory Malleability: False Memories in the Service of Fake News». AJOB Neuroscience, 11 (2), 96-104. https://doi.org/10.1080/21507740.2020.1740351

MACKENZIE, A. and BHATT, I. (2018). «Lies, Bullshit and Fake News: Some Epistemological Concerns». Postdigital Science and Education. https://doi.org/10.1007/s42438-018-0025-4

MARAS, M. H. and ALEXANDROU, A. (2019). «Determining authenticity of video evidence in the age of artificial intelligence and in the wake of deepfake videos». International Journal of Evidence & Proof, 23 (3), 255-262. https://doi.org/10.1177/1365712718807226

MOROZOV, E. (2013). «To save everything, click here: The folly of technological solutionism». Public Affairs.

PÉREZ, J.; MESO, K. and MENDIGUREN, T. (2021). «Deepfakes on Twitter: Which Actors Control Their Spread?». Media and Communication, 9 (1), 301-312. https://doi.org/10.17645/mac.v9i1.3433

QAYYUM, A.; QADIR, J.; JANJUA, M. U. and SHER, F. (2019). «Using Blockchain to Rein in the New Post-Truth World and Check the Spread of Fake News». IT Professional, 21 (4), 16-24. https://doi.org/10.1109/MITP.2019.2910503

RADFORD, A.; METZ, L. and CHINTALA, S. (2015). «Unsupervised representation learning with deep convolutional generative adversarial networks». arXiv preprint arXiv:1511.06434.

RÖSSLER, A.; COZZOLINO, D.; VERDOLIVA, L.; RIESS, C.; THIES, J. and NIEßNER, M. (2018). «FaceForensics: A large-scale video dataset for forgery detection in human faces». arXiv preprint arXiv:1803.09179.

VIZOSO, A.; VAZ-ÁLVAREZ, M. and LÓPEZ-GARCÍA, X. (2021). «Fighting Deepfakes: Media and Internet Giants’ Converging and Diverging Strategies Against Hi-Tech Misinformation». Media and Communication, 9 (1), 291-300. https://doi.org/10.17645/mac.v9i1.3494

WAGNER, T. L. and BLEWER, A. (2019). «The Word Real Is No Longer Real: Deepfakes, Gender, and the Challenges of AI-Altered Video». Open Information Science, 3 (1), 32-46. https://doi.org/10.1515/opis-2019-0003

WESTERLUND, M. (2019). «The Emergence of Deepfake Technology: A Review». Technology Innovation Management Review, 9 (11), 39-52. https://doi.org/10.22215/timreview/1282

WHYTE, C. (2020). «Deepfake news: AI-enabled disinformation as a multi-level public policy challenge». Journal of Cyber Policy, 5 (2), 199-217. https://doi.org/10.1080/23738871.2020.1797135

Published

2021-06-30
