Deepfake Parodies, Technological Spectacle, and Digital Catharsis: The Case of Iberian Son
DOI: https://doi.org/10.62161/revvisual.v17.5902

Keywords: Deepfakes, TikTok, Artificial Intelligence, Reception, Political Humor, Pedro Sánchez, Political Hate Speech

Abstract
This study examines 1,919 user comments on 57 deepfakes created by Iberian Son and posted on TikTok that parody the Spanish Prime Minister, Pedro Sánchez. Content analysis and textual analysis are applied to explore whether the use of AI motivates viewing, what kinds of responses the videos generate, and whether users infer implicit meanings not directly expressed in them. The results show that AI not only increases the appeal of the content but also acts as a tool for digital catharsis. The comments reveal that these videos provoke visceral reactions and reactivate latent political hate speech. They also project broader ideological readings, some of which are linked to narratives typical of the far right.
License
Copyright (c) 2025. Authors retain copyright and transfer to the journal the right of first publication and the publishing rights.

This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.
Authors who publish in this journal accept the following terms:
- Authors retain copyright.
- Authors transfer to the journal the right of first publication. The journal also holds the publishing rights.
- All published contents are governed by an Attribution-NoDerivatives 4.0 International License (see the informative version and legal text of the license). Under this license, third parties may use the published material as long as they credit the authorship of the work and its first publication in this journal; if they transform the material, they may not distribute the modified work.
- Authors may enter into other independent and additional contractual arrangements for the non-exclusive distribution of the version of the article published in this journal (e.g., inclusion in an institutional repository or publication in a book), provided they clearly indicate that the work was first published in this journal.
- Authors are allowed and encouraged to publish their work online (for example, on institutional and personal websites) after publication, with a reference to the journal, as this can lead to constructive exchanges and a wider and quicker circulation of published work (see The Effect of Open Access).