Generative artificial intelligence and its impact on media content creation
Abstract
Generative artificial intelligence (AI) is a rapidly advancing field that enables the automated production of high-quality textual, graphic, audio and audiovisual content. The technology has significant implications for journalism, advertising and entertainment, and raises ethical, legal and social concerns. This paper examines the possibilities, limitations and risks of generative AI for media content production. It analyzes large language models for automated writing, generative adversarial networks for image and short-video synthesis, and deepfake technology for video manipulation and voice cloning, and it discusses the implications of these technologies for intellectual property, information veracity, personal identity and human creativity. It concludes that generative AI is a powerful and innovative tool for media content creation, but one that demands careful and ethical use by both content producers and consumers.
Article Details

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Funding data
Ministerio de Ciencia e Innovación, grant number PID2019-107748RB-I00