The Revolution in Visual Creation

Generative Artificial Intelligence

Authors

Casas Arias, M., Priego Díaz, A., & Lara-Martínez, M.

DOI:

https://doi.org/10.62161/revvisual.v16.5304

Keywords:

Artificial intelligence, Photography, Midjourney, Visual, Social Networks

Abstract

The integration of artificial intelligence (AI) into audiovisual creation is redefining the boundaries between human creativity and technological capability, and its use is now widespread on social networks.

This research reviews the technical background and analyzes the application of artificial intelligence at the different stages of visual production, examining whether communication professionals can leverage their expertise to obtain better results from these tools.

The conclusions determine that artificial intelligence is driving the emergence of new forms of artistic and communicative expression.


References

Adams, A. (1942). National Archives. https://www.archives.gov/espanol/ansel-adams

Arana Arrieta, E., Mimenza Castillo, L. & Narbaiza Amillategi, B. (2020). Pandemia, consumo audiovisual y tendencias de futuro en comunicación. Revista de Comunicación y Salud, 10(2), 149–183. https://doi.org/10.35669/rcys.2020.10(2).149-183

Boden, M. A. & Edmonds, E. A. (2009). What is generative art? Digital Creativity, 20(1–2), 21–46. https://doi.org/10.1080/14626260902867915

Brisco, R., Hay, L. & Dhami, S. (2023). Exploring the role of text-to-image AI in concept generation. Proceedings of the Design Society, 3, 1835–1844. https://doi.org/10.1017/pds.2023.184

Chen, L., Wang, P., Dong, H., Shi, F., Han, J., Guo, Y., Childs, P. R. N., Xiao, J. & Wu, C. (2019). An artificial intelligence based data-driven approach for design ideation. Journal of Visual Communication and Image Representation, 61, 10–22. https://doi.org/10.1016/j.jvcir.2019.02.009

Cobb, P. J. (2023). Large Language Models and Generative AI, Oh My! Advances in Archaeological Practice, 11, 363–369. https://doi.org/10.1017/aap.2023.20

Elharrouss, O., Almaadeed, N., Al-Maadeed, S. & Akbari, Y. (2020). Image Inpainting: A Review. Neural Processing Letters, 51, 2007–2028. https://doi.org/10.1007/s11063-019-10163-0

Evans, Z., Carr, C., Taylor, J., Hawley, S. H. & Pons, J. (2024, February 7). Fast Timing-Conditioned Latent Audio Diffusion. arXiv. https://doi.org/10.48550/arXiv.2402.04825

Figoli, F. A., Mattioli, F. & Rampino, L. (2022). AI in the design process: Training the human-AI collaboration. Proceedings of the 24th International Conference on Engineering and Product Design Education 2022. The Design Society. https://doi.org/10.35199/EPDE.2022.61

Forrester Consulting (2017, October 5). The Machine on Your Team: New study shows how marketers are adapting in the Age of AI. https://www.prnewswire.com/news-releases/the-machine-on-your-team-new-study-shows-how-marketers-are-adapting-in-the-age-of-ai-300531385.html

Fu, T.-J., Hu, W., Du, X., Wang, W. Y., Yang, Y. & Gan, Z. (2023). Guiding Instruction-based Image Editing via Multimodal Large Language Models. arXiv. https://doi.org/10.48550/arXiv.2309.17102

Gatys, L. A., Ecker, A. S. & Bethge, M. (2016). Image Style Transfer Using Convolutional Neural Networks. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). https://doi.org/10.1109/CVPR.2016.265

Jayanthiladevi, A., Raj, A. G., Narmadha, R., Chandran, S., Shaju, S. & Krishna Prasad, K. (2020). AI in Video Analysis, Production and Streaming Delivery. Journal of Physics: Conference Series, 1712(1). https://doi.org/10.1088/1742-6596/1712/1/012014

Son, J.-W., Han, M.-H. & Kim, S.-J. (2019). Artificial Intelligence-Based Video Content Generation. Electronics and Telecommunications Trends. https://doi.org/10.22648/ETRI.2019.J.340304

Crowson, K., Biderman, S., Kornis, D. & Stander, D. (2023). VQGAN-CLIP: Open Domain Image Generation and Editing with Natural Language Guidance. https://doi.org/10.1007/978-3-031-19836-6_6

Lee, S. (2023). Transforming Text into Video: A Proposed Methodology for Video Production Using the VQGAN-CLIP Image Generative AI Model. International Journal of Advanced Culture Technology, 11(3), 225–230. https://doi.org/10.17703/IJACT.2023.11.3.225

Liu, V. & Chilton, L. B. (2022, April 29). Design Guidelines for Prompt Engineering Text-to-Image Generative Models. Proceedings of the CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3491102.3501825

López, C. E., Miller, S. R. & Tucker, C. S. (2019). Exploring biases between human and machine generated designs. Journal of Mechanical Design, 141(2). https://doi.org/10.1115/1.4041857

Mirowski, P. W., Mathewson, K. W., Pittman, J. & Evans, R. (2023). Writing Screenplays and Theatre Scripts with Language Models: Evaluation by Industry Professionals. Proceedings of the CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3544548.3581225

Molina-Siles, P. & Giménez Ribera, M. (2023). Inteligencia artificial y creatividad para la generación de imágenes arquitectónicas a partir de descripciones textuales en Midjourney. Emulando a Louis I. Kahn. EGA Expresión Gráfica Arquitectónica, 28(49), 238–251. https://doi.org/10.4995/ega.2023.19294

Momot, I. (2022). Artificial Intelligence in Filmmaking Process: Future Scenarios [Bachelor's thesis]. https://urn.fi/URN:NBN:fi:amk-2022052712497

Nightingale, S. J. & Farid, H. (2022). AI-synthesized faces are indistinguishable from real faces and more trustworthy. Proceedings of the National Academy of Sciences of the United States of America, 119(8). https://doi.org/10.1073/pnas.2120481119

Oppenlaender, J. (2022). The Creativity of Text-to-Image Generation. ACM International Conference Proceeding Series, 192–202. https://doi.org/10.1145/3569219.3569352

Oppenlaender, J. (2023). A taxonomy of prompt modifiers for text-to-image generation. Behaviour & Information Technology. https://doi.org/10.1080/0144929X.2023.2286532

Parr, M. (1997). Martin Parr's official website. https://www.martinparr.com/

Rogers, A., Kovaleva, O. & Rumshisky, A. (2020). A Primer in BERTology: What We Know About How BERT Works. Transactions of the Association for Computational Linguistics, 8, 842–866. https://doi.org/10.1162/tacl_a_00349

Schetinger, V., Di Bartolomeo, S., El‐Assady, M., McNutt, A., Miller, M., Passos, J. P. A. & Adams, J. L. (2023). Doom or Deliciousness: Challenges and Opportunities for Visualization in the Age of Generative Models. Computer Graphics Forum, 42(3), 423–435. https://doi.org/10.1111/cgf.14841

Sosa, R. & Gero, J. S. (2016). Multi-dimensional creativity: A computational perspective. International Journal of Design Creativity and Innovation, 4(1), 26–50. https://doi.org/10.1080/21650349.2015.1026941

Steinfeld, K. (2023). Clever little tricks: A socio-technical history of text-to-image generative models. International Journal of Architectural Computing, 21(2), 211–241. https://doi.org/10.1177/14780771231168230

Wang, X., Li, Y., Zhang, H. & Shan, Y. (2021). Towards Real-World Blind Face Restoration with Generative Facial Prior. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). https://doi.org/10.1109/CVPR46437.2021.00905

Zhang, C. & Peng, Y. (2018). Stacking VAE and GAN for Context-aware Text-to-Image Generation. 2018 IEEE 4th International Conference on Multimedia Big Data (BigMM). https://doi.org/10.1109/BigMM.2018.8499439

Zhang, L., Chen, Q., Hu, B. & Jiang, S. (2020). Text-Guided Neural Image Inpainting. MM '20: Proceedings of the 28th ACM International Conference on Multimedia, 1302–1310. https://doi.org/10.1145/3394171.3414017

Published

2024-07-09

How to Cite

Casas Arias, M., Priego Díaz, A., & Lara-Martínez, M. (2024). The Revolution in Visual Creation: Generative Artificial Intelligence. VISUAL REVIEW: International Visual Culture Review / Revista Internacional de Cultura Visual, 16(4), 227–244. https://doi.org/10.62161/revvisual.v16.5304

Issue

Section

Research articles