Abstract
AI-generated artworks are rapidly improving in quality and bring many ethical issues to the forefront of discussion. Data scarcity leaves many individuals under-represented with respect to attributes such as age and ethnicity, which can provide useful context when transferring artistic styles to an image. In this study, we consider these issues through the engineering of an AI art model trained on work inspired by Vincent van Gogh. The model is fine-tuned from a base model trained on nearly 6 billion images and thus, given knowledge of context, enables style transfer to individuals and entities not present in the art dataset. All models in this work are trained on consumer-level computing hardware, with the hyperparameters and configurations presented. Finally, we explore the application of computer vision models that can detect whether an artwork was created by a human or a machine with 98.14% accuracy. The dataset and models are open-sourced for future work.
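As a concrete illustration of the pipeline described above, the sketch below shows how a fine-tuned latent diffusion checkpoint could be loaded with the Hugging Face diffusers library, the noise scheduler swapped (see Note 1), and an image generated from a prompt containing a unique subject token (see Note 2). The checkpoint path, the token "ohwx", and the sampling hyperparameters are illustrative assumptions, not the exact configuration reported in the paper.

```python
import torch
from diffusers import StableDiffusionPipeline, DDIMScheduler

# Illustrative path to locally fine-tuned weights; the published checkpoint name is not given here.
model_path = "./van-gogh-style-finetuned"

# Load the fine-tuned latent diffusion pipeline in half precision for consumer-level GPUs.
pipe = StableDiffusionPipeline.from_pretrained(model_path, torch_dtype=torch.float16)

# Swap the noise scheduler (see Note 1); DDIM is one of several interchangeable options.
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

# "ohwx" stands in for the unique token described in Note 2, binding the new subject
# to the fine-tuned style without overwriting the base model's existing knowledge.
prompt = "a portrait of ohwx person in the style of Vincent van Gogh, oil on canvas"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("van_gogh_style_portrait.png")
```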
Notes
1. Further details on schedulers can be found at: https://huggingface.co/docs/diffusers/using-diffusers/schedulers.
2. A unique token is used to add a new term to the dictionary without interfering with the base knowledge.
3. The dataset from this study can be downloaded from https://www.kaggle.com/datasets/birdy654/detecting-ai-generated-artwork.
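For the detection task referenced in Note 3 and in the abstract, a minimal baseline could be a convolutional classifier fine-tuned to separate human-made from AI-generated artworks. The sketch below assumes the Kaggle dataset has been extracted into per-class folders and uses an off-the-shelf ResNet-18; the architecture, folder layout, and hyperparameters are assumptions for illustration, not the configuration that achieved the reported 98.14% accuracy.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Assumes the Kaggle dataset (Note 3) has been extracted into per-class sub-folders,
# e.g. data/train/human and data/train/ai; the folder layout here is illustrative.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("data/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

device = "cuda" if torch.cuda.is_available() else "cpu"

# Off-the-shelf ResNet-18 with a two-class head: human-made vs. AI-generated artwork.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(5):
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```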