
A commentary on AI-generated art

If you’ve spent any amount of time on social media, you’ve likely encountered a deluge of AI-generated art. Earlier in the year, Craiyon (formerly known as DALL-E mini) went viral on Twitter, and the platform has since been saturated with examples of AI-generated images. In the months that followed, DALL-E 2, Midjourney, and Stable Diffusion emerged as the most popular platforms, and users have been turning to these tools to generate impressive examples of digital art, deeply surreal memes, and everything in between.

Back in August, a piece of AI art generated by Jason M. Allen using Midjourney won first prize in the digital art category at the Colorado State Fair Fine Arts Competition. Allen’s win immediately caused controversy on social media, with some even lamenting the death of art itself.

Source: Jason M. Allen’s ‘Théâtre D’opéra Spatial’

However, given its immense prevalence and the huge amount of investment behind it, AI-generated art looks set to further dominate the landscape of the internet in the coming years. So, how exactly did we get here, and does AI really signal the end of art?

How does it work?

The competition to build an AI model capable of generating images from prompts expressed in free, natural language has been ongoing for several years. In 2014, Ian Goodfellow and his colleagues introduced the Generative Adversarial Network (GAN), an algorithmic model that can be trained on a specific data set of images and eventually generate entirely new images resembling those in the set. In 2018, the French art collective Obvious sold a GAN-generated artwork for $432,500 at Christie’s, sparking debate over the conceptual and financial value of AI art.
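For the technically curious, the sketch below illustrates the adversarial idea in a few lines of PyTorch: a generator learns to produce samples while a discriminator learns to tell them apart from real data. To keep it self-contained it uses toy 2-D points rather than images, and it is only a minimal illustration of the concept, not the code behind any of the systems discussed here.

```python
# Minimal GAN sketch on toy 2-D data (illustrative only).
import torch
import torch.nn as nn

# "Real" data: points clustered around (2, 2).
def real_data(n):
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, 2.0])

# Generator maps random noise to 2-D points; discriminator scores real vs fake.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    # 1) Train the discriminator to separate real points from generated ones.
    real = real_data(64)
    fake = G(torch.randn(64, 8)).detach()
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + loss_fn(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    fake = G(torch.randn(64, 8))
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# After training, generated samples should drift towards the real cluster.
print(G(torch.randn(5, 8)))
```

The same tug-of-war, scaled up to networks with millions of parameters and data sets of real images, is what allowed GANs to produce novel artworks like the one Obvious sold.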

Google began to experiment with AI image generation in 2015 with DeepDream, a program designed by engineer Alexander Mordvintsev that utilises neural networks to produce composite images with an almost psychedelic quality. In 2017, Google also developed SketchRNN, a tool designed to teach AI how to draw pictures and generalise abstract concepts. Microsoft followed suit in 2018 with AttnGAN, an image generator capable of producing images from text prompts. Microsoft also invested $1 billion in OpenAI in 2019, and OpenAI went on to launch DALL-E in 2021 and DALL-E 2 in 2022. More recently, Microsoft announced last month that it will be integrating DALL-E 2 into its Bing search engine.

At a basic level, today’s AI art generators are built by scanning the internet for millions, if not billions, of pictures. Using captions and metadata to ascertain roughly what each image shows, the algorithm analyses the images and finds patterns between them. With enough training, the model learns what a particular kind of image should look like and can begin generating its own images. From there, the user simply enters a prompt into a text field, and the model produces an image based on that description.
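Under the hood, the link between text and images is usually learned by a model trained to score how well a caption matches a picture. The snippet below is only a rough sketch of that idea, using the openly available CLIP model via the Hugging Face transformers library (a close relative of the text encoders used by DALL-E 2 and Stable Diffusion); the placeholder image and captions are just examples, not how any particular generator is actually built.

```python
# Illustrative sketch: scoring how well captions match an image with CLIP.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.new("RGB", (224, 224), "red")   # stand-in for a scraped photo
captions = ["a plain red square", "a robot answering a telephone"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# Higher probability = the model thinks that caption fits the image better.
probs = outputs.logits_per_image.softmax(dim=1)[0].tolist()
print(dict(zip(captions, probs)))
```

Trained over hundreds of millions of image-caption pairs, this kind of matching is what lets a generator connect the words in your prompt to visual patterns it has seen before.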

As well as being trained on large data sets, AI image generators have to interpret natural language in order to recognise user inputs accurately and generate relevant images. At the moment, using one of these AI services is rather like using an opaque command line tool where you don’t know any of the commands. Prompts can be interpreted in wild and surprising ways, often resulting in images that do not match your initial intent. Prompt writing is the main skill involved in using these tools, and many online artists keep the exact syntax they use a secret. However, as these tools become more sophisticated, they are likely to get better at understanding what users want without the need for lengthy, esoteric descriptions.

A robot answering a telephone (image generated with DALL-E 2)

Accessing these AI image engines can be tricky in itself. At the moment, DALL-E 2 gives new users 50 free image credits, while Midjourney remains in beta testing. Stable Diffusion, meanwhile, is freely available as open-source software. While there are a few public instances online, running Stable Diffusion locally requires a fairly powerful GPU, or graphics card, and some technical expertise. The relative expense of GPU hardware is currently one of the key limitations on the accessibility of AI art.
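For anyone wondering what running it locally actually involves, here is a rough sketch using the open-source Hugging Face diffusers library. The checkpoint name and prompt are examples only, and a CUDA-capable graphics card with a few gigabytes of memory is assumed; without one, the code falls back to a much slower CPU run.

```python
# Rough sketch: generating an image locally with Stable Diffusion via diffusers.
# Requires: pip install torch diffusers transformers
import torch
from diffusers import StableDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example public checkpoint
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
)
pipe = pipe.to(device)

# The prompt is the main creative lever; wording and style keywords matter.
image = pipe("a robot answering a telephone, digital art").images[0]
image.save("robot_telephone.png")
```

Half-precision weights are used on the GPU to keep memory usage within reach of consumer graphics cards, which is precisely the cost barrier mentioned above.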

What are the applications?

The premise behind AI art is a compelling one: whatever you can imagine and put into words can be turned into an image almost instantly. This is a fundamentally powerful concept, and the potential for both creative individuals and businesses looking to illustrate their ideas is huge. While there are plenty of people on the internet who would argue that writing prompts for AI generators is an artistic skill in itself, AI art does radically reduce the skill barrier to producing detailed sketches and images.

A robot in an office building (image generated with Stable Diffusion)

From book covers and album artwork to stock images, storyboards, and concept art, AI-generated images are already being used for a wide range of creative, experimental, and commercial purposes. AI art can potentially play a role in almost any instance where images or graphics are required, and it’s difficult to predict what the long-term effects of this disruptive technology will be. For starters, with the integration of AI art tools into search engines and social media sites, the technology has the potential to radically alter how the internet looks and feels. 

Why has AI art been criticised?

There are several reasons why AI art has come under criticism recently. Firstly, artists have criticised the way these AI models are trained. The models that underpin these generators are trained on millions of freely available images scraped from across the internet, often without the original creators’ consent or compensation, and in many cases the resulting AI images are then sold for commercial profit.

Secondly, the quality of these AI models can be lacking in several areas. If you’ve spent any amount of time using them, you likely already know that most of these models are not particularly adept at drawing faces and hands. It’s not simply that the hands look badly drawn (plenty of human artists struggle to draw them); it’s that the AI models make choices that no human artist would ever make, often resulting in an uncanny amalgamation of flesh and fingers.

AI image generators can struggle to process prompts that use ambiguous language. For example, should the word ‘kiwi’ refer to the fruit or to the bird? And should the word ‘salmon’ bring up images of live fish or cooked food? These ambiguities can sometimes return surprising results.

‘A flying kiwi’ (image generated with DALL-E 2)

Another problem is that while these image generators roughly know what, for example, a telephone should look like, they do not yet understand what a telephone actually is. As a result, many of the (non-)objects in AI images have a surreal and dreamlike quality.

A telephone with too many receivers (image generated with DALL-E 2)

Thirdly, as with many advanced AI algorithms, there is real potential for AI-generated images to be abused. As AI art improves in accuracy and quality, the ability to render realistic-looking images of real people could open the door to harassment, abuse, and misinformation.

Finally, the internet is currently filled with articles and social media posts debating whether AI art is really art at all. Some people argue that AI is devaluing and de-skilling the process of making art in favour of profit, while others argue that these tools enable a wider range of individuals to experiment with art.

Will AI replace artists?

Shutterstock recently announced that it will start selling AI-generated images, placing AI art in direct competition with the human photographers and artists who sell images through its website. Moreover, generative art generally cannot be copyright protected, because works produced autonomously by a computer without human creative input do not qualify for copyright, although it’s an interesting exercise to consider where the source material came from in the first place.

It remains to be seen whether these AI images can match the quality of those produced by human artists and photographers. The accuracy and detail of AI art are still lacking in many areas, and many argue that no AI algorithm can come close to the level of specificity gained by working with a human artist.

Despite the hype surrounding AI art, it seems incredibly unlikely that AI will replace human artists, photographers, and illustrators any time soon. If anything, the best current use case for AI art is in collaboration with human artists, who can use the tools to generate reference images, fill in repetitive details, and edit images. In short, the most promising feature of AI art is its ability to assist rather than replace human artists, much the same argument that is made about conversational interfaces assisting rather than replacing the people in call centres and customer service.

Sources

‘Dall-E 2 mini: what exactly is ‘AI-generated art’? How does it work? Will it replace human visual artists?’ (The Guardian, 2022)

‘Dear Artists: Do Not Fear AI Image Generators’ (Wired, 2022)

‘Can AI-Generated Art Replace Creative Humans?’ (Vice, 2022)

‘An AI-Generated Artwork Won First Place at a State Fair Fine Arts Competition’ (Vice, 2022)

‘The Past, Present, and Future of AI Art’ (Skynet Today, 2019)

‘How Did A.I. Art Evolve?’ (Artnet, 2021)
