One of these pictures is real. Can you pick which one?
Artificial Intelligence is just the latest tool with which artists explore their ideas. So why is everyone getting so het up?
By Andrew Stephens
Fans were vexed recently when Cindy Sherman posted an image on Instagram showing a blue-faced woman framed by a ruff-like bonnet. The US artist is known for starring as various personas in all her work, but this one appeared to cross a new line. Sherman, it turned out, like pretty much everyone right now, had prompted an AI to make the portrait.
Writing about it in the Financial Times Magazine, Sherman explained how she had taken selfies, manipulated them in a face-tuning app to make herself “look weird”, and loaded the results into an AI program to see what it might produce. She wasn’t overly impressed; it didn’t look like a Cindy Sherman artwork.
“And when I put a whole bunch of these new images on Instagram, people were furious. They were like, ‘Oh my God, I can’t believe you’re using AI. Your other work is so much better than this!’
“It’s as if I’m presenting this as my own work, which seems so ridiculous. I’m just experimenting; it’s like a sketchbook.” The images were not for exhibition and she doesn’t intend to use AI as a tool for finished pieces.
Sherman is hardly the first artist to encounter heightened emotions about distinctions between “real” and AI-assisted art. When new-generation AIs exploded into the public consciousness late last year, creatives responded with everything from panic (Save the artists!) to unbridled enthusiasm (How do I sign up?) to horror (It’s Skynet from Terminator!).
Underneath, though, is a new urgency about questions that have long been pondered in speculative art, film, literature and, of course, by tech nerds: can AIs truly have consciousness? Can they have agency to be creative and make art and, if so, what role remains for artists? And what is art anyway – human-made or otherwise?
AI art-generator systems such as DALL-E 2, Midjourney, Stable Diffusion and BlueWillow are being trained on incomprehensibly vast data sets of images and artworks to make new images following a human text prompt. And anyone can create unique imagery. Take, for example, Swedish multinational engineering company Sandvik, creator of the stainless steel Impossible Statue, the result of multiple AIs trained on the work of five famous sculptors: Michelangelo, Auguste Rodin, Käthe Kollwitz, Takamura Kotaro and Augusta Savage.
Unveiled in May, the statue weighs 500kg and went on display at Sweden’s national science and technology museum.
Responses range from trepidation to enthusiasm, possibly not unlike the reactions to other art-making innovations, such as the first photographs and movies (1800s), surrealist paintings (1930s) or early video art (1960s-70s).
All sorts of online groups are pondering these issues – from AI Art Universe and Promptism to NAAIAI (Non-Awful AI Art Imaging). Emergent Garden, a YouTube channel devoted to neural networks, recently released The Creativity of AI Art, a half-hour documentary tackling the ethical and practical issues surrounding AI and the visual arts. Its (human) narrator suggests AI-generated images can be beautiful, funny, weird, daunting and meaningful in a philosophical way – and isn’t that what constitutes art? But what will happen if AIs achieve pure, independent innovation, moving beyond cobbling, imitating and extending existing images?
Australian filmmaker Patrick Clair, whose Antibody studio has created title sequences for TV hits such as Westworld, True Detective and The Man in the High Castle, describes AI systems as “limb extensions”.
“Art, for me, is the creating of meaning. And considering that these models do not have a true understanding of what it is they’re doing, I think the creation of meaning still lies with the human driving it.”
He says there is a lot of bad AI art on Instagram and Reddit. “The lesson there is that you still need sophisticated, visually literate, creative people,” he says. “They are the prerequisite for good results. [AI] is a very sophisticated tool that is terrifying in the best possible way, but a tool nonetheless.”
Clair has been creating AI-assisted still art images for some time. These include his atmospheric dust-storm series of a Melbourne tram and a masked parent with a stroller, made using an older version of DALL-E no longer on public release. He liked the results. “If you were quite vague in what you asked for, you would get these touching little moments of character. As [developers] have tweaked the model, that is now less so; I’m hoping it will get back to that.”
Clair’s poignant AI images are up there with the spookily authentic-looking Victorian-era images created by Mario Cavalli, a London filmmaker and animation director. Like Clair, Cavalli thinks AI is only as creative as the artist using it. “AI is no more inherently creative than a pencil, or a camera,” he says. “It is not autonomous, nor sentient, nor truly ‘intelligent’ in any conventional sense.”
Even so, he says, the output can be highly polished and beautifully rendered, which can blind us to its limitations, such as weird anatomy, garbled text or anachronistic detail. Some of his Victorian images ended up with blurred hands and facial features, and the backgrounds included electric street lighting, which was not introduced in London until two decades after the 1860s specified in the prompt. Still, when the images were published in PetaPixel, he was “alarmed to see how many people took the images to be real”. Others “cried ‘fake!’ as though calling out a deliberate deception, when no attempt was made on my part to hide the fact, or correct the errors inherent in the images”, he says.
“Out of fear that these deliberate fakes might find their way into the vast photographic AI database and eventually be taken as authentic historical record, I have since abandoned ersatz historical imagery, preferring a jokier, ‘what if...?’ approach, alongside more technical or aesthetic exercises.” These include “What if... there was a Care Home for Aged and Retired Superheroes?” and “What if... the storming of the Bastille was reenacted by 21st-century schoolchildren?”
“Hopefully, these are sufficiently absurd as to raise no more than a smile,” Cavalli says.
Like Cindy Sherman, artist Petrina Hicks used AI while researching her latest body of work, Biophilia. Hicks is well known for her analogue processes; everything in her impeccably polished images is manually orchestrated and styled in the studio. While preparing for Biophilia, she experimented with DALL-E 2.
“Rather than use a text description to generate images, I would upload existing images of mine, or test images to see what the AI would generate,” Hicks says. “Most of the AI-generated images look weird! However, I found the process expanded my vision and added some perspectives and ideas I would not have considered otherwise.
“Primarily in relation to composition, pose and perspective, the AI-generated images feel liberated from my vision – the way I see things – and I found this helpful.”
The images were never considered for exhibition, but the experiment raises the question of where the line is crossed between human art and machine-made imagery.
Artists Karen ann Donnachie and Andy Simionato work together on non-human and autonomous art systems to make drawing, writing and reading robots. They question whether AI is simply another tool, saying the meanings of words such as “agency”, “creativity”, “consciousness” and even “computer” are constantly transforming, with the technologies themselves reshaping our understanding.
“The word ‘computer’ used to refer to women whose work consisted in literally ‘computing’ mathematical equations manually on paper,” they say. “Now, we know the word ‘computer’ to signify a machine. So, it’s not inconceivable to imagine that one day the word ‘computer’ could mean a mixture of human and machine, or perhaps a machine with agency.”
One of their recent works, for the cover of Art + Australia magazine, used AI-generated images responding to the prompt “oil-painted Australian landscape”. Another, The Library of Nonhuman Books (2019–ongoing), is an automated book system that “reads” a physical copy of an existing book to uncover haiku poems. On each page, it erases all the other words, leaving only the poem. It then “illuminates” pages with a relevant image retrieved from a Google Image search. Finally, the new version of the book is physically published by a print-on-demand service.
“All this happens without any human intervention after the system is initiated,” the pair say. “In fact, we don’t see the outcomes of the book until it is delivered to our door!”
Melbourne artist and RMIT lecturer Xanthe Dobbie, who uses AI to produce artwork, is more sceptical.
“Most of the ‘art’ churned out of AI image generators is very boring,” Dobbie says. “Most of the text regurgitated by ChatGPT is highly unreliable. The amount of fact-checking required often negates the apparent shortcut of using the thing in the first place. Harnessing and fine-tuning AI requires a significant amount of human intervention, creative coding and problem-solving.”
Perhaps what we really need is AEI – Artificial Emotional Intelligence – proposed by philosopher Alain de Botton, who hopes computational power might one day be directed at the emotional and psychological dimensions of existence – the bedrock of art-making.
He and John Armstrong argue in their book Art as Therapy that art is a tool like any other, an extension of the body that allows a wish to be carried out. “A knife is a response to our need, yet inability, to cut,” they write.
An image-generating AI, they might have added, is just another knife. But a double-edged one with an unpredictable future.