When Russia’s invasion of Ukraine stymied his travel plans, Belgian photographer Carl De Keyzer decided to take a virtual trip to Russia instead.
From his home, the lauded documentary photographer began to work on a collection of images about Russia with the help of generative artificial intelligence (AI). He was unprepared for the consequences.
In the late 1980s, De Keyzer travelled to Russia 12 times in the space of a year. The USSR was in its death throes and De Keyzer photographed the rituals and pastimes which would soon disappear. He returned in the 2000s to photograph inside Siberia’s prison camps.
In November, three decades after he first visited Russia, De Keyzer published a series of AI-generated images in a book called Putin’s Dream. This time there were no human bodies and no moments in time; instead, a vision brought to life with the help of computers.
Within hours of posting online about Putin’s Dream, De Keyzer was facing criticism for having produced fake images and possibly contributing to misinformation.
By August 2023, an estimated 15 billion images had already been created using text-to-image algorithms, a type of artificial intelligence in which written prompts are given to software to generate new images.
As generative AI imagery becomes ubiquitous, concerns about its ethics are rising. It has also become a thorny topic among photographers.
Putin’s Dream
To create the Putin’s Dream series, De Keyzer fed the AI software his own photographs from previous projects, adjusting the software to match his visual style.
He says the series is a “comment on the horrors of [the Ukraine] war caused by the dream of basically one man” and that using generative AI was a means to that end.
Pleased with the results, De Keyzer says the “new images — illustrations” he has published in Putin’s Dream reflect his previous photography work, which has often explored propaganda and systems of power.
“I did try to get as close to ‘real’ images,” he tells ABC News.
“Of course, it remains artificial, but it was possible to get really close to almost realistic looking images and more importantly to introduce my way of composing, and commenting [using] irony, humour, doubt, wonder, surrealism … A lot of people say that they clearly see my style in these images, which was the idea.”
De Keyzer says he was always transparent about using AI to create Putin’s Dream.
But when he posted several of the images on Instagram to publicise his new book, the reaction was harsh.
Many people criticised him for posting “fake” images, De Keyzer says.
“There were a lot of negative comments on my Insta post, like 600 in two hours. I was not used to that. I always had great reactions to my posts … but this time the box exploded … Some said that they were my biggest fans before but not anymore. AI still provokes automatic disgust, whatever the approach or progress made.”
For a moment, he worried the project was a mistake. But he has also received encouragement from people praising the work, he says.
“Amazing work that shows once again how photography can be done differently, without travelling the world, but by navigating the other world, our double, this latent space of computer memories that contain the countless accumulated media strata,” Belgian digital culture academic Yves Malicy wrote in French (this is a translation) on Facebook.
Is the world ready for AI images?
The history of photography is marked by scandals involving manipulation, staging or fakery. Yet photography’s status as a record of reality endures. As generative AI becomes more sophisticated, many fear it could unleash a tsunami of misinformation.
When artist Boris Eldagsen shocked the photographic world by winning the Sony World Photography Prize with an AI-generated image, he said he wanted to provoke a debate about AI and photography.
“It was a test to see if photo competitions are prepared for [AI]… They are not,” he told ABC Radio National.
Unlike Eldagsen, De Keyzer was not out to deceive anyone. But he eventually deleted his Instagram post of the images because, he says, people started to attack Magnum Photos, the prestigious photographic collective he has been a member of since 1994.
One week after De Keyzer’s post, Magnum Photos released a statement on AI-generated images.
“[Magnum] respects and values the creative freedom of our photographers,” the statement said. But its archive “will remain dedicated exclusively to photographic images taken by humans and that reflect real events and stories, in keeping with Magnum’s legacy and commitment to documentary tradition”.
De Keyzer is not the only member of Magnum Photos to spark controversy by experimenting with AI image generation.
Michael Christopher Brown used generative AI to produce a series of images about Cuban refugees. It was a way to tell inaccessible stories, he told PetaPixel.
In a complex meditation on AI, and a “prank” on his photographic community, Jonas Bendiksen used software to create 3D models of people and insert them into landscape photographs he took for a series examining a Macedonian town which had become a notorious hub of fake news production. He published a book of the images called The Book of Veles, and he used AI to generate the book’s accompanying text.
“Seeing that I have lied and myself produced fake news, I have in some way undermined the believability of my work,” he told Magnum Photos. “But I do hope … that this project will open people’s eyes to what lies ahead of us, and what territory photography and journalism is heading into.”
The liar’s dividend
Speaking at the Photography Ethics Centre symposium in December, Alex Mahadevan says the breakdown in trust brought about by AI-generated images, which enables people to dismiss real images or videos as fake, is known as “the liar’s dividend”.
Mahadevan, the director of digital media literacy project MediaWise, points to the Princess Catherine photo debacle as an example.
After an AI-assisted image of the princess and her children was published, then hastily retracted by news organisations when anomalies were spotted, wild speculation followed about Princess Catherine’s health. A video the princess released to update supporters on her health was then dismissed by many. “Immediately, you had people all over the internet saying that is not a video of Princess Kate, it is a deepfake, she is dead … all of these wild conspiracy theories,” Mahadevan says.
It is why transparency is vital when it comes to using generative AI. But as the panellists at the symposium discussed, how the use of AI should be labelled or captured in metadata, and when AI assistance, as opposed to generative AI, becomes significant enough to warrant disclosure, remain unresolved issues.
Savannah Dodd, founder and director of the Photography Ethics Centre, says there are other ethical considerations, beyond questions of truth, when it comes to generative AI technology.
“AI allows creators to make images of places that they have never themselves visited or that they may not know very much about,” she says.
Dodd has written about how bias in AI image generators, and a lack of consultation by the user, can lead to the reproduction of stereotypes.
The question of which AI generator to use should also be carefully considered, Dodd says.
“Many of the more prominent generators scrape images from across the internet, without consideration for copyright”.
Last year, images of Australian children were found in a dataset called LAION-5B, which was used to train a number of publicly available AI generators that produce hyper-realistic images.
In November, a parliamentary inquiry into AI released a report saying the companies behind generative AI had committed “unprecedented theft” from creative workers in Australia.
The inquiry was presented with a “significant body of evidence” suggesting generative AI was already impacting creative industries in Australia.
Dodd says that creators working in photography or AI-generated image making should question their motivations, the message they wish to convey and the medium they are using to do that.
“I think it’s worth taking time to understand how an image or a set of images will operate in the world, how they will be understood, and what their potential impact might be,” she says.
For De Keyzer, the fuss over his use of generative AI is overblown. He says the world needs to educate itself on AI to avoid its possible abuse, and he may well use it again.
“AI is just another tool with a great future, why should I have to repeat what I have always done,” he says.
“I do like the fact that I can travel in my mind now. I’m getting older, and this could be a way of staying creative without the problems and cost you have with real travels. Of course, the real thing is still preferred. It is a fact that it is getting more and more difficult to travel, sell the images, have these published.”