‘F*ck generative AI’: Why artists are so angry
Creatives rail at the ‘art heist of the century’
Molly Crabapple is incandescent. “Fuck generative AI,” she said. “Fuck generative AI left. Fuck generative AI right. Fuck generative AI centre.”
The US-based writer and artist railed against what she described as the “biggest art heist of the century,” in which “everything that we’ve created together as a culture was suctioned up and used to train these generators.”
It’s hard to argue with that. Indeed, OpenAI has admitted that it would be “impossible to train today’s leading AI models without using copyrighted materials,” while asserting that “legally, copyright law does not forbid training.”

That remains to be seen; cases are piling up against OpenAI and other AI companies in courtrooms around the world. But it’s certainly true that regulators have been slow to enforce copyright laws, seemingly afraid of standing in the way of the GenAI juggernaut. In the UK, several well-known artists, eyeing the government’s apparent wobble towards easing copyright restrictions for AI training, have recently weighed into the debate, fearful of its impact on culture, careers and artistic expression. Grumbled complaints are becoming a roar of pain.
What follows is a summary of their arguments.
Jobs are being lost or becoming more menial
When dealing with abstract promises such as progress, growth and innovation, it always pays to ask, “For whom?” Who benefits? Clearly, few creatives believe it’s them.
Crabapple was among several visual artists, writers, translators, actors, musicians, broadcasters and others attending an event organised by The Alan Turing Institute’s CREAATIF (Crafting Responsive Assessments of AI & Tech-Impacted Futures) research project and Queen Mary University of London.
A study by CREAATIF found that AI is worsening what it describes as already exploitative working conditions for creative workers, with employers demanding ever-greater output at lower rates, and with an associated fall in the value placed on creative work.

Eighty percent of respondents in the study, three-quarters of whom were freelance or self-employed, saw GenAI as decreasing their job security, with almost the same number saying financial compensation is taking a hit and their work is being devalued. This compares to 12% or less in each case who saw the technology as beneficial.
In particular, the sort of entry-level, “bread and butter” commercial jobs that artists depend on are disappearing as companies hand them over, in full or in part, to AI.
“It is absolutely devastating and apocalyptic for every single person who, instead of making themselves a brand and trying to make themselves famous, is just focused on being good at their craft. Those are the people that AI is going after,” said Crabapple.
“It is also coming after every single possible job that a young artist or writer or musician would have done in order to get the experience they need to find their voice. It is a mass act of kicking out the ladder from under us.”
She voiced scepticism at the promises of new roles: “Sam Altman trots up on stage [at Trump’s inauguration] and says that AI is going to create hundreds of thousands of new jobs. I don’t know what he’s talking about, unless he’s talking about child cobalt miners in Congo.”
Meanwhile, those in the creative professions are being demoted to mere administrators of “AI slop”, she went on. “If there are jobs left for us as artists after generative AI is done, they will be Photoshopping out six fingers and bunions; they will not be creating.”
Some groups are more affected by GenAI than others, the CREAATIF study found, with freelancers feeling they are in no position to turn down work or negotiate on contracts, even where these may require them to agree to allow their data to be used to train future generative AI models. Women were also more likely to be at the sharp end.

“We all know that damage is being done by generative AI to the viability of already precarious creative careers, and the ability to sustain a living income is becoming harder and harder,” said Anna Ganley, chief executive of the Society of Authors.
Last May, the Society of Authors surveyed its members on the topic of GenAI. Ninety-seven percent of the 1,833 people who responded said they did not consent to the use of their works to develop AI systems. Seeking the views of the model makers themselves, the Society wrote to 70 tech companies in the summer.
“We’ve had very few responses,” said Ganley. “So little engagement, shrouded in NDAs.”
Much more transparency is needed, she added.
A lack of transparency
The production of GenAI models is murky, as attested by the multitude of ongoing court cases. Artists often stumble across their own work, or mutations thereof, in training sets like LAION-5B, the dataset used to train Stable Diffusion. LAION-5B was also found to contain images of child sexual abuse. The New York Times claims that stories that should have been protected by copyright were reproduced recognisably, and often misleadingly, by ChatGPT.
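The scale of the scraping is at least inspectable: LAION publishes its metadata as parquet files of image URLs and captions, so checking whether your name appears can be as simple as the rough sketch below (the shard path and artist name are placeholders):

```python
import pandas as pd

# Sketch: search one LAION-style metadata shard for an artist's name.
# LAION's public parquet shards include URL and TEXT (caption) columns;
# "laion-shard-00000.parquet" is a placeholder file name.
df = pd.read_parquet("laion-shard-00000.parquet", columns=["URL", "TEXT"])
hits = df[df["TEXT"].str.contains("jane example", case=False, na=False)]

print(f"{len(hits)} captions mention the artist")
print(hits["URL"].head())
```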
The use of GenAI is often murky too, with companies tending to admit to deploying it only after errors have been made public, such as AI “journalists” publishing hallucinated stories.
Hollywood movie The Brutalist is currently experiencing a backlash over its use of Respeecher to change the accents of its leading actors. It’s not the deployment of such tools that’s infuriating fans, argue industry watchers, it’s the lack of transparency. People feel deceived. Trust is lost. The same applies to other AI-generated content passed off as human endeavour.
Participants in the CREAATIF study suggested AI-generated content could be labelled so consumers are made aware of it. They argued that there should be transparency around the tasks it is being used for, and which roles may have been displaced. And developers should be required to disclose the data that models have been trained on.
Mass mediocrity
It’s not as if AI output is superior to what it’s displacing; quite the opposite. Its generic, adverb-heavy prose, clunky graphics and factual inaccuracies are widely parodied. Rather, it’s a race to the bottom, said Crabapple.
“AI prose reads like a slow middle-schooler who's trying to hide the fact that he hasn't read the assigned reading. AI music is muzak, but worse and more boring. AI art, at best, looks like weird, slightly glossy Uncanny Valley monstrosities, and at worst, looks like the horrors that you see behind your eyelids before you wake up in a bad dream,” she said.
“It's about AI producing something that is just good enough to fill a space, build a Spotify playlist, just good enough that people will get used to something worse.”

Companies purchasing AI in the hope of becoming more efficient may be buying a pup, said Laurence Bouvard, a movie actress with a background in computer science, who also does translation and voice work.
“There has been so much hype about how amazing these machines are and how they're going to replace us all in a few years. And I am here to say that it is actually marketing BS.”
Bouvard spoke of a last-minute voice-over job, where an agency begged her to replace what they had.
“They had originally wanted a child's voice, but as the language was too complex for a kid, they had decided to use an AI instead. And it was a disaster, not because the words had been mispronounced, but - and I heard it - the prosody, the intonation, the humanity about it was gone. It was supposed to touch the audience, but it was weird and soulless.
“I did the job in less than an hour, because I'm a skilled creative worker. What we do is skilled, just as skilled as a surgeon or just as skilled as a plumber or just as skilled as a computer scientist.”
Translators are also seeing their work degraded. Publisher Veen Bosch & Keuning used AI to translate Dutch texts into English. Those professional translators who avoided being laid off found themselves correcting poor-quality machine-generated text rather than writing from scratch. “It can actually take just as long for the human translator to re-humanise it,” Ganley commented.
Who wants AI art?
“The big question about all of this is why,” said Bouvard. “Who asked for AI art? Who wants AI music and AI books and AI paintings and AI voice artists and AI actors? Nobody’s asking for that.
“These big, wealthy corporations are scraping up resources, using people from the Global South to identify and tag the stuff in the training sets, paying them three bucks a day, draining our rivers and endangering the environment, all for what?”
The same companies making the AI are also heavily involved in the rush to build datacentres, she noted.
“This is all about money. It's all about making money for lots of people, and nobody asked whether we as the public wanted it.”
Crabapple suspects the motive is darker still. “Generative AI represents the mass theft of creative life, the screwing of the planet, the destruction of livelihoods, and the colonisation of our lives by billionaires,” she said.
“It's to discipline workers, to show that workers are replaceable and interchangeable, to show that we can just get rid of you with the machine. So you'd better not strike, you'd better not ask for higher wages or better conditions. Its purpose is discipline.”
Who’s enforcing the laws?
Asking for forgiveness rather than permission has long been the modus operandi of Silicon Valley, but the rush to achieve AI supremacy has put a rocket under this approach, leaving regulators floundering.

“We are facing a time where even existing laws – copyright, intellectual property law, data protection law – are not being effectively enforced,” said David Leslie, director of ethics and responsible innovation research at The Alan Turing Institute and professor of Ethics, Technology and Society at Queen Mary University of London.
“There are rights that we have as creators, as artists, that are being violated because there is copyright law, there is intellectual property law, we have data protection law, and so we need to demand the strengthening of structural approaches to enforcing those laws.”
Leslie spoke of the need for more organisations to apply a Good Work Algorithmic Impact Assessment, and for creative workers to join unions or otherwise come together as a collective to ensure their voices are heard by employers thinking of using AI systems – although he acknowledged this is a challenge since many are self-employed.
Ganley urged creators to ensure they have appropriate AI safeguards in their contracts, where possible.
“We want to ensure that creative work is made by humans,” she said. Companies pay for raw materials all the time. Why should this be different?
“If tech companies need high quality data on which to train their LLMs – and we can understand why they do – they need to (a) seek permission to use those materials; and (b) pay for them.”
She urged the government to acknowledge that content used to train models should have been licensed, and to take retrospective action “to rectify the industrial copyright infringement” that has taken place to date.
This seems unlikely to happen.
The UK government is currently consulting on how copyright materials should be used to train AI models. Last week it was defeated in the Lords after it suggested exempting “text and data mining” from copyright law. The Lords introduced measures to explicitly subject AI companies to UK copyright law, regardless of where they are based.
“There is a role in our economy for AI... and there is an opportunity for growth in the combination of AI and creative industries, but this forced marriage on slave terms is not it,” said Baroness Kidron, who tabled the amendment.
An AI artist’s perspective
Those taking part in the CREAATIF debate and who spoke to Computing insisted they are not against new technology, but are opposed to the way GenAI is being used to devalue their work. While the introduction of cameras or Photoshop brought about new and different ways of working creatively, GenAI is different, they said, because the models are trained, without permission, on the works of artists and are now being used to put them out of work.

However, there are many creatives who use AI as a primary tool. Daniel Ambrosi is a visual artist with a computer graphics background who creates works using “a superscaled version of Google's DeepDream,” as he told Computing.
“It turns out that the images DeepDream outputs have artistic potential; I happened to see that potential and have learned to apply it in a way that enables my unique artistic visions.”
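For the unfamiliar: DeepDream does not conjure images from noise. It takes an existing photograph and nudges its pixels, by gradient ascent, to exaggerate whatever patterns a pretrained image classifier already detects in it. Below is a minimal sketch of that core loop, using PyTorch and a stock VGG16; Ambrosi’s “superscaled” pipeline is, by his account, far more elaborate.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# DeepDream-style loop (illustrative sketch only): gradient ASCENT on
# the input image, boosting whatever activations a pretrained
# classifier already produces. ImageNet normalisation omitted for brevity.
net = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:20].eval()
for p in net.parameters():
    p.requires_grad_(False)

img = Image.open("photo.jpg").convert("RGB")  # placeholder input file
x = transforms.ToTensor()(img).unsqueeze(0).requires_grad_(True)

for _ in range(30):
    loss = net(x).norm()   # how strongly does the network respond?
    loss.backward()
    with torch.no_grad():
        x += 0.01 * x.grad / (x.grad.abs().mean() + 1e-8)  # amplify features
        x.clamp_(0, 1)
        x.grad.zero_()

transforms.ToPILImage()(x.squeeze(0).detach()).save("dream.jpg")
```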
Strictly speaking, then, DeepDream is not really generative AI; it enhances what an image classifier already sees rather than synthesising pictures from a prompt. Ambrosi asserts that artists should have been asked to opt in to the training datasets for generative models, although he thinks it may be too late to right that wrong now.
As a partial solution he suggests that when a prompt is invoked to generate a work "in the style of" a particular artist, “the metadata for the resulting image should readily and perpetually expose the name of the artist.”
Together with an opt-in, indelible proof of origin would benefit artists through increased exposure, he said. It would then constitute genuine fair use.
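As a rough illustration of what that stamping could look like, the sketch below writes invented fields into a PNG’s text chunks. Note that ordinary metadata is trivially stripped, so truly “perpetual” attribution would more realistically require signed content credentials of the C2PA variety.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Crude illustration of Ambrosi's proposal: stamp the prompted artist's
# name into a generated image's metadata. The field names are invented,
# and plain text chunks can be stripped; an "indelible" record would
# need signed provenance rather than this.
img = Image.open("generated.png")               # placeholder model output
meta = PngInfo()
meta.add_text("ai:style_of", "Jane Example")    # artist named in the prompt
meta.add_text("ai:generator", "some-model-v1")  # placeholder model name
img.save("generated_tagged.png", pnginfo=meta)

# Anyone downstream can read the attribution back:
print(Image.open("generated_tagged.png").text)
```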
While Ambrosi believes “fine artists” are safe, the threat from GenAI to jobbing creatives is real.
“One pattern I've observed is that if a new technology offers 80% of the quality, capability, productivity, or output for 20% of the price of the incumbent technology, the incumbent tech will always lose out. That is exactly what we're seeing now in a specific segment of the art market, namely ‘art-for-hire’.”
But art-for-hire is exactly how many artists fund their passion projects. Ambrosi’s advice is to become proficient with GenAI.
“It's imperative that artists-for-hire learn to use these new tools to compete in the future,” he said.
“The good news is that their background, training and talents will quickly make them best-in-class. Sure, some clients will accept inferior work if the price is low enough, but that shouldn't stop an experienced artist-for-hire from embracing these new tools and learning how to use them with subtlety and finesse to remain competitive.”