And no, it's not copyright.



Sigal Samuel is a senior reporter for Future Perfect. She writes about the future of consciousness, tracking advances in artificial intelligence and neuroscience and their staggering ethical implications. You can read more of her work here and follow her on X.


Hey readers,

Every artist I know is furious. The illustrators, the novelists, the poets — all furious. These are people who have painstakingly poured their deepest yearnings onto the page, only to see AI companies pirate their work without consent or compensation.

 

The latest surge of anger is a response to OpenAI integrating new image-generation capabilities into ChatGPT and showing how they can be used to imitate the animation style of Studio Ghibli. That triggered an online flood of Ghiblified images, with countless users (including OpenAI CEO Sam Altman) getting the AI to remake their selfies in the style of Spirited Away or My Neighbor Totoro.

 

Couple that with the recent revelation that Meta pirated millions of published books to train its AI, and you can see how we arrived at a flashpoint in the culture war between artists and AI companies.

 

When artists try to express their outrage at companies, they say things like, “They should at least ask my permission or offer to pay me!” Sometimes they go a level deeper: “This is eroding the essence of human creativity!”

 

These are legitimate points, but they’re also easy targets for the supporters of omnivorous AI. These defenders typically make two arguments.

 

First, using online copyrighted materials to train AI is fair use — meaning, it’s legal to copy them for that purpose without artists’ permission. (OpenAI makes this claim about its AI training in general and notes that it allows users to copy a studio’s house style — Studio Ghibli being one example — but not the style of an individual living artist. Lawyers say the company is operating in a legal gray area.)

 

Second, defenders argue that even if it’s not fair use, intellectual property rights shouldn’t be allowed to stand in the way of innovation that will greatly benefit humanity.

 

The strongest argument artists can make, then, is that the unfettered advance of AI technologies that experts can neither understand nor control won’t greatly benefit humanity on balance — it’ll harm us. And for that reason, forcing artists to be complicit in the creation of those technologies is inflicting something terrible on them: moral injury. 

 

Moral injury is what happens when you feel you’ve been forced to violate your own values. Psychiatrists coined the term in the 1990s after observing Vietnam-era veterans who’d had to carry out orders — like dropping bombs and killing civilians — that completely contradicted the urgings of their conscience.

 

Moral injury can also apply to doctors who have to ration care, teachers who have to implement punitive behavior-management programs, and anyone else who’s been forced to act contrary to their principles. In recent years, a swell of research has shown that people who’ve experienced moral injury often carry a sense of shame that can lead to severe anxiety and depression.

 

Maybe you’re thinking that this psychological condition sounds a world away from AI-generated art — that having your images or words turned into fodder for AI couldn’t possibly trigger moral injury. I would argue, though, that this is exactly what’s happening for many artists who are seeing their work sucked up to enable a project they fundamentally oppose, even if they don’t yet know the term to describe it. 

 

Framing their objection in terms of moral injury would be more effective. Unlike other arguments, it challenges the AI boosters’ core narrative that everyone should support AI innovation because it’s essential to progress. 

Getty Images

Why AI art is more than just fair use or remixing 

By now, you’ve probably heard people argue that trying to rein in AI development makes you anti-progress, like the Luddites who fought power looms at the dawn of the industrial revolution, or the people who, when the camera was first invented, said photographers should be barred from capturing your likeness in public without your consent.

 

Some folks point out that as recently as the 1990s, many people saw remixing music or sharing files on Napster as progressive and actually considered it illiberal to insist on intellectual property rights. In their view, music should be a public good — so why not art and books?

 

To unpack this, let’s start with the Luddites, so often invoked in discussions about AI these days. Despite the popular narrative we’ve been fed, the Luddites were not anti-progress or even anti-technology. What they opposed was the way factory owners used the new machines: not as tools that could make it easier for skilled workers to do their jobs but as a means to fire and replace them with low-skilled, low-paid child laborers who’d produce cheap, low-quality cloth. The owners were using the tech to immiserate the working class while growing their own profit margins. 

 

That is what the Luddites opposed. And they were right to oppose it because it matters whether tech is used to make all classes of people better off or to empower an already-powerful minority at others’ expense. 

 

Narrowly tailored AI — tools built for specific purposes, such as enabling scientists to discover new drugs — stands to be a huge net benefit to humanity as a whole, and we should cheer it on.

 

But we have no compelling reason to believe the same is true of the race to build AGI — artificial general intelligence, a hypothetical system that can match or exceed human problem-solving abilities across many domains. In fact, those racing to build it, like Altman, will be the first to tell you that it might break the world’s economic system or even lead to human extinction. 

 

They cannot argue in good faith, then, that intellectual property should be swept aside because the race to AGI will be a huge net benefit to humanity. They might hope it will benefit us, but they themselves say it could easily doom us instead. 

 

But what about the argument that shoveling the whole internet into AI is fair use?

That ignores the fact that when you take something from someone else, it really matters exactly what you do with it. Under the fair use principle, the purpose and character of the use is key. Is it for commercial use? Or not-for-profit? Will it harm the original owner?

 

Think about the people who sought to limit photographers’ rights in the 1800s, arguing that a photographer couldn’t just take your photo without permission. Now, it’s true that the courts ruled that I can take a photo with you in it even if you didn’t explicitly consent. But that doesn’t mean the courts allowed any and all uses of your likeness. I cannot, for example, legally take that photo of you and non-consensually turn it into pornography.

 

Pornography — not music remixing or file sharing — is the right analogy here. Because AI art isn’t just about taking something from artists; it’s about transforming it into something many of them detest since they believe it contributes to the “enshittification” of the world, even if it won’t literally end the world. 

 

That brings us back to the idea of moral injury. 

 

Currently, as artists grasp for language in which to lodge their grievance, they are naturally using the language that is familiar to them: creativity and originality, intellectual property and copyright law.

 

But that language gestures toward something deeper. The reason we value creativity and originality in the first place is that we believe they’re an essential part of human agency. And there is a growing sense that AI is eroding that agency, whether by homogenizing our tastes, addicting us to AI companions, or tricking us into surrendering our capacity for ethical decision-making.

 

Forcing artists to be complicit in that project — a project they find morally detestable because it strikes at the core of who we are as human beings — is to inflict moral injury on them. That argument can’t be easily dismissed with claims of “fair use” or “benefiting humanity.” And it’s the argument that artists should make loud and clear.

 

—Sigal Samuel, senior reporter

 

📲 Questions? Comments? Tell us what you think! If there is a topic you want us to explain or a story you’re curious to learn more about, fill out this form or email us at futureperfect@vox.com.  

 

The controversial anti-poverty solution coming to public schools

Nicholas Guyonnet/Hans Lucas/AFP via Getty Images

The “success sequence” has many critics, but lawmakers and parents don’t seem to care. More here »

 

We’re on the verge of a universal allergy cure

Getty Images

The one allergy treatment to rule them all, explained. Read here »

 

Become a Vox Member

 

Support our journalism — become a Vox Member and you’ll get exclusive access to the newsroom with members-only perks including newsletters, bonus podcasts and videos, and more.

Join our community
 
WHAT WE’RE READING
  • Two vegan lovers, an AI “cult,” and a trail of dead bodies (The Cut)
  • The end of “college for all” (Vox)
  • Will AI improve your life? Here’s what 4,000 researchers think. (Nature)
  • These fluffy white wolves explain everything wrong with bringing back extinct animals (Vox)
  • The techno-utopians who want to colonize the sea (New York Times)
 

Are you enjoying the Future Perfect newsletter? Forward it to a friend; they can sign up for it right here.

Want more Future Perfect in your inbox? We also have four other free newsletters you can subscribe to:

  • Good News: A weekly newsletter from Bryan Walsh curating the everyday acts of progress that can help us feel better about the world around us.

  • Processing Meat: A biweekly newsletter from Kenny Torrella and Marina Bolotnikova analyzing how the meat and dairy industries shape health politics, culture, and the environment.

  • Meat/Less: A five-week course on how to eat less meat.

  • More to Meditation: A five-day meditation course providing helpful frameworks to dive deeper into meditation.

Today’s edition was edited by Marina Bolotnikova and produced by Izzie Ramirez. We'll see you Friday! 

 


 


 


 

Vox Media, 1201 Connecticut Ave. NW, Floor 12, Washington, DC 20036.
Copyright © 2025. All rights reserved.