Suing OpenAI Won’t Save the Arts
post by E.G. Blee-Goldman · 2025-04-04T13:42:22.687Z · LW · GW
The outrage over the Ghibliification of everything proves that people care much more about good art than you would expect. After all, no one was deeply upset when AI-generated art had extra fingers or looked outrageously terrible. But as soon as it crossed the believability benchmark[1] people started to become incredibly concerned about the future of the arts.
This fury, that anyone can create an almost believable Ghibli-like image, stems partly from an affront to the mental space these movies occupy in viewers' minds. They see bastardized versions of the style as an aesthetic mind virus: these images are not simple rehashes of a beautiful animation style but come to rob the viewer of a simpler time from childhood, of a fond memory, or of a cornerstone show the viewer returns to in times of distress.
If you are a content consumer and believe that the canonical form of a piece of art holds the most value, every rehash comes to feel like toast with butter that’s been spread too thin.
However, people also like Ghibli because most people intuitively recognize good art.
Similarly, many people are skeptical of the high arts today because they feel that a trick has been played on them. It is a trick that many contemporary or modern artists have pulled to survive, one in which they have niched down into ever more conceptual ideas. In some cases, these artworks end up looking like cynical reflections of a chain of transactions and concepts fully divested from anything related to the human experience other than the cost and perceived social clout they confer on the final buyer. One doubts whether the vision the artist initially had would truly be expressed this way if they could find another way to survive as an artist.
There has also been an outcry from artists themselves about the fair use issues of data that foundation models have trained on. This constituency believes that AI companies (frequently OpenAI) have committed some vast theft of intellectual property.
Variously, they believe that “Generative AI is the pillaging of existing art, the rape of the creative spirit to drive the cost of imagination down.”[2] Or that “…anybody creating content for a living should see these proposed policies as a potential career death sentence”[3]. And in part these arguments are made because some people feel that AI will cause a “spiritual replacement to be almost as existential as the physical one.”[4]
Somewhat amusingly, if you push back on litigation as the most successful solution to this problem, you immediately find critics who resort to ad hominem (ad automata?) hand waving such as “This was 100% written by AI”[5] and then others who proceed to heavily use AI to craft their responses.
Naturally, I have no problem with using AI for all facets of work, but find it deeply ironic that these commentators will argue against a concept as innocuous as suggesting that we should empower underserved artists through patronage, and do so using the very tools they believe are created from theft.
These commentators protest without reading the idea they critique, without thinking for themselves, and with the cognitive uplift of the very AIs they disclaim. (Although I was eventually able to convince one of them that I was not a bot after posting a proof of existence: a selfie with a current newspaper and a handwritten note.[6])
The case for litigation finds its origin in Folsom v. Marsh, which is regarded as a foundational legal case on fair use. Somewhat poetically, the judge wrote that “Patents and copyrights approach, nearer than any other class of cases belonging to forensic discussions, to what may be called the metaphysics of the law, where the distinctions are, or at least may be, very subtle and refined, and, sometimes, almost evanescent.”
To this end, let’s do a brief thought experiment: imagine you’ve just waved a wand and determined that every foundation model company must immediately pay every creator who may have created data that was used in training these models.
You have decided to be a bit magnanimous in this decision and not immediately force the AI companies to shut down. You also decide that individually litigating every potential copyright case would cripple the nation's judicial process, so you impose a “super GDPR” penalty scheme and charge 8% of total revenue. OpenAI (say around $5 billion in annual revenue) and Alphabet ($350 billion in 2024 annual revenue) immediately drop image-generating capabilities, but you have cleverly thought through this eventuality and still charge them for the prior year.
Now you know that this won’t counter the open-source image generation models that will continue to proliferate around the world regardless of this ruling (although perhaps future litigation will help you outlaw and criminalize the use of certain software in the United States), but for now you’ve had a good win against two of the largest companies doing image generation.
Alphabet and OpenAI have $355 billion in combined revenue, so an 8% penalty yields $28.4 billion. That is a sizable amount to distribute to creators. One small issue is that much of the training data includes non-living artists and artists who themselves infringed on others, but you decide to pursue a sort of benevolent jingoistic approach and compensate all U.S. creators (and the U.S. copyright holders of those works). It's hard to figure out how many people fall into this category, but it's probably on the order of 2 to 5 million. To be fair, you decide to give each artist a one-time pro-rata payment of $5,680 to $14,200 (depending on how you count the people who should be included).
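The payout arithmetic above is easy to sanity-check in a few lines; the revenue figures and creator counts are the thought experiment's assumptions, not real policy data:

```python
# Back-of-envelope check of the hypothetical "super GDPR" penalty scheme.
# Revenue figures and creator counts come from the thought experiment above.
openai_revenue = 5e9        # ~$5B annual revenue (assumed)
alphabet_revenue = 350e9    # $350B, 2024
penalty = 0.08 * (openai_revenue + alphabet_revenue)   # 8% -> $28.4B

# Rough bounds on the number of eligible U.S. creators.
for creators in (2_000_000, 5_000_000):
    print(f"{creators:,} creators -> ${penalty / creators:,.0f} each")
    # -> $14,200 each at 2M creators, $5,680 each at 5M creators
```

The spread in the per-artist figure comes entirely from the uncertainty in who counts as a "creator," which is exactly the ambiguity the next paragraphs run into.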
But there is public outcry about this plan: not every artist created Spirited Away! Not every actor was in 500 Days of Summer! And so, you relent and concede that indeed the creative impact of all artists is not equal, but rather heavily concentrated in a few of the most popular ones. It's difficult to figure this ratio out exactly, so you conduct a study reviewing all content that has been produced and how frequently it was viewed. And what do you find? The major artists and major movie stars were viewed most frequently.
Therefore, you adjust your payment scheme, and now instead of receiving $14,200, Mary in Fargo, who created some very nice Corporate Memphis[7] work in 2021, turns out to have contributed maybe a few cents of value under this penalty scheme. Big checks go out to the most famous artists, most creators receive virtually nothing, there is a renaissance of art in the United States, and everyone is happy. Dixit, et facta sunt!
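The view-weighted variant works the same way; here is a toy sketch in which the total view count, Mary's views, and the blockbuster's audience are all purely illustrative assumptions:

```python
# Toy version of the view-weighted payout; every number here is an
# illustrative assumption, not data from any study.
penalty = 28.4e9
total_views = 1e12                # assumed total views of all compensable works
per_view = penalty / total_views  # roughly 2.8 cents per view

mary_views = 5                    # a little-seen Corporate Memphis portfolio
blockbuster_views = 400_000_000   # a hit film's audience
print(f"Mary's payout:        ${mary_views * per_view:.2f}")
print(f"Blockbuster's payout: ${blockbuster_views * per_view:,.0f}")
```

Under any plausible choice of denominator, almost all of the penalty pool flows to the handful of works at the head of the distribution, which is the point of the paragraph above.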
But of course, that is not what would happen.
This is because there is an uncomfortable truth that we are all facing with how AI will impact our lives: what will it mean to live in a world where anyone can do anything better with AI than the median expert in their field could?
That is, a world where the graphics created by AI are better than the median graphic designer, where the software created by the AI is better than the median programmer, where the finance spreadsheets are better than the median analyst?
An even more sensitive point is embedded in this quest for what to do with our creative desires given that most of us, even if we dedicated our lives to the arts, would not be that good. To some, the world is in decline already. The dieworkwear guy recently delivered a takedown of AI in the arts, declaring that “…the difference is about the story. The VP at a suit factory once put it to me like this: if money were no object, would you rather have the handmade Mona Lisa? Or a digital, printed out reproduction? Assume you couldn't visually tell the difference.”[8]
However, if you really think about it, it's difficult to name an artist in the past 100 years who can match the conceptual and visual execution of the Mona Lisa. Maybe Picasso? The “story” isn't what makes it good; it is the actual artwork. AI isn't robbing you of your creativity or ability to create. It certainly isn't killing the ability to paint like da Vinci (which has been rare, or non-existent, for hundreds of years). dieworkwear concludes we will lose more skilled artists, but the opposite future is much more likely: craftsmanship will have a resurgence.
It is dangerous to believe that suing all the major model companies will fix the arts, because this will lead to a stifling reinforcement of the status quo. The key question we really must answer is how we want to support each other in the future. It is not a question that can be answered in litigation, because it is a sort of world-eater Shiva / world-creator Brahma[9] topic that we all face, in virtually all professions. One simple solution is to support our friends in their endeavors, regardless of how they are created, and to try to be a bit kinder.
To be abundantly clear: I am not saying AI companies shouldn’t face fair use compliance and penalties if that is what the courts decide. Again, I am not a lawyer or a wand-waver.
I am certain, however, that you can’t litigate the arts into the hearts of mankind, and you can’t force people to create high-quality unique ideas.
We should think deeply about how we want the world to look regardless of how these fair-use court cases turn out.
- ^
Gemini 2.5 Pro came up with this, and it sounds roughly right for an uncanny-valley-like concept.
- ^
- ^
- ^
- ^
- ^
- ^
This is so good by the way https://jonahprimiano.substack.com/p/fan-art
- ^
- ^
Shiva is responsible for the dissolution of the universe, Brahma for its creation.