Using an LLM for creative writing feels wrong to me

post by Declan Molony (declan-molony) · 2025-01-28T06:42:24.799Z · LW · GW · 13 comments

Last year I remember seeing a Japanese novelist win an award with the help of an LLM.[1]

Art forms frequently evolve, but there are several concerns with the use of LLMs for creative writing. For one, there's the issue of copyright infringement that famous writers are taking up arms against:

George R.R. Martin, Jodi Picoult, John Grisham and Jonathan Franzen are among the 17 prominent authors who joined the suit led by the Authors Guild, a professional organization that protects writers’ rights. Filed in the Southern District of New York, the suit alleges that OpenAI’s models directly harm writers’ abilities to make a living wage, as the technology generates texts that writers could be paid to pen, as well as uses copyrighted material to create copycat work.

“Generative AI threatens to decimate the author profession,” the Authors Guild wrote in a press release.

Even if an LLM were trained only on legally acquired, non-copyrighted text, there's still the issue of talent.

If someone told me they produced a 300-page novel by hand—I would be impressed. 

If someone else told me they produced a 300-page novel with the aid of an LLM (that wrote the vast majority of the book) and the human merely organized the AI-generated content—I would consider the real author to be the LLM, and the human merely an editor (or, at best, an artist creating a collage).

The more someone relies on LLMs to write a novel, the less impressive their accomplishment is.


The debate about the overuse of technology in writing began long before LLMs took the stage. In the 1980s, writers argued about whether the word processor was a legitimate writing tool:

Plenty of writers balked at the joys of word processing, for a host of reasons. Overwriting, in their view, became too easy; the labor of revision became undervalued. When Gore Vidal wrote in the mid-1980s that the “word processor is erasing literature,” he expressed an uneasiness about technology’s proximity to creative writing.

I respect the famous authors who have gone to great lengths to produce their books. With the advent of LLMs, people will be deprived of that thinking process. They'll miss the opportunity to struggle before a blank piece of paper while nervously chewing on a Ticonderoga #2 pencil, and they'll miss the opportunity to sit behind a keyboard and mutter swear words at a blinking cursor that seems to taunt them, asking what’s next?

With the ubiquity of LLMs, we'll never actually know if an author is using AI to think for them.[2] Practically anyone can be an author now. And when everyone's an author, no one will be.[3]

I asked a friend if he thought it was silly that I only use my biological brain to write. He said, "You're still allowed to do math without a calculator. But why would you?" Paul Graham has a good rebuttal to that:

In a couple decades there won't be many people who can write.

AI has blown this world open. Almost all pressure to write has dissipated. You can have AI do it for you, both in school and at work.

The result will be a world divided into writes and write-nots. There will still be some people who can write. But the middle ground between those who are good at writing and those who can't write at all will disappear. Instead of good writers, ok writers, and people who can't write, there will just be good writers and people who can't write.

Is that so bad? Isn't it common for skills to disappear when technology makes them obsolete? There aren't many blacksmiths left, and it doesn't seem to be a problem.

Yes, it's bad. The reason is: writing is thinking. In fact there's a kind of thinking that can only be done by writing. You can't make this point better than Leslie Lamport did:

If you're thinking without writing, you only think you're thinking.

So a world divided into writes and write-nots is more dangerous than it sounds. It will be a world of thinks and think-nots. I know which half I want to be in, and I bet you do too.

This situation is not unprecedented. In preindustrial times most people's jobs made them strong. Now if you want to be strong, you work out. So there are still strong people, but only those who choose to be.

It will be the same with writing. There will still be smart people, but only those who choose to be.

I wonder if we'll look back on the people (like me) who solely use their biological brains to produce writing and view them as Luddites compared to everyone else using LLMs. Am I basically a grumpy old scribe complaining about the newfangled Gutenberg press? Or will my steadfast refusal to let go of a fading art form be seen as the death throes of a dying breed, while everyone else is more than happy to slide into the warm comfort of brain rot?

  1. ^

    She didn't simply tell ChatGPT "write me an entire story about XYZ"; instead, she used the LLM to generate text for an AI assistant within the story that the characters interact with.

  2. ^

    People are constantly searching for competitive advantages. Unlike in the world of sports, there are few regulations in the writing world. Authors can claim their work is strictly biologically produced for status purposes, but how can we possibly verify that? LLMs will become the performance-enhancing drugs (PEDs) of writers.

  3. ^

    I'd recommend watching this short clip.

13 comments

Comments sorted by top scores.

comment by Scrith · 2025-01-28T17:16:26.532Z · LW(p) · GW(p)

There definitely seems to be a continuum. I’m legitimately confused about using an LLM to generate actual text, since that seems like the easy part. I am using one to help write a novel, but as a research assistant. For example, I struggled for years using tools like Vulgar to create new languages. But it turns out LLMs are great at creating them. A list of things I routinely use the LLM for:

  1. Finding the perfect word. Oftentimes I am trying to describe something and I can’t even come up with a word to look up in the thesaurus. So: “How would you describe someone’s face when they are expressing skepticism? A noun.”
  2. Developing languages, as mentioned above. In one example, I am trying to make sure that when I invent a fantasy word for a concept, I am not using any words with French roots. (I am trying to keep the root languages consistent for the fantasy terms. Languages have a certain “sound” that can inform the creation of new languages and anchor readers.)
  3. Researching things like meteorology, history, etc. For example, what technologies were contemporaneous, and what other technologies do they depend on? “How long would it take a fast sailing ship to travel from Ireland to New York in the 1600s at different times of year? What variables affect that?”
  4. Generating names with a particular cultural or linguistic feel. “Give me an Old English name for a little boy” for a single-use character, or “Give me ten names that imply smarminess with a Gothic feel” if I’m trying to come up with a name for a more major character.
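
Each of these fits the same one-shot pattern: a self-contained question, with none of the manuscript attached. As a rough illustration, here's a minimal sketch of that pattern in Python, assuming the OpenAI client library (the model name, prompts, and helper function are illustrative, not my actual setup):

```python
# Hypothetical sketch of the one-shot "research assistant" pattern above.
# Assumes the openai package (>= 1.0) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def ask_research_question(question: str) -> str:
    """Send one self-contained question; no manuscript text is ever attached."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system",
             "content": "You are a research assistant for a novelist. "
                        "Answer concisely and flag anything you are unsure of."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# Example: the word-finding query from item 1.
print(ask_research_question(
    "How would you describe someone's face when they are "
    "expressing skepticism? A noun."
))
```

Because each call stands alone, nothing of the book itself enters the conversation.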

I have a soft rule that I never upload the actual text of my book for feedback; I keep the book's text out of the LLM's memory entirely.

I’m not sure where that fits in your model. 

Replies from: declan-molony
comment by Declan Molony (declan-molony) · 2025-01-28T17:37:33.843Z · LW(p) · GW(p)

I’m legitimately confused about using an LLM to generate actual text

LLMs are still in their nascent form, with limited capabilities. As they continue to develop, they'll likely become more adept at creating long, cohesive narratives.

 

I have a soft rule that I never upload the actual text of my book for feedback; I keep the book's text out of the LLM's memory entirely.

I’m not sure where that fits in your model. 

It's interesting that you have this soft rule. Why? Are you worried that it'll steal your ideas? Or possibly concerned that it'll strip you of the feeling of authorship?

Replies from: Scrith
comment by Scrith · 2025-01-28T17:40:12.702Z · LW(p) · GW(p)

Definitely the latter. I would feel stripped of authorship. This isn't an ethical position; it's purely emotional/subjective.

comment by Bezzi · 2025-01-28T09:34:49.353Z · LW(p) · GW(p)

If someone told me they produced a 300-page novel by hand—I would be impressed. 

If someone else told me they produced a 300-page novel with the aid of an LLM (that wrote the vast majority of the book) and the human merely organized the AI-generated content—I would consider the real author to be the LLM, and the human merely an editor (or, at best, an artist creating a collage).

 

Imagine a photographer taking pictures with a fancy digital camera. Should we consider the camera the real author, and the person holding it some clever impostor?

I'm not trolling. This was a serious question when photography was invented. For decades, art critics refused to consider photography True Art. If we can plausibly claim that a professional photographer can sometimes be an Artist, I think we should also accept that a novelist writing with AI assistance could be considered one. Note that typing "write me a 300-page novel" into the prompt won't get you good results, even with the most powerful models. The human still has to do heavy editing work... and as long as not everyone can do that, the concept of "AI Artist" could be in some sense meaningful.

Replies from: Tapatakt
comment by Tapatakt · 2025-01-28T13:42:04.774Z · LW(p) · GW(p)

I think the right answer for photography is "it's art, but not the same art form as painting". And it has different quality and interestingness metrics. In the fifteenth century it was considered very cool to produce a photorealistic image. Some people think it's still cool, but only if it's not a photo.

And it's the same for AI art. Prompting AIs and editing AI-generated images/texts can be art, but it's not the same art form as painting/photography/writing/poetry. And it should have different metrics too. The problem is that while you can't imitate painting with photography (unless it's hyperrealism), you can imitate other art forms with AI. And this is kinda cheating.

comment by Hastings (hastings-greer) · 2025-01-28T15:03:54.920Z · LW(p) · GW(p)

Humans learn and grow so fast that no matter how bad a writer you start as, you are nearly incapable of producing 300 pages of a single story without simultaneously levelling up into an interesting writer. This lets readers give 300-page manuscripts by randos the benefit of the doubt (see fanfiction.net, AO3, etc.). An LLM will not be changed at all by producing a 300-page story; an LLM/human team will be changed very little.

Replies from: declan-molony
comment by Declan Molony (declan-molony) · 2025-01-28T17:24:49.355Z · LW(p) · GW(p)

After writing his first 100-page short story, my brother realized that he'd become a better writer over the course of creating it. The beginning chapters therefore needed more rewriting than the ending chapters.

He just finished writing his first novel this week (and is getting ready to pitch it to publishers). Because of his prior writing experience, this story needed less editing overall, as he had already developed his writing style.

comment by quetzal_rainbow · 2025-01-28T11:38:42.387Z · LW(p) · GW(p)

What if I have a wonderful plot in my head and I use an LLM to pour it into acceptable stylistic form?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2025-01-28T14:42:32.612Z · LW(p) · GW(p)

What if I have a wonderful plot in my head and I use an LLM to pour it into acceptable stylistic form?

What if you have a wonderful plot in your head and you ask a writer to ghost-write it for you? And you'll be so generous as to split the profits 50-50? No writer will accept such an offer, and I've heard that established writers receive such requests all the time.

"Wonderful plots" are ten a penny. Wonderful writing is what makes the book worth reading, and LLMs are not there yet.

Replies from: Seth Herd, abandon
comment by Seth Herd · 2025-01-28T16:15:27.732Z · LW(p) · GW(p)

This is the way most people feel about writing. I do not think wonderful plots are ten a penny; from the perspective of someone who values sci-fi and realism, writers are miserable at creating actually good plots. Their technology and their sociology are usually off in obvious ways, because understanding those things is hard.

I would personally love to see more people who do understand science use AI to turn their ideas into stories.

Or, alternatively, I'd like to see skilled authors consult AI about the science in their stories.

This attitude, that plots don't matter and writing is everything, is why we get lazily constructed plots and worlds.

This turns literature into mostly a sort of hallucinatory slop instead of a way to really understand the world while you're being entertained.

Most writers do seem to understand psychology, so that's a plus. And some of them understand current technology and society, but that's the exception.

comment by dirk (abandon) · 2025-01-28T20:19:28.044Z · LW(p) · GW(p)

Plots that are profitable to write abound, but plots that any specific person likes may well be quite thin on the ground.

I think the key here is that authors don't feel the same attachment to submitted plot ideas as submitters do (or the same level of confidence in their profitability), and thus would view writing them as a service done for the submitter. Writing is hard work, and most people want to be compensated if they're going to do a lot of work to someone else's specifications. In scenarios where they're paid for their services, writers often do write others' plots; consider e.g. video game novelizations, franchises like Nancy Drew or Animorphs, and celebrity memoirs. (There are also non-monetized contexts like e.g. fanfiction exchanges, in which participants write a story to someone else's request and in turn are gifted a story tailored to their own.)

I wouldn't describe LLMs' abilities as wonderful, but IME they do quite serviceable pastiche of popular styles I like; if your idea is e.g. a hard-boiled detective story, MilSF, etc., I would expect an LLM to be perfectly capable of rendering it into tolerable form.

comment by weightt an (weightt-an) · 2025-01-28T09:27:36.718Z · LW(p) · GW(p)

I think the thing with talent is that it's a useful and straightforward signal of quality you can obtain without investing a whole lot of resources into evaluation/reading/research. 

Same with awards, recommendations from famous people, popularity scores and so on. 

And it's probably reasonable to feel a bit sad when some source of this signal gets invalidated. 

Just don't go too far with it? Like, if someone wrote a book while holding a pen with their toes while doing a headstand, it's not a good signal that the book will be of any interest to you.

Replies from: declan-molony
comment by Declan Molony (declan-molony) · 2025-01-28T17:28:01.324Z · LW(p) · GW(p)

if someone wrote a book while holding a pen with their toes while doing a headstand, it's not a good signal that the book will be of any interest to you.

Agreed, though I would definitely want to meet this insane person.