The case for turning glowfic into Sequences

post by Thomas Kwa (thomas-kwa) · 2022-04-27T06:58:57.395Z · LW · GW · 28 comments

Contents

  Q&A
    What is glowfic and how do I read it?
    Surely glowfic doesn't actually contain useful information?
    How should I start writing?
    What not to write
    If Eliezer can't write nonfiction because of trolls and bad takes, won't turning glowfic into Sequences just make him stop writing glowfic?
    Seems plausibly good, but this is a dumb plan. Are there better plans?

Epistemic status: serious, uncertain, moderate importance. Leaving comments is encouraged!

Recently Eliezer Yudkowsky's main writing output has been rationalist [? · GW] glowfic: role-play fiction written on an Internet forum like glowfic.com.[1] I think that LessWrongers, fans of rationalist fiction, and anyone interested in raising the sanity waterline should consider distilling lessons from Yudkowsky glowfic into LW posts.

Here's the basic case:

  1. The original Sequences were extremely good at building the community and raising the sanity waterline. If you want to make the impact case, I think they plausibly get multiple percent of the entire rationality community's impact points.
  2. The Sequences are incomplete. Despite most of his knowledge coming from his home planet [LW · GW], Eliezer has in fact learned things since 2009. Having more sequences would be great!
  3. Eliezer's thoughts are still relevant. Recent posts like conversations with AI researchers [? · GW], calling attention to underrated ideas [LW · GW], and short fiction [LW · GW] have all been good.
  4. Not everyone gets useful lessons from the Sequences, because Eliezer's writing style and tone can be annoying. Eliezer was deliberately discourteous [LW · GW] towards "stupid ideas", and regrets this. Also, some people just learn better from other writing styles.
  5. Eliezer stopped writing Sequences and probably cannot write more. This is a combination of his chronic fatigue syndrome and his being tired of trolls and bad takes in comments. The only medium he can write in without being drained is glowfic. Thus, even though it's a non-serious format, glowfic is Eliezer's main intellectual output right now.
  6. Eliezer attempts to make his glowfic roughly as edifying as HPMOR, and among people who read glowfic, some find it really good [LW(p) · GW(p)] at teaching rationality.
  7. But not everyone can read glowfic and gain useful lessons.
    1. Many people (including me) read fiction for maximum enjoyment rather than to extract maximum knowledge. I had the same problem with HPMOR, reading through it like any other novel, whereas many people I know who got more from HPMOR read it carefully, perhaps stopping after every chapter to think about the goals and motivations of each character and predict what happens next.
    2. It's really long (>>100 hours of reading time just for the existing material in the planecrash sequence) and most of the rationality lessons are contained in a small proportion of the words.
    3. It's in a weird format; there's no paper book or e-book version.
    4. Many of the stories have so much gratuitous sex (and often bad kink practices, torture, etc.) that they're inappropriate for children and off-putting to some adults. (I started reading HPMOR at 14 and would not recommend that most 14-year-olds read glowfic.)

I expect that if good work is produced here, it's mostly by people who personally derived some important lesson from glowfic, and were thinking of writing it up already, whether or not it's on the idea list below. One such person could potentially be counterfactual for getting a lot more discussion of, and context for, Eliezer's current thoughts into the community, which I would see as a big win.

Q&A

What is glowfic and how do I read it?

There's a LW post explaining the format here [LW · GW], and also a community guide written by members of the glowfic community. Eliezer also announced [LW · GW] the planecrash sequence in particular and linked to a website containing just planecrash.

Surely glowfic doesn't actually contain useful information?

I'm pretty uncertain about the value of glowfic. I would update down if several people tried creating posts and none of them were good. But right now I think it's underexplored. Some evidence on the value of glowfic:

How should I start writing?

I don't necessarily recommend reading rationalist glowfic just to gain shards of Eliezer's thinking and write them up, if you don't find it fun in itself. (If you want to do this anyway, reading the first 2/3 of Mad Investor Chaos is a place to start.) But if you're already a glowfic fan, here's a list of topics from glowfic that could be turned into posts. (Thanks to Keller Scholl for some of these.) A large class of these is "dath ilani virtue": positive traits displayed by the civilization in Eliezer's utopia, or its citizens when placed in other worlds.

There are also points in glowfic where Eliezer effectively writes a blog post, either as the narrator or as a character giving a lecture; such content could be posted here with minor annotations/edits.

What not to write

If the goal is edification, I'm not particularly looking for the following artifacts (but I'd like to be proven wrong).

If Eliezer can't write nonfiction because of trolls and bad takes, won't turning glowfic into Sequences just make him stop writing glowfic?

No, I asked him.

Seems plausibly good, but this is a dumb plan. Are there better plans?

Maybe! Here are some alternate plans:

Some plans sound much less dumb but are maybe intractable:

  1. ^

    Note that not all glowfic is rationalist fiction, and not all rationalist fiction is written as glowfic.

  2. ^

    In the planecrash series, Otolmens is the god of preventing existential risk.

28 comments

Comments sorted by top scores.

comment by Dweomite · 2022-05-18T05:24:28.058Z · LW(p) · GW(p)

Rational actors don’t respond to threats

I'm currently reading planecrash, and just today read a scene that could plausibly have prompted this bullet point:  Keltham is confused about teachers punishing students, and makes an argument about how if someone threatens to break your arm unless you give them your shoes, you should fight back, even though having your arm broken is worse than losing your shoes.

But my interpretation of this scene was "Keltham has lived all his life in dath ilan, where Very Serious people have done a lot of work specifically to engineer a societal equilibrium where this would be true, and has utterly failed to grasp how the game theory changes for the circumstances in this new world (partly because culture gap, partly because lies)." I don't think it's actually true in general that it's irrational to respond to threats (though judging when it's rational is more complicated than just deciding whether a broken arm is worse than losing your shoes).
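(Toy numbers, mine rather than the story's, to make the policy-level point concrete: suppose your shoes are worth 10 to you and a broken arm costs you 50, while your shoes are worth 10 to the threatener and actually breaking your arm costs them 5. Once a threat has already been made, caving wins:

$$u(\text{cave}) = -10 \;>\; u(\text{resist}) = -50.$$

But if threateners can observe your policy, a known resister never gets threatened at all, since threatening one nets the threatener $-5$, while a known caver gets threatened, and loses 10, every time. Resisting-as-policy only pays off when threateners actually condition on your policy, which is exactly the dath ilan assumption that fails in Keltham's new world.)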

(The glowfic characters don't have cause to directly address this point, because "teachers punishing students" isn't actually about threats at all; it's reinforcement, which is a different thing, and they are arguably still doing it wrong but for totally different reasons, so Keltham's parable about shoes turns out to be irrelevant.)

I...guess I could probably turn my interpretation of the scene into a post, if that has noticeable expected value?  Which it probably does if this scene is commonly being interpreted as "Keltham correctly argues that it is never rational to cave to a threat", but I'm not actually sure if this is the scene you had in mind or if your interpretation of it is common.

Replies from: SaidAchmiz, Dweomite, michael-grosse
comment by Said Achmiz (SaidAchmiz) · 2022-09-02T18:42:50.363Z · LW(p) · GW(p)

I have also had the thought, very often while reading this story, that many of the (apparently? it’s sometimes hard to tell, though not always) intended lessons do seem to be wrong. Neither Keltham’s nor the “dath ilan” narrator’s explanations / arguments for these (apparently) intended lessons are convincing, generally (indeed they often serve to solidify my view that the lessons are actually wrong).

comment by Dweomite · 2022-08-23T19:53:54.642Z · LW(p) · GW(p)

For posterity:  I've read much further in planecrash, and it has gradually become clear that this no-giving-in-to-threats thing is a considered philosophical position (not a throwaway bit of color for dath ilan), and in fact is rather important to the overarching plot, but (as of now) it still hasn't been explained in full detail.

There's now a reserved threadspace here where the authors promise to explain this "eventually", asynchronously with the main story, but that discussion has not yet begun.

comment by Celenduin (michael-grosse) · 2022-08-20T20:41:28.092Z · LW(p) · GW(p)

This would seem to be related to "Knowing when to lose" from HPMOR.

comment by Razied · 2022-04-27T12:17:59.522Z · LW(p) · GW(p)

Well, I tried reading mad investor chaos, and even though I loved HPMOR, I couldn't make it through the first thread page of that story. It just feels extremely pedantic, though that's not exactly the right word. The density of terminology makes it all unpleasant; even though I understand what every term means, it just feels like a horribly stilted form of human communication. This might be appropriate in-universe, but it doesn't make it any less annoying to read.

comment by Slider · 2022-04-27T11:59:45.864Z · LW(p) · GW(p)

One fun thing about the stories is that they are nuanced and express positions as beliefs of the characters, and because there is such a variety, the authors can't personally be backing everything. For the same reason, it's hard to argue about what the correct takeaway is. Making everything super complicated keeps things interesting and is mentally stimulating, but doesn't provide the most clarity. I am pretty sure that "people should regard Evil as a supreme virtue" is not a correct takeaway, but there is something to the direction of "don't be Stupid Good".

Although the explicit lessons about cognition are very condensed, the context of seeing them practiced immediately before or after is, I suspect, pretty central to their value, and harder to compress.

It did occur to me that I would totally read through "virtues and their layers" and a Tolkien-style specification of Baseline.

comment by Richard_Kennaway · 2022-04-27T07:54:00.393Z · LW(p) · GW(p)

Is there any Eliezer glowfic besides "mad investor chaos and the woman of asmodeus"? That work is certainly gigantic enough, but because it's so gigantic I find myself unmotivated to read any more of it now that I've more or less got the framework of that world.

Also, is that work a collaboration between Eliezer and one or more others? While reading it, for some reason I took Eliezer to be writing Keltham's part and someone else GM-ing all the other characters, but I'm not sure I have any reason to think that.

Replies from: Vaniver, Slider
comment by Vaniver · 2022-04-27T18:15:13.164Z · LW(p) · GW(p)

Also, is that work a collaboration between Eliezer and one or more others? While reading it, for some reason I took Eliezer to be writing Keltham's part and someone else GM-ing all the other characters, but I'm not sure I have any reason to think that.

Glowfic is generally written by multiple people. When you look at a post, you'll see on the left the character picture for that post (giving some mood info), the character's name, the character's short phrase-bio, and then below that the author's username.

Most of planecrash is written by Iarwain and lintamande, but the most recent thread has five authors (as more characters have joined the research project).

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2022-09-02T18:39:07.480Z · LW(p) · GW(p)

Most of planecrash is written by Iarwain and lintamande

Clarifying for readers who don’t keep track of these sorts of things:

Iarwain is Eliezer, and lintamande is a different person who is not Eliezer.

comment by Slider · 2022-04-27T11:33:28.340Z · LW(p) · GW(p)

Each "post" on the glowfic also lists the author which I imagine is linked to the account that the post originated from. There are two main authors. They don't always strickly stick to writing particular characters. There is definetely parts where there is a clear agent-environment structure to the proceedings. "Keltham tries to open the door. Does it open? Yes, it does.". The structure does tickle my game literacy. While the end text is frozen in stone that its based on interaction counterfactuals become way more relevant (the participants would be prepared to tell the story even if there were slight twists).

Other roleplay shows also have that structure where it's mostly quiet and then exciting things happen in sporadic spikes. A once-a-week 3-4 hour episode tends to get made into a 15-minute clip compilation, with 1-2 minute tidbits of tasty character expression (the bits that everybody remembers from watching the episode).

comment by maia · 2022-04-28T19:48:17.818Z · LW(p) · GW(p)

Re: no e-book version: here's a script for downloading glowfic posts and continuities into epub format: https://github.com/rocurley/glowfic-dl

comment by GAA · 2022-04-27T23:47:59.995Z · LW(p) · GW(p)

Perhaps this is a stupid suggestion, but if trolls in the comments annoy him, could he post somewhere where no comments are allowed? You can turn off comments on WordPress, for example.

comment by Cakoluchiam · 2023-10-08T02:34:49.387Z · LW(p) · GW(p)

A free full-cast audiobook of Planecrash is currently in production at https://shows.acast.com/project-lawful-aka-planecrash, using AI-generated voices. It is quite excellent, albeit missing a few of the glowfic-specific elements such as character portraits and reaction tags (posts with no text, only character portraits). I highly recommend it to anyone.

There is a parallel analysis podcast and book club hosted by me and members of The Bayesian Conspiracy's Discord channel, formerly It Makes Sense If You Understand Decision Theory, now We Want Headbands (in homage to We've Got Worm). As we go, I'm sectioning it into shorter Books and Chapters with somewhat-descriptive titles. The podcast-aligned table of contents and links to the podcasts are available at http://www.imsiyudt.com/

comment by Henry Prowbell · 2022-05-05T08:55:38.986Z · LW(p) · GW(p)

If somebody has time to pour into this, I'd suggest recording an audio version of Mad Investor Chaos.

HPMOR reached a lot more people thanks to Eneasz Brodski's podcast recordings. That effect could be much more pronounced here if the weird glowfic format is putting people off.

I'd certainly be more likely to get through it if I could play it in the background whilst doing chores, commuting or falling asleep at night.

That's how I first listened to HPMOR, and then once I'd realised how good it was I went back and reread it slowly, taking notes, making an effort to internalize the lessons.

Replies from: EniScien
comment by EniScien · 2022-05-12T18:30:58.645Z · LW(p) · GW(p)

Hmm, funny: I usually listen to audiobooks, but that was not the case with HPMOR. I realized "how good it is" literally from the first chapter, which is extremely rare with books.

comment by Yoav Ravid · 2022-04-28T03:38:15.512Z · LW(p) · GW(p)

I would be glad if stories from there were straight-up crossposted here (and perhaps formatted/edited a bit), because several times already, after seeing a recommendation, I went to the site to read something and just couldn't navigate it or figure out what I was supposed to read.

comment by Thomas Kwa (thomas-kwa) · 2023-02-17T23:35:54.787Z · LW(p) · GW(p)

I'm offering a $300 bounty to anyone who gets 100 karma doing this this year (without any vote manipulation).

Manifold market for this:

Replies from: thomas-kwa
comment by Thomas Kwa (thomas-kwa) · 2023-06-02T20:20:54.975Z · LW(p) · GW(p)

The bounty remains open, but I'm no longer excited about this, for three reasons:

  • lack of evidence that glowfic is an important positive influence on rationality,
  • Eliezer speaking in the public sphere (some would argue too much), and
  • the generally increasing quality and decreasing weirdness of alignment research.
Replies from: Maxc
comment by Max H (Maxc) · 2023-06-02T20:31:14.110Z · LW(p) · GW(p)

I wasn't aware of the bounty until seeing this comment, but I am a big fan of planecrash, both as a work of fiction and as pedagogy. 

I wrote one post [LW · GW] that built on the corrigibility tag in planecrash, and another [LW · GW] on understanding decision theory, which isn't directly based on anything in planecrash, but is kind of loosely inspired by some things I learned from reading it.

(Neither of these posts appear to meet the requirements for the bounty, and they didn't get much engagement in any case. Just pointing them out in case you or anyone else is looking for some planecrash-inspired rationality / AI content.)

comment by NicholasKross · 2022-09-29T18:14:40.803Z · LW(p) · GW(p)

Planecrash is really cool, but also I am allergic to reading fantasy proper nouns, let alone remembering what they refer to and the relationships between them.

Some fantasy is easier for me to absorb because it's either highly visual (in non-HPMOR HP, they mostly shoot colorful firebolts at each other) and/or based on existing intuitive concepts (in ATLA, it's easy to learn what "waterbending" is, and suddenly you can quickly figure out "metalbending").

Tempted to make an Anki deck and/or cheatsheet for the things in Planecrash that I'd want to have on hand (e.g. the names of different Gods), but I'm open and eager for easier/better solutions. Is there a character sheet somewhere?

EDIT: two ideas I had; not sure if plugins for these exist already:

  1. browser extension that replaces words with some short custom definition and highlighting, so I can replace [godname] with [god of mad experimentation]. (Rough sketch after this list.)
  2. browser extension that lets you hover over words to get a custom, user-set definition. I think this might do that?
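Here's a minimal sketch of idea 1 as a WebExtension-style content script. Everything in it is hypothetical except the Otolmens gloss, which comes from this post's footnote; a real version would want a fuller glossary and proper highlighting.

```typescript
// content-script.ts: a minimal sketch, not a real extension.
// Walks every text node on the page and appends a bracketed gloss
// after each known proper noun. (Actual highlighting would require
// wrapping matches in a styled <span> instead of editing text in place.)

const GLOSSARY: Record<string, string> = {
  // Hypothetical entries; the Otolmens gloss is from the post's footnote.
  Otolmens: "god of preventing existential risk",
};

const NAME_PATTERN = new RegExp(
  `\\b(${Object.keys(GLOSSARY).join("|")})\\b`,
  "g",
);

function annotate(node: Node): void {
  if (node.nodeType === Node.TEXT_NODE && node.textContent) {
    node.textContent = node.textContent.replace(
      NAME_PATTERN,
      (name) => `${name} [${GLOSSARY[name]}]`,
    );
  } else {
    // Recurse into element children; text-only edits don't disturb the walk.
    node.childNodes.forEach(annotate);
  }
}

annotate(document.body);
```

Registered as a content script in an extension manifest (or just pasted into the browser console), this would turn "Otolmens" into "Otolmens [god of preventing existential risk]" everywhere on the page.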
comment by EniScien · 2022-05-12T18:36:43.277Z · LW(p) · GW(p)

"Create glowfic characters for top AI researchers, and have Eliezer critique their ideas by role-playing with them (mostly a joke)" It looks interesting

comment by Tofly · 2022-06-06T20:26:32.090Z · LW(p) · GW(p)

cure Eliezer's chronic fatigue so he can actually attempt to ~~grant humanity a couple more bits of information-theoretic dignity~~ save the world

Possibly relevant: I know someone who had chronic fatigue syndrome which largely disappeared after she had her first child. I could possibly put her in contact with Eliezer or someone working on the problem.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2022-09-02T18:43:45.179Z · LW(p) · GW(p)

Wouldn’t this solution be, ahem, biologically infeasible for Eliezer to implement?

comment by AprilSR · 2022-05-04T18:03:58.164Z · LW(p) · GW(p)

Was the "glowfic excerpts" link supposed to be Self Integrity and the Drowning Child [LW · GW]?

Replies from: thomas-kwa
comment by Thomas Kwa (thomas-kwa) · 2022-05-04T18:28:35.143Z · LW(p) · GW(p)

Yes, fixed

comment by Casey B. (Zahima) · 2022-04-27T18:37:50.297Z · LW(p) · GW(p)

With so much apparently available energy/effort for Eliezer-centered improvement initiatives (like the $100,000 bounty mentioned in this post), I'd like to propose that we seriously consider cloning Eliezer.

From a layman/outsider perspective, it seems the hardest thing would be keeping it a secret so as to avoid controversy and legal trouble, since from a technical perspective it seems possible and relatively cheap. EA folks seem well connected and capable of such coordination, even under the burden of secrecy and keeping as few people "in the know" as possible. 

Partially related (in the category of comparatively off-the-wall, but nonviolent, AI alignment strategies): at some point there was a suggestion that MIRI pay $10 million (or some such figure) to Terence Tao (or some such prodigy) to help with alignment work. Eliezer replied thus [LW(p) · GW(p)]:

We'd absolutely pay him if he showed up and said he wanted to work on the problem.  Every time I've asked about trying anything like this, all the advisors claim that you cannot pay people at the Terry Tao level to work on problems that don't interest them.  We have already extensively verified that it doesn't particularly work for eg university professors.

I'd love to see more visibility into proposed strategies like these (i.e., strategies surrounding/above the object-level strategy of "everyone who can do alignment research puts their head down and works", and the related "everyone else makes money in their comparative specialization/advantage and donates to MIRI/FHI/etc."). Even visibility into why various strategies were shot down would be useful, and a potential catalyst for farming further ideas from the community (even if, for game-theoretic reasons, one may never be able to confirm that an idea has been tried, as in my cloning suggestion).

Replies from: tomcatfish
comment by Alex Vermillion (tomcatfish) · 2022-08-12T19:30:38.474Z · LW(p) · GW(p)

Meta level: why on earth would you say "Here is my secret idea, internet"? That doesn't make any sense to me.

Replies from: lc
comment by lc · 2022-09-15T14:02:51.854Z · LW(p) · GW(p)

Many such cases.