The case for turning glowfic into Sequences
post by Thomas Kwa (thomas-kwa) · 2022-04-27T06:58:57.395Z · LW · GW · 28 comments
Epistemic status: serious, uncertain, moderate importance. Leaving comments is encouraged!
Recently Eliezer Yudkowsky's main writing output has been rationalist [? · GW] glowfic: role-play fiction written on an Internet forum like glowfic.com.[1] I think that LessWrongers, fans of rationalist fiction, and anyone interested in raising the sanity waterline should consider distilling lessons from Yudkowsky glowfic into LW posts.
Here's the basic case:
- The original Sequences were extremely good at building the community and raising the sanity waterline. If you want to make the impact case, I think they plausibly get multiple percent of the entire rationality community's impact points.
- The Sequences are incomplete. Despite most of his knowledge coming from his home planet [LW · GW], Eliezer has in fact learned things since 2009. Having more sequences would be great!
- Eliezer's thoughts are still relevant. Recent posts like conversations with AI researchers [? · GW], calling attention to underrated ideas [LW · GW], and short fiction [LW · GW] have all been good.
- Not everyone gets useful lessons from the Sequences, because Eliezer's writing style and tone can be annoying. Eliezer was deliberately discourteous [LW · GW] towards "stupid ideas", and regrets this. Also, some people just learn better from other writing styles.
- Eliezer stopped writing Sequences and probably cannot write more, due to a combination of his chronic fatigue syndrome and weariness of trolls and bad takes in comments. The only medium he can write in without being drained is glowfic. Thus, even though it's a non-serious format, glowfic is Eliezer's main intellectual output right now.
- Eliezer attempts to make his glowfic roughly as edifying as HPMOR, and among people who read glowfic, some find it really good [LW(p) · GW(p)] at teaching rationality.
- But not everyone can read glowfic and gain useful lessons.
- Many people (including me) read fiction for maximum enjoyment rather than to extract maximum knowledge. I had the same problem with HPMOR, reading through it like any other novel, whereas many people I know who got more from HPMOR read it carefully, perhaps stopping after every chapter to think about the goals and motivations of each character and predict what happens next.
- It's really long (>>100 hours of reading time just for the existing material in the planecrash sequence) and most of the rationality lessons are contained in a small proportion of the words.
- It's in a weird format; there's no paper book or e-book version.
- Many of the stories have so much gratuitous sex (and often bad kink practices, torture, etc.) that they're inappropriate for children and offputting to some adults. (I started reading HPMOR at 14 and would not recommend most 14yo read glowfic.)
I expect that if good work is produced here, it's mostly by people who personally derived some important lesson from glowfic, and were thinking of writing it up already, whether or not it's on the idea list below. One such person could potentially be counterfactual for getting a lot more discussion of, and context for, Eliezer's current thoughts into the community, which I would see as a big win.
Q&A
What is glowfic and how do I read it?
There's a LW post explaining the format here [LW · GW], and also a community guide written by members of the glowfic community. Eliezer also announced [LW · GW] the planecrash sequence in particular and linked to a website containing just planecrash.
Surely glowfic doesn't actually contain useful information?
I'm pretty uncertain about the value of glowfic. I would update down if several people tried creating posts and none of them were good. But right now I think it's underexplored. Some evidence on the value of glowfic:
- + HPMOR spawned discussion of core rationalist virtues, like heroic responsibility [? · GW]
- ? HPMOR didn't have good sequences extracted from it (though maybe that's because most of the rationality material was already in the original Sequences)
- ? reaction to this idea from glowfic fans I know has been mixed: some are pretty enthusiastic, while some think glowfic doesn't contain much practical rationality content
- ? people have not written about many glowfic-derived insights yet (though maybe this is for no reason, which would make this project neglected)
- - This post [LW · GW] was less well received than I expected (though maybe that's due to concern about generalizing from fictional evidence, which wouldn't be a problem with all glowfic-derived sequences)
How should I start writing?
I don't necessarily recommend reading rationalist glowfic just to gain shards of Eliezer's thinking and write them up, if you don't find it fun in itself. (If you want to do this anyway, reading the first 2/3 of Mad Investor Chaos is a place to start.) But if you're already a glowfic fan, here's a list of topics from glowfic that could be turned into posts. (Thanks to Keller Scholl for some of these.) A large class of these is "dath ilani virtue": positive traits displayed by the civilization in Eliezer's utopia, or its citizens when placed in other worlds.
- An introduction to rationalist glowfic: where glowfic lives, how to read it.
- "Lawfulness" and its facets: Bayes, expected utility, the ability to coordinate and trade, etc.
- How Keltham analyzes everything to try to understand it as an equilibrium between rational actors, whether this works in real life, and how to do it
- The strengths and weaknesses of glowfic as an edification tool
- "What would Otolmens say?"[2]
- What civilizational competence looks like
- A list of dath ilani virtues.
- Decision theory. Some possible topics:
- Someone who helps you should be rewarded, even if you were not in contact with them at the time
- Rational actors don’t respond to threats
- Applied rationality. Some possible topics:
- Forming hypotheses is costly, because they distort future thinking in favor of themselves, and should be avoided as long as possible
- Evidence accumulates: so long as you track hypotheses and evidence-shifts accurately, you will converge on the truth, and reality is full of information
- How to "introspectively experience belief updates" [LW(p) · GW(p)]
There are also points in glowfic where Eliezer gives a blog post as the narrator, or gives a blog post as a character giving a lecture; such content could be posted here with minor annotations/edits.
What not to write
If the goal is edification, I'm not particularly looking for the following artifacts (but I'd like to be proven wrong).
- Plot summaries: I can't see anything in the plot of the glowfic I've read so far that's more useful than the plot of any other fiction. (I also don't expect these to be very fun to read.)
- Book reviews: The reviews I've seen so far are amusing but don't really teach anything. Someone like Scott Alexander could write a book review that does teach things, but it doesn't seem substantially easier than writing other glowfic-related content. (edit: since writing this I'm more excited about book reviews than I was, although they do have to be done well)
- Broad high-context discussions: HPMOR discussions [? · GW] were successful, but aren't what I'm looking for; ideally we make glowfic content accessible for people who don't want to read glowfic.
If Eliezer can't write nonfiction because of trolls and bad takes, won't turning glowfic into Sequences just make him stop writing glowfic?
No, I asked him.
Seems plausibly good, but this is a dumb plan. Are there better plans?
Maybe! Here are some alternate plans:
- get Eliezer to write enlightening short fiction [LW · GW] rather than glowfic
- get Eliezer to write glowfic excerpts [LW · GW] that can be posted on LW
- create glowfic characters for top AI researchers, and have Eliezer critique their ideas by role-playing with them (mostly a joke)
Some plans sound much less dumb but maybe intractable:
- cure Eliezer's chronic fatigue so he can actually attempt to ~~grant humanity a couple more bits of information-theoretic dignity~~ save the world
  - There was a $100,000 bounty for this that went unclaimed. Also, 5 people worked pretty seriously on it part-time for 2 years before giving up.
- have Eliezer do more consulting with AI alignment researchers instead
- This is already happening. I have heard that this is much more tiring for Eliezer than writing glowfic, and the glowfic is basically free, being written in his free time and not requiring nearly as much energy as consulting.
[1] Note that not all glowfic is rationalist fiction, and not all rationalist fiction is written as glowfic.
[2] In the planecrash series, Otolmens is the god of preventing existential risk.
28 comments
Comments sorted by top scores.
comment by Dweomite · 2022-05-18T05:24:28.058Z · LW(p) · GW(p)
Rational actors don’t respond to threats
I'm currently reading planecrash, and just today read a scene that could plausibly have prompted this bullet point: Keltham is confused about teachers punishing students, and makes an argument about how if someone threatens to break your arm unless you give them your shoes, you should fight back, even though having your arm broken is worse than losing your shoes.
But my interpretation of this scene was "Keltham has lived all his life in dath ilan, where Very Serious people have done a lot of work specifically to engineer a societal equilibrium where this would be true, and has utterly failed to grasp how the game theory changes for the circumstances in this new world (partly because culture gap, partly because lies)." I don't think it's actually true in general that it's irrational to respond to threats (though judging when it's rational is more complicated than just deciding whether a broken arm is worse than losing your shoes).
(The glowfic characters don't have cause to directly address this point, because "teachers punishing students" isn't actually about threats at all; it's reinforcement, which is a different thing, and they are arguably still doing it wrong but for totally different reasons, so Keltham's parable about shoes turns out to be irrelevant.)
I...guess I could probably turn my interpretation of the scene into a post, if that has noticeable expected value? Which it probably does if this scene is commonly being interpreted as "Keltham correctly argues that it is never rational to cave to a threat", but I'm not actually sure if this is the scene you had in mind or if your interpretation of it is common.
↑ comment by Said Achmiz (SaidAchmiz) · 2022-09-02T18:42:50.363Z · LW(p) · GW(p)
I have also had the thought, very often while reading this story, that many of the (apparently? it’s sometimes hard to tell, though not always) intended lessons do seem to be wrong. Neither Keltham’s nor the “dath ilan” narrator’s explanations / arguments for these (apparently) intended lessons are convincing, generally (indeed they often serve to solidify my view that the lessons are actually wrong).
↑ comment by Dweomite · 2022-08-23T19:53:54.642Z · LW(p) · GW(p)
For posterity: I've read much further in planecrash, and it has gradually become clear that this no-giving-in-to-threats thing is a considered philosophical position (not a throwaway detail adding color to dath ilan), and in fact is rather important to the overarching plot, but (as of now) still hasn't been explained in full detail.
There's now a reserved threadspace here where the authors promise to explain this "eventually", asynchronously with the main story, but that discussion has not yet begun.
↑ comment by Celenduin (michael-grosse) · 2022-08-20T20:41:28.092Z · LW(p) · GW(p)
This would seem to be related to "Knowing when to lose" from HPMOR.
comment by Razied · 2022-04-27T12:17:59.522Z · LW(p) · GW(p)
Well, I tried reading mad investor chaos, and even though I loved HPMOR, I couldn't make it through the first thread page of that story. It just feels extremely pedantic, though that's not exactly the right word. The density of terminology makes it all unpleasant; even though I understand what every term means, it just feels like a horribly stilted form of human communication. This might be appropriate in-universe, but it doesn't make it any less annoying to read.
comment by Slider · 2022-04-27T11:59:45.864Z · LW(p) · GW(p)
One fun thing about the stories is that they are nuanced and express positions as beliefs of the characters, and because there is such a variety, the authors can't personally be backing everything. For the same reason it's hard to argue what the correct takeaway is. Making everything super complicated keeps things interesting and is mentally stimulating, but doesn't provide the most clarity. I am pretty sure that "people should regard Evil as a supreme virtue" is not a correct takeaway, but there is something to the direction of "don't be Stupid Good".
Although the explicit lessons about cognition are very condensed, the context of their being practiced immediately before or after is the kind of thing I suspect is pretty central, and harder to make shorter.
It did occur to me that I would totally read through "virtues and their layers" and a Tolkien-style specification of Baseline.
comment by Richard_Kennaway · 2022-04-27T07:54:00.393Z · LW(p) · GW(p)
Is there any Eliezer glowfic besides "mad investor chaos and the woman of asmodeus"? That work is gigantic enough on its own, and because it's so gigantic I find myself unmotivated to read any more of it now that I've more or less got the framework of that world.
Also, is that work a collaboration between Eliezer and one or more others? While reading it, for some reason I took Eliezer to be writing Keltham's part and someone else GM-ing all the other characters, but I'm not sure I have any reason to think that.
↑ comment by Vaniver · 2022-04-27T18:15:13.164Z · LW(p) · GW(p)
Also, is that work a collaboration between Eliezer and one or more others? While reading it, for some reason I took Eliezer to be writing Keltham's part and someone else GM-ing all the other characters, but I'm not sure I have any reason to think that.
Glowfic is generally written by multiple people. When you look at a post, you'll see on the left the character picture for that post (giving some mood info), the character's name, the character's short phrase-bio, and then below that the author's username.
Most of planecrash is written by Iarwain and lintamande, but the most recent thread has five authors (as more characters have joined the research project).
↑ comment by Said Achmiz (SaidAchmiz) · 2022-09-02T18:39:07.480Z · LW(p) · GW(p)
Most of planecrash is written by Iarwain and lintamande
Clarifying for readers who don’t keep track of these sorts of things:
Iarwain is Eliezer, and lintamande is a different person who is not Eliezer.
↑ comment by Slider · 2022-04-27T11:33:28.340Z · LW(p) · GW(p)
Each "post" on the glowfic also lists the author, which I imagine is linked to the account the post originated from. There are two main authors. They don't always strictly stick to writing particular characters. There are definitely parts with a clear agent-environment structure to the proceedings: "Keltham tries to open the door. Does it open? Yes, it does." The structure does tickle my game literacy. While the end text is frozen in stone, because it's based on interaction, counterfactuals become way more relevant (the participants would be prepared to tell the story even if there were slight twists).
Other roleplaying shows also have that structure where it's mostly quiet and then exciting things happen in spiky, sporadic bursts. A once-a-week 3-4 hour episode tends to get made into a 15-minute clip compilation with 1-2 minute tidbits of tasty character expression (the bits that everybody remembers from watching the episode).
comment by maia · 2022-04-28T19:48:17.818Z · LW(p) · GW(p)
Re: no e-book version: here's a script for downloading glowfic posts and continuities into epub format: https://github.com/rocurley/glowfic-dl
comment by Cakoluchiam · 2023-10-08T02:34:49.387Z · LW(p) · GW(p)
A free full-cast Audiobook of Planecrash is currently in production at https://shows.acast.com/project-lawful-aka-planecrash, using AI-generated voices. It is quite excellent, albeit missing a few of the glowfic-specific elements such as character portraits and reaction tags (posts with no text, only character portraits). I highly recommend it for anyone.
There is a parallel analysis podcast and book club hosted by myself and members of The Bayesian Conspiracy's discord channel, formerly It Makes Sense If You Understand Decision Theory, now We Want Headbands (in homage to We've Got Worm). As we go, I'm sectioning it into shorter Books and Chapters, with somewhat-descriptive titles. The podcast-aligned table of contents and links to the podcasts are available at http://www.imsiyudt.com/
comment by Henry Prowbell · 2022-05-05T08:55:38.986Z · LW(p) · GW(p)
If somebody has time to pour into this I'd suggest recording an audio version of Mad Investor Chaos.
HPMOR reached a lot more people thanks to Eneasz Brodski's podcast recordings. That effect could be much more pronounced here if the weird glowfic format is putting people off.
I'd certainly be more likely to get through it if I could play it in the background whilst doing chores, commuting or falling asleep at night.
That's how I first listened to HPMOR, and then once I'd realised how good it was I went back and reread it slowly, taking notes, making an effort to internalize the lessons.
comment by Yoav Ravid · 2022-04-28T03:38:15.512Z · LW(p) · GW(p)
I would be glad if stories from there were straight up crossposted to here (and perhaps formatted/edited a bit), because several times already I went to the site to read something when I saw a recommendation, and just couldn't navigate there and understand what I'm supposed to read.
comment by Thomas Kwa (thomas-kwa) · 2023-02-17T23:35:54.787Z · LW(p) · GW(p)
I'm offering a $300 bounty to anyone who gets 100 karma doing this this year (without any vote manipulation).
↑ comment by Thomas Kwa (thomas-kwa) · 2023-06-02T20:20:54.975Z · LW(p) · GW(p)
The bounty remains open, but I'm no longer excited about this, for three reasons:
- lack of evidence that glowfic is an important positive influence on rationality
- Eliezer is speaking in the public sphere (some would argue too much)
- the generally increasing quality and decreasing weirdness of alignment research
↑ comment by Max H (Maxc) · 2023-06-02T20:31:14.110Z · LW(p) · GW(p)
I wasn't aware of the bounty until seeing this comment, but I am a big fan of planecrash, both as a work of fiction and as pedagogy.
I wrote one post [LW · GW] that built on the corrigibility tag in planecrash, and another [LW · GW] on understanding decision theory, which isn't directly based on anything in planecrash, but is kind of loosely inspired by some things I learned from reading it.
(Neither of these posts appear to meet the requirements for the bounty, and they didn't get much engagement in any case. Just pointing them out in case you or anyone else is looking for some planecrash-inspired rationality / AI content.)
comment by Nicholas / Heather Kross (NicholasKross) · 2022-09-29T18:14:40.803Z · LW(p) · GW(p)
Planecrash is really cool, but also I am allergic to reading fantasy proper nouns, let alone remembering what they refer to and the relationships between them.
Some fantasy is easier for me to absorb because it's either highly visual (in non-HPMOR HP, they mostly shoot colorful firebolts at each other) and/or based on existing intuitive concepts (in ATLA, it's easy to learn what "waterbending" is, and suddenly you can quickly figure out "metalbending").
Tempted to make an Anki deck and/or cheatsheet for the things in Planecrash that I'd want to have on hand (e.g. the names of different Gods), but I'm open and eager for easier/better solutions. Is there a character sheet somewhere?
EDIT: 2 ideas I had, not sure if plugins for this exist already:
- browser extension that replaces words with some short custom definition and highlighting. So I can replace [godname] with [god of mad experimentation].
- browser extension that lets you hover over words to get a custom, user-set definition. I think this might do that?
comment by Tofly · 2022-06-06T20:26:32.090Z · LW(p) · GW(p)
cure Eliezer's chronic fatigue so he can actually attempt to ~~grant humanity a couple more bits of information-theoretic dignity~~ save the world
Possibly relevant: I know someone who had chronic fatigue syndrome which largely disappeared after she had her first child. I could possibly put her in contact with Eliezer or someone working on the problem.
↑ comment by Said Achmiz (SaidAchmiz) · 2022-09-02T18:43:45.179Z · LW(p) · GW(p)
Wouldn’t this solution be, ahem, biologically infeasible for Eliezer to implement?
comment by AprilSR · 2022-05-04T18:03:58.164Z · LW(p) · GW(p)
Was the "glowfic excerpts" link supposed to be Self Integrity and the Drowning Child [LW · GW]?
↑ comment by Thomas Kwa (thomas-kwa) · 2022-05-04T18:28:35.143Z · LW(p) · GW(p)
Yes, fixed
comment by Casey B. (Zahima) · 2022-04-27T18:37:50.297Z · LW(p) · GW(p)
With so much energy/effort apparently available for Eliezer-centered improvement initiatives (like the $100,000 bounty mentioned in this post), I'd like to propose that we seriously consider cloning Eliezer.
From a layman/outsider perspective, it seems the hardest thing would be keeping it a secret so as to avoid controversy and legal trouble, since from a technical perspective it seems possible and relatively cheap. EA folks seem well connected and capable of such coordination, even under the burden of secrecy and keeping as few people "in the know" as possible.
Partially related: (in the category of comparatively off-the-wall - but nonviolent - AI alignment strategies): at some point there was a suggestion that MIRI pay $10mil (or some such figure) to Terence Tao (or some such prodigy) to help with alignment work. Eliezer replied thus [LW(p) · GW(p)]:
We'd absolutely pay him if he showed up and said he wanted to work on the problem. Every time I've asked about trying anything like this, all the advisors claim that you cannot pay people at the Terry Tao level to work on problems that don't interest them. We have already extensively verified that it doesn't particularly work for eg university professors.
I'd love to see more visibility into proposed strategies like these (i.e. strategies surrounding/above the object-level strategy of "everyone who can do alignment research puts their head down and works", and the related: "everyone else make money in their comparative specialization/advantage and donate to MIRI/FHI/etc"). Even visibility into why various strategies were shot down would be useful, and a potential catalyst for farming further ideas from the community. (even if - for game theoretic reasons - one may never be able to confirm that an idea has been tried, as in my cloning suggestion)
↑ comment by Alex Vermillion (tomcatfish) · 2022-08-12T19:30:38.474Z · LW(p) · GW(p)
Meta level: Why on earth would you say "Here is my secret idea, internet"? That doesn't make any sense to me