post by Quaerendo
This was originally planned for release around Christmas, but our old friend Mr. Planning Fallacy said no. The best time to plant an oak tree is twenty years ago; the second-best time is today.
I present to you: Rationality Abridged -- a 120-page, nearly 50,000-word summary of "Rationality: From AI to Zombies". Yes, it's almost a short book. But it is also true that it's less than 1/10th the length of the original. That should give you some perspective on how massively long R:AZ actually is.
As I note in the Preface, part of what motivated me to write this was that the existing summaries out there (like the ones on the LW Wiki, or the Whirlwind Tour) are too short, are incomplete (e.g. not summarizing the "interludes"), and lack illustrations and a glossary. As such, they are mainly useful to those who have already read the articles and want to refresh their memory at a glance. My aim was to serve that same purpose while being more detailed and extensive, including more examples from the articles in the summaries, so that newcomers to the rationality community can also use it to understand the key points. Thus, it is essentially a heavily abridged version of R:AZ.
Here is the link to the document. It is a PDF file (2.80 MB); if someone wants to convert it to .epub or .mobi format and share it here, you're welcome to.
There is also a text copy at my brand new blog: perpetualcanon.blogspot.com/p/rationality.html
I hope you enjoy it.
(By the way, this is my first post. I've been lurking around for a while.)
comment by habryka (habryka4) ·
2018-01-06T00:58:20.783Z · LW(p) · GW(p)
I haven't looked at it in detail yet, but it seems like this should also be available as a sequence on the new LessWrong (we are still finalizing the sequences features, but you can see a bunch of examples in The Library).
We could import the HTML from your website without much hassle and publish it as a series of LW posts.
comment by Raemon ·
2018-01-06T01:01:25.480Z · LW(p) · GW(p)
Massive props. (For your first post, no less?)
I see some things I think could be tweaked a bit - mostly in the form of breaking paragraphs down into somewhat more digestible chunks (each summary feels slightly wall-of-text-y to me). However, overall my main takeaway is that this is great. :)
↑ comment by Quaerendo ·
2018-01-06T17:31:38.422Z · LW(p) · GW(p)
Thanks for the kind words :) I agree with what you're saying about the 'wall-of-text-iness', especially on the web version; so I'm going to add some white space.
comment by Said Achmiz (SaidAchmiz) ·
2018-01-06T00:45:05.358Z · LW(p) · GW(p)
A worthy project! Very nice.
It seems like this could benefit from webification, a la https://www.readthesequences.com (including hyperlinking of glossary terms, navigation between sections, perhaps linking to the full versions, etc.—all the amenities of web-based hypertext). If this idea interests you, let me know.
comment by query ·
2018-01-06T19:24:19.813Z · LW(p) · GW(p)
This is completely awesome, thanks for doing this. This is something I can imagine actually sending to semi-interested friends.
Direct messaging seems to be wonky at the moment, so I'll put a suggested correction here: for 2.4, Aumann's Agreement Theorem does not show that if two people disagree, at least one of them is doing something wrong. From Wikipedia: "if two people are genuine Bayesian rationalists with common priors, and if they each have common knowledge of their individual posterior probabilities, then their posteriors must be equal." This could fail at multiple steps; off the top of my head:
1. The humans might not be (mathematically pure) Bayesian rationalists (and in fact they're not).
2. The humans might not have common priors (even if they satisfy 1).
3. The humans might not have common knowledge of their posterior probabilities; a human saying words is a signal, not direct knowledge, so them telling you their posterior probabilities may not do the trick (and they might not know them).
You could say failing to satisfy 1-3 means that at least one of them is "doing something wrong", but I think it's a misleading stretch -- failing to be normatively matched up to an arbitrary unobtainable mathematical structure is not what we usually call wrong. It stuck out to me as something that would put off readers with a bullshit detector, so I think it'd be worth fixing.
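As a toy numerical illustration of condition 2 (a sketch only, not the full theorem): a simple Bayes update on identical evidence yields identical posteriors when the agents share a prior, but the agreement guarantee evaporates as soon as the priors differ.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H | E) by Bayes' rule, for a binary hypothesis H."""
    numerator = prior * p_e_given_h
    return numerator / (numerator + (1 - prior) * p_e_given_not_h)

# Two agents with a common prior, updating on the same evidence,
# end up with the same posterior.
alice = posterior(0.5, 0.8, 0.2)
bob = posterior(0.5, 0.8, 0.2)
assert alice == bob  # both 0.8

# With different priors, the same evidence produces different
# posteriors, so the theorem's common-prior premise fails.
carol = posterior(0.3, 0.8, 0.2)
assert alice != carol  # carol is at roughly 0.63
```

This only exercises the common-prior condition; the theorem itself concerns agents who additionally have common knowledge of each other's posteriors, which is a far stronger condition than two humans exchanging words.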
↑ comment by Quaerendo ·
2018-01-06T20:27:10.443Z · LW(p) · GW(p)
Thanks for the feedback.
Here's the quote from the original article:
I said, "So if I make an Artificial Intelligence that, without being deliberately preprogrammed with any sort of script, starts talking about an emotional life that sounds like ours, that means your religion is wrong."
He said, "Well, um, I guess we may have to agree to disagree on this."
I said: "No, we can't, actually. There's a theorem of rationality called Aumann's Agreement Theorem which shows that no two rationalists can agree to disagree. If two people disagree with each other, at least one of them must be doing something wrong."
One could discuss whether Eliezer was right to appeal to AAT in a conversation like this, given that neither he nor his conversational partner is a perfect Bayesian. I don't think it's entirely unfair to say that humans are flawed to the extent that we fail to live up to the ideal Bayesian standard (even if such a standard is unobtainable), so it's not clear to me why it would be misleading to say that if two people have common knowledge of a disagreement, at least one of them (or both) is "doing something wrong".
Nonetheless, I agree that it would be an improvement to at least be clearer about what Aumann's Agreement Theorem actually says, so I will amend that part of the text.
↑ comment by query ·
2018-01-06T21:23:02.168Z · LW(p) · GW(p)
Yeah; it's not open/shut. I guess I'd say that in the current phrasing, "but Aumann's Agreement Theorem shows that if two people disagree, at least one is doing something wrong" suggests implications without actually saying anything interesting -- at least one of them is doing something wrong by this standard whether or not they agree. I think adding some more context to make people less suspicious they're getting Eulered (http://slatestarcodex.com/2014/08/10/getting-eulered/) would be good.
I think this flaw is basically in the original article as well, though, so it's also a struggle between accurately representing the source and adding editorial correction.
Nitpicks aside, want to say again that this is really great; thank you!
comment by Yoav Ravid ·
2019-09-07T20:47:20.594Z · LW(p) · GW(p)
Just discovered this through the archive feature, this is awesome!
I think it should be linked in more places, it's a really useful resource.
Two years late, but thank you for making this!
comment by Adnll ·
2018-02-18T12:31:37.755Z · LW(p) · GW(p)
Ideal format for beginning rationalists; thank you so much for it. I am reading it every day, going to the full articles when I want more depth. It has also helped me "recruit" new rationalists among my friends. I think this work may have wide and long-lasting effects.
It would be extra nice (though I don't have the skills to do it myself) to have the links go to LW 2.0. Maybe you have reasons against it that I haven't considered?
↑ comment by Quaerendo ·
2018-02-20T19:47:21.317Z · LW(p) · GW(p)
Thanks, I'm glad you found it useful!
The reason I didn't link to LW 2.0 is that it's still officially in beta, and I assumed that the URL (lesserwrong.com) will eventually change back to lesswrong.com (but perhaps I'm mistaken about this; I'm not entirely sure what the plan is). Besides, the old LW site links to LW 2.0 on the front page.
comment by waveman ·
2018-02-20T01:20:16.172Z · LW(p) · GW(p)
I just finished reading it. I find it a very useful summary; that is a hard thing to do, I know, and it takes a lot of work. Thank you.
I noticed a typo:
"The exact same gamble, framed differently, causes circular preferences.
People prefer certainty, and they refuse to trade off scared values (e.g. life) for unsacred ones.
But our moral preferences shouldn’t be circular."
scared => sacred