Rationality: From AI to Zombies

post by Rob Bensinger (RobbBB) · 2015-03-13T15:11:20.920Z · LW · GW · Legacy · 104 comments

 

Eliezer Yudkowsky's original Sequences have been edited, reordered, and converted into an ebook!

Rationality: From AI to Zombies is now available in PDF, EPUB, and MOBI versions on intelligence.org. You can choose your own price to pay for it (minimum $0.00), or buy it for $4.99 from Amazon.

The ebook's release has been timed to coincide with the end of Eliezer's other well-known introduction to rationality, Harry Potter and the Methods of Rationality. The two share many similar themes, and although Rationality: From AI to Zombies is (mostly) nonfiction, it is decidedly unconventional nonfiction, freely drifting in style from cryptic allegory to personal vignette to impassioned manifesto.

The 333 posts have been reorganized into twenty-six sequences, lettered A through Z.

Several sequences and posts have been renamed, so you'll need to consult the ebook's table of contents to spot all the correspondences. Four of the sequences are almost completely new. They were written at the same time as Eliezer's other Overcoming Bias posts, but were never ordered or grouped together. Some of the others (A, C, L, S, V, Y, Z) have been substantially expanded, shrunk, or rearranged, but are still based largely on old content from the Sequences.

One of the most common complaints about the old Sequences was that there was no canonical default order, especially for people who didn't want to read the entire blog archive chronologically. Despite being called "sequences," their structure looked more like a complicated, looping web than like a line. With Rationality: From AI to Zombies, it will still be possible to hop back and forth between different parts of the book, but this will no longer be required for basic comprehension. The contents have been reviewed for consistency and in-context continuity, so that they can genuinely be read in sequence. You can simply read the book as a book.

I have also created a community-edited Glossary for Rationality: From AI to Zombies. You're invited to improve on the definitions and explanations there, and add new ones if you think of any while reading. When we release print versions of the ebook (as a six-volume set), a future version of the Glossary will probably be included.

104 comments

Comments sorted by top scores.

comment by Persol · 2015-04-01T01:17:22.182Z · LW(p) · GW(p)

Perhaps this is already discussed elsewhere and I'm failing at search. I'd be amazed if the below wasn't already pointed out.

On rereading this material it strikes me that this text is effectively inaccessible to large portions of the population. When I binged on these posts several years ago, I was just focused on the content for myself. This time, I had the thought to purchase it for some others who would benefit from this material. I realized relatively quickly that buying this book for these people would likely fail to accomplish anything, and may make a future attempt more difficult.

I think many of my specific concerns apply to a large percentage of the population.

  • The preface and introductions appear aimed at return readers. The preface is largely a description of 'oops', which means little to a new reader and is likely to trigger a negative halo effect in people who don't yet know what that means. - "I don't know what he's talking about, and he seems to make lots of writing mistakes."
  • There isn't a 'hook'. Talking about balls in urns in the intro seems too abstract for people. The rest of the sequences have more accessible examples, which most people would never reach.
  • Much of the original rhetoric is still in place. Admittedly that's part of what I liked about the original posts, but I think it limits the audience. As a specific example, a family member is starting high school, likes science, and I think would benefit from this material. However, her immediate family is very religious, to the point of 'disowning' a sister when they found out about an abortion ~25 years ago. The existing material uses religion as an example of 'this is bad' frequently enough that my family member would likely be physically isolated from the material and socially isolated from myself. 87% of America (86% global) have some level of belief in religion. The current examples are likely to trigger defensive mechanisms before readers are educated about them. (Side-note: 'Waking Up: A Guide to Spirituality Without Religion' by Sam Harris is a good book, but has this same exact issue.)
  • Terminology is not sufficiently explained for people seeing this material with fresh eyes. As an example, ~15% of the way through, 'New Improved Lottery' talks about probability distributions, with no previous mention of the concept. Words with specific meanings that are now often used go unexplained. 'Quantitative' is used and means something to us, but not to most people. The Kindle-provided dictionary and Wikipedia definitions are not very useful. This applies to the chapter titles as well, such as 'Bayesian Judo'.
  • The level of hyperlinks, while useful for us, is not optimal for someone reading a subject for the first time. A new reader would have to switch topics in many cases to understand the reference.
  • References to LessWrong and Overcoming Bias only make sense to us.

Eliezer and Robb have done a lot to get the material into book form... but it's preaching to the choir.

Specifically what I think would make this more accessible:

  • A more immediate hook along the lines of 'Practicing rationality will help you make more winning decisions and be less wrong.' (IE: keep reading because this=good and doable) Eliezer was prolific enough that I think good paragraphs likely already exist, but they need connectors.
  • Where negative examples are likely to dissuade large numbers of people, find better examples. Avoid mentions of specific politics, or of religion in general. It's better to boil the frog.
  • Move or remove all early references to Bayes. 'Beliefs that are rational are called Bayesian' means nothing to most people. Later references might as well be technobabble.
  • Make sure other terminology is actually explained/understandable before it's used in the middle of an otherwise straightforward chapter. I'd try 1n & 2n-gramming the contents against Google Ngrams to identify terminology we need to make sure is actually explained/understood before casual use.
  • Get this closer to a 7th grade reading level; see the sketch after this list for one way to measure that. This sets a low bar at potential readers who can understand 'blockbuster' books in English. (This might be accomplished purely with the terminology concern/change above)
  • Change all hyperlinks to footnotes.
  • Discuss LessWrong, Overcoming Bias, Eliezer, Hanson in the preface as 'these cool places/people where much of this comes from' but limit the references within the content.
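
For the reading-level suggestion above, even a crude automated check would help triage chapters. A minimal sketch in Python (the Flesch-Kincaid grade formula is standard; the naive syllable counter and the filename are just stand-ins):

    import re

    def flesch_kincaid_grade(text):
        # Flesch-Kincaid grade level:
        # 0.39 * (words/sentences) + 11.8 * (syllables/word) - 15.59
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        if not words:
            return 0.0
        # Very rough syllable estimate: runs of vowels, at least one per word
        syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
        return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

    chapter = open("bayesian_judo.txt").read()  # hypothetical plain-text chapter dump
    print(f"Estimated grade level: {flesch_kincaid_grade(chapter):.1f}")  # target: ~7

Anything scoring far above 7 would then be a candidate for the terminology pass.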

Is there any ongoing attempt or desire to do a group edit of this into an 'Accessible Rationality'?

Replies from: RobbBB, Gram_Stone, Jayson_Virissimo, ESRogs, Kenny, ChristianKl
comment by Rob Bensinger (RobbBB) · 2015-04-01T22:30:19.469Z · LW(p) · GW(p)

Thanks for all the comments! This is helpful. I agree 'Biases: An Introduction' needs to function better as a hook. The balls-in-an-urn example was chosen because it's an example Eliezer re-uses a few times later in the Sequences, but I'd love to hear ideas for better examples, or in general a more interesting way to start the book.

'Religion is an obvious example of a false set of doctrines' is so thoroughly baked into the Sequences that I think getting rid of it would require creating an entirely new book. R:AZ won't be as effective for theists, just as it won't be as effective for people who find math, philosophy, or science aversive.

I agree with you about 'boiling the frog', though: it would be nice if the book eased its way into anti-religious examples. I ended up deciding it was more important to quickly reach accessible interesting examples (like the ones in 'Fake Beliefs') than to optimize for broad appeal to theists and agnostics. One idea I've been tossing around, though, is to edit Book I ('Map and Territory') and Book II ('How to Actually Change Your Mind') for future release in such a way that it's possible to read II before I. It will still probably be better for most people to start with I, but if this works perhaps some agnostic or culturally religious readers will be able to start with II and get through more content before running into a huge number of anti-religious sentiments.

I agree about doing more to address the technobabble. In addition to including a Glossary in future editions of the book, I'll look into turning some unnecessarily technical asides into footnotes. The hyperlinks, of course, will need to be removed regardless when the print book comes out.

comment by Gram_Stone · 2015-04-01T05:24:47.740Z · LW(p) · GW(p)

I've had similar concerns and I agree with a lot of this.

Get this closer to a 7th grade reading level. This sets a low bar at potential readers who can understand 'blockbuster' books in English. (This might be accomplished purely with the terminology concern/change above)

If we really want to approach a 7th grade reading level, then we had better aim for kindergartners. I remember reading through the book trying to imagine how to bring it down several levels and thinking about just how many words I was taking for granted as a high-IQ adult who has had plenty of time to just passively soak up vocabulary and overviews of highly complex fields. I just don't think we're there yet; I think that's why there are things like SPARC where we're trying it out on highly intelligent high school students who are unusually well-educated for their age.

Change all hyperlinks to footnotes.

To my knowledge this is already a priority.

Is there any ongoing attempt or desire to do a group edit of this into an 'Accessible Rationality'?

I find that there's a wide disparity between LW users in intelligence and education, and I don't know if I see a wiki-like approach converging on anything particularly useful. I would imagine arguments about what's not simple enough and what's not complex enough, and about people using examples from their pet fields that others don't understand. It might work if you threw enough bodies at it, like Wikipedia, but we don't have that many bodies. I don't know how others feel.

Replies from: None, Persol
comment by [deleted] · 2015-04-01T18:57:15.767Z · LW(p) · GW(p)

The point wasn't to aim for 7th graders, but a 7th grade level which would make it generally accessible to busy adults.

comment by Persol · 2015-04-01T21:40:22.798Z · LW(p) · GW(p)

See Mark's post regarding 7th grade; my intention was aimed at adults, who (for whatever reason) seem to like the 7th grade reading level.

I'm not sure how to effectively crowdsource this without getting volunteers for specific (non-overlapping) tasks and sections. I share your concern with the wiki method, unless each section has a lead. At work we regularly get 20 people to collaborate on ~100-page proposals, but the same incentives aren't available in this case. Copyediting is time-consuming and unexciting; does anyone know of similar crowdsourced efforts? I found a few, but most still had paid writers.

comment by Jayson_Virissimo · 2015-04-04T01:01:50.608Z · LW(p) · GW(p)

'Accessible Rationality' already exists... in the form of a wildly popular Harry Potter fanfiction.

comment by ESRogs · 2015-04-04T03:20:44.852Z · LW(p) · GW(p)

I'd try 1n & 2n-gramming the contents against Google Ngrams

What does 1n or 2n-gramming mean? I'm looking at Google Ngrams, and it's not obvious to me.

Replies from: Persol
comment by Persol · 2015-04-04T22:24:06.424Z · LW(p) · GW(p)

1-gramming is checking single words; it should identify unfamiliar vocabulary. (Ex: quantifiable)

2-gramming would check pairs of words; it should identify uncommon phrases made of common words (ex: probability mass - better examples probably exist)

The 1/2 gram terminology may be made up, but I think I've heard it used before.
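
In code, the idea is roughly this (only a sketch; 'rationality.txt' stands in for a plain-text dump of the book, and the comparison against Google Ngrams would still happen separately, e.g. against their downloadable frequency data):

    from collections import Counter
    import re

    def ngrams(text, n):
        # Lowercase word n-grams from a chunk of text
        words = re.findall(r"[a-z']+", text.lower())
        return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

    book_text = open("rationality.txt").read()  # hypothetical plain-text dump

    unigrams = ngrams(book_text, 1)  # single words -> unfamiliar vocabulary
    bigrams = ngrams(book_text, 2)   # word pairs -> phrases like 'probability mass'

    # List the book's most frequent terms for a manual check against a frequency
    # corpus: anything common here but rare in everyday English probably needs
    # explaining before first use.
    for gram, count in (unigrams + bigrams).most_common(50):
        print(" ".join(gram), count)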

Replies from: ESRogs
comment by ESRogs · 2015-04-05T03:36:26.212Z · LW(p) · GW(p)

Thanks!

comment by Kenny · 2015-04-04T00:38:31.300Z · LW(p) · GW(p)

What's the payoff of changing hyperlinks to footnotes? Given all of the other, substantive, issues you raised, that seems unlikely to make any significant difference.

Replies from: Persol
comment by Persol · 2015-04-04T01:00:34.341Z · LW(p) · GW(p)

Two reasons:

  • Frequently having multiple words as hyperlinks in ebooks means that 'turning the page' may instead change chapters. Maybe it is just a problem with the iPhone Kindle app.
  • For links that reference forward chapters, what is a new reader to do? They can ignore it and not understand the reference, or they can click, read, and then try to go back... but it's not a very smooth reading experience.

Granted, I probably wouldn't have noticed the second issue, if not for the first issue.

comment by ChristianKl · 2015-04-01T19:46:44.457Z · LW(p) · GW(p)

I don't think the point of the sequences or the book is to be accessible to everyone. If you want to write 'Accessible Rationality' it likely makes more sense to start from scratch.

Replies from: Persol
comment by Persol · 2015-04-01T21:23:21.049Z · LW(p) · GW(p)

Agreed that it may not be the point, but other than what I think are fixable issues, the book contents work well. I don't think starting from scratch would be a large enough improvement to justify the extra time and increased chance of failure.

I think the big work is in making the examples accessible, and Eliezer already did this for the -other- negative trigger.

"If you want to make a point about science, or rationality, then my advice is to not choose a domain from contemporary politics if you can possibly avoid it. If your point is inherently about politics, then talk about Louis XVI during the French Revolution. Politics is an important domain to which we should individually apply our rationality— but it’s a terrible domain in which to learn. Why would anyone pick such a distracting example to illustrate nonmonotonic reasoning?"

comment by Scott Garrabrant · 2015-03-13T15:17:28.673Z · LW(p) · GW(p)

The cover is incorrect :(

EDIT: If you do not understand this post, read essay 268 from the book!

Replies from: B_For_Bandana, Error, ciphergoth, Quill_McGee
comment by B_For_Bandana · 2015-03-13T21:20:03.692Z · LW(p) · GW(p)

The code of the shepherds is terrible and stern. One sheep, one pebble, hang the consequences. They have been known to commit fifteen, and twenty-one, and even more, rather than break it.

comment by Error · 2015-03-13T18:19:18.256Z · LW(p) · GW(p)

I just bust out laughing in the office at this...and can't share the joke with anybody.

Now I want to know if the incorrectness is intentional and if so, what message it's supposed to carry.

Replies from: lmm
comment by lmm · 2015-03-17T22:58:29.928Z · LW(p) · GW(p)

It's a bluff to make us think Yudkowsky cares about things like human happiness rather than what's right. Don't be fooled!

comment by Quill_McGee · 2015-03-13T18:30:11.410Z · LW(p) · GW(p)

There might be one more stone not visible?

Replies from: None, Transfuturist
comment by [deleted] · 2015-03-14T19:45:38.617Z · LW(p) · GW(p)

10 would still be incorrect.

Replies from: Quill_McGee
comment by Quill_McGee · 2015-03-19T23:51:19.325Z · LW(p) · GW(p)

Darn it, and I counted like five times to make sure there really were 10 visible before I said anything. I didn't realize that the stone the middle-top stone was on top of was one stone, not two.

comment by Transfuturist · 2015-03-13T18:38:17.015Z · LW(p) · GW(p)

I see nine stones, not ten.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2015-03-14T13:08:10.113Z · LW(p) · GW(p)

Three at the back, three at the front, one to one side, one standing up... the question is whether it's standing on one stone or two.

comment by alexvermeer · 2015-03-13T16:46:10.221Z · LW(p) · GW(p)

Just a reminder that mistakes/problems/errors can be sent to errata@intelligence.org and we'll try fix them!

Replies from: quinox, iarwain1
comment by quinox · 2015-03-15T10:52:47.452Z · LW(p) · GW(p)

I can't mail that address, I get a failure message from Google:

We're writing to let you know that the group you tried to contact (errata) may not exist, or you may not have permission to post messages to the group.

I'll post my feedback here:

Hello,

I got the book "Rationality: From AI to Zombies" via intelligence.org/e-junkie for my Kindle (5th gen, not the paperwhite/touch/fire). So far I've read a dozen pages, but since it will take me a while to get to the end of the book I'll give some feedback right away:

  • The book looks great! Some other ebooks I have don't use page-breaks at the end of a chapter, don't have a Table of Contents, have inconsistent font types/sizes, etc. The PDF version is very pretty as well.
  • The filename "Rationality.mobi" (AI-Zombie) is the same as "rationality.mobi" (HPMOR)
  • A bunch of inter-book links such as "The Twelve Virtues of Rationality"/"Predictably Wrong"/"Fake Beliefs"/"Noticing Confusion" (all from Biases: An introduction) don't work: On my Kindle I have the option to "Follow link", but when I choose it the page refreshes and I'm still at the same spot.

    Inspecting the .mobi source with Calibre e-book reader I see:

    < a href="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" >Noticing Confusion< /a>

    The links from the TOCs and some other chapters do work properly.

  • Due to the lack of quotation marks and the nonuse of italics, I didn't realize the part "As a consequence, it might be necessary" was a quote (Biases: An introduction). The extra margins on the left and right do indicate something special, but with the experience of so many bad ebooks my brain assumed it was just a broken indentation level.
  • The difference between a link going to the web and one going to a location within the book isn't obvious: one is only a slightly darker grey than the other. In Calibre the links are a nice green/blue, but my Kindle doesn't have colours.

Cheers,

Replies from: lukeprog, alexvermeer, adamzerner
comment by lukeprog · 2015-03-15T18:27:14.800Z · LW(p) · GW(p)

I can't mail that address, I get a failure message from Google

Oops. Should be fixed now.

comment by alexvermeer · 2015-03-16T16:11:09.487Z · LW(p) · GW(p)

The book looks great!

Thanks!

A bunch of inter-book links such as ...

D'oh. It's all good in the epub, but something broke (for very dumb reasons) converting the mobi. It's fixed now. If you've already bought the book through Amazon or e-junkie, you'll have to re-download the file to get the fixed one (in a few hours, once Amazon approves the new book). Sorry about that.

The difference between a link going to the web and one going to a location within the book isn't obvious: one is only a slightly darker grey than the other. In Calibre the links are a nice green/blue, but my Kindle doesn't have colours.

Not much we can do about this. Amazon is very restrictive in how you can modify the styling of links. It works fine for displays with color, but people with e-ink displays are out of luck. :-(

Thanks.

comment by iarwain1 · 2015-03-13T16:57:33.865Z · LW(p) · GW(p)

we'll try fix them

I think you meant "try to fix them" :)

Replies from: pinyaka
comment by pinyaka · 2015-03-13T17:14:42.133Z · LW(p) · GW(p)

You should send that to errata@intelligence.org.

comment by Ben Pace (Benito) · 2015-03-13T07:12:11.657Z · LW(p) · GW(p)

Yay! Now I'm sending this to all of my friends!

Replies from: Caue
comment by Caue · 2015-03-13T17:46:12.835Z · LW(p) · GW(p)

My first reaction as well.

But that is easy. What I haven't figured out yet is how to get them to read it.

Replies from: Benito
comment by Ben Pace (Benito) · 2015-03-13T18:15:24.119Z · LW(p) · GW(p)

I've found that the people most interested in reading it are the ones I've already gotten addicted to HPMOR.

comment by bramflakes · 2015-03-13T23:13:36.628Z · LW(p) · GW(p)

One of the most common complaints about the old Sequences was that there was no canonical default order, especially for people who didn't want to read the entire blog archive chronologically.

I was tricked into doing this. Years ago someone posted an ebook claiming to be the Sequences, but it was actually just every single Yudkowsky blog post from 2006 to 2010 -_-

It wasn't until I noticed that only Yudkowsky's side of the FOOM debate was in there that I realized what had happened.

Replies from: ciphergoth, Vulture
comment by Paul Crowley (ciphergoth) · 2015-03-14T13:08:54.714Z · LW(p) · GW(p)

It wasn't meant as a trick! Organising them would have been very hard.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2015-03-15T05:52:06.363Z · LW(p) · GW(p)

Can confirm!

comment by Vulture · 2015-03-16T01:28:59.433Z · LW(p) · GW(p)

Just as a little bit of a counterpoint, I loved the 2006-2010 ebook and was never particularly bothered by the length. I read the whole thing at least twice through, I think, and have occasionally used it to look up posts and so on. The format just worked really well for me. This may be because I am an unusually fast reader, or because I was young and had nothing else to do. But it certainly isn't totally useless :P

Replies from: bramflakes
comment by bramflakes · 2015-03-16T23:20:08.000Z · LW(p) · GW(p)

Oh, I didn't mean to imply I didn't like it! It was a welcome companion for hundreds of long school bus journeys.

comment by slicko · 2015-03-13T18:47:50.718Z · LW(p) · GW(p)

Good work guys!

This might be the excuse I need to finally go through the complete sequences as opposed to relying on cherry-picking posts whenever I encounter a reference I don't already know.

comment by spxtr · 2015-03-16T07:49:42.190Z · LW(p) · GW(p)

I am impressed. The production quality on this is excellent, and the new introduction by Rob Bensinger is approachable for new readers. I will definitely be recommending this over the version on this site.

comment by [deleted] · 2015-03-14T19:48:37.650Z · LW(p) · GW(p)

I paid $0 because I'd rather not pay transaction fees on a donation to charity. You can donate to MIRI directly here:

https://intelligence.org/donate/

And CFAR here:

http://rationality.org/donate/

Replies from: malo
comment by Malo (malo) · 2015-03-15T16:19:47.099Z · LW(p) · GW(p)

See my comment here about this.

Replies from: None
comment by [deleted] · 2015-03-15T17:25:20.733Z · LW(p) · GW(p)

I used and prefer Bitcoin, which wasn't an option for the eBook and which carries smaller fees.

comment by Ixiel · 2015-03-14T12:03:57.070Z · LW(p) · GW(p)

Excellent, thank you! Any update on when the real book will be available for purchase for those of us who don't do ebooks?

Replies from: Ivan_Tishchenko
comment by Ivan_Tishchenko · 2015-10-20T11:32:12.065Z · LW(p) · GW(p)

I second this question! I want to have this book in print, sitting on my bookshelf.

comment by imuli · 2015-03-13T18:45:15.053Z · LW(p) · GW(p)

The zip file has some extra Apple metadata files included. Nothing too revealing, just dropbox bits.

comment by kotrfa · 2015-03-13T16:24:34.087Z · LW(p) · GW(p)

Can I ask where the money for the book goes, and to whom?

Replies from: alexvermeer
comment by alexvermeer · 2015-03-13T16:40:17.942Z · LW(p) · GW(p)

From Amazon, 30% goes to Amazon and 70% goes to MIRI.

From e-junkie (the pay-what-you-want option): 100% goes to MIRI, minus PayPal transaction fees (a few %).

Replies from: DanielLC, adamzerner
comment by DanielLC · 2015-03-14T02:25:19.730Z · LW(p) · GW(p)

Couldn't you pay $0.00, send the money to MIRI, and avoid transaction fees?

Replies from: RobbBB, malo, alexvermeer
comment by Rob Bensinger (RobbBB) · 2015-03-15T05:55:49.261Z · LW(p) · GW(p)

Yeah. Main reason to do it this way is fear of trivial inconveniences.

comment by Malo (malo) · 2015-03-15T16:17:38.359Z · LW(p) · GW(p)

Depending on how you sent money to MIRI, we'd incur transaction fees anyway (donating through PayPal using a PayPal account or CC). ACH donations have lower fees, and checks don't have any, but both of those take staff time to process, so unless the donation was say $50 or more, it probably wouldn't be worth it.

Replies from: None
comment by [deleted] · 2015-03-15T17:26:18.649Z · LW(p) · GW(p)

What about Bitcoin?

Replies from: malo, ike
comment by Malo (malo) · 2015-03-15T23:56:30.202Z · LW(p) · GW(p)

No fees, but also takes some extra staff time (additional bookkeeping/accounting work is involved), so there is some cost to it. If we got more BTC donations it would reduce the time cost per donation, due to effects of batching, but as it stands now, they are usually processed (record added to our donor database and accounting software) on an individual basis.

One thing that takes a significant amount of time is when someone mis-pays a Coinbase invoice (sends a different amount of BTC than they indicated on the Coinbase form on our site). Coinbase treats these payments in a different way that ends up requiring more time to process on our end.

All that being said we like having the BTC donation option, and it always makes me happy to see one come in. So if making contributions via BTC is your preference, I'm all for it :)

comment by ike · 2015-03-15T18:21:31.745Z · LW(p) · GW(p)

They use Coinbase, so according to this it's free up to $1 million.

Replies from: None
comment by [deleted] · 2015-03-15T19:29:12.021Z · LW(p) · GW(p)

It should be free, period. Coinbase doesn't charge fees for registered not-for-profits.

comment by alexvermeer · 2015-03-14T03:38:23.044Z · LW(p) · GW(p)

Yup, but those are convenient distribution platforms.

comment by Adam Zerner (adamzerner) · 2015-03-15T13:32:04.428Z · LW(p) · GW(p)

Perhaps this should be noted in the main article. I was thinking about buying it through Amazon until I saw this!

comment by MarkusRamikin · 2015-04-01T08:56:54.768Z · LW(p) · GW(p)

For reasons, I suggest that Bayesian Judo doesn't make EY look good to people who aren't already cheering for his team, and maybe it wasn't wise to include it.

More generally, the book feels a bit... neutered. Things like, for example, changing "if you go ahead and mess around with Wulky's teenage daughter" to "if you go ahead and insult Wulky". The first is concrete, evocative, and therefore strong, while the latter is fuzzy and weak. Though my impression may be skewed just because I remember the original examples so well.

comment by ilzolende · 2015-03-15T22:28:34.395Z · LW(p) · GW(p)

I am thinking of recommending this to people, all of whom are unlikely to pay. Is having people acquire this for $0 who would otherwise not have read it beneficial or harmful to MIRI? (If the answer is "harmful because of paying for people to download it", I can email it to my friends with a payment link instead of directing them to your website.)

Replies from: malo
comment by Malo (malo) · 2015-03-15T23:59:36.453Z · LW(p) · GW(p)

Definitely beneficial; there is no cost worth considering when it comes to the next marginal person getting the book through our site, even if their selection is $0. So don't worry about directing them there.

comment by Squark · 2015-03-13T08:32:43.019Z · LW(p) · GW(p)

Congratulations, well done!

Side note: the "Glossary" link seems to be broken.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2015-03-13T12:18:47.350Z · LW(p) · GW(p)

Should be working now. I accidentally made it an internal link.

comment by Rangi · 2015-03-13T16:33:00.386Z · LW(p) · GW(p)

With SumatraPDF 3.0 on Windows 8.1 x64, the links in the PDF version do not show up. With Adobe Reader 11 on Windows 7 x86, they look fine. On the other hand, SumatraPDF can also handle the MOBI and EPUB versions.

Replies from: bramflakes
comment by bramflakes · 2015-03-14T00:08:28.371Z · LW(p) · GW(p)

I'm getting problems too. The contents pages look like this, for example.

comment by jrincayc · 2016-05-02T19:03:07.969Z · LW(p) · GW(p)

I have been creating a tex version at: https://github.com/jrincayc/rationality-ai-zombies

Replies from: jrincayc, jrincayc, jrincayc
comment by jrincayc · 2018-02-12T03:31:33.983Z · LW(p) · GW(p)

I have used Lulu to print the book, instructions are at: https://github.com/jrincayc/rationality-ai-zombies Or you could print it somewhere else that allows you to print a 650 page 8.5 by 11 inch book. (If you try it with a different place, let me know) I have read through the entire printed version and fixed all the formatting issues that I found in the beta7 release in the new beta8 release.

comment by jrincayc · 2017-08-27T12:53:10.434Z · LW(p) · GW(p)

I have relinked the footnotes. It is now reasonably editable. I've put up pdfs at https://github.com/jrincayc/rationality-ai-zombies/releases

comment by jrincayc · 2016-05-20T02:55:27.365Z · LW(p) · GW(p)

There is still a lot of work to do before I consider it done, but it is more or less useable for some purposes. I printed off a copy for myself from Lulu for about $12. Here is the two column version that can be printed out as a single volume: http://jjc.freeshell.org/rationality-ai-zombies/rationality_from_ai_to_zombies_two_column_beta2.pdf

comment by Gust · 2015-03-19T06:03:39.164Z · LW(p) · GW(p)

Hi, and thanks for the awesome job! Will you keep a public record of changes you make to the book? I'm coordinating a translation effort, and a record would be important for keeping it in sync if you change the actual text, not just fix spelling and hyperlinking errors.

Edit: Our translation effort is for Portuguese only, and can be found at http://racionalidade.com.br/wiki .

Replies from: RobbBB, hydkyll
comment by Rob Bensinger (RobbBB) · 2015-03-19T06:15:53.010Z · LW(p) · GW(p)

Yes, we'll keep a public record of content changes, or at least a private record that we'd be happy to share with people doing things like translation projects.

comment by hydkyll · 2015-04-12T15:09:54.709Z · LW(p) · GW(p)

How is that translation coming along? I could help with German.

Replies from: Gust
comment by Gust · 2015-04-13T14:44:16.678Z · LW(p) · GW(p)

We're translating to Brazilian Portuguese only, since that's our native language.

comment by Kaj_Sotala · 2015-03-18T09:15:26.488Z · LW(p) · GW(p)

I liked Robby's introduction to the book overall, but I find it somewhat ironic that right after the prologue where Eliezer mentions that one of his biggest mistakes in writing the Sequences was focusing on abstract philosophical problems that are removed from people's daily problems, the introduction begins with

Imagine reaching into an urn that contains seventy white balls and thirty red ones, and plucking out ten mystery balls.

The first (though not necessarily best) example of how to rewrite this in less abstract form that comes to mind would be something like "Imagine that you're standing by the entrance of a university whose students are seven tenths female and three tenths male, and observing ten students go in..."; with the biased example being "On the other hand, suppose that you happen to be standing by the entrance of the physics department, which is mostly male even though the university in general is mostly female."
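
(The underlying math survives any such reskinning, of course. For anyone who wants to poke at the numbers in the original setup, a throwaway simulation, nothing the book itself would need:)

    import random

    # The introduction's urn: seventy white balls, thirty red
    urn = ["white"] * 70 + ["red"] * 30

    def draw(k=10):
        # Pluck out k mystery balls, without replacement
        return random.sample(urn, k)

    # Across many unbiased samples, the red fraction tracks the urn's true
    # fraction (~0.3); a biased sampling process (standing outside the physics
    # department) would not.
    samples = [draw() for _ in range(10_000)]
    red_fraction = sum(s.count("red") for s in samples) / (10 * 10_000)
    print(f"average red fraction: {red_fraction:.3f}")  # ~0.300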

Some unnecessary technical jargon that could have been gotten rid of also caught my eye in the first actual post: e.g. "Rational agents make decisions that maximize the probabilistic expectation of a coherent utility function" could have been rewritten to be more broadly understandable, e.g. "rational agents make decisions that are the most likely to produce the kinds of outcomes they'd like to see".
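
(And to unpack what that jargon cashes out to, with entirely made-up numbers: the agent scores each option by probability-weighted utility and picks the best one.)

    # Toy decision: each option has (probability, utility) outcome pairs
    options = {
        "take umbrella":  [(0.3, 5), (0.7, 4)],    # rain, no rain
        "leave umbrella": [(0.3, -10), (0.7, 6)],  # rain ruins your day
    }

    def expected_utility(outcomes):
        return sum(p * u for p, u in outcomes)

    # "Maximize the probabilistic expectation of a utility function":
    best = max(options, key=lambda o: expected_utility(options[o]))
    print(best)  # take umbrella (4.3 vs. 1.2)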

I could spend some time making notes of these kinds of things and offering suggested rewrites for making the printed book more broadly accessible - would MIRI be interested in that, or would they prefer to keep the content as is?

Replies from: RobbBB, Gram_Stone
comment by Rob Bensinger (RobbBB) · 2015-03-19T00:33:35.509Z · LW(p) · GW(p)

Part of the idea behind the introduction is to replace an early series of posts: "Statistical Bias", "Inductive Bias", and Priors as Mathematical Objects. These get alluded to various times later in the sequences, and the posts 'An Especially Elegant Evolutionary Psychology Project', 'Where Recursive Justification Hits Bottom', and 'No Universally Compelling Arguments' all call back to the urn example. That said, I do think a more interesting example (whether or not it's more 'ordinary' and everyday) would be a better note to start the book on.

Do feel free to send stylistic or substantive change ideas to errata@intelligence.org, not just spelling errors.

comment by Gram_Stone · 2015-03-18T09:30:37.768Z · LW(p) · GW(p)

This came to mind for me as well. This, from Burdensome Details, popped out at me: "Moreover, they would need to add absurdities—where the absurdity is the log probability, so you can add it—rather than averaging them." All this does for me is pattern-match to a Wikipedia article I once read about the concept of entropy in information theory; I don't really know what it means in any precise sense or why it might be true. And the essay even seems to stand on its own without that part. I've come to ignore my fear of not understanding things unless I don't understand pretty much everything I'm reading, but I think a lot of people would get scared that they didn't know enough to read the book and just stop reading.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2015-03-18T22:16:57.882Z · LW(p) · GW(p)

Come to think of it, we could collect proposed rewrites / deletions to some wiki page: this seems suitable for a communal effort. The "deletions" wouldn't actually need to be literal deletions, they could just be moved into a footnote. E.g. in the Burdensome Details article, a footnote saying something like "technically, you can measure probabilities by logarithms and..."
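
Concretely, the footnote's content might boil down to something like this (assuming independent claims for simplicity, which a real footnote would have to flag):

    import math

    p_a = 0.1  # probability of claim A
    p_b = 0.2  # probability of claim B, independent of A

    def absurdity(p):
        # "Absurdity" as negative log probability (surprisal), in bits
        return -math.log2(p)

    # Independent probabilities multiply, so their negative logs (absurdities)
    # add, and every extra detail makes the conjunction strictly less probable.
    p_both = p_a * p_b
    assert math.isclose(absurdity(p_both), absurdity(p_a) + absurdity(p_b))
    print(absurdity(p_a), absurdity(p_b), absurdity(p_both))  # ~3.32, ~2.32, ~5.64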

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2015-03-19T00:37:21.962Z · LW(p) · GW(p)

I like the idea of turning a lot of these jargony asides, especially early in the book, into footnotes. We'll be needing to make heavier use of footnotes anyway in order to explicitly direct people to other parts of the series in places where there will no longer be a clickable link. (Though we won't do this for most clickable links, just for the especially interesting / important ones.)

You're welcome to use a wiki page to list suggested changes, or a Google Doc; or just send a bunch of e-mails to errata@intelligence.org with ideas.

comment by Tenoke · 2015-03-13T11:25:23.595Z · LW(p) · GW(p)

Awesome! How large is it altogether (in words)?

Replies from: alexvermeer
comment by alexvermeer · 2015-03-13T16:40:42.640Z · LW(p) · GW(p)

Approximately 600,000 words!

Replies from: lukeprog, Transfuturist
comment by lukeprog · 2015-03-13T21:48:28.029Z · LW(p) · GW(p)

Which is roughly the length of War and Peace or Atlas Shrugged.

comment by Transfuturist · 2015-03-14T05:53:15.531Z · LW(p) · GW(p)

Ah, so about as large as it takes for a fanfic to be good. :P

comment by [deleted] · 2015-03-13T09:43:35.718Z · LW(p) · GW(p)

I don't have PayPal or a credit card or bitcoins or similar stuff, so $0 price for now; I will look into donating from my Maestro debit card, or maybe a direct transfer, although international transfer fees may make that not worth the while. Those and cash are the only methods I use - I rarely need anything I cannot buy with them. (I use gift cards purchased in shops for Steam and Google Play.) I am thinking about purchasing some bitcoins with € for such donation purposes - can anyone recommend a safe service that is compatible with debit cards (or sofort.com)?

Replies from: Gram_Stone
comment by Gram_Stone · 2015-03-13T17:13:34.723Z · LW(p) · GW(p)

If you set the price to $0.00 then you don't need to give any payment information.

comment by MrMind · 2015-03-13T08:47:01.961Z · LW(p) · GW(p)

That's awesome!

Replies from: MrMind
comment by MrMind · 2015-03-13T08:49:10.860Z · LW(p) · GW(p)

Neither link works, though: you have lesswrong.com/ prefixing every correct address.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2015-03-13T12:11:25.005Z · LW(p) · GW(p)

Fixed! Thanks.

comment by John_Mitchell · 2015-06-12T12:03:14.432Z · LW(p) · GW(p)

Do people think there is value in making an audio book from this?

I was thinking it would be possible to do in a similar process to the HPMOR audiobook with people contributing different chapters. If there is interest in doing this and if it is permitted to be done then I will happily volunteer to coordinate the effort. If this idea does have support then given the discussion below about how the book could be improved, would it make more sense to postpone an audiobook to allow for sensible changes, or is that an unnecessary delay in search of unreachable perfection?

Replies from: Vaniver
comment by Vaniver · 2015-06-12T15:28:24.079Z · LW(p) · GW(p)

Do people think there is value in making an audio book from this?

Yes; one is being made by Castify.

comment by [deleted] · 2016-08-13T19:09:34.390Z · LW(p) · GW(p)

I just finished listening to the audiobook version of Rationality: From AI to Zombies. Many thanks to Yudkowsky and everyone else who was involved in making this book and the audiobook. I do not know who the reader of the audiobook is, but thanks all the same.

I am writing this comment as my way of praising this book. I will try to summarize what I have personally learned from it, in the hope that someone who was involved will read this post and feel some pride in having helped me in my self-improvement. But I am also writing this comment because I just want to express my thoughts after finishing the book.

I have not had any major change of mind, but I have had several minor ones, which might very well continue to grow.

Listening to Yudkowsky's words has made me more confident, because he says many things that I already intuitively knew but could not properly explain myself, and therefore could not be sure I was right about. I am still not 100% certain I am right, but I am more confident, and I believe that this is a good thing. Smart people should be confident. No, this is not hindsight bias, because:

  • I did not always instantly agree, so I do know the difference.
  • I have been actively introspecting since I was 12, so I know most of my brain's tricks.

I never set out to be a rationalist. I don't even remember having a pre-LessWrong concept for the word "rationalist". There was just correct thinking and incorrect thinking, and obviously correct thinking is the way that systematically leads you to the truth, because how else would you measure correctness. Maybe this saved me from falling into some of the rationalist tropes that Yudkowsky warns about. Or maybe I avoided them because I have read too little science fiction. Or maybe it was because I looked at these types of tropes and saw an author who clung to the obviously wrong, but warm and fuzzy, idea that every human has the same number of skill points.

I wonder who sets out to be rational without having something specific they need rationality for. Maybe the same kind of people who identify as atheists? I am an atheist, but I don't identify as such, because in my country this is mostly a non-issue.

I found LessWrong because my new boyfriend encouraged me to read here, and I actually got through the book, because I like audiobooks.

The pre-LessWrong me was a truth-seeker, and as such, I thought a lot about the Way as applied to truth-seeking. I had a crisis of faith several years ago, questioning the validity of science. But I never really thought about applying systematic reasoning to decisions under uncertainty. In the past, when I was confronted with a decision I did not know how to reason out, I used to deliberately hand the decision over to my feelings. Because, I reasoned, if I don't know what is right anyway, I might as well save myself the fight of going against my impulses. I hope that I can use what I have learned here to do better.

Another thing I have realized is that I am such a pushover for perceived social norms. I have noticed a significant mental shift in my brain, just from having someone in my ear who casually mentions many-worlds and cryonics as if these were the most normal things in the world. Intellectually I was already convinced, I already knew the right answer before listening to the book, but I still needed the extra nagging to get all of my brain on board with it. I think that this has been the single most important insight I got from the book.

One reason I have not tried to develop the art of rational decision-making before is that I knew I was not strong enough to counter my emotional preferences. But I was wrong. I now have one systematically applicable self-hack, and probably there are more out there to find. I have hope of being able to take charge of my motivation, and I have reasons to fight for control.

Current me is an aspiring effective altruist. I do not strive to be a perfect altruist, because I do have some selfish preferences that I do not expect to go away. But I am going to get my ass out of the comfortable bubble of "I can't do anything anyway" and do something. Though I have not decided yet whether to take the path of earning to give, or to get directly involved in some project myself. I am looking into both options.

Finally, here is one of my favorite quotes from the book:

I pause. “Well…” I say slowly. “Frankly, I’m not entirely sure myself where this ‘reality’ business comes from. I can’t create my own reality in the lab, so I must not understand it yet. But occasionally I believe strongly that something is going to happen, and then something else happens instead. I need a name for whatever-it-is that determines my experimental results, so I call it ‘reality’. This ‘reality’ is somehow separate from even my very best hypotheses. Even when I have a simple hypothesis, strongly supported by all the evidence I know, sometimes I’m still surprised. So I need different names for the thingies that determine my predictions and the thingy that determines my experimental results. I call the former thingies ‘belief’, and the latter thingy ‘reality’.”

Replies from: Linda Linsefors
comment by Linda Linsefors · 2020-07-05T06:14:39.219Z · LW(p) · GW(p)

I'm leaving this comment so that I can find my way back here in the future.

Replies from: Bohaska
comment by Bohaska · 2023-09-27T02:28:41.311Z · LW(p) · GW(p)

Would you mind writing a follow-up review about how you joined the rationalist/EA community? Interested to see how your journey progressed 🙂

Replies from: Linda Linsefors
comment by Linda Linsefors · 2023-09-27T21:18:03.206Z · LW(p) · GW(p)

I got into AI Safety. My interest in AI Safety lured me to a CFAR workshop, since it was a joint event with MIRI. I came for the Agent Foundations research, but the CFAR content turned out just as valuable. It helped me start to integrate my intuitions with my reasoning, through IDC and other methods. I'm still in AI Safety, mostly organising, but also doing some thinking, and still learning.

My resume lists all the major things I've been doing. Not the most interesting format, but I'm probably not going to write anything better anytime soon.
Resume - Linda Linsefors - Google Docs

comment by [deleted] · 2015-04-02T06:35:18.090Z · LW(p) · GW(p)

Does the book (especially the printed version) have practice problems after sections? (I don't have it, sorry if the question is redundant).

Replies from: None
comment by [deleted] · 2015-04-02T07:08:20.538Z · LW(p) · GW(p)

It does not.

Replies from: None
comment by [deleted] · 2015-04-02T07:19:27.005Z · LW(p) · GW(p)

Maybe it should, for people who won't discuss things online for some reason.

comment by jrincayc · 2018-05-12T04:11:44.437Z · LW(p) · GW(p)

Is a printed six-volume set still being worked on?

Replies from: Raemon, Elo
comment by Raemon · 2018-05-12T09:22:05.491Z · LW(p) · GW(p)

There are printed versions of book 2, that are given out sometimes at CFAR.

Replies from: jrincayc
comment by jrincayc · 2018-05-14T01:19:37.798Z · LW(p) · GW(p)

How to actually change your mind (book 2) is definitely a great section of Rationality: From AI to Zombies.

comment by Elo · 2018-05-12T04:50:41.976Z · LW(p) · GW(p)

Not that I know of.

comment by FiftyTwo · 2015-03-18T09:06:49.102Z · LW(p) · GW(p)

Might be worth including the Amazon.co.uk and other store links.

comment by [deleted] · 2016-08-15T17:24:40.000Z · LW(p) · GW(p)

A friend of mine is interested in reading this book, but would prefer a printed copy. Is there any chance that this book will be published any time soon?

Replies from: jrincayc
comment by jrincayc · 2017-08-27T12:58:09.904Z · LW(p) · GW(p)

I have used the two column version: https://github.com/jrincayc/rationality-ai-zombies/releases/download/beta3/rationality_from_ai_to_zombies_2c.pdf with https://www.lulu.com/ to make a printed version for myself. (Update: beta3 has quite a few problems that have been fixed in newer versions, so grab a new release if you are printing it: https://github.com/jrincayc/rationality-ai-zombies/releases )

Note that there are problems with that pdf, so it isn't perfect, but it might work. The regular PDF is too long to print as a single book.

comment by [deleted] · 2015-06-11T15:05:16.467Z · LW(p) · GW(p)

Is there anything on procrastination? I'm tempted to buy this book instead cause the dude has an alright podcast too. I don't listen to it anymore cause it's boring and not consistently novel information, but yeah.

When I feel like this I don't want to read chapters with complex-sounding titles like Rationality and Politics and Death Spirals that, without having read the sequences, don't mean shit to me and could equally appear in some random Trotskyist propaganda from the weird organisation down the road.

When are these pop-rationality books gonna be replaced by a new generation of books on, say, Bonferroni corrections for everyday life or a conceptual introduction to regression?

You're neither reaching the uninitiated nor furthering the knowledge of the adepts. You're just preaching to the choir and making some coin from it! Defend your honour!

edit 1: fixed links

comment by xoda · 2015-04-14T01:28:57.136Z · LW(p) · GW(p)

Sorry for my problem. I tried downloading 15 times; it only started once, and stopped at 1.5M/30.6M. The other attempts didn't even get going. I wish to use another source, or could some kind friend send the pdf to 513493106@qq.com? I DEEPLY BOW for your help!