Rationality Reading Group: Part U: Fake Preferences

post by Gram_Stone · 2016-02-24T23:29:06.972Z · LW · GW · Legacy · 5 comments


This is part of a semi-monthly reading group on Eliezer Yudkowsky's ebook, Rationality: From AI to Zombies. For more information about the group, see the announcement post.


Welcome to the Rationality reading group. This fortnight we discuss Ends: An Introduction (pp. 1321-1325) and Part U: Fake Preferences (pp. 1329-1356). This post summarizes each article of the sequence, linking to the original LessWrong post where available.

Ends: An Introduction

U. Fake Preferences

257. Not for the Sake of Happiness (Alone) - Tackles the Hollywood Rationality trope that "rational" preferences must reduce to selfish hedonism - caring strictly about personally experienced pleasure. An ideal Bayesian agent - implementing strict Bayesian decision theory - can have a utility function that ranges over anything, not just internal subjective experiences.

258. Fake Selfishness - Many people who espouse a philosophy of selfishness aren't really selfish. If they were, there would be far more productive uses of their time than espousing selfishness. Instead, individuals who proclaim themselves selfish do whatever it is they actually want, including altruism, and can always find some self-interested rationalization for their behavior.

259. Fake Morality - Many people provide fake reasons for their own moral behavior. Religious people claim that the only reason people don't murder each other is God. Proponents of selfishness provide altruistic justifications for selfishness. Altruists provide selfish justifications for altruism. If you want to know how moral someone is, don't look at their reasons. Look at what they actually do.

260. Fake Utility Functions - Describes the seeming fascination that many have with trying to compress morality down to a single principle. The sequence leading up to this post explains the cognitive twists by which people smuggle all of their other, complicated preferences into their choice of exactly which acts they justify using their single principle; if they were really following only that single principle, they would choose different acts to justify.

261. Detached Lever Fallacy - A lever only does anything because of the machinery it is attachedted to; copying the lever without the machinery accomplishes nothing. Likewise, stimuli produce human-like responses only because of the hidden machinery of the human mind, so exposing an AI to human-like inputs won't yield human-like behavior unless the underlying machinery is there too.

262. Dreams of AI Design - It can feel as though you understand how to build an AI, when really, you're still making all your predictions based on empathy. Your AI design will not work until you figure out a way to reduce the mental to the non-mental.

263. The Design Space of Minds-in-General - When people talk about "AI", they're talking about an incredibly wide range of possibilities. Having a word like "AI" is like having a word for everything which isn't a duck.

 


This has been a collection of notes on the assigned sequence for this fortnight. The most important part of the reading group, though, is discussion, which happens in the comments section. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

The next reading will cover Part V: Value Theory (pp. 1359-1450). The discussion will go live on Wednesday, 9 March 2016, right here on the discussion forum of LessWrong.

5 comments

Comments sorted by top scores.

comment by Gunnar_Zarncke · 2016-02-26T22:18:33.657Z · LW(p) · GW(p)

I like your summaries very much. I haven't read all of the sequences and your summaries allow me to find many topics I overlooked before (because I didn't want to invest the time to even skim them all).

Replies from: Gram_Stone
comment by Gram_Stone · 2016-02-26T22:37:07.207Z · LW(p) · GW(p)

Thanks. I wrote some of them, but I copied most of them from here.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2016-02-26T23:23:25.649Z · LW(p) · GW(p)

This. Is. Crazy. Unbelievable. I didn't even know that these summaries existed. Whoever wrote these: Thanks to all of you diligent LW gardeners! Can these summaries please please be made more available. Like linked on the entry page or something. Just my suggestion for LW 2.0.

Replies from: Gram_Stone, ScottL
comment by Gram_Stone · 2016-02-26T23:51:34.656Z · LW(p) · GW(p)

This is totally just an impression from memory, so may any scorned gardeners forgive me, but I believe that user:ciphergoth (Paul Crowley) is almost solely responsible for those summaries. Also, some are just excerpts from the essays themselves.

comment by ScottL · 2016-02-28T02:15:44.577Z · LW(p) · GW(p)

They are mentioned on the wiki FAQ page which has some other useful links as well. If you want to go over all the LW concepts and topics, then you might find this page that I wrote a while ago to be useful. It provides a list of concise definitions for most of the LW concepts.