Rationality Reading Group: Part M: Fragile Purposes

post by Gram_Stone · 2015-11-05T02:08:05.693Z


This is part of a semi-monthly reading group on Eliezer Yudkowsky's ebook, Rationality: From AI to Zombies. For more information about the group, see the announcement post.


Welcome to the Rationality reading group. This fortnight we discuss Part M: Fragile Purposes (pp. 617-674). This post summarizes each article of the sequence, linking to the original LessWrong post where available.

M. Fragile Purposes

143. Belief in Intelligence - What does a belief that an agent is intelligent look like? What predictions does it make?

144. Humans in Funny Suits - It's really hard to imagine aliens that are fundamentally different from human beings.

145. Optimization and the Intelligence Explosion - An introduction to optimization processes and why Yudkowsky thinks that an intelligence explosion would be far more powerful than calculations based on human progress would suggest.

146. Ghosts in the Machine - There is a way of thinking about programming a computer that conforms well to human intuitions: telling the computer what to do. The problem is that the computer isn't going to understand you, unless you program the computer to understand. If you are programming an AI, you are not giving instructions to a ghost in the machine; you are creating the ghost.

147. Artificial Addition - If you imagine a world where people are stuck on the "artificial addition" (i.e. machine calculator) problem, the way people currently are stuck on artificial intelligence, and you saw them trying the same popular approaches taken today toward AI, it would become clear how silly those approaches are. Contrary to popular wisdom (in that world or ours), the solution is not to "evolve" an artificial adder, or invoke the need for special physics, or build a huge database of solutions, etc. -- because all of these methods dodge the crucial task of understanding what addition involves, and instead try to dance around it. Moreover, the history of AI research shows the problems of believing assertions one cannot re-generate from one's own knowledge.

148. Terminal Values and Instrumental Values - Proposes a formalism for a discussion of the relationship between terminal and instrumental values. Terminal values are world states that we assign some sort of positive or negative worth to. Instrumental values are links in a chain of events that lead to desired world states.

149. Leaky Generalizations - The words and statements that we use are inherently "leaky": they do not precisely convey absolute and perfect information. Most humans have ten fingers, but if you know that someone is a human, you cannot confirm (with probability 1) that they have ten fingers. The same holds for planning and ethical advice.

150. The Hidden Complexity of Wishes - There are a lot of things that humans care about. Therefore, the wishes that we make (as if to a genie) are enormously more complicated than we would intuitively suspect. In order to safely ask a powerful, intelligent being to do something for you, that being must share your entire decision criterion, or else the outcome will likely be horrible.

151. Anthropomorphic Optimism - Don't bother coming up with clever, persuasive arguments for why evolution will do things the way you prefer. It really isn't listening.

152. Lost Purposes - On noticing when you're still doing something that has become disconnected from its original purpose.

 


This has been a collection of notes on the assigned sequence for this fortnight. The most important part of the reading group, though, is the discussion, which takes place in the comments section. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

The next reading will cover Part N: A Human's Guide to Words (pp. 677-801) and Interlude: An Intuitive Explanation of Bayes's Theorem (pp. 803-826). The discussion will go live on Wednesday, 18 November 2015, right here on the discussion forum of LessWrong.

1 comment


comment by Gram_Stone · 2015-11-05T02:19:10.186Z

The tag 'rationalityreadinggroup' has been added to all of my existing Rationality Reading Group posts and will be added to all future ones, per Gunnar_Zarncke et al.'s request. Once more, I cannot edit the announcement post or Parts A or B, or fix their tags.