Rationality Reading Group: Part X: Yudkowsky's Coming of Age

post by Gram_Stone · 2016-04-06T23:05:28.197Z · LW · GW · Legacy · 1 comment


This is part of a semi-monthly reading group on Eliezer Yudkowsky's ebook, Rationality: From AI to Zombies. For more information about the group, see the announcement post.


Welcome to the Rationality reading group. This fortnight we discuss Beginnings: An Introduction (pp. 1527-1530) and Part X: Yudkowsky's Coming of Age (pp. 1535-1601). This post summarizes each article of the sequence, linking to the original LessWrong post where available.

Beginnings: An Introduction

X. Yudkowsky's Coming of Age

292. My Childhood Death Spiral - Wherein Eliezer describes how a history of being rewarded for believing that 'intelligence is more important than experience or wisdom' initially led him to dismiss the possibility that most possible smarter-than-human artificial intelligences, if constructed, would produce futures without value.

293. My Best and Worst Mistake - When Eliezer went into his death spiral around intelligence, he wound up making a lot of mistakes that later became very useful.

294. Raised in Technophilia - When Eliezer was quite young, it took him a very long time to get to the point where he was capable of considering that the dangers of technology might outweigh the benefits.

295. A Prodigy of Refutation - Eliezer's skills at defeating other people's ideas led him to believe that his own (mistaken) ideas must have been correct.

296. The Sheer Folly of Callow Youth - Eliezer's big mistake was when he took a mysterious view of morality.

297. That Tiny Note of Discord - Eliezer started to dig himself out of his philosophical hole when he noticed a tiny inconsistency.

298. Fighting a Rearguard Action Against the Truth - When Eliezer started to consider the possibility of Friendly AI as a contingency plan, he permitted himself a line of retreat. He was now able to slowly start to reconsider positions in his metaethics, and move gradually towards better ideas.

299. My Naturalistic Awakening - Eliezer finally looked back and recognized his mistakes when he began thinking of AI in terms of optimization processes.

300. The Level Above Mine - There are people who have acquired more mastery over various fields than Eliezer has over his.

301. The Magnitude of His Own Folly - Eliezer considers his training as a rationalist to have started the day he realized just how awfully he had screwed up.

302. Beyond the Reach of God - Compare the world in which there is a God, who will intervene at some threshold, against a world in which everything happens as a result of physical laws. Which universe looks more like our own?

303. My Bayesian Enlightenment - The story of how Eliezer Yudkowsky became a Bayesian.



This has been a collection of notes on the assigned sequence for this fortnight. The most important part of the reading group, though, is the discussion, which takes place in the comments section. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

The next reading will cover Part Y: Challenging the Difficult (pp. 1605-1647). The discussion will go live on Wednesday, 20 April 2016, right here on the discussion forum of LessWrong.

1 comment


comment by buybuydandavis · 2016-04-12T02:56:14.667Z · LW(p) · GW(p)

That Tiny Note of Discord

Maybe some people would prefer an AI do particular things, such as not kill them, even if life is meaningless.

I see two transitions here.

First, instead of talking about what "is" right, it's now about what some people prefer. We're not talking about a disembodied property of rightness, nor are we talking about a rightness that people as a type prefer, but about what some people actually prefer. We're thinking about a subset of an actual population of beings, and what they do, and we're not assuming that they're all identical in what they do.

The move to trace concepts back to actual concretes is a winner. Values disconnected from Valuers is a loser.

Second, even if life is meaningless by some conception of meaning, life still goes on, and people will still have preferences. The problem of fulfilling their preferences remains, even if we decide that there "is no meaning" to life. In fact, the problem is there even if we decide that there is meaning to life, because then the question of the relation of me satisfying my preferences vs. this ethereal "meaning" naturally arises.

On the question of meaning, EY was doing the classic "we can't have X without Y, therefore we assume Y", where Y is a meaning to life. But notice that this does not establish that we have Y, or that we even particularly need Y for anything we want, since it doesn't establish that we actually need X either. You can find that neither X nor Y amounts to a coherent concept, and that pesky problem of satisfying preferences remains.

Between Stirner and Korzybski, I think you have a cure for most of the conceptual confusion around morality, and you won't find yourself making early EY's mistakes.