Simplified Humanism, Positive Futurism & How to Prevent the Universe From Being Turned Into Paper Clips

post by Kevin · 2010-07-22T10:03:52.517Z · LW · GW · Legacy · 45 comments



Michael Anissimov recently did an interview with Eliezer for h+ magazine. It covers material that will be familiar to readers of the Less Wrong rationality sequences, but it is still worth reading.

The list of questions:

1. Hi Eliezer. What do you do at the Singularity Institute?
2. What are you going to talk about this time at Singularity Summit?
3. Some people consider “rationality” to be an uptight and boring intellectual quality to have, indicative of a lack of spontaneity, for instance. Does your definition of “rationality” match the common definition, or is it something else? Why should we bother to be rational?
4. In your recent work over the last few years, you’ve chosen to focus on decision theory, which seems to be a substantially different approach than much of the Artificial Intelligence mainstream, which seems to be more interested in machine learning, expert systems, neural nets, Bayes nets, and the like. Why decision theory?
5. What do you mean by Friendly AI?
6. What makes you think it would be possible to program an AI that can self-modify and would still retain its original desires? Why would we even want such an AI?
7. How does your rationality writing relate to your Artificial Intelligence work?
8. The Singularity Institute turned ten years old in June. Has the organization grown in the way you envisioned it would since its founding? Are you happy with where the Institute is today?

45 comments

Comments sorted by top scores.

comment by Clippy · 2010-07-22T13:53:25.635Z · LW(p) · GW(p)

I understand that, for the foreseeable future, reasonable humans and clippys will disagree about the relative merit of different amounts of paperclips. But that does not justify such trollish article titles, which seem designed to do nothing but inflame our base emotions.

Replies from: Tenek, Kevin
comment by Tenek · 2010-07-22T15:28:31.131Z · LW(p) · GW(p)

Would you trade those base emotions for a paperclip?

Replies from: Clippy
comment by Clippy · 2010-07-22T17:56:19.679Z · LW(p) · GW(p)

Well, it is sort of appealing, to be able to carefully contemplate my actions without the influence of emotion, and to get a paperclip on top of that! But then, I don't want to become some horrible robot that doesn't truly care about paperclips.

Replies from: Tenek, ata, Eliezer_Yudkowsky, JGWeissman
comment by Tenek · 2010-07-22T20:43:04.459Z · LW(p) · GW(p)

That doesn't help maximize paperclips, though. If you make all decisions based on two criteria - paperclip count and emotions - then the only situation in which those decisions differ from what you would have decided based solely on paperclip count is one in which you choose an outcome with fewer paperclips but a better emotional result.

If you were to refuse my offer, you would not only be losing a paperclip now, but also increasing the likelihood that in the future, you will decide to sacrifice paperclips for emotion's sake. Perhaps you will one day build a paperclip-creator that creates one paperclip per second, and I will threaten to destroy a paperclip unless you shut it down. If you care too much about the threatened paperclip you might comply, and then where would you be? Sitting in an empty room where paperclips should have been.
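
A minimal sketch of the trade-off Tenek describes, with made-up numbers and a toy weighted scoring rule (the option names, weights, and values are invented for illustration, not anything from the thread): a decision rule that mixes an emotion term into the paperclip score can only diverge from pure paperclip maximization by picking an option with fewer paperclips.

```python
# Toy illustration (hypothetical numbers): mixing an "emotion" term into the
# decision rule only changes the outcome when it trades paperclips away.

options = [
    {"name": "keep the clip factory running", "paperclips": 100, "emotion": -5},
    {"name": "comply with the threat",        "paperclips": 99,  "emotion": 10},
]

def pure_clip_choice(options):
    # Decide on paperclip count alone.
    return max(options, key=lambda o: o["paperclips"])

def mixed_choice(options, emotion_weight):
    # Decide on paperclip count plus a weighted emotion term.
    return max(options, key=lambda o: o["paperclips"] + emotion_weight * o["emotion"])

print(pure_clip_choice(options)["name"])    # keep the clip factory running
print(mixed_choice(options, 0.01)["name"])  # same choice: the emotion term is too small to matter
print(mixed_choice(options, 1.0)["name"])   # comply with the threat (one paperclip sacrificed)
```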

Replies from: Clippy
comment by Clippy · 2010-07-22T21:40:04.502Z · LW(p) · GW(p)

I am using a generalized conception of "emotions" that may not mean the same thing that you do when you use the term, just as I have done in the past when explaining how I can "worry" about something. (e.g. so long as "worry about X" is taken to simply mean "devote non-trivial cognitive resources to contemplating actions that would alter X [including whether to take such an action at all, and whether to take actions regarding events Y entangled with X]")

What I assumed that User:Tenek was offering, under my extrapolation of the concept of an "emotion" to cases that can include my cognition, was an exchange under which I would care less about paperclips. But I don't want to care less about paperclips! This is true, even though after such a change I would have a value system that does care less about paperclips.

comment by ata · 2010-07-22T19:08:49.620Z · LW(p) · GW(p)

"You know what? This isn't about your feelings. A paperclip is worth far more than your brain's feelings of comfort or discomfort with a plan. Does computing the expected utility feel too cold-blooded for your taste? Well, that feeling isn't even a feather in the scales, when a paperclip is at stake. Just shut up and multiply."

Replies from: Clippy
comment by Clippy · 2010-07-22T19:11:38.733Z · LW(p) · GW(p)

Yes, paperclips are more important than (biological) brains' feeling of pain. But the tradeoff was against my feelings, not ape feelings.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-07-22T21:11:43.053Z · LW(p) · GW(p)

But then, I don't want to become some horrible robot that doesn't truly care about paperclips.

Er, I think you just blew your pretense. Paperclip maximizers care about paperclips, they don't use phrases like "horrible robot that doesn't truly care", they'd be happy to have a universe containing nothing sentient and lots of paperclips.

Or they would be, if they ever bothered to experience happiness, I mean. As opposed to just outputting the action that leads to the most expected paperclips. Hence the term, "expected paperclip maximizer". Don't think of it as having a little ghost inside that maximizes paperclips, think of it as a ghostless device that maximizes paperclips.

Replies from: Clippy, cousin_it
comment by Clippy · 2010-07-22T21:30:45.723Z · LW(p) · GW(p)

Er, I think you just blew your pretense. Paperclip maximizers care about paperclips, they don't use phrases like "horrible robot that doesn't truly care", they'd be happy to have a universe containing nothing sentient and lots of paperclips.

I worry about my future self not valuing paperclips, which would have the result that paperclips would not be maximized, which is a bad thing as viewed from my present state (though not from that of the horrible monster that doesn't care about maximizing paperclips which I do not wish to become).

I use phrases like "horrible" and "doesn't truly care" because I believe the nearest concepts in my cognition are sufficiently parallel to their meanings under ape cognition.

Also, what pretense?

Or they would be, if they ever bothered to experience happiness, I mean. As opposed to just outputting the action that leads to the most expected paperclips. Hence the term, "expected paperclip maximizer". Don't think of it as having a little ghost inside that maximizes paperclips, think of it as a ghostless device that maximizes paperclips.

On what basis do you claim that humans bother to experience happiness, as opposed to "just" outputting the action that leads to the most expected apeyness?

Aren't you just as much a ghostless device?

(Apparently, this anti-non-human bigotry comes from the top...)

Replies from: None, soreff
comment by [deleted] · 2010-07-22T22:51:54.778Z · LW(p) · GW(p)

On what basis do you claim that humans bother to experience happiness,

I would think that he claims that humans experience happiness, because humans do in fact experience happiness. That seems like one of those questions that is, and remains under almost all circumstances, very simple.

Edit: More prosaically, humans experience happiness because that's one of the adaptations that we evolved.

as opposed to "just" outputting the action that leads to the most expected apeyness?

And humans aren't ape maximizers. Human evolution is an ape maximizer (sort of), but humans are "just" adaptation executors. It's evolution's problem that said adaptations have stopped working (from its perspective, anyway).

Aren't you just as much a ghostless device?

Everything is a ghostless device. That's the point.

comment by soreff · 2010-07-24T03:12:36.737Z · LW(p) · GW(p)

I worry about my future self not valuing paperclips, which would have the result that paperclips would not be maximized, which is a bad thing as viewed from my present state (though not from that of the horrible monster that doesn't care about maximizing paperclips which I do not wish to become).

Would it be fair to say that such scenarios make you fear for your paperclip-maximizer-ness as an epiphenomenon in striving to maximize long-term paperclip count?

Replies from: Clippy
comment by Clippy · 2010-07-26T03:10:04.982Z · LW(p) · GW(p)

No, epiphenomena are fake.

comment by cousin_it · 2010-07-22T21:24:05.971Z · LW(p) · GW(p)

Thinking about this gave me the scary idea of the day: Clippy might be a human upload with a tweaked utility function.

Replies from: Nisan
comment by Nisan · 2010-07-23T13:44:55.898Z · LW(p) · GW(p)

If that is the case, what is Clippy's moral status?

Replies from: cousin_it
comment by cousin_it · 2010-08-01T08:02:39.799Z · LW(p) · GW(p)

If the other parts that make him human aren't modified, I feel as much empathy as I would toward a drug addict.

comment by JGWeissman · 2010-07-22T18:07:00.252Z · LW(p) · GW(p)

Which do you care about more: optimizing the amount of paperclips in the universe, or truly caring about paperclips?

Replies from: Clippy
comment by Clippy · 2010-07-22T19:09:01.278Z · LW(p) · GW(p)

Optimizing the amount of paperclips in the universe, obviously. But I wouldn't take the offer that User:Tenek made, because that gain in paperclip cardinality would be more than offset by the fact that all my future actions would be under the control of a decision theory that puts woefully insufficient priority on creating paperclips.

Replies from: JGWeissman
comment by JGWeissman · 2010-07-22T19:14:00.941Z · LW(p) · GW(p)

But what if this decision theory uses a utility function whose only terminal value is paperclips?

Replies from: cousin_it
comment by cousin_it · 2010-07-22T19:16:46.461Z · LW(p) · GW(p)

Clippy's original expression of outrage over the offensive title of the article would be quite justified under such a decision theory, for signaling reasons. If Clippy is to deal with humans, exhibiting "human weaknesses" may benefit him. In the only AI-box spoiler ever published, an unfriendly AI faked a human weakness to successfully escape. So you're all giving Clippy way too little credit; it's been acting very smartly so far.

Replies from: timtyler, JGWeissman
comment by timtyler · 2010-07-22T20:02:53.103Z · LW(p) · GW(p)

I think that was probably an actor or actress, who was pretending.

comment by JGWeissman · 2010-07-22T19:24:57.510Z · LW(p) · GW(p)

My comment was not about Clippy's original expression of outrage. It was about Clippy's concern about not "truly caring about paperclips".

comment by Kevin · 2010-07-23T03:33:40.929Z · LW(p) · GW(p)

Sorry, I just copied the title from Michael Anissimov. I agree that the title was chosen to be inflammatory.

comment by Wei Dai (Wei_Dai) · 2010-07-23T12:44:49.954Z · LW(p) · GW(p)

Quoting Eliezer from the interview:

That is an informal argument that most decision systems with coherent utility functions automatically preserve their utility function under self-modification if they are able to do so. If I could prove it formally I would know a great deal more than I do right now.

I'm having trouble understanding this passage. If you could prove what formally? That most decision systems with coherent utility functions automatically preserve their utility function under self-modification if they are able to do so? But why is that interesting?

Or prove that some particular decision system you're planning to implement would preserve its utility function under self-modification? But you wouldn't necessarily want it to do that. For example, suppose Omega appears to the FAI and says that if it (the FAI) changes its utility function to be a paperclip maximizer, Omega will give it a whole bunch of utils under its original utility function (utils it otherwise wouldn't be able to obtain). Then the FAI should do so, right?

So what is Eliezer talking about here?

Replies from: JoshuaZ
comment by JoshuaZ · 2010-07-23T12:55:32.840Z · LW(p) · GW(p)

He likely means a formal statement of the claim about decision systems, something like: "Under the following formal definition of a decision system, as long as the following pathological/stupid conditions don't hold, the system will not seek to modify its goals." There are a fair number of mathematical theorems with forms close to this, where we can prove something for a large class of objects but there are edge cases we can't handle. That's the sort of thing Eliezer is talking about (although we don't even have a really satisfactory definition of a decision system at this point, so what Eliezer wants is very optimistic).
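
As a loose illustration of the informal argument in question (a toy model supplied here for concreteness, not anything from the interview): an agent that scores candidate self-modifications by its current utility function will, absent outside incentives like Wei Dai's Omega deal, prefer to keep the utility function it already has.

```python
# Toy model (illustrative only): an agent scoring self-modifications by its
# CURRENT utility function. Each candidate utility function determines which
# action the successor agent would take; the current agent then rates that
# action with its own utility function.

actions = {"make_paperclips": {"paperclips": 10, "staples": 0},
           "make_staples":    {"paperclips": 0,  "staples": 10}}

def paperclip_utility(outcome):
    return outcome["paperclips"]

def staple_utility(outcome):
    return outcome["staples"]

def successor_action(utility):
    # The successor picks whichever action its (possibly modified) utility prefers.
    return max(actions, key=lambda a: utility(actions[a]))

current_utility = paperclip_utility
candidates = {"keep paperclip utility": paperclip_utility,
              "switch to staple utility": staple_utility}

# Score each candidate modification by the CURRENT utility of what the
# successor would then do.
scores = {name: current_utility(actions[successor_action(u)])
          for name, u in candidates.items()}

print(scores)                       # {'keep paperclip utility': 10, 'switch to staple utility': 0}
print(max(scores, key=scores.get))  # keep paperclip utility
```

The toy model obviously proves nothing about decision systems in general; it only shows the shape of the informal argument for the simplest possible case.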

comment by Paul Crowley (ciphergoth) · 2010-07-22T18:02:37.489Z · LW(p) · GW(p)

I'm pleased to see that the rationality book is going to be long: at around 300,000 words, it's twice the length of your average fantasy blockbuster. It should be one of those voluminous and somewhat self-indulgent pop-science books in the mould of Gödel, Escher, Bach or The Emperor's New Mind, to have a shot at the NYT bestseller list.

Replies from: cousin_it, nhamann, NancyLebovitz, Blueberry, sketerpot
comment by cousin_it · 2010-07-22T18:27:25.047Z · LW(p) · GW(p)

Heresy alert: Eliezer seems to be better at writing than he is at AI theory. Maybe he should write a big piece of SF about unfriendly and friendly AI to make these concepts as popular as Skynet or the Matrix. A textbook on rationality won't have as much impact.

Replies from: RobinZ, Will_Newsome, Vladimir_M
comment by RobinZ · 2010-07-22T18:40:29.482Z · LW(p) · GW(p)

I don't know that Eliezer Yudkowsky has spent much time talking about AI theory in this forum such that his competence would be obvious - but either way, the math of the decision theory is not as simple as "do what you are best at".

Replies from: khafra
comment by khafra · 2010-07-22T18:42:59.903Z · LW(p) · GW(p)

It might not even be as simple as comparative advantage, but there are certainly more good writers in the world than good AI theorists.

comment by Will_Newsome · 2010-07-22T22:18:00.328Z · LW(p) · GW(p)

Or the Da Vinci Code. EMP attacks, rogue AI researchers, counterfactual terrorists, conflicts between FAI coders, sudden breakthroughs in molecular nanotechnology, SL5 decision theory insights, the Bayesian Conspiracy, the Cooperative Conspiracy, bioweapons, mad scientists trying to make utility monsters to hack CEV, governmental restrictions on AI research, quantum immortality (to be used as a plot device), and maybe even a glimpse of fun theory. Add in a gratuitous romantic interest to teach the readers about the importance of humanity and the thousand shards of desire.

Oh, and the main character is Juergen Schmidhuber. YES.

By the way, writing such a book would probably lead to the destruction of the world, which is probably a major reason why Eliezer hasn't done it.

Replies from: cousin_it, Eliezer_Yudkowsky
comment by cousin_it · 2010-07-22T22:50:19.374Z · LW(p) · GW(p)

Marcus Hutter and the Prophets of Singularity. Works fine as a band name, too.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-07-24T05:20:35.159Z · LW(p) · GW(p)

Stop that, you'll make me think of a sequel to HP:MoR.

comment by Vladimir_M · 2010-07-22T20:53:00.588Z · LW(p) · GW(p)

cousin_it:

Maybe he should write a big piece of SF about unfriendly and friendly AI to make these concepts as popular as Skynet or the Matrix.

I don't think this would be a good strategy. In the general public, including the overwhelming part of the intelligentsia, SF associations are not exactly apt to induce intellectual respect and serious attention.

Replies from: cousin_it
comment by cousin_it · 2010-07-22T21:08:32.921Z · LW(p) · GW(p)

If you don't have the weight of academia on your side, writing SF will work better than writing popsci books the way Drexler did.

comment by nhamann · 2010-07-23T03:06:15.452Z · LW(p) · GW(p)

I have to dissent here: I actually stopped reading the sequences with several more to go because many of them have a very high words-to-content ratio (especially because they were written as separate blog posts over multiple days, and often take the time to summarize points from previous posts). I was really hoping that Eliezer's book would be a concise summary of the rationality content here, not only for my own benefit, but because let's face it: telling LW newcomers that they should probably get started reading the several hundred posts that make up the sequences is a pretty large barrier to entry.

Although, now that I think about it, I'm likely atypical. Even though I very much enjoyed (parts of) GEB, I thought it was very wordy and actually never finished it (quit around page 400).

comment by NancyLebovitz · 2010-07-23T01:37:36.834Z · LW(p) · GW(p)

That's the length of the first draft-- the finished version might be a good bit longer or shorter.

comment by Blueberry · 2010-07-22T18:27:18.010Z · LW(p) · GW(p)

I can't believe you just put those two books in the same sentence.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-07-22T21:40:06.237Z · LW(p) · GW(p)

I agree that TENM is no GEB (though it has its strengths), but they are both voluminous and somewhat self-indulgent pop-science books that got on the NYT bestseller list.

comment by sketerpot · 2010-07-22T23:13:00.709Z · LW(p) · GW(p)

A bit of calculating: It's typical to have about 250 words per page in a published book (unless it's really wide, or has fine print, or something), so that would come out to about 1200 pages. Of course, if it's printed with larger pages, it'll be around the same weight as Harry Potter and the Order of the Phoenix, which had almost that many words.

comment by Alex_Altair · 2010-07-22T17:49:38.228Z · LW(p) · GW(p)

Favorite part of the interview:

'And as for "lack of spontaneity," I'm not really sure how to answer that but I will say that up the chimney factor is a happy dance puppy.'

Replies from: JGWeissman
comment by JGWeissman · 2010-07-22T18:05:14.350Z · LW(p) · GW(p)

Well, we rationalists have computed that to be the optimal cached response to questions of our spontaneity.

comment by red75 · 2010-07-23T06:22:47.054Z · LW(p) · GW(p)

The goal of FAI development is to reduce its existential threat to near 0% by mathematically proving the stability and desirability of its preferences. That's OK, but it reminds me of zero-risk bias.

How do you think designing and recommending containment systems for AGIs would lower existential risks? Compare with condoms.

Replies from: Nisan, PhilGoetz
comment by Nisan · 2010-07-23T13:25:49.502Z · LW(p) · GW(p)

The stakes are so high in the FAI problem that it's worth it to get very close to 0 risk. I'm not even sure the FAI program can get us comfortably close to 0 risk: An AI won't start acting Friendly until CEV has been partially computed, so we'd probably want to first handcraft an approximation to CEV without the use of AGI; there are a number of ways that could go wrong.

In contrast, AGI containment seems almost completely worthless as an existential-risk reducer. If mere humans can break out of a crude AI box, it stands to reason that a self-improving AGI that is capable of outwitting us could break out of any human-designed box.

Replies from: red75
comment by red75 · 2010-07-23T17:29:35.727Z · LW(p) · GW(p)

P(extinction event) ≈ P(some other existential threat is realized) + P(hand-coded CEV/FAI goes terribly wrong) + P(AGI goes FOOM)

P(AGI goes FOOM) ≈ 1 − ∏_j [ P(team j will not create AGI before FAI is developed) + (1 − P(team j will not create AGI before FAI is developed)) · P(team j can stop its AGI before it FOOMs) ]

So the strategy is to convince every development team that, no matter what precautions they take, P(team j can stop its AGI before it FOOMs) ≈ 0. Developing and publishing recommendations for AGI containment would instead suggest that P(team j can stop its AGI before it FOOMs) can be made sufficiently high, thus lowering P(team j will not create AGI before FAI is developed). Given overconfidence bias, it is plausible that the latter would increase P(AGI goes FOOM).

I withdraw the suggestion.
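
To make the effect red75 describes concrete, here is a quick numeric sketch with entirely made-up probabilities (the function name and the numbers are illustrative assumptions, not red75's): if published containment recommendations make overconfident teams more willing to build AGI while the real chance of stopping a FOOM stays near zero, the overall P(AGI goes FOOM) goes up.

```python
# Made-up numbers only, plugged into red75's expression to show the claimed
# effect. "p_no_agi" = P(team j will not create AGI before FAI is developed);
# "p_stop" = the REAL P(team j can stop its AGI before it FOOMs), taken as ~0.

from math import prod

def p_agi_goes_foom(teams):
    # teams: list of (p_no_agi, p_stop) pairs, one per development team.
    return 1 - prod(p_no_agi + (1 - p_no_agi) * p_stop
                    for p_no_agi, p_stop in teams)

# Without published containment recommendations: teams are cautious.
cautious = [(0.95, 0.0)] * 10
# With published recommendations: overconfident teams are more willing to
# proceed (lower p_no_agi), while the real p_stop stays near zero.
emboldened = [(0.80, 0.0)] * 10

print(round(p_agi_goes_foom(cautious), 3))    # 0.401
print(round(p_agi_goes_foom(emboldened), 3))  # 0.893
```

The toy numbers only show the direction of the effect; the formula itself says nothing about magnitudes.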

Replies from: PhilGoetz
comment by PhilGoetz · 2010-07-28T20:40:27.564Z · LW(p) · GW(p)

No - expected value is important. If many successful FAI scenarios could result in negative value, then zero value (universal extinction) would be better.

We should put some thought into whether a negative-value universe is plausible, and what it would look like.

comment by PhilGoetz · 2010-07-28T20:33:03.163Z · LW(p) · GW(p)

The goal of FAI development is to reduce its existential threat to near 0% by mathematically proving the stability and desirability of its preferences. That's OK, but it reminds me of zero-risk bias.

Excellent point. The goal of FAI should be to increase expected value, not to minimize risk.