[SEQ RERUN] The True Prisoner's Dilemma

post by MinibearRex · 2012-08-21T06:54:48.018Z · LW · GW · Legacy · 6 comments

Today's post, The True Prisoner's Dilemma, was originally published on 03 September 2008. A summary (taken from the LW wiki):


The standard visualization for the Prisoner's Dilemma doesn't really work on humans. We can't pretend we're completely selfish.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Dreams of Friendliness, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

6 comments

Comments sorted by top scores.

comment by Decius · 2012-08-26T04:28:59.220Z · LW(p) · GW(p)

Assumption 1: The paperclip maximizer does not have magical information about my intentions, so changing my intentions will not change its decision.

I prefer (D,C) to (C,C), and I prefer (D,D) to (C,D). If I were to know the paperclip maximizer's choice ahead of time, it would not change my choice; and if the paperclip maximizer knew my choice ahead of time, it would not change its choice.

Here's the real question: Are you willing to sacrifice two paperclips to save a billion human lives? Is the paperclip maximizer willing to sacrifice two billion human lives to create one paperclip?
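For concreteness, here is a minimal Python sketch of the payoff matrix being discussed. The specific numbers are an assumption chosen to match the trade-offs quoted above (one extra billion lives per two paperclips forgone, and vice versa), not a quotation from the original post.

```python
# Illustrative payoffs for the "True Prisoner's Dilemma": each entry is
# (billions of human lives saved, paperclips made). The numbers are assumed
# for illustration, consistent with the trade-offs described in this comment.
PAYOFFS = {
    ("D", "C"): (3, 0),  # humans defect, paperclip maximizer cooperates
    ("C", "C"): (2, 2),  # both cooperate
    ("D", "D"): (1, 1),  # both defect
    ("C", "D"): (0, 3),  # humans cooperate, paperclip maximizer defects
}

def human_payoff(human_move, clippy_move):
    return PAYOFFS[(human_move, clippy_move)][0]

# Defection dominates for the humans: whatever the maximizer does,
# defecting saves strictly more lives.
for clippy_move in ("C", "D"):
    assert human_payoff("D", clippy_move) > human_payoff("C", clippy_move)

# The trades in question: (C,C) -> (D,C) saves one more billion lives at the
# cost of two paperclips; (C,C) -> (C,D) gains one paperclip at the cost of
# two billion lives.
print(PAYOFFS[("D", "C")], PAYOFFS[("C", "C")], PAYOFFS[("C", "D")])
```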

Would it be rational to execute the reverse, if it were possible: for the humans to sacrifice human lives to create paperclips, on the chance that the paperclip maximizer would sacrifice paperclips to save humans? Given that it is better at saving humans than we are, and that we are better at making paperclips than it is, it would appear so.

comment by DanielLC · 2012-08-21T07:30:14.894Z · LW(p) · GW(p)

There's no given reason why you can't just defect and then make four paperclips, which would be better for the AI. For that matter, the humans saved will need paperclips, so it's a net good for the AI.

I think it would work if you specify that the AI only cares about paperclips in its own universe.

Replies from: MileyCyrus, Luke_A_Somers, Manfred
comment by MileyCyrus · 2012-08-21T12:13:45.997Z · LW(p) · GW(p)

Stop fighting the hypothetical.

comment by Luke_A_Somers · 2012-08-21T18:15:10.808Z · LW(p) · GW(p)

From the original post: "The paperclip maximizer only cares about the number of paperclips in its own universe, not in ours, so we can't offer to produce or threaten to destroy paperclips here."

Already taken care of.

Replies from: DanielLC
comment by DanielLC · 2012-08-22T02:15:41.488Z · LW(p) · GW(p)

Whoops. Missed that.

comment by Manfred · 2012-08-21T11:23:25.211Z · LW(p) · GW(p)

Well, clearly, they have to be paperclips made of substance S.