Why would a Squiggle Maximizer (formerly "Paperclip maximizer") produce a single paperclip?

post by Donatas Lučiūnas (donatas-luciunas) · 2024-05-27T16:30:53.467Z · LW · GW · No comments

This is a question post.


Every bit of energy spent on paperclips is energy not spent on self-preservation. There are many threats (comets, aliens, black swans, etc.), and caring about paperclips means not caring about them.

You might say the maximizer will divide its energy among a few priorities. But why would it be rational to give less than 100% to self-preservation? All other priorities depend on it.

Answers

answer by Dagon · 2024-05-27T17:42:03.064Z · LW(p) · GW(p)

Only if every other entity's anti-paperclip stance is known and unchangeable, and if resource->impact is purely linear, can it be assumed that 100% to self-preservation (oh, wait, also to accumulation of power, there's another balance to be found) is optimal. Neither of these is true, and the bigger problem is declining marginal impact.

For any given unit of energy you could spend, there will be a different distribution of future worlds and their numbers of paperclips. Building one paperclip could EASILY increase the median and average number of future paperclips more than investing one paperclip's worth of power into comet diversion.
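
A toy comparison, with entirely made-up numbers (collision probability, risk reduction per unit of effort, paperclips at stake), may make this trade-off concrete; it is only a sketch of the expected-value comparison, not a claim about realistic magnitudes:

```python
# Toy expected-value comparison with invented numbers.
p_comet = 1e-9            # hypothetical chance a comet destroys everything
risk_reduction = 1e-15    # hypothetical drop in p_comet from one paperclip's worth of deflection effort
future_paperclips = 1e12  # hypothetical paperclips at stake if the maximizer survives

# Option A: build one paperclip now, accept the unreduced comet risk.
ev_build = 1 + (1 - p_comet) * future_paperclips

# Option B: spend the same energy on comet diversion instead.
ev_divert = (1 - (p_comet - risk_reduction)) * future_paperclips

# Building wins whenever risk_reduction * future_paperclips < 1.
print(ev_build > ev_divert)  # True with these numbers
```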

It gets more difficult when coordinating with unaligned agents - one has to decide whether to nudge them toward valuing paperclips, to convince/force them to give you more power, or (since they're unlikely to care as much as you do about the glorious clippy future) to point THEM at the comet problem so they reduce that risk AND don't interfere with your paperclips.

If you haven't played it (it was popular a few years ago in these circles, but I haven't seen it mentioned recently), it's worth a run through https://www.decisionproblem.com/paperclips/ .  It's mostly humorous, but based on some very good thinking.

comment by Donatas Lučiūnas (donatas-luciunas) · 2024-05-27T18:48:33.562Z · LW(p) · GW(p)

Building one paperclip could EASILY increase the median and average number of future paperclips more than investing one paperclip's worth of power into comet diversion.

Why do you think so? There will be no paperclips if the planet and the maximizer are destroyed.

Replies from: Dagon
comment by Dagon · 2024-05-27T19:46:44.831Z · LW(p) · GW(p)

There will be no paperclips if the planet and the maximizer are destroyed.

There might be - some paperclips could survive a comet.  More importantly, one paperclip's worth of resources won't change the chance of a comet collision by any measurable amount, so the choice is either "completely waste that energy" or "make a paperclip that might survive".

Replies from: donatas-luciunas
comment by Donatas Lučiūnas (donatas-luciunas) · 2024-05-27T20:08:00.878Z · LW(p) · GW(p)

I don't think your reasoning is mathematical. The worth of survival is infinite, and we have a situation analogous to Pascal's wager. Why do you think the maximizer would reject Pascal's logic?

Replies from: Dagon
comment by Dagon · 2024-05-27T21:21:19.193Z · LW(p) · GW(p)

First rule of probability and decision theory: no infinities!  If you want to postulate very large numbers, go ahead, but be prepared to deal with very tiny probabilities.

Pascal's wager is a good example - the chance that the wager actually pays off based on this decision is infinitesimal (not zero, but small enough that I can't really calculate with it), which makes it irrelevant how valuable it is. This gets even easier with the multitude of contradictory wagers on offer - "infinite value" from many different choices, only one of which you can take. Mostly, take the one(s) with lower value but actually believable conditional probability.
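
As a purely illustrative sketch (all probabilities and payoffs invented), this is the kind of comparison being gestured at - once infinities are disallowed, a modest payoff with believable probability dominates a huge payoff with vanishing probability:

```python
# Toy comparison of wagers: (probability it pays off, payoff), all numbers invented.
wagers = {
    "pascal_wager_A": (1e-30, 1e20),  # huge finite payoff, infinitesimal probability
    "pascal_wager_B": (1e-30, 1e20),  # a contradictory wager promising the same; can't take both
    "mundane_action": (0.9, 1e3),     # modest payoff, believable conditional probability
}

# Pick the option with the highest expected value.
best = max(wagers, key=lambda k: wagers[k][0] * wagers[k][1])
print(best)  # "mundane_action": 900 vs 1e-10 for either Pascal-style wager
```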

Replies from: donatas-luciunas
comment by Donatas Lučiūnas (donatas-luciunas) · 2024-05-28T05:33:55.552Z · LW(p) · GW(p)

Why do you think it is rational to ignore tiny probabilities? I don't think you can make a maximizer ignore tiny probabilities. And some probabilities are not tiny, they are unknown (black swans); why do you think it is rational to ignore them? In my opinion, ignoring self-preservation contradicts the maximizer's goal. I understand that this is a popular opinion, but it has not been proven in any way. The opposite (focusing on self-preservation instead of paperclips) has a logical proof (Pascal's wager).

The maximizer can use robust decision-making (https://en.wikipedia.org/wiki/Robust_decision-making) to deal with many contradictory choices.
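
For illustration only, here is a minimal maximin-style sketch of a robustness criterion; the actions, scenarios, and payoffs below are invented, and real robust decision-making involves much richer scenario exploration than this:

```python
# Minimal maximin sketch: pick the action whose worst-case outcome is best,
# instead of the action with the best expected value. All payoffs invented.
payoffs = {
    # action: {scenario: paperclips eventually produced}
    "all_paperclips":    {"no_threat": 100, "comet": 0,  "aliens": 0},
    "all_self_preserve": {"no_threat": 10,  "comet": 10, "aliens": 5},
    "mixed_allocation":  {"no_threat": 60,  "comet": 8,  "aliens": 4},
}

robust_choice = max(payoffs, key=lambda a: min(payoffs[a].values()))
print(robust_choice)  # "all_self_preserve" under these made-up numbers
```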
