[SEQ RERUN] Not Taking Over the World

post by MinibearRex · 2012-12-30T05:14:16.567Z · LW · GW · Legacy · 1 comment

Today's post, Not Taking Over the World, was originally published on 15 December 2008. A summary (taken from the LW wiki):

It's rather difficult to imagine a way in which you could create an AI, and not somehow either take over or destroy the world. How can you use unlimited power in such a way that you don't become a malevolent deity, in the Epicurean sense?


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was What I Think, If Not Why, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

1 comment

Comments sorted by top scores.

comment by someonewrongonthenet · 2013-01-02T00:56:13.428Z · LW(p) · GW(p)

When I was a little kid, I had this weird delusion that if I told a lie, I'd be transported to an alternate universe in which it were true, with my memory changed as well (rendering the idea unfalsifiable). I scrupulously avoided inaccurate statements at all times for this very reason. So I've thought through this question with a child's earnestness, and eventually concluded that the question is meaningless.

You need to put some bounds on the power for this to make sense. If I had true omnipotence, then I'd necessarily have omniscience as well.

If I had omniscience, then I would be simultaneously aware of every possible logical statement and every possible universe. My awareness of every possible universe would of course constitute a simulation of every possible universe.

If I take the position that I am morally responsible for what goes on in my simulated universes, then I'd have to use my powers to block myself from thinking about those universes which contained suffering. (My childhood idea was that if you magically altered a universe, the original simply ceased to exist and was replaced by a copy. Naturally, the whole thing is a bit more horrifying when you look at it this way, and you'd never use your power. So let's assume the philosophy that you are only morally responsible for the things you simulate, and that alteration via miracle does not constitute deletion and overwriting.)

I'd spend the rest of eternity simulating every moment in every universe that wasn't razed by my threshold for optimality. I would, of course, be experiencing everything in all these universes, so you would see the reflection of my utility function therein, stretched to its maximum given the circumstances.

To be honest, part of my utility function wants to keep a copy of the universe exactly as I left it before omnipotence, for sentimental reasons. The other part of me would worry that I had a responsibility not to simulate sub-optimal universes, regardless of my own selfish desires.

The trouble is that my peak possible utility given omnipotence may or may not be lower than my peak possible utility when not given omnipotence. I'm not really sure; I think I'd like omnipotence, but it would be thoroughly disconcerting philosophically. Of course, my utility function would also cause me to accept omnipotence when offered, regardless of whether or not it would lower my peak possible utility, since rejecting the offer would bring the lowest maximum utility of all.

But then, I don't know if this omnipotent me resembles the actual me at all... it's all very well when I approximate myself as an agent with a utility function, but in reality I'm a bundle of neurons which cannot meaningfully control omnipotent output or contain omniscience, so this hypothetical situation is logically absurd.

So my answer to this question is going to have to be Mu. But if the question did make sense, then my first "wish" would be omniscience to accompany my omnipotence while leaving my utility function intact, at which point my magic would no longer operate via wish fulfillment and my decisions would be infinitely wise.

I guess I'm back to the same issue raised by the intelligence explosion. Utility functions aren't real models of how my brain works, so how can I ever be certain that my wish for omniscience will go the way I expect when I can't even coherently phrase what I want the omnipotent power to do? You can't effectively wield omnipotent power without omniscience.