Not Taking Over the World

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-12-15T22:18:47.000Z · LW · GW · Legacy · 97 comments

Followup to: What I Think, If Not Why

My esteemed co-blogger Robin Hanson accuses me of trying to take over the world.

Why, oh why must I be so misunderstood?

(Well, it's not like I don't enjoy certain misunderstandings.  Ah, I remember the first time someone seriously and not in a joking way accused me of trying to take over the world.  On that day I felt like a true mad scientist, though I lacked a castle and hunchbacked assistant.)

But if you're working from the premise of a hard takeoff - an Artificial Intelligence that self-improves at an extremely rapid rate - and you suppose such extra-ordinary depth of insight and precision of craftsmanship that you can actually specify the AI's goal system instead of automatically failing -

- then it takes some work to come up with a way not to take over the world.

Robin talks up the drama inherent in the intelligence explosion, presumably because he feels that this is a primary source of bias.  But I've got to say that Robin's dramatic story does not sound like the story I tell of myself.  There, the drama comes from tampering with such extreme forces that every single idea you invent is wrong.  The standardized Final Apocalyptic Battle of Good Vs. Evil would be trivial by comparison; then all you have to do is put forth a desperate effort.  Facing an adult problem in a neutral universe isn't so straightforward.  Your enemy is yourself, who will automatically destroy the world, or just fail to accomplish anything, unless you can defeat you.  - That is the drama I crafted into the story I tell myself, for I too would disdain anything so cliched as Armageddon.

So, Robin, I'll ask you something of a probing question.  Let's say that someone walks up to you and grants you unlimited power.

What do you do with it, so as to not take over the world?

Do you say, "I will do nothing - I take the null action"?

But then you have instantly become a malevolent God, as Epicurus said:

Is God willing to prevent evil, but not able?  Then he is not omnipotent.
Is he able, but not willing?  Then he is malevolent.
Is he both able, and willing?  Then whence cometh evil?
Is he neither able nor willing?  Then why call him God?

Peter Norvig said, "Refusing to act is like refusing to allow time to pass."  The null action is also a choice.  So have you not, in refusing to act, established all sick people as sick, established all poor people as poor, ordained all in despair to continue in despair, and condemned the dying to death?  Will you not be, until the end of time, responsible for every sin committed?

Well, yes and no.  If someone says, "I don't trust myself not to destroy the world, therefore I take the null action," then I would tend to sigh and say, "If that is so, then you did the right thing."  Afterward, murderers will still be responsible for their murders, and altruists will still be creditable for the help they give.

And to say that you used your power to take over the world by doing nothing to it seems to stretch the ordinary meaning of the phrase.

But it wouldn't be the best thing you could do with unlimited power, either.

With "unlimited power" you have no need to crush your enemies.  You have no moral defense if you treat your enemies with less than the utmost consideration.

With "unlimited power" you cannot plead the necessity of monitoring or restraining others so that they do not rebel against you.  If you do such a thing, you are simply a tyrant who enjoys power, and not a defender of the people.

Unlimited power removes a lot of moral defenses, really.  You can't say "But I had to."  You can't say "Well, I wanted to help, but I couldn't."  The only excuse for not helping is if you shouldn't, which is harder to establish.

And let us also suppose that this power is wieldable without side effects or configuration constraints; it is wielded with unlimited precision.

For example, you can't take refuge in saying anything like:  "Well, I built this AI, but any intelligence will pursue its own interests, so now the AI will just be a Ricardian trading partner with humanity as it pursues its own goals."  Say, the programming team has cracked the "hard problem of conscious experience" in sufficient depth that they can guarantee that the AI they create is not sentient - not a repository of pleasure, or pain, or subjective experience, or any interest-in-self - and hence, the AI is only a means to an end, and not an end in itself.

And you cannot take refuge in saying, "In invoking this power, the reins of destiny have passed out of my hands, and humanity has passed on the torch."  Sorry, you haven't created a new person yet - not unless you deliberately invoke the unlimited power to do so - and then you can't take refuge in the necessity of it as a side effect; you must establish that it is the right thing to do.

The AI is not necessarily a trading partner.  You could make it a nonsentient device that just gave you things, if you thought that were wiser.

You cannot say, "The law, in protecting the rights of all, must necessarily protect the right of Fred the Deranged to spend all day giving himself electrical shocks."  The power is wielded with unlimited precision; you could, if you wished, protect the rights of everyone except Fred.

You cannot take refuge in the necessity of anything - that is the meaning of unlimited power.

We will even suppose (for it removes yet more excuses, and hence reveals more of your morality) that you are not limited by the laws of physics as we know them.  You are bound to deal only in finite numbers, but not otherwise bounded.  This is so that we can see the true constraints of your morality, apart from your being able to plead constraint by the environment.

In my reckless youth, I used to think that it might be a good idea to flash-upgrade to the highest possible level of intelligence you could manage on available hardware.  Being smart was good, so being smarter was better, and being as smart as possible as quickly as possible was best - right?

But when I imagined having infinite computing power available, I realized that no matter how large a mind you made yourself, you could just go on making yourself larger and larger and larger.  So that wasn't an answer to the purpose of life.  And only then did it occur to me to ask after eudaimonic rates of intelligence increase, rather than just assuming you wanted to immediately be as smart as possible.

Considering the infinite case moved me to change the way I considered the finite case.  Before, I was running away from the question by saying "More!"  But considering an unlimited amount of ice cream forced me to confront the issue of what to do with any of it.

Similarly with population:  If you invoke the unlimited power to create a quadrillion people, then why not a quintillion?  If 3^^^3, why not 3^^^^3?  So you can't take refuge in saying, "I will create more people - that is the difficult thing, and to accomplish it is the main challenge."  What is individually a life worth living?

You can say, "It's not my place to decide; I leave it up to others" but then you are responsible for the consequences of that decision as well.  You should say, at least, how this differs from the null act.

So, Robin, reveal to us your character:  What would you do with unlimited power?

97 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Aron · 2008-12-15T22:45:00.000Z · LW(p) · GW(p)

Don't bogart that joint, my friend.

Replies from: None
comment by [deleted] · 2010-12-22T11:07:34.553Z · LW(p) · GW(p)

I would be very interested to know how many (other) OB/LW readers use cannabis recreationally.

comment by Robin_Hanson2 · 2008-12-15T22:58:40.000Z · LW(p) · GW(p)

The one ring of power sits before us on a pedestal; around it stand a dozen folks of all races. I believe that whoever grabs the ring first becomes invincible, all powerful. If I believe we cannot make a deal, that someone is about to grab it, then I have to ask myself whether I would wield such power better than whoever I guess will grab it if I do not. If I think I'd do a better job, yes, I grab it. And I'd accept that others might consider that an act of war against them; thinking that way they may well kill me before I get to the ring.

With the ring, the first thing I do then is think very very carefully about what to do next. Most likely the first task is who to get advice from. And then I listen to that advice.

Yes, this is a very dramatic story, one whose likelihood we are therefore biased to overestimate.

I don't recall where exactly, but I'm pretty sure I've already admitted that I'd "grab the ring" before on this blog in the last month.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-12-15T23:03:23.000Z · LW(p) · GW(p)

I'm not asking you if you'll take the Ring, I'm asking what you'll do with the Ring. It's already been handed to you.

Take advice? That's still something of an evasion. What advice would you offer you? You don't seem quite satisfied with what (you think is) my plan for the Ring - so you must already have an opinion of your own - what would you change?

comment by Robin_Hanson2 · 2008-12-15T23:11:33.000Z · LW(p) · GW(p)

Eliezer, I haven't meant to express any dissatisfaction with your plans to use a ring of power. And I agree that someone should be working on such plans even if the chances of it happening are rather small. So I approve of your working on such plans. My objection is only that if enough people overestimate the chance of such a scenario, it will divert too much attention from other important scenarios. I similarly think global warming is real, worthy of real attention, but that it diverts too much attention from other future issues.

comment by Billy_Brown · 2008-12-15T23:19:07.000Z · LW(p) · GW(p)

This is a great device for illustrating how devilishly hard it is to do anything constructive with such overwhelming power, yet not be seen as taking over the world. If you give each individual whatever they want, you’ve just destroyed every variety of collectivism or traditionalism on the planet, and those who valued those philosophies will curse you. If you implement any single utopian vision, everyone who wanted a different one will hate you, and if you limit yourself to any minimal level of intervention, everyone who wants larger benefits than you provide will be unhappy.

Really, I doubt that there is any course you can follow that won’t draw the ire of a large minority of humanity, because too many of us are emotionally committed to inflicting various conflicting forms of coercion on each other.

Replies from: Uni, Strange7
comment by Uni · 2011-03-29T10:33:12.367Z · LW(p) · GW(p)

If you use your unlimited power to make everyone including yourself constantly happy by design, and reprogram the minds of everybody into always approving of whatever you do, nobody will complain or hate you. Make every particle in the universe cooperate perfectly to maximize the amount of happiness in all future spacetime (and in the past as well, if time travel is possible when you have unlimited power). Then there would be no need for free will or individual autonomy for anybody anymore.

Replies from: Uni, christopherj
comment by Uni · 2011-03-29T21:12:35.205Z · LW(p) · GW(p)

Why was that downvoted by 3?

What I did was, I disproved Billy Brown's claim that "If you implement any single utopian vision everyone who wanted a different one will hate you". Was it wrong of me to do so?

Replies from: None, nshepperd
comment by [deleted] · 2011-03-29T21:23:05.793Z · LW(p) · GW(p)

Perhaps they see you as splitting hairs between being seen as taking over the world, and actually taking over the world. In your scenario you are not seen as taking over the world because you eliminate the ability to see that - but that means that you've actually taken over the world (to a degree greater than anyone has ever achieved before).

But in point of fact, you're right about the claim as stated. As for the downvotes - voting is frequently unfair, here and everywhere else.

Replies from: Uni
comment by Uni · 2011-03-30T05:05:15.742Z · LW(p) · GW(p)

Thanks for explaining!

I didn't mean to split hairs at all. I'm surprised that so many here seem to take it for granted that, if one had unlimited power, one would choose to let other people retain some say, and some autonomy. If I would have to listen to anybody else in order to be able to make the best possible decision about what to do with the world, this would mean I would have less than unlimited power. Someone who has unlimited power will always do the right thing, by nature.

And besides:

Suppose I'd have less than unlimited power but still "rather complete" power over every human being, and suppose I'd create what would be a "utopia" only to some, but without changing anybody's mind against their will, and suppose some people would then hate me for having created that "utopia". Then why would they hate me? Because they would be unhappy. If I'd simply make them constantly happy by design - I wouldn't even have to make them intellectually approve of my utopia to do that - they wouldn't hate me, because a happy person doesn't hate.

Therefore, even in a scenario where I had not only "taken over the world", but where I would also be seen as having taken over the world, nobody would still hate me.

Replies from: Alicorn, Uni, TheOtherDave
comment by Alicorn · 2011-03-30T05:12:21.631Z · LW(p) · GW(p)

If I would have to listen to anybody else in order to be able to make the best possible decision about what to do with the world, this would mean I would have less than unlimited power. Someone who has unlimited power will always do the right thing, by nature.

I recommend reading this sequence. Suffice it to say that you are wrong, and power does not bring with it morality.

a happy person doesn't hate.

What is your support for this claim? (I smell argument by definition...)

Replies from: AdeleneDawner, Uni, None
comment by AdeleneDawner · 2011-03-30T07:13:25.333Z · LW(p) · GW(p)

If I would have to listen to anybody else in order to be able to make the best possible decision about what to do with the world, this would mean I would have less than unlimited power. Someone who has unlimited power will always do the right thing, by nature.

I recommend reading this sequence. Suffice it to say that you are wrong, and power does not bring with it morality.

It seems to me that the claim that Uni is making is not the same as the claim that you think e's making, mostly because Uni is using definitions of 'best possible decision' and 'right thing' that are different from the ones that are usually used here.

It looks to me (and please correct me if I'm wrong, Uni) that Uni is basing eir definition on the idea that there is no objectively correct morality, not even one like Eliezer's CEV - that morality and 'the right thing to do' are purely social ideas, defined by the people in a relevant situation.

Thus, if Uni had unlimited power, it would by definition be within eir power to cause the other people in the situation to consider eir actions correct, and e would do so.

If this is the argument that Uni is trying to make, then the standard arguments that power doesn't cause morality are basically irrelevant, since Uni is not making the kinds of claims about an all-powerful person's behavior that those apply to.

E appears to be claiming that an all-powerful person would always use that power to cause all relevant other people to consider their actions correct, which I suspect is incorrect, but e's basically not making any other claims about the likely behavior of such an entity.

comment by Uni · 2011-03-30T08:53:19.495Z · LW(p) · GW(p)

I recommend reading this sequence.

Thanks for recommending.

Suffice it to say that you are wrong, and power does not bring with it morality.

I have never assumed that "power brings with it morality" if we with power mean limited power. Some superhuman AI might very well be more immoral than humans are. I think unlimited power would bring with it morality. If you have access to every single particle in the universe and can put it wherever you want, and thus create whatever is theoretically possible for an almighty being to create, you will know how to fill all of spacetime with the largest possible amount of happiness. And you will do that, since you will be intelligent enough to understand that that's what gives you the most happiness. (And, needless to say, you will also find a way to be the one to experience all that happiness.) Given hedonistic utilitarianism, this is the best thing that could happen, no matter who got the unlimited power and what was initially the moral standards of that person. If you don't think hedonistic utilitarianism (or hedonism) is moral, it's understandable that you think a world filled with the maximum amount of happiness might not be a moral outcome, especially if achieving that goal took killing lots of people against their will, for example. But that alone doesn't prove I'm wrong. Much of what humans think to be very wrong is not in all circumstances wrong. To prove me wrong, you have to either prove hedonism and hedonistic utilitarianism wrong first, or prove that a being with unlimited power wouldn't understand that it would be best for him to fill the universe with as much happiness as possible and experience all that happiness.

a happy person doesn't hate.

What is your support for this claim?

Observation.

Replies from: ameriver, wedrifid, xxd
comment by ameriver · 2011-03-30T09:48:30.540Z · LW(p) · GW(p)

If you have access to every single particle in the universe and can put it wherever you want, and thus create whatever is theoretically possible for an almighty being to create, you will know how to fill all of spacetime with the largest possible amount of happiness.

What I got out of this sentence is that you believe someone (anyone?), given absolute power over the universe, would be imbued with knowledge of how to maximize for human happiness. Is that an accurate representation of your position? Would you be willing to provide a more detailed explanation?

And you will do that, since you will be intelligent enough to understand that that's what gives you the most happiness.

Not everyone is a hedonistic utilitarian. What if the person/entity who ends up with ultimate power enjoys the suffering of others? Is your claim that their value system would be rewritten to hedonistic utilitarianism upon receiving power? I do not see any reason why that should be the case. What are your reasons for believing that a being with unlimited power would understand that?

comment by wedrifid · 2011-03-30T15:12:37.338Z · LW(p) · GW(p)

To prove me wrong, you have to either prove hedonism and hedonistic utilitarianism wrong first, or prove that a being with unlimited power wouldn't understand that it would be best for him to fill the universe with as much happiness as possible and experience all that happiness.

I'm not sure about 'proof' but hedonistic utilitarianism can be casually dismissed out of hand as not particularly desirable and the idea that giving a being ultimate power will make them adopt such preferences is absurd.

Replies from: ameriver
comment by ameriver · 2011-03-31T04:41:09.805Z · LW(p) · GW(p)

I'd be interested to hear a bit more detail as to why it can be dismissed out of hand. Is there a link I could go read?

comment by xxd · 2012-01-27T18:07:11.619Z · LW(p) · GW(p)

This is a cliche and may be false but it's assumed true: "Power corrupts and absolute power corrupts absolutely".

I wouldn't want anybody to have absolute power, not even myself; the only possible use of absolute power I would like to have would be to stop any evil person getting it.

To my mind evil = coercion and therefore any human who seeks any kind of coercion over others is evil.

My version of evil is the least evil I believe.

EDIT: Why did I get voted down for saying "power corrupts" - the corollary of which is that rejection of power is less corrupt - whereas Eliezer gets voted up for saying exactly the same thing? Someone who voted me down should respond with their reasoning.

Replies from: Ben_Welchner
comment by Ben_Welchner · 2012-01-28T02:32:57.992Z · LW(p) · GW(p)

Given humanity's complete lack of experience with absolute power, it seems like you can't even take that cliche for weak evidence. Having glided through the article and comments again, I also don't see where Eliezer said "rejection of power is less corrupt". The bit about Eliezer sighing and saying the null-actor did the right thing?

(No, I wasn't the one who downvoted)

comment by [deleted] · 2011-06-29T00:34:21.228Z · LW(p) · GW(p)

What's wrong with Uni's claim? If you have unlimited power, one possible choice is to put all the other sentient beings into a state of euphoric intoxication such that they don't hate you. Yes, that is by definition. Go figure out a state for each agent so that it doesn't hate you and put it into that state, then you've got a counter example to Billy's claim above. Maybe a given agent's former volition would have chosen to hate you if it was aware of the state you forced it into later on, but that's a different thing than caring if the agent itself hates you as a result of changes you make. This is a valid counter-example. I have read the coming of age sequence and don't see what you're referring to in there that makes your point. Perhaps you could point me back to some specific parts of those posts.

comment by Uni · 2011-03-30T05:54:27.713Z · LW(p) · GW(p)

Suppose you'd say it would be wrong of me to make the haters happy "against their will". Why would that be wrong, if they would be happy to be happy once they have become happy? Should we not try to prevent suicides either? Not even the most obviously premature suicides, not even temporarily, not even only to make the suicide attempter rethink their decision a little more thoroughly?

Making a hater happy "against his will", with the result that he stops hating, is (I think) comparable to preventing a premature suicide in order to give that person an opportunity to reevaluate his situation and come to a better decision (by himself). By respecting what a person wants right now only, you are not respecting "that person including who he will be in the future", you are respecting only a tiny fraction of that. Strictly speaking, even the "now" we are talking about is in the future, because if you are now deciding to act in someone's interest, you should base your decision on your expectation of what he will want by the time your action would start affecting him (which is not exactly now), rather than what he wants right now. So, whenever you respect someone's preferences, you are (or at least should be) respecting his future preferences, not his present ones.

(Suppose for example that you strongly suspect that, in one second from now, I will prefer a painless state of mind, but that you see that right now, I'm trying to cut off a piece of wood in a way that will make me cut my leg in one second if you don't interfere. You should then interfere, and that can be explained by (if not by anything else) your expectation of what I will want one second from now, even if right now I have no other preference than getting that piece of wood cut in two.)

I suggest one should respect another person's (expected) distant future preferences more than his "present" (that is, very close future) ones, because his future preferences are more numerous (since there is more time for them) than his "present" ones. One would arguably be respecting him more that way, because one would be respecting more of his preferences - not favoring any one of his preferences over any other one just because it happens to take place at a certain time.

This way, hedonistic utilitarianism can be seen as compatible with preference utilitarianism.

comment by TheOtherDave · 2011-03-30T15:30:28.326Z · LW(p) · GW(p)

This is certainly true. If you have sufficient power, and if my existing values, preferences, beliefs, expectations, etc. are of little or no value to you, but my approval is, then you can choose to override my existing values, preferences, beliefs, expectations, etc. and replace them with whatever values, preferences, beliefs, expectations, etc. would cause me to approve of whatever it is you've done, and that achieves your goals.

comment by nshepperd · 2011-03-30T07:08:55.657Z · LW(p) · GW(p)

While you are technically correct, the spirit of the original post and a charitable interpretation was, as I read it, "no matter what you decide to do with your unlimited power, someone will hate your plan". Of course if you decide to use your unlimited power to blow up the earth, no one will complain because they're all dead. But if you asked the population of earth what they think of your plan to blow up the earth, the response will be largely negative. The contention is that no matter what plan you try to concoct, there will be someone such that, if you told them about the plan and they could see what the outcome would be, they would hate it.

comment by christopherj · 2013-09-16T03:32:54.795Z · LW(p) · GW(p)

Incidentally, it is currently possible to achieve total happiness, or perhaps a close approximation. A carefully implanted electrode in the right part of the brain will be more desirable than food to a starving rat, for example. While this part of the brain is called the "pleasure center", it might rather be about desire and reward instead. Nevertheless, pleasure and happiness are by necessity mental states, and it should be possible to artificially create these.

Why should a man who is perfectly content bother to get up to eat, or perhaps achieve something? He may starve to death, but would be happy to do so. And such a man will be content with his current state, which of course is contentment, and not at all resent his current state. Even in a less invasive case, where a man is given almost everything he wants - yet not so much that he never becomes dissatisfied with the amount of food in his belly and decides to put more in - there will be higher-level motivations this man will lose.

While I consider myself a utilitarian, and believe the best choices are those that maximize the values of everyone, I cannot agree with the above situation. For now, this is no problem because people in their current state would not choose to artificially fulfill their desires via electrode implants, nor is it yet possible to actually fulfill everyone's desires in the real world. I shall now go and rethink why I choose a certain path, if I cannot abide reaching the destination.

Replies from: DanielH
comment by DanielH · 2013-10-05T01:34:18.758Z · LW(p) · GW(p)

Welcome to Less Wrong!

First, let me congratulate you on stopping to rethink when you realize that you've found a seeming contradiction in your own thinking. Most people aren't able to see the contradictions in their beliefs, and when/if they do, they fail to actually do anything about them.

While it is theoretically possible to artificially create pleasure and happiness (which, around here, we call wireheading), converting the entire observable universe to orgasmium (maximum pleasure experiencing substance) seems to go a bit beyond that. In general, I think you'll find most people around here are against both, even though they'd call themselves "utilitarians" or similar. This is because there's more than one form of utilitarianism; many Less Wrongers believe other forms, like preference utilitarianism, are correct, instead of the original Millsian hedonistic utilitarianism.

Edit: fixed link formatting

comment by Strange7 · 2011-03-29T12:15:14.498Z · LW(p) · GW(p)

If you give each individual whatever they want, you’ve just destroyed every variety of collectivism or traditionalism on the planet,

Unless, of course, anyone actually wants to participate in such systems, in which case you have (for commonly-accepted values of 'want' and 'everyone') allowed them to do so. Someone who'd rather stand in the People's Turnip-Requisitioning Queue for six hours than have unlimited free candy is free to do so, and someone who'd rather watch everyone else do so can have a private world with millions of functionally-indistinguishable simulacra. Someone who demands that other real people participate, whether they want to or not, and can't find enough quasi-volunteers, is wallowing so deep in their own hypocrisy that nothing within the realm of logic could be satisfactory.

comment by Cyan2 · 2008-12-16T00:22:21.000Z · LW(p) · GW(p)
If you invoke the unlimited power to create a quadrillion people, then why not a quadrillion?

One of these things is much like the other...

comment by Dagon · 2008-12-16T00:25:27.000Z · LW(p) · GW(p)

Infinity screws up a whole lot of this essay. Large-but-finite is way way harder, as all the "excuses", as you call them, become real choices again. You have to figure out whether to monitor for potential conflicts, including whether to allow others to take whatever path you took to such power. Necessity is back in the game.

I suspect I'd seriously consider just tiling the universe with happy faces (very complex ones, but still probably not what the rest of y'all think you want). At least it would be pleasant, and nobody would complain.

comment by lowly_undergrad4 · 2008-12-16T00:46:23.000Z · LW(p) · GW(p)

This question is a bit off-topic and I have a feeling it has been covered in a batch of comments elsewhere, so if it has, would someone mind directing me to it? My question is this: Given the existence of the multiverse, shouldn't there be some universe out there in which an AI has already gone FOOM? If it has, wouldn't we see the effects of it in some way? Or have I completely misunderstood the physics?

And Eliezer, don't lie, everybody wants to rule the world.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-12-16T00:47:26.000Z · LW(p) · GW(p)

Okay, you don't disapprove. Then consider the question one of curiosity. If Tyler Cowen acquired a Ring of Power and began gathering a circle of advisors, and you were in that circle, what specific advice would you give him?

comment by Robin_Hanson2 · 2008-12-16T01:02:39.000Z · LW(p) · GW(p)

Eliezer, I'd advise no sudden moves; think very carefully before doing anything. I don't know what I'd think after thinking carefully, as otherwise I wouldn't need to do it. Are you sure there isn't some way to delay thinking on your problem until after it appears? Having to have an answer now when it seems an likely problem is very expensive.

comment by PK · 2008-12-16T01:08:07.000Z · LW(p) · GW(p)

What about a kind of market system of states? The purpose of the states will be to provide a habitat matching each citizen's values and lifestyle.

- Each state will have its own constitution and rules.
- Each person can pick the state they wish to live in, assuming they are accepted in based on the state’s rules.
- The amount of resources and territory allocated to each state is proportional to the number of citizens that choose to live there.
- There are certain universal meta-rules that supersede the states' rules, such as...
- A citizen may leave a state at any time and may not be held in a state against his or her will.
- No killing or significant non-consensual physical harm permitted; at most a state could permanently exile a citizen.
- There are some exceptions such as the decision power of children and the mentally ill.
- Etc.

Anyways, this is a rough idea of what I would do with unlimited power. I would build this, unless I came across a better idea. In my vision, citizens will tend to move into states they prefer and avoid states they dislike. Over time good states will grow and bad states will shrink or collapse. However, states could also specialize; for example, you could have a small state with rules and a lifestyle just right for a small dedicated population. I think this is an elegant way of not imposing a monolithic "this is how you should live" vision on every person in the world, yet the system will still kill bad states and favor good states, whatever those attractors are.

P.S. In this vision I assume the Earth is "controlled"(meta rules only) by a singleton super-AI with nanotech. So we don't have to worry about things like crime(forcefields), violence(more forcefields) or basic necessities such as food.

comment by Patri_Friedman · 2008-12-16T02:18:15.000Z · LW(p) · GW(p)

I'm glad to hear that you aren't trying to take over the world. The less competitors I have, the better.

comment by Ben11 · 2008-12-16T03:05:32.000Z · LW(p) · GW(p)

@lowly undergrad

Perhaps you're thinking of The Great Filter (http://hanson.gmu.edu/greatfilter.html)?

comment by James_D._Miller · 2008-12-16T03:05:37.000Z · LW(p) · GW(p)

"Eliezer, I'd advise no sudden moves; think very carefully before doing anything."

But about 100 people die every minute!

Replies from: Uni
comment by Uni · 2011-03-29T10:46:07.983Z · LW(p) · GW(p)

100 people is practically nothing compared to the gazillions of future people whose lives are at stake. I agree with Robin Hanson, think carefully for very long. Sacrifice the 100 people per minute for some years if you need to. But you wouldn't need to. With unlimited power, it should be possible to freeze the world (except yourself, and your computer and the power supply and food you need, et cetera) to absolute zero temperature for indefinite time, to get enough time to think about what to do with the world.

Or rather: with unlimited power, you would know immediately what to do, if unlimited power implies unlimited intelligence and unlimited knowledge by definition. If it doesn't, I find the concept "unlimited power" poorly defined. How can you have unlimited power without unlimited intelligence and unlimited knowledge?

So, just like Robin Hanson says, we shouldn't spend time on this problem. We will solve it in the best possible way with our unlimited power as soon as we have got unlimited power. We can be sure the solution will be wonderful and perfect.

Replies from: Houshalter
comment by Houshalter · 2013-09-30T05:54:28.347Z · LW(p) · GW(p)

Or rather: with unlimited power, you would know immediately what to do, if unlimited power implies unlimited intelligence and unlimited knowledge by definition. If it doesn't, I find the concept "unlimited power" poorly defined. How can you have unlimited power without unlimited intelligence and unlimited knowledge?

The entire point of this was an analogy for creating Friendly AI. The AI would have absurd amounts of power, but we have to decide what we want it to do using our limited human intelligence.

I suppose you could just ask the AI for more intelligence first, but even that isn't a trivial problem. Would it be ok to alter your mind in such a way that it changes your personality or your values? Is it possible to increase your intelligence without doing that? And tons of other issues trying to specify such a specific goal.

comment by Cameron_Taylor · 2008-12-16T03:06:54.000Z · LW(p) · GW(p)

PK: I like your system. One difficulty I notice is that you have thrust the states into the role of the omniscient player in the Newcomb problem. Since the states are unable to punish members beyond expelling them, they are open to 'hit and run' tactics. They are left with the need to predict accurately which members and potential members will break a rule, 'two box', and be a net loss to the state with no possibility of punishment. They need to choose people who can one box and stay for the long haul. Executions and life imprisonment are simpler, from a game theoretic perspective.

comment by Cameron_Taylor · 2008-12-16T03:13:01.000Z · LW(p) · GW(p)

James, it's ok. I have unlimited power and unlimited precision. I can turn back time. At least, I can rewind the state of the universe such that you can't tell the difference (http://lesswrong.com/lw/qp/timeless_physics/).

comment by Cameron_Taylor · 2008-12-16T03:15:32.000Z · LW(p) · GW(p)

Tangentially, does anyone know what I'm talking about if I lament how much of Eliezer's thought stream ran through my head, prompted by Sparhawk?

comment by Birgitte · 2008-12-16T03:26:32.000Z · LW(p) · GW(p)

Eliezer: Let's say that someone walks up to you and grants you unlimited power.

Let's not exaggerate. A singleton AI wielding nanotech is not unlimited power; it is merely a Big Huge Stick with which to apply pressure to the universe. It may be the biggest stick around, but it's still operating under the very real limitations of physics - and every inch of potential control comes with a cost of additional invasiveness.

Probably the closest we could come to unlimited power would be pulling everything except the AI into a simulation, and allowing for arbitrary amounts of computation between each tick.

Billy Brown: If you give each individual whatever they want you’ve just destroyed every variety of collectivism or traditionalism on the planet, and those who valued those philosophies will curse you.

It's probably not the worst tradeoff, being cursed only by those who feel their values should take precedence over those of other people.

comment by Bar · 2008-12-16T03:32:26.000Z · LW(p) · GW(p)

But about 100 people die every minute!

If you have unlimited power, and aren't constrained by current physics, then you can bring them back. Of course, some of them won't want this.

Now, if you have (as I'm interpreting this article) unlimited power, but your current faculties, then embarking on a program to bring back the dead could (will?) backfire.

comment by billswift · 2008-12-16T03:33:08.000Z · LW(p) · GW(p)

I think Sparhawk was a fool. But you need to remember, internally he was basically medieval. Also externally you need to remember Eddings is only an English professor and fantasy writer.

comment by Bar · 2008-12-16T03:34:50.000Z · LW(p) · GW(p)

It's probably not the worst tradeoff, being cursed only by those who feel their values should take precedence over those of other people.

Why should your values take precedence over theirs? It sounds like you're asserting that tyranny > collectivism.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-12-16T03:35:43.000Z · LW(p) · GW(p)

@Cameron: Fictional characters with unlimited power sure act like morons, don't they?

Singularitarians: The Munchkins of the real universe.

comment by Pierre-André_Noël · 2008-12-16T03:55:26.000Z · LW(p) · GW(p)

Sorry for being off topic, but has that 3^^^^3 problem been solved already? I just read the posts and, frankly, I fail to see why this caused so many problems.

Among the things that Jaynes repeats a lot in his book is that the sum of all probabilities must be 1. Hence, if you put probability somewhere, you must remove some elsewhere. What is the prior probability for "me being able to simulate/kill 3^^^^3 persons/pigs"? Let's call that nonzero number "epsilon". Now, I guess that the (3^^^^3)-1 case should have a probability greater than or equal to epsilon, same for (3^^^^3)-2 etc. Even with a "cap" at 3^^^^3, this makes epsilon <= 1/(3^^^^3). And this doesn't consider the case "I fail to fulfill my threat and suddenly change into a sofa", let alone all the >=42^^^^^^^42 possible statements in that meta-multiverse. The integral should be one.

Now, the fact that I make said statement should raise the posterior probability to something larger than epsilon, depending on your trust in me etc, but the order of magnitude is at least small enough to cancel out the "immenseness" of 3^^^^3. Is it that simple or am I missing something?

comment by Peter_de_Blanc · 2008-12-16T04:34:08.000Z · LW(p) · GW(p)

Pierre, it is not true that all probabilities sum to 1. Only for an exhaustive set of mutually exclusive events must the probability sum to 1.

comment by Pierre-André_Noël · 2008-12-16T04:44:59.000Z · LW(p) · GW(p)

Sorry, I have not been specific enough. Each of my 3^^^^3, 3^^^^3-1, 3^^^^3-2, etc. examples are mutually exclusive (but the sofa is part of the "0" case). While they might not span all possibilities (not exhaustive) and could thus sum to less than one, they cannot sum to more than 1. As I see it, the weakest assumption here is that "more persons/pigs is less or equally likely". If this holds, the "worst case scenario" is epsilon=1/(3^^^^3), but I would guess far less than that.

comment by Phil_Goetz6 · 2008-12-16T04:51:09.000Z · LW(p) · GW(p)

To ask what God should do to make people happy, I would begin by asking whether happiness or pleasure are coherent concepts in a future in which every person had a Godbot to fulfill their wishes. (This question has been addressed many times in science fiction, but with little imagination.) If the answer is no, then perhaps God should be "unkind", and prevent desire-saturation dynamics from arising. (But see the last paragraph of this comment for another possibility.)

What things give us the most pleasure today? I would say, sex, creative activity, social activity, learning, and games.

Elaborating on sexual pleasure probably leads to wireheading. I don't like wireheading, because it fails my most basic ethical principle, which is that resources should be used to increase local complexity. Valuing wireheading qualia also leads to the conclusion that one should tile the universe with wireheaders, which I find revolting, although I don't know how to justify that feeling.

Social activity is difficult to analyze, especially if interpersonal boundaries, and the level of the cognitive hierarchy to relate to as a "person", are unclear. I would begin by asking whether we would get any social pleasure from interacting with someone whose thoughts and decision processes were completely known to us.

Creative activity and learning may or may not have infinite possibilities. Can we continue constructing more and more complex concepts, to infinity? If so, then knowledge is probably also infinite, for as soon as we have constructed a new concept, we have something new to learn about. If not, then knowledge - not specific knowledge of what you had for lunch today, but general knowledge - may be limited. Creative activity may have infinite possibilities, even if knowledge is finite.

(The answer to whether intelligence has infinite potential has many other consequences; notably, Bayesian reasoners are likely only in a universe in which there are finite useful concepts, because otherwise it will be preferable to be a non-Bayesian reasoning over more complex concepts using faster algorithms.)

Games largely rely on uncertainty, improving mastery, and competition. Most of what we get out of "life", besides relationships and direct hormonal pleasures like sex, food, and fighting, is a lot like what we get from playing a game. One fear is that life will become like playing chess when you already know the entire game tree.

If we are so unfortunate as to live in a universe in which knowledge is finite, then conflict may serve as a substitute for ignorance in providing us a challenge. A future of endless war may be preferable to a future in which someone has won. It may even be preferable to a future of endless peace. If you study the middle ages of Europe, you will probably at some point ask, "Why did these idiots spend so much time fighting, when they could have all become wealthier if they simply stopped fighting long enough for their economies to grow?" Well, those people didn't know that economies could grow. They didn't believe that there was any further progress to be made in any domain - art, science, government - until Jesus returned. They didn't have any personal challenges; the nobility often weren't even allowed to do work. If you read what the nobles wrote, some of them said clearly that they fought because they loved fighting. It was the greatest thrill they ever had. I don't like this option for our future, but I can't rule out the possibility that war might once again be preferable to peace, if there actually is no more progress to be made and nothing to be done.

The answers to these questions also have a bearing on whether it is possible for God, in the long run, to be selfish. It seems that God would be the first person to have his desires saturated, and enter into this difficult position where it is hard to imagine how to want anything. I can imagine a universe, rather like the Buddhist universe, in which various gods, like bubbles, successively float to the top, and then burst into nothingness, from not caring anymore. I can also imagine an equilibrium, in which there are many gods, because the greater the power one acquires, the less interest one has in preserving that power.

comment by luzr · 2008-12-16T05:12:18.000Z · LW(p) · GW(p)

"But considering an unlimited amount of ice cream forced me to confront the issue of what to do with any of it."

"If you invoke the unlimited power to create a quadrillion people, then why not a quadrillion?"

"Say, the programming team has cracked the "hard problem of conscious experience" in sufficient depth that they can guarantee that the AI they create is not sentient - not a repository of pleasure, or pain, or subjective experience, or any interest-in-self - and hence, the AI is only a means to an end, and not an end in itself."

"What is individually a life worth living?"

Really, is not the ultimate answer to the whole FAI issue encoded there?

IMO, the most important thing about AI is to make sure IT IS SENTIENT. Then, with very high probability, it has to consider the very same questions suggested here.

(And to make sure it does, make more of them and make them diverse. Majority will likely "think right" and supress the rest.)

comment by luzr · 2008-12-16T05:24:05.000Z · LW(p) · GW(p)

Phil:

"If we are so unfortunate as to live in a universe in which knowledge is finite, then conflict may serve as a substitute for ignorance in providing us a challenge."

This is inconsistent. What conflict would really do is provide new information to process ("knowledge").

I guess I can agree with the rest of the post. What IMO is worth pointing out is that most pleasures, hormones and instincts excluded, are about processing 'interesting' information.

I guess, somewhere deep in all sentient beings, "interesting information" is the ultimate joy. This has dire implications for any strong AGI.

I mean, the real pleasure for AGI has to be about acquiring new information patterns. Would not it be a little bit stupid to paperclip solar system in that case?

comment by Cameron_Taylor · 2008-12-16T05:31:39.000Z · LW(p) · GW(p)

Errr.... luzr, why would I assume that the majority of GAIs that we create will think in a way I define as 'right'?

comment by Peter_de_Blanc · 2008-12-16T07:37:23.000Z · LW(p) · GW(p)

Pierre, the proposition, "I am able to simulate 3^^^^3 people" is not mutually exclusive with the proposition "I am able to simulate 3^^^^3-1 people."

If you meant to use the propositions D_N: "N is the maximum number of people that I can simulate", then yes, all the D_N's would be mutually exclusive. Then if you assume that P(D_N) ≤ P(D_N-1) for all N, you can indeed derive that P(D_3^^^^3) ≤ 1/3^^^^3. But P("I am able to simulate 3^^^^3 people") = P(D_3^^^^3) + P(D_3^^^^3+1) + P(D_3^^^^3+2) + ..., which you don't have an upper bound for.
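A minimal restatement of this bound argument in LaTeX may make it easier to follow; the D_N notation is the one used just above, while the uniform-mass tail example is an added illustration rather than anything claimed in the thread:

\begin{align*}
&D_N := \text{``$N$ is the maximum number of people that I can simulate''}\\
&\text{The $D_N$ are mutually exclusive and, by assumption, } P(D_N) \le P(D_{N-1}) \text{ for all } N \text{, so}\\
&\qquad N \, P(D_N) \;\le\; \sum_{k=1}^{N} P(D_k) \;\le\; 1 \quad\Longrightarrow\quad P(D_N) \le 1/N.\\
&\text{But ``I am able to simulate $N$ people'' is the union } \textstyle\bigcup_{k \ge N} D_k \text{, and}\\
&\qquad P\Big(\textstyle\bigcup_{k \ge N} D_k\Big) \;=\; \sum_{k \ge N} P(D_k)\\
&\text{has no such bound: e.g. } P(D_k) = 1/M \text{ for } k = 1, \dots, M \text{ with } M \gg N\\
&\text{gives a tail probability of } (M - N + 1)/M \approx 1.
\end{align*}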

comment by Wei_Dai2 · 2008-12-16T08:18:33.000Z · LW(p) · GW(p)

An expected utility maximizer would know exactly what to do with unlimited power. Why do we have to think so hard about it? The obvious answer is that we are adaptation executers, not utility maximizers, and we don't have an adaptation for dealing with unlimited power. We could try to extrapolate a utility function from our adaptations, but given that those adaptations deal only with a limited set of circumstances, we'll end up with an infinite set of possible utility functions for each person. What to do?

James D. Miller: But about 100 people die every minute!

Peter Norvig: Refusing to act is like refusing to allow time to pass.

What about acting to stop time? Preserve Earth at 0 kelvin. Gather all matter/energy/negentropy in the rest of the universe into secure storage. Then you have as much time as you want to think.

comment by luzr · 2008-12-16T08:36:13.000Z · LW(p) · GW(p)

"Errr.... luzr, why would I assume that the majority of GAIs that we create will think in a way I define as 'right'?"

It is not about what YOU define as right.

Anyway, considering that Eliezer is an existing self-aware sentient GI agent with obviously high intelligence, and that he is able to ask such questions despite his original biological programming, makes me suppose that some other powerful strong sentient self-aware GI should reach the same point. I also believe that more general intelligence makes GI converge to such "right thinking".

What makes me worry most is building GAI as a non-sentient utility maximizer. OTOH, I believe that a 'non-sentient utility maximizer' is mutually exclusive with a 'learning' strong AGI system - in other words, any system capable of learning and exceeding human intelligence must outgrow non-sentience and utility maximizing. I might be wrong, of course. But the fact that the universe is not paperclipped yet makes me hope...

Replies from: xxd
comment by xxd · 2012-01-27T18:20:56.696Z · LW(p) · GW(p)

Could reach the same point.

Said Eliezer agent is programmed genetically to value his own genes and those of humanity.

An artificial Eliezer could reach the conclusion that humanity is worth keeping but is by no means obliged to come to that conclusion. On the contrary, genetics determines that at least some of us humans value the continued existence of humanity.

comment by AnnaSalamon · 2008-12-16T09:17:39.000Z · LW(p) · GW(p)

Wei,

Is it any safer to think ourselves about how to extend our adaptation-executer preferences than to program an AI to figure out what conclusions we would come to, if we did think a long time?

I'm thinking here of studies I half-remember about people preferring lottery tickets whose numbers they made up to randomly chosen lottery tickets, and about people thinking themselves safer if they have the steering wheel than if equally competent drivers have the steering wheel. (I only half-remember the studies; don't trust the details.) Do you think a bias like that is involved in your preference for doing the thinking ourselves, or is there reason to expect a better outcome?

comment by Wei_Dai2 · 2008-12-16T09:24:06.000Z · LW(p) · GW(p)

Robin wrote: Having to have an answer now when it seems an likely problem is very expensive.

(I think you meant to write "unlikely" here instead of "likely".)

Robin, what is your probability that eventually humanity will evolve into a singleton (i.e., not necessarily through Eliezer's FOOM scenario)? It seems to me that competition is likely to be unstable, whereas a singleton by definition is stable. Competition can evolve into a singleton, but not vice versa. Given that negentropy increases as mass squared, most competitors have to remain in the center, and the possibility of a singleton emerging there can't ever be completely closed off. BTW, a singleton might emerge from voluntary mergers, not just one competitor "winning" and "taking over".

Another reason to try to answer now, instead of later, is that coming up with a good answer would persuade more people to work towards a singleton, so it's not just a matter of planning for a contingency.

comment by g · 2008-12-16T10:03:28.000Z · LW(p) · GW(p)

Wei Dai, singleton-to-competition is perfectly possible, if the singleton decides it would like company.

comment by Thomas · 2008-12-16T10:37:57.000Z · LW(p) · GW(p)

I quote:

"The young revolutionary's belief is honest. There will be no betraying catch in his throat, as he explains why the tribe is doomed at the hands of the old and corrupt, unless he is given power to set things right. Not even subconsciously does he think, "And then, once I obtain power, I will strangely begin to resemble that old corrupt guard, abusing my power to increase my inclusive genetic fitness."

comment by JulianMorrison · 2008-12-16T10:59:33.000Z · LW(p) · GW(p)

"no sudden moves; think very carefully before doing anything" - doesn't that basically amount to an admission that human minds aren't up to this, that you ought to hurriedly self-improve just to avoid tripping over your own omnipotent feet?

This presents an answer to Eliezer's "how much self improvement?": there has to be some point at which the question "what to do" becomes fully determined and further improvement is just re-proving the already proven. So you improve towards that point and stop.

comment by Bo2 · 2008-12-16T13:45:11.000Z · LW(p) · GW(p)

This is a general point concerning Robin's and Eliezer's disagreement. I'm posting it in this thread because this thread is the best combination of relevance and recentness.

It looks like Robin doesn't want to engage with simple logical arguments if they fall outside of established, scientific frameworks of abstractions. Those arguments could even be damning critiques of (hidden assumptions in) those abstractions. If Eliezer were right, how could Robin come to know that?

comment by derekz2 · 2008-12-16T14:04:19.000Z · LW(p) · GW(p)

I think Robin's implied suggestion is to not be so quick to discard the option of building an AI that can improve itself in certain ways, but not to the point of needing to hardcode something like Coherent Extrapolated Volition. Is it really impossible to make an AI that can become "smarter" in useful ways (including by modifying its own source code, if you like), without it ever needing to take decisions itself that have severe nonlocal effects? If intelligence is an optimization process, perhaps we can choose more carefully what is being optimized until we are intelligent enough to go further.

I suppose one answer is that other people are on the verge of building AIs with unlimited powers so there is no time to be thinking about limiting goals and powers and initiative. I don't believe it, but if true we really are hosed.

It seems to me that if reasoning leads us to conclude that building self-improving AIs is a million-to-one shot to not destroy the world, we could consider not doing it. Find another way.

comment by derekz2 · 2008-12-16T14:05:46.000Z · LW(p) · GW(p)

Oops, Julian Morrison already said something similar.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-12-16T15:00:17.000Z · LW(p) · GW(p)

Just a note of thanks to Phil Goetz for actually considering the question.

comment by Grant · 2008-12-16T15:18:03.000Z · LW(p) · GW(p)

What if creating a friendly AI isn't about creating a friendly AI?

I may prefer Eliezer to grab the One Ring over others who are also trying to grab it, but that does not mean I wouldn't rather see the ring destroyed, or divided up into smaller bits for more even distribution.

I haven't met Eliezer. I'm sure he's a pretty nice guy. But do I trust him to create something that may take over the world? No, definitely not. I find it extremely unlikely that selflessness is the causal factor behind his wanting to create a friendly AI, despite how much he may claim so or how much he may believe so. Genes and memes do not reproduce via selflessness.

comment by V.G. · 2008-12-16T15:39:44.000Z · LW(p) · GW(p)

I have been following your blog for a while, and find it extremely entertaining and also informative.

However, some criticism:

  1. You obviously suffer from what Nassim Taleb calls “ludic fallacy”. That is, applying “perfect” mathematical and logical reasoning to a highly “imperfect” world. A more direct definition would be “linear wishful thinking” in an extremely complex, non-linear environment.

  2. It is admirable that one can afford to indulge in such conversations, as you do. However, bear in mind that the very notion of self you imply in your post is very, very questionable (Talking about the presentation of self in everyday life, Erving Goffman once said: “when the individual is in the immediate presence of others, his activity will have a promissory character.” Do you, somehow, recognize yourself? ;) ).

  3. Being humble is so difficult when one is young and extremely intelligent. However, bear in mind that in the long run, what matters is not who will rule the world, or even whether one will get the Nobel Prize. What matters is the human condition. Bearing this in mind will not hamper your scientific efforts, but will provide you with much more ambiguity – the right fertilizer for wisdom.

comment by Pierre-André_Noël · 2008-12-16T15:43:30.000Z · LW(p) · GW(p)

Peter de Blanc: You are right and I came to the same conclusion while walking this morning. I was trying to simplify the problem in order to easily obtain numbers <=1/(3^^^^3), which would solve the "paradox". We now agree that I oversimplified it.

Instead of messing with a proof-like approach again, I will try to clarify my intuition. When you start considering events of that magnitude, you must consider a lot of events (including waking up with blue tentacles as hands, to take Eliezer's example). The total probability is limited to 1 for exclusive events. Without proof, there is no reason to put more probability there than anywhere else. There is not much proof for a device exterior to our universe that can "read" our choice (giving five dollars or not) and then carry out the claimed act. I don't think that's even falsifiable "from our universe".

If the claim is not falsifiable, the AI should not accept unless I do something "impossible" from its current framework of thinking. A proof request that I am thinking of is to do some calculations with the order-3^^^^3 computer and share easily verifiable results that would otherwise take longer than the age of the universe to obtain. The AI could also ask "simulate me and find a proof that would suit me". Once the AI is convinced, it could also throw in another five dollars and ask for some algorithm improvements that would require a billion years to achieve otherwise. Or for ssh access on the 3^^^^3 computer.

comment by Robin_Hanson2 · 2008-12-16T16:32:51.000Z · LW(p) · GW(p)

Wei, yes I meant "unlikely." Bo, you and I have very different ideas of what "logical" means. V.G., I hope you will comment more.

comment by JamesAndrix · 2008-12-16T16:51:22.000Z · LW(p) · GW(p)

Grant: We did not evolve to handle this situation. It's just as valid to say that we have an opportunity to exploit Eliezer's youthful evolved altruism, get him or others like him to make an FAI, and thereby lock himself out of most of the potential payoff. Idealists get corrupted, but they also die for their ideals.

comment by Racktip_Oddling · 2008-12-16T17:39:04.000Z · LW(p) · GW(p)

I have been granted almighty power, constrained only by the most fundamental laws of reality (which may, or may not, correspond with what we currently think about such things).

What do I do? Whatever it is that you want me to do. (No sweat off my almighty brow.)

You want me to kill thy neighbour? Look, he's dead. The neighbour doesn't even notice he's been killed ... I've got almighty power, and have granted his wish too, which is to live forever. He asked the same about you, but you didn't notice either.

In a universe where I have "almighty" power, I've already banished all contradictions, filled all wishes, made everyone happy or sad (according to their desires), and am now sipping Laphroaig, thinking, "Gosh, that was easy."

comment by Wei_Dai2 · 2008-12-16T21:29:11.000Z · LW(p) · GW(p)

Anna Salamon wrote: Is it any safer to think ourselves about how to extend our adaptation-executer preferences than to program an AI to figure out what conclusions we would come to, if we did think a long time?

First, I don't know that "think about how to extend our adaptation-executer preferences" is the right thing to do. It's not clear why we should extend our adaptation-executer preferences, especially given the difficulties involved. I'd backtrack to "think about what we should want".

Putting that aside, the reason that I prefer we do it ourselves is that we don't know how to get an AI to do something like this, except through opaque methods that can't be understood or debugged. I imagine the programmer telling the AI "Stop, I think that's a bug." and the AI responding with "How would you know?"

g wrote: Wei Dai, singleton-to-competition is perfectly possible, if the singleton decides it would like company.

In that case the singleton might invent a game called "Competition", with rules decided by itself. Anti-prediction says that it's pretty unlikely those rules would happen to coincide with the rules of base-level reality, so base-level reality would still be controlled by the singleton.

comment by Tim_Tyler · 2008-12-16T21:52:15.000Z · LW(p) · GW(p)

If living systems can unite, they can also be divided. I don't see what the problem with that idea could be.

comment by samantha · 2008-12-19T07:01:00.000Z · LW(p) · GW(p)

Hmm, there are a lot of problems here.

"Unlimited power" is a non-starter. No matter how powerful the AGI is it will be of finite power. Unlimited power is the stuff of theology not of actually achievable minds. Thus the ditty from Epicurus about "God" does not apply. This is not a trivial point. I have a concern Eliezer may get too caught up in these grand sagas and great dilemnas on precisely such a theological absolutist scale. Arguing as if unlimited power is real takes us well into the current essay.

"Wieldable without side effects or configuration constraints" is more of the same. Imaginary thinking far beyond the constraints of any actually realizable situation. More theological daydreaming. It should go without saying that there is no such thing as operating without any configuration constraints and with perfect foresight of all possibly effects. I grow more concerned.

"It is wielded with unlimited precision"?! Come on, what is the game here? Surely you do not believe this is possible. In real engineering effective effort only needs to be precise enough. Infinite precision would incur infinite costs.

Personally I don't think that making an extremely powerful intelligence that is not sentient is moral. Actually, I don't think that it is possible. If its goal is to be friendly to humans, it will need to model humans to a very deep level. This will of necessity include human recognition of, and projection of, agency and self toward it. It will need to model itself in relation to the environment and its actions. How can an intelligence with vast understanding and modeling of all around it not model the one part of that environment that is itself? How can it not entertain various ways of looking at itself if it models other minds that do so? I think your notion of a transcendent mind without sentience is yet another impossible pipe dream at best. It could be much worse than that.

I don't believe in unlimited power, certainly not as in the hands of humans, even very bright humans, that create the first AGI. There is no unlimited power and thus the impossible is not in our grasp. It is ridiculous to act as if it is. Either an AGI can be created or it can't. Either it is a good idea to create it without sentience or it isn't. Either this is possible or it is not. Either we can predict its future limiting parameters or we cannot. It seems you believe we or you or some very bright people somewhere have unlimited power to do whatever they decide to do.

comment by MrCheeze · 2011-01-31T03:35:08.913Z · LW(p) · GW(p)

"Give it to you" is a pretty lame answer but I'm at least able to recognise the fact that I'm not even close to being a good choice for having it.

That's more or less completely ignoring the question, but the only answers I could ever come up with at the moment are what I think you call cached thoughts here.

comment by xxd · 2012-01-27T17:59:08.790Z · LW(p) · GW(p)

Now this is the $64 google-illion question!

I don't agree that the null hypothesis - take the ring and do nothing with it - is evil. My definition of evil is coercion leading to loss of resources, up to and including loss of one's self. Thus absolute evil is loss of one's self across humanity, which includes as one use case humanity's extinction (but is not limited to humanity's extinction, obviously, because being converted into zimboes isn't technically extinction...)

Nobody can deny that the likes of Gaddafi exist in the human population: those who are interested in being the total boss of others (even though they add no value to the lives of others), to the extent that they are willing to kill to maintain their boss position.

I would define these people as evil, or as having evil intent. I would thus state that under no circumstances would I like somebody like this to grab the ring of power, and thus I would be compelled to grab it myself.

The conundrum is that I fit the definition of evil myself. Though I don't seek power to coerce as an end in itself I would like the power to defend myself against involuntary coercion.

So I see a Gaddafi equivalent go to grab the ring and I beat him to it.

What do I do next?

Well, I can't honestly say that I have the right to kill the millions of Gaddafi equivalents, but I think that on average they add a net negative to the utility of humanity.

I'm left, however, with the nagging suspicion that under certain circumstances, Gaddafi-type figures might be beneficial to humanity as a whole. Consider: crowdsourcing the majority of political decisions would probably satisfy the average utility function of humanity. It's fair, but not to everybody. We have almost such a system today (even though it's been usurped by corporations). But in times of crisis, such as during war, it's more efficient to have rapid decisions made by a small group of "experts" combined with those who are prepared to make ruthless decisions - so we can't simply kill off the Gaddafis.

What is therefore optimal, in my opinion? I reckon I'd take all the Gaddafis off-planet and put them in simulations, to be recalled only at times of need, leaving sanitized, nice-person zimbo copies of them behind. Then I would destroy the ring of power and return to my previous life, before I was tempted to torture those who have done me harm in the past.

comment by Douglas_Reay · 2012-03-11T09:28:14.260Z · LW(p) · GW(p)

What would you do with unlimited power?

Perhaps "Master, you now hold the ring, what do you wish me to turn the universe into?" isn't a question you have to answer all at once.

Perhaps the right approach is to ask yourself "What is the smallest step I can take that has the lowest risk of not being a strict improvement over the current situation?"

For example, are we less human or compassionate now we have Google available, than we were before that point?

Supposing an AI researcher, a year before the Google search engine was made available on the internet, ended up with 'the ring'. Suppose the researcher had asked the AI to develop for the researcher's own private use an internet search engine of the type that existing humans might create with 1000 human hours of work (with suitable restrictions upon the AI on how to do this, including "check with me before implementing any part of your plan that affects anything outside your own sandbox") and then put itself on hold to await further orders once the engine had been created. If the AI then did create something like Google, without destroying the world, then did put itself fully on hold (not self modifying, doing stuff outside the sandbox, or anything else except waiting for a prompt) - would that researcher then be in a better position to make their next request of the AI? Would that have been a strict improvement on the researcher's previous position?

Imagine a series of milestones on the path to making a decision about what to do with the universe, and work backwards.

You want a human or group of humans who have their intelligence boosted to make the decision instead of you?

Ok, but you don't want them to lose their compassion, empathy or humanity in the process. What are the options for boosting and what does the AI list as the pros and cons (likely effects) of each process?

What is the minimum significant boost with the highest safety factor? And what person or people would it make sense to boost that way? AI, what do you advise are my best 10 options on that, with pros and cons?

Still not sure? Ok, I need a consensus of a few top non-boosted current AI researchers, wise people, smart people, etc. on the people to be boosted and the boosting process, before I 'ok' it. The members of the consensus group should be people who won't go ape-shit, who can understand the problem, who're good at discussing things in groups and reaching good decisions, who'll be willing to cooperate (undeceived and uncoerced) if it is presented to them clearly, and probably a number of other criteria that perhaps you can suggest (like: will they politic and demand to be boosted themselves?). Who do you suggest? Options?

Basically, if the AI can be trusted to give honest good advice without hiding an agenda that's different from your own expressed one, and if it can be trusted to not, in the process of giving that good advice, do anything external that you wouldn't do (such as slaying a few humans as guinea pigs in the process of determining options for boosting), then that's the approach I hope the researcher would take: delay the big decisions, in favour of taking cautious minimally risky small steps towards a better capacity to make the big decision correctly.

Mind you, those are two big "if"s.

Replies from: TheOtherDave, wedrifid
comment by TheOtherDave · 2012-03-11T18:16:49.427Z · LW(p) · GW(p)

Perhaps.

Personally, I suspect that if I had (something I was sufficiently confident was) an AI that can be trusted to give honest good advice without a hidden agenda and without unexpected undesirable side-effects, the opportunity costs of moving that slowly would weigh heavily on my conscience. And if challenged for why I was allowing humanity to bear those costs while I moved slowly, I'm not sure what I would say... it's not clear what the delays are gaining me in that case.

Conversely, if I had something I was insufficiently confident was a trustworthy AI, it's not clear that the "cautious minimally risky small steps" you describe are actually cautious enough.

Replies from: Douglas_Reay
comment by Douglas_Reay · 2012-03-11T23:33:03.024Z · LW(p) · GW(p)

it's not clear what the delays are gaining me in that case.

Freedom.

The difference between the AI making a decision for humanity about what humanity's ideal future should be, and the AI speeding up humanity's own rise in good decision-making capability to the point where humanity can make that same decision (and even, perhaps, come to the same conclusion the AI would have reached, weeks earlier, if told to do the work for us), is that the choice was made by us, not by an external force. That, to many people, is worth something (perhaps even worth the deaths that would happen in the weeks that utopia was delayed by).

It is also insurance against an AI that is benevolent, but has imperfect understanding of humanity. (The AI might be able to gain a better understanding of humanity by massively boosting its own capability, but perhaps you don't want it to take over the internet and all attached computers - perhaps you'd prefer it to remain sitting in the experimental mainframe in some basement of IBM where it currently resides, at least initially)

Replies from: TheOtherDave
comment by TheOtherDave · 2012-03-12T01:06:15.646Z · LW(p) · GW(p)

In the case under discussion (an AI that can be trusted to give honest good advice without a hidden agenda and without unexpected undesirable side-effects) I don't see how the imperfect understanding of humanity matters. Conversely, an AI which would take over resources I don't want it to take over doesn't fall into that category.

That aside though... OK, sure, if the difference to me between choosing to implement protocol A, and having protocol A implemented without my approval, is worth N happy lifetime years (or whatever unit you want to use), then I should choose to retain control and let people die and/or live in relative misery for it.

I don't think that difference is worth that cost to me, though, or worth anything approaching it.

Is it worth that cost to you?

Replies from: Douglas_Reay
comment by Douglas_Reay · 2012-03-12T08:10:40.937Z · LW(p) · GW(p)

Is it worth that cost to you?

Let's try putting some numbers on it.

It is the difference between someone who goes house hunting and then, on finding a house that would suit them perfectly, voluntarily decides to move to it; and that same person being forcibly relocated to that same new house, against their will, by some well-meaning authority.

Using the "three week delay" figure from earlier, a world population of 7 billion, and an average lifespan of 70 years, that gives us approximately 6 million deaths during those three weeks. Obviously my own personal satisfaction and longing for freedom wouldn't be worth that. But there isn't just me to consider - it is also the satisfaction of whatever fraction of those 7 billion people share my attitude towards controlling our own destiny.

If 50% of them shared that attitude, and would be willing to give up 6 weeks of their life to have a share of ownership of The Big Decision (a decision far larger than just which house to live in), then it evens out.
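(For concreteness, here's a quick Python sketch re-deriving the figures in this comment; the 70-year lifespan, the three-week delay, and the 50% share are just the assumptions already stated above, nothing more authoritative:)

```python
# Sketch: re-derive the rough figures used in this comment.
population = 7_000_000_000        # world population (assumed above)
lifespan_years = 70               # assumed average lifespan
delay_weeks = 3                   # the "three week delay" figure

deaths_per_year = population / lifespan_years             # ~100 million/year
deaths_during_delay = deaths_per_year * delay_weeks / 52   # ~5.8 million

# Person-weeks lost if each of those deaths forfeits a full lifespan:
weeks_lost_to_deaths = deaths_during_delay * lifespan_years * 52   # ~2.1e10

# Person-weeks "spent" if 50% of humanity would each give up 6 weeks
# for a share of ownership of The Big Decision:
weeks_spent_on_ownership = 0.5 * population * 6                    # ~2.1e10

print(f"{deaths_during_delay:,.0f} deaths during the delay")
print(f"{weeks_lost_to_deaths:.2e} vs {weeks_spent_on_ownership:.2e} person-weeks")
```

Both come out at around 2e10 person-weeks, which is the sense in which it "evens out".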

Perhaps the first 10 minutes of the 'do it slowly and carefully' route (10 minutes = 2000 lives) should be to ask the AI to look up figures from existing human sources on what fraction of humanity has that attitude, and how strongly they hold it?

And perhaps we need to take retroactive satisfaction from the future population of humanity into account? What if at least 1% of the humanity from centuries to come gains some pride and satisfaction from thinking the world they live in is one that humanity chose? Or at least 1% would feel resentment if it were not the case?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-03-12T13:58:05.256Z · LW(p) · GW(p)

Let's try putting some numbers on it.

OK, sure. Concreteness is good. I would say the first step to putting numbers on this is to actually agree on a unit that those numbers represent.

You seem to be asserting here that the proper unit is weeks of life (I infer that from "willing to give up 6 weeks of their life to have a share of ownership of The Big Decision"), but if so, I think your math is not quite right. For example, suppose implementing the Big Decision has a 50% chance of making the average human lifespan a thousand years, and I have a 1% chance of dying in the next six weeks, then by waiting six weeks I'm accepting a .01 chance of losing a .5 chance of 52000 life-weeks... that is, I'm risking an expected value of 260 life-weeks, not 6. Change those assumptions and the EV goes up and down accordingly.
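(A minimal Python sketch of that expected-value calculation, using only the hypothetical figures given in this comment:)

```python
# Sketch: expected life-weeks risked by waiting, per this comment's hypotheticals.
p_die_during_wait = 0.01               # 1% chance of dying in the next six weeks
p_big_decision_pays_off = 0.5          # 50% chance the Big Decision gives ~1000-year lifespans
life_weeks_if_it_pays_off = 1000 * 52  # 52,000 life-weeks

ev_at_risk = p_die_during_wait * p_big_decision_pays_off * life_weeks_if_it_pays_off
print(ev_at_risk)  # 260.0 -- roughly 260 expected life-weeks at stake, not 6
```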

So perhaps it makes sense to immediately have the AI implement temporary immortality... that is, nobody dies between now and when we make the decision? But then again, perhaps not? I mean, suspending death is a pretty big act of interference... what about all the people who would have preferred to choose not to die, rather than having their guaranteed continued survival unilaterally forced on them?

There's other concerns I have with your calculations here, but that's a relatively simple one so I'll pause here and see if we can agree on a way to handle this one before moving forward.

Replies from: Douglas_Reay, Douglas_Reay
comment by Douglas_Reay · 2012-03-12T14:41:44.196Z · LW(p) · GW(p)

Let's try putting some numbers on it.

OK, sure. Concreteness is good. I would say the first step to putting numbers on this is to actually agree on a unit that those numbers represent.

How about QALYs?

comment by Douglas_Reay · 2012-03-12T15:03:06.399Z · LW(p) · GW(p)

I think your math is not quite right. For example, suppose implementing the Big Decision has a 50% chance of making the average human lifespan a thousand years, and I have a 1% chance of dying in the next six weeks, then by waiting six weeks I'm accepting a .01 chance of losing a .5 chance of 52000 life-weeks... that is, I'm risking an expected value of 260 life-weeks, not 6.

Interesting question.

Economists put a price on a life by looking at things like how much the person would expect to earn (net) during the remainder of their life, and how much money it takes for them to voluntarily accept a certain percentage chance of losing that amount of money. (Yes, that's a vast simplification and inaccuracy). But, in terms of net happiness, it doesn't matter that much which 7 billion bodies are experiencing happiness in any one time period. The natural cycle of life (replacing dead grannies with newborn babies) is more or less neutral, with the grieving caused by the granny dying being balanced by the joy the newborn brings to those same relatives. It matters to the particular individuals involved, but it isn't a net massive loss to the species, yes?

Now no doubt there are things a very very powerful AI (not necessarily the sort we initially have) could do to increase the number of QALYs being experienced per year by the human species. But I'd argue that it is the size of the population and how happy each member of the population is that affects the QALYs, not whether the particular individuals are being replaced frequently or infrequently (except as far as that affects how happy the members are, which depends upon their attitude towards death).

But, either way, unless the AI does something to change overnight how humans feel about death, increasing their life expectancy won't immediately change how much most humans fear a 0.01% chance of dying (even if, rationally, perhaps it ought to).

Replies from: TheOtherDave
comment by TheOtherDave · 2012-03-12T17:13:03.748Z · LW(p) · GW(p)

(shrug) OK. This is why it helps to get clear on what unit we're talking about.

So, if I've understood you correctly, you say that the proper unit to talk about -- the thing we wish to maximize, and the thing we wish to avoid risking the loss of -- is the total number of QALYs being experienced, without reference to how many individuals are experiencing it or who those individuals are. Yes?

All right. There are serious problems with this, but as far as I can tell there are serious problems with every choice of unit, and getting into that will derail us, so I'm willing to accept your choice of unit for now in the interests of progress.

So, the same basic question arises: doesn't it follow that if the AI is capable of potentially creating N QALYs over the course of six weeks then the relevant opportunity cost of delay is N QALYs? In which case it seems to follow that before we can really decide if waiting six weeks is worth it, we need to know what the EV of N is. Right?

Replies from: Douglas_Reay, Douglas_Reay
comment by Douglas_Reay · 2012-03-12T17:29:09.347Z · LW(p) · GW(p)

if the AI is capable of potentially creating N QALYs over the course of six weeks then the relevant opportunity cost of delay is N QALYs? In which case it seems to follow that before we can really decide if waiting six weeks is worth it, we need to know what the EV of N is. Right?

Over three weeks, but yes: right.

If the AI makes dramatic changes to society on a very short time scale (such as uploading everyone's brains to a virtual reality, then making 1000 copies of everyone) then N would be very very large.

If the AI makes minimal immediate changes in the short term (such as, for example, eliminating all nuclear bombs and putting in place measures to prevent hostile AIs from being developed - i.e. acting as insurance against threats to the existence of the human species) then N might be zero.

What the expected value of N is depends on what you think the likely comparative chance is of those two sorts of scenarios. But you can't assume, in the absence of knowledge, that the chances are 50:50.

And, like I said, you could use the first 10 minutes to find out what the AI predicts N would be. If you ask the AI "If I gave you the go ahead to do what you thought humanity would ask you to do, were it wiser but still human, give me the best answer you can fit into 10 minutes without taking any actions external to your sandbox to the questions: what would your plan of action over the next three weeks be, and what improvement in number of QALYs experienced by humans would you expect to see happen in that time?" and the AI answers "My plans are X, Y and Z and I'd expect N to be of an order of magnitude between 10 and 100 QALYs." then you are free to take the nice slow route with a clear conscience.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-03-12T20:46:07.376Z · LW(p) · GW(p)

Sure, agreed that if I have high confidence that letting the AI out of its sandbox doesn't have too much of an upside in the short term (for example, if I ask it and that's what it tells me and I trust its answer), then the opportunity costs of leaving it in its sandbox are easy to ignore.

Also agreed that N can potentially be very, very large, in which case the opportunity costs of leaving it in its sandbox are hard to ignore.

comment by Douglas_Reay · 2012-03-12T18:06:25.754Z · LW(p) · GW(p)

So, if I've understood you correctly, you say that the proper unit to talk about -- the thing we wish to maximize, and the thing we wish to avoid risking the loss of -- is the total number of QALYs being experienced, without reference to how many individuals are experiencing it or who those individuals are. Yes?

All right. There are serious problems with this, but as far as I can tell there are serious problems with every choice of unit, and getting into that will derail us, so I'm willing to accept your choice of unit for now in the interests of progress.

As a separate point, I think the fact that there isn't a consensus on what ought to be maximised is relevant.

Suppose the human species were to spread out onto 1,000,000 planets, and last for 1,000,000 years. What happens to just one planet of humans for one year is very small compared to that. Which means that anything that has even a 1% chance of making a 1% difference to the happiness experienced by our species over its whole lifespan is still 100,000,000 times more important than a year-long delay for our one planet. It would still be 100 times more important than a year off the lifespan of the entire species.
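(A quick Python sketch of the ratios in that paragraph; the 1,000,000 planets and 1,000,000 years are purely the hypotheticals given above:)

```python
# Sketch: how a small chance of a small difference to the whole species-span
# compares with one planet-year, under this comment's hypothetical numbers.
planets = 1_000_000
years = 1_000_000
total_planet_years = planets * years                 # 1e12 planet-years

expected_effect = 0.01 * 0.01 * total_planet_years   # 1% chance of a 1% difference = 1e8

print(expected_effect / 1)        # 100,000,000x one planet for one year
print(expected_effect / planets)  # 100x one year for the entire species
```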

Suppose I were the one who held the ring and, feeling the pressure of 200 lives being lost every minute, I told the AI to do whatever it thought best, or to do whatever maximised the QALYs for humanity and, thereby, set the AI's core values and purpose. An AI that is benevolently inclined towards humanity, even a marginally housetrained one that knows we frown upon things like mass murder (despite that being in a good cause), is not the same as a "safe" AI or one with perfect knowledge of humanity. It might develop better knowledge of humanity later, as it grows in power, but we're talking about a fledgling, just-created AI that's about to have its core purpose expounded to it.

If there's any chance that the holder of the ring is going to give the AI a sub-optimal purpose (maximise the wrong thing), or leave out sensible precautions that the 'small step, cautious milestone' approach might catch, then that's worth the delay.

But, more to the point, do we know there is a single optimal purpose for the AI to have? A single right or wrong thing to maximise? A single destiny for all species? A genetic (or computer code) template that all species will bioengineer themselves to, with no cultural differences? If there is room to place a value on diversity, then perhaps there are multiple valid routes humanity might choose (some, perhaps, involving more sacrifice on humanity's part, in exchange for preserving greater divergence from some single super-happy-fun-fun template, such as valuing freedom of choice). The AI could map our options, advise on which to take for various purposes, even predict which humanity would choose, but it can't both make the choice for us and have that option be the option that we chose for ourselves.

And if humanity does choose to take a path that places value upon freedom of choice, and if there is a small chance that how The Big Decision was made might have even a small impact upon the millions of planets and millions of years, that's a very big consequence for not taking a few weeks to move slowly and carefully.

Replies from: army1987, TheOtherDave
comment by A1987dM (army1987) · 2012-03-12T20:20:57.720Z · LW(p) · GW(p)

told the AI to do whatever it thought best, or to do whatever maximised the QALYs for humanity

Well, it's formulating a definition for the Q in QALY good enough for an AI to understand it without screwing up that's the hard part.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-03-12T23:31:28.242Z · LW(p) · GW(p)

Yes. To be fair, we also don't have a great deal of clarity on what we really mean by L, either, but we seem content to treat "you know, lives of systems sufficiently like us" as an answer.

comment by TheOtherDave · 2012-03-12T20:58:54.678Z · LW(p) · GW(p)

Throwing large numbers around doesn't really help. If the potential upside of letting this AI out of its sandbox is 1,000,000 planets × 10 billion lives/planet × 1,000,000 years × N Quality = Ne22 QALY, then if there's as little as a .00000001% chance of the device that lets the AI out of its sandbox breaking within the next six weeks, then I calculate an EV of -Ne12 QALY from waiting six weeks. That's a lot of QALY to throw away.
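(The same numbers as a Python sketch, just to make the orders of magnitude explicit; every figure is the hypothetical from the paragraph above:)

```python
# Sketch: orders of magnitude for the sandbox-failure example above.
planets = 1e6
lives_per_planet = 1e10
years = 1e6
upside_per_unit_quality = planets * lives_per_planet * years  # 1e22 QALY per unit of N

p_device_breaks = 1e-10            # the .00000001% chance in the comment
ev_lost_by_waiting = p_device_breaks * upside_per_unit_quality

print(f"{upside_per_unit_quality:.0e}")  # 1e+22  ("Ne22 QALY")
print(f"{ev_lost_by_waiting:.0e}")       # 1e+12  (the "-Ne12 QALY" at stake)
```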

The problem with throwing around vast numbers in hypothetical outcomes is that suddenly vanishingly small percentages of those outcomes happening or failing to happen start to feel significant. Humans just aren't very good at that sort of math.

That said, I agree completely that the other side of the coin of opportunity cost is that the risk of letting it out of its sandbox and being wrong is also huge, regardless of what we consider "wrong" to look like.

Which simply means that the moment I'm handed that ring, I'm in a position I suspect I would find crushing... no matter what I choose to do with it, a potentially vast amount of suffering results that might plausibly have been averted had I chosen differently.

That said, if I were as confident as you sound to me that the best thing to maximize is self-determination, I might find that responsibility less crushing. Ditto if I were as confident as you sound to me that the best thing to maximize is anything in particular, including paperclips.

I can't imagine being as confident about anything of that sort as you sound to me, though.

Replies from: Douglas_Reay
comment by Douglas_Reay · 2012-03-12T21:29:05.464Z · LW(p) · GW(p)

The only thing I'm confident of is that I want to hand the decision over to a person or group of people wiser than myself, even if I have to make them in order for them to exist, and that in the meantime I want to avoid doing things that are irreversible (because of the chance the wiser people might disagree and want those things not to have been done) and take as few risks as possible of humanity being destroyed or enslaved. Doing things swiftly is on the list, but lower down the order of my priorities. Somewhere in there too is not being needlessly cruel to a sentient being (the AI itself) - I'd prefer to be a parental figure than a slaver or jailer.

Yes, that's far from being a clear-cut 'boil your own' set of instructions on how to cook up a friendly AI, and it is trying to maximise, minimise or optimise multiple things at once. Hopefully, though, it is at least food for thought, upon which someone else can build something more closely resembling a coherent plan.

comment by wedrifid · 2012-03-11T20:20:03.932Z · LW(p) · GW(p)

Perhaps the right approach is to ask yourself "What is the smallest step I can take that has the lowest risk of not being a strict improvement over the current situation?"

You can get away with (in fact, strictly improve the algorithm by) using only the second of the two caution-optimisers there, so: "What is the step I can take that has the lowest risk of not being a strict improvement over the current situation?"

Naturally, when answering the question you will probably consider small steps - and in the unlikely event that a large step is safer, so much the better!

Replies from: Douglas_Reay
comment by Douglas_Reay · 2012-03-11T23:45:15.529Z · LW(p) · GW(p)

Assuming the person making the decision is perfect at estimating risk.

However, since the likelihood is that it won't be me creating the first-ever AI, but rather that the person who does may be reading this advice, I'd prefer to stipulate that they should go for small steps even if, in their opinion, there is some larger step that's less risky.

The temptation exists for them to ask, as their first step, "AI of the ring, boost me to god-like wisdom and powers of thought", but that has a number of drawbacks they may not think of. I'd rather my advice contain redundant precautions, as a safety feature.

"Of the steps of the smallest size that still advances things, which of those steps has the lowest risk?"

Another way to think about it is to take the steps (or give the AI orders) that can be effectively accomplished with the AI boosting itself by the smallest amount. Avoid, initially, making requests that the AI would need to massively boost itself to accomplish, if you can improve your decision-making position just through requests that the AI can handle with its current capacity.

Replies from: wedrifid
comment by wedrifid · 2012-03-12T05:40:28.216Z · LW(p) · GW(p)

Assuming the person making the decision is perfect at estimating risk.

Or merely aware of the same potential weakness that you are. I'd be overwhelmingly uncomfortable with someone developing a super-intelligence without the awareness of their human limitations at risk assessment. (Incidentally 'perfect' risk assessment isn't required. They make the most of whatever risk assessment ability they have either way.)

"Of the steps of the smallest size that still advances things, which of those steps has the lowest risk?"

I consider this a rather inferior solution - particularly in as much as it pretends to be minimizing two things. Since steps will almost inevitably be differentiated by size the assessment of lowest risks barely comes into play. An algorithm that almost never considers risk rather defeats the point.

If you must artificially circumvent the risk assessment algorithm - presumably to counter known biases - then perhaps make the "small steps" a question of satisficing rather than minimization.

Replies from: Douglas_Reay
comment by Douglas_Reay · 2012-03-12T07:17:47.590Z · LW(p) · GW(p)

Since steps will almost inevitably be differentiated by size the assessment of lowest risks barely comes into play. An algorithm that almost never considers risk rather defeats the point.

If you must artificially circumvent the risk assessment algorithm - presumably to counter known biases - then perhaps make the "small steps" a question of satisficing rather than minimization.

Good point.

How would you word that?

comment by fractalman · 2013-06-26T07:33:41.974Z · LW(p) · GW(p)

I assume we're actually talking NEARLY unlimited power: no actually time-traveling to when you were born and killing yourself just to solve the grandfather paradox once and for all. Given information theory, and the power to bypass the "no quantum xerox" limitation, I could effectively reset the relevant light cone and run it in read-only mode to gather the information needed to make an afterlife for everyone who's died... if I could also figure out how to pre-prune the run to ensure it winds up at exactly the same branch.

But move one is to hit my thought fast-forward button. Just in case I CAN'T do that.

Then I'd install an afterlife. A lot of humans already believe in one, and it solves the problem of death with minimal interference in the status quo. It'll probably be mediocre at first, but it WILL be given cable TV. Whether it gets internet access or not depends on what I decide during that thinking speedup...(I can see issues either way, even with read-only access, and yes I intend to be cautious about it...)

The next step would be to spend time on sites like furaffinity asking for volunteers to get used to transformations. If the volunteer freaks out, they remember it as if it were a weird dream... which, given the crowd in question, should not be that difficult to convince them of. (If they freak out too badly, I guess they won't remember it at all; not sure how I'd set a threshold on that.) (I expect to see some of them writing up their experiences, too, perhaps complaining that the real deal wasn't as great as the fantasy...) In short, I get data at a relatively low cost.

At some point, I get super-effective healing devices into all the hospitals. I may tamper with the effectiveness of the already existing anaesthesia mechanisms, making people feel mere "tiredness" instead of excruciating pain.

At SOME point I start opening two-way travel between the afterlife and the regular world - after opening "cloud and harp heaven", "barbarian heaven", and, of course, "hell" - albeit with safewords. And time limits, so nobody winds up accidentally signing up for 1 whole year...

And yes, I do very much expect that people will sign up for that last one.