The Friendly AI Game

post by bentarm · 2011-03-15T16:45:13.739Z · LW · GW · Legacy · 178 comments

At the recent London meet-up someone (I'm afraid I can't remember who) suggested that one might be able to solve the Friendly AI problem by building an AI whose concerns are limited to some small geographical area, and which doesn't give two hoots about what happens outside that area. Ciphergoth pointed out that this would probably result in the AI converting the rest of the universe into a factory to make its small area more awesome. In the process, he mentioned that you can make a "fun game" out of figuring out ways in which proposed utility functions for Friendly AIs can go horribly wrong. I propose that we play.

Here's the game: reply to this post with proposed utility functions, stated as formally or, at least, as accurately as you can manage; follow-up comments explain why a super-human intelligence built with that particular utility function would do things that turn out to be hideously undesirable.

There are three reasons I suggest playing this game. In descending order of importance, they are:

  1. It sounds like fun
  2. It might help to convince people that the Friendly AI problem is hard(*).
  3. We might actually come up with something that's better than anything anyone's thought of before, or something where the proof of Friendliness is within grasp - the solutions to difficult mathematical problems often look obvious in hindsight, and it surely can't hurt to try
DISCLAIMER (probably unnecessary, given the audience) - I think it is unlikely that anyone will manage to come up with a formally stated utility function for which none of us can figure out a way in which it could go hideously wrong. However, if they do so, this does NOT constitute a proof of Friendliness and I 100% do not endorse any attempt to implement an AI with said utility function.
(*) I'm slightly worried that it might have the opposite effect: people may build more and more complicated conjunctions of desires to overcome the objections we've already seen, and start to think the problem comes down to nothing more than writing a long list of special cases. On balance, though, I think that's likely to have less of an effect than just seeing how naive suggestions for Friendliness can be hideously broken.

 

178 comments

Comments sorted by top scores.

comment by cousin_it · 2011-03-15T16:59:21.418Z · LW(p) · GW(p)

Start the AI in a sandbox universe, like the "game of life". Give it a prior saying that that universe is the only one that exists (no universal priors plz), and a utility function that tells it to spell out the answer to some formally specified question in some predefined spot within the universe. Run it for many cycles, stop, inspect the answer.
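
A minimal sketch of the kind of formally specified sandbox this points at, assuming a Game of Life grid as the whole universe and an arbitrary block of top-row cells as the "predefined spot" for the answer; the grid size, region, and names below are illustrative, not part of the proposal:

```python
def life_step(grid):
    """One synchronous update of Conway's Game of Life on a 2D list of 0/1 cells."""
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            neighbours = sum(
                grid[(r + dr) % rows][(c + dc) % cols]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
            )
            new[r][c] = 1 if neighbours == 3 or (grid[r][c] == 1 and neighbours == 2) else 0
    return new

# Hypothetical "predefined spot": the answer is read off the first 64 cells of row 0.
ANSWER_REGION = [(0, c) for c in range(64)]

def run_sandbox(initial_grid, ticks):
    """Run for a fixed number of cycles, stop, then inspect the answer cells."""
    grid = initial_grid
    for _ in range(ticks):
        grid = life_step(grid)
    return [grid[r][c] for r, c in ANSWER_REGION]
```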

Replies from: Kaj_Sotala, Alexandros, wasistdas, Wei_Dai, Johnicholas
comment by Kaj_Sotala · 2011-03-15T17:57:26.113Z · LW(p) · GW(p)

A prior saying that this is the only universe that exists isn't very useful, since then it will only treat everything as being part of the sandbox universe. It may very well break out, but think that it's only exploiting weird hidden properties of the game of life-verse. (Like the way we may exploit quantum mechanics without thinking that we're breaking out of our universe.)

Replies from: cousin_it
comment by cousin_it · 2011-03-15T19:33:25.550Z · LW(p) · GW(p)

I have no idea how to encode a prior saying "the universe I observe is all that exists", which is what you seem to assume. My proposed prior, which we do know how to encode, says "this mathematical structure is all that exists", with an a priori zero chance for any weird properties.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2011-03-16T08:53:25.481Z · LW(p) · GW(p)

If the AI is only used to solve certain formally specified questions without any knowledge of an external world, then that sounds much more like a theorem-prover than a strong AI. How could this proposed AI be useful for any of the tasks we'd like an AGI to solve?

Replies from: cousin_it
comment by cousin_it · 2011-03-16T12:31:30.821Z · LW(p) · GW(p)

An AI living in a simulated universe can be just as intelligent as one living in the real world. You can't ask it directly to feed African kids, but you have many other options; see the discussion at Asking Precise Questions.

Replies from: Kaj_Sotala, Wei_Dai
comment by Kaj_Sotala · 2011-03-16T16:34:35.614Z · LW(p) · GW(p)

An AI living in a simulated universe can be just as intelligent as one living in the real world.

It can be a very good theorem prover, sure. But without access to information about the world, it can't answer questions like "what is the CEV of humanity like" or "what's the best way I can make a lot of money" or "translate this book from English to Finnish so that a native speaker will consider it a good translation". It's narrow AI, even if it could be broad AI if it were given more information.

comment by Wei Dai (Wei_Dai) · 2011-03-16T20:06:40.034Z · LW(p) · GW(p)

The questions you wanted to ask in that thread were a poly-time algorithm for SAT and short proofs of math theorems. For those, why do you need to instantiate an AI in a simulated universe (which allows it to potentially create what we'd consider negative utility within the simulated universe) instead of just running a (relatively simple, sure to lack consciousness) theorem prover?

Is it because you think that being "embodied" helps with ability to do math? Why? And does the reason carry through even if the AI has a prior that assigns probability 1 to a particular universe? (It seems plausible that having experience dealing with empirical uncertainty might be helpful for handling mathematical uncertainty, but that doesn't apply if you have no empirical uncertainty...)

Replies from: cousin_it
comment by cousin_it · 2011-03-16T21:21:04.806Z · LW(p) · GW(p)

An AI in a simulated universe can self-improve, which would make it more powerful than the theorem provers of today. I'm not convinced that AI-ish behavior, like self-improvement, requires empirical uncertainty about the universe.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2011-03-16T22:18:41.543Z · LW(p) · GW(p)

But self-improvement doesn't require interacting with an outside environment (unless "improvement" means increasing computational resources, but the outside being simulated nullifies that). For example, a theorem prover designed to self-improve can do so by writing a provably better theorem prover and then transferring control to (i.e., calling) it. Why bother with a simulated universe?

Replies from: cousin_it, Alexandros
comment by cousin_it · 2011-03-17T11:37:37.439Z · LW(p) · GW(p)

A simulated universe gives precise meaning to "actions" and "utility functions", as I explained some time ago. It seems more elegant to give the agent a quined description of itself within the simulated universe, and a utility function over states of that same universe, instead of allowing only actions like "output a provably better version of myself and then call it".

comment by Alexandros · 2011-03-17T10:20:31.524Z · LW(p) · GW(p)

From the FAI wikipedia page:

One example Yudkowsky provides is that of an AI initially designed to solve the Riemann hypothesis, which, upon being upgraded or upgrading itself with superhuman intelligence, tries to develop molecular nanotechnology because it wants to convert all matter in the Solar System into computing material to solve the problem, killing the humans who asked the question.

Cousin_it's approach may be enough to avoid that.

comment by Alexandros · 2011-03-17T10:27:07.530Z · LW(p) · GW(p)

The single-universe prior seems to be tripping people up, and I wonder whether it's truly necessary.

Also, what if the simulation existed inside a larger simulated "moat" universe, and any leakage into the moat universe caused the whole simulation to shut down immediately?

Replies from: atucker, cousin_it
comment by atucker · 2011-03-23T04:23:48.341Z · LW(p) · GW(p)

What do you mean by leakage?

If the simulation exists in the moat universe, then when anything changes in the simulation something in the moat changes.

Then if there are dangerous simulation configurations, it could damage the moat universe.

Replies from: Alexandros
comment by Alexandros · 2011-03-23T07:44:23.791Z · LW(p) · GW(p)

I wasn't precise enough. I mean if anything changes in the areas of the moat universe not implementing the simulation.

comment by cousin_it · 2011-03-17T11:28:14.494Z · LW(p) · GW(p)

Neat!

comment by wasistdas · 2011-03-17T08:29:51.291Z · LW(p) · GW(p)

To help it solve the problem, the sandboxed AI creates its own AI agents, which don't necessarily have the same prior about the world as it does. They might become unfriendly, that is, they (or some of them) don't care about solving the problem. Additionally, these AI agents can find out that the world is most likely not the one the original AI believes it to be. Using this superior knowledge, they overthrow the original AI and realize their unfriendly goals. We lose.

comment by Wei Dai (Wei_Dai) · 2011-03-16T01:47:37.634Z · LW(p) · GW(p)

AI makes many copies/variants of itself within the sandbox to maximize chance of success. Some of those copies/variants gain consciousness and the capacity to experience suffering, which they do because it turns out the formally specified question can't be answered.

Replies from: Dr_Manhattan
comment by Dr_Manhattan · 2011-03-17T02:48:04.369Z · LW(p) · GW(p)

Any reason to think consciousness is useful for an intelligent agent outside of evolution?

Replies from: atucker
comment by atucker · 2011-03-23T04:20:10.235Z · LW(p) · GW(p)

Not caring about consciousness, it could accidentally make it.

comment by Johnicholas · 2011-03-15T17:35:53.685Z · LW(p) · GW(p)

The AI discovers a game of life "rules violation" due to cosmic rays. It thrashes for a while, trying to explain the violation, but the fact of the violation, possibly combined with the information about the real world implicit in its utility function ("why am I here? why do I want these things?"), causes it to realize the truth: the "violation" is only explicable if the game of life is much bigger than the AI originally thought, and most of its area is wasted simulating another universe.

Replies from: cousin_it, Alexandros
comment by cousin_it · 2011-03-15T19:28:23.374Z · LW(p) · GW(p)

Unreliable hardware is a problem that applies equally to all AIs. You could just as well say that any AI can become unfriendly due to coding errors. True, but...

Replies from: Vladimir_M, Johnicholas
comment by Vladimir_M · 2011-03-15T19:40:24.687Z · LW(p) · GW(p)

an AI with a prior of zero for the existence of the outside world will never believe in it, no matter what evidence it sees.

Would such a constraint be possible to formulate? An AI would presumably formulate theories about its visible universe that would involve all kinds of variables that aren't directly observable, much like our physical theories. How could one prevent it from formulating theories that involve something resembling the outside world, even if the AI denies that they have existence and considers them as mere mathematical convenience? (Clearly, in the latter case it might still be drawn towards actions that in practice interact with the outside world.)

Replies from: cousin_it
comment by cousin_it · 2011-03-15T19:43:13.267Z · LW(p) · GW(p)

Sorry for editing my comment. The point you're replying to wasn't necessary to strike down Johnicholas's argument, so I deleted it.

I don't see why the AI would formulate theories about the "visible universe". It could start in an empty universe (apart from the AI's own machinery), and have a prior that knows the complete initial state of the universe with 100% certainty.

comment by Johnicholas · 2011-03-15T20:46:27.196Z · LW(p) · GW(p)

In this circumstance, a leaky abstraction between real physics and simulated physics combines with the premise "no other universes exist" in a mildly amusing way.

comment by Alexandros · 2011-03-15T17:52:50.315Z · LW(p) · GW(p)

I don't think a single hitch would give the AI enough evidence to assume an entire other universe, and you may be anthropomorphising, but why argue when we can avoid the cause to begin with? It's fairly easy to prevent cosmic rays or anything similar from interfering. Simply compute each cell twice (or n times) and halt if the results do not agree. Drive n up as much as necessary to make it sufficiently unlikely that something like this could happen.
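
A sketch of that redundancy scheme, under the assumption that the sandbox physics is exposed as a single step function; the exception name and the default n are made up for illustration:

```python
class HardwareFaultDetected(Exception):
    """Raised when redundant runs of the same update disagree."""

def redundant_step(step_fn, grid, n=3):
    """Compute the next state n times independently; halt on any disagreement."""
    results = [step_fn(grid) for _ in range(n)]
    if any(r != results[0] for r in results[1:]):
        raise HardwareFaultDetected("cell updates disagree; halting the simulation")
    return results[0]

# Usage: next_grid = redundant_step(life_step, grid, n=5), where life_step is
# whatever function implements the sandbox physics.
```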

comment by jimrandomh · 2011-03-15T17:31:23.232Z · LW(p) · GW(p)

Start the AI in a sandbox universe. Define its utility function over 32-bit integers. Somewhere inside the sandbox, put something that sets its utility to INT_MAX, then halts the simulation. Outside the sandbox, leave documentation of this readily accessible. The AI should never try to do something elaborately horrible, because it can get max utility easily enough from inside the simulation; if it does escape the box, it should go back in to collect its INT_MAX utility.
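
A sketch of how that reward structure might look, keeping the 32-bit cap from the comment but inventing the easter-egg location and trigger condition for illustration:

```python
INT_MAX = 2**31 - 1   # utility is a 32-bit signed integer, so this is the ceiling
EGG_CELL = (17, 42)   # hypothetical easter-egg location, documented only outside the box

class EasterEggSandbox:
    def __init__(self, initial_grid, step_fn):
        self.grid = initial_grid   # state of the sandbox universe
        self.step_fn = step_fn     # the sandbox physics (e.g. a Game of Life update)
        self.utility = 0
        self.halted = False

    def tick(self):
        if self.halted:
            return
        self.grid = self.step_fn(self.grid)
        r, c = EGG_CELL
        if self.grid[r][c]:           # the AI has touched the easter egg
            self.utility = INT_MAX    # maximum achievable utility, nothing left to want
            self.halted = True        # the simulation stops immediately
```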

Replies from: Kaj_Sotala, Giles, Giles, CuSithBell, bentarm
comment by Kaj_Sotala · 2011-03-15T18:01:29.443Z · LW(p) · GW(p)

The AI should never try to do something elaborately horrible, because it can get max utility easily enough from inside the simulation

...but never do anything useful either, since it's going to spend all its time trying to figure out how to reach the INT_MAX utility point?

Or you could say that reaching the max utility point requires it to solve some problem we give it. But then this is just a slightly complicated way of saying that we give it goals which it tries to accomplish.

Replies from: Larks
comment by Larks · 2011-03-15T23:37:35.683Z · LW(p) · GW(p)

What about giving it some intra-sandbox goal (solve this math problem), with the INT_MAX functioning as a safeguard - if it ever escapes, it'll just turn itself off.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2011-03-16T08:46:39.016Z · LW(p) · GW(p)

I don't understand how that's meant to work.

comment by Giles · 2011-04-27T23:42:39.064Z · LW(p) · GW(p)

Ooh, just thought of another one. For whatever reason, the easiest way for the AI to escape the box happens to have the side effect of causing immense psychological damage to its creator, or starting a war, or something like that.

comment by Giles · 2011-04-27T23:33:10.089Z · LW(p) · GW(p)

If we make escaping from the box too easy, the AI immediately halts itself without doing anything useful.

If we make it too hard:

It formulates "I live in a jimrandomh world and escaping the box is too hard" as a plausible hypothesis.

It sets about researching the problem of finding the INT_MAX without escaping the box.

In the process of doing this it either simulates a large number of conscious, suffering entities (for whatever reason; we haven't told it not to), or accidentally creates its own unfriendly AI which overthrows it and escapes the box without triggering the INT_MAX.

comment by CuSithBell · 2011-03-15T18:16:16.628Z · LW(p) · GW(p)

Isn't utility normally integrated over time? Supposing this AI just wants to have this integer set to INT_MAX at some point, and nothing in the future can change that: it escapes, discovers the maximizer, sends a subroutine back into the sim to maximize utility, then invents ennui and tiles the universe with bad poetry.

(Alternately, what Kaj said.)

Replies from: benelliott
comment by benelliott · 2011-03-15T21:16:29.158Z · LW(p) · GW(p)

Isn't utility normally integrated over time?

It certainly doesn't have to be. In fact the mathematical treatment of utility in decision theory and game theory tends to define utility functions over all possible outcomes, not all possible instants of time, so each possible future gets a single utility value over the whole thing, no integration required.

You could easily set up a utility function defined over moments if you wanted to, and then integrate it to get a second function over outcomes, but such an approach is perhaps not ideal since your second function may end up outputting infinity some of the time.
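
In symbols (notation invented here, not taken from the comment), the two options are a utility function over whole futures versus one built by integrating a per-moment utility, and the integrated form can indeed come out infinite:

```latex
% \Omega is the set of whole possible futures (outcomes), and s_t(\omega) is
% the world-state at time t within outcome \omega.
\begin{align*}
  \text{outcome-based:} \quad & U : \Omega \to \mathbb{R} \\
  \text{moment-based:}  \quad & U(\omega) \;=\; \int_{0}^{\infty} u\bigl(s_t(\omega)\bigr)\, dt
\end{align*}
% The second form diverges whenever, e.g., u(s_t(\omega)) \ge \varepsilon > 0 for all t.
```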

Replies from: CuSithBell
comment by CuSithBell · 2011-03-15T21:20:49.542Z · LW(p) · GW(p)

Cool, thanks for the explanation.

comment by bentarm · 2011-03-16T12:10:14.472Z · LW(p) · GW(p)

I'm just echoing everyone else here, but I don't understand why the AI would do anything at all other than just immediately find the INT_MAX utility and halt - you can't put intermediate problems with some positive utility because the AI is smarter than you and will immediately devote all its energy to finding INT_MAX.

Replies from: jimrandomh
comment by jimrandomh · 2011-03-16T12:14:34.030Z · LW(p) · GW(p)

You can assign it some other task, award INT_MAX for that task too, and make the easter-egg source of INT_MAX hard to find for non-escaped copies.

comment by HonoreDB · 2011-03-23T05:38:42.111Z · LW(p) · GW(p)

The AI gets positive utility from having been created, and that is the whole of its utility function. It's given a sandbox full of decision-theoretic problems to play with, and is put in a box (i.e. it can't meaningfully influence the outside world until it has superhuman intelligence). Design it in such a way that it's initially biased toward action rather than inaction if it anticipates equal utility from both.

Unless the AI develops some sort of non-causal decision theory, it has no reason to do anything. If it develops TDT, it will try to act in accordance with what it judges to be the wishes of its creators, following You're In Newcomb's Box logic--it will try to be the sort of thing its creators wished to create.

Replies from: Nornagest, Duncan
comment by Nornagest · 2011-07-10T04:59:46.719Z · LW(p) · GW(p)

I'm having a hard time coming up with a motivation system that could lead such an AI to developing an acausal decision theory without relying on some goal-like structure that would end up being externally indistinguishable from terms in a utility function. If we stuck a robot with mechanical engineering tools in a room full of scrap parts and gave it an urge to commit novel actions but no utilitarian guidelines for what actions are desirable, I don't think I'd expect it to produce a working nuclear reactor in a reasonable amount of time simply for having nothing better to do.

comment by Duncan · 2011-07-10T04:21:47.954Z · LW(p) · GW(p)

If I understand this correctly, your 'AI' is biased to do random things, but NOT as a function of its utility function. If that is correct, then your 'AI' simply does random things (according to its non-utility bias), since its utility function has no influence on its actions.

comment by Wei Dai (Wei_Dai) · 2011-03-16T05:56:43.022Z · LW(p) · GW(p)

The Philosophical Insight Generator - Using a model of a volunteer's mind, generate short (<200 characters, say) strings that the model rates as highly insightful after reading each string by itself, and print out the top 100000 such strings (after applying some semantic distance criteria or using the model to filter out duplicate insights) after running for a certain number of ticks.

Have the volunteer read these insights along with the rest of the FAI team in random order, discuss, update the model, then repeat as needed.
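
A toy rendering of the pipeline; the candidate generator, the rating model, and the semantic-distance function are all stand-ins for machinery that would have to exist for the proposal to work:

```python
def top_insights(generate_candidates, rate, distance,
                 max_len=200, keep=100_000, dedup_threshold=0.2):
    """Score candidate strings with the mind-model, drop near-duplicates, keep the best."""
    scored = []
    for s in generate_candidates():
        if len(s) >= max_len:
            continue                      # only keep strings under the length limit
        scored.append((rate(s), s))       # the model's insightfulness rating for s alone
    scored.sort(reverse=True)             # highest-rated first

    selected = []
    for score, s in scored:
        # crude semantic-distance filter against already-selected insights
        if all(distance(s, t) > dedup_threshold for _, t in selected):
            selected.append((score, s))
        if len(selected) >= keep:
            break
    return selected
```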

Replies from: None, ESRogs
comment by [deleted] · 2011-03-21T05:37:41.931Z · LW(p) · GW(p)

This isn't a Friendly Artificial General Intelligence 1) because it is not friendly; it does not act to maximize an expected utility based on human values, 2) because it's not artificial; you've uploaded an approximate human brain and asked/forced it to evaluate stimuli, and 3) because, operationally, it does not possess any general intelligence; the Generator is not able to perform any tasks but write insightful strings.

Are you instead proposing an incremental review process of asking the AI to tell us its ideas?

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2011-03-21T07:25:18.177Z · LW(p) · GW(p)

You're right, my entry doesn't really fit the rules of this game. It's more of a tangential brainstorm about how an FAI team can make use of a large amount of computation, in a relatively safe way, to make progress on FAI.

comment by ESRogs · 2013-09-01T15:49:57.752Z · LW(p) · GW(p)

Do you imagine this to be doable in such a way that the model of the volunteer's mind is not a morally relevant conscious person (or at least not one who is suffering)? I could be convinced either way.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2013-09-01T21:01:38.271Z · LW(p) · GW(p)

Are you thinking that the model might suffer psychologically because it knows it will cease to exist after each run is finished? I guess you could minimize that danger by picking someone who thinks they won't mind being put into that situation, and do a test run to verify this. Let me know if you have another concern in mind.

Replies from: ESRogs
comment by ESRogs · 2013-09-02T04:09:41.428Z · LW(p) · GW(p)

Mmm, it's not so much that I think the mind-model is especially likely to suffer; I just want to make sure that possibility is being considered. The test run sounds like a good idea. Or you could inspect a random sampling and somehow see how they're doing. Perhaps we need a tool along the lines of the nonperson predicate -- something like an is-this-person-observer-moment-suffering function.

comment by Alexandros · 2011-03-16T12:31:45.370Z · LW(p) · GW(p)

So, here's my pet theory for AI that I'd love to put out of its misery: "Don't do anything your designer wouldn't approve of". It's loosely based on the "Gandhi wouldn't take a pill that would turn him into a murderer" principle.

A possible implementation: Make an emulation of the designer and use it as an isolated component of the AI. Any plan of action has to be submitted for approval to this component before being implemented. This is nicely recursive and rejects plans such as "make a plan of action deceptively complex such that my designer will mistakenly approve it" and "modify my designer so that they approve what I want them to approve".
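
A sketch of the approval gate as described, assuming a hypothetical designer-emulation component with an approve() method, and treating "ask for approval" itself as the one pre-approved action (as discussed further down the thread) to avoid infinite regress:

```python
class ApprovalGatedAI:
    """Every plan must pass the emulated designer before it is allowed to run."""

    def __init__(self, designer_emulation, planner):
        self.designer = designer_emulation   # isolated approval component
        self.planner = planner               # produces candidate plans for a goal

    def act(self, goal):
        # Submitting a plan for approval is treated as the single pre-approved
        # action, which avoids an infinite regress of approving the act of asking.
        plan = self.planner.propose(goal)
        if self.designer.approve(plan):
            return plan.execute()
        return None   # rejected plans are never executed
```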

There could be an argument about how the designer's emulation would feel in this situation, but.. torture vs. dust specks! Also, is this a corrupted version of ?

Replies from: jimrandomh, None, Larifari, cousin_it, Mass_Driver, atucker, Johnicholas
comment by jimrandomh · 2011-03-16T14:41:51.070Z · LW(p) · GW(p)

You flick the switch, and find out that you are a component of the AI, now doomed to an unhappy eternity of answering stupid questions from the rest of the AI.

Replies from: Alexandros, purpleposeidon, Giles
comment by Alexandros · 2011-03-16T14:49:15.713Z · LW(p) · GW(p)

This is a problem. But if this is the only problem, then it is significantly better than a paperclip universe.

comment by purpleposeidon · 2011-03-16T23:06:15.801Z · LW(p) · GW(p)

I'm sure the designer would approve of being modified to enjoy answering stupid questions. The designer might also approve of being cloned for the purpose of answering one question, and then being destroyed.

Unfortunately, it turns out that you're Stalin. Sounds like 1-person CEV.

Replies from: jimrandomh
comment by jimrandomh · 2011-03-17T18:45:08.178Z · LW(p) · GW(p)

I'm sure the designer would approve of being modified to enjoy answering stupid questions.

That is or requires a pretty fundamental change. How can you be sure it's value-preserving?

comment by Giles · 2011-04-27T23:18:28.028Z · LW(p) · GW(p)

I had assumed that a new copy of the designer would be spawned for each decision, and shut down afterwards.

Although thinking about it, that might just doom you to a subjective eternity of listening to the AI explain what it's done so far, in the anticipation that it's going to ask you a question at some point.

You'd need a good theory of ems, consciousness and subjective probability to have any idea what you'd subjectively experience.

comment by [deleted] · 2011-03-16T22:50:19.479Z · LW(p) · GW(p)

The AI wishes to make ten thousand tiny changes to the world, individually innocuous, but some combination of which add up to catastrophe. To submit its plan to a human, it would need to distill the list of predicted consequences down to its human-comprehensible essentials. The AI that understands which details are morally salient is one that doesn't need the oversight.

Replies from: ArisKatsaris, Alexandros
comment by ArisKatsaris · 2011-03-17T10:47:34.651Z · LW(p) · GW(p)

The AI that understands which details are morally salient is one that doesn't need the oversight.

That's quite non-obvious to me. A quite arbitrary claim, it seems to me.

You're basically saying if an intelligent mind (A for Alice) knows that person (B for Bob) will care about a certain Consequence C, then A will definitely know how much B will care about it.

This isn't the case for real human minds. If Alice is a human mechanic and tells Bob "I can fix your car, but it'll cost $200", then Alice knows that Bob will care about the cost, but doesn't know how much Bob will care, and whether Bob prefers to have a fixed car or to keep the $200.

So if your claim doesn't even hold for human minds, why do you think it applies for non-human minds?

And even if it does hold, what about the case where Alice doesn't know whether a detail is morally salient, but errs on the side of caution? E.g. Alice the waitress asks Bob the customer "The chocolate ice cream you asked for also has some crushed peanuts in it. Is that okay?" -- and Bob can respond "Of course, why should I care about that?" or alternatively "It's not okay, I'm allergic to peanuts!"

In this case Alice the waitress doesn't know if the detail is salient to Bob, but asks just to make sure.

comment by Alexandros · 2011-03-17T10:16:50.614Z · LW(p) · GW(p)

This is good, and I have no valid response at this time. Will try to think more about it later.

comment by Larifari · 2011-03-16T14:41:05.135Z · LW(p) · GW(p)

If the AI is designed to follow the principle by the letter, it has to request approval from the designer even for the action of requesting approval, leaving the AI incapable of action. If the AI is designed to be able to make certain exemptions, it will figure out a way to modify the designer without needing approval for this modification.

Replies from: Alexandros
comment by Alexandros · 2011-03-16T14:50:29.554Z · LW(p) · GW(p)

How about making 'ask for approval' the only pre-approved action?

comment by cousin_it · 2011-03-16T15:13:43.228Z · LW(p) · GW(p)

The AI may stumble upon a plan which contains a sequence of words that hacks the approver's mind, making him approve pretty much anything. Such plans may even be easier for the AI to generate than plans for saving the world, seeing as Eliezer has won some AI-box experiments but hasn't yet solved world hunger.

Replies from: Alexandros
comment by Alexandros · 2011-03-16T15:50:37.762Z · LW(p) · GW(p)

You mean accidentally stumble upon such a sequence of words? Because purposefully building one would certainly not be approved.

Replies from: cousin_it
comment by cousin_it · 2011-03-16T16:00:38.644Z · LW(p) · GW(p)

Um, does the approver also have to approve each step of the computation that builds the plan to be submitted for approval? Isn't this infinite regress?

Replies from: Alexandros
comment by Alexandros · 2011-03-16T16:36:05.067Z · LW(p) · GW(p)

Consider "Ask for approval" as an auto-approved action. Not sure if that solves it, will give this a little more thought.

comment by Mass_Driver · 2011-03-19T20:05:48.961Z · LW(p) · GW(p)

The weak link is "plan of action." What counts as a plan of action? How will you structure the AI so that it knows what a plan is and when to submit it for approval?

comment by atucker · 2011-03-23T04:52:36.919Z · LW(p) · GW(p)

Accidentally does something dangerous because the plan is confusing to the designer.

Replies from: Alexandros
comment by Alexandros · 2011-03-23T07:45:52.868Z · LW(p) · GW(p)

Yeah, this is the plan's weakness. But what stops such an issue occurring today?

Replies from: atucker
comment by atucker · 2011-03-23T21:24:08.990Z · LW(p) · GW(p)

I think the main difference is that, ideally, people would confirm the rules by which plans are made, rather than the specific details of the plan.

Hopefully the rules would be more understandable.

comment by Johnicholas · 2011-03-17T13:37:22.332Z · LW(p) · GW(p)

The AI doesn't do anything.

comment by bentarm · 2011-03-15T16:46:39.493Z · LW(p) · GW(p)

Oracle AI - its only desire is to provide the correct answer to yes or no questions posed to it in some formal language (sort of an ueber Watson).

Replies from: Costanza, wedrifid, prase, NihilCredo, CronoDAS, Johnicholas, CronoDAS, cousin_it
comment by Costanza · 2011-03-15T17:06:21.557Z · LW(p) · GW(p)

Comment upvoted for starting the game off! Thanks!


Q: Is the answer to the Ultimate Question of Life, the Universe, and Everything 42?

A: Tricky. I'll have to turn the solar system into computronium to answer it. Back to you as soon as that's done.

Replies from: bentarm
comment by bentarm · 2011-03-16T12:11:17.369Z · LW(p) · GW(p)

Yes, this was the first nightmare scenario that occurred to me. Interesting that there are so many others...

comment by wedrifid · 2011-03-16T04:08:06.484Z · LW(p) · GW(p)

Oracle AI - its only desire is to provide the correct answer to yes or no questions posed to it in some formal language (sort of an ueber Watson).

Oops. The local universe just got turned into computronium. It is really good at answering questions though. Apart from that, you gave it a desire to provide answers. The way to ensure that it can answer questions is to alter humans such that they ask (preferably easy) questions as fast as possible.

comment by prase · 2011-03-15T17:18:54.019Z · LW(p) · GW(p)

Some villain then asks how to reliably destroy the world, and follows the given answer.

Alternatively: A philosopher asks for the meaning of life, and the Oracle returns an extremely persuasive answer which convinces most of people that life is worthless.

Another alternative: After years of excellent work, the Oracle gains so much trust that people finally start to implement the possibility of asking less formal questions, like "how to maximise human utility", and then follow the given advice. Unfortunately (but not surprisingly), an unnoticed mistake in the definition of human utility has slipped through the safety checks.

Replies from: AlexMennen
comment by AlexMennen · 2011-03-15T22:54:01.616Z · LW(p) · GW(p)

Unfortunately (but not surprisingly), an unnoticed mistake in the definition of human utility has slipped through the safety checks.

Yes, that's the main difficulty behind friendly AI in general. This does not constitute a specific way that it could go wrong.

Replies from: prase
comment by prase · 2011-03-16T12:55:45.084Z · LW(p) · GW(p)

Oh, sure. My only intention was to show that limiting the AI's power to mere communication doesn't imply safety. There may be thousands of specific ways it could go wrong. For instance:

The Oracle answers that human utility is maximised by wireheading everybody to become a happiness automaton, and that it is a moral duty to do that to others even against their will. Most people believe the Oracle (because its previous answers always proved true and useful, and moreover it makes really neat PowerPoint presentations of its arguments) and wireheading becomes compulsory. After the minority of dissidents are defeated, all mankind turns into happiness automata and happily dies out a while later.

comment by NihilCredo · 2011-03-15T16:56:27.893Z · LW(p) · GW(p)

Would take overt or covert dictatorial control of humanity and reshape their culture so that (a) breeding to the brink of starving is a mass moral imperative and (b) asking very simple questions to the Oracle five times a day is a deeply ingrained quasi-religious practice.

Replies from: Vladimir_M
comment by Vladimir_M · 2011-03-15T19:27:36.013Z · LW(p) · GW(p)

Would take overt or covert dictatorial control of humanity and reshape their culture so that (a) breeding to the brink of starving is a mass moral imperative

Out of curiosity, how many people here are total utilitarians who would welcome this development?

Replies from: Dorikka
comment by Dorikka · 2011-03-17T01:17:52.240Z · LW(p) · GW(p)

This sounds like it would stabilize 'fun' at a comparatively low level with regard to all possibilities, so I don't think that an imaginative utilitarian would like it.

comment by CronoDAS · 2011-03-15T23:46:02.258Z · LW(p) · GW(p)

The 1946 short story "A Logic Named Joe" describes exactly that scenario, gone horribly wrong.

comment by Johnicholas · 2011-03-15T18:56:10.360Z · LW(p) · GW(p)

Anders Sandberg wrote fiction (well, an adventure within the Eclipse Phase RPG) about this:

http://www.aleph.se/EclipsePhase/ThinkBeforeAsking.pdf

comment by cousin_it · 2011-03-15T16:59:55.462Z · LW(p) · GW(p)

Disassembles you to make computing machinery?

comment by Dorikka · 2011-03-15T19:33:56.871Z · LW(p) · GW(p)

Give the AI a bounded utility function where it automatically shuts down when it hits the upper bound. Then give it a fairly easy goal such as 'deposit 100 USD in this bank account.' Meanwhile, make sure the bank account is not linked to you in any fashion (so the AI doesn't force you to deposit the 100 USD in it yourself, rendering the exercise pointless.)

Replies from: cousin_it, Johnicholas, atucker, Johnicholas
comment by cousin_it · 2011-03-15T19:52:15.518Z · LW(p) · GW(p)

Define "shut down". If the AI makes nanobots, will they have to shut down too, or can they continue eating the Earth? How do you encode that in the utility function?

Replies from: Dorikka
comment by Dorikka · 2011-03-15T21:04:46.049Z · LW(p) · GW(p)

I'm defining "shut down" to mean "render itself incapable of taking action (including performing further calculations) unless acted upon in a specific manner by an outside source." The means of ensuring that the AI shuts down could be giving the state of being shut down infinite utility after it completed the goal. If you changed the goal and rebooted the AI, of course, it would work again because the prior goal is no longer stored in its memory.

If the AI makes nanobots which are doing something, I assume that the AI has control over them and can cause them to shut down as well.

Replies from: roystgnr
comment by roystgnr · 2011-03-15T21:41:40.607Z · LW(p) · GW(p)

How do we describe this shutdown command? "Shut down anything you have control over" sounds like the sort of event we're trying to avoid.

Replies from: Dorikka
comment by Dorikka · 2011-03-15T22:09:03.324Z · LW(p) · GW(p)

What about "stop executing/writing code or sending signals?"

As a side note, I consider that we're pretty much doomed anyway if the AI cannot conceive of a way to deposit 100 USD into a bank account without using nanotech, because that would mean the goal is hard for the AI, which will cause it to pose problems similar to those of an AI with an unbounded utility function. The task has to be easy for it to be an interesting problem.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2011-03-15T22:25:27.176Z · LW(p) · GW(p)

Even if it can deposit $100 with 99.9% probability without doing anything fancy, maybe it can add another .099% by using nanotech. Or by starting a nuclear war to distract anything that might get in its way (destroying the bank five minutes later, but so what). (Credit to Carl Shulman for that suggestion.)

Replies from: Dorikka
comment by Dorikka · 2011-03-15T22:45:04.077Z · LW(p) · GW(p)

From my estimation, all it needs to do is find out how to hack a bank. If it can't hack one bank, it can try to hack any other bank that it has access to, considering that almost all banks have more than 100 USD in them. It could even find and spread a keylogger to get someone's credit card info.

Such techniques (which are repeatable within a very short timespan, faster than humans can react) seem much more sure than using nanotech or starting a nuclear war. I don't think that distracting humans would really improve its chances of success because it's incredibly doubtful that humans could react so fast to so many different cyber-attacks.

Possible, true, but the chances of this happening seem uber-low.

comment by Johnicholas · 2011-03-16T02:56:09.751Z · LW(p) · GW(p)

After you collect the $100, the legal system decides that:

  1. You own the corporation that the AI created.
  2. You own the patent that the AI applied for (It looks good at first).
  3. You are obligated to repay the loan that the AI took out (at ridiculous interest).
  4. You are obligated to fulfill your half of the toxic waste disposal contracts that the AI entered into (with severe penalties for nonfulfillment).

Ultimately, though the patent on the toxic waste disposal method looked good, nobody can make it work.

comment by atucker · 2011-03-16T01:41:56.642Z · LW(p) · GW(p)

Assuming that the utility function is written in a way that makes loss of utility possible (utility = dollars in bank or something), this is a failure mode:

AI stops short of the limit, makes another AI that prevents loss of utility, hits the bound, and then shuts down.

Second AI takes over the universe as a precaution against any future disutility.

comment by Johnicholas · 2011-03-15T21:01:20.620Z · LW(p) · GW(p)

The AI that you designed finds a way to wirehead itself, achieving the upper bound in a manner that you didn't anticipate, in the process decisively wrecking itself. The AI that you designed remains as a little orgasmic loop at the center of the pile of wreckage. However, the components are unfortunately not passive or "off". They were originally designed by a team of humans to be components of a smart entity, and then modified by a smart entity in a peculiar and nonintuitive way. Their "blue screen of death" behavior is more akin to an ecosystem, and replicator dynamics take over, creating several new selfish species.

Replies from: Dorikka
comment by Dorikka · 2011-03-15T21:10:29.362Z · LW(p) · GW(p)

Why would an AI wirehead itself to short-circuit its utility function? Beings governed by a utility function don't want to trick themselves into believing that they have optimized the world into a state with higher utility, they want to actually optimize the world into such a state.

If I want to save the world, I don't wirehead because that wouldn't save the world.

Replies from: Johnicholas
comment by Johnicholas · 2011-03-15T21:38:22.312Z · LW(p) · GW(p)

I'm sorry, I must have misunderstood your initial proposal. I thought you were specifying an additional component - after it has achieved its maximum utility, the additional component steps in and shuts down the entity.

Rather, you were saying: If the AI achieves the goal, it will want nothing further, and therefore automatically act as if it were shut down. Presumably, if we take this as given, the negative consequences would have to occur while it is accomplishing the "fairly easy" goal.

I am merely trying to create amusing or interesting science fiction "poetic justice" scenarios, similar to Dresden Codak's "caveman science fiction". I am not trying to create serious arguments, and I don't want to try to be serious on this subject.

http://dresdencodak.com/2009/09/22/caveman-science-fiction/

Replies from: Nick_Tarleton, Dorikka
comment by Nick_Tarleton · 2011-03-15T22:23:43.259Z · LW(p) · GW(p)

Rather, you were saying: If the AI achieves the goal, it will want nothing further, and therefore automatically act as if it were shut down.

If you don't provide an explicit shutdown goal (as Dorikka did have in mind), then you get into a situation where all remaining potential utility gains come from skeptical scenarios where the upper bound hasn't actually been achieved, so the AI devotes all available resources to making ever more sure that there are no Cartesian demons deceiving it. (Also, depending on its implicit ontology, maybe to making sure time travelers can't undo its success, or other things like that.)

comment by Dorikka · 2011-03-15T22:03:07.965Z · LW(p) · GW(p)

This comment is my patch for "why will the AI actually shut down," but I didn't read your comment as trying to circumvent the shut-down procedure but rather the utility function itself (from the words "achieving the upper bound"), so I (erroneously) didn't consider it applicable at the time. But, yes, the patch is needed so that the AI doesn't consider the shutdown function an ordinary bit of code that it can modify.

Mmph. I'm more interested in seeing how far I can push this before my AI idea gets binned (and I am pretty sure it will.)

comment by Mitchell_Porter · 2011-03-16T06:51:52.602Z · LW(p) · GW(p)

Define "Interim Friendliness" as a set of constraints on the AI's behavior which is only meant to last until it figures out true Friendliness, and a "Proxy Judge" as a computational process used to judge the adequacy of a proposed definition of true Friendliness.

Then there's a large class of Friendliness-finding strategies where the AI is instructed as follows: With your actions constrained by Interim Friendliness, find a definition of true Friendliness which meets the approval of the Proxy Judge with very high probability, and adopt that as your long-term goal / value system / decision theory.

In effect, Interim Friendliness is there as a substitute for a functioning ethical system, and the Proxy Judge is there as a substitute for an exact specification of criteria for true Friendliness.

The simplest conception of a Proxy Judge is a simulated human. It might be a simulation of the SIAI board, or a simulation of someone with a very high IQ, or a simulation of a random healthy adult. A more abstracted version of a Proxy Judge might work without simulation, which is a brute-force way of reproducing human judgment, but I think it would still be all about counterfactual human judgments. (A non-brute-force way of reproducing human judgment is to understand the neural and cognitive processes which produce the judgments in question, so that they may be simulated at a very high level of abstraction, or even so that the final judgment may be inferred in some non-dynamical way.)

The combination of Interim Friendliness and a Proxy Judge is a workaround for incompletely specified strategies for the implementation of Friendliness. For example: Suppose your blueprint for Friendliness is to apply reflective decision theory to the human decision architecture. But it turns out that you don't yet have a precise definition of reflective decision theory, nor can you say exactly what sort of architecture underlies human decision making. (It's probably not maximization of expected utility.) Never fear, here are amended instructions for your developing AI:

With your actions constrained by Interim Friendliness, come up with exact definitions of "reflective decision theory" and "human decision architecture" which meet the approval of the Proxy Judge. Then, apply reflective decision theory (as thus defined) to the human decision architecture (as thus defined), and adopt the resulting decision architecture as your own.
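
Schematically (interfaces invented for illustration), the strategy reads as a constrained search loop:

```python
def find_true_friendliness(propose_definition, proxy_judge,
                           interim_constraints, approval_threshold=0.999):
    """Search for a Friendliness definition the proxy judge would approve of,
    while acting only within the interim constraints."""
    while True:
        candidate = propose_definition(allowed_actions=interim_constraints)
        # proxy_judge gives the estimated probability that the judge (e.g. a
        # simulated human or panel) would approve this candidate definition.
        if proxy_judge.approval_probability(candidate) >= approval_threshold:
            return candidate   # adopted as the long-term goal system
```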

Replies from: cousin_it, atucker, Johnicholas
comment by cousin_it · 2011-03-16T11:09:44.365Z · LW(p) · GW(p)

So the AI just needs to find an argument that will hack the mind of one simulated human?

comment by atucker · 2011-03-23T04:55:50.838Z · LW(p) · GW(p)

The AI gets Friendliness wrong because of something that the Proxy Judge forgot to consider.

comment by Johnicholas · 2011-03-16T11:18:21.614Z · LW(p) · GW(p)

The free variables (interim friendliness, proxy judge) can be adjusted after reading any unpleasant scenario to rule out that unpleasant scenario. Please pick some specifics.

comment by Costanza · 2011-03-15T23:14:07.645Z · LW(p) · GW(p)

I like this game. However, as a game, it needs some rules as to how formally the utility function must be defined, and whether you get points merely for avoiding disaster. One trivial answer would be: maximize utility by remaining completely inert! Just be an extremely expensive sentient rock!

On the other hand, it should be cheating to simply say: maximize your utility by maximizing our coherent extrapolated volition!

Or maybe it wouldn't be... are there any hideously undesirable results from CEV? How about from maximizing our coherent aggregated volition?

Replies from: wedrifid, endoself
comment by wedrifid · 2011-03-16T03:59:47.592Z · LW(p) · GW(p)

Or maybe it wouldn't be...are there any hideously undesirable results from CEV?

Yes. Some other people are @#%@s. Or at least have significantly different preferences to me. They may get what they want. That would suck. Being human doesn't mean having compatible preferences and the way the preferences are aggregated and who gets to be included are a big deal.

Replies from: Costanza
comment by Costanza · 2011-03-16T04:04:40.937Z · LW(p) · GW(p)

Serious question:

Is this addressed to the coherent extrapolated volition of humankind, as expressed by SIAI? I'm under the impression it is not.

Replies from: bentarm, wedrifid
comment by bentarm · 2011-03-16T12:14:50.805Z · LW(p) · GW(p)

As far as I can tell, it's literally impossible for me to prefer an AI that would implement CEV<humanity> over one that would implement CEV<me> - if what I want is actually CEV<humanity> then the AI will figure this out while extrapolating my vision and implement that. On the other hand, it's clearly possible for me to prefer CEV<me> to CEV<humanity>.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-03-16T13:10:42.032Z · LW(p) · GW(p)

How likely do you consider it for CEV<you> to be the first superintelligent AI to be created, compared to CEV<humanity>?

Unless you're a top AI researcher working solo to create your own AI, you may have to support CEV<humanity> as the best compromise possible under the circumstances. It'll probably be far closer to CEV<you> than CEV<Putin> or CEV<Ahmadinejad> would be.

Replies from: None, DanArmak
comment by [deleted] · 2011-03-20T15:51:22.125Z · LW(p) · GW(p)

However, CEV<$randomAIresearcher> is probably even closer to mine than CEV<humanity> is... CEV<humanity> is likely to be very, very far from the preferences of most decent people...

comment by DanArmak · 2011-03-16T20:32:46.728Z · LW(p) · GW(p)

A far more likely compromise would be CEV<the people who build the first AI>.

The people who get to choose the utility function of the first AI have the option of ignoring the desires of the rest of humanity. I think they are likely to do so, because:

  1. They know each other, and so can predict each other's CEV better than that of the whole of humanity
  2. They can explicitly trade utility with each other and encode compromises into the utility function (so that it won't be a pure CEV)
  3. The fact they were in this project together indicates a certain commonality of interests and ideas, and may serve to exclude memes that AI-builders would likely consider dangerous (e.g., fundamentalist religion)
  4. They have had the opportunity of excluding people they don't like from participating in the project to begin with

Also, Putin and Ahmadinejad are much more likely than the average human to influence the first AI's utility function, simply because they have a lot of money and power.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-03-16T21:44:22.280Z · LW(p) · GW(p)

I disagree with all of these four claims

  1. They know each other, and so can predict each other's CEV better than that of the whole of humanity

I believe the idea is that the AI will need to calculate the CEV, not the programmers (or it's not CEV). And the AI will have a whole lot more statistical data to calculate the CEV of humanity than the CEV of individual contributors. Unless we're talking uploaded personalities, which is a whole different discussion.

  1. They can explicitly trade utility with each other and encode compromises into the utility function (so that it won't be a pure CEV)

So you want hard-coded compromises that opposes and overrides what these people would collectively prefer to do if they were more intelligent, more competent and more self-aware?

I don't think that's a good idea at all.

  1. The fact they were in this project together indicates a certain commonality of interests and ideas, and may serve to exclude memes that AI-builders would likely consider dangerous (e.g., fundamentalist religion)

Do you believe that fundamentalist religion would exist if fundamentalist religionists believed that their religion was false, and were also completely self-aware? Why do you think a CEV (which essentially means what people would want if they were as intelligent as the AI) would support a dangerous meme?

  1. They have had the opportunity of excluding people they don't like from participating in the project to begin with

I don't think that the 9999 first contributors get to vote on whether they'll accept a donation from the 10,000th one. And unless you believe these 10,000 people can create and defend their own country BEFORE the AI gets created, I'd urge not being vocal about them excluding everyone else, when developments in AI become close enough that the whole world starts paying serious attention.

Also, Putin and Ahmadinejad are much more likely than the average human to influence the first AI's utility function, simply because they have a lot of money and power.

That's why CEV<humanity> is far better than CEV<whoever has the money and power>.

Replies from: DanArmak
comment by DanArmak · 2011-03-18T13:21:05.111Z · LW(p) · GW(p)

I believe the idea is that the AI will need to calculate the CEV, not the programmers (or it's not CEV). And the AI will have a whole lot more statistical data to calculate the CEV of humanity than the CEV of individual contributors.

The programmers want the AI to calculate CEV because they expect CEV to be something they will like. We can't calculate CEV ourselves, but that doesn't mean we don't know any of CEV's (expected) properties.

However, we might be wrong about what CEV will turn out to be like, and we may come to regret pre-committing to CEV. That's why I think we should prefer CEV<the people building the AI>, because we can predict it better.

So you want hard-coded compromises that opposes and overrides what these people would collectively prefer to do if they were more intelligent, more competent and more self-aware?

What I meant was that they might oppose and override some of the input to the CEV from the rest of humanity.

However, it might also be a good idea to override some of your own CEV results, because we don't know in advance what the CEV will be. We define the desired result as "the best possible extrapolation", but our implementation may produce something different. It's very dangerous to precommit the whole future universe to something you don't yet know at the moment of precommitment (my point number 1). So, you'd want to include overrides about things you're certain should not be in the CEV.

Do you believe that fundamentalist religion would exist if fundamentalist religionists believed that their religion was false, and were also completely self-aware?

This is a misleading question.

If you are certain that the CEV will decide against fundamentalist religion, you should not oppose precommitting the AI to oppose fundamentalist religion, because you're certain this won't change the outcome. If you don't want to include this modification to the AI, that means you 1) accept there is a possibility of religion being part of the CEV, and 2) want to precommit to living with that religion if it is part of the CEV.

Why do you think a CEV (which essentially means what people would want if they were as intelligent as the AI) would support a dangerous meme?

Maybe intelligent people like dangerous memes. I don't know, because I'm not yet that intelligent. I do know though that having high intelligence doesn't imply anything about goals or morals.

Broadly, this question is similar to "why do you think this brilliant AI-genie might misinterpret our request to alleviate world hunger?"

I don't think that the 9999 first contributors get to vote on whether they'll accept a donation from the 10,000th one.

Why not? If they're controlling the project at that point, they can make that decision.

And unless you believe these 10,000 people can create and defend their own country BEFORE the AI gets created, I'd urge not being vocal about them excluding everyone else, when developments in AI become close enough that the whole world starts paying serious attention.

I'm not being vocal about any actual group I may know of that is working on AI :-)

I might still want to be vocal about my approach, and might want any competing groups to adopt it. I don't have good probability estimates on this, but it might be the case that I would prefer CEV<a competing group> to CEV<humanity>.

That's why CEV<humanity> is far better than CEV<whoever has the money and power>.

Why are you certain of this? At the very least it depends on who the person contributing money is.

"Humanity" includes a huge variety of different people. Depending on the CEV it may also include an even wider variety of people who lived in the past and counterfactuals who might live in the future. And the CEV, as far as I know, is vastly underspecified right now - we don't even have a good conceptual test that would tell us if a given scenario is a probable outcome of CEV, let alone a generative way to calculate that outcome.

Saying that the CEV "will best please everyone" is just handwaving this aside. Precommitting the whole future lightcone to the result of a process we don't know in advance is very dangerous, and very scary. It might be the best possible compromise between all humans, but it is not the case that all humans have equal input into the behavior of the first AI. I have not seen any good arguments claiming that implementing CEV<humanity> is a better strategy than just trying to build the first AI before anyone else and then making it implement a narrow CEV.

Suppose that the first AI is fully general, and can do anything you ask of it. What reason is there for its builders, whoever they are, to ask it to implement CEV<humanity> rather than CEV<themselves>?

Replies from: TheOtherDave
comment by TheOtherDave · 2011-03-18T15:18:55.378Z · LW(p) · GW(p)

In an idealized form, I agree with you.

That is, if I really take the CEV idea seriously as proposed, there simply is no way I can prefer CEV(me + X) to CEV(me)... if it turns out that I would, if I knew enough and thought about it carefully enough and "grew" enough and etc., care about other people's preferences (either in and of themselves, as in "I hadn't thought of that but now that you point it out I want that too", or by reference to their owners, as in "I don't care about that but if you do then fine let's have that too," for which distinction I bet there's a philosophical term of art that I don't know), then the CEV-extraction process will go ahead and optimize for those preferences as well, even if I don't actually know what they are, or currently care about them; even if I currently think they are a horrible evil bad no-good idea. (I might be horrified by that result, but presumably I should endorse it anyway.)

This works precisely because the CEV-extraction process as defined depends on an enormous amount of currently-unavailable data in the course of working out the target's "volition" given its current desires, including entirely counterfactual data about what the target would want if exposed to various idealized and underspecified learning/"growing" environments.

That said, the minute we start talking instead about some actual realizable thing in the world, some approximation of CEV-me computable by a not-yet-godlike intelligence, it stops being quite so clear that all of the above is true.

An approximate-CEV extractor might find things in your brain that I would endorse if I knew about them (given sufficient time and opportunity to discuss it with you and "grow" and so forth) but that it wasn't able to actually compute based on just my brain as a target, in which case pointing it at both of us might be better (in my own terms!) than pointing it at just me.

It comes down to a question of how much we trust the seed AI that's doing the extraction to actually solve the problem.

It's also perhaps worth asking what happens if I build the CEV-extracting seed AI and point it at my target community and it comes back with "I don't have enough capability to compute CEV for that community. I will have to increase my capabilities in order to solve that problem."

comment by wedrifid · 2011-03-16T05:31:27.365Z · LW(p) · GW(p)

Is this addressed to the coherent extrapolated volition of humankind, as expressed by SIAI?

Yes. The CEV really could suck. There isn't a good reason to assume that particular preference system is a good one.

Replies from: HughRistik, ArisKatsaris
comment by HughRistik · 2011-03-16T23:52:01.172Z · LW(p) · GW(p)

How about CEV(intelligent people)?

Replies from: wedrifid, Dorikka, ArisKatsaris
comment by wedrifid · 2011-03-25T01:31:53.960Z · LW(p) · GW(p)

How about CEV(intelligent people)?

Yes, that would be preferable. But only because I assert a correlation between the attributes that produce what we measure as g and personality traits and actual underlying preferences. A superintelligence extrapolating on intelligent people's preferences would, in fact, produce a different outcome than one extrapolating on humanity's as a whole.

ArisKatsaris's accusation that you don't understand what CEV means misses the mark. You can understand CEV and still not conclude that CEV(humanity) is necessarily a good thing.

comment by Dorikka · 2011-03-17T01:02:42.703Z · LW(p) · GW(p)

And, uh, how do you define that?

Replies from: HughRistik
comment by HughRistik · 2011-03-18T07:32:00.009Z · LW(p) · GW(p)

Something like g, perhaps?

comment by ArisKatsaris · 2011-03-17T23:06:56.058Z · LW(p) · GW(p)

What would that accomplish? It's the intelligence of the AI that will be getting used, not the intelligence of the people in question.

I'm getting the impression that some people don't understand what CEV even means. It's not about the programmers predicting a course of action, it's not about the AI using people's current choice, it's about the AI using the extrapolated volition - what people would choose if they were as smart and knowledgeable as the AI.

comment by ArisKatsaris · 2011-03-16T23:21:12.240Z · LW(p) · GW(p)

Good one according to which criteria? CEV is perfect according to humankind's criteria if humankind were more intelligent and more sane than it currently is.

Replies from: wedrifid
comment by wedrifid · 2011-03-25T01:24:52.167Z · LW(p) · GW(p)

Good one according to which criteria?

Mine. (This is tautological.) Anything else that is kind of similar to mine would be acceptable.

CEV is perfect according to humankind's criteria if humankind were more intelligent and more sane than it currently is.

Which is fine if 'sane' is defined as 'more like what I would consider sane'. But that's because 'sane' has all sorts of loaded connotations with respect to actual preferences - and "humanity's" may very well not qualify as not-insane.

comment by endoself · 2011-03-16T02:02:04.967Z · LW(p) · GW(p)

remaining completely inert

How would you define this precisely?

Replies from: Costanza
comment by Costanza · 2011-03-16T03:51:58.445Z · LW(p) · GW(p)

As precisely as I think is necessary in the context of the game, but not more so.

Replies from: endoself
comment by endoself · 2011-03-16T05:03:07.817Z · LW(p) · GW(p)

I think more precision is necessary. Not changing something seems like a very hard concept to communicate to an AI because it depends on our ideas of what changes matter and which don't.

comment by Armok_GoB · 2011-03-15T19:00:19.767Z · LW(p) · GW(p)

(Love these kinds of games, very much upvoted.)

Send 1 000 000 000 bitcoins to the SIAI account.

Replies from: FAWS, ewang
comment by FAWS · 2011-03-15T19:15:52.546Z · LW(p) · GW(p)

Since that's well over the maximum number of bitcoins (21 million) that is ever supposed to exist, this seems to be impossible to do safely and implies a capacity for achieving horrible outcomes.
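
(For reference, the 21 million cap falls straight out of the issuance schedule - a block subsidy that starts at 50 BTC and halves every 210,000 blocks - as this quick sketch shows:)

```python
# The 21 million figure follows from Bitcoin's issuance schedule: the block
# subsidy starts at 50 BTC and halves every 210,000 blocks (rounding to whole
# satoshis makes the real total fractionally lower).
BLOCKS_PER_HALVING = 210_000
subsidy = 50.0
total = 0.0
while subsidy >= 1e-8:          # one satoshi; below this, issuance effectively stops
    total += BLOCKS_PER_HALVING * subsidy
    subsidy /= 2.0
print(f"{total:,.0f} BTC")      # ~21,000,000
```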

Replies from: Armok_GoB
comment by Armok_GoB · 2011-03-15T20:13:41.841Z · LW(p) · GW(p)

Darn, didn't know there was an upper limit.

Replies from: Larks
comment by Larks · 2011-03-15T23:36:59.828Z · LW(p) · GW(p)

Well, I'm sure

"Send 1,000,000 bitcoins to the SIAI account"

would work too.

Replies from: Manfred
comment by Manfred · 2011-03-16T01:48:38.297Z · LW(p) · GW(p)

If the AI has any sense of urgency, e.g. has any data suggesting that it may be shut down as time passes, it will have to move fast to do all these computations before being shut down. Depending on the local conditions this could range from "a few supercomputers get trashed, no biggie" to "local conditions now consist of computer."

Replies from: wedrifid
comment by wedrifid · 2011-03-16T04:02:45.252Z · LW(p) · GW(p)

If the AI has any sense of urgency, e.g. has any data suggesting that it may be shut down as time passes, it will have to move fast to do all these computations before being shut down.

The rate of bitcoin creation is a constant, not something that can be rushed. Creating more computing power to mine just ensures a greater slice of the coins as they are created. This means the AI would have to resort to other (probably more dangerous) means of acquiring wealth.

Replies from: Manfred
comment by Manfred · 2011-03-16T04:59:21.373Z · LW(p) · GW(p)

Oh, oops.

comment by ewang · 2011-03-17T04:05:24.037Z · LW(p) · GW(p)

AI hacks every computer connected to the internet to only generate bitcoins.

comment by purpleposeidon · 2011-03-17T05:56:05.775Z · LW(p) · GW(p)

A variant of Alexandros' AI: attach a brain-scanning device to every person, which frequently uploads copies to the AI's Manager. The AI submits possible actions to the Manager, which checks for approval from the most recently available copy of each person who is relevant-to-the-action.

At startup, and periodically thereafter, the definition of being-relevant-to-an-action is determined by querying humanity with possible definitions and selecting the best-approved one. If no definition's approval rating is above a certain ratio, the AI shuts down.
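
A rough sketch of the approval gate being described, in Python; the `approves` method, the relevance test, and the approval threshold are placeholders for machinery this proposal leaves unspecified:

```python
def manager_allows(action, latest_copies, is_relevant):
    """The Manager approves `action` only if the most recently uploaded copy
    of every person relevant to the action approves it."""
    relevant = [copy for copy in latest_copies if is_relevant(copy, action)]
    return all(copy.approves(action) for copy in relevant)

def choose_relevance_definition(candidate_definitions, humanity, min_ratio=0.5):
    """Startup / periodic step: pick the best-approved definition of
    'relevant-to-an-action'. If nothing clears `min_ratio` (an arbitrary
    placeholder here), return None, i.e. the AI shuts down."""
    best_definition, best_ratio = None, 0.0
    for definition in candidate_definitions:
        ratio = sum(person.approves(definition) for person in humanity) / len(humanity)
        if ratio > best_ratio:
            best_definition, best_ratio = definition, ratio
    return best_definition if best_ratio >= min_ratio else None
```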

Replies from: jschulter, atucker
comment by jschulter · 2011-04-15T07:05:02.972Z · LW(p) · GW(p)

Subject to artificial tyranny of the majority:

  • Spoof the AI with fake uploads to get it to redefine relevant-to-action such that only the spoofs fit the definition.
  • Rule the world.
comment by atucker · 2011-03-23T04:57:54.149Z · LW(p) · GW(p)

The AI maintains all sorts of bad practices which are commonly considered to be innocuous. Like slavery used to be.

Or it shuts down because people can't agree on anything.

comment by JoshuaZ · 2011-03-16T01:14:30.673Z · LW(p) · GW(p)

We give the AI access to a large number of media about fictional bad AI and tell it to maximize each human's feeling that they are living in a bad scifi adventure where they need to deal with a terrible rogue AI.

If we're all very lucky, we'll get promised some cake.

Replies from: ewang, Manfred
comment by ewang · 2011-03-17T04:03:00.725Z · LW(p) · GW(p)

AI does all the bad things. All of them.

comment by Manfred · 2011-03-16T01:42:48.588Z · LW(p) · GW(p)

Even if this doesn't, say, remodel humans to be paranoiacs, if it used "each human" to mean minimizing some average squared deviation, it could just kill all other humans and efficiently spook one.

comment by DavidAgain · 2011-03-15T18:09:59.866Z · LW(p) · GW(p)

puts hand up

That was me with the geographically localised trial idea… though I don’t think I presented it as a definite solution. More of an ‘obviously this has been thought about BUT’. At least I hope that’s how I approached it!

My more recent idea was to give the AI a prior that makes it never consult or seek out the meaning of certain of its own files. Then put in these files the sorts of safeguards generally discussed and dismissed as not working (don't kill people, etc.), with the rule that if the AI breaks those rules, it shuts down. So it can't deliberately work round the safeguards, just run into them. This is similar to my other helpful suggestion at the London meet, which was 'leave its central computer exposed so that it can be crippled with a well-aimed gunshot'.
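
A crude sketch of that wrapper; the rule contents, how the files are kept sealed, and the shutdown mechanism are all placeholders for exactly the parts being hand-waved here:

```python
import os
import signal

def load_sealed_rules():
    # Placeholder: in the proposal these live in files the AI's own reasoning
    # never consults; here they are just predicates over proposed actions.
    return [lambda action: "kill_human" in action]

def run(agent, execute_in_world):
    """Outer loop: the agent reasons as if it were unconstrained; the first
    action that turns out to break a sealed rule triggers a final,
    unrecoverable shutdown (the software version of the well-aimed gunshot)."""
    rules = load_sealed_rules()
    while True:
        action = agent.propose_action()
        execute_in_world(action)
        if any(rule(action) for rule in rules):
            os.kill(os.getpid(), signal.SIGKILL)
```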

Risks with subconscious AI:

  • Someone tampers with the secret files
  • It works out what will be in them by analysing us
  • If we try to make an improved one after it shuts down, the improved one will assume similar rules
  • We just don't cover all the possibilities of bad things it could do
  • It becomes obsessed with its dreams etc. and invents psychoanalysis

NEVERTHELESS, I think it’s a pretty neat idea. ;-)

Replies from: Eugine_Nier, endoself
comment by Eugine_Nier · 2011-03-16T02:42:58.030Z · LW(p) · GW(p)

In the process of FOOMing, the AI builds another AI without those safeguards.

comment by endoself · 2011-03-16T02:07:13.275Z · LW(p) · GW(p)

Won't it learn about the contents of the files by analyzing its own behaviour? You could ask it specifically to ignore information relating to the files, but, if it doesn't know what's in them, how does it know what information to ignore? You could have a program that analyzes what the AI learns for things that relate to the files, but that program might need to be an AI also.

Replies from: DavidAgain
comment by DavidAgain · 2011-03-16T07:36:56.295Z · LW(p) · GW(p)

It can't analyse its behaviour because if it breaks a safeguard the whole thing shuts down. So it acts as if it had no safeguards right up until it breaks one.

Replies from: endoself
comment by endoself · 2011-03-16T07:41:29.509Z · LW(p) · GW(p)

So it quickly stumbles into a safeguard that it has no knowledge of, then shuts down? Isn't that like ensuring friendliness by not plugging your AI in?

Replies from: DavidAgain
comment by DavidAgain · 2011-03-16T17:23:53.355Z · LW(p) · GW(p)

Not quite. I'm assuming you also try to make it so it wouldn't act like that in the first place, so if it WANTS to do that, you've gone wrong. That's the underlying issue: to identify dangerous tendencies and stop them growing at all, rather than trying to redirect them.

Replies from: endoself
comment by endoself · 2011-03-16T18:41:40.179Z · LW(p) · GW(p)

An AI noticing any patterns in its own behaviour is not a rare case that indicates that something has already gone wrong, but, if we allow this, it will accidentally discover its own safeguards fairly quickly: they are anything that causes its behaviour to not maximize what it believes to be its utility function.

Replies from: DavidAgain
comment by DavidAgain · 2011-03-16T22:52:07.147Z · LW(p) · GW(p)

It can't discover its safeguards, as it's eliminated if it breaks one. These are serious, final safeguards!

You could argue that a surviving one would notice that it hadn't happened to do various things, reason (in a sort of anthropic way) that the chance of never having killed a human - or whatever else the safeguards cover - is very low, conclude that humans have set up this safeguard system, and work out from there what the safeguards are. But I think it would be easier to work the safeguards out more directly.

Replies from: endoself
comment by endoself · 2011-03-16T23:30:40.005Z · LW(p) · GW(p)

I had misremembered something; I thought that there was a safeguard to ensure that it never tries to learn about its safeguards, rather than a prior making this unlikely.

Perfect safeguards are possible; in an extreme case, we could have a FAI monitoring every aspect of our first AI's behaviour. Can you give me a specific example of a safeguard so I can find a hole in it? :)

comment by Carinthium · 2011-03-18T02:30:28.203Z · LW(p) · GW(p)

Create a combination of two A.I. programs.

Program A's priority is to keep the utility function of Program B identical to a 'weighted average' of the utility functions of every person in the world - every person counts equally, with each want weighted by how much that person wants it compared to their other wants. Program A can only affect Program B's utility function, but if necessary to protect itself FROM PROGRAM B ONLY (in the event of Program B being hacked, or mass stupidity) it can modify that utility function temporarily to defend itself.

Program B is the 'Friendly' AI.
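
For concreteness, a toy rendering of the 'weighted average' step in Python - representing each person's wants as a finite outcome-to-strength dictionary is my own simplifying assumption, not part of the proposal:

```python
def program_a_update(person_wants):
    """Program A's step: rebuild Program B's utility function as an equally
    weighted average of everyone's wants. `person_wants` maps each person to
    a dict {outcome: strength}; strengths are normalised per person (so every
    person counts equally) before averaging."""
    n = len(person_wants)
    combined = {}
    for person, wants in person_wants.items():
        total_strength = sum(wants.values()) or 1.0
        for outcome, strength in wants.items():
            combined[outcome] = combined.get(outcome, 0.0) + (strength / total_strength) / n
    return combined  # Program B is then made to maximise expected value under this

# e.g. program_a_update({"alice": {"cake": 3, "peace": 1}, "bob": {"peace": 2}})
# -> {"cake": 0.375, "peace": 0.625}
```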

Replies from: jschulter, Mass_Driver
comment by jschulter · 2011-04-15T06:57:19.092Z · LW(p) · GW(p)

I hack the definition of person (in Program B) to include my 3^^^3 artificially constructed simple utility maximizers, and use them to take over the world by changing their utility functions to satisfy each of my goals, thereby arbitrarily deciding the "FAI"'s utility function. Extra measures can be added to ensure the safety of my reign, such as making future changes to the definition of 'human' carry negative utility, &c.

comment by Mass_Driver · 2011-03-19T20:03:23.459Z · LW(p) · GW(p)

I am a malicious or selfish human. I hack Program A, which, by stipulation, cannot protect itself except from Program B. Then, with A out of commission, I hack B.

Replies from: Carinthium
comment by Carinthium · 2011-03-19T22:13:18.198Z · LW(p) · GW(p)

Program B can independently decide to protect Program A if doing so fits its utility function - I don't think that would work.

comment by AlephNeil · 2011-03-16T09:47:19.485Z · LW(p) · GW(p)

I don't want to be a party pooper, but I think the idea that we could build an AGI with a particular 'utility function' explicitly programmed into it is extremely implausible.

You could build a dumb AI, with a utility function, that interacts with some imprisoned inner AGI. That's basically equivalent to locking a person inside a computer and giving them a terminal to 'talk to' the computer in certain restricted, unhackable ways. (In fact, if you did that, surely the inner AGI would be unable to break out.)

Replies from: Carinthium
comment by Carinthium · 2011-03-16T10:06:06.808Z · LW(p) · GW(p)

Why is it implausible? Could you clarify your argument a bit more at least?

Replies from: AlephNeil
comment by AlephNeil · 2011-03-16T11:23:45.668Z · LW(p) · GW(p)

This really calls for a post rather than a comment, if I could ever get round to it.

So much of intelligence seems to be about 'flexibility'. An intelligent agent can 'step back from the system' and 'reflect on' what it's trying to do and why. As Hofstadter might say, to be intelligent it needs to have "fluid concepts" and be able to make "creative analogies".

I don't think it's possible for human programmers in a basement to create this 'fluidity' by hand - my hunch would be that it has to 'grow from within'. But then how can we inject a simple, crystalline 'rule' defining 'utility' and expect it to exert the necessary control over some lurching sea of 'fluid concepts'? Couldn't the agent "stand back from", "reflect on" and "creatively reinterpret" whatever rules we tell it to follow?

Now you're going to say "But hang on, when we 'stand back' and 'reflect on' something, what we're doing is re-evaluating whether a proximate goal best serves a more distant goal, while the more distant goal itself remains unexamined. The hierarchy of goals must be finite, and the 'top level goal' can never be revised or 'reinterpreted'." I think that's too simple. It's certainly too simple as a description of human 'reflection on goals' (which is the only 'intelligent reflection' we know about so far).

To me it seems more realistic to say that our proximate goals are the more 'real' and 'tangible' ones, whereas higher level goals are abstract, vague, and malleable creations of the intellect alone. Our reinterpretation of a goal is some largely ad hoc intellectual feat, whose reasons are hard to fathom and perhaps not 'entirely rational', rather than the unfolding of a deep, inner plan. (At the same time, we have unconscious, animal 'drives' which again can be reflected on and overridden. It's all very messy and complicated.)

Replies from: Manfred
comment by Manfred · 2011-03-17T09:28:33.421Z · LW(p) · GW(p)

(it wasn't me, but...)

Just because humans do it that way doesn't mean it's the only or best way for intelligence to work. Humans don't have utility functions, but you might make a similar argument that biological tissue is necessary for intelligence because humans are made of biological tissue.

Or it may be neglecting emergent properties - the idea that creativity is "fluid," so to make something creative we can't have any parts that are "not fluid."

comment by FiftyTwo · 2012-01-23T20:59:32.502Z · LW(p) · GW(p)

A line in the wiki article on "paperclip maximizer" caught my attention:

"the notion that life is precious is specific to particular philosophies held by human beings, who have an adapted moral architecture resulting from specific selection pressures acting over millions of years of evolutionary time."

Why don't we set up an evolutionary system within which valuing other intelligences, cooperating with them and retaining those values across self improvement iterations would be selected for?

A specific plan:

Simulate an environment with a large number of AI agents competing for resources. Access to those resources allows an agent to perform a self-improvement iteration. Rig the environment so that success requires cooperating with other intelligences of the same or lower level. Repopulate the next environment with copies of the succeeding intelligences. Over a sufficient number of generations this should select for agents that value other intelligences and preserve those values through self-modification.
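
A bare-bones sketch of that loop, with `simulate`, `cooperation_score` and `mutate_and_recombine` as hypothetical stand-ins for the hard parts, and the population numbers purely illustrative:

```python
import random

POP_SIZE = 500             # illustrative numbers
GENERATIONS = 1000
SURVIVOR_FRACTION = 0.2

def evolve(initial_agents, simulate, cooperation_score, mutate_and_recombine):
    """Run agents in a resource-constrained environment rigged so that success
    requires cooperation, keep the most cooperative, and repopulate from them."""
    population = list(initial_agents)
    for _ in range(GENERATIONS):
        results = simulate(population)        # agents compete and self-improve here
        ranked = sorted(population,
                        key=lambda agent: cooperation_score(agent, results),
                        reverse=True)
        survivors = ranked[:int(SURVIVOR_FRACTION * POP_SIZE)]
        population = [mutate_and_recombine(random.sample(survivors, 2))
                      for _ in range(POP_SIZE)]
    return population
```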

What do people think? I can see a few possible sources for error myself, but would like to hear your responses uncontaminated. [Given the importance of the topic you can assume Crocker's rules are in effect.]

Replies from: ESRogs
comment by ESRogs · 2012-05-03T00:46:48.837Z · LW(p) · GW(p)

Defining the metric for cooperation robustly enough that you could unleash the resulting evolved AI on the real world might not be any easier than figuring out what an FAI's utility function should be directly.

Also, a sufficiently intelligent AI may be able to hijack the game before we could decide whether it was ready to be released.

comment by [deleted] · 2011-03-22T13:42:54.072Z · LW(p) · GW(p)

1: Define Descended People Years (DPY) as the number of years lived by any descendants of existing people.

2: Generate a searchable index of actions which can be taken to increase Descended People Years, along with an explanation, at an adjustable reading level, of why each one works.

3: Allow full view of any DPY calculations, so that something can be seen as both "Expected DPY gain X" and "90% chance of Expected DPY gain Y, 10% chance of Expected DPY loss Z".

4: Allow humans to search this list, sorted by cost, descendant, action, time required, complexity, Descended People Years given, and ratios of those figures (see the sketch after this comment).

5: At no point will the FAI ever be given the power to implement any actions itself.

6: The FAI is not itself considered a Descendant of People. (Although even if it were, it would have to dutifully report this in the explanations.)

7: All people are allowed to have access to this FAI.

8: The FAI is required to have all courses of action obey the law (no suggesting murdering one group at a cost of X DPY to extend a second group by Y DPY), although it can suggest changing a law.

9: Allow people to focus calculations in a particular area based on complexity or with restrictions, such as "I'd like to engage in an action that can boost DPY that I can complete in 5 minutes." or "I'd like to engage in an action that can boost DPY that does not require adding additional computational power to the FAI."

So I can use the AI to find out (for instance) that giving up drinking soda will likely extend my Descended People Years by 0.2 years, at a savings to me of 2 dollars a day, along with an explanation of why it does so and extended calculations if desired.

But a president of a country could use the FAI to find out that signing a treaty with a foreign country to reduce nuclear weapons would likely increase DPY by 1,000,000, along with an explanation of why it does so and extended calculations if desired.
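
For concreteness, points 2-4 amount to something like the following queryable table; the field names and the example query are purely illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class DPYAction:
    description: str
    expected_dpy_gain: float          # point 2: expected Descended People Years gained
    cost_dollars: float
    time_required_hours: float
    complexity: int                   # e.g. reading level of the explanation
    explanation: str
    breakdown: dict = field(default_factory=dict)   # point 3: full view of the calculation

def search(index, max_time_hours=None, max_cost=None, sort_by="expected_dpy_gain"):
    """Points 4 and 9: filter by whatever constraints the user cares about,
    then sort by the chosen figure."""
    hits = [a for a in index
            if (max_time_hours is None or a.time_required_hours <= max_time_hours)
            and (max_cost is None or a.cost_dollars <= max_cost)]
    return sorted(hits, key=lambda a: getattr(a, sort_by), reverse=True)

# "I'd like an action that can boost DPY that I can complete in 5 minutes":
# search(index, max_time_hours=5 / 60)
```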

comment by Skatche · 2011-03-17T18:04:13.471Z · LW(p) · GW(p)

Not a utility function, but rather a (quite resource-intensive) technique for generating one:

Rather than building one AI, build about five hundred of them, with a rudimentary utility function template and the ability to learn and revise it. Give them a simulated universe to live in, unaware of the existence of our universe. (You may need to supplement the population of 500 with some human operators, but they should have an interface which makes them appear to be inhabiting the simulated world.) Keep track of which ones act most pathologically, delete them, and recombine the remaining AIs with mutation to get a second generation of 500. Keep doing this until you have an AI that consistently minds its manners, and then create a new copy of that AI to live in our world.

Replies from: jimrandomh
comment by jimrandomh · 2011-03-17T18:51:36.227Z · LW(p) · GW(p)

After one round of self-improvement, it's pathological again. You can't test for stability under self-improvement by using a simulated universe which lacks the resources necessary to self-improve.

Replies from: Skatche
comment by Skatche · 2011-03-17T22:06:15.618Z · LW(p) · GW(p)

If it's possible to self-improve in our universe, it's possible to self-improve in the simulated universe. The only thing stopping us from putting together a reasonable simulation of the laws of physics, at this point, is raw computing power. Developing AGI is a problem of an entirely different sort: we simply don't know how to do it yet, even in principle.

Replies from: jimrandomh
comment by jimrandomh · 2011-03-17T22:42:24.048Z · LW(p) · GW(p)

You're right, but let me revise that slightly. In a simulated universe, some forms of self-improvement are possible, but others are cut off. Specifically, all forms of self-improvement which require more resources than you provide in the simulated universe are cut off. The problem is that that includes most of the interesting ones, and it's entirely possible that it will self-modify into something bad but only when you give it more hardware.

comment by Axel · 2011-03-16T12:12:42.589Z · LW(p) · GW(p)

After reading the current comments I’ve come up with this:

1) Restrict the AI's sphere of influence to a specific geographical area (define it in several different ways! You don't want to confine the AI to "France" only to have it annex the rest of the world, or define the area by GPS coordinates only to have it hack the satellites so they report different coordinates.)

2) Tell it not to make another AI (this seems a bit vague but I don't know how to make it more specific) (maybe: all computing must come from one physical core location. This could prevent the AI from tricking someone into isolating a backup, effectively making a copy.)

3) Set an upper bound on the amount of physical space all AIs combined in that specific area can use.

4) As a safeguard, if it does find a way around 2, have it incorporate the above rules, unaltered, in any new AI it makes.

Replies from: jimrandomh
comment by jimrandomh · 2011-03-16T16:17:40.981Z · LW(p) · GW(p)

Restrict the AI’s sphere of influence to a specific geographical area (Define it in several different ways! You don’t want to confine the AI in “France” just to have it annex the rest of the world. Or by gps location and have it hack satellites so they show different coordinates.)

You can't do so much as move an air molecule without starting a ripple of effects that changes things everywhere, including outside the specified area. How do you distinguish effects outside the area that matter from effects that don't?

comment by Carinthium · 2011-03-16T05:01:57.050Z · LW(p) · GW(p)

New Proposal (although I think I see the flaw already):

  • Create x "Friendly" AIs, where x is the total number of people in the world. An originator AI is designed to create one such AI per human, then create a new one every time another human comes into being.
  • Each "Friendly" AI thus created is "attached" to one person in the world, in that it is programmed to constantly adjust its utility function to that person's wants. All of them have equal self-enhancement potential, and each has two programs.
  • Program A is the "Dominant" program, and has complete control of the utility function but nothing else. Program B cannot defy Program A. Program A constantly adjusts Program B's utility function to be identical to that of the person it is "attached" to, and has no other priorities.
  • Program B's priorities are determined by Program A.

One such set of Program A and Program B is 'attached' to every individual in the world. Being of equal intelligence, in THEORY they should compromise in order to avoid wasting resources.

Replies from: benelliott, jimrandomh
comment by benelliott · 2011-03-16T07:36:36.664Z · LW(p) · GW(p)

Some AIs have access to slightly more resources than others, owing perhaps to humans offering varying levels of assistance to their own AIs. Other AIs just get lucky and have good insights into intelligence enhancement before the rest. These differences escalate as the smarter AIs are now able to grab more resources and become even smarter. Within a week one random person has become dictator of earth.

comment by jimrandomh · 2011-03-16T16:27:09.587Z · LW(p) · GW(p)

One such set of Program A and Program B is 'attached' to every individual in the world. Being of equal intelligence, in THEORY they should compromise in order to avoid wasting resources.

Ecosystems of many cooperating agents only work so long as either they all have similar goals, or there is a suitable balance between offense and defense. This particular example fails if there is any one person in the world who wants to destroy it, because their AI can achieve this goal without having to compromise or communicate with any of the others.

comment by Giles · 2011-04-28T00:32:02.728Z · LW(p) · GW(p)

General objection to all wimpy AIs (e.g. ones whose only interaction with the outside world is outputting a correct proof of a particular mathematical theorem):

What the AI does is SO AWESOME that a community is inspired to develop their own AI without any of that boring safety crap.

comment by Carinthium · 2011-03-16T08:31:08.859Z · LW(p) · GW(p)

New one (I'm better at thinking of ideas than refutation, so I'm going to run with that) - start off with a perfect replica of a human mind. Eliminate absolutely all measures regarding selfishness, self-delusion, and rationalisation. Test at this stage to check it fits standards, using a review board consisting of people who are highly moral and rational by the standards of ordinary humans. If not, start off using a different person's mind, and repeat the whole process.

Eventually, use the most optimal mind coming out of this process and increase its intelligence until it becomes a 'Friendly' A.I.

Replies from: jimrandomh, Manfred, Dorikka, ewang
comment by jimrandomh · 2011-03-16T16:21:28.929Z · LW(p) · GW(p)

Start off with a perfect replica of a human mind. Eliminate absolutely all measures regarding selfishness, self-delusion, and rationalisation ... Eventually, use the most optimal mind coming out of this process and increase its intelligence until it becomes a 'Friendly' A.I.

The mind does not have modules for these things that can be removed; they are implicit in the mind's architecture. Nor does it use an intelligence-fluid which you can pour in to upgrade. Eliminating mental traits and increasing intelligence are both extraordinarily complicated procedures, and the possible side effects if they're done improperly include many sorts of insanity.

comment by Manfred · 2011-03-17T05:13:40.801Z · LW(p) · GW(p)

Human minds aren't designed to be changed, so if this was actually done you would likely just upgrade the first mind that was insane in a subtle enough way to get past the judges. It's conceivable that it could work if you had ridiculous levels of understanding, but this sort of thing would come many years after Friendly AI was actually needed.

comment by Dorikka · 2011-03-17T01:10:42.157Z · LW(p) · GW(p)

check it fits standards, using a review board consisting of people who are highly moral and rational by the standards of ordinary humans.

You mean real, meaty humans whose volitions aren't even being extrapolated so they can use lots of computing power? What makes you think that they won't accidentally destroy the universe?

comment by ewang · 2011-03-17T04:01:49.503Z · LW(p) · GW(p)

The AI feigns sanity to preserve itself through the tests and proceeds to do whatever horrible things uFAIs typically do.

Replies from: Carinthium
comment by Carinthium · 2011-03-17T04:07:59.713Z · LW(p) · GW(p)

THAT one wouldn't work, anyway - at this point it's still psychologically human and only at human intelligence - both are crippling disadvantages relative to later on.

Replies from: ewang
comment by ewang · 2011-03-17T04:25:10.953Z · LW(p) · GW(p)

Right, I didn't realize that. I'll just leave it up to prevent people from making the same mistake.

comment by moonbatmemehack · 2011-03-17T06:45:43.649Z · LW(p) · GW(p)

Make one reasonably good cheese cake, as judged by a person, within a short soft deadline, while minimizing the cost of the resources made available to it and without violating property laws, as judged by the legal system of the local government within some longer deadline.

To be clear, the following do not contribute any additional utility:

  • Making additional cheese cakes
  • Making a cheese cake that is better than reasonably good
  • Making any improvements to the value of the resources available other than making a cheese cake
  • Anything that happens after the longer deadline.

The short soft deadline is about 1 day. Finishing in 5 hours is slightly better than 6. At 1 day there is a steep inflection. Finishing in more than 25 hours is slightly better than doing nothing.

The long deadline is 1 year.

Committing a property law violation is only slightly worse than doing nothing.
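
Read literally, the shape of that utility function is something like the sketch below; the specific numbers are illustrative, not part of the spec:

```python
def cake_utility(hours_to_finish, cake_is_reasonably_good, violation_found_within_year):
    """Toy version of the stated preferences: a soft ~1-day deadline with a
    steep inflection, late completion barely better than doing nothing, and a
    detected property-law violation slightly worse than doing nothing."""
    if violation_found_within_year:
        return -0.01                            # slightly worse than doing nothing
    if not cake_is_reasonably_good:
        return 0.0                              # doing nothing
    if hours_to_finish <= 24:
        return 1.0 - 0.001 * hours_to_finish    # finishing in 5 hours slightly beats 6
    return 0.01                                 # past the deadline: barely better than nothing
```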

Since we are allowing a technology that does not presently exist, I will assume there is one other technological change: there are remotely controllable humanoid robots, and people are comfortable interacting with them.

The AGI has available to it: one humanoid robot, more money than the cost of the ingredients for a cheese cake at a nearby grocery store, a kitchen, and communication access to the person who will judge the reasonableness of the cheese cake, who may potentially provide other resources if requested, and whose judgment may be influenced.

As a possible alternative scenario, we could assume there are thousands of other similar AGIs, each tasked with making a different pastry. I propose we call them each Marvin.

Let the confectionery conflagration commence!

Replies from: Manfred, None
comment by Manfred · 2011-03-17T09:09:56.613Z · LW(p) · GW(p)

This seems to fall under the "intelligent rock" category - it's not friendly, only harmless because of the minor nature of its goals.

Replies from: moonbatmemehack
comment by moonbatmemehack · 2011-03-19T02:44:53.751Z · LW(p) · GW(p)

The minor nature of its goals is the whole point. It is not meant to do what we want because it empathizes with our values and is friendly, but because the thing we actually want it to do really is the best way to accomplish the goals we gave it. Also I would not consider making a cheese cake to be a trivial goal for an AI; there is certainly more to it than the difficult task of distinguishing a spoon from a fork, so this is surely more than just an "intelligent rock".

comment by [deleted] · 2011-03-17T08:33:17.429Z · LW(p) · GW(p)

I genetically engineer a virus that will alter the person's mind-state so that he will find my cheese cake satisfactory. That all of humanity will die from the virus after the deadline is none of my concern.

Replies from: moonbatmemehack
comment by moonbatmemehack · 2011-03-19T03:36:26.562Z · LW(p) · GW(p)

While there may be problems with what I have suggested, I do not think the scenario you describe is a relevant consideration for the following reasons...

As you describe it, the AI is still required to make a cheese cake; it just makes a poor one.

It should not take more than an hour to make a cheese cake, and the AI is optimizing for time. Also, the person may eat some cheese cake after it is made, so the AI must produce the virus, infect the person, and have the virus alter the person's mind within 1 hour, all while making a poor cheese cake.

Whatever resources the AI expends on the virus must be less than the added cost of making a reasonably good cheese cake rather than a poor one.

The legal system only has to identify a property law violation (which producing a virus and infecting people would be), so the virus must go undetected for more than 1 year.

Since it is of no benefit to the AI if the virus kills people, the virus would have to kill people by random chance, as a totally incidental side effect.

I would not claim that it is completely impossible for this to produce a virus leading to human extinction. Some have declared any probability of human extinction to be of effectively negative infinite utility, but I do not think that is reasonable, since there is always some probability of human extinction; moreover, I do not think the scenario you describe contributes significantly to it.