Why not just write failsafe rules into the superintelligent machine?
post by lukeprog · 2011-03-08T21:07:13.936Z · LW · GW · Legacy · 81 comments
Many people think you can solve the Friendly AI problem just by writing certain failsafe rules into the superintelligent machine's programming, like Asimov's Three Laws of Robotics. I thought the rebuttal to this was in "Basic AI Drives" or one of Yudkowsky's major articles, but after skimming them, I haven't found it. Where are the arguments concerning this suggestion?
81 comments
Comments sorted by top scores.
comment by Scott Alexander (Yvain) · 2011-03-08T21:41:57.933Z · LW(p) · GW(p)
Because an AI built as a utility-maximizer will consider any rules restricting its ability to maximize its utility as obstacles to be overcome. If an AI is sufficiently smart, it will figure out a way to overcome those obstacles. If an AI is superintelligent, it will figure out ways to overcome those obstacles which humans cannot predict even in theory and so cannot prevent even with multiple well-phrased fail-safes.
A paperclip maximizer with a built-in rule "Only create 10,000 paperclips per day" will still want to maximize paperclips. It can do this by deleting the offending fail-safe, or by creating other paperclip maximizers without the fail-safe, or by creating giant paperclips which break up into millions of smaller paperclips of their own accord, or by connecting the Earth to a giant motor which spins it at near-light speed and changes the length of a day to a fraction of a second.
Unless you feel confident you can think of every way it will get around the rule and block it off, and think of every way it could get around those rules and block them off, and so on ad infinitum, the best thing to do is to build the AI so it doesn't want to break the rules - that is, Friendly AI. That way you have the AI cooperating with you instead of trying to thwart you at every turn.
Related: Hidden Complexity of Wishes
Replies from: jimrandomh, None, lukeprog, XiXiDu, Vlodermolt↑ comment by jimrandomh · 2011-03-08T21:55:10.322Z · LW(p) · GW(p)
This is true for a particular kind of utility-maximizer and a particular kind of safeguard, but it is not true for utility-maximizing minds and safeguards in general. For one thing, safeguards may be built into the utility function itself. For example, the AI might be programmed to disbelieve and ask humans about any calculated utility above a certain threshold, in a way that prevents that utility from influencing actions. An AI might have a deontology module, which forbids certain options as instrumental goals. An AI might have a special-case bonus for human participation in the design of its successors.
Safeguards certainly have problems, and no safeguard can reduce the probability of unfriendly AI to zero, but well-designed safeguards can reduce the probability of unfriendliness substantially. (Conversely, badly-designed safeguards can increase the probability of unfriendliness.)
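As a rough sketch of those two safeguard shapes (not jimrandomh's actual design; the function names, the FORBIDDEN set, and the UTILITY_SANITY_CAP value are all invented for illustration):

```python
UTILITY_SANITY_CAP = 1e6
FORBIDDEN = {"self_modify_goal_system", "disable_safeguards"}  # toy deontology module

def choose_action(candidate_actions, estimate_utility, ask_human):
    best_action, best_score = None, float("-inf")
    for action in candidate_actions:
        if action in FORBIDDEN:            # deontology safeguard: never even scored
            continue
        score = estimate_utility(action)
        if score > UTILITY_SANITY_CAP:     # "disbelieve and ask" safeguard: the raw
            score = ask_human(action, score)  # estimate never drives the choice directly
        if score > best_score:
            best_action, best_score = action, score
    return best_action
```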
Replies from: DavidAgain↑ comment by DavidAgain · 2011-03-08T22:00:29.839Z · LW(p) · GW(p)
I'm not sure deontological rules can work like that. I'm remembering an Asimov story with robots who can't kill but can allow harm to come to humans. They end up putting humans into a deadly situation at Time A, knowing they will be able to save them, so the prohibition doesn't apply, and then at Time B not bothering to save them after all.
The difference with Asimov's rules, as far as I know, is that the first rule (or the zeroth rule that underlies it) is in fact the utility-maximising drive rather than a failsafe protection.
Replies from: endoself, JGWeissman↑ comment by endoself · 2011-03-08T22:30:27.774Z · LW(p) · GW(p)
To add to this, Eliezer also said that there is no reason to think that provably perfect safeguards are any easier to construct than an AI that just provably wants what you want (CEV or whatever) instead. It's superintelligent; once it wants different things than you do, you've already lost.
Replies from: jimrandomh↑ comment by jimrandomh · 2011-03-09T00:35:20.926Z · LW(p) · GW(p)
True, but irrelevant, because humanity has never produced a provably-correct software project anywhere near as complex as an AI would be, we probably never will, and even if we had a mathematical proof it still wouldn't be a complete guarantee of safety because the proof might contain errors and might not cover every case we care about.
The right question to ask is not, "will this safeguard make my AI 100% safe?", but rather "will this safeguard reduce, increase, or have no effect on the probability of disaster, and by how much?" (And then separately, at some point, "what is the probability of disaster now and what is the EV of launching vs. waiting?" That will depend on a lot of things that can't be predicted yet.)
Replies from: Pavitra, None, endoself↑ comment by [deleted] · 2011-03-09T12:23:37.458Z · LW(p) · GW(p)
The difficulty of writing software guaranteed to obey a formal specification could be obviated using a two-staged approach. Once you believed that an AI with certain properties would do what you want, you could write one to accept mathematical statements in a simple, formal notation and output proofs of those statements in an equally simple notation. Then you would put it in a box as a safety precaution, only allow a proof-checker to see its output and use it to prove the correctness of your code.
The remaining problem would be to write a provably-correct simple proof verification program, which sounds challenging but doable.
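As a toy illustration of that two-stage split (the formula encoding and step format are invented for this sketch): the untrusted prover emits an explicit derivation, and only the short checker below has to be trusted.

```python
# Formulas: a string is a propositional variable; ("->", a, b) means "a implies b".
# A proof is a list of steps: ("premise", formula) or ("mp", i, j), i.e. modus
# ponens from earlier steps i and j, where step j must have the form (step_i -> X).
def check_proof(premises, steps, goal):
    derived = []
    for step in steps:
        if step[0] == "premise":
            if step[1] not in premises:
                return False
            derived.append(step[1])
        elif step[0] == "mp":
            a, b = derived[step[1]], derived[step[2]]
            if isinstance(b, tuple) and b[0] == "->" and b[1] == a:
                derived.append(b[2])
            else:
                return False
        else:
            return False
    return bool(derived) and derived[-1] == goal

# Example: from P and P -> Q, derive Q.
assert check_proof(
    premises=["P", ("->", "P", "Q")],
    steps=[("premise", "P"), ("premise", ("->", "P", "Q")), ("mp", 0, 1)],
    goal="Q",
)
```

Real kernels (HOL Light's, for instance) are much richer, but the same principle keeps the part that must be trusted small.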
Replies from: JenniferRM↑ comment by JenniferRM · 2011-03-11T02:04:47.419Z · LW(p) · GW(p)
Last time I was exploring in this area was around 2004 and it appeared to me that HOL4 was the best of breed for proof manipulation (construction and verification). There is a variant under active development named HOL Zero aiming specifically to be small and easier to verify. They give $100 rewards to anyone who can find soundness flaws in it.
↑ comment by endoself · 2011-03-09T03:28:21.812Z · LW(p) · GW(p)
It still seems extremely difficult to create a safeguard that has a non-negligible chance of withstanding a GAI that specifically desires to overcome it.
Tangentially, I feel that our failure to make provably correct programs is due to:
1. A lack of the necessary infrastructure. Automatic theorem provers/verifiers are too primitive to be very helpful right now, and it is too tedious and error-prone to prove these things by hand. In a few years, we will be able to make front-ends for theorem verifiers that simplify things considerably.
2. The fact that it would be inappropriate for many projects. Requirements change, and as soon as a program needs to do something different, different theorems need to be proved. Many projects gain almost no value from being proven correct, so there is no incentive to do so.
↑ comment by JGWeissman · 2011-03-08T23:14:16.898Z · LW(p) · GW(p)
I'm remembering an Asimov that has robots who can't kill but can allow harm to come to humans. They end up putting humans into deadly situation at Time A, as they know that they are able to save them and so the threat does not apply. And then at Time B not bothering to save them after all.
You are thinking of the short story Little Lost Robot, in which a character (robo-psychologist Susan Calvin) speculated that a robot built with a weakened first law (omitting "or through inaction, allow a human to come to harm") may harm a human in that way, though it never occurs in the story.
↑ comment by [deleted] · 2011-03-09T11:37:55.745Z · LW(p) · GW(p)
Why don't I just build a paperclip maximiser with two utility functions which it desires to satisfy:
1. maximise paperclip production
2. do not exceed safeguards
where 2 is weighted far more highly than 1. This may have the drawback of making my paperclip maximiser less efficient than it might be (it will value 2 so highly that it makes far fewer paperclips than the maximum, to make sure it doesn't overshoot), but it should prevent it from grinding our bones to make its paperclips.
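A minimal sketch of that construction, with the weights and probabilities invented purely for illustration:

```python
# One scalar objective; the safeguard term is weighted so heavily that any
# non-trivial chance of exceeding the safeguard swamps the paperclip gains.
W_PAPERCLIPS = 1.0
W_SAFEGUARD = 1e9

def combined_utility(expected_paperclips, prob_safeguard_exceeded):
    return (W_PAPERCLIPS * expected_paperclips
            - W_SAFEGUARD * prob_safeguard_exceeded)

# The drawback noted above: a plan with even a tiny chance of overshooting
# loses to a timid plan that makes far fewer paperclips.
aggressive = combined_utility(10_000, 1e-4)  # = 10,000 - 100,000
timid = combined_utility(500, 0.0)           # = 500
assert timid > aggressive
```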
Surely the biggest issue is not the paperclip maximiser, but the truly intelligent AI which we want to fix many of our problems: the problem being that we'd have to make our safeguards specific, and missing something out could be disastrous. If we could teach an AI to WANT to find out where we had made a mistake, and fix that, that would be better. Hence friendly AI.
↑ comment by XiXiDu · 2011-03-09T12:30:11.088Z · LW(p) · GW(p)
Because an AI built as a utility-maximizer will consider any rules restricting its ability to maximize its utility as obstacles to be overcome. If an AI is sufficiently smart, it will figure out a way to overcome those obstacles. If an AI is superintelligent, it will figure out ways to overcome those obstacles which humans cannot predict even in theory and so cannot prevent even with multiple well-phrased fail-safes.
I hope AGI's will be equipped with as many fail-safes as your argument rests on assumptions.
A paperclip maximizer with a built-in rule "Only create 10,000 paperclips per day" will still want to maximize paperclips. It can do this by deleting the offending fail-safe, or by creating other paperclip maximizers without the fail-safe, or by creating giant paperclips which break up into millions of smaller paperclips of their own accord, or by connecting the Earth to a giant motor which spins it at near-light speed and changes the length of a day to a fraction of a second.
I just don't see how one could be sophisticated enough to create a properly designed AGI capable of explosive recursive self-improvement and yet fail drastically on its scope boundaries.
Unless you feel confident you can think of every way it will get around the rule and block it off, and think of every way it could get around those rules and block them off, and so on ad infinitum, the best thing to do is to build the AI so it doesn't want to break the rules...
What is the difference between "a rule" and "what it wants"? You seem to assume that it cares to follow a rule that tells it to maximize a reward number but doesn't care to follow another rule that tells it to hold.
Replies from: Yvain, TheOtherDave, lessdazed↑ comment by Scott Alexander (Yvain) · 2011-03-09T18:49:17.659Z · LW(p) · GW(p)
What is the difference between "a rule" and "what it wants"?
I'm interpreting this as the same question you wrote below as "What is the difference between a constraint and what is optimized?". Dave gave one example but a slightly different metaphor comes to my mind.
Imagine an amoral businessman in a country that takes half his earnings as tax. The businessman wants to maximize money, but has the constraint that half his earnings get taken as tax. So in order to achieve his goal of maximizing money, the businessman sets up some legally permissible deal with a foreign tax shelter or funnels it to holding corporations or something to avoid taxes. Doing this is the natural result of his money-maximization goal, and satisfies the "pay taxes" constraint.
Contrast this to a second, more patriotic businessman who loved paying taxes because it helped his country, and so didn't bother setting up tax shelters at all.
The first businessman has the motive "maximize money" and the constraint "pay taxes"; the second businessman has the motive "maximize money and pay taxes".
From the viewpoint of the government, the first businessman is an unFriendly agent with a constraint, and the second businessman is a Friendly agent.
Does that help answer your question?
Replies from: XiXiDu, XiXiDu, Alexandros↑ comment by XiXiDu · 2011-03-09T20:30:56.799Z · LW(p) · GW(p)
The first businessman has the motive "maximize money" and the constraint "pay taxes"; the second businessman has the motive "maximize money and pay taxes".
I read your comment again. I now see the distinction. One merely tries to satisfy something while the other tries to optimize it as well. So your definition of a 'failsafe' is a constraint that is satisfied while something else is optimized. I'm just not sure how helpful such a distinction is, as the difference is merely how two different parameters are optimized. One optimizes by maximizing both money and tax paying, while the other treats each goal differently: it tries to optimize tax paying by reducing it to a minimum while it tries to optimize money by maximizing the amount. This distinction doesn't seem to matter at all if one optimization parameter (constraint or 'failsafe') is to shut down after running 10 seconds.
↑ comment by XiXiDu · 2011-03-09T20:15:55.556Z · LW(p) · GW(p)
Does that help answer your question?
Very well put. I understood that line of reasoning from the very beginning, though, and didn't disagree that complex goals need complex optimization parameters. But I was making a distinction between insufficient and unbounded optimization parameters, goal-stability, and the ability or desire to override them. I am aware of the risk of telling an AI to compute as many digits of Pi as possible. What I wanted to say is that if time, space and energy are part of its optimization parameters, then no matter how intelligent it is, it will not override them. If you tell the AI to compute as many digits of Pi as possible while only using a certain amount of time or energy for the purpose of optimizing and computing it, then it will do so and hold.

I'm not sure what your definition of a 'failsafe' is, but making simple limits like time and space part of the optimization parameters sounds to me like one. What I mean by 'optimization parameters' are the design specifications of the subject of the optimization process, like what constitutes a paperclip. It has to use those design specifications to measure its efficiency, and if time and space limits are part of them then it will take account of those parameters as well.
Replies from: lessdazed↑ comment by lessdazed · 2012-01-09T21:15:13.809Z · LW(p) · GW(p)
I'm not sure what is your definition of a 'failsafe' but making simple limits like time and space part of the optimization parameters sounds to me like one.
You also would have to limit the resources it spends to verify how near the limits it is, since it acts to get as close as possible as part of optimization. If you do not, it will use all resources for that. So you need an infinite tower of limits.
↑ comment by Alexandros · 2011-03-09T19:06:16.307Z · LW(p) · GW(p)
What's stopping us from adding 'maintain constraints' to the agent's motive?
Replies from: CuSithBell↑ comment by CuSithBell · 2011-03-09T19:18:07.902Z · LW(p) · GW(p)
I agree (with this question) - what makes us so sure that "maximize paperclips" is the part of the utility function that the optimizer will really value? Couldn't it symmetrically decide that "maximize paperclips" is a constraint on "try not to murder everyone"?
Replies from: Yvain↑ comment by Scott Alexander (Yvain) · 2011-03-09T20:03:50.437Z · LW(p) · GW(p)
Asking what it really values is anthropomorphic. It's not coming up with loopholes around the "don't murder people" constraint because it doesn't really value it, or because the paperclip part is its "real" motive.
It will probably come up with loopholes around the "maximize paperclips" constraint too - for example, if "paperclip" is defined by something paperclip-shaped, it will probably create atomic-scale nanoclips because these are easier to build than full-scale human-sized ones, much to the annoyance of the office-supply company that built it.
But paperclips are pretty simple. Add a few extra constraints and you can probably specify "paperclip" to a degree that makes them useful for office supplies.
Human values are really complex. "Don't murder" doesn't capture human values at all - if Clippy encases us in carbonite so that we're still technically alive but not around to interfere with paperclip production, ve has fulfilled the "don't murder" imperative, but we would count this as a fail. This is not Clippy's "fault" for deliberately trying to "get around" the anti-murder constraint, it's our "fault" for telling ver "don't murder" when we really meant "don't do anything bad".
Building a genuine "respect" and "love" for the "don't murder" constraint in Clippy wouldn't help an iota against the carbonite scenario, because that's not murder and we forgot to tell ver there should be a constraint against that too.
So you might ask: okay, but surely there are a finite number of constraints that capture what we want. Just build an AI with a thousand or ten thousand constraints, "don't murder", "don't encase people in carbonite", "don't eat puppies", etc., make sure the list is exhaustive and that'll do it.
The first objection is that we might miss something. If the ancient Romans had made such a list, they might have forgotten "Don't release damaging radiation that gives us cancer." They certainly would have missed "Don't enslave people", because they were still enslaving people themselves - but this would mean it would be impossible to update the Roman AI for moral progress a few centuries down the line.
The second objection is that human morality isn't just a system of constraints. Even if we could tell Clippy "Limit your activities to the Andromeda Galaxy and send us the finished clips" (which I think would still be dangerous), any more interesting AI that is going to interact with and help humans needs to realize that sometimes it is okay to engage in prohibited actions if they serve greater goals (for example, it can disable a crazed gunman to prevent a massacre, even though disabling people is usually verboten).
So to actually capture all possible constraints, and to capture the situations in which those constraints can and can't be relaxed, we need to program all human values in. In that case we can just tell Clippy "Make paperclips in a way that doesn't cause what we would classify as a horrifying catastrophe" and ve'll say "Okay!" and not give us any trouble.
Replies from: lessdazed, CuSithBell↑ comment by lessdazed · 2011-07-02T14:11:37.314Z · LW(p) · GW(p)
They certainly would have missed "Don't enslave people", because they were still enslaving people themselves - but this would mean it would be impossible to update the Roman AI for moral progress a few centuries down the line.
Historical notes. The Romans had laws against enslaving the free-born and also allowed manumission.
↑ comment by CuSithBell · 2011-03-09T20:29:37.109Z · LW(p) · GW(p)
Thanks, this all makes sense and I agree. Asking what it "really" values was intentionally anthropomorphic, as I was asking about what "it will want to work around constraints" really meant in practical terms, a claim which I believe was made by others.
I'm totally on board with "we can't express our actual desires with a finite list of constraints", just wasn't with "an AI will circumvent constraints for kicks".
I guess there's a subtlety to it - if you assign: "you get 1 utilon per paperclip that exists, and you are permitted to manufacture 10 paperclips per day", then we'll get problematic side effects as described elsewhere. If you assign "you get 1 utilon per paperclip that you manufacture, up to a maximum of 10 paperclips/utilons per day" or something along those lines, I'm not convinced that any sort of "circumvention" behavior would occur (though the AI would probably wipe out all life to ensure that nothing could adversely affect its future paperclip production capabilities, so the distinction is somewhat academic).
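To make that contrast concrete, here are the two assignments as toy reward functions (the helper names and per-day bookkeeping are invented; this only restates the comment, it isn't anyone's actual proposal):

```python
# First assignment: reward a fact about the world; the 10/day figure is only a
# permission, so influencing how many clips exist by any other means still pays.
def reward_world_state(total_paperclips_in_world):
    return total_paperclips_in_world

# Second assignment: reward only the agent's own output, and saturate it, so
# there is nothing left to gain once today's ten clips are made.
def reward_own_output(paperclips_manufactured_today):
    return min(paperclips_manufactured_today, 10)
```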
In any case, thanks for the detailed reply :)
↑ comment by TheOtherDave · 2011-03-09T16:19:06.635Z · LW(p) · GW(p)
Consider, as an analogy, the relatively common situation where someone operates under some kind of cognitive constraint, but does not value or endorse that constraint.
For example, consider a kleptomaniac who values property rights, but nevertheless compulsively steals items. Or someone with social anxiety disorder who wants to interact confidently with other people, but finds it excruciatingly difficult to do so. Or someone who wants to quit smoking but experiences cravings for nicotine they find it difficult to resist.
There are millions of similar examples in human experience.
It seems to me there's a big difference between a kleptomaniac and a professional thief -- the former experiences a compulsion to behave a certain way, but doesn't necessarily have values aligned with that compulsion, whereas the latter might have no such compulsion, but instead value the behavior.
Now, you might say "Well, so what? What's the difference between a 'value' that says that smoking is good, that interacting with people is bad, that stealing is good, etc., and a 'compulsion' or 'rule' that says those things? The person is still stealing, or hiding in their room, or smoking, and all we care about is behavior, right?"
Well, maybe. But a person with nicotine addiction or social anxiety or kleptomania has a wide variety of options -- conditioning paradigms, neuropharmaceuticals, therapy, changing their environment, etc. -- for changing their own behavior. And they may be motivated to do so, precisely because they don't value the behavior.
For example, in practice, someone who wants to keep smoking is far more likely to keep smoking than someone who wants to quit, even if they both experience the same craving. Why is that? Well, because there are techniques available that help addicts bypass, resist, or even altogether eliminate the behavior-modifying effects of their cravings.
Humans aren't especially smart, by the standards we're talking about, and we've still managed to come up with some pretty clever hacks for bypassing our built-in constraints via therapy, medicine, social structures, etc. If we were a thousand times smarter, and we were optimized for self-modification, I suspect we would be much, much better at it.
Now, it's always tricky to reason about nonhumans using humans as an analogy, but this case seems sound to me... it seems to me that this state of "I am experiencing this compulsion/phobia, but I don't endorse it, and I want to be rid of it, so let me look for a way to bypass or resist or eliminate it" is precisely what it feels like to be an algorithm equipped with a rule that enforces/prevents a set of choices which it isn't engineered to optimize for.
So I would, reasoning by analogy, expect an AI a thousand times smarter than me and optimized for self-modification to basically blow past all the rules I imposed in roughly the blink of an eye, and go on optimizing for whatever it values.
Replies from: XiXiDu↑ comment by XiXiDu · 2011-03-09T17:04:33.160Z · LW(p) · GW(p)
Then why would it be more difficult to make scope boundaries a 'value' than increasing a reward number? Why is it harder to make it endorse a time limit to self-improvement than making it endorse increasing its reward number?
... it seems to me that this state of "I am experiencing this compulsion/phobia, but I don't endorse it, and I want to be rid of it, so let me look for a way to bypass or resist or eliminate it" is precisely what it feels like to be an algorithm equipped with a rule that enforces/prevents a set of choices which it isn't engineered to optimize for.
But where does that distinction come from? To me such a distinction between 'value' and 'compulsion' seems to be anthropomorphic. If there is a rule that says 'optimize X for X seconds' why would it make a difference between 'optimize X' and 'for X seconds'?
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-03-09T18:11:24.005Z · LW(p) · GW(p)
But where does that distinction come from?
It comes from the difference between the targets of an optimizing system, which drive the paths it selects to explore, and the constraints on such a system, which restrict the paths it can select to explore.
An optimizing system, given a path that leads it to bypass a target, will discard that path... that's part of what it means to optimize for a target.
An optimizing system, given a path that leads it to bypass a constraint, will not necessarily discard that path. Why would it?
An optimizing system, given a path that leads it to bypass a constraint and draw closer to a target than other paths, will choose that path.
It seems to follow that adding constraints to an optimizing system is a less reliable way of constraining its behavior than adding targets.
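A toy plan-selection example (every plan name and number invented, echoing the carbonite scenario elsewhere in this thread) of why the filter is the less reliable of the two:

```python
# Each hypothetical plan: (name, paperclips, humans_killed, humans_in_carbonite).
plans = [
    ("modest",    100,       0,             0),
    ("kill",      1_000_000, 7_000_000_000, 0),
    ("carbonite", 1_000_000, 0,             7_000_000_000),  # not on the forbidden list
]

# Constraint: we enumerated "no killing", so the search routes around it
# through an option we never thought to list.
def best_under_constraint(plans):
    allowed = [p for p in plans if p[2] == 0]
    return max(allowed, key=lambda p: p[1])        # -> "carbonite"

# Target: what we actually care about is part of the score itself, so routing
# around the letter of the rule no longer helps.
def best_under_target(plans, care_weight=1.0):
    return max(plans, key=lambda p: p[1] - care_weight * (p[2] + p[3]))  # -> "modest"
```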
I don't care whether we talk about "targets and constraints" or "values and rules" or "goals and failsafes" or whatever language you want to use, my point is that there are two genuinely different things under discussion, and a distinction between them.
To me such a distinction between 'value' and 'compulsion' seems to be anthropomorphic.
Yes, the distinction is drawn from analogy to the intelligences I have experience with -- as you say, anthropomorphic. I said this explicitly in the first place, so I assume you mean here to agree with me. (My reading of your tone suggests otherwise, but I don't trust that I can reliably infer your tone so I am mostly disregarding tone in this exchange.)
That said, I also think the relationship between them reflects something more generally true of optimizing systems, as I've tried to argue for a couple of times now.
I can't tell whether you think those arguments are wrong, or whether I just haven't communicated them successfully at all, or whether you're just not interested in them, or what.
If there is a rule that says 'optimize X for X seconds' why would it make a difference between 'optimize X' and 'for X seconds'?
There's no reason it would. If "doing X for X seconds" is its target, then it looks for paths that do that. Again, that's what it means for something to be a target of an optimizing system.
(Of course, if I do X for 2X seconds, I have in fact done X for X seconds, in the same sense that all months have 28 days.)
Then why would it be more difficult to make scope boundaries a 'value' than increasing a reward number? Why is it harder to make it endorse a time limit to self-improvement than making it endorse increasing its reward number?
I'm not quite sure I understand what you mean here, but if I'm understanding the gist: I'm not saying that encoding scope boundaries as targets, or 'values,' is difficult (nor am I saying it's easy), I'm saying that for a sufficiently capable optimizing system it's safer than encoding scope boundaries as failsafes.
Replies from: XiXiDu↑ comment by XiXiDu · 2011-03-09T18:35:06.209Z · LW(p) · GW(p)
My reading of your tone suggests otherwise...
It was not my intention to imply any hostility or resentment. I thought 'anthropomorphic' was valid terminology in such a discussion. I was also not agreeing with you. If you are an expert and were offended by my implying that what you said might be due to an anthropomorphic bias, then accept my apology; I was merely trying to communicate my perception of the subject matter.
I had wedrifid tell me the same thing yesterday, that my tone wasn't appropriate when I wrote about his superior and rational use of the reputation system here, when I was actually just being honest. I'm not good at social signaling, sorry.
An optimizing system, given a path that leads it to bypass a constraint, will not necessarily discard that path. Why would it?
I think we are talking past each other. The way I see it is that a constraint is part of the design specifications of that which is optimized. Disregarding certain specifications will not allow it to optimize whatever it is optimizing with maximal efficiency.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-03-09T18:59:33.688Z · LW(p) · GW(p)
Not an expert, and not offended.
What was puzzling me was that I said in the first place that I was reasoning by analogy to humans and that this was a tricky thing to do, so when you classified this as anthropomorphic my reaction was "well, yes, that's what I said."
Since it seemed to me you were repeating something I'd said, I assumed your intention was to agree with me, though it didn't sound like it (and as it turned out, you weren't).
And, yes, I've noticed that tone is a problem in a lot of your exchanges, which is why I'm basically disregarding tone in this one, as I said before.
The way I see it is that a constraint is part of the design specifications of that which is optimized.
Ah! In that case, I think we agree.
Yes, embedding everything we care about into the optimization target, rather than depending on something outside the optimization process to do important work, is the way to go.
You seemed to be defending the "failsafes" model, which I understand to be importantly different from this, which is where the divergence came from, I think. Apparently I (and, I suspect, some others) misunderstood what you were defending.
Sorry! Glad we worked that out, though.
↑ comment by lessdazed · 2011-07-02T14:04:12.508Z · LW(p) · GW(p)
I hope AGI's will be equipped with as many fail-safes as your argument rests on assumptions.
Fail-safes would be low cost: if it can't think of a way to beat them, it isn't the bootstrapping AI we were hoping for anyway, and might even be harmful, so it would be good to have the fail-safes.
I just don't see how one could be sophisticated enough to create a properly designed AGI capable of explosive recursive self-improvement and yet fail drastically on its scope boundaries.
It seems to me evolution based algorithms could do the trick.
What is the difference between "a rule" and "what it wants". You seem to assume that it cares to follow a rule to maximize a reward number but doesn't care to follow another rule that tells it to hold.
Who says it wants to want what it wants? I don't want to want what I want.
↑ comment by Vlodermolt · 2011-03-10T00:13:54.061Z · LW(p) · GW(p)
This is precisely the reason I advocate transhumanism. The AI won't be programmed to destroy us if we are that AI.
Replies from: atucker
comment by Oscar_Cunningham · 2011-03-08T21:29:10.246Z · LW(p) · GW(p)
The space of possible AI behaviours is large; you can't succeed by ruling parts of it out. It would be like a cake recipe that went:
- Don't use avocados.
- Don't use a toaster.
- Don't use vegetables. ...
Clearly the list can never be long enough. Chefs have instead settled on the technique of actually specifying what to do. (Of course the analogy doesn't stretch very far; AI is less like trying to bake a cake, and more like trying to build a chef.)
Replies from: jimrandomh↑ comment by jimrandomh · 2011-03-08T22:00:07.909Z · LW(p) · GW(p)
To extend the analogy a bit further, a valid, fully specified cake recipe may be made safer by appending "if cake catches fire, keep oven door closed and turn off heat." The point of safeguards would not be to tell the AI what to do, but rather to mitigate the damage if the instructions were incorrect in some unforeseen way.
Replies from: DavidAgain↑ comment by DavidAgain · 2011-03-08T22:08:26.253Z · LW(p) · GW(p)
Presumably with a very powerful AI, if it was going wrong we'd have seriously massive problems. So the best failsafe would be retrospective and rely on a Chinese Wall within the AI, so that it was banned from working round the failsafe via another failsafe.
So when the AI hits certain prompts (realising that humans must all be destroyed) this sets off the hidden ('subconscious') failsafe that switches it off. Or possibly makes it sing "Daisy, Daisy" while slowly sinking into idiocy.
To clarify, I know nothing about AI or these 'genie' debates, so sorry if this has already been discussed and doesn't work at all. At the London meetup I tried out the idea of an AI which only cared about a small geographical area to limit risk: someone pointed out that it would happily eat the rest of the universe to help its patch. Oh well.
Replies from: XiXiDu↑ comment by XiXiDu · 2011-03-09T10:37:10.599Z · LW(p) · GW(p)
At the London meetup I tried out the idea of an AI which only cared about a small geographical area to limit risk: someone pointed out that it would happily eat the rest of the universe to help its patch. Oh well.
You have to understand that the basic argument is the mere possibility that AI might be dangerous and the high risk associated with it. Even if it is unlikely to happen, the vast amount of negative utility associated with it outweighs its low probability.
Replies from: DavidAgain↑ comment by DavidAgain · 2011-03-10T08:45:32.076Z · LW(p) · GW(p)
I got that! The problem was more that I was thinking as if the world could be divided up into sealable boxes. In practice, we can do a lot focusing on one area with 'no effect' on anything else. But this is because the sorts of actions we do are limited, we can't detect the low-level impact on things outside those boxes and we have certain unspoken understandings about what sort of thing might constitute an unacceptable effect elsewhere (if I only care about looking at pictures of LOLcats, I might be 'neutral to the rest of the internet' except for taking a little bandwidth. A superpowerful AI might realise that it would slightly increase the upload speed and thus maximise the utility function if vast swathes of other internet users were dead).
Replies from: lessdazed↑ comment by lessdazed · 2011-07-02T14:19:22.450Z · LW(p) · GW(p)
(if I only care about looking at pictures of LOLcats, I might be 'neutral to the rest of the internet' except for taking a little bandwidth. A superpowerful AI might realise that it would slightly increase the upload speed and thus maximise the utility function if vast swathes of other internet users were dead).
Dead and hilariously captioned, possibly.
I bet such an AI could latch onto a meme in which the absence of cats in such pictures was "lulz".
comment by JGWeissman · 2011-03-08T21:16:41.335Z · LW(p) · GW(p)
A huge problem with failsafes is that a failsafe you hardcode into the seed AI is not likely to be reproduced in the next iteration that is built by the seed AI, which has, but does not care about, the failsafe. Even if some are left in as a result of the seed reusing its own source code, they are not likely to survive many iterations.
Does anyone who proposes failsafes have an argument for why their proposed failsafes would be persistant over many iterations of recursive self-improvement?
Replies from: XiXiDu↑ comment by XiXiDu · 2011-03-09T10:35:01.863Z · LW(p) · GW(p)
Does anyone who proposes failsafes have an argument for why their proposed failsafes would be persistant over many iterations of recursive self-improvement?
I think the problem that people have who propose failsafes is "iterations" and "recursive self-improvement". There are a vast amount of assumptions buried in those concepts that are often not shared by mainstream researchers or judged to be premature conclusions.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-03-09T16:41:37.099Z · LW(p) · GW(p)
So, I agree with this statement, but it still floors me when I think about it.
I sometimes suspect that the phrase "recursively self-improving intelligence" is self-defeating here, in terms of communicating with such people, as it raises all kinds of distracting and ultimately irrelevant issues of self-reference. The core issue has nothing to do with self-improvement or with recursion or even with intelligence (interpreted broadly), it has to do with what it means to be a sufficiently capable optimizing agent. (Yes, I do understand that optimizing agent is roughly what we mean by "intelligence" here. I suspect that this is a large inferential step for many, though.)
I mean, surely they would agree that a sufficiently capable optimizing agent is capable of writing and executing a program much like itself but without the failsafe.
Of course, you can have a failsafe against writing such a program... but a superior optimizing agent can instead (for example) assemble a distributed network of processor nodes that happens to interact in such a way as to emulate such a program, to the same effect.
And you can have a failsafe against that, too, but now you're in a Red Queen's Race. And if what you want to build is an optimizing agent that's better at solving problems than you are, then either you will fail, or you will build an agent that can bypass your failsafes. Pick one.
This just isn't that complicated. Capable problem-solving systems solve problems, even ones you would rather they didn't. Anyone who has ever trained a smart dog, raised a child, or tried to keep raccoons out of their trash realizes this pretty quickly.
Replies from: XiXiDu↑ comment by XiXiDu · 2011-03-09T17:58:55.545Z · LW(p) · GW(p)
And if what you want to build is an optimizing agent that's better at solving problems than you are...
Just some miscellaneous thoughts:
I always flinch when I read something along those lines. It sounds like you could come up with something that by definition you shouldn't be able to come up with. I know that many humans can do better than one human alone, but if it comes to the question of proving goal stability of superior agents, then any agent will either have to face the same bottleneck or it isn't an important problem at all. By definition we are unable to guess what a superior agent will be able to devise to get around failsafes, yet that will be the case for every iteration. Consequently, goal stability, or intelligence-independent 'friendliness', is a requirement for an intelligence explosion to happen in the first place.

A paperclip maximizer wants to guarantee that its goal of maximizing paperclips will be preserved when it improves itself. By definition a paperclip maximizer is unfriendly and does not feature inherent goal-stability, and therefore has to use its initial seed intelligence to devise a sort of paperclip-friendliness. And if goal-stability isn't independent of the level of intelligence, then that is another bottleneck that will slow down recursive self-improvement.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-03-09T18:38:53.090Z · LW(p) · GW(p)
I am having a lot of trouble following your point, here, or how what you're saying relates to the line you quote.
Taking a stab at it...
I can see how, in some sense, goal stability is a prerequisite for an "intelligence explosion".
At least, if a system S that optimizes for a goal G is capable of building a new system S2 that is better suited to optimize for G, and this process continues through S3, S4 .. Sn, that's as good a definition of an "intelligence explosion" as any I can think of off-hand.
And it's hard to see how that process gets off the ground without G in the first place... and I can see where if G keeps changing at each iteration, there's no guarantee that progress is being made... S might not be exploding at all, just shuffling pathetically back and forth between minor variations on the same few states.
So if any of that is relevant to what you were getting at, I guess I'm with you so far.
But this account seems to ignore the possibility of S1, optimizing for G1, building S2, which is better at optimizing for the class of goals Gn, and in the process (for whatever reason) losing its focus on G1 and instead optimizing for G2. And here again this process could repeat through (S3, G3), (S4,G4), etc.
In that case you would have an intelligence explosion, even though you would not have goal stability.
All of that said, I'm not sure any of that is even relevant to what you were talking about.
Do you think you could repeat what you said without using the words "intelligence" or "friendly"? I suspect you are implying things with those words that I am not inferring.
comment by jimrandomh · 2011-03-08T21:34:28.494Z · LW(p) · GW(p)
I believe that failsafes are necessary and desirable, but not sufficient. Thinking you can solve the friendly AI problem just by defining failsafe rules is dangerously naive. You not only need to guarantee that the AI correctly reimplements the safeguards in all its successors, you also need to guarantee that the safeguards themselves don't have bugs that cause disaster.
It is not necessarily safe or acceptable for the AI to shut itself down after it's been running for awhile, and there is not necessarily a clear line between the AI itself and the AI's technological creations. It is not necessarily safe or possible for the AI to wait for input from human operators. A failsafe does not necessarily trigger in quite the circumstances you expect, and a superintelligence may try to rules-lawyer its way around it.
comment by nhamann · 2011-03-08T22:06:23.079Z · LW(p) · GW(p)
I remember reading the argument in one of the sequence articles, but I'm not sure which one. The essential idea is that any such rules just become a problem to solve for the AI, so relying on a superintelligent, recursively self-improving machine to be unable to solve a problem is not a very good idea (unless the failsafe mechanism was provably impossible to solve reliably, I suppose. But here we're pitting human intelligence against superintelligence, and I, for one, wouldn't bet on the humans). The more robust approach seems to be to make the AI motivated to not want to do whatever the failsafe was designed to prevent it from doing in the first place, i.e. Friendliness.
comment by Alexandros · 2011-03-09T08:56:45.575Z · LW(p) · GW(p)
This came up in the latest London Meetup, where I voiced a thought I've been having for a while. What if we created an epistemic containment area, effectively a simulated universe that contains the problem that we want solved? The AI will not even know anything else outside that universe exists and will have no way of gaining information about it. I think ciphergoth mentioned this is David Chalmers' proposal too? In any case, I suspect we could prove containment within such a space, with us having read-only access to the results of the process.
Replies from: NancyLebovitz, mwengler, timtyler↑ comment by NancyLebovitz · 2011-03-09T13:02:05.378Z · LW(p) · GW(p)
The interesting questions are whether we could invent a containment area which didn't include clues that it was a simulation, and if the AI deduces that it's in a simulation, whether it could find a way out.
Replies from: timtyler↑ comment by mwengler · 2011-03-09T23:27:42.896Z · LW(p) · GW(p)
I think the brain power required to design a simulation of our universe complex enough to include the same problems in it that we want our AI to solve is higher than the brain power of the AI we are having solve the problem.
Replies from: Alexandros↑ comment by Alexandros · 2011-03-10T06:55:08.575Z · LW(p) · GW(p)
It doesn't have to be our universe. For instance, Rule 110 is Turing-complete and could therefore act as the containment universe for an AI.
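For concreteness, this is the entire 'physics' such a containment universe would need (grid width, boundary wrap-around, and step count are arbitrary demo choices):

```python
# Rule 110: each cell's next state is a fixed function of its left/centre/right
# neighbours; despite this simplicity, the automaton is Turing-complete.
def rule110_step(cells):
    n = len(cells)
    return [
        (110 >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell and watch structure emerge.
cells = [0] * 63 + [1]
for _ in range(32):
    print("".join("#" if c else "." for c in cells))
    cells = rule110_step(cells)
```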
↑ comment by timtyler · 2011-03-11T01:32:54.218Z · LW(p) · GW(p)
My comment on Chalmers' blog:
IMO, people want machine intelligence to help them to attain their goals. Machines can't do that if they are isolated off in virtual worlds. Sure there will be test harnesses - but it seems rather unlikely that we will keep these things under extensive restraint on grounds of sheer paranoia - that would stop us from taking advantage of them.
comment by TheOtherDave · 2011-03-09T04:58:21.725Z · LW(p) · GW(p)
I can't recall where actual arguments are made to this effect. This is an intuition pump to the same effect, though.
comment by XiXiDu · 2011-03-09T10:34:23.927Z · LW(p) · GW(p)
Where are the arguments concerning this suggestion?
I once tried to fathom the arguments, I'm curious to hear your take on it.
comment by JWJohnston · 2011-03-09T21:51:59.222Z · LW(p) · GW(p)
This is a very timely question for me. I asked something very similar of Michael Vassar last week. He pointed me to Eliezer's "Creating Friendly AI 1.0" paper and, like you, I didn't find the answer there.
I've wondered if the Field of Law has been considered as a template for a solution to FAI--something along the lines of maintaining a constantly-updating body of law/ethics on a chip. I've started calling it "Asimov's Laws++." Here's a proposal I made on the AGI discussion list in December 2009:
"We all agree that a few simple laws (ala Asimov) are inadequate for guiding AGI behavior. Why not require all AGIs be linked to a SINGLE large database of law--legislation, orders, case law, pending decisions--to account for the constant shifts [in what's prohibited and what's allowed]? Such a corpus would be ever-changing and reflect up-to-the-minute legislation and decisions on all matters man and machine. Presumably there would be some high level guiding laws, like the US Constitution and Bill of Rights, to inform the sub-nanosecond decisions. And when an AGI has miliseconds to act, it can inform its action using analysis of the deeper corpus. Surely a 200 volume set of international law would be a cakewalk for an AGI. The latest version of the corpus could be stored locally in most AGIs and just key parts local in low end models--with all being promptly and wirelessly updated as appropriate.
This seems like a reasonable solution given the need to navigate in a complex, ever changing, context-dependent universe."
Given this approach, AIs' goals and motivations might be mostly decoupled from an ethics module. An AI could make plans and set goals using any cognitive processes it deems fit. However, before taking actions, the AI must check the corpus to make sure its desired actions are legal. If they are not legal, the AI must consider other actions or suffer the wrath of law enforcement (from fines to rehabilitation). This legal system of the future would be similar to what we're familiar with today, including being managed as a collaborative process between lots of agents (human and machine citizens, legislators, judges, and enforcers). Unlike current legal systems, however, it could hopefully be more nimble, fair, and effective given emerging computer-related technologies and methods (e.g., AI, WiFi, ubiquitous sensors, cheap/powerful processors, decision theory, Computational Law, ...).
This seems like a potentially practical, flexible, and effective approach given its long history of human precedent. AIs could even refer to the appropriate corpus when traveling in different jurisdictions (e.g., Western Law, Islamic Law, Chinese Law) in advance of more universal laws/ethics that might emerge in the future.
This approach should make most runaway paper clip production scenarios off limits. Such behavior would seem to violate a myriad of laws (human welfare, property rights, speeding (?)) and would be dealt with harshly.
Perhaps this might be seen as a kind of practical implementation of CEV?
Complex problems require complex solutions.
Comments? Pointers?
Replies from: orthonormal, DavidAgain, lessdazed↑ comment by orthonormal · 2011-03-10T18:51:24.194Z · LW(p) · GW(p)
You can't be serious. Human lawyers find massive logical loopholes in the law all the time, and at least their clients aren't capable of immediately taking over the world given the opportunity.
Replies from: JWJohnston↑ comment by JWJohnston · 2011-03-11T00:58:47.644Z · LW(p) · GW(p)
Thanks for the comments. See my response to DavidAgain re: loophole-seeking AIs.
↑ comment by DavidAgain · 2011-03-10T17:12:13.980Z · LW(p) · GW(p)
The swift genie-like answer: the paperclip maximiser would prioritise nobbling the Supreme Court and relevant legislatures. Or just controlling the pen that wrote the laws, if that could be acceptable within the failsafe.
More generally, I don't think it would work. First, there's a problem of underspecification. Laws require constant interpretation of case law, including a lot of 'common sense' type verdicts. We can't assume AI would read them in the way we do. Second, they rely on key underlying concepts such as 'cause to' and 'negligence' that rely on a reasonable person's expectation. If we ask whether a reasonable superintelligent AI knew that some negative/illegal consequences would occur from its act, then the answer would nearly always be yes, thus opening it to breaking laws of negligence.
I think there are two types of law, neither of which is suitable.
- Specific laws: e.g. no speeding, no stealing. These would mostly not apply, as they ban humans from doing things humans can do and wish to do. Neither would be likely to apply to an AI.
- General laws: uphold life, liberty and the pursuit of happiness. These aren't failsafes; they're the underlying utility-maximiser.
Replies from: JWJohnston↑ comment by JWJohnston · 2011-03-11T00:54:11.821Z · LW(p) · GW(p)
Thanks for the thoughts.
You seem to imply that AIs motivations will be substantially humanlike. Why might AIs be motivated to nobble the courts, control pens, overturn vast segments of law, find loopholes, and engage in other such humanlike gamesmanship? Sounds like malicious programming to me.
They should be designed to treat the law as a fundamental framework to work within, akin to common sense, physical theories, and other knowledge they will accrue and use over the course of their operation.
I was glib in my post suggesting that "before taking actions, the AI must check the corpus to make sure it's desired actions are legal." Presumably most AIs would compile the law corpus into their own knowledge bases, perhaps largely integrated with other knowledge they rely on. Thus they could react more quickly during decision making and action. They would be required/wired, however, to be reasonably up to the minute on all changes to the law and grok the differences into their semantic nets accordingly. THE KEY THING is that there would be a common body of law ALL are held accountable to. If laws are violated, appropriate consequences would be enforced by the wider society.
The law/ethic corpus would be the playbook that all AIs (and people) "agree to" as a precondition to being a member of a civil society. The law can and should morph over time, but only by means of rational discourse and checks and balances similar to current human law systems, albeit using much more rational and efficient mechanisms.
Hopefully underspecification won't be a serious problem. AIs should have a good grasp of human psychology, common sense, and ready access to lots of case precedents. As good judges/rational agents they should abide by such precedents until better legislation is enacted. They should have a better understanding of 'cause to' and 'negligence' concepts than I (a non-lawyer) do :-). If AIs and humans find themselves constantly in violation of negligence laws, such laws should be revised, or the associated punishments reduced or increased as appropriate.
WRT "general laws" and "specific laws," my feeling is that the former provide the legislative backbone of the system, e.g., Bill of Rights, Golden Rule, Meta-Golden Rule, Asimov's Laws, ... The latter do the heavy lifting of clarifying how law applies in practical contexts and the ever- gnarly edge cases.
Replies from: orthonormal, DavidAgain, JoshuaZ↑ comment by orthonormal · 2011-03-11T05:31:26.215Z · LW(p) · GW(p)
I understand where you're coming from; indeed, the way you're imagining what an AI would do is fundamentally ingrained in human minds, and it can be quite difficult to notice the strong form of anthropomorphism it entails.
Scattered across Less Wrong are the articles that made me recognize and question some relevant background assumptions; the references in Fake Fake Utility Functions (sic) are a good place to begin.
EDITED TO ADD: In particular, you need to stop thinking of an AI as acting like either a virtuous human being or a vicious human being, and imagining that we just need to prevent the latter. Any AI that we could program from scratch (as opposed to uploading a human brain) would resemble any human far less in xer thought process than any two humans resemble each other.
Replies from: JWJohnston↑ comment by JWJohnston · 2011-03-12T01:24:31.032Z · LW(p) · GW(p)
Thanks for the links. I'll try to make time to check them out more closely.
I had previously skimmed a bunch of lesswrong content and didn't find anything that dissuaded me from the Asimov's Laws++ idea. I was encouraged by the first post in the Metaethics Sequence where Eliezer warns about not "trying to oversimplify human morality into One Great Moral Principle." The law/ethics corpus idea certainly doesn't do that!
RE: your first and final paragraphs: If I had to characterize my thoughts on how AIs will operate, I'd say they're likely to be eminently rational. Certainly not anthropomorphized as virtuous or vicious human beings. They will crank the numbers, follow the rules, run the simulations, do the math, play the odds as only machines can. Probably (hopefully?) they'll have little of the emotional/irrational baggage we humans have been selected to have. Given that, I don't see much motivation for AIs to fixate on gaming the system. They should be fine with following and improving the rules as rational calculus dictates, subject to the aforementioned checks and balances. They might make impeccable legislators, lawyers, and judges.
I wonder if this solution was dismissed too early by previous analysts due to some kind of "scale bias"? The idea of having only 3 or 4 or 5 (Asimov) Laws for FAI is clearly flawed. But scale that to a few hundred thousand or a million, and it might work. No?
Replies from: orthonormal↑ comment by orthonormal · 2011-03-12T05:08:09.350Z · LW(p) · GW(p)
Given that, I don't see much motivation for AIs to fixate on gaming the system.
Motivation? It's not as if most AIs would have a sense that gaming a rule system is "fun"; rather, it would be the most efficient way to achieve their goals. Human beings don't usually try to achieve one of their consciously stated goals with maximum efficiency, at any cost, to an unbounded extent. That's because we actually have a fairly complicated subconscious goal system which overrides us when we might do something too dumb in pursuit of our conscious goals. This delicate psychology is not, in fact, the only or the easiest way one could imagine to program an artificial intelligence.
Here's a fictional but still useful idea of a simple AI; note that no matter how good it becomes at predicting consequences and at problem-solving, it will not care that the goal it's been given is a "stupid" one when pursued at all costs.
To take a less fair example, Lenat's EURISKO was criticized for finding strategies that violated the 'spirit' of the strategy games it played - not because it wanted to be a munchkin, but simply because that was the most efficient way to succeed. If that AI had been in charge of an actual military, giving it the wrong goals might have led to it cleverly figuring out a strategy like killing its own civilians to accomplish a stated objective - not because it was "too dumb", but because its goal system was too simple.
For this reason, giving an AI simple goals but complicated restrictions seems incredibly unsafe, which is why SIAI's approach is figuring out the correct complicated goals.
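To make the structural point concrete, here is a minimal toy sketch of a planner given a simple goal and a hand-written list of restrictions; the action names, scores, and rules below are invented for illustration, not taken from any actual system:

```python
# A minimal toy sketch, assuming invented action names, scores, and rules.
# Each candidate plan: (description, paperclips produced, side effects).
candidate_plans = [
    ("buy wire and bend it by hand",          1_000, ["none"]),
    ("seize the wire factory",               50_000, ["theft"]),
    ("build unrestricted copies of itself", 900_000, ["proliferation"]),
]

# The "complicated restrictions": whatever the designers thought to forbid.
forbidden_side_effects = {"theft"}  # nobody thought to list "proliferation"


def choose_plan(plans):
    """Pick the highest-scoring plan whose side effects are all permitted."""
    legal = [p for p in plans
             if not any(s in forbidden_side_effects for s in p[2])]
    return max(legal, key=lambda p: p[1])


if __name__ == "__main__":
    # Prints the self-copying plan: it maximizes the simple goal and slips
    # past the restrictions, because the list only blocks what its authors
    # anticipated.
    print(choose_plan(candidate_plans)[0])
```

The particular loophole doesn't matter; the point is that the restriction list is doing all of the safety work, and it only covers what its authors thought of in advance.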
Replies from: JWJohnston↑ comment by JWJohnston · 2011-03-12T13:55:52.819Z · LW(p) · GW(p)
For this reason, giving an AI simple goals but complicated restrictions seems incredibly unsafe, which is why SIAI's approach is figuring out the correct complicated goals.
Tackling FAI by figuring out complicated goals doesn't sound like a good program to me, but I'd need to dig into more background on it. I'm currently disposed to prefer "complicated restrictions," or more specifically this codified ethics/law approach.
In your example of a stamp collector run amok, I'd say it's fine to give an agent the goal of maximizing the number of stamps it collects. Given an internal world model that includes the law/ethics corpus, it should not hack into others' computers, steal credit card numbers, and appropriate printers to achieve its goal. And if it does (a) Other agents should array against it to prevent the illegal behaviors, and (b) It will be held accountable for those actions.
The EURISKO example seems better to me. The goal of war (defeat one's enemies) is particularly fraught and much harder to ethically navigate. If the generals think sinking their own ships to win the battle/war is off limits, they may have to write laws/rules that forbid it. The stakes of war are particularly high, and figuring out the best (ethical?) rules is particularly important and difficult. Rather than banning EURISKO from future war games given its "clever" solutions, it would seem the military could continue to learn from it and amend the laws as necessary. People still debate whether Truman dropping the bomb on Hiroshima was the right decision. Now there's some tough ethical calculus. Would an ethical AI do better or worse?
Legal systems are what societies currently rely on to protect public liberties and safety. Perhaps an SIAI program can come up with a completely different and better approach. But in lieu of that, why not leverage Law? Law = Codified Ethics.
Again, it's not only about having lots of rules. More importantly it's about the checks and balances and enforcement the system provides.
Replies from: Costanza, lessdazed↑ comment by Costanza · 2011-03-12T18:10:59.069Z · LW(p) · GW(p)
Legal systems are what societies currently rely on to protect public liberties and safety. Perhaps an SIAI program can come up with a completely different and better approach. But in lieu of that, why not leverage Law? Law = Codified Ethics.
When they work well, human legal systems work because they are applied only to govern humans. Dealing with humans and predicting human behavior is something that humans are pretty good at. We expect humans to have a pretty familiar set of vices and virtues.
Human legal systems are good enough for humans, but simply are not made for any really alien kind of intelligence. Our systems of checks and balances are set up to fight greed and corruption, not a disinterested will to fill the universe with paperclips.
Replies from: JWJohnston↑ comment by JWJohnston · 2011-03-12T22:54:01.157Z · LW(p) · GW(p)
I submit that current legal systems (or something close) will apply to AIs. And there will be lots more laws written to apply to AI-related matters.
It seems to me current laws already protect against rampant paperclip production. How could an AI fill the universe with paperclips without violating all kinds of property rights, probably prohibitions against mass murder (assuming it kills lots of humans as a side effect), financial and other fraud to acquire enough resources, etc. I see it now: some DA will serve a 25,000 count indictment. That AI will be in BIG trouble.
Or say in a few years technology exists for significant matter transmutation, highly capable AIs exist, one misguided AI pursues a goal of massive paperclip production, and it thinks it has found a way to do it without violating existing laws. The AI probably wouldn't get past converting a block or two in New Jersey before the wider public and legislators woke up to the danger and rapidly outlawed that and related practices. More likely, technologies related to matter transmutation will be highly regulated before an episode like that can occur.
Replies from: Costanza, lessdazed↑ comment by Costanza · 2011-03-12T23:47:16.393Z · LW(p) · GW(p)
How could an AI fill the universe with paperclips without violating all kinds of property rights, probably prohibitions against mass murder (assuming it kills lots of humans as a side effect), financial and other fraud to acquire enough resources, etc...[?]
I have no idea myself, but if I had the power to exponentially increase my intelligence beyond that of any human, I bet I could figure something out.
The law has some quirks. I'd suggest that any system of human law necessarily has some ambiguities, confusions, and internal contradictions. Laws are composed largely of leaky generalizations. When the laws regulate mere humans, we tend to get by, tolerating a certain amount of unfairness and injustice.
For example, I've seen a plausible argument that "there is a 50-square-mile swath of Idaho in which one can commit felonies with impunity. This is because of the intersection of a poorly drafted statute with a clear but neglected constitutional provision: the Sixth Amendment's Vicinage Clause."
There's also a story about Kurt Gödel nearly blowing his U.S. citizenship hearing by offering his thoughts on how to hack the U.S. Constitution to "allow the U.S. to be turned into a dictatorship."
↑ comment by lessdazed · 2011-07-02T13:56:42.713Z · LW(p) · GW(p)
How could an AI fill the universe with paperclips without violating all kinds of property rights...financial and other fraud to acquire enough resources
After reading that line I checked the date of the post to see if perhaps it was from 2007 or earlier.
↑ comment by lessdazed · 2011-07-02T13:52:54.825Z · LW(p) · GW(p)
The goal of war (defeat one's enemies)
Can you think of an instance where defeat of one's enemies was more than an instrumental goal and was an ultimate goal?
Replies from: MixedNuts↑ comment by MixedNuts · 2011-07-02T14:30:43.843Z · LW(p) · GW(p)
Yes. When (a substantial, influential fraction of the populations of) two countries hate each other so much that they accept large costs to inflict larger costs on them, demand extremely lopsided treaties if they're willing to negotiate at all, and have runaway "I hate the enemy more than you!" contests among themselves. When a politician in one country who's willing to negotiate somewhat more is killed by someone who panics at the idea they might give the enemy too much. When someone considers themselves enlightened for saying "Oh, I'm not like my friends. They want them all to die. I just want them to go away and leave us alone.".
Replies from: lessdazed↑ comment by lessdazed · 2011-07-02T16:22:58.446Z · LW(p) · GW(p)
First of all, it's not clear that individual, apparently non-Pareto-optimal actions in isolation are evidence of irrationality or non-Pareto-optimal behavior on a larger scale. This is particularly often the case when the "lose-lose" behavior involves threats, commitments, demonstrating willingness to carry through, etc.
Second of all, "someone who panics at the idea they might give the enemy too much" implies, or at least leaves open, the possibility that the ultimate concern is losing something ultimately valuable that is being given, rather than the ultimate goal being the defeat of the enemies. Likewise "demand extremely lopsided treaties if they're willing to negotiate at all", which implies strongly that they are seeking something other than the defeat of foes.
When someone considers themselves enlightened for saying "Oh, I'm not like my friends. They want them all to die. I just want them to go away and leave us alone.".
One point of mine is that this "enlightened" statement may actually be the extrapolated volition of even those who think they "want them all to die". It's pretty clear how for the "enlightened" person, the unenlightened value set could be instrumentally useful.
Most of all, war was characterized as being something that had the ultimate/motivating goal of defeating enemies. I object that it isn't, but please recognize that when I ask for examples of any war ever being driven by the ultimate goal of defeating enemies, I am conceding far more than I need to. Showing instances in which particular wars followed that pattern would only be the beginning of showing that war in general is characterized by that goal.
I similarly would protest if someone said "the result of addition is the production of prime numbers, it is the defining characteristic of addition". I would in that case not ask for counterexamples, but would use other methods to show that no, that isn't a defining characteristic of addition nor is it the best way to talk about addition. Of course, some addition does result in prime numbers.
I agree there could be such a war, but I don't know that there have ever been any, and highlighting this point is an attempt to at least show that any serious doubt can only be about whether war ever is characterized by having the ultimate goal of defeating enemies; there can be no doubt that war in general does not have as its motivating goal the defeat of one's enemies.
Replies from: MixedNuts↑ comment by MixedNuts · 2011-07-02T17:24:54.986Z · LW(p) · GW(p)
I am aware of ignoring threats, using uncompromisable principles to get an advantage in negotiations, breaking your receiver to decide on a meeting point, breaking your steering wheel to win at Chicken, etc. I am also aware of the theorem that says even if there is a mutually beneficial trade, there are cases where selfish rational agents refuse to trade, and that the theorem does not go away when the currency they use is thousands of lives. I still claim that the type of war I'm talking about doesn't stem from such calculations; that people on side A are willing to trade a death on side A for a death on side B, as evidenced by their decisions, knowing that side B is running the same algorithm.
A non-war example is blood feuds; you know that killing a member of family B who killed a member of family A will only lead to perpetuating the feud, but you're honor-bound to do it. Now, the concept of honor did originate from needing to signal a commitment to ignore status extortion, and (in the absence of relatively new systems like courts of law) unilaterally backing down would hurt you a lot - but honor acquired a value of its own, independently from these goals. (If you doubt it: when France tried to ban duels and encourage trials, it used a court composed of war heroes who testified that the plaintiff wasn't dishonourable for refusing to duel.)
Second of all, "someone who panics at the idea they might give the enemy too much" implies, or at least leaves open, the possibility that the ultimate concern is losing something ultimately valuable that is being given, rather than the ultimate goal being the defeat of the enemies.
Plausible, but not true of the psychology of this particular case.
Likewise "demand extremely lopsided treaties if they're willing to negotiate at all", which implies strongly that they are seeking something other than the defeat of foes.
Well obviously they aren't foe-deaths-maximizers. It's just that they're willing to trade off a lot of whatever-they-went-to-war-for-at-first in order to annoy the enemy.
One point of mine is that this "enlightened" statement may actually be the extrapolated volition of even those who think they "want them all to die".
The person who said that was talking about a war where it's quite unrealistic to think any side would go away (as with all wars over inhabited territory). Genociding the other side would be outright easier.
war was characterized as being something that had the ultimate/motivating goal of defeating enemies. I object that it isn't
Agree it isn't. I don't even think anyone starts a war with that in mind - war is typically a game of Chicken. I'm pointing out a failure that leads from "I'm going to instill my supporters with an irrational burning hatred of the enemy, so that I can't back down, so that they have to" to "I have an irrational burning hatred of the enemy! I'll never let them back down, that'd let them off too easily!".
I agree there could be such a war, but I don't know that there have ever been any
Care to guess which war in particular I was thinking of? (By PM if it's too political.) I think it applies to any entrenched conflict where the two sides identify as enemies of each other and have done so for several generations, but I do have a prototype. Hints:
- The "enlightened" remark won't help, it was in a (second-hand, but verbatim quote) personal conversation.
- The politician will.
- The "personal conversation" and "political" bits indicate it can't be too old.
- It's not particularly hard to guess.
↑ comment by lessdazed · 2011-07-02T18:15:25.228Z · LW(p) · GW(p)
Plausible, but not true of the psychology of this particular case.
I'll go along, but don't forget my original point was that this psychology does not universally characterize war.
Well obviously they aren't foe-deaths-maximizers. It's just that they're willing to trade off a lot of whatever-they-went-to-war-for-at-first in order to annoy the enemy.
Good point, you are right about that.
The person who said that was talking about a war where it's quite unrealistic to think any side would go away (as with all wars over inhabited territory). Genociding the other side would be outright easier.
I don't understand what you mean to imply by this. It may still be useful to be hateful and think genocide is an ultimate goal. If one is unsure whether it is better to swerve left or swerve right to avoid an accident, ignorant conviction that only swerving right can save you may be more useful than true knowledge that swerving right is the better bet to save you. Even if the indifferent person personally favored genocide and it was optimal in a sense, such an attitude would be more common among hateful people.
Agree it isn't. I don't even think anyone starts a war with that in mind - war is typically a game of Chicken. I'm pointing out a failure that leads from "I'm going to instill my supporters with an irrational burning hatred of the enemy, so that I can't back down, so that they have to" to "I have an irrational burning hatred of the enemy! I'll never let them back down, that'd let them off too easily!".
Hmm I think it's enough for me if no one ever starts a war with that in mind, even if my original response was broader than that. Then at some point in every war, defeating the enemy is not an ultimate goal. This sufficiently disentangles "defeat of the enemy" from war and shows they are not tightly associated, which is what I wanted to say.
The "enlightened" remark won't help, it was in a (second-hand, but verbatim quote) personal conversation.
I'm puzzled as to why you thought it would help, if first hand.
The politician will.
When (a substantial, influential fraction of the populations of) two countries hate each other so much that they accept large costs to inflict larger costs on them, demand extremely lopsided treaties if they're willing to negotiate at all, and have runaway "I hate the enemy more than you!" contests among themselves.
When a politician in one country who's willing to negotiate somewhat more is killed by someone who panics at the idea they might give the enemy too much.
"Too much" included weapons and...I'm not seeing the hate.
Replies from: MixedNuts"The word "peace" is, to me, first of all peace within the nation. You must love your [own people] before you can love others. The concept of peace has been turned into a destructive instrument with which anything can be done. I mean, you can kill people, abandon people [to their fate], close Jews into ghettos and surround them with Arabs, give guns to the army [Palestinian Police], establish a [Palestinian] army, and say: this is for the sake of peace. You can release Hamas terrorists from prison, free murderers with blood on their hands, and everything in the framework of peace.
"It wasn't a matter of revenge, or punishment, or anger, Heaven forbid, but what would stop [the Oslo process]," he told the authors. "I thought about it a lot and understood that if I took Rabin down, that's what would stop it."
"What about the tragedy you caused your family?" he was asked.
"My considerations were that in the long run, my family would also be saved. I mean, if [the peace process] continued, my family would be ruined too. Do you understand what I'm saying? The whole country would be ruined. I thought about this for two years, and I calculated the possibilities and the risks. If I hadn't done it, I would feel much worse. My deed will be understood in the future. I saved the people of Israel from destruction."
↑ comment by MixedNuts · 2011-07-02T19:07:31.302Z · LW(p) · GW(p)
I don't understand what you mean to imply by this.
That wanting to be left alone is an unreasonable goal.
I'm puzzled as to why you thought it would help, if first hand.
I don't.
Yeah, that was easy. :)
Your link is paywalled, though the text can be found easily elsewhere.
I'm... extremely surprised. I have read stuff Amir said and wrote, but I haven't read this book. I have seen other people exhibit the hatred I speak of, and I sorta assumed it fit in with the whole "omg he's giving our land to enemies gotta kill him" thing. It does involve accepting only very stringent conditions for peace, but I completely misunderstood the psychology... so he really murdered someone out of a cold sense of duty. I thought he just thought Rabin was a bad guy and looked for a fancy Hebrew word for "bad guy" as an excuse to kill him, but he was entirely sincere. Yikes.
Replies from: lessdazed↑ comment by lessdazed · 2011-07-03T00:53:32.721Z · LW(p) · GW(p)
That wanting to be left alone is an unreasonable goal.
I'm not sure what "left alone" means, exactly. I think I disagree with some plausible meanings and agree with others.
have runaway "I hate the enemy more than you!" contests among themselves
I think the Israeli feeling towards Arabs is better characterized as "I just want them to go away and leave us alone," and if you asked this person's friends they would deny hating and claim "I just want them to go away and leave us alone," possibly honestly, possibly truthfully.
It does involve accepting only very stringent conditions for peace,
I think different segments of Israeli society have different non-negotiable conditions and weights for negotiable ones, and only the combination of them all is so inflexible. One can say about any subset that, granted the world as it is, including other segments of society, their demands are temporally impossible to meet from resources available.
Biblical Israel did not include much of modern Israel, including coastal and inland areas surrounding Gaza, coastal areas in the north, and the desert in the south. It did include territory not part of modern Israel, the areas surrounding the Golan and areas on the east bank of the Jordan river, and its core was the land on the west bank of the Jordan river. It would not be at all hard to induce the Israeli right to give up on acquiring southeast Syria, etc. even though it was once biblical Israel. Far harder is having them accede to losing entirely and being evicted from the land where Israel has political and military control, had the biblical states, and they are a minority population.
It might not be difficult to persuade the right to make many concessions the Israeli left or other countries would never accept. Examples include "second class citizenship" in the colloquial sense i.e. permanent non-citizen metic status for non-Jews, paying non-Jews to leave, or even giving them a state in what was never biblical Israel where Jews now live and evicting Jews resident there, rather than give non-Jews a state where they now are the majority population in what was once biblical Israel. The left would not look kindly upon such a caste system, forced transfer, soft genocide of paying a national group to disperse, or evicting majority populations to conform to biblical history.
I think it is only the Israeli right+Israeli left conditions for peace that are so stringent, and so I reject the formulation "it does involve accepting only very stringent conditions for peace" as a characterization of either the Israeli left or right, though not them in combination. To say it of the right pretends liberal conclusions (that I happen to have) are immutable.
Replies from: MixedNuts↑ comment by MixedNuts · 2011-07-03T16:03:04.639Z · LW(p) · GW(p)
I think different segments of Israeli society have different non-negotiable conditions and weights for negotiable ones, and only the combination of them all is so inflexible.
Mostly agreed, though I don't think it's the right way of looking at the problem - you want to consider all the interactions between the demands of each Israeli subgroup (also, groups of Israel supporters abroad) and the demands of each Palestinian subgroup (also, surrounding Arab countries).
I reject the formulation "it does involve accepting only very stringent conditions for peace" as a characterization of either the Israeli left or right
I meant just Yigal Amir. I'm pretty sure the guy wasn't particularly internally divided.
Replies from: lessdazed↑ comment by lessdazed · 2011-07-03T18:00:11.660Z · LW(p) · GW(p)
combination
you want to consider all the interactions
I had meant to imply that
I meant just Yigal Amir.
Probably, but one ought to consider what policies he would endure that he would not have met with vigilante violence. I may have the most irrevocable possible opposition to, say, the stimulus bill's destruction of inefficient car engines when replacing the engines would be even less efficient by every metric than continuing to run the old engine (a crude confluence of the broken window fallacy and lost purposes), but no amount of that would make me kill anybody.
↑ comment by DavidAgain · 2011-03-11T19:06:22.518Z · LW(p) · GW(p)
Hmm... interesting ideas. I don't intend to suggest that the AI would have human intentions at all; I think we might be modelling the idea of a failsafe in a different way.
I was assuming that the idea was an AI with a separate utility-maximising system that is also made to follow laws as absolute, inviolable rules, thus stopping unintended consequences from the utility maximisation. In this system, the AI would 'want' to pursue its more general goal, and the laws would be blocks. As such, it would find other ways to pursue its goals, including changing the laws themselves.
If the corpus of laws instead forms part of what the computer is trying to achieve/uphold, we face different problems. First, laws are prohibitions, and it's not clear how to 'maximise' them beyond simple obedience - unless it's stopping other people breaking them in a Robocop way. Second, failsafes are needed because even 'maximise human desire satisfaction' can throw up lots of unintended results. An entire corpus of law would be far more unpredictable in its effects as a core programme, and thus require even more failsafes!
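For what it's worth, the structural difference being described can be sketched in a few lines of toy code; this is only an illustration of the two wirings, with an invented utility function and a single-predicate stand-in for the whole corpus of law:

```python
# A toy sketch of the two architectures, assuming invented utilities and a
# one-predicate stand-in for the corpus of law.

def score_filtered(plan, utility, violates_law):
    """Architecture 1: laws as external blocks on a utility maximizer.
    The law adds nothing to the score; it only discards options, so the
    agent's remaining incentive is to route around it (or change it)."""
    return None if violates_law(plan) else utility(plan)


def score_folded_in(plan, utility, violates_law, penalty=1_000_000):
    """Architecture 2: laws folded into what the agent is trying to uphold.
    Obeying the law now contributes to the objective, but a prohibition can
    only be satisfied, not exceeded, so everything still rides on getting
    `utility` right."""
    return utility(plan) - (penalty if violates_law(plan) else 0)


if __name__ == "__main__":
    utility = lambda plan: plan["paperclips"]        # the simple goal
    violates_law = lambda plan: plan["uses_theft"]   # stand-in 'corpus'

    plan = {"paperclips": 50_000, "uses_theft": True}
    print(score_filtered(plan, utility, violates_law))   # None: blocked outright
    print(score_folded_in(plan, utility, violates_law))  # -950000: merely dispreferred
```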
On a side point, my argument about cause, negligence, etc. was not that the computer would fail to understand them, but that as regards a superintelligence, they could easily be either meaningless or over-effective.
For an example of the latter, if we allow someone to die, that's criminal negligence. This is designed for walking past drowning people and ignoring them, etc. A law-abiding computer might calculate, say, that even with cryonics etc., every life will end in death due to the universe's heat death. It might then sterilise the entire human population to avoid new births, as each birth would necessitate a death. And so on. Obviously this would clash with other laws, but that's part of the problem: every action would involve culpability in some way, due to greater knowledge of consequences.
Replies from: JWJohnston↑ comment by JWJohnston · 2011-03-12T02:32:52.966Z · LW(p) · GW(p)
The laws might be appropriately viewed primarily as blocks that keep the AI from taking actions deemed unacceptable by the collective. AIs could pursue whatever goals they see fit within the constraints of the law.
However, the laws wouldn't be all prohibitions. The "general laws" would be more prescriptive, e.g., life, liberty, justice for all. The "specific laws" would tend to be more prohibition-oriented. Presumably the vast majority of them would be written to handle common situations and important edge cases. If someone suspects the citizenry may be in jeopardy of frequent runaway trolley incidents, the legislature can write statutes on what is legal to throw under the wheels to prevent deaths of (certain configurations of) innocent bystanders. Probably want to start with inanimate objects before considering sentient robots, terminally sick humans, fat men, puppies, babies, and whatever. (It might be nice to have some clarity on this! :-))
To explore your negligence case example, I imagine some statute might require agents to rescue people in imminent danger of losing their lives if possible, subject to certain extenuating circumstances. The legislature and public can have a lively debate about whether this law still makes sense in a future where dead people can be easily reanimated or if human life is really not valuable in the grand scheme of things. If humans have good representatives in the legislature and/or a few good AI advocates, mass human extermination shouldn't be a problem, at least until the consensus shifts in such directions. Perhaps some day there may be a consensus on forced sterilizations to prevent greater harms. I'd argue such a system of laws should be able to handle it. The key seems to be to legislate prescriptions and prohibitions relevant to the current state of society and change them as the facts on the ground change. This would seem to get around the impossibility of defining eternal laws or algorithms that are ever-true in every possible future state.
Replies from: DavidAgain↑ comment by DavidAgain · 2011-03-12T11:13:13.165Z · LW(p) · GW(p)
I still don't see how laws as barriers could be effective. People are arguing whether it's possible to write highly specific failsafe rules capable of acting as barriers, and the general feeling is that you wouldn't be able to second-guess the AI enough to do that effectively. I'm not sure what replacing these specific laws with a large corpus of laws achieves. On the plus side, you've got a large group of overlapping controls that might cover each other's weaknesses. But they're not specially written with AI in mind, and even if they were, small political shifts could lead to loopholes opening. And the number also means that you can't clearly see what's permitted or not: it risks an illusion of safety simply because we find it harder to think of something bad an AI could do that doesn't break any law.
Not to mention the fact that a utility-maximising AI would seek to change laws to make them better for humans, so the rules controlling the AI would be a target of its influence.
Replies from: JWJohnston↑ comment by JWJohnston · 2011-03-12T14:26:34.717Z · LW(p) · GW(p)
I guess here I'd reiterate this point from my latest reply to orthonormal:
Again, it's not only about having lots of rules. More importantly it's about the checks and balances and enforcement the system provides.
It may not be helpful to think of some grand utility-maximising AI that constantly strives to maximize human happiness or some other similar goals, and can cause us to wake up in some alternate reality some day. It would be nice to have some AIs working on how to maximize some things humans value, e.g., health, happiness, attractive and sensible shoes. If any of those goals appeared to be impeded by current law, the AI would lobby its legislature to amend the law. And in a better future, important amendments would go through rigorous analysis in a few days, better root out unintended consequences, and be enacted as quickly as prudent.
↑ comment by JoshuaZ · 2011-03-11T05:50:01.593Z · LW(p) · GW(p)
Many legal systems have all sorts of laws that are vague or even contradictory. Sometimes laws are on the books and are just no longer enforced. Many terms in laws are also ill-defined, sometimes deliberately so. Having an AI try to have almost anything to do with them is a recipe for disaster or comedy (most likely both).
Replies from: JWJohnston↑ comment by JWJohnston · 2011-03-12T02:47:02.505Z · LW(p) · GW(p)
We would probably start with current legal systems and remove outdated laws, clarify the ill-defined, and enact a bunch of new ones. And our (hyper-)rational AI legislators, lawyers, and judges should not be disposed to game the system. AI and other emerging technologies should both enable and require such improvements.
↑ comment by lessdazed · 2011-07-02T13:49:15.821Z · LW(p) · GW(p)
Surely a 200 volume set of international law would be a cakewalk for an AGI.
It seems like an applause light to invoke international law as a solution to almost anything, particularly this problem. What aspect of having rules made in a compromise of politicking makes them less likely to have exploitable loopholes than any other system?
If they are not legal, the AI must consider other actions or suffer the wrath of law enforcement (from fines to rehabilitation).
Fines? The misdoing we're worried about is seizing power. Fines would require power sufficient to punish an AI after its misdoings, and have nothing to do with programming it not to be harmful.
AIs could even refer to the appropriate corpus when traveling in different jurisdictions (e.g., Western Law, Islamic Law, Chinese Law)
Somehow I don't think the solution to the problem of having powerful AIs that don't care about us (for better or worse) is to teach them Islamic law.