What's your #1 reason to care about AI risk?
post by ChrisHallquist · 2013-01-20T21:52:00.736Z · LW · GW · Legacy · 16 comments
It's way late in my time zone, and I suspect this question isn't technically coherent, on the grounds that the right answer to "why care about AI risk?" is going to be complicated and have a bunch of parts that can't be separated from each other. But I'm going to share a thought I had anyway.
It seems to me like probably, the answer to the question of how to make AIs benevolent isn't vastly more complicated than the answer of how to make them smart. What's worrisome about our current situation, however, is that we're currently putting way more effort into making AIs smart than we are into making them benevolent.
Agree? Disagree? Have an orthogonal answer to the title question?
16 comments
comment by wwa · 2013-01-20T23:16:56.169Z · LW(p) · GW(p)
It seems to me like probably, the answer to the question of how to make AIs benevolent isn't vastly more complicated than the answer of how to make them smart (...) Agree? Disagree?
Disagree. Seems to me that we can make them outsmart us by sheer computational power in less than a hundred years, but we can't make them friendly without serious mathematical background.
An analogy: it's the difference between making an avalanche and aiming an avalanche.
comment by passive_fist · 2013-01-21T07:26:07.950Z · LW(p) · GW(p)
Well, I could give you a bunch of conformist BS, OR, since this is lesswrong.com, I could tell you how I actually feel.
The reason I care about benevolent AIs is not the survival of the human race; it's more personal. What I care about is pain and suffering. I am convinced (this is, of course, a matter of faith) that benevolent AIs would vastly improve my quality of life and that malicious AIs could greatly reduce it (for example, by gruesomely killing me or, worse, inflicting a large amount of pain on me). Thus it is in my personal interest that AI be benevolent.
I don't really care about the status quo. As long as AIs are benevolent, I don't care if human civilization as it currently exists is destroyed and replaced by something else.
Now, as for the complexity of making AIs benevolent vs. making them smart: I don't think a rational answer can be given, simply because we don't know enough about what it would take to make them benevolent.
In the science fiction of the '50s and '60s, it was common to depict AIs as able to understand speech easily but unable to talk, or only able to talk in simple sentences. People thought producing utterances was more difficult to program than understanding them. A survey of current chatbots shows this to be ridiculously backward.
comment by private_messaging · 2013-01-21T18:00:43.507Z · LW(p) · GW(p)
What's worrisome about our current situation, however, is that we're currently putting way more effort into making AIs smart than we are into making them benevolent.
Smart is too broad and encompasses too many aspects, into most of which we are not putting any effort either. Namely, we put zero effort into properly motivating AIs to act reasonably in the real world. For example, there is AIXI, which wants its button pressed but would gladly destroy itself, yet there is no equally formalized Clippy that wants to make real-world paperclips and won't destroy itself because that would undermine paperclip production. Sure, there's the self-driving car that gets from point A to point B in minimum time, but it can't and won't search for ways to destroy obstacles in its path. It doesn't really have a goal of moving a physical box of metal from point A to point B; it's solving various equations that do not capture the task well enough to have solutions such as 'kill everyone to reduce traffic congestion'. If you had a genie and asked it to move metal from point A to point B, then in magical stories the genie could do something unintended, while in reality you have to be awfully specific just to get it to work at all, and if you want it to look for ways to kill everyone to reduce traffic congestion, you need to be even more specific.
Also, 'smart' does in fact encompass restrictions. It's easy to make an AI that enumerates every possible solution, but it won't be 'smart' on any given hardware; on given hardware, 'smartness' implies restricting the solution space. On any given hardware, an AI that ponders how to kill everyone or how to create a hyperdrive or the like (a trillion irrelevancies it won't have the computing time to make any progress on) when the task is driving the car is dumber than an AI that does not.
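To make the restricted-solution-space point concrete, here is a minimal sketch (not from the thread itself; the toy road graph, weights, and function name are illustrative assumptions). The planner's candidate solutions are, by construction, only paths through a road graph, so actions like "destroy the obstacle" are not even representable in its output:

```python
import heapq

def shortest_route(road_graph, start, goal):
    """Dijkstra over a road graph whose edge weights are travel times.

    The optimizer's entire solution space is 'sequences of road segments';
    interventions on the physical world are not representable here at all.
    """
    frontier = [(0.0, start, [start])]  # (cost so far, current node, path taken)
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, travel_time in road_graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier, (cost + travel_time, neighbor, path + [neighbor]))
    return float("inf"), []

# Illustrative road graph: adjacency lists of (neighbor, travel time in minutes).
road_graph = {
    "A": [("B", 5.0), ("C", 2.0)],
    "C": [("B", 2.0)],
    "B": [],
}
print(shortest_route(road_graph, "A", "B"))  # -> (4.0, ['A', 'C', 'B'])
```

Getting such a system to even consider something like clearing obstacles would require deliberately expanding the representation it searches over, which is roughly the comment's point.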
comment by Kaj_Sotala · 2013-01-21T09:27:21.485Z · LW(p) · GW(p)
Disagree.
I'm lazy, so I'll just copy the content of my earlier comment:
IMO nobody so far has managed to propose an FAI approach that wouldn't be riddled with serious problems. Almost none of them work if we have a hard takeoff, and a soft takeoff might not be any better, due to allowing lots of different AGIs to compete and leading to [various evolutionary scenarios in which it seems highly unlikely that humans will come out on top]. If there's a hard takeoff, you need to devote a lot of time and effort to making the design safe and also be the first one to have your AGI undergo a hard takeoff, two mutually incompatible goals. That's assuming that you even have a clue of what kind of a design would be safe. Something CEV-like could qualify as safe, but currently it remains so vaguely specified that it reads more like a list of applause lights than an actual design, and even getting to the point where we could call it a design feels like it requires solving numerous difficult problems, some of which have remained unsolved for thousands of years, and our remaining time might be counted in tens of years rather than thousands or even hundreds... and so on and so on.
Not saying that it's impossible, but there are far more failure scenarios than successful ones, and an amazing amount of things would all have to go right in order for us to succeed.
comment by blogospheroid · 2013-01-21T09:02:10.263Z · LW(p) · GW(p)
One of the reasons I care about friendly AI is that I live in a uniquely dysfunctional country, India.
India has a long-remembered history, thousands of communities, and extreme religiosity. It would have been good if those communities had been allowed to go their own ways, but bolted onto this was a socialist government that has been the largest dog-in-the-manger in the history of humankind. In addition, the country is a one-man, one-vote system in which politicians promise short-term goods to the people, and resources that should be used to build physical and social infrastructure are frittered away.
In short, governing India is an FAI-hard problem, or at least that's how it looks to me. Any little bit I can contribute to that would be worth it. 1.27 billion Indians would be very grateful if a genuinely benevolent friendly AI emerged.
comment by Baughn · 2013-01-22T12:13:32.164Z · LW(p) · GW(p)
I care about AI risk because I don't want to die.
Ideally, I'll live forever, but right now I'd be unwilling to give good odds of surviving the next three decades. AI, being both the cause of most deaths and the solution to almost all of them, seems well worth caring about.
comment by Dorikka · 2013-01-21T01:44:42.531Z · LW(p) · GW(p)
It seems to me like probably, the answer to the question of how to make AIs benevolent isn't vastly more complicated than the answer of how to make them smart.
Doesn't pretty much everything on the complexity of human values point against this?
Replies from: ChrisHallquist, Vladimir_Nesov
↑ comment by ChrisHallquist · 2013-01-21T07:12:37.723Z · LW(p) · GW(p)
The problem with talking about how X is complex is that it leaves the question, "complex relative to what?" It certainly looks complex relative to various attempts to state all of moral theory in a sentence or three. But I tend to think the software for intelligence would also have to be much more complex than that.
↑ comment by Vladimir_Nesov · 2013-01-21T01:59:20.217Z · LW(p) · GW(p)
Creating something very complicated isn't necessarily very hard if you can figure out how to specify what you want indirectly, invoking tools that would do it for you. (What we know is that it doesn't happen on its own (by symmetry with alternative outcomes) and that so far it's not clear how to do it.)
comment by TheOtherDave · 2013-01-21T00:21:40.191Z · LW(p) · GW(p)
Depends a lot on what we mean by "smart" and "benevolent."
For example, suppose by "smart" we mean not only intelligent, but intelligent enough to self-improve its own intelligence in a for-practical-purposes-unbounded fashion. That seems at least roughly consistent with what we mean here by risky AI. And by "self-improve" here we should include things like an agent building a system external to it, not just an agent modifying itself; that is, a system prohibited from editing its own source code but able to build another, smarter system and execute it should still count as self-improving for purposes of risk assessment.
And suppose that by "benevolent" we mean capable of consistently and reliably acting in the long-term best interests of humans.
Now, either (a) it's possible for humans to build a smart machine, or (b) it's not possible.
If (a), then humans are themselves a smart machine, so evolution is capable of building one. Yet humans aren't benevolent, despite what would seem to be significant selection pressure for benevolence operating over the same time period. That suggests that benevolence is a harder problem.
If (b), then this argument doesn't apply, and benevolence might be a harder or easier problem, or neither. That said, if (b), then it's not clear why we ought to worry about AI risk at all.
Replies from: NancyLebovitz, ChrisHallquist
↑ comment by NancyLebovitz · 2013-01-23T14:20:52.542Z · LW(p) · GW(p)
Humans are semi-benevolent. I believe that if people in general didn't do more good (behavior which leads towards human survival) than harm, the human race could not have existed as long as it has.
By observation, it's not a matter of a small minority of people who make things a lot better vs. a majority whose effect is neutral or negative. I'm reasonably sure most people do more good than harm. The good that people do for themselves is included in the calculation.
This doesn't mean people come anywhere near a theoretical maximum of benevolence. It just means that common behavior which doesn't cause a problem doesn't even get noticed.
I don't know whether realizing this gives some way of applying leverage to get more benevolence, though I'm inclined to think that "build on what you're doing well" is at least as good as "look at how awful you are". (For the latter, consider the number of people who believe that if aliens met us, they'd destroy us out of disgust at how we treat each other.)
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2013-01-23T14:41:15.145Z · LW(p) · GW(p)
As I said initially, a lot depends on what we mean by "benevolent." If we mean reliably doing more good for humans than harm, on average, then I agree that humans are benevolent (or "semibenevolent," if you prefer) and suspect that building a benevolent (or semibenevolent) AGI is about as hard as building a smart one.
I agree that having a positive view of human nature has advantages over an equally accurate negative view.
↑ comment by ChrisHallquist · 2013-01-21T07:07:07.223Z · LW(p) · GW(p)
"Whereas humans aren't benevolent, despite what would seem to be significant selection pressure for benevolence operating over the same time-period."
Evolution doesn't act for the good of the species, so this looks wrong.
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2013-01-21T15:48:47.694Z · LW(p) · GW(p)
That's an interesting distinction. When I said:
And suppose that by "benevolent" we mean capable of consistently and reliably acting in the long-term best interests of humans.
...I in fact meant humans as individuals. And traits that serve the long-term best interests of individuals are in fact subject to selection pressure on the genome.
But perhaps you're suggesting that by "benevolent" I ought to have meant capable of consistently and reliably acting in the long-term best interests of humanity as a species, and not necessarily the individual?
Replies from: ChrisHallquist
↑ comment by ChrisHallquist · 2013-01-22T20:45:54.111Z · LW(p) · GW(p)
Ah, I was thrown by the plural at the end of your definition.
But by saying humans aren't benevolent, you mean that there are no/few humans for which it's true that "this person consistently acts in hir own best interests?"
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2013-01-22T20:57:30.264Z · LW(p) · GW(p)
Yes, that's what I mean. Actually, more broadly, I mean that there are no humans for which it's true that they consistently act in anyone's best interests, including themselves.