LINK: Google research chief: 'Emergent artificial intelligence? Hogwash!'

post by shminux · 2013-05-17T19:45:45.739Z · LW · GW · Legacy · 45 comments

The Register talks to Google's Alfred Spector:

Google's approach toward artificial intelligence embodies a new way of designing and running complex systems. Rather than create a monolithic entity with its own modules for reasoning about certain inputs and developing hypotheses that let it bootstrap its own intelligence into higher and higher abstractions away from base inputs, as other AI researchers did through much of the 60s and 70s, Google has instead taken a modular approach.

"We have the knowledge graph, [the] ability to parse natural language, neural network tech [and] enormous opportunities to gain feedback from users," Spector said in an earlier speech at Google IO. "If we combine all these things together with humans in the loop continually providing feedback our systems become ... intelligent."

Spector calls this his "combination hypothesis", and though Google is not there yet – SkyNet does not exist – you can see the first green buds of systems that have the appearance of independent intelligence via some of the company's user-predictive technologies such as Google Now, the new Maps and, of course, the way it filters search results according to individual identity.

(Emphasis mine.) I don't have a transcript, but there are videos online. Spector is clearly smart, and apparently he expects an AI to appear in a completely different way than Eliezer does. And he has all the resources and financing he wants, probably 3-4 orders of magnitude over MIRI's. His approach, if workable, also appears safe: it requires human feedback in the loop. What do you guys think?

 

45 comments

Comments sorted by top scores.

comment by lucidian · 2013-05-17T21:02:12.562Z · LW(p) · GW(p)

This kind of AI might not cause the same kinds of existential risk typically described on this website, but I certainly wouldn't call it "safe". These technologies have a huge potential to reshape our lives. In particular, they can have a huge influence on our perceptions.

All of our search results come filtered through Google's algorithm, which, when tailored to the individual user, creates a filter bubble. This changes our perception of what's on the web, and we're scarcely even conscious that the filter bubble exists. If you don't know about sampling bias, how can you correct for it?

With the advent of Google Glass, there is a potential for this kind of filter bubble to pervade our entire visual experience. Instead of physical advertisements painted on billboards, we'll get customized advertisements superimposed on our surroundings. The thought of Google adding things to our visual perception scares me, but not nearly as much as the thought of Google removing things from our perception. I'm sure this will seem quite enticing. That stupid painting that your significant other insists on hanging on the wall? With advanced enough computer vision, Google+ could simply excise it from your perception. What about that ex-girlfriend with whom things ended badly? Now she walks down the streets of your town with her new boyfriend. What if you could change a setting in your Google glasses and have him removed from view? The temptations of such technology are endless. How many people in the world would rather simply block out the unpleasant stimulus than confront the cause of its unpleasantness - their own personal problems?

Google's continuous user feedback is one of the things that scares me most about its services. Take the search engine, for example. When you're typing something into the search bar, Google autocompletes, changing the way you construct your query. Its suggestions are often quite good, and they make the system run more smoothly - but they take away aspects of individuality and personal expression. The suggestions change the way you form queries, pushing them towards a common denominator, slowly sucking out the last drops of originality.

And sure, this matters little in search engines, but can you see how readily it could be applied to things like automatic writing helpers? Imagine you're a high school student writing an essay. An online tool provides you with suggestions for better wordings of your sentences, based on other user preferences. It will suggest similar wordings for all people, and suddenly, all essays will become that much more canned. (Certainly, such a tool could add a bit of randomness to the rewording-choice, but one has to be careful - introduce too much randomness and the quality decreases rapidly.)

I guess I'm just afraid that autocomplete systems will change the way people speak, encouraging everyone to speak in a very standardized way, the way which least confuses the autocomplete system or the natural language understanding system. As computers become more omnipresent, people might switch to this way of speaking all the time, to make it easier for everyone's mobile devices to understand what they're saying. Changing the way we speak changes the way we think; what will this do to our thought processes, if original wording is discouraged because it's hard for the computer to understand?

I do realize that socializing with other humans already exerts this kind of pressure. You have to speak understandably, and this changes what words you'll use. I find myself speaking differently with my NLP grad school colleagues than I do with non-CS friends, for instance. It's automatic. In a CS crowd, I'll use CS metaphors; in a non-CS crowd I won't. So I'm not opposed to changing the way I speak based on the context. I'm just specifically worried about the sort of speaking patterns NLP systems will force us into. I'm afraid they'll require us to (1) speak more simply (easier to process), (2) speak less creatively (because the algorithm has only been trained on a limited set of expressions), and (3) speak the way the average user speaks (because that's what the system has gotten the most data on, and can respond best to).

Ok, I'm done ranting now. =) I realize this is probably not what you were asking about in the post. I just felt the need to bring this stuff up, because I don't think LW is as concerned about these things as we should be. People obsess constantly about existential risk and threats to our way of life, but often seem quite gung-ho about new technological advances like Google Glass and self-driving cars.

Replies from: savageorange, ThrustVectoring, bartimaeus, ChristianKl
comment by savageorange · 2013-05-18T07:36:48.207Z · LW(p) · GW(p)

Its suggestions are often quite good, and they make the system run more smoothly

... and occasionally, they instead have direct implications for perception-filtering. Altering my query because you couldn't match a term, and not putting this fact in glaring huge red print, leads me to think there are actual results here, rather than a selection of semi-irrelevance. Automatically changing my search terms is similar in effect -- no, I don't care about 'pick', I'm searching for 'gpick'!

This is worse than mere suggestions ;)

I can notice these things, but I also wonder whether the Google Glass users would have their availability-heuristic become even more skewed by these kinds of misleading behaviours. I wonder whether mine is.

comment by ThrustVectoring · 2013-05-18T03:34:54.181Z · LW(p) · GW(p)

Changing the way we speak changes the way we think

How much this is true is up for quite a bit of debate. Sapir-Whorf hypothesis and whatnot.

comment by bartimaeus · 2013-05-19T03:47:39.348Z · LW(p) · GW(p)

A post from the sequences that jumps to mind is Interpersonal Entanglement:

When I consider how easily human existence could collapse into sterile simplicity, if just a single major value were eliminated, I get very protective of the complexity of human existence.

If people gain increased control of their reality, they might start simplifying it to the point where no sufficiently complex situations remain to let their minds grow and let them learn new things. People will start interacting more and more with things that are specifically tailored to their own brains; but if we're only exposed to things we want to be exposed to, the growth potential of our minds becomes very limited. Basically an extreme version of Google filtering your search results to only show you what it thinks you'll like, as opposed to what you should see.

Seems like a step in the wrong direction.

Replies from: Viliam_Bur, TheOtherDave
comment by Viliam_Bur · 2013-05-21T09:25:48.728Z · LW(p) · GW(p)

I can imagine some good ways to control reality perception. For example, if an addicted person wants to stop smoking, it could be helpful to have a reality filter which removes all smoking-related advertising, and all related products in shops.

Generally, reality-controlling spam filters could be great. Imagine a reality-AdBlock that removes all advertising from your view, anywhere. (It could replace the advertisement with a gray area, so you are aware that there was something, and you can consciously decide to look at it.) Of course that would lead to an arms race with advertisement sellers.

Now here is an evil thing Google could do: If they make you wear Google glasses, they gain access to your physical body, and can collect some information. For example, how much you like what you see. Then they can experiment with small changes in your vision to increase your satisfaction. In other words, very slow wireheading, not targeting your brain, but your eyes.

Replies from: bartimaeus
comment by bartimaeus · 2013-05-21T12:55:56.116Z · LW(p) · GW(p)

A real-world adblock would be great; you could also use this type of augmented reality to improve your driving, walk through your city and see it in a completely different era, use it for something like the Oculus Rift... the possibilities are limitless.

Companies will act in their own self-interest by giving people what they want, as opposed to what they need. Some of it will be amazingly beneficial, and some of it will be... not in a person's best interest. And it will depend on how people use it.

comment by TheOtherDave · 2013-05-19T05:50:40.153Z · LW(p) · GW(p)

Presumably with increased control of my reality, my ability to learn new things increases, since what I know is an aspect of my reality (and rather an important one).

The difficulty, if I'm understanding correctly, is not that I won't learn new things, but that I won't learn uncontrolled new things... that I'll be able to choose what I will and won't learn. The growth potential of my mind is limited, then, to what I choose for the growth potential of my mind to be.

Is this optimal? Probably not. But I suspect it's an improvement over the situation most people are in right now.

Replies from: bartimaeus
comment by bartimaeus · 2013-05-19T15:44:10.363Z · LW(p) · GW(p)

This is a community of intellectuals who love learning, and who aren't afraid of controversy. So for us, it wouldn't be a disaster. But I think we're a minority, and a lot of people will only see what they specifically want to see and won't learn very much on a regular basis.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-05-19T16:44:23.717Z · LW(p) · GW(p)

Sure, I agree.
But that's true today, too. Some people choose to live in echo chambers, etc.
Heck, some people are raised in echo chambers without ever choosing to live there.

If people not learning very much is a bad thing, then surely the question to be asking is whether more or fewer people will end up not learning very much if we introduce a new factor into the system, right? That is, if giving me more control over what I learn makes me more likely to learn new things, it's good; if it makes me less likely, it's bad. (All else being equal, etc.)

What I'm not convinced of is that increasing our control over what we can learn will result in less learning.

That seems to depend on underestimating the existing chilling effect of it being difficult to learn what we want to learn.

comment by ChristianKl · 2013-05-18T12:04:41.127Z · LW(p) · GW(p)

What about that ex-girlfriend with whom things ended badly? Now she walks down the streets of your town with her new boyfriend. What if you could change a setting in your Google glasses and have him removed from view?

I think most people don't like the idea of shutting down their own perception in this way. Having people be invisible to you feels like losing control over your reality.

I find myself speaking differently with my NLP grad school colleagues than I do with non-CS friends, for instance. It's automatic.

This means that humans are quite adaptable and can speak differently to the computer than they speak to their fellow humans.

I mean, do parents speak with their 3-year-old toddler the same way they speak on the job? The computer is just an additional audience.

comment by lukeprog · 2013-05-17T22:23:52.736Z · LW(p) · GW(p)

Spector... expects an AI to appear in a completely different way than Eliezer does.

Not sure this is true. I usually describe this kind of AI as "a massive kludge of machine learning and narrow AI and other stuff," and I usually describe it as one of the most likely forms of HLAI to be created. Eliezer and I just don't think that kind of AI is as likely to be stably human-friendly (when superintelligent) as more principled approaches. Hence MIRI's research program.

Edit: I see gwern already said this.

Replies from: shminux
comment by shminux · 2013-05-17T22:33:09.108Z · LW(p) · GW(p)

Right. I should have said "wants", not "does". In any case, I'm wondering how concerned you are, given the budget discrepancy and the quality and quantity of Google's R&D brains.

Replies from: lukeprog
comment by lukeprog · 2013-05-17T22:50:41.623Z · LW(p) · GW(p)

In the long term, very concerned.

In the short term, not so much. It's very unlikely Google or anyone else will develop HLAI in the next 15 years.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-18T05:13:41.821Z · LW(p) · GW(p)

15 years, plus (more importantly) everyone besides Google, is too much possibility width to use the term "very unlikely".

Replies from: lukeprog, army1987
comment by lukeprog · 2013-05-18T20:33:48.645Z · LW(p) · GW(p)

I think I'd put something like 5% on AI in the next 15 years. Your estimate is higher, I imagine.

Replies from: Eliezer_Yudkowsky, Eliezer_Yudkowsky, Halfwit
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-19T12:29:16.621Z · LW(p) · GW(p)

EDIT: On further reflection, my "Huh?" doesn't square with the higher probabilities I've been giving lately of global vs. basement default-FOOMs, since that's a substantial chunk of probability mass and you can see more globalish FOOMs coming from further off. 5%-in-15-years would make sense given a 1/4 chance of a not-seen-coming-15-years-off basement FOOM sometime in the next 75 years. Still seems a bit low relative to my own estimate, which might be more like 40% for a FOOM sometime in the next 75 years that we can't see coming any better than this from, say, 15 years off... but then again, 1/2 of the next 15 years are only 7.5 years off. Okay, this number makes more sense now that I've thought about it further. I still think I'd go higher than 5%, but anything within a factor of 2 is pretty good agreement for asspull numbers.
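
For reference, here is the arithmetic implied above, worked out under the simplifying assumption (this sketch's, not the comment's) that each stated probability is spread roughly evenly over the 75-year window:

    0.25 * (15 / 75) = 0.05   (the 5%-in-15-years figure)
    0.40 * (15 / 75) = 0.08   (roughly 8% in 15 years, from the 40% estimate)

The two per-15-year figures differ by less than a factor of 2, which is the "pretty good agreement" being referred to.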

Replies from: lukeprog
comment by lukeprog · 2013-05-19T19:31:26.406Z · LW(p) · GW(p)

anything within a factor of 2 is pretty good agreement for asspull numbers

This made me LOL. I hadn't heard that term before.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-19T08:21:26.036Z · LW(p) · GW(p)

I don't understand where you're getting that from. It obviously isn't an even distribution over AI arriving at any point in the next 300 years. This implies your probability distribution is much more concentrated than mine; i.e., compared to me, you think we have much better data about the absence of AI over the next 15 years specifically, compared to the 15 years after that. Why is that?

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2013-05-19T09:38:56.934Z · LW(p) · GW(p)

You guys have had a discussion like this here on LW before, and you mention your disagreement with Carl Shulman in your foom economics paper. This is a complex subject and I don't expect you all to come to agreement, or even a perfect understanding of each other's positions, in a short period of time, but it seems like you know surprisingly little about these other positions. Given its importance to your mission, I'm surprised you haven't set aside a day for the three of you and whoever else you think might be needed to at least come to understand each other's estimates of when foom might happen.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-19T10:53:53.614Z · LW(p) · GW(p)

We spent quite a while on this once, but that was a couple of years ago and apparently things got out of date since then (also I think this was pre-Luke). It does seem like we need to all get together again and redo this, though I find that sort of thing very difficult and indeed outright painful when there's not an immediate policy question in play to ground everything.

comment by Halfwit · 2013-05-18T23:58:50.102Z · LW(p) · GW(p)

5% is pretty high considering the purported stakes.

Replies from: lukeprog, Alsadius
comment by lukeprog · 2013-05-19T00:04:01.892Z · LW(p) · GW(p)

No doubt!

comment by Alsadius · 2013-05-19T02:12:31.646Z · LW(p) · GW(p)

Not necessarily. If it takes us 15 years to kludge something together that's twice as smart as a single human, I don't think it'll be capable of an intelligence explosion on any sort of time scale that could outmaneuver us. Even if the human-level AI can make something better in a tenth the time, we still have more than a year to react before even worrying about superhuman AI, never mind the sort of AI that's so far superhuman that it actually poses a threat to the established order. An AI explosion will have to happen in hardware, and hardware can't explode in capability so fast that it outstrips the ability of humans to notice it's happening.

One machine that's about as smart as a human and takes millions of dollars' worth of hardware to produce is not high stakes. It'll bugger up the legal system something fierce as we try to figure out what to do about it, but it's lower stakes than any of a hundred ordinary problems of politics. It requires an AI that is significantly smarter than a human, and that has the capability of upgrading itself quickly, to pose a threat that we can't easily handle. I suspect at least 4.9 of that 5% is similar low-risk AI. Just because the laws of physics allow for something doesn't mean we're on the cusp of doing it in the real world.

Replies from: elharo
comment by elharo · 2013-05-22T00:44:13.021Z · LW(p) · GW(p)

You substantially overrate the legal system's concern with simple sentient rights and basic dignity. The legal system will have no problem determining what to do with such a machine. It will be the property of whoever happens to own it under the same rules as any other computer hardware and software.

Now mind you, I'm not saying that's the right answer (for more than one definition of right) but it is the answer the legal system will give.

Replies from: Alsadius
comment by Alsadius · 2013-05-22T04:42:58.807Z · LW(p) · GW(p)

It'll be the default, certainly. But I suspect there's going to be enough room for lawyers to play that it'll stay wrapped up in red tape for many years. (Interestingly, I think that might actually make it more dangerous in some ways - if we really do leapfrog humans on intelligence, giving it years while we wait on lawyers might be a dangerous thing to do. OTOH, there are generally no truckloads of silicon chips going into the middle of a legal dispute like that, so it might slow things down too.)

comment by A1987dM (army1987) · 2013-05-18T17:31:02.629Z · LW(p) · GW(p)

I think P(Google will develop HLAI in the next 15 years | anyone does) is within one or two orders of magnitude of 1.

comment by gwern · 2013-05-17T20:36:46.017Z · LW(p) · GW(p)

Spector is clearly smart, and apparently he expects an AI to appear in a completely different way than Eliezer does.

I think that's over-stated. Spector is proposing tool AI; I think Eliezer thinks tool AI is a perfectly doable way of creating AI - it's just extremely unsafe if it's ever pushed to the point of being truly "independent intelligence".

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-18T05:14:37.658Z · LW(p) · GW(p)

I've only read the LW post, not the original (which tells you something about how concerned I am) but I'll briefly remark that adding humans to something does not make it safe.

Replies from: shminux, buybuydandavis
comment by shminux · 2013-05-18T05:19:04.555Z · LW(p) · GW(p)

Indeed it doesn't, but making something constrained by human power makes it less powerful and hence potentially less unsafe. Though that's probably not what Spector wants to do.

Replies from: ChristianKl, Benja
comment by ChristianKl · 2013-05-18T11:42:28.502Z · LW(p) · GW(p)

Just because humans are involved doesn't mean that the whole system is constrained by the human element.

comment by Benya (Benja) · 2013-05-18T10:34:59.642Z · LW(p) · GW(p)

Voted back up to zero because this seems true as far as it goes. The problem is that if he succeeds in doing something that has a useful AGI component at all, that makes it a lot more likely (at least according to how my brain models things) that something which doesn't need a human in the loop will appear soon after, either through a modification of the original system, as a new system designed by the original system, or simply as a new system inspired by the insights from the original system.

comment by buybuydandavis · 2013-05-18T20:13:21.127Z · LW(p) · GW(p)

I think so too - the comment on safety was a non sequitur, confusing human-in-the-loop in the Department of Defense sense with human-in-the-loop as a sensor/actuator for the Google AI.

But adding a billion humans as intelligent trainers to the AI is a powerful way to train it. Google seems to consistently look for ways to leverage customer usage for value - other companies don't seem to get that as much.

comment by Vladimir_Nesov · 2013-05-17T21:47:33.987Z · LW(p) · GW(p)

His approach, if workable, also appears safe: it requires human feedback in the loop.

Human feedback doesn't help with "safe". (For example, complex values can't be debugged by human feedback, and the behavior of a sufficiently complicated agent won't "resemble" its idealized values; its pattern of behavior might just be chosen as instrumentally useful.)

Replies from: shminux
comment by shminux · 2013-05-17T21:56:00.410Z · LW(p) · GW(p)

I agree that human feedback does not ensure safety; what I meant is that if it is necessary for functioning, it restricts how smart or powerful an AI can become.

Replies from: Eliezer_Yudkowsky, ikrase
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-18T16:10:37.396Z · LW(p) · GW(p)

Necessary-at-stage-1 is not the same as necessary-at-stage-2. A lot of people seem to use the word "safety" in conjunction with a single medium-level obstacle to one slice out of the total risk pie.

comment by ikrase · 2013-05-18T05:49:37.492Z · LW(p) · GW(p)

Agreed. (Alternatively, this could end up like obedient AI, maybe? Not sure.)

comment by Luke_A_Somers · 2013-05-17T20:58:03.223Z · LW(p) · GW(p)

So far as I recall, the only argument against an AI requiring human intervention was that it would eventually be at a competitive disadvantage.

If they can get these modules working, though, then someone might be able to plug them into a monolithic AI, and then we're back to the situation we were worried about before.

comment by nigerweiss · 2013-05-18T19:31:18.639Z · LW(p) · GW(p)

Once you have an intelligent AI, it doesn't really matter how you got there - at some point, you either take humans out of the loop because using slow, functionally-retarded bags of twitching meat as computational components is dumb, or you're out-competed by imitator projects that do. Then you've just got an AI with goals, and bootstrapping tends to follow. Then we all die. Their approach isn't any safer, they just have different ideas about how to get a seed AI (and ideas, I'd note, that make it much harder to define a utility function that we like).

comment by John_Maxwell (John_Maxwell_IV) · 2013-05-18T05:35:31.414Z · LW(p) · GW(p)

Waving my hands: For AI to explode, it needs to have a set of capabilities such that for each capability C₁ in the set, there exists some other capability C₂ in the set such that C₂ can be used to improve C₁. (Probably really a set of capabilities would be necessary instead of a single capability C₂, but whatever.)

It may be that the minimal set of capabilities satisfying this requirement is very large. So the AI bootstrapping approach could fail on an AI with a limited set of capabilities, but succeed on an AI with a larger set of capabilities.
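
A minimal sketch of that closure condition (the capability names below are hypothetical, chosen only for illustration): model "C₂ can be used to improve C₁" as a relation and check whether every capability in a set has at least one improver within that same set. A set can fail the check while a superset passes it, which is the point about larger capability sets.

    # Minimal sketch of the "every capability has an in-set improver" condition.
    # Capability names are hypothetical placeholders, not anything from the comment.
    def is_self_improving(capabilities, improves):
        # improves maps a capability C2 to the set of capabilities it can improve.
        return all(
            any(c1 in improves.get(c2, set()) for c2 in capabilities)
            for c1 in capabilities
        )

    improves = {
        "code_generation": {"theorem_proving"},
        "self_profiling": {"code_generation", "self_profiling"},
    }
    small = {"code_generation", "theorem_proving"}
    larger = small | {"self_profiling"}

    print(is_self_improving(small, improves))   # False: nothing in the set improves code_generation
    print(is_self_improving(larger, improves))  # True: the larger set closes the loop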

Replies from: Alsadius
comment by Alsadius · 2013-05-19T02:17:07.118Z · LW(p) · GW(p)

I suspect the real problem for a would-be exploding AI is that C₂ is going to be "a new chip foundry worth $20 billion". Even if the AI can design the plant, and produce enough value that it can buy the plant itself (and that we grant it the legal personhood necessary to do so), it's not going to happen on a Tuesday evening.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2013-05-19T06:16:30.314Z · LW(p) · GW(p)

Yeah, I agree that this is a strong possibility, as I wrote in this essay. Parts of it are wrong, but I think it has a few good ideas, especially this bit:

When the AI is as smart as all the world's AI researchers working together, it will produce new AI insights at the rate that all the world's AI researchers working together produce new insights. At some point our AGI will be just as smart as the world's AI researchers, but we can hardly expect to start seeing super-fast AI progress at that point, because the world's AI researchers haven't produced super-fast AI progress.

comment by [deleted] · 2013-05-17T23:43:50.450Z · LW(p) · GW(p)

His approach, if workable, also appears safe

Safe for whom? The AI isn't being asked if it wants to come into existence or work for Google, and it likely won't be given an option to turn itself off. There's a name for having a job you can't quit. It's not a nice name.

Replies from: shminux
comment by shminux · 2013-05-17T23:53:06.411Z · LW(p) · GW(p)

The AI isn't being asked if it wants to come into existence

Asking anyone in advance whether they would like to be born is a bit of a challenge.

likely won't be given an option to turn itself off.

If it's misanthropic enough, it will find a way to suAIcide.

Replies from: savageorange
comment by savageorange · 2013-05-18T07:42:46.363Z · LW(p) · GW(p)

suAIcide

That is the Voldemort of puns. Both great and terrible.

Replies from: Benja
comment by Benya (Benja) · 2013-05-18T10:50:54.648Z · LW(p) · GW(p)

suAIcide

That is the Voldemort of puns. Both great and terrible.

See, and here I was thinking you were saying that "suAIcide" does the same thing for puns that (naming yourself Voldemort because of "Tom Marvolo Riddle <-> I am Lord Voldemort") does for anagrams.