Yoshua Bengio on AI progress, hype and risks

post by V_V · 2016-01-30T01:45:53.290Z · LW · GW · Legacy · 10 comments

Yoshua Bengio, one of the world's leading experts on machine learning, and on neural networks in particular, explains his views on AI progress, hype, and risks in an interview. Relevant quotes:

There are people who are grossly overestimating the progress that has been made. There are many, many years of small progress behind a lot of these things, including mundane things like more data and computer power. The hype isn’t about whether the stuff we’re doing is useful or not—it is. But people underestimate how much more science needs to be done. And it’s difficult to separate the hype from the reality because we are seeing these great things and also, to the naked eye, they look magical

[ Recursive self-improvement ] It’s not how AI is built these days. Machine learning means you have a painstaking, slow process of acquiring information through millions of examples. A machine improves itself, yes, but very, very slowly, and in very specialized ways. And the kind of algorithms we play with are not at all like little virus things that are self-programming. That’s not what we’re doing.

Right now, the way we’re teaching machines to be intelligent is that we have to tell the computer what is an image, even at the pixel level. For autonomous driving, humans label huge numbers of images of cars to show which parts are pedestrians or roads. It’s not at all how humans learn, and it’s not how animals learn. We’re missing something big. This is one of the main things we’re doing in my lab, but there are no short-term applications—it’s probably not going to be useful to build a product tomorrow.

We ought to be talking about these things [ AI risks ]. The thing I’m more worried about, in a foreseeable future, is not computers taking over the world. I’m more worried about misuse of AI. Things like bad military uses, manipulating people through really smart advertising; also, the social impact, like many people losing their jobs. Society needs to get together and come up with a collective response, and not leave it to the law of the jungle to sort things out.

I think it's fair to say that Bengio has joined the ranks of AI researchers, like his colleagues Andrew Ng and Yann LeCun, who publicly express skepticism towards imminent human-extinction-level AI.


10 comments

comment by Vika · 2016-01-30T04:59:54.904Z · LW(p) · GW(p)

The above-mentioned researchers are skeptical in different ways. Andrew Ng thinks that human-level AI is ridiculously far away, and that trying to predict the future more than 5 years out is useless. Yann LeCun and Yoshua Bengio believe that advanced AI is far from imminent, but approve of people thinking about long-term AI safety.

From the interview:

Interviewer: "Okay, but surely it's still important to think now about the eventual consequences of AI."

Bengio: "Absolutely. We ought to be talking about these things."

Replies from: jsteinhardt
comment by jsteinhardt · 2016-01-30T08:01:43.829Z · LW(p) · GW(p)

+1. To go even further, I would add that it's unproductive to think of these researchers as being on anyone's "side". These are smart, nuanced people, and rounding their comments down to a specific agenda is a recipe for misunderstanding.

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2016-04-07T17:52:00.764Z · LW(p) · GW(p)

Compared with articles from a year ago (e.g. http://www.popsci.com/bill-gates-fears-ai-ai-researchers-know-better), this represents significant progress.

I'm a PhD student in Yoshua's lab. I've spoken with him about this issue several times, and he has moved on this issue, as have Yann and Andrew. From my perspective, having followed this issue, there has been tremendous progress in the ML community's attitude towards Xrisk.

I'm quite optimistic that such progress will continue, although pessimistic that it will be fast enough, or that the ML community's attitude will be anything like sufficient for a positive outcome.

Replies from: HunterJay, satt, The_Jaded_One
comment by HunterJay · 2021-11-03T23:42:32.343Z · LW(p) · GW(p)

I am curious whether this has changed over the past 6 years since you posted this comment. Do you get the feeling that high-profile researchers have shifted even further towards Xrisk concern, or that they hold the same views as in 2016? Thanks!

Replies from: capybaralet
comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2021-11-13T13:55:34.131Z · LW(p) · GW(p)

There has been continued progress at about the rate I would've expected -- maybe a bit faster. I think GPT-3 has helped change people's views somewhat, as has further appreciation of other social issues of AI.

comment by satt · 2017-01-16T02:08:30.229Z · LW(p) · GW(p)

I'm a PhD student in Yoshua's lab. I've spoken with him about this issue several times, and he has moved on this issue,

Thank you!

comment by The_Jaded_One · 2017-01-12T09:55:23.560Z · LW(p) · GW(p)

Underrated comment of the thread!

comment by Manfred · 2016-02-01T06:38:26.962Z · LW(p) · GW(p)

skepticism towards imminent human-extinction-level AI.

Got around to reading the actual interview. The 'imminent' part is well and thoroughly skepted, but as has been talked to death around here, non-imminent human extinction still seems important. And that part just seems to get totally passed over, which leaves me feeling like there's some disconnect somewhere.

It's almost like this viewpoint got some celebrity endorsements, which had some idiosyncrasies and were necessarily brief, and then members of the media formed their own opinions based largely just on those celebrity statements, plus their own preconceptions and interests.

comment by TRIZ-Ingenieur · 2016-01-30T16:02:34.624Z · LW(p) · GW(p)

But people underestimate how much more science needs to be done.

The big thing that is missing is meta-cognitive self-reflection. It might turn out that even today's RNN structures are sufficient, and the only missing piece is how to interconnect multi-columnar networks with meta-cognition networks.

it’s probably not going to be useful to build a product tomorrow.

Yes. If the architecture is right and capable, little further science is needed to train such an AGI; it will learn on its own.

The amount of safety-related research needed is surely underestimated. The evolution of biological brains never needed extra constraints; society needed and created constraints, and it had time to do so. If science gets the architecture right, will the scientists really know what is going on inside their networks? How can developers integrate safety? There will not be a society of similarly capable AIs that can constrain its members. These are critical research questions, especially because we have little we can copy from.

comment by tukabel · 2016-01-31T20:26:51.268Z · LW(p) · GW(p)

I'm afraid we will never know whether someone is "close" to (super)human AGI unless that entity reveals it. Now think of the nuclear bomb... and superAGI is supposed to be orders of magnitude more powerful/dangerous.

So, not unlike the wartime disappearance of scientific articles on nuclear topics, a certain (sudden?) lack of progress reports in the press could be an indicator.