Henry Kissinger: AI Could Mean the End of Human History

post by ESRogs · 2018-05-15T20:11:11.136Z · LW · GW · 12 comments

This is a link post for https://www.theatlantic.com/magazine/archive/2018/06/henry-kissinger-ai-could-mean-the-end-of-human-history/559124/

12 comments

Comments sorted by top scores.

comment by RedMan · 2018-05-15T21:38:03.801Z · LW(p) · GW(p)

See below

Replies from: Charlie Steiner, ialdabaoth
comment by Charlie Steiner · 2018-05-16T06:04:07.215Z · LW(p) · GW(p)

How does one determine that he's 5 years behind (and behind what)?

Honestly, I'm just more impressed by Kissinger. He is, after all, 94 years old.

Replies from: RedMan
comment by RedMan · 2018-05-16T07:21:24.937Z · LW(p) · GW(p)

See below

Replies from: ESRogs
comment by ESRogs · 2018-05-16T09:59:40.345Z · LW(p) · GW(p)
Do you think that the salons he held included leaders in the AI safety field or just people in his normal circles who read a magazine article or two?

Perhaps you're just being facetious, but I think "people... who read a magazine article or two" underestimates the kind of person who would be in Henry Kissinger's circles.

In fact:

And, last year, Henry Kissinger jumped on the peril bandwagon, holding a confidential meeting with top A.I. experts at the Brook, a private club in Manhattan, to discuss his concern over how smart robots could cause a rupture in history and unravel the way civilization works.

My guess is that he consulted the people who seemed like the obvious experts to consult, namely AI experts. And that these may or may not have included people who were up-to-date on the subfield of AI Safety (which would have been more obscure in 2016 when these meetings were taking place).

Replies from: RedMan
comment by RedMan · 2018-05-16T13:27:56.563Z · LW(p) · GW(p)

Based on the content of the initial op-ed, I am confident in my assertion.

From long familiarity with Kissinger's work, I know that he understands that not even he is immune to the Dunning–Kruger effect, and that he takes steps to mitigate it. I assess that this op-ed was written after a highly credible effort to inform himself about the state of the field of AI. Unfortunately, based on my analysis of the op-ed's content, that effort either failed to identify the AI safety community of practice, or concluded that its current outputs were not worth detailed attention.

Kissinger's eminence is unquestionable, so the fact that up-to-date ideas about AI safety were not included points to a problem with the AI safety / x-risk community of practice's ability to demonstrate relevance to people who can actually take meaningful action based on its conclusions.

If your primary concern in life is x-risk from technology, and the guy who literally once talked the military into 'waiting until the President sobered up' before launching the nuclear war the President had ordered is either unaware of your work or doesn't view it as useful, then either you have not marketed yourself effectively, or your work is not useful.

Replies from: Raemon
comment by Raemon · 2018-05-17T21:32:41.482Z · LW(p) · GW(p)

Oh, I had a very different read on this. In this article, Kissinger (or possibly people ghost-writing for him) seemed remarkably clear on several of the most important bits of AI safety. I think it's unlikely he'd have run into those bits if he *hadn't* ended up talking to people involved with actual AI safety.

I currently think it more likely than not that he's spoken to some combination of Stuart Russell, Max Tegmark, and/or Nick Bostrom.

(This is based in part on some background knowledge that some of those people have gotten to talk to heads of state before).

The fact that he doesn't drill into the latest gritty details (either of the MIRI camp, the OpenAI camp, or anyone else), or mention any specific organizations, strikes me as having far less to do with how informed he is, and far more to do with his goals for the article (which are more to lend his credibility to the basic ideas behind AI risk, and to build momentum towards some kind of intervention). As noted elsewhere, I'm cautious about government involvement, but if you take that goal at face value, I think this article basically hits the notes I'd want it to hit.

(My guess is he's not fully informed on everything, just because there's a lot to be fully informed on, but the degree to which he's showing an understanding of the issue here has me relatively happy – I expect that when it comes time to Actually Policy Wonk on this, he'll connect with the right people and make at least a better-than-average* effort to be informed.)

*This is not a claim that better-than-average would be good enough, just that it's good enough that it doesn't feel correct to conclude that the AI Safety community has utterly failed at marketing.

comment by ialdabaoth · 2018-05-15T22:38:44.209Z · LW(p) · GW(p)

I've been saying this for a while, yeah.

comment by norswap · 2018-05-16T17:44:22.578Z · LW(p) · GW(p)

Does this contribute something besides "yay, we've gone mainstream"?

I do, however, think it's interesting that Kissinger makes in Ur-form what I perceive as some flaws or shortcuts made by those who are convinced that AI risk is the #1 issue. (Not really interested in debating whether it is or not.) Example: AlphaGo is clearly a sign of the end times. I dramatize, of course (although... look at the title), but the real point being made by Kissinger/risk proponents is: AlphaGo is good evidence towards some form of agentful AI risk (i.e., not just algorithms gone wrong, à la the Facebook timeline).

comment by Charlie Steiner · 2018-05-16T06:18:52.360Z · LW(p) · GW(p)

Have people hashed out the cost/benefit on getting government involved recently? I think I'm above average in preferring to have AI safety work done within corporations/nonprofits.

One way government attention could end well, though, is if someone(s) who's actually able to discriminate useful research from chaff (either off-topic or genuinely vapid) is in charge of a decent-sized pool of government grant money. Lack of guidance in a funding agency seems like the sort of barrier one runs into when trying to push on the growth of a field. One of the useful roles of a politically-facing organization would be to keep track of potential opportunities to recommend people for key positions.

Replies from: Raemon
comment by Raemon · 2018-05-16T07:10:54.434Z · LW(p) · GW(p)

I still lean in the "government is probably bad for this" direction, but it makes sense for Kissinger in particular to come at it from a "use government as your hammer" frame.

Replies from: RedMan
comment by RedMan · 2018-05-16T07:27:59.595Z · LW(p) · GW(p)

Can you think of an AI-catalyzed x-risk where technologies that worsen the risk are likely to succeed in the market, while technologies that reduce it are unlikely to succeed due to coordination or capital problems?

If the answer is yes, you need either the government or philanthropy.