FAI PR tracking well [link]

post by Dr_Manhattan · 2014-08-15T21:23:48.622Z · LW · GW · Legacy · 7 comments

This time, it's by "The Editors" of Bloomberg View (which carries significant weight in the news world). The content is a very reasonable explanation of AI concerns, though not novel to this audience.

http://www.bloombergview.com/articles/2014-08-10/intelligent-machines-scare-smart-people

Directionally this is definitely positive, though I'm not quite sure how. Does anyone have ideas? Perhaps one of the orgs (MIRI, FHI, CSER, FLI) could reach out and say hello to the editors?

comment by shminux · 2014-08-15T21:45:37.770Z · LW(p) · GW(p)

Directionally this is definitely positive, though I'm not quite sure how. Does anyone have ideas?

I'm guessing that it's the high-status people like Hawking and Musk speaking out that makes a difference. For the general public, including journalists, the names Bostrom, Hanson, and Yudkowsky don't ring a bell. The next useful step would be one of these high-status people publicly endorsing MIRI's work. There is some of that happening, but not quite in the mainstream media.

Replies from: lukeprog
comment by lukeprog · 2014-08-15T23:06:20.452Z · LW(p) · GW(p)

The next useful step would be one of these high-status people publicly endorsing MIRI's work.

Yes, though it's not surprising that this should come later and less universally than a more general endorsement of devoting thought to the problem, or of Superintelligence specifically, since MIRI's current research agenda is narrower and harder to understand than either.

Right now it's an organizational priority for us to summarize MIRI's current technical results and open problems, and to explain why we're choosing these particular problems, in a document that has been audience-tested on lots of smart young CS folk who aren't already MIRI fans. That will make it much easier for prestigious CS academics to evaluate MIRI as a project, and some of them (I expect) will endorse it to varying degrees.

But there will probably be a lag of several years between prestigious people endorsing Superintelligence or general concern for the problem, and prestigious people endorsing MIRI's particular technical agenda.

Replies from: shminux
comment by shminux · 2014-08-16T00:45:04.800Z · LW(p) · GW(p)

But there will probably be a lag of several years between prestigious people endorsing Superintelligence or general concern for the problem, and prestigious people endorsing MIRI's particular technical agenda.

I suspect that it would go something like

  1. Superintelligence is dangerous
  2. Better make sure it's safe
  3. But Gödel (/Löb) says you can never be 100% sure
  4. But MIRI has shown that you can be very, very sure

Replies from: lukeprog
comment by lukeprog · 2014-08-16T02:01:46.800Z · LW(p) · GW(p)

I can't read tone in text so... are statements #3 and #4 jokes? I mean, I straightforwardly disagree with them.

Replies from: shminux
comment by shminux · 2014-08-18T06:36:56.195Z · LW(p) · GW(p)

It's much more likely that I misunderstand something basic about what MIRI does.

Replies from: lukeprog
comment by lukeprog · 2014-08-18T06:44:51.291Z · LW(p) · GW(p)

Okay, fair enough. To explain briefly:

I disagree with (3) because the Löbian obstacle is just an obstacle to a certain kind of stable self-modification in a particular toy model; it says nothing about what kinds of safety guarantees you can have for superintelligences in general.
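
For reference, a minimal sketch of the theorem behind (3), in the tiling-agents toy-model framing (a paraphrase with illustrative notation, not the exact formalism from that work):

  % Löb's theorem (one instance per sentence phi of a theory T containing
  % enough arithmetic, with provability predicate Box_T): if T proves
  % "Box_T(phi) -> phi", then T already proves phi.
  \[
    T \vdash \bigl( \Box_T \ulcorner \varphi \urcorner \rightarrow \varphi \bigr)
    \;\Longrightarrow\;
    T \vdash \varphi
  \]
  % Rough shape of the obstacle: an agent reasoning in T would like the
  % blanket trust principle
  \[
    \Box_T \ulcorner \mathrm{Safe}(a) \urcorner \rightarrow \mathrm{Safe}(a)
  \]
  % ("if my successor, also reasoning in T, proves action a is safe, then
  % a is safe") to hold as a theorem of T for actions it hasn't checked
  % itself. By the schema above, T proves that implication only when it
  % already proves Safe(a) directly, so deferring to the successor's
  % proofs buys nothing.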

I disagree with (4) because MIRI hasn't shown that there are ways to make a superintelligence 90% or more likely (in a subjective Bayesian sense) to be stably friendly. I don't expect us to have shown that in another 20 years, and plausibly never.

Replies from: shminux
comment by shminux · 2014-08-18T07:01:50.870Z · LW(p) · GW(p)

Thanks! I guess I was unduly optimistic. Comes with being a hopeful but ultimately clueless bystander.