[Link] John Baez interviews Eliezer

post by Morendil · 2011-03-07T07:49:26.124Z · LW · GW · Legacy · 22 comments

First part: "This Week's Finds (Week 311)".

Second part: "This Week's Finds (Week 312)".

22 comments

Comments sorted by top scores.

comment by Psy-Kosh · 2011-03-08T06:17:42.220Z · LW(p) · GW(p)

Interesting interview. Possibly a dumb question, but... in the interview Eliezer said that as soon as they actually solved all the reflective decision theory stuff, they could code the thing.

But... don't they also have to actually solve all the tricky "exactly how to properly implement the CEV idea in the first place" thing? I.e., even given that they've got a way to provably maintain the utility function under self-modification, they still have to give it the right utility function. (Or did I just misunderstand that interview answer?)

(Or has SIAI actually made really good progress on that to the point that the reflective decision theory side of things is the big roadblock now?)

Replies from: XiXiDu
comment by XiXiDu · 2011-03-09T11:23:54.081Z · LW(p) · GW(p)

... in the interview Eliezer said that as soon as they actually solved all the reflective decision theory stuff, they could code the thing.

I second the question: all it needs to get superhuman intelligence is reflective decision theory?

Replies from: Psy-Kosh
comment by Psy-Kosh · 2011-03-09T15:53:53.498Z · LW(p) · GW(p)

I second the question: all it needs to get superhuman intelligence is reflective decision theory?

Well, that plus a way to compute it efficiently (if only approximately) in real-life scenarios, of course.

But my question was more that, even given all of that, I thought there was still work to be done on precisely formulating what exactly it is that it should do.

comment by Risto_Saarelma · 2011-03-07T11:29:22.521Z · LW(p) · GW(p)

The impression I get from that about the amount and level of early-age training serious mathematical competence requires is somewhat terrifying. On the other hand, I do appreciate the interview pointing out the difference between appreciating feats of maximally difficult virtuoso mathematics and using mathematics as a helpful framework for describing things in an exact and unambiguous way.

Replies from: Nisan
comment by Nisan · 2011-03-07T18:15:29.019Z · LW(p) · GW(p)

I should point out that training for high school math competitions is not required for mathematical competence: Some good mathematicians were IMO or Putnam champions, and some have never competed. Competition math is a somewhat special skill.

Of course, acquiring proficiency in math requires work, just like anything else.

comment by endoself · 2011-03-07T21:35:04.985Z · LW(p) · GW(p)

The next installment will presumably be next week. Does anyone know if this is correct? Also, how many weeks will this run for?

Gah . . . the comments on that interview. People are being wrong on the internet and it won't even help to correct them because the issues are too complicated and multiple responses will just make me look fanatical.

Replies from: Normal_Anomaly
comment by Normal_Anomaly · 2011-03-08T02:04:49.616Z · LW(p) · GW(p)

Gah . . . the comments on that interview. People are being wrong on the internet and it won't even help to correct them because the issues are too complicated and multiple responses will just make me look fanatical.

I know what you mean, especially the third comment about tame foxes. Sometimes I forget that not everyone who discusses AI on the internet has read Surface Analogies and Deep Causes.

comment by Paul Crowley (ciphergoth) · 2011-03-07T08:37:18.817Z · LW(p) · GW(p)

I think there must be more coming; it's short, ends abruptly, and opens with:

This week I’ll start an interview

comment by XiXiDu · 2011-03-07T10:10:57.635Z · LW(p) · GW(p)

I intend to plunge into the decision theory of self-modifying decision systems and never look back. (And finish the decision theory and implement it and run the AI, at which point, if all goes well, we Win.) (This Week’s Finds (Week 311))

...

After all, if you had the complete decision process, you could run it as an AI, and I'd be coding it up right now. (Eliezer_Yudkowsky 12 October 2009 06:19:28PM)

Can this be interpreted to mean that Eliezer Yudkowsky believes that he himself, or the SIAI, will not only define friendliness but actually implement it and run a fooming AI to take over the universe? If they really believe that, and if it is likely that they can succeed, I still think that even given a very low probability of them being dishonest, one should seriously consider how it can be guaranteed that the AI they run is actually friendly. Let me ask those of you who believe that the SIAI can succeed: are you not worried at all about unfriendly humans? You just trust their word? That's really weird. Unless I misunderstand what he is saying in those two quotes above, or he is joking, he's actually saying that he'll run a fooming AI.

Replies from: Eliezer_Yudkowsky, NancyLebovitz
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-03-07T16:25:22.153Z · LW(p) · GW(p)

If you want to offer a concrete proposal for verifying the trustworthiness of nine people in a basement, offer it. Otherwise you're just giving people an excuse to get lost in thought and implement the do-nothing option instead of implementing the best time-sensitive policy proposal offered so far.

Replies from: XiXiDu, benelliott
comment by XiXiDu · 2011-03-07T17:14:17.175Z · LW(p) · GW(p)

If you want to offer a concrete proposal for verifying the trustworthiness of nine people in a basement, offer it.

  • Pay independent experts to peer-review your work.
  • Make the finances of the SIAI easily accessible.
  • Openly explain why and for what you currently need more money.
  • Publish progress reports for people to assess how close you are to running a fooming AI.
  • Publish a roadmap, set specific goals, and openly announce success or failure.
  • Devise a plan that allows independent experts to examine a possible seed AI before you run it.

I came up with the above in about 1 minute. You shouldn't even have to ask me how one could verify the trustworthiness of a charity. There are many more obvious ways to approach that problem.

Replies from: Nisan, Normal_Anomaly
comment by Nisan · 2011-03-07T18:21:45.715Z · LW(p) · GW(p)

Those sound like good ideas (except the first one), but they aren't ideas for allaying your fears that SIAI will make an evil AI (except the last one). They are ideas for allaying your fears that SIAI won't put your donation to good use. (Except the last one.)

Replies from: Larks, nerzhin
comment by Larks · 2011-03-07T20:26:04.372Z · LW(p) · GW(p)

Yes, they'd show SIAI is doing something, but not that it's doing the right thing. And a 99% competent SIAI could well be worse than a 0% competent one, if they create a fooming UFAI a few years earlier.

It seems hard to think of anything that would verify that the nine are doing the right thing without risking AGI knowledge leaking out. I'd much sooner take my chances with a bunch of dudes in a basement who at least know there's a problem than with an IBM team who just want moar awesum.

If Friendliness turns out to be largely independent of the AGI bit, I suppose it could be usefully published, both for feedback and to raise awareness, and LW etc. could critique it.

Replies from: Pavitra
comment by Pavitra · 2011-03-08T03:50:53.061Z · LW(p) · GW(p)

The realistic outcomes for humanity are uFAI foom, FAI foom, or extinction by some other means. "Soon" doesn't matter all that much; the only significant question is the probability of an eventual Friendly foom. Those "few years earlier" only matter if someone else would have run a Friendly AGI in those few intervening years.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2011-03-08T10:44:01.066Z · LW(p) · GW(p)

EDITED TO ADD: None of this changes the substance of your article, but just to pick a few nits:

"Foom" refers to a scenario in which we reach superintelligence rapidly enough to take humanity by surprise. That isn't certain - it's imaginable that we could have, say, several years of moderately superhuman intelligence.

Also, while these may be the realistic long-term outcomes, in the short term another possible outcome is a global catastrophe short of extinction, which would slow things down some.

Replies from: Pavitra
comment by Pavitra · 2011-03-08T18:34:43.558Z · LW(p) · GW(p)

I don't think any of that changes the substance of my argument.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2011-03-09T12:05:49.087Z · LW(p) · GW(p)

Sorry, should have been clearer that I was just nitpicking. Will edit.

comment by nerzhin · 2011-03-07T19:13:03.618Z · LW(p) · GW(p)

They are ideas for allaying fears that SIAI is incompetent or worse. Which, since it is devoted to building an AI, would tend to allay fears that it is building an evil one.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-03-07T20:17:10.579Z · LW(p) · GW(p)

Basically incompetent organizations that try to build AI just won't do anything.

comment by Normal_Anomaly · 2011-03-08T02:04:44.745Z · LW(p) · GW(p)

Openly explain why and for what you currently need more money.

I'm especially interested in this. I'm open to the idea that SIAI is the maximally useful charity, but since I don't know why they need the money, I'm currently giving mine to Village Reach.

comment by benelliott · 2011-03-07T16:58:39.556Z · LW(p) · GW(p)

He has

I'm not sure I would personally endorse those possibilities, but let it not be said that he complains without offering solutions.

comment by NancyLebovitz · 2011-03-08T02:38:49.688Z · LW(p) · GW(p)

I don't know whether there's any way to absolutely prove that SIAI will get it right (though I hope that if they come up with a proof of Friendliness, they make it public), but I trust them more than their most likely competitors, which I think would be governments.