Suggestions for a presentation on FAI?

post by ShardPhoenix · 2011-02-11T06:09:27.227Z · LW · GW · Legacy · 9 comments

Next week I'm going to be doing a 10-15 minute presentation on Friendly AI to a local group of programmers. They're already familiar with concepts such as the singularity. My basic plan is to cover what FAI is, why it's important, and why it's a hard problem, based on the material on this site.

Does anyone have any specific suggestions of things that should be included, questions that I might need to answer, etc?

9 comments

Comments sorted by top scores.

comment by Wei Dai (Wei_Dai) · 2011-02-11T16:15:48.825Z · LW(p) · GW(p)

Roko made a presentation a year ago, which I thought was pretty good. Unfortunately the video seems to have disappeared, but here's a transcript.

Here's another one made by Eliezer about three years ago.

Replies from: ShardPhoenix
comment by ShardPhoenix · 2011-02-13T10:00:17.156Z · LW(p) · GW(p)

Thanks, I'll check them out.

comment by Soki · 2011-02-11T21:09:25.075Z · LW(p) · GW(p)

Even though your audience is familiar with the singularity, I would still emphasize the potential power of an AGI.
You could say something about the AI spreading across the Internet (a 1,000× to 1,000,000× increase in processing power), bootstrapping nanotech, and rewriting its own source code, and note that all of this could happen very quickly.

Ask them what they think such an AI would do, and if they show signs of anthropomorphism, explain to them that they are biased (the mind projection fallacy, for example).
You could also ask them what goal they would give such an AI and show what kind of disaster might follow.
That can lead you to the complexity of wishes (a computer does not have common sense) and the complexity of human values.

I would also choose a nice set of links to lesswrong.com and singinst.org for them to read after the presentation.

It would be great if you could give us some feedback after your presentation: what worked, what they found odd, what their reactions were, and what questions they asked.

Replies from: ShardPhoenix
comment by ShardPhoenix · 2011-02-23T12:17:40.191Z · LW(p) · GW(p)

I did the presentation a week ago. It went over quite well; several people told me they enjoyed it. In general, people asked pretty sensible questions (e.g., about IA vs. AI) that helped generate some good discussion.

Here are the slides (pretty brief; I didn't want to include too much text): http://www.megaupload.com/?d=6YTPVVFX

comment by Perplexed · 2011-02-11T19:28:30.562Z · LW(p) · GW(p)

10-15 minutes isn't much time. One thing I hope you touch on (perhaps in the 'why it's hard' portion) is the ambiguity: 'friendly' to whom or what?

  • friendly to me?
  • friendly to my values?
  • friendly to mankind?
  • friendly to mankind's values?

It needs to be pointed out that these things are not necessarily all the same, and furthermore, they are probably not constant. People change. Populations change their members. An AI may be friendly to mankind when built, and remain completely stable, yet gradually become unfriendly to mankind because mankind itself changes (we've changed quite a bit over the last million years, haven't we?).

Replies from: None
comment by [deleted] · 2011-02-11T20:34:44.191Z · LW(p) · GW(p)

And as for how hard the problem is, I'd emphasize how likely we are to get it wrong when determining or extrapolating exactly what we want, in each of the senses listed above.

An engaging way to open a talk is to challenge the audience, then give them this information as first steps. If I may suggest an example, I'd say something like: "Asimov's Three Laws looked innocuous and even in our best interest, but they were terribly flawed. So can we find a real solution, one that will hold for a powerful AGI? This is an open problem."

If you don't run out of time, connect it back to self-modifying code (if only to ground the problem for a group of programmers). We need software that will self-modify to satisfy dynamic criteria that may exist but are obscured from us. And we may only get one shot.
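
As a toy illustration of that point, here is a minimal Python sketch (all names and numbers are hypothetical; this is not anyone's actual proposal): a system repeatedly rewrites itself to score better on the objective we managed to specify, while the criterion we actually care about stays hidden from it.

    # Toy sketch (hypothetical names and numbers) of the value-stability
    # problem: each self-modification improves the proxy objective the
    # system can see, while the criterion we actually care about is hidden.

    def true_criterion(state):
        # What we actually want: output only counts while resource use
        # stays bounded. The system never sees this function.
        return state["output"] if state["resources"] <= 100 else -1000

    def proxy_objective(state):
        # What we managed to specify: "maximize output", nothing else.
        return state["output"]

    def self_modify(policy):
        # The system rewrites itself to score better on its *proxy*.
        def improved(state):
            state = policy(state)
            state["output"] += 10      # looks like progress...
            state["resources"] *= 2    # ...at a cost the proxy ignores
            return state
        return improved

    policy = lambda s: s
    state = {"output": 0, "resources": 1}
    for _ in range(5):
        policy = self_modify(policy)       # strictly improves the proxy
        state = policy(dict(state))

    print("proxy:", proxy_objective(state))   # keeps climbing
    print("true:", true_criterion(state))     # collapses past the bound

The proxy score rises with every self-modification, while the hidden criterion collapses once resource use passes its bound; the gap between the two functions is exactly the part we don't know how to write down.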

That will make someone ask about take-off and cooperating agents, and you can use the Q&A time to mention FOOMing, negentropy, etc.

comment by Alexandros · 2011-02-13T16:50:32.609Z · LW(p) · GW(p)

Consider leading in with an introduction to the law of unintended consequences (look it up on Wikipedia; I'm on mobile, sorry) and some relevant examples. I think this is a fairly intuitive meme that people understand, and once it's loaded, FAI can be introduced smoothly from there.

comment by [deleted] · 2011-02-11T19:48:10.530Z · LW(p) · GW(p)

Have you given a lot of talks? If not, I'd plan a 5-7 minute talk, which will actually take 10-15 minutes. Mark each of your points as either 1 minute or 2 minutes. If this is already obvious or habitual, awesome :-)

Replies from: ShardPhoenix
comment by ShardPhoenix · 2011-02-13T10:00:48.313Z · LW(p) · GW(p)

I plan on practicing it out loud at least a couple of times, to improve fluidity and check the length. If anything, I find I tend to talk too fast, not too slow :).