Eliezer's YU lecture on FAI and MOR [link]

post by Dr_Manhattan · 2013-03-07T16:09:54.710Z · score: 2 (9 votes) · LW · GW · Legacy · 7 comments

http://yucsc.com/ey.aspx

comment by Kaj_Sotala · 2013-03-07T17:49:25.015Z · score: 8 (8 votes) · LW · GW

YU = Yeshiva University, apparently.

comment by Qiaochu_Yuan · 2013-03-08T02:19:42.670Z · score: 6 (8 votes) · LW · GW

Summary?

comment by Gastogh · 2013-03-08T09:09:23.668Z · score: 6 (6 votes) · LW · GW

I read the first half, skimmed the second, and glanced at a handful of the slides. Based on that, I would say it's mostly introductory material with nothing new for those who have read the sequences. IOW, a summary of the lecture would basically be a summary of a summary of LW.

Argues for the folk theorem that, in general, rational agents will preserve their utility functions during self-optimization.

The Gandhi example works because he was posited with one goal. With multiple competing goals, I'd expect some goals to lose, and, having lost, be more likely to lose the next time.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-03-07T23:21:42.714Z · score: 1 (1 votes) · LW · GW

"Utility functions." Omohundro argues that agents which don't have utility functions will have to acquire them. I'm not totally sure I believe this is a universal law but I suspect that something like it is true in a lot of cases, for reasons like those above.
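The usual intuition behind Omohundro's claim is the money-pump argument: an agent whose preferences are cyclic rather than expressible as a utility function can be charged a small fee on each "upgrade" trade and driven around its preference cycle indefinitely. A minimal sketch of that pressure (illustrative names and structure, not from the lecture):

```python
# Money-pump sketch: an agent with cyclic preferences (A > B, B > C, C > A)
# will pay to trade around the cycle forever, which is the standard argument
# for why such agents face pressure toward consistent utility functions.

# Cyclic preference relation: each key is strictly preferred over its value.
CYCLE = {"A": "C", "B": "A", "C": "B"}

def prefers(x, y):
    """True if the agent would pay a small fee to swap y for x."""
    return CYCLE.get(x) == y

def money_pump(start, fee, rounds):
    """Trade the agent around its preference cycle, charging `fee` per swap."""
    holding, paid = start, 0.0
    for _ in range(rounds):
        # Offer whichever item the agent prefers over what it currently holds.
        offer = next(x for x in CYCLE if prefers(x, holding))
        holding, paid = offer, paid + fee
    return holding, paid

item, total = money_pump("A", fee=1.0, rounds=6)
print(item, total)  # after 6 swaps the agent holds "A" again, 6.0 poorer
```

The point is that the agent ends up exactly where it started while having paid for every trade, so either it stops being exploitable this way (i.e., its preferences become acyclic, hence representable by a utility function) or it bleeds resources.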

comment by shminux · 2013-03-07T20:52:25.653Z · score: -2 (4 votes) · LW · GW

The Gandhi example works because he was posited with one goal.

And unchanged circumstances. What would Gandhi do when faced with a trolley problem?

comment by RichardHughes · 2013-03-07T22:10:00.072Z · score: -1 (3 votes) · LW · GW

Same thing as 'multiple competing goals', where those goals are 'do not be part of a causal chain that leads to the death of others' and 'reduce the death of others'.