Eliezer's YU lecture on FAI and MOR [link]

post by Dr_Manhattan · 2013-03-07T16:09:54.710Z · LW · GW · Legacy · 7 comments

http://yucsc.com/ey.aspx

7 comments

comment by Kaj_Sotala · 2013-03-07T17:49:25.015Z · LW(p) · GW(p)

YU = Yeshiva University, apparently.

comment by Qiaochu_Yuan · 2013-03-08T02:19:42.670Z · LW(p) · GW(p)

Summary?

Replies from: Gastogh
comment by Gastogh · 2013-03-08T09:09:23.668Z · LW(p) · GW(p)

I read the first half, skimmed the second, and glanced at a handful of the slides. Based on that, I would say it's mostly introductory material with nothing new for those who have read the sequences. IOW, a summary of the lecture would basically be a summary of a summary of LW.

comment by buybuydandavis · 2013-03-07T19:05:35.789Z · LW(p) · GW(p)

Argues for the folk theorem that, in general, rational agents will preserve their utility functions during self-optimization.

The Gandhi example works because he was posited with one goal. With multiple competing goals, I'd expect some goals to lose and, having lost, to be more likely to lose the next time.
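
A minimal sketch of the preservation argument (my own illustration, not from the lecture; all names and numbers are invented): an agent that scores candidate self-modifications with its current utility function rejects a goal-changing modification, because the outcomes the modified agent would pursue rate poorly under the current goals.

```python
# Toy illustration: score candidate self-modifications with the agent's
# CURRENT utility function. The goal-changing "murder pill" loses because
# the modified agent would bring about outcomes the current goals disvalue.

def current_utility(outcome):
    """Gandhi-style utility: strongly disvalues causing deaths."""
    return -1000 * outcome["deaths_caused"] + outcome["other_value"]

def predicted_outcome(modification):
    """Outcome the agent expects if it adopts the given modification."""
    if modification == "take_murder_pill":    # changes the goal system
        return {"deaths_caused": 1, "other_value": 50}
    if modification == "improve_planning":    # pure capability gain
        return {"deaths_caused": 0, "other_value": 50}
    return {"deaths_caused": 0, "other_value": 0}  # do nothing

candidates = ["take_murder_pill", "improve_planning", "do_nothing"]
best = max(candidates, key=lambda m: current_utility(predicted_outcome(m)))
print(best)  # -> "improve_planning": the goal-preserving modification wins
```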

Replies from: Eliezer_Yudkowsky, shminux
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-03-07T23:21:42.714Z · LW(p) · GW(p)

"Utility functions." Omohundro argues that agents which don't have utility functions will have to acquire them. I'm not totally sure I believe this is a universal law but I suspect that something like it is true in a lot of cases, for reasons like those above.

comment by Shmi (shminux) · 2013-03-07T20:52:25.653Z · LW(p) · GW(p)

The Gandhi example works because he was posited with one goal.

And unchanged circumstances. What would Gandhi do when faced with a trolley problem?

Replies from: RichardHughes
comment by RichardHughes · 2013-03-07T22:10:00.072Z · LW(p) · GW(p)

Same thing as 'multiple competing goals', where those goals are 'do not be part of a causal chain that leads to the death of others' and 'reduce the death of others'.
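
A toy illustration of that trade-off (my own, not from the thread; weights and numbers are invented): with the two goals expressed as weighted terms, whichever goal is weighted more heavily determines the trolley decision, and a goal that keeps losing is one whose weight is effectively dominated.

```python
# Toy illustration of two competing goals in a trolley problem. The decision
# flips depending on how strongly the agent weights "don't cause deaths"
# against "reduce total deaths". Weights are illustrative only.

def score(action, w_no_causing, w_fewer_deaths):
    deaths_caused, total_deaths = {
        "pull_lever": (1, 1),   # you cause one death; one person dies
        "do_nothing": (0, 5),   # you cause none; five people die
    }[action]
    return -w_no_causing * deaths_caused - w_fewer_deaths * total_deaths

for weights in [(5.0, 1.0), (1.0, 5.0)]:
    best = max(["pull_lever", "do_nothing"],
               key=lambda a: score(a, *weights))
    print(weights, "->", best)
# (5.0, 1.0) -> do_nothing   ("don't cause deaths" dominates)
# (1.0, 5.0) -> pull_lever   ("reduce total deaths" dominates)
```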