Comments

Comment by Tarzan,_me_Jane on Morality as Fixed Computation · 2008-08-08T16:24:50.000Z · LW · GW

Whoops! This system doesn't link to the exact comment. Here's the text quote:

@Eliezer: Sophiesdad, you should be aware that I'm not likely to take your advice, or even take it seriously. You may as well stop wasting the effort.

Comment by Tarzan,_me_Jane on Morality as Fixed Computation · 2008-08-08T16:19:59.000Z · LW · GW

Careful, Lara Foster. See:

questioning eliezer's approach

Comment by Tarzan,_me_Jane on Morality as Fixed Computation · 2008-08-08T13:47:46.000Z · LW · GW

From the SIAI website, presumably by Eliezer: "what makes us think we can outguess genuinely smarter-than-human intelligence?"

Yet we keep having long discussions about what kind of morality to give the smarter-than-human AI. What am I missing?