Goal retention discussion with Eliezer 2014-09-04T22:23:22.292Z


Comment by MaxTegmark on Goal retention discussion with Eliezer · 2014-09-05T23:28:30.202Z · LW · GW

Thanks Wei for these interesting comments. Whether humans can "solve" ontological crises clearly depends on one's definition of "solve". Although there's arguably a clear best solution for de Blanc's corridor example, it's far from clear that there is any behavior that deserves being called a "solution" if the ontological update causes the entire worldview of the rational agent to crumble, revealing the goal to have been fundamentally confused and undefined beyond repair. That's what I was getting at with my souls example.

As to what Nick's views are, I plan to ask him about this when I see him tomorrow.

Comment by MaxTegmark on Goal retention discussion with Eliezer · 2014-09-05T23:21:26.857Z · LW · GW

Thanks Eliezer for your encouraging words and for all these interesting comments! I agree with your points, and we clearly agree on the bottom line as well: 1) Building FAI is hard and we’re far from there yet. Sorting out “final goal” issues is part of the challenge. 2) It’s therefore important to further research these questions now, before it’s too late. :-)

Comment by MaxTegmark on Meetup : Nick Bostrom Talk on Superintelligence · 2014-09-04T22:21:46.431Z · LW · GW

This should be awesome, except for the 2-minute introduction that will be given by this annoying Swedish guy (me). Be there or be square! ;-)