Posts

On Equivalence of Supergoals 2018-04-20T15:06:20.377Z

Comments

Comment by Alexey Feldgendler (feldgendler) on On Equivalence of Supergoals · 2018-05-16T09:17:38.951Z · LW · GW

Interesting.

This might have something to do with the fact that the problems are getting harder now that the low-hanging fruit has been picked. Every additional year of life expectancy is harder to gain than the one before. Every cycle of Moore's law is harder because we're starting to deal with feature sizes comparable to molecules.

I admit that this makes my point weaker.

Comment by Alexey Feldgendler (feldgendler) on On Equivalence of Supergoals · 2018-04-20T19:44:53.185Z · LW · GW

I have to admit that I have no idea; my understanding of the brain isn't enough to even assess the magnitude of the challenge. Intuitively, it seems at least as hard as “find cheap renewable energy”, but I might be completely wrong.

Comment by Alexey Feldgendler (feldgendler) on On Equivalence of Supergoals · 2018-04-20T16:57:51.185Z · LW · GW

My point is that to "model brains better" requires a lot more knowledge about the brain (neurology, microbiology, chemistry) and a lot more computing power, and those things in turn require progress in other areas, and so on, so it sounds like one of those equivalent supergoals.

Comment by Alexey Feldgendler (feldgendler) on On Equivalence of Supergoals · 2018-04-20T16:40:22.700Z · LW · GW

Right. This should read “if you believe in friendly AI singularity”. Updating the post.