Comments

Comment by Kevin Van Horn (kevin-van-horn) on What Evidence Is AlphaGo Zero Re AGI Complexity? · 2017-10-23T23:37:10.557Z · LW · GW

If there are more than a few independent short-term extinction scenarios (from any cause), each with a probability higher than 1%, then we are in trouble -- the probability that at least one of them occurs adds up to a significant probability of doom.
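
To make the "adds up" point concrete, here is a minimal sketch. The count of five scenarios and the flat 1% figure are illustrative assumptions, not numbers from the comment:

```python
# For small, independent risks, the chance that at least one occurs
# is 1 - product(1 - p_i), which is approximately the sum of the p_i.
risks = [0.01] * 5  # five hypothetical independent 1% extinction scenarios

p_none = 1.0
for p in risks:
    p_none *= 1 - p

p_at_least_one = 1 - p_none
print(f"P(at least one) = {p_at_least_one:.4f}")  # ~0.049, close to 5 * 0.01
```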

As far as resources go, even if we threw 100 times the current budget of MIRI at the problem, that would be $175 million (a quick check of these figures follows the list), which is

- 0.005% of the U.S. federal budget,

- 54 cents per person living in the U.S., or

- 2 cents per human being.
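
As a rough check of the per-person figures above, a minimal sketch using approximate 2017-era denominators. The federal-budget and population values are my assumptions, not from the comment:

```python
# Rough sanity check of the $175 million figure against 2017-era denominators.
budget = 175e6              # 100x an assumed ~$1.75M MIRI budget
us_federal_budget = 3.98e12 # approximate FY2017 U.S. federal outlays
us_population = 325e6       # approximate 2017 U.S. population
world_population = 7.5e9    # approximate 2017 world population

print(f"{budget / us_federal_budget:.4%} of the federal budget")       # ~0.0044%, on the order of 0.005%
print(f"{budget / us_population * 100:.0f} cents per U.S. resident")   # ~54 cents
print(f"{budget / world_population * 100:.1f} cents per human being")  # ~2.3 cents
```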

Comment by Kevin Van Horn (kevin-van-horn) on What Evidence Is AlphaGo Zero Re AGI Complexity? · 2017-10-23T18:46:32.101Z · LW · GW

Arguing about the most likely outcome misses the point: when the stakes are as high as the survival of the human race, even a 1% probability of an adverse outcome is very worrisome. So my question to Robin Hanson is this: are you 99% certain that the FOOM scenario is wrong?

Comment by Kevin Van Horn (kevin-van-horn) on There's No Fire Alarm for Artificial General Intelligence · 2017-10-19T20:41:36.792Z · LW · GW

The relevant question is not "How long until superhuman AI?" but "Can we solve the value alignment problem before that time?" The value alignment problem looks very difficult. It probably requires figuring out how to create bug-free software... so I don't expect a satisfactory solution within the next 50 years. Even if we knew for certain that superhuman AI wouldn't arrive for another 100 years, it would make sense to put serious effort into solving the value alignment problem now.