Questions for an AGI project

post by whpearson · 2018-02-02T19:45:34.211Z · 1 comment

I've been thinking a bit about what would cause me to support an AGI project. I thought the result might be interesting to others, and I'd welcome suggestions for other risks or questions.

The questions would be about discovering the project's stance on various risks. By stance I mean what the project would do to detect each risk, and how it plans to respond if the risk materialises.

The types of risks I am interested in include, for example, foom: the possibility that an AGI rapidly becomes vastly more capable than expected.

So for foom, the project might do things like capability estimation, where you try to predict how well a component of an AGI will perform at a task before running it. If it turns out to perform vastly better than you expected, or your estimate is that it will do science vastly better than humans straight out of the box, you halt and catch fire: you stop development and do the ethics and philosophy needed to get a good goal straight away.
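To make that tripwire concrete, here is a minimal sketch of what such a check might look like. This is only an illustration of the idea, not anything a real project uses; the names (`should_halt`, `predicted`, `measured`) and the thresholds are hypothetical.

```python
# Hypothetical sketch of the "capability estimation" tripwire described above.
# The thresholds and function names are illustrative assumptions, not a real API.

SURPRISE_FACTOR = 2.0    # halt if measured capability is 2x the prediction
SUPERHUMAN_SCORE = 1.0   # score of 1.0 = human-expert level on this task


def should_halt(predicted: float, measured: float) -> bool:
    """Return True if the component is surprisingly or superhumanly capable."""
    vastly_better_than_expected = measured > predicted * SURPRISE_FACTOR
    superhuman_out_of_the_box = predicted > SUPERHUMAN_SCORE
    return vastly_better_than_expected or superhuman_out_of_the_box


if __name__ == "__main__":
    # Example: we predicted a score of 0.3 but measured 0.9, so we halt
    # and do the ethics/philosophy work before continuing development.
    if should_halt(predicted=0.3, measured=0.9):
        print("Halt and catch fire: capability vastly exceeds the estimate.")
```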

1 comment


comment by whpearson · 2018-02-02T20:45:34.848Z

I suppose there is also the risk that the AGI or IA (intelligence amplification) system is suffering while it helps out humanity.