Does it become easier, or harder, for the world to coordinate around not building AGI as time goes on?
I'm a trained rationalist, and everything I've previously read about AI being an existential risk was bullshit.
But I know the LessWrong community (which I respect) is involved in AI risk.
So where can I find a concise, exhaustive list of all the sound arguments for and against AGI being an existential risk?
If no such curated list exists, do people really care about the potential issue?
I would like to update my belief about the risk.
But I suspect that most people talking about AGI risk don't have enough knowledge about what technically constitutes an AGI.
I'm currently building an AGI that aims to understand natural language and to answer questions optimally, internally satisfying a hard-coded utilitarian, effective-altruist goal system.
The AGI takes natural language as input and outputs natural language text.
That's it. How text could be an existential risk remains to be answered...
There's no reason to give an AGI effectors; just asking it for knowledge and optimal decisions would be sufficient to revolutionize human well-being (e.g., optimal policies), and the output would be analysed by rational humans, guarding against AGI mistakes.
As for thinking that an AGI will become self-conscious, this is nonsense, and I would be fascinated to be proven otherwise.