Very interesting read. Terrifying, in fact.
The process involving an advanced AI described here is very robust and leads predictably to the destruction of humanity, and of much of the biosphere as well. But is strong AI really the root cause here, or merely the premise of a good Hollywood movie?
These scenarios assume, I think wrongly, that an AI attaining a critical level is a requirement for this catastrophe to occur. We animals are largely insensitive to slow change and evolved to react to immediate threats; I think strong AI in these scenarios serves only to make the change perceptible to us. Whether or not AI attains such levels, the scenarios remain valid and generalisable to human agents, human organisations and processes, and political systems.
The same processes, enacted by not-so-slow human beings with not-so-weak mechanical assistants (chainsaws and bulldozers, supertankers and tall chimneys), may lead to similar results, albeit on a different time scale. Even slow exponential growth will hit the ceiling, given time.
From what I understand of the concept of alignment, and to put it provocatively: it aims to ensure that AI experts aren't the ones to blame for the end of the world.
I recognise that alignment deals with a shorter-term danger where AI is involved. Others outside the AI community can take this opportunity to realise that even if the AI folks fix it for AI, all of us need to fix it for the world.
The alignment concept is transferable to human governance. What would it take to identify incentives aligned with general human and biosphere well-being, and to socially engineer our societies accordingly? Reforming the value system away from destructive capital growth towards one that positively reinforces well-being needs some more work, and past reactions to innovative (social) ideas have not always been so positive!
LessWrong is already heading in a new direction, which is hopeful.
Clement Marshall