Maybe if they all have IQ 200+, they'd automatically realize that and work on alignment rather than on capabilities? Or come up with a pivotal act.
With Eliezer going [public](https://x.com/tsarnick/status/1882927003508359242) with the IQ-enhancement push, he at least must think so? (Because if done publicly, it'll initiate an intelligence-enhancement race between the US, China, and other countries; and that would normally lead to an AI-capabilities speed-run, unless the amplified people are automatically wiser than that.)
After reading Pope and Belrose's work, a picture of "lots of good, aligned ASIs already building nanosystems and better computing infrastructure" has solidified in my mind. On that picture, any accidentally or purposefully created misaligned AIs wouldn't stand a chance of long-term competitive existence against the existing ASIs. Yet those misaligned AIs might still be able to destroy the world via nanosystems, since we wouldn't yet trust the existing AIs with the herculean task of protecting our dear nature against invasive nanospecies and the like. Byrnes voiced similar concerns in his point 1 against Pope & Belrose.
Assuming AIs don't soon come up with even better crypto/decentralization solutions: I hadn't considered that the objection "smart contracts are too complicated, and thus insecure" might no longer hold once AI assistants and cyber-protection scale up. Especially with ZK proofs, a natural language for AIs.