Comment by Insc Seeker (insc-seeker) on All AGI Safety questions welcome (especially basic ones) [~monthly thread] · 2024-11-17T01:16:19.411Z
Premise: A person with an extreme amount of power who wants to destroy the world would be just as big a problem as an AI with the same amount of power.
Question: Do we just think that an AI will be able to obtain that amount of power more effectively than a human could, and that the ratio of world-destroying AIs to safe AIs is larger than the ratio of world-destroying humans to normal humans?