Julian Bradshaw's Shortform
post by Julian Bradshaw · 2025-02-11T17:47:54.657Z · 1 comment
comment by Julian Bradshaw · 2025-02-11T17:47:54.655Z
Still-possible good future: there's a fast takeoff to ASI in one lab, contemporary alignment techniques somehow work, that ASI prevents any later unaligned AI from ruining the world, and the ASI provides life and a path for continued growth to humanity (and to shrimp, if you're an EA).
Copium perhaps, and certainly less likely in our race-to-AGI world, but possible. This is something like the “original”, naive plan for AI from before rationalism, but it might be worth remembering as a possibility?