Examples of self-fulfilling prophecies in AI alignment?
post by Chipmonk · 2025-03-03T02:45:51.619Z · LW · GW · 1 comment
This is a question post.
In the spirit of Self-fulfilling misalignment data might be poisoning our AI models [AF · GW]: what are historical examples of self-fulfilling prophecies [? · GW] that have affected AI alignment and development?
I've put a few potential examples below to seed discussion.
Answers
Training on Documents About Reward Hacking Induces Reward Hacking [LW · GW]
Situational Awareness and race dynamics? h/t @Jan_Kulveit [LW · GW]
↑ comment by Xavi CF (xavi-cf) · 2025-04-05T22:59:53.208Z · LW(p) · GW(p)
Situational Awareness probably contributed to Project Stargate to some extent. Getting the Republican party to take AI seriously enough for the launch to be announced at the White House is no joke, and it would have been less likely without the essay.
It also started the website-essay meta, which is part of why AI 2027, The Compendium, and Gradual Disempowerment all launched the way they did, so there are knock-on effects too.
Superintelligence Strategy is pretty explicitly trying to be self-fulfilling, e.g. "This dynamic stabilizes the strategic landscape without lengthy treaty negotiations—all that is necessary is that states collectively recognize their strategic situation" (and the paper itself is what popularly argues that this strategic situation exists in the first place).