Posts

Frontier Models are Capable of In-context Scheming 2024-12-05T22:11:17.320Z

Comments

Comment by Bronson Schoen (bronson-schoen) on A shortcoming of concrete demonstrations as AGI risk advocacy · 2024-12-12T00:27:05.293Z · LW · GW

I am more optimistic that we can get such empirical evidence for at least the most important parts of the AI risk case, like deceptive alignment, and here's one reason on offer:

Can you elaborate on what you were pointing to in the linked example? I've seen a few people mention that specific thread recently, but I seem to be missing the conclusion they're drawing from it.

Comment by Bronson Schoen (bronson-schoen) on A shortcoming of concrete demonstrations as AGI risk advocacy · 2024-12-11T23:55:48.645Z · LW · GW

crisis-mongering about risk when there is no demonstration/empirical evidence to ruin the initially perfect world pretty immediately

I think the key point of this post is precisely the question: "is there any such demonstration, short of the actual very bad thing happening in a real setting, that people who discount these as serious risks would accept as empirical evidence worth updating on?"