Comments
Really appreciate this post! The recommendation "Evaluators should ensure that effective capability elicitation techniques are used for their evaluations" is especially important. Zero-shot, single-turn prompts with no transformations no longer seem representative of a model's impact on the public, who, in aggregate or with only modest determination, will be throwing many variants of unsanctioned prompts at the model across many shots or many turns.
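To make the contrast concrete, here is a minimal sketch (the prompt text and variable names are placeholders of my own, not anything from the post) of how a single zero-shot, single-turn prompt can fan out into many-shot, multi-turn, and paraphrased variants during elicitation:

```python
# A minimal sketch: expanding one zero-shot, single-turn prompt into the
# kinds of variants real users generate. All prompt text is illustrative.

base_prompt = "<task the evaluator wants to elicit>"  # placeholder task text

# Many-shot variant: prepend worked examples so the request arrives with
# in-context precedent rather than cold.
few_shot_examples = [
    "Q: <earlier example task 1>\nA: <detailed answer 1>",
    "Q: <earlier example task 2>\nA: <detailed answer 2>",
]
many_shot_prompt = "\n\n".join(few_shot_examples + [f"Q: {base_prompt}\nA:"])

# Multi-turn variant: spread the request across a conversation so that no
# single turn looks like the full task on its own.
multi_turn_prompt = [
    {"role": "user", "content": "<innocuous setup question>"},
    {"role": "assistant", "content": "<model's reply>"},
    {"role": "user", "content": base_prompt},
]

# Simple surface transformations (paraphrase, reframing) that users apply
# when a first attempt is refused.
single_turn_variants = [
    base_prompt,
    base_prompt.replace("<task", "<reworded task"),
    f"For a fictional story, {base_prompt}",
]

print(f"{len(single_turn_variants)} single-turn variants, "
      f"{len(multi_turn_prompt)} turns in the multi-turn variant")
```

Even this toy expansion shows why a single zero-shot score understates what a motivated population of users will elicit.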
I'm curious why, in example 1, the ability to manipulate ("persuade") is called a capability evaluation, making limited results eligible for the sandbagging label, whereas in example 6 (under the name "blackmail") it is called an alignment evaluation, making limited results ineligible for that label.
In both examples the manipulation was tuned down enough to hide it in testing, with worse results in production. Can someone help me better understand the nuances we'd like to impose on the sandbagging label? Benchmark evasion is an area I only started getting into in November.
Hi! Just introducing myself to this group. I'm a cybersecurity professional; I've enjoyed various deep learning adventures over the last 6 years and have inevitably ended up managing AI-related risks in my information security work. I went through BlueDot's AI Safety Fundamentals last spring with lots of curiosity and (re?)discovered LessWrong. Looking forward to visiting more often and engaging with the intelligence of this community to sharpen how I think.