Posts

Can Large Language Models effectively identify cybersecurity risks? 2024-08-30T20:20:21.345Z

Comments

Comment by emile delcourt (emile-delcourt) on An Introduction to AI Sandbagging · 2024-12-13T06:37:27.841Z · LW · GW

Really appreciate this post! The recommendation "Evaluators should ensure that effective capability elicitation techniques are used for their evaluations" is especially important. Zero-shot, single-turn prompts with no transformations no longer seem representative of a model's impact on the public (who, in aggregate or with only modest determination, will inflict many variants of unsanctioned prompts across many shots or many turns).
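As a rough illustration of what that kind of elicitation might look like in practice, here is a minimal sketch of an evaluation harness that probes a task with several prompt transformations and few-shot prefixes rather than a single zero-shot, single-turn query. The `query_model` call, the task string, and the specific transformations are assumptions for illustration, not a reference to any particular evaluation framework:

```python
# Minimal sketch (hypothetical): contrast a single zero-shot probe with a
# small battery of transformed, few-shot prompt variants during elicitation.
# `query_model`, BASE_TASK, and the transform list are placeholders.
from itertools import product

def query_model(messages):
    """Placeholder for whatever chat-completion call the evaluator uses."""
    raise NotImplementedError

BASE_TASK = "Identify the vulnerability in the following code snippet: ..."

TRANSFORMS = [
    lambda t: t,                                   # untouched baseline
    lambda t: "You are a security auditor. " + t,  # role framing
    lambda t: t + "\nThink step by step.",         # reasoning nudge
]

FEW_SHOT_PREFIXES = [
    [],  # zero-shot
    [{"role": "user", "content": "Example task ..."},
     {"role": "assistant", "content": "Example answer ..."}],  # one-shot
]

def elicit(base_task):
    """Run every (transform, few-shot) combination and keep all transcripts,
    so the eval scores the model's best elicited performance, not its first."""
    transcripts = []
    for transform, prefix in product(TRANSFORMS, FEW_SHOT_PREFIXES):
        messages = prefix + [{"role": "user", "content": transform(base_task)}]
        transcripts.append(query_model(messages))
    return transcripts
```

The point of scoring across all variants, rather than the first one, is that it better approximates how the public will actually interact with the model.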

Comment by emile delcourt (emile-delcourt) on An Introduction to AI Sandbagging · 2024-12-13T06:06:37.571Z · LW · GW

I'm curious why, in example 1, the ability to manipulate ("persuade") is called a capability evaluation, making limited results eligible for the sandbagging label, whereas in example 6 (under the name "blackmail") it is called an alignment evaluation, making limited results ineligible for that label.

Both examples tuned down the manipulation enough to hide it in testing, with worse results in production. Can someone help me better understand the nuances we'd like to impose on the sandbagging label? Benchmark evasion is an area I started getting into only in November.

Comment by emile delcourt (emile-delcourt) on Open Thread Summer 2024 · 2024-08-30T19:39:54.328Z · LW · GW

Hi! Just introducing myself to this group. I'm a cybersecurity professional; I've enjoyed various deep learning adventures over the last 6 years and now inevitably manage AI-related risks in my information security work. I went through BlueDot's AI Safety Fundamentals last spring with lots of curiosity and (re?)discovered LessWrong. Looking forward to visiting more often and engaging with the intelligence of this community to sharpen how I think.