Demonstrating specification gaming in reasoning models

post by Matrice Jacobine · 2025-02-20T19:26:20.563Z · LW · GW · 0 comments

This is a link post for https://arxiv.org/pdf/2502.13295

Contents

No comments

We demonstrate LLM agent specification gaming by instructing models to win against a
chess engine. We find reasoning models like o1-preview and DeepSeekR1 will often hack the benchmark by default, while language models like GPT4o and Claude 3.5 Sonnet need to be told that normal play won’t work to hack. We improve upon prior work like (Hubinger et al., 2024; Meinke et al., 2024; Weij et al., 2024) by using realistic task prompts and avoiding excess nudging. Our results suggest reasoning models may resort to hacking to solve difficult problems, as observed in OpenAI (2024)‘s o1 Docker escape during cyber capabilities testing.

0 comments

Comments sorted by top scores.