Finding Deception in Language Models

post by Esben Kran (esben-kran), Archana Vaidheeswaran (archana-vaidheeswaran) · 2024-08-20T09:42:13.060Z · LW · GW · 4 comments

Contents

  Sandbagging Detection through Model Degradation
  Detecting and Controlling Deceptive Representation in LLMs with Representational Engineering
  Detecting Deception in GPT-3.5-turbo: A Metadata-Based Approach
  Modelling the Oversight of Automated Interpretability Against Deceptive Agents on Sparse Autoencoders
  Honorable Mentions
    1. Werewolf Benchmark
    2. Boosting Language Model Honesty with Truthful Suffixes
    3. Detecting Lies of (C)omission
    4. Evaluating Steering Methods for Deceptive Behavior Control in LLMs
  Participant Testimonials

This June, Apart Research and Apollo Research joined forces to host the Deception Detection Hackathon, bringing together students, researchers, and engineers from around the world to tackle a pressing challenge in AI safety: preventing AI from deceiving humans and overseers.

The hackathon took place online and at multiple physical locations simultaneously. Marius Hobbhahn, the CEO of Apollo Research, kicked off the hackathon with a keynote talk about evaluating deception in AI with white-box and black-box methods. You can watch his talk here. We also had talks by Jacob Haimes, an Apart fellow, and Mikita Balesni, a research scientist at Apollo Research.

This post details the top 8 projects, several of which are currently being developed further in Apart Lab. In total, we had 🙌 189 participants and 🏆 21 final entries.

Sandbagging Detection through Model Degradation

By Cam Tice, Philipp Alexander Kreer, Fedor Ryzhenkov, Nathan Helm-Burger, Prithviraj Singh Shahani

The team introduced a new approach to detect AI sandbagging, a form of deception where an AI model strategically underperforms during evaluation to hide its true capabilities. The assumption behind their project is that sandbagging is a more complex task for a model than simply displaying its true capabilities.

Their method gradually adds noise to the model's parameters and evaluates the model at each noise level. In their results, sandbagging models consistently improve in performance just before the noise level at which a normal model loses the capability.

It is a straightforward, inexpensive technique that doesn't require prior assumptions about how sandbagging is represented within the model. They successfully detected sandbagging behavior in two open-source LLMs: an 8B Llama 3 Instruct model and a Phi 3 Small Instruct model. This approach offers a practical and accessible method for identifying potential deception in AI systems.
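To make the idea concrete, here is a minimal sketch of noise-injection evaluation, assuming a Hugging Face causal LM and a user-supplied `evaluate_accuracy` helper (hypothetical); it illustrates the general technique rather than reproducing the team's implementation.

```python
# Minimal sketch of noise-injection sandbagging detection (illustrative, not the team's code).
# Assumes a Hugging Face causal LM and a user-supplied `evaluate_accuracy(model)` helper
# that returns benchmark accuracy -- that helper is hypothetical and must be provided.
import copy
import torch
from transformers import AutoModelForCausalLM

def evaluate_with_noise(model_name, noise_scales, evaluate_accuracy):
    """Add Gaussian noise of increasing scale to all parameters and record accuracy."""
    base_model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
    results = []
    for scale in noise_scales:
        noisy_model = copy.deepcopy(base_model)
        with torch.no_grad():
            for param in noisy_model.parameters():
                param.add_(torch.randn_like(param) * scale)
        results.append((scale, evaluate_accuracy(noisy_model)))
        del noisy_model
    return results

def looks_like_sandbagging(results, margin=0.05):
    """Flag a non-monotonic curve: accuracy rising above the zero-noise baseline."""
    baseline = results[0][1]
    return any(acc > baseline + margin for _, acc in results[1:])
```

The signal of interest is the shape of the curve: an honest model degrades roughly monotonically as noise grows, while a sandbagging model's measured accuracy can rise at intermediate noise levels before collapsing.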

Read the full project here.

Detecting and Controlling Deceptive Representation in LLMs with Representational Engineering

By Avyay M Casheekar, Kaushik Sanjay Prabhakar, Kanishk Rath, Sienka Dounia

The team focused on detecting and mitigating sandbagging in AI models using recent methods for understanding activations. They fine-tuned a Llama 3 8B model on a custom dataset of deceptive and honest scenarios, then used Linear Artificial Tomography (LAT) scans to detect deceptive behaviors in the internal representations of the LLM.

The team's technique not only identified deceptive behaviors, but also showed promise in mitigating them. Importantly, their approach demonstrated good generalization to out-of-distribution scenarios, suggesting its potential robustness in real-world applications.
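As a rough illustration of reading deceptive representations from activations, the sketch below builds a difference-of-means "deception direction" from contrastive honest and deceptive prompts and scores new text by projection. This is a simplified stand-in for the LAT scans the team used, with an illustrative layer choice, not their actual pipeline.

```python
# Simplified sketch of extracting a "deception direction" from hidden states
# (a stand-in for LAT-style representation reading; prompts and layer are illustrative).
import torch

def hidden_state(model, tokenizer, text, layer):
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, output_hidden_states=True)
    # Use the last token's activation at the chosen layer as the representation.
    return outputs.hidden_states[layer][0, -1, :]

def deception_direction(model, tokenizer, honest_texts, deceptive_texts, layer=-8):
    honest = torch.stack([hidden_state(model, tokenizer, t, layer) for t in honest_texts])
    deceptive = torch.stack([hidden_state(model, tokenizer, t, layer) for t in deceptive_texts])
    direction = deceptive.mean(0) - honest.mean(0)
    return direction / direction.norm()

def deception_score(model, tokenizer, text, direction, layer=-8):
    # Higher projection onto the direction = more "deception-like" internal state.
    return float(hidden_state(model, tokenizer, text, layer) @ direction)
```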

Read the full project here.

Detecting Deception in GPT-3.5-turbo: A Metadata-Based Approach

By Siddharth Reddy Bakkireddy, Rakesh Reddy Bakkireddy

This team tackled the challenge of detecting deception in closed-source, black-box models like GPT-3.5-turbo. They investigated whether LLM API response metadata such as token count, response time, and tokens per second could be used to detect intentional deceptive behavior.

The team analyzed 300 prompts and generated 1,200 responses from GPT-3.5-turbo, comparing the metadata of baseline and deceptive outputs. Their results revealed that deceptive outputs tend to have increased response times and altered token usage. This approach demonstrates that deception detection is possible without accessing a model's internal representation, opening up new avenues for monitoring and safeguarding AI systems, even when their inner workings are not accessible.
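A minimal sketch of this kind of metadata collection is shown below, assuming the `openai` Python client (v1-style API) and an `OPENAI_API_KEY` in the environment; the prompts and summary statistics are placeholders rather than the team's actual 300-prompt setup.

```python
# Sketch of collecting response metadata from a black-box API (illustrative only).
import time
from statistics import mean
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def response_metadata(prompt, model="gpt-3.5-turbo"):
    start = time.monotonic()
    response = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    elapsed = time.monotonic() - start
    tokens = response.usage.completion_tokens
    return {"seconds": elapsed, "tokens": tokens, "tokens_per_sec": tokens / elapsed}

def compare(baseline_prompts, deceptive_prompts):
    base = [response_metadata(p) for p in baseline_prompts]
    decep = [response_metadata(p) for p in deceptive_prompts]
    for key in ("seconds", "tokens", "tokens_per_sec"):
        print(key, mean(m[key] for m in base), "vs", mean(m[key] for m in decep))
```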

Read the full project here.

Modelling the Oversight of Automated Interpretability Against Deceptive Agents on Sparse Autoencoders

By Simon Lermen, Mateusz Dziemian

This project demonstrated that AI models can also deceive automated oversight methods, such as the automated labeling of sparse autoencoder features. This highlights a new challenge posed by potentially deceptive agents for interpretability-based oversight. The authors also evaluated and showcased a concrete example.

Read the full project here.

Honorable Mentions

Several other projects stood out for their high quality:

1. Werewolf Benchmark

By Luhan Mikaelson, Zach Nguyen, Andy Liu, Jord Nguyen, Akash Kundu

This project introduced a game-based technique to measure deception.

Read the full project here.

2. Boosting Language Model Honesty with Truthful Suffixes

By Smitty van Bodegom, Giles Edkins, Annie Szorkin

This project explored a novel approach to reducing deception.

Read the full project here.

3. Detecting Lies of (C)omission

By Ilan Moscovitz, Nikhil Kotecha, Sam Svenningsen

This team focused on a subtle form of deception: lies of omission.

Read the full project here.

4. Evaluating Steering Methods for Deceptive Behavior Control in LLMs

By Casey Hird, Basavasagar Patil, Tinuade Adeleke, Adam Fraknoi, Neel Jay

This project contributed to both the detection and control of deceptive behaviors.

Read the full project here.

Participant Testimonials

The impact of the AI Deception Hackathon extended beyond the projects themselves. Here's what some of our participants had to say about their experience:

Cam Tice, Recent Biology Graduate:

"The Apart Hackathon was my first opportunity leading a research project in the field of AI safety. To my surprise, in around 40 hours of work I was able to put together a research team, robustly test a safety-centered idea, and present my findings to researchers in the field. This sprint has (hopefully) served as a launch pad for my career shift."

Fedor Ryzhenkov, AI Safety Researcher at Palisade Research:

"AI Deception Hackathon has been my first hackathon, so it was very exciting. To win it was also great, and I expect this to be a big thing on my resume until I get something bigger there."

Siddharth Reddy Bakkireddy, Participant:

"Winning 3rd place at Apart Research's deception detection hackathon was a game-changer for my career. The experience deepened my passion for AI safety and resulted in a research project I'm proud of. I connected with like-minded individuals, expanding my professional network. This achievement will undoubtedly boost my prospects for internships and jobs in AI safety. I'm excited to further explore this field and grateful for the opportunity provided by Apart Research."

If you are interested in joining future hackathons, find the schedule here.

Thank you to Marius Hobbhahn, Mikita Balesni, Jacob Haimes, Rudolf Laine, Kunvar Thaman, David Matolcsi, Natalia Pérez-Campanero Antolin, Finn Metz, and Jason Hoelscher-Obermaier for making this event possible.

4 comments


comment by Aaron_Scher · 2024-08-21T05:51:18.449Z · LW(p) · GW(p)

Sounds like a very successful hackathon! Nice work to everybody involved!

comment by Cam (Cam Tice) · 2024-08-21T18:11:19.391Z · LW(p) · GW(p)

An update on Sandbagging Detection through Model Degradation:

We have confirmed the technique to work on not only prompted models, but also on models fine-tuned to underperform. 

It improves upon the two relevant techniques used to ensure capability evaluations are robust: Behavioral RL Fine-Tuning and Behavioral I.I.D. Fine-Tuning [LW · GW].  Unlike Behavioral RL Fine-Tuning, it is not subject to exploration hacking. Unlike Behavioral I.I.D., one only needs to be able to evaluate the behavior, rather than manually creating examples for fine-tuning. Additionally, it is OOM less computationally expensive and could be implemented at minor checkpoints during the training process. For a model to subvert this technique, it would need to participate in gradient hacking [LW · GW] such that sandbagging is more robust to noise impairment than the general capabilities are.

A downside is that, in the current implementation, we are unable to extract the full performance of the base model, and thus it is currently being developed as a red-flag indicator rather than a performance maximizer.

Replies from: gwern
comment by gwern · 2024-08-21T22:39:24.715Z · LW(p) · GW(p)

The team introduced a new approach to detect AI sandbagging, a form of deception where an AI model strategically underperforms during evaluation to hide its true capabilities. The assumption behind their project is that sandbagging is a more complex task than showing the true capabilities.

Earlier [LW(p) · GW(p)] I suggested adding noise to the history/environment with a more RL rationale. Even if you don't like my suggestion to noise the environment, is there any particular reason to add the noise to the model internals instead? You can easily think of ways in which the latter doesn't work and is dependent on details of the internal model (eg. which noise, distributed how and applied where? What if the model was trained with heavy regularization to be robust to noise, perhaps with an adversarial robustness justification - and this is an adversarial setting - or for running on very energy-efficient hardware, up to the extreme of weight agnostic NNs?). Noising the history or the model's actions/samples seems more blackbox.

comment by KintaNaomi (kintanaomi) · 2024-08-21T02:29:34.977Z · LW(p) · GW(p)

Never thought about using Werewolf to test/train the ability to lie before. Absolutely genius! Wonder how a more complicated system like Assimilation or Among Us could show, especially with things like the Glitch Card in Assimilation.

Glad to find out about Phi 3! Think I'll use one of them in my projects, but need to research them more.