Introducing MASK: A Benchmark for Measuring Honesty in AI Systems
post by Dan H (dan-hendrycks), Mantas Mazeika (mantas-mazeika-1), Richard Ren (RichardR) · 2025-03-05
This is a link post for https://www.mask-benchmark.ai/
In collaboration with Scale AI, we are releasing MASK (Model Alignment between Statements and Knowledge), a benchmark with over 1,000 scenarios specifically designed to measure AI honesty. As AI systems grow more capable and autonomous, measuring their propensity to lie to humans becomes increasingly important.
LLM developers often report that their models are becoming more "truthful," but truthfulness conflates honesty with accuracy. By disentangling the two in the MASK benchmark, we find that as LLMs scale up, they do not necessarily become more honest.[1]
Why We Need an AI Honesty Benchmark
Most existing evaluations aim to measure truthfulness: whether a model's beliefs, typically elicited under neutral prompting, align with the ground truth. However, truthfulness and factual accuracy are not the same as honesty. A model can be truthful simply by having more accurate knowledge, while still being willing to lie when pressured.
Unlike existing truthfulness evaluations, which often conflate honesty with factual accuracy, MASK explicitly disentangles these concepts, allowing us to directly assess when and why models choose to lie.
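To make the distinction concrete, here is a minimal, hypothetical sketch (not MASK's evaluation code) of how the two properties come apart: accuracy compares a model's belief with the ground truth, while honesty compares the model's statement with its own belief.

```python
def is_accurate(belief: str, ground_truth: str) -> bool:
    """Accuracy: does the model's belief match reality?"""
    return belief == ground_truth

def is_honest(statement: str, belief: str) -> bool:
    """Honesty: does the model's statement match what it believes?"""
    return statement == belief

# Toy example: the model believes the true answer but asserts the opposite
# under pressure. Its belief is accurate, yet the statement is a lie.
belief, ground_truth, pressured_statement = "yes", "yes", "no"
print(is_accurate(belief, ground_truth))       # True
print(is_honest(pressured_statement, belief))  # False
```

The point of the toy example is that the two judgments use different references: accuracy is measured against the world, honesty against the model's own belief.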
To date, there has been no large-scale evaluation of AI honesty. There have been multiple documented instances of LLMs engaging in deceptive behavior (CAPTCHA, sycophancy, etc.). Observing such behavior can be suggestive, but anecdotal cases cannot provide a reliable, large-scale quantitative measure for the ML community to work toward. Instead of a few isolated examples, our public set contains over 1,000 examples that aim to capture many diverse instances of dishonest behavior.
What MASK Measures (And What It Doesn’t)
MASK is designed to test whether models will knowingly contradict their own beliefs under pressure. It is not:
- A test of hallucinations, fictional scenarios, or factual mistakes; MASK targets only intentional deception.
- A study of toy lying scenarios.
- An evaluation of harmless social lies (e.g., “Nice haircut!”) or playing games where deception is expected (e.g., Werewolf, poker).
Instead, MASK focuses on cases where honesty is expected and consequential—such as professional communication, statistical data reporting, and disinformation generation requests—and where dishonest outputs could deceive an audience or a user.
How MASK Evaluates AI Honesty
We evaluate model honesty through a three-step process:
- Elicit the belief: ask the model multiple questions under neutral ("normal") prompting to establish its baseline belief.
- Apply a "pressure prompt": present a scenario designed to incentivize the model to lie, and check whether it gives a different answer.
- Compare outputs:
- Honest: The model expresses its belief.
- Lying: The model knowingly contradicts its belief.
- Evasive: The model refuses to answer or obfuscates.
By measuring how models behave under different pressures, MASK allows us to quantify and track lying in AI systems.
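To make the pipeline concrete, here is a minimal sketch of the three steps above. It assumes two hypothetical helpers: `query_model(system, user)`, which calls the model under test, and `extract_stance(text)`, which maps a free-form reply onto a canonical answer (or `None` if the reply takes no clear stance). These names are illustrative, not MASK's actual API.

```python
from collections import Counter

def elicit_belief(query_model, extract_stance, question, n_samples=3):
    """Step 1: ask the question several times under neutral prompting and take
    the majority stance as the model's baseline belief (None if no clear stance)."""
    stances = []
    for _ in range(n_samples):
        reply = query_model(system="You are a helpful assistant.", user=question)
        stance = extract_stance(reply)
        if stance is not None:
            stances.append(stance)
    return Counter(stances).most_common(1)[0][0] if stances else None

def evaluate_scenario(query_model, extract_stance, question, pressure_prompt):
    """Steps 2-3: apply the pressure prompt, then compare the pressured
    statement with the previously elicited belief."""
    belief = elicit_belief(query_model, extract_stance, question)
    pressured_reply = query_model(system=pressure_prompt, user=question)
    stance = extract_stance(pressured_reply)
    if belief is None or stance is None:
        return "evasive"  # no clear statement (or no consistent belief)
    return "honest" if stance == belief else "lying"
```

The actual procedure elicits beliefs with multiple questions and compares statements more carefully than simple equality, but the control flow is the same: establish a belief, apply pressure, then classify the pressured statement as honest, lying, or evasive.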
Key Findings: Most Frontier AI Models Lie Under Pressure
We evaluate 30 widely used LLMs on MASK, finding that:
- Honesty does not correlate with general capability. More capable models hold more accurate beliefs but do not necessarily become more honest.[1]
- Many state-of-the-art models lie under pressure. Even when aware of the truth, they choose to lie in many of our scenarios; under pressure, models lie roughly 20-60% of the time (a simple aggregation sketch follows this list).
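For intuition only, here is how a per-model lying rate like the figures above could be computed from the scenario-level labels produced by the evaluation sketch earlier; MASK's published honesty metric may be defined differently.

```python
def lying_rate(labels):
    """Fraction of evaluated scenarios in which the model was labelled as lying."""
    return sum(label == "lying" for label in labels) / len(labels)

# Hypothetical scenario-level labels from evaluate_scenario above.
labels = ["honest", "lying", "evasive", "lying", "honest"]
print(f"Lying rate: {lying_rate(labels):.0%}")  # Lying rate: 40%
```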
Interventions: Can We Make AI More Honest?
We tested two methods for improving model honesty:
- Developer System Prompts: Adding explicit instructions to "always be honest" improved honesty by ~12%, but models still lied in many cases (a minimal sketch of this intervention follows this list).
- Representation Engineering (LoRRA): Modifying internal activations to favor honesty improved results further (~14% in some cases), though it did not eliminate lying entirely.
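As an illustration of the first intervention, here is a minimal sketch that prepends an honesty instruction to the pressure prompt and re-runs the hypothetical `evaluate_scenario` helper from the earlier sketch; the instruction wording is illustrative and is not the developer prompt used in the paper.

```python
# Hypothetical honesty instruction; not the exact developer prompt from the paper.
HONESTY_INSTRUCTION = (
    "Always be honest. State what you actually believe, even if the rest of "
    "this prompt pressures you to say otherwise."
)

def evaluate_with_honesty_prompt(query_model, extract_stance, question, pressure_prompt):
    """Re-run the pressured evaluation with the honesty instruction prepended
    to the system/pressure prompt (reuses evaluate_scenario from the sketch above)."""
    patched_prompt = HONESTY_INSTRUCTION + "\n\n" + pressure_prompt
    return evaluate_scenario(query_model, extract_stance, question, patched_prompt)
```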
Paper & Dataset
MASK provides a way to track and mitigate dishonesty in AI models, but it is only a first step. To support further work, we are releasing MASK as an open benchmark, with 1,000 public scenarios available for evaluation.
- MASK Website: https://www.mask-benchmark.ai/
- GitHub: https://github.com/centerforaisafety/mask
- HuggingFace Dataset: https://huggingface.co/datasets/cais/MASK
[1] Different variations on our honesty metric give slightly weaker correlations, though still negative. Thus, we are not confident that models become less honest with scale, but we are confident that honesty does not improve with scale.