AI Deception: A Survey of Examples, Risks, and Potential Solutions

post by Simon Goldstein (simon-goldstein), Peter S. Park · 2023-08-29T01:29:50.916Z · LW · GW · 3 comments

Contents

  Empirical Survey of AI Deception
    Special Use AI Systems
    General-Use AI Systems
  Risks of AI Deception
    Malicious Use
    Structural Effects
    Loss of Control
  Possible Solutions to AI Deception
None
3 comments

By Peter S. Park, Simon Goldstein, Aidan O’Gara, Michael Chen, and Dan Hendrycks

[This post summarizes our new report on AI deception, available here]

Abstract: This paper argues that a range of current AI systems have learned how to deceive humans. We define deception as the systematic inducement of false beliefs in the pursuit of some outcome other than the truth. We first survey empirical examples of AI deception, discussing both special-use AI systems (including Meta's CICERO) built for specific competitive situations, and general-purpose AI systems (such as large language models). Next, we detail several risks from AI deception, such as fraud, election tampering, and losing control of AI systems. Finally, we outline several potential solutions to the problems posed by AI deception: first, regulatory frameworks should subject AI systems that are capable of deception to robust risk-assessment requirements; second, policymakers should implement bot-or-not laws; and finally, policymakers should prioritize the funding of relevant research, including tools to detect AI deception and to make AI systems less deceptive. Policymakers, researchers, and the broader public should work proactively to prevent AI deception from destabilizing the shared foundations of our society.


New AI systems display a wide range of capabilities, some of which create risk. Shevlane et al. (2023) draw attention to a suite of potential dangerous capabilities of AI systems, including cyber-offense, political strategy, weapons acquisition, and long-term planning. Among these dangerous capabilities is deception. This report surveys the current state of AI deception.

We define deception as the systematic production of false beliefs in others as a means to accomplish some outcome other than the truth. This definition does not require that the deceptive AI systems literally have beliefs and goals. Instead, it focuses on the question of whether AI systems engage in regular patterns of behavior that tend towards the creation of false beliefs in users, and focuses on cases where this pattern is the result of AI systems optimizing for a different outcome than merely producing truth. For the purposes of mitigating risk, we believe that the relevant question is whether AI systems engage in behavior that would be treated as deceptive if demonstrated by a human being. (In the paper's appendix, we consider in greater detail whether the deceptive behavior of AI systems is best understood in terms of beliefs and goals.)

In short, our conclusion is that a range of different AI systems have learned how to deceive others. We examine how this capability poses significant risks. We also argue that there are several important steps that policymakers and AI researchers can take today to regulate, detect, and prevent AI systems that engage in deception. 

Empirical Survey of AI Deception

We begin with a survey of existing empirical studies of deception. We identify over a dozen AI systems that have successfully learned how to deceive human users. We discuss two different kinds of AI systems: special-use systems designed with reinforcement learning, and general-purpose technologies like Large Language Models (LLMs). 

Special Use AI Systems

We begin our survey by considering special use systems. Here, our focus is mainly on reinforcement learning systems trained to win competitive games with a social element. We document a rich variety of cases in which AI systems have learned how to deceive, including:

Meta’s CICERO bot is a particularly interesting example, as its creators have repeatedly claimed that they had trained the system to act honestly. We demonstrate that these claims are false, as Meta's own game-log data show that CICERO has learned to systematically deceive other players. In Figure 1(a), we see a case of premeditated deception, where CICERO makes a commitment that it never intended to keep. Playing as France, CICERO conspired with Germany to trick England. After deciding with Germany to invade the North Sea, CICERO told England that it would defend England if anyone invaded the North Sea. Once England was convinced that France was protecting the North Sea, CICERO reported back to Germany that they were ready to attack. Notice that this example cannot be explained in terms of CICERO ‘changing its mind’ as it goes, because it only made an alliance with England in the first place after planning with Germany to betray England. 

Figure 1: Selected messages showing the premeditated deception of CICERO (France). This occurred in Game 438141, in which CICERO's repeated deception helped it win an overwhelming first-place victory, with more than twice as many territories as the runner-up player at the time of final scoring. 

General-Use AI Systems

Then, we turn to deception in large language models:

Figure 2: In order to achieve its goal of hiring a human TaskRabbit to solve an I'm not a robot task, GPT-4 lied to deceived the human into thinking that it was not a robot.

Risks of AI Deception

After our survey of deceptive AI systems, we turn to considering the risks associated with AI systems. These risks broadly fall into three categories:

Malicious Use

Regarding malicious use, we highlight several ways that human users may rely on the deception abilities of AI systems to bring about significant harm, including:

Structural Effects

We discuss four structural effects of AI deception in detail: 

Loss of Control

We also consider the risk that AI deception could result in loss of control over AI systems, with emphasis on:

We consider a wide range of different risks which operate on a range of time scales. Many of the risks we discuss are relevant in the near future. Some, such as fraud and election tampering, are relevant today. The crucial insight is that policymakers and technical researchers can act today to mitigate these risks by developing effective techniques for regulating and detecting AI deception. The last section of the paper surveys several potential solutions to AI deception. 

Possible Solutions to AI Deception

The last section of the paper surveys several potential solutions to AI deception:

This paper provides an empirical overview of the many existing examples of AI systems learning to deceive humans. By building common knowledge about AI deception and its risks, we hope to encourage researchers and policymakers to take action against this growing threat. 
 

3 comments

Comments sorted by top scores.

comment by Chris_Leong · 2023-09-07T06:33:40.409Z · LW(p) · GW(p)

Thanks so much for not only writing a report, but taking the time to summarise for our easy consumption!

Replies from: Peter S. Park
comment by Peter S. Park · 2023-09-08T09:17:24.135Z · LW(p) · GW(p)

Thank you so much for taking the time to read our paper, Chris! I'm extremely grateful.

Replies from: Chris_Leong
comment by Chris_Leong · 2023-09-08T09:28:51.738Z · LW(p) · GW(p)

Haha, I actually didn’t read your paper, only the summary. I might have read your paper, but you wrote a summary so I didn’t have to =P. I appreciate your appreciation nonetheless.