Thousands of malicious actors on the future of AI misuse

post by Zershaaneh Qureshi (zershaaneh-qureshi), Corin Katzke (corin-katzke), Convergence Analysis (deric-cheng-1) · 2024-04-01T10:08:42.357Z · LW · GW · 0 comments

Contents

  Methodology
  Results

Announcing the results of a 2024 survey by Convergence Analysis. We've posted the executive summary below, but you can read the full report here.

In the largest survey of its kind, Convergence Analysis surveyed 2,779 malicious actors on how they would misuse AI to catastrophic ends. 

In previous work [EA · GW], we've explored the difficulty of forecasting AI risk. Existing attempts rely almost exclusively on data from AI experts and professional forecasters. As a result, the perspectives of perhaps the most important actors in AI risk – malicious actors – are underrepresented in current AI safety discourse. This report aims to fill that gap.

Methodology

We selected malicious actors based on whether they would hypothetically end up in "the bad place" in the TV show The Good Place. This list included members of US-designated terrorist groups, convicted war criminals, and anyone who has ever appeared on Love Island or The Apprentice.

Results

A majority of participants agreed to reflect on their experience in a follow-up survey should they successfully misuse AI. Unfortunately, none agreed to register their misuse with us in advance.

If you self-identify as a malicious actor and are interested in participating in a future study, please get in touch here.
 
