AI Risk Scenario Roleplay

post by Milli | Martin (Milli), Manuel Allgaier (white rabbit), __nobody · 2023-11-07T14:36:12.888Z

Contents

  Game plan
  Scenario
  Logistics


Capacity: up to 40 people, first come, first served. Please RSVP on the EA Forum! (no account needed)


How could AI existential risk play out? Choose one of five roles, play through a plausible scenario with other attendees, and discuss it afterward. This was a popular session at the EAGxBerlin conference with ~70 participants over two sessions and positive feedback, so we're doing it again for EA Berlin.

Everyone is welcome, even if you're new to AI safety! People underrepresented in the AI safety field are especially welcome. If you're very new to the field, we recommend reading or skimming an introductory text such as this 80,000 Hours article or the Most Important Century series summary beforehand, if possible.

Game plan

19:00 Doors open, snacks

19:30 Game starts (please be there by 19:30 at the latest)

~20:30 (optional) Stay longer to discuss, play a second round, or socialize; more snacks
Open end (22:00 or later)

(Arrive 19:00-19:30 to join the game, or after 20:30 to socialize; leave anytime.)

Scenario

Imagine it's the year 2030, and OpenAI has just announced plans to train a new model with superhuman capabilities for almost every task: analyzing politics and economics, strategizing, coding, trading on the stock market, writing persuasively, generating realistic audiovisual content, and more. It could do all of this for $10/h (at human speed).

Many are excited about the possibilities and dream of a world in which no human ever has to work again. Others are more worried about the risks. Most experts see no evidence that the model is obviously misaligned or agentic in its own right, but admit that they cannot guarantee its safety either.

If granted permission, OpenAI would start training two weeks from now and then deploy the model in six weeks. The US White House has hurriedly organized a five-day summit to agree on an appropriate response. They have invited the following stakeholders (choose one):

(This list is not exhaustive; other actors such as China, NGOs, and competing AI companies also seem relevant, but smaller groups allow more engagement.)

Host: I (Manuel Allgaier) am currently on a sabbatical, upskilling in AI governance and exploring various AI safety and EA meta projects. Before that, I ran EA Berlin (2019-21) and EA Germany (2021-22) and led the EAGxBerlin 2022 organizing team. I learned about this game while taking the AI Safety Fundamentals governance course. I found this more creative, intuitive approach to AI safety engaging and a useful complement to the usual, more academic approaches, so I developed it further and hosted two games at EAGxBerlin with ~35 participants each. I've studied AI safety and done some freelance work in the field since ~2019, but I'm by no means an expert. I feel I know enough to answer most questions that might come up, but I've invited some people with more expertise just in case.

Logistics

Food: Martin will bring pita bread, vegetables, and dips. Feel free to bring something yourself (preferably no meat, ideally vegan).

Location: We're grateful to the Chaos Computer Club (CCC) Berlin for hosting us in their space! Directions (in German): https://berlin.ccc.de/page/anfahrt[1]. Please contact @__nobody if you have any questions about CCC or the location.

Questions and feedback are welcome via comment, forum PM, or Telegram! Looking forward to fun and insightful games :)
- @Manuel Allgaier (Telegram, anonymous feedback) & @Milli | Martin (Telegram)

  1. ^
    This is the entrance: ring at "Chaos Computer Club Berlin" and push the door when it clicks (no voice or buzzer).
