AI Probability Trees - Katja Grace

post by Nathan Young · 2023-08-24T09:45:47.487Z · 3 comments

Contents

  Longer explanations
  How this falls out by 2040
    AIs are good but not godlike (50%)
    ChatGPT20 - competence but no agency (15%)
    Many godlike agentic AI systems blocking one another (4%)
    AI Midtopia/Utopia (16%)
    Saved by policy (12%)
    Doom (15%)
  Who would you like to see this done for? 
  How could this be better?
  Thanks
3 comments

I am interviewing AI experts about what they think will happen with AI. Here are Katja Grace's thoughts. AI risk scares me, but I often feel pretty disconnected from it; this exercise has helped me think about it.

Here are Katja's thoughts in brief:

You can look at an interactive graph here: [interactive graph embed]

Or see all the graphs I've done so far at https://estimaker.app/ai. There is also an image at the bottom of this page.

You can watch the full video here: [video embed]

Longer explanations

Please remember that all of these estimates are for 2040. (This section was originally formatted as a table.)

How this falls out by 2040

Here is how these numbers cash out in possible worlds. The scenarios are mutually exclusive and collectively exhaustive (MECE).

AIs are good but not godlike (50%)

AI tools are great. Maybe they can code a lot or provide a lot of support. But they can't reduce the manpower required to run a department of the Civil Service by 100x, and they can't take on large projects alone, for some reason. It's like GPT-4, but a lot better; not a step change.

ChatGPT20 - competence but no agency (15%)

Imagine a ChatGPT that can produce anything you ask of it, but that only does a few tasks at a time, or can't call itself recursively. Unlike the above, this is genuinely a step change. With it, you or I could run a hedge fund or a chunk of government, but it will involve us supplying the vision.

Many godlike agentic AI systems blocking one another (4%)

As in the current world, many intelligent systems (people and companies) are trying to reach their desired outcomes and blocking one another. Somehow this doesn't lead to the mid/utopias below.

AI Midtopia/Utopia (16%)

These are the really good scenarios, where we have agentic AGI that doesn't want bad things. There is a broad spread of possible worlds here, from some kind of human uplift to a kind of superb business as usual, where we might still have much to complain about but everyone lives like the richest people do today.

Saved by policy (12%)

These are worlds where things would have gone really badly, but policy delayed them. They might look like any of the other non-doom worlds: perhaps AI has been slowed a lot, or perhaps it has better goals. To keep the graph simple, it doesn't really deal with what these worlds look like. Please make suggestions.

Doom (15%)

Unambiguously bad outcomes: agentic AGI which wants things we'd consider bad, and gets them. My sense is that Katja thinks most bad outcomes come from AGI taking over, with maybe a 10% chance of it happening quickly and a 90% chance of it happening slowly.
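To make the arithmetic concrete, here is a minimal Python sketch of the unconditional probabilities this implies, assuming the 15% doom figure and the 10%/90% fast/slow split above (the variable names are mine):

```python
# Probability tree arithmetic for the "Doom" branch above.

# Headline chance of doom by 2040, from Katja's tree.
p_doom = 0.15

# Nathan's sense of Katja's split, conditional on doom:
# AGI takeover happens quickly vs. slowly.
p_fast_given_doom = 0.10
p_slow_given_doom = 0.90

# Unconditional probabilities: multiply along each branch.
p_fast_doom = p_doom * p_fast_given_doom  # 0.015
p_slow_doom = p_doom * p_slow_given_doom  # 0.135

print(f"Doom via fast takeover: {p_fast_doom:.1%}")  # 1.5%
print(f"Doom via slow takeover: {p_slow_doom:.1%}")  # 13.5%
```

So on these numbers, fast takeover accounts for about 1.5 percentage points of the overall 15%, and slow takeover for the remaining 13.5.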

If you would like to see more about this, Katja has much longer explanations here: https://wiki.aiimpacts.org/doku.php?id=arguments_for_ai_risk:is_ai_an_existential_threat_to_humanity:start 

Who would you like to see this done for? 

I wanted to see work like this, so I thought I'd do it. If you want to see a specific person's AI risk model, perhaps ask them to talk to me. It takes about 90 minutes of their time, and currently I think the marginal gains from each subsequent interview are pretty high.

On a more general level, I am pretty encouraged by positive feedback. Should I try to get funding to do more interviews like this?

How could this be better?

We are still in the early stages, so I appreciate a lot of nitpicky feedback.

Thanks

Thanks to Katja Grace for the interview, to Rebecca Hawkins for feedback (in particular for suggesting the table layout), and to Arden Koehler for good comments (you should read her thread on writing good comments). Thanks also to the person who suggested I write this sequence.

3 comments


comment by Nathan Young · 2023-08-24T10:10:54.832Z

Does anyone know how to make the table wider?

comment by Hoagy · 2023-08-24T23:23:39.117Z

Seems like it'd be better formatted as a nested list, given the volume of text.

comment by Nathan Young · 2023-08-26T10:02:17.924Z

Maybe, but only because LessWrong doesn't let you have wide tables.