The Shallow Bench

post by Karl Faulks (karl-faulks) · 2024-11-05T05:07:27.357Z · LW · GW · 0 comments


[Cross posting from my personal blog: https://spiralprogress.com/2024/10/28/the-shallow-bench/]

Project Hail Mary follows Ryland Grace, a disgraced academic turned high school biology teacher who gets selected as part of a crew of three tasked with saving all of humanity from an impending alien threat.

Not even in sci-fi does a premise this absurd get presented without explanation. What does PHM offer? 

“They found a collection of genes that give a human ‘coma resistance.’”

“The main problem is this: On average, only one in every seven thousand humans has that genetic sequence.”

“We wouldn’t be able to send the most qualified people. We’d be sending the seven-thousandth most qualified people.”

In real life, I work in a complex and niche field that my skill set only very tangentially qualifies me for.

Periodically, I’ll meet people who ask how I ended up there. They’re not trying to be mean; it’s more like incredulity. “Seriously, you’re who humanity tasked with this job?” And I look at them and want to say “Yes, I am also not who I would have picked.”

And yet… and yet here I am. And there is no one else.

A lot of the AI Alignment people I’ve met have a similar vibe. Sometimes they used to work in finance, or neuroscience or software engineering on pretty mundane products. And then they spent a few months in self-study, maybe did a “fellowship” or went to some workshops, or otherwise transitioned into the field, and now they are some of the top people at top labs tasked with this fairly important problem.

One insider estimates that there are 300 alignment researchers total, and only 7 at OpenAI. He was on the team and later fired along with several colleagues, so maybe the number is now closer to 0. In any case, it’s an incredibly small field with an incredibly small talent pool to pull from.

In sports, a deep bench refers to a team that has not only a great starting lineup, but also great players ready to sub in. This is hugely intimidating to face. You can exhaust the starters. You can foul them out. Even if someone gets hurt, there’s another massively talented player ready to take his place. This is the opposite of how things feel in every important field I’ve gotten a look at. The bench for human capital is incredibly shallow.

I am not any kind of powerful insider. I get invited to some rooms, but miss out on a lot. Maybe I just don’t know the right people? Consider instead the much more credible perspective of Nat Friedman who writes: “In many cases it's more accurate to model the world as 500 people than 8 billion”.

Even sci-fi novels only posit a 7000:1 ratio, but Nat is claiming a much more aggressive ten million to one. How is that possible?

Sometimes when asked about my role, I can’t just joke and brush it off. It is a serious question asked by a serious person who really wants to know: why are you the person doing this?

The abstract answer is that unlike in PHM, there is no global government with authoritarian power to appoint people to positions of arbitrary authority, and no guarantee in life that roles are filled with anything close to efficiency. There’s just no mechanism that would make this happen. Maybe hedge fund managers are selected pretty efficiently since they presumably get fired if they don’t make money, but even there we’re just talking about lowering the false positive rate. There is no mechanism to force anyone who could be a great hedge fund manager to go into finance instead of, say, physics or politics.

But the concrete answer is that I sit there, and I enumerate every other person in the world who could be doing my job instead. And I say “Arnold isn’t doing it because he just had a kid and doesn’t want to leave London. Beth can’t do it because she’s doing something more important. Charles could do it, but his visa got denied and there’s no telling when it’ll come through. And Daisy is so burned out from her previous job that every time I ask she just sighs until I hang up.” And that is the complete list of plausible candidates!

An important lesson, then, is not to over-index on abstract reasoning when dealing with really small sample sizes. Sometimes life is just pretty discrete, and things that “should” happen don’t.

Another lesson is to get as close as possible to the specific people in question if you’re seriously trying to model a field. In the case of AI Safety, it’s wildly insufficient to talk about the field’s incentives or game-theoretic dynamics. You have to talk about the motivations, interests, and beliefs of the actual set of actors involved, and if you don’t even know who they are, you are not going to get far.

Finally, this view is just a reason to do things you don’t feel qualified for. I wouldn’t, for instance, compete with Elon on self-landing rockets because he’s already doing it and it seems to be going fairly well!

But if something needs to happen, and no one else is doing it, don’t psych yourself out. You might be wildly under-qualified. You might not be the ideal person for the job. But tragically often, there just isn’t anyone else.
