Ten people on the inside

post by Buck · 2025-01-28T16:41:22.990Z · LW · GW · 16 comments

(Many of these ideas developed in conversation with Ryan Greenblatt)

In a shortform [AF(p) · GW(p)], I described some different levels of resources and buy-in for misalignment risk mitigations that might be present in AI labs:

*The “safety case” regime.* Sometimes people talk about wanting to have approaches to safety such that if all AI developers followed these approaches, the overall level of risk posed by AI would be minimal. (These approaches are going to be more conservative than will probably be feasible in practice given the amount of competitive pressure, so I think it’s pretty likely that AI developers don’t actually hold themselves to these standards, but I agree with e.g. Anthropic that this level of caution is at least a useful hypothetical to consider.) This is the level of caution people are usually talking about when they discuss making safety cases. I usually operationalize this as the AI developer wanting to have <1% chance that their AIs escape in the first year of deployment, and <5% conditional on the model trying pretty hard to subvert their safety measures. (I think some AI developers are hoping to get lower levels of risk than this.)

*The rushed reasonable developer regime.* The much riskier regimes I expect, where even relatively reasonable AI developers are in a huge rush and so are much less able to implement interventions carefully or to err on the side of caution.

*The rushed unreasonable developer regime.* You’re advising an unreasonable AI developer that is incompetent at implementing safety measures and unwilling to put much effort into implementing these measures.

*The existential war regime*. You’re in an existential war with an enemy and you’re indifferent to AI takeover vs the enemy defeating you. This might happen if you’re in a war with a nation you don’t like much, or if you’re at war with AIs. Obviously this is a regime where you should be much more aggressive.

I want to flesh out one particular rushed unreasonable developer scenario that I’ve been thinking about lately: there’s ten people inside the AI company who are really concerned about catastrophic risk from misalignment. The AI company as a whole pays lip service to AI risk broadly construed and talks occasionally about risk from AGI, but they don’t take misalignment risk in particular (perhaps especially risk from schemers) very seriously.

I think this scenario (and similarly pessimistic scenarios) seems important to target with technical research and planning: it seems pretty likely that we'll only have this level of political will in short timelines (at least within a subset of competitive AI companies), and it seems possible to substantially improve the situation. I worry that a lot of AI safety thinking and planning focuses on overly optimistic scenarios where a responsible developer has a substantial lead, and I think more focus on pessimistic scenarios at the margin would be useful.

What should these people try to do? The possibilities are basically the same as the possibilities for what a responsible developer might do:

The main focus of my research is on safety measures, so I’ve thought particularly about what safety measures they should implement. I'll give some more flavor on what I imagine this scenario is like: The company, like many startups, is a do-ocracy, so these 10 people have a reasonable amount of free rein to implement safety measures that they want. They have to tread lightly. They don’t have much political capital. All they can do is make it so that it’s easier for the company to let them do their thing than to fire them. So the safety measures they institute need to be:

I think it’s scarily plausible that we’ll end up in a situation like this. Two different versions of this:

What should we do based on this?

16 comments

comment by Scott Alexander (Yvain) · 2025-01-29T03:59:20.299Z · LW(p) · GW(p)

Does this imply that fewer safety people should quit leading labs to protest poor safety policies?

Replies from: Buck, zac-hatfield-dodds, Seth Herd, steven lee
comment by Buck · 2025-01-29T16:33:15.880Z · LW(p) · GW(p)

I've talked to a lot of people who have left leading AI companies for reasons related to thinking that their company was being insufficiently cautious. I wouldn't usually say that they'd left "in protest"; for example, most of them haven't directly criticized the companies after leaving.

In my experience, the main reason that most of these people left was that they found it very unpleasant to work there and thought their research would be better elsewhere, not that they wanted to protest poor safety policies per se. I usually advise such people against leaving if the company has very few safety staff, but it depends on their skillset.

(These arguments don't apply to Anthropic: there are many people there who I think will try to implement reasonable safety techniques, so on the current margin, the benefit of an additional person to getting safety techniques implemented seems way lower. It might still make sense to work at Anthropic, especially if you think it's a good place to do safety research that can be exported.)

comment by Zac Hatfield-Dodds (zac-hatfield-dodds) · 2025-01-29T10:31:17.073Z · LW(p) · GW(p)

My impression is that few (one or two?) of the safety people who have quit a leading lab did so to protest poor safety policies, and of those few none saw staying as a viable option.

Relatedly, I think Buck far overestimates the influence and resources of safety-concerned staff in a 'rushed unreasonable developer'.

Replies from: habryka4, Buck, ryan_greenblatt
comment by habryka (habryka4) · 2025-01-29T17:58:10.950Z · LW(p) · GW(p)

My impression is that few (one or two?) of the safety people who have quit a leading lab did so to protest poor safety policies, and of those few none saw staying as a viable option.

While this isn't amazing evidence, my sense is that around 6 people who quit called out OpenAI's reckless attitude towards risk in parallel with announcing their departure (at various levels of explicitness, but quite strongly in all cases by standard professional norms).

It's hard to say that people quit "to protest safety policies", but they definitely used their departure to protest safety policies. My sense is that almost everyone who left in the last year (Daniel, William, Richard, Steven Adler, Miles) did so with a pretty big public message.

comment by Buck · 2025-01-29T16:28:12.931Z · LW(p) · GW(p)

Many more than two safety-concerned people have left AI companies for reasons related to thinking that those companies are reckless.

Replies from: ryan_greenblatt
comment by ryan_greenblatt · 2025-01-29T16:55:07.839Z · LW(p) · GW(p)

I think Zac is trying to say they left not to protest, but instead because they didn't think staying was viable (for whatever research and/or implementation they wanted to do).

From my understanding, "staying wouldn't be viable for someone who was willing to work in a potentially pretty unpleasant work environment and focus on implementation (and currently prepping for this implementation)" doesn't seem like an accurate description of the situation. (See also Buck's comment here [LW(p) · GW(p)].)

comment by ryan_greenblatt · 2025-01-29T16:27:25.730Z · LW(p) · GW(p)

Relatedly, I think Buck far overestimates the influence and resources of safety-concerned staff in a 'rushed unreasonable developer'.

As in, you don't expect they'll be able to implement stuff even if it doesn't make anyone's workflow harder, or you don't expect they'll be able to get that much compute?

Naively, we might expect ~1% of compute: there might be around 1000 researchers, and 10/1000 is 1%. Buck said 3% because I argued for increasing this number. My case would be that there will be a bunch of cases where the thing they want to do is obviously reasonable and potentially justifiable from multiple perspectives (do some monitoring of internal usage, fine-tune a model for forecasting/advice, use models to do safety research), such that they can pull somewhat more compute than headcount alone would suggest.

comment by Seth Herd · 2025-01-29T04:49:43.060Z · LW(p) · GW(p)

It does seem to imply that, doesn't it? I respect the people leaving, and I think it does send a valuable message. And it seems very valuable to have safety-conscious people on the inside.

Replies from: Raemon
comment by Raemon · 2025-01-29T05:16:12.344Z · LW(p) · GW(p)

The question is "are the safety-conscious people effectual at all, and what are their opportunity costs?".

I.e., are the cheap things they can do that don't step on anyone's toes actually that helpful on the margin, better than what they'd be able to do at another company? (I don't know the answer; it depends on the people.)

comment by Steven Lee (steven lee) · 2025-01-29T05:50:09.997Z · LW(p) · GW(p)

Not Buck, but I think it does, unless of course they Saw Something and decided that safety efforts weren't going to work. The essay seems to hinge on safety people being able to make models safer, which sounds plausible, but I'm sure they already knew that. Given their insider information and their conclusions about their ability to make a positive impact, it seems less plausible that their safety efforts would succeed. Maybe whether or not someone has already quit is an indication of how impactful their safety work is. It also varies by lab, with OpenAI having many safety-conscious quitters but other labs having far fewer (I want to say none, but maybe I just haven't heard of any).

The other thing to think about is whether or not people who quit and claimed it was due to safety reasons were being honest about that. I'd like to believe that they were, but all companies have culture/performance expectations that their employees might not want to meet, and quitting for safety reasons sounds better than quitting over performance issues.

comment by Nathan Helm-Burger (nathan-helm-burger) · 2025-01-28T21:31:11.171Z · LW(p) · GW(p)

Yeah, the safety tax implied by davidad's proposals is why I have less hope for them than for your weaker-but-cheaper control schemes. The only safety techniques that count are the ones that actually get deployed in time.

Replies from: adam_scholl
comment by Adam Scholl (adam_scholl) · 2025-01-29T19:44:21.591Z · LW(p) · GW(p)

The only safety techniques that count are the ones that actually get deployed in time.

True, but note this doesn't necessarily imply trying to maximize your impact in the mean timelines world! Alignment plans vary hugely in potential usefulness, so I think it can pretty easily be the case that your highest EV bet would only pay off in a minority of possible futures.

Replies from: nathan-helm-burger
comment by Nathan Helm-Burger (nathan-helm-burger) · 2025-01-30T00:13:47.572Z · LW(p) · GW(p)

Be that as it may, I nevertheless feel discomfited by the fact that I have been arguing for a 2026-2028 arrival of AGI for several years now, and people have been dismissing my concerns and focusing on plans for dealing with AGI in the 2030s or later.

It's a bad pattern that the near-term-AGI space gets systematically neglected because it feels hard to come up with plans for.

[Edit: I think that the relatively recent work done on pragmatic near-term control by Ryan and Buck at Redwood is a relieving departure from this pattern.]

comment by Siebe · 2025-01-29T15:24:53.776Z · LW(p) · GW(p)

What about whistle-blowing and anonymous leaking? Seems like it would go well together with concrete evidence of risk.

comment by Gunnar_Zarncke · 2025-01-29T07:54:22.738Z · LW(p) · GW(p)

So the safety measures they institute need to be:

Cheap [and] Low compliance overhead

There is an additional option: safety measures that make the main product more useful, e.g., by catching failure cases like jailbreaks.

comment by sjadler · 2025-01-29T09:06:53.492Z · LW(p) · GW(p)

I really appreciate this write-up. I felt sad while reading it, because I have a very hard time imagining an AI lab yielding to another leader it considers to be irresponsible, and maybe not even to one it considers to be responsible. (I am not that familiar with the inner workings at Anthropic though, and they are probably at the top of my list of labs that might yield in those scenarios, or might not race desperately if in a close one.)

One reason for not yielding is that it’s probably hard for one lab to definitively tell that another lab is very far ahead of them, since we should expect some important capability info to remain private.

It seems to me, then, that ways for labs to credibly demonstrate a lead without leaking info that allows others to catch up would be useful to have, perhaps paired with enforceable conditional commitments to yield if certain conditions are demonstrated.