why did OpenAI employees sign

post by bhauth · 2023-11-27T05:21:28.612Z · LW · GW · 6 comments

This is a question post.


Recently, OpenAI employees signed an open letter demanding that the board reinstate Sam Altman, appoint new board members (the letter named several people allied with Altman), and then resign - or else the signatories would quit and follow Altman to Microsoft.

Following those demands would've put the entire organization under the control of one person with no accountability to anyone. That doesn't seem like an outcome OpenAI employees would want, unless they're dumber than I thought. So, why did they sign? Here are some possible reasons that come to mind:

  1. Altman is just really likeable for people like them - they just like him.
  2. They felt a sense of injustice and outrage over the CEO being fired that they'd never felt over lower-level employees being fired.
  3. They were hired or otherwise rewarded by Altman and thus loyal to him personally.
  4. They believed Altman was more ideologically aligned with them than any likely replacement CEO (including Emmett Shear) would be.
  5. They felt their profit shares would be worth more with Altman leading the company.
  6. They were socially pressured by people with strong views from (3) or (4) or (5).
  7. They were afraid the company would implode and they'd lose their jobs, wanted the option of getting hired at a new group in Microsoft, and figured the risk of signing was low once enough other people had already signed.
  8. They were afraid Altman would return as CEO and fire or otherwise punish them if they hadn't signed.
  9. Something else?

Which of those reasons do you think drove people signing that letter, and why do you think so?

Answers

answer by Viliam · 2023-11-27T08:24:51.908Z · LW(p) · GW(p)

I have no data on the OpenAI situation, but #8 has crossed my mind. (It reminded me of the communist elections where the Party got 99% approval.) If Sam Altman returns -- and if he is the kind of person some people describe him as -- you do not want to be one of the few who didn't sign the public letter calling for his return. That would be like putting your name on a public short list of people who don't like the boss.

Of course, #5 is also likely. But notice that the entire point of having the board was to prevent the #5 reasoning from ruling the company. Which means that ~all OpenAI employees oppose the OpenAI Charter. Which means that Sam Altman won the revolution (by strategically employing/keeping the kind of people who oppose the company Charter) long before the board even noticed that it had started.

(I find it amusing that the document that people in communist Czechoslovakia were afraid not to sign publicly, lest they lose their jobs, was called... Anticharter.)

comment by gwern · 2023-11-27T15:17:36.506Z · LW(p) · GW(p)

Which means that ~all OpenAI employees oppose the OpenAI Charter.

It was striking to see how many commenters and OA employees quoted Toner quoting the OA Charter (which Sam Altman helped write & signed off on) as proof that she was an unhinged mindless zealot and that every negative accusation against the board was true.

It would be like a supermajority of Americans having never heard of the First Amendment and, on hearing a presidential candidate say "the government should not abridge freedom of speech or the press", all starting to rail about how 'this is some libertarian moonbat trying to entryist the US government to impose their unprecedentedly extreme ideology about personal freedom, and obviously, totally unacceptable and unelectable. Not abridge speech?! When people abuse their freedom to say so many terrible things, sometimes even criticizing the government? You gotta be kidding - freedom of speech doesn't mean freedom from consequences, like being punished by laws!'

Hard not to see the OA LLC as too fundamentally unaligned with the mission at that point. It seems like at some point, possibly years ago, OA LLC became basically a place that didn't believe in the mission or that AGI risk is a thing and regarded all that stuff as so much PR kayfabe and not, like, serious (except for a few nuts over in the Superalignment group who thankfully can be ignored - after all, it's not like the redteaming ever turns up any real problems, right? you'd've heard). At that point, the OA double-structure has failed. Double-structures like Hershey or Mozilla never pit the nonprofit against the for-profit to this extent, and double-structures like Ikea where it's a tax gimmick, cannot. And it turns out, pitted that much, the for-profit holds most of the cards.

I don't know how much to fault the board for this. They may well have known how much the employee base had diverged from the mission, but what were they going to do? Fire Altman back in 2020, before he could bring in all the people from Dropbox etc who then hired more like them & backed him, never mind the damage to the LLC? (I'm not sure they ever had the votes to do that for any reason, much less a slippery slope reason.) Leak to the press - the press that Altman has spent 15 years leaking to and building up favors with - to try to embarrass him out? ('Lol. lmao. lel.') Politely notify him that it was open war and he had 3 months to defeat them before being fired? Yeah...

Thus far, I don't think there's much of a post-mortem to this other than 'like Arm China, at some point an entity is so misaligned that you can't stop it from collectively walking out the door and simply ignoring you, no matter how many de jure rights or powers you supposedly have or how blatant the entity's misalignment has become. And the only way to fix that is to not get into that situation to begin with'. But if you didn't do that, then OA at this point would probably have accomplished a lot less in terms of both safety & capability, so the choice looked obvious ex ante.

Replies from: Viliam
comment by Viliam · 2023-11-28T11:45:30.131Z · LW(p) · GW(p)

The rules may be nice, but they are not going to enforce themselves.

Many communist countries had freedom of speech and freedom of religion in their constitutions. But those constitutions were never meant to be taken seriously, they were just PR documents for the naive Western journalists to quote from.

comment by trevor (TrevorWiesinger) · 2023-11-27T16:03:42.906Z · LW(p) · GW(p)

Citing a relevant part of the Lex Fridman interview (transcript), which people will probably find helpful to watch so you can at least eyeball Altman's facial expressions:

LEX FRIDMAN: How do you hire? How do you hire great teams? The folks I’ve interacted with, some of the most amazing folks I’ve ever met.

SAM ALTMAN: It takes a lot of time. I mean, I think a lot of people claim to spend a third of their time hiring. I for real truly do. I still approve every single hire at OpenAI. And I think we’re working on a problem that is like very cool and that great people want to work on. We have great people and people want to be around them. But even with that, I think there’s just no shortcut for putting a ton of effort into this.

I think it's also important to do three-body-problem thinking with this situation; it's also possible that Microsoft or some other third party might have gradually but successfully orchestrated distrust/conflict between two good-guy factions or acquired access to the minds/culture of OpenAI employees, in which case it's critical for the surviving good guys to mitigate the damage and maximize robustness against third parties in the future.

For example, Altman was misled to believe that the board was probably compromised and he had to throw everything at them, and the board was misled to believe that Altman was hopelessly compromised and they had to throw everything at him (or maybe one of them was actually compromised). I actually wrote about that 5 days before the OpenAI conflict started (I'd call that a fun fact but not a suspicious coincidence, because things are going faster now; 5 days in 2023 is like 30 days in 2019 time).

answer by aphyer · 2023-11-27T18:32:25.268Z · LW(p) · GW(p)

Disclaimer: I do not work at OpenAI and have no inside knowledge of the situation.

I work in the finance industry.  (Personal views are not those of my employer, etc, etc).

Some years ago, a few people from my team (2 on a team of ~7) were laid off as part of firm staff reductions.

My boss and my boss's boss held a meeting with the rest of the team on the day those people left, explaining what had happened, reassuring us that no further layoffs were planned, describing who would be taking over what parts of the responsibilities of the laid-off people, etc.

On my understanding of employment, this was just...sort of...the basic standard of professionalism and courtesy?

If I had found out about layoffs at my firm through media coverage, or when I tried to email a coworker and their email no longer worked, I would be unhappy.  If the only communication I got from above about reasons for the layoffs was that destroying the company 'would be consistent with the mission', I would be very unhappy.  In any of those cases, I would strongly consider looking for jobs elsewhere.

It has sometimes seemed to me that the EA/nonprofit space does not follow the rules I am familiar with for the employer/employee relationship.  Perhaps my experience in the famously kindly and generous finance industry has not prepared me for the cutthroat reality of nonprofit altruist organizations.

Nevertheless, any OpenAI employee with views similar to my own would be concerned and plausibly looking for a new job after the board fired the CEO with no justification or communication.  If you want a one-sentence summary of the thought process, it could be: 

'If this is how they treat the CEO, how will they treat me?'

comment by gwern · 2023-11-28T00:21:26.434Z · LW(p) · GW(p)

'If this is how they treat the CEO, how will they treat me?'

You just explained why it's totally disanalogous. An ordinary employee is not a CEO {{citation needed}}.

Replies from: aphyer
comment by aphyer · 2023-11-28T01:21:48.660Z · LW(p) · GW(p)

This is true, but in general the differences between an ordinary employee and a CEO go in the CEO's favor.  I believe this does also extend to 'how are they fired': on my understanding the modal way a CEO is 'fired' is by announcing that they have chosen to retire to pursue other opportunities/spend more time with their family, and receiving a gigantic severance package.

comment by JenniferRM · 2023-11-27T21:12:21.595Z · LW(p) · GW(p)

I laughed out loud at this line...

Perhaps my experience in the famously kindly and generous finance industry has not prepared me for the cutthroat reality of nonprofit altruist organizations.

...and then I wondered if you've seen Margin Call? It is truly a work of art.

My experiences are mostly in startups, but rarely on the actual founding team, so I have seen more stuff that was unbuffered by kind, diligent, "clueless" bosses.

My general impression is that "systems and processes" go a long way toward creating smooth rides for the people at the bottom, but those things are not effectively in place (1) at the very beginning and (2) at the top when exceptional situations arise. Credentialed labor is generally better compensated in big organizations precisely because they have "systems" where people reliably turn cranks that reliably Make Number Go Up and then share out fractional amounts of "the number".

Some years ago, a few people from my team (2 on a team of ~7) were laid off as part of firm staff reductions.

Did you ever see or talk with them again? Did they get nice severance packages? Severance packages are the normal way for oligarchs to minimize expensive conflict, I think.

answer by Thane Ruthenis · 2023-11-27T11:38:00.376Z · LW(p) · GW(p)

More or less all of it, I think.

  • Fundamentally, Sam Altman is a competent interpersonal operator. He'd doubtlessly worked both to be a naturally likeable and loyalty-inspiring person (in a passive way), and to purposefully inspire and select employees for loyalty (actively). That provided the backbone of the effort. No matter how many carrots and sticks were deployed, if Altman hadn't earned the loyalty to some extent, this show of support wouldn't have been possible to achieve.
  • By contrast, the board apparently wasn't very involved with the employees, and they did handle the communications terribly. Why would an OpenAI employee automatically assume they're the good guys? (When even we are unsure.)
    • While loyalty to Altman might've varied, the employees for sure didn't have any personal loyalty to the board.
  • And I'm sure there were carrots and sticks deployed aplenty:
    • On the carrots end: there could've been a ton of things, like Microsoft promising raises and guarantees if they jump ship (to make "we'd just quit otherwise" credible), Altman promising raises if he returns, arguments that they'd earn more in the long run if they either jump ship or get Altman back but not if they do nothing, etc.
    • Similarly, a ton of sticks: vivid images of the company imploding without Altman, and losing the investors, and of the board doing more random firings and wrecking things, plus covert suggestions of demotions or purges if people don't support him and he comes back anyway, et cetera.
    • Which specific carrots and sticks were employed matters little, and likely differed from person to person to some extent. The point is just that there were a lot of things that could've sounded convincing with the right spin, and it was a relatively high-time-pressure situation, and the people spearheading the effort were (likely, apparently) good at making use of all of this.
  • The snowball effect/peer pressure obviously played a role (a toy model of this cascade follows the list). OpenAI employees obviously differ in how loyal/susceptible to the pressure they are, but as more and more people signed, the pressures would've mounted. First the 100 most loyal signed, then the 100 less-loyal ones (who wouldn't have signed if the initiative didn't already have some traction), then the 100 even-less-loyal ones, and so on.
    • If a random employee X were the first whom they asked to sign, that employee might or might not have refused. But if X is the seven-hundredth employee they're asking, with 699 preceding ones having already signed, is X really going to make a stand? Yes, maybe! But that requires them to be against it on principle, such that they're motivated to swim against the current.
    • And this effect could've been invoked even before the majority of the company signed – just by creating a narrative of inevitability: that of course we're all gonna sign, that this is the way the wind blows.
  • Lastly, I think signing the letter doesn't actually commit an employee to anything. It's not really legally binding? The cost of signing it is thus ~zero (except maybe some vague concerns about honor), whereas the supposed rewards for signing it and the punishments for not signing it (which, again, were doubtlessly floated around) are much more concrete.
    • And that point I'm outlining, in itself, is unlikely to be something the people organizing the effort had failed to appreciate.
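
A minimal sketch of that cascade dynamic (all numbers are invented for illustration: the headcount, the threshold distribution, and the "100 unconditional signers" seed are assumptions, not data about OpenAI):

```python
import random

# Granovetter-style threshold cascade: each employee signs once the fraction
# of colleagues who have already signed meets their personal threshold.
random.seed(0)
N = 770  # roughly the reported OpenAI headcount at the time (assumption)
# Hypothetical loyalty distribution, skewed low (most employees need only
# modest social proof): squaring a uniform draw pushes thresholds toward 0.
thresholds = sorted(random.random() ** 2 for _ in range(N))
# Seed the cascade: assume the 100 most loyal sign unconditionally.
for i in range(100):
    thresholds[i] = 0.0

signed = 0
while signed < N and thresholds[signed] <= signed / N:
    signed += 1  # everyone whose threshold is now met signs in turn

# With these assumed parameters the cascade typically runs to (nearly) everyone.
print(f"{signed}/{N} employees sign")
```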

So there you have it: a relatively good boss is ousted by the board you know nothing about for unclear reasons, people close to the epicenter are running around telling you how it's all going to implode now and how we have this costless way to maybe avert it, they're being really pushy about it, it's all very confusing and scary, more and more of the people around you are signing the letter, there's an increasing atmosphere that signing it is just what an OpenAI employee does – would you really not sign?

Which isn't to say it wasn't an impressive accomplishment. The level of coordination required to pull this off was doubtlessly high, it would've required handling all of the aforementioned covert messaging about carrots-and-sticks with a minimal degree of competence, it required the foundation of Sam Altman establishing himself as a good leader, etc.

But I'm wholly unsurprised it worked.

answer by quailia · 2023-11-27T10:32:12.834Z · LW(p) · GW(p)

It seemed like a classic case of prisoner's dilemma, so (5) and (7). The more of your company that signs the petition, the lower the value of your PPUs, making it more attractive to sign. It reached a point where they felt OpenAI's value, and their PPUs, would go to nothing if a critical mass joined Microsoft. In fact, if MS was willing to match compensation, everyone "cooperating" by not signing the petition would have been a worse outcome for everyone than just joining MS, because they had already seen other players move first (Altman, Brockman, other resignations) - that is, if we look purely at compensation (not even taking into account the possibility that the PPU-equivalent at MS would not be profit-capped). In the textbook prisoner's dilemma, cooperation leads to the best overall outcome for everyone, and the best move is to defect only if you are unable to coordinate - which is not really the case here.
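
A minimal sketch of the payoff structure being described (all numbers are invented; "MS matches compensation" and the PPU-collapse threshold are illustrative assumptions, not facts about the actual offers):

```python
# Hypothetical payoffs for one employee, as a function of how many colleagues
# have already signed. "Stay" = keep your PPUs and hope OpenAI holds together;
# "sign" = keep the option of matched compensation at Microsoft.

def stay_value(fraction_signed: float) -> float:
    # PPUs are assumed valuable only while the company holds together;
    # past a critical mass of departures they are nearly worthless.
    return 1.0 if fraction_signed < 0.5 else 0.1

def sign_value(fraction_signed: float) -> float:
    # Assumed roughly flat: MS matches compensation regardless of what others do.
    return 0.8

for f in (0.1, 0.3, 0.5, 0.7, 0.9):
    best = "sign" if sign_value(f) > stay_value(f) else "stay"
    print(f"{f:.0%} signed: stay={stay_value(f):.1f}, sign={sign_value(f):.1f} -> {best}")
```

Once the signed fraction crosses the assumed threshold, signing dominates staying, which is the tipping dynamic described above.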

Further, even if an OAI employee did not care about PPUs at all, and all they cared about was the non-profit mission of AI for the betterment of all humanity, they might have felt there was a greater likelihood of achieving that mission at Microsoft than at the empty shell of OAI (the safety teams, for example - might as well do your best to help safety at the new "leading" organisation, and get paid too).

comment by followthesilence · 2023-11-28T06:10:06.262Z · LW(p) · GW(p)

Not sure if this page is broken or I'm technically inept, but I can't figure out how to reply to quailia's comment directly:

Primarily #5 and #7 was my gut reaction, and quailia's post articulates the rationale better than I could.

One useful piece of information that would influence my weights: what were OAI's general hiring criteria? If they sought solely the "best and brightest" on technical skills and enticed talent primarily with premier pay packages, I'd lean harder on #5. If they sought cultural/mission fits in some meaningful way, I might update lower on #5/#7 and higher on the others. I read the external blog post about the bulk of OAI compensation being in PPUs, but that's not necessarily incompatible with mission fit.

Well done on the list overall, seems pretty complete, though aphyer provides a good unique reason (albeit adjacent to #2).

answer by David Hornbein · 2023-11-27T23:27:38.536Z · LW(p) · GW(p)

Suppose you're an engineer at SpaceX. You've always loved rockets, and Elon Musk seems like the guy who's getting them built. You go to work on Saturdays, you sometimes spend ten hours at the office, you watch the rockets take off and you watch the rockets land intact and that makes everything worth it.

Now imagine that Musk gets in trouble with the government. Let's say the Securities and Exchange Commission charges him with fraud again, and this time they're *really* going after him, not just letting him go with a slap on the wrist like the first time. SpaceX's board of directors negotiates with SEC prosecutors. When they emerge they fire Musk from SpaceX, and remove Elon and Kimbal Musk from the board. They appoint Gwynne Shotwell as the new CEO.

You're pretty worried! You like Shotwell, sure, but Musk's charisma and his intangible magic have been very important to the company's success so far. You're not sure what will happen to the company without him. Will you still be making revolutionary new rockets in five years, or will the company regress to the mean like Boeing? You talk to some colleagues, and they're afraid and angry. No one knows what's happening. Alice says that the company would be nothing without Musk and rails at the board for betraying him. Bob says the government has been going after Musk on trumped-up charges for a while, and now they finally got him. Rumor has it that Musk is planning to start a new rocket company.

Then Shotwell resigns in protest. She signs an open letter calling for Musk's reinstatement and the resignation of the board. Board member Luke Nosek signs it too, and says his earlier vote to fire Musk was a huge mistake. 

You get a Slack message from Alice saying that she's signed the letter because she has faith in Musk and wants to work at his company, whichever company that is, in order to make humanity a multiplanetary species. She asks if you want to sign.

How do you feel?

comment by Cookiecarver · 2023-11-28T09:07:46.109Z · LW(p) · GW(p)

Replying to David Hornbein.

Thank you for this comment; this was basically my view as well. I think the employees of OpenAI are simply excited about AGI, have committed their lives to making it a reality (working long hours), and believe AGI would be good for humanity and also good for them personally. My view is that they are very emotionally invested in building AGI, and that stopping all that progress, for reasons that feel speculative, theoretical, and not very tangible, feels painful.

Not that I would agree with that, assuming this is correct.

comment by Logan Zoellner (logan-zoellner) · 2023-11-28T05:42:13.289Z · LW(p) · GW(p)

>Now imagine that Musk gets in trouble with the government

Now imagine the same scenario, but Elon has not gotten in trouble with the government and multiple people (including those who fired him) have affirmed he did nothing wrong.

answer by cata · 2023-11-27T15:48:42.811Z · LW(p) · GW(p)

I have no inside information. My guess is #5 with a side of 1, 6, and "the letter wasn't legally binding anyway so who cares."

I think that the lesson here is that if your company says "Work here for the principles in this charter. We also pay a shitload of money" then you are going to get a lot of employees who like getting paid a shitload of money regardless of the charter, because those are much more common in the population than people who believe the principles in the charter and don't care about money.

answer by Bucky · 2023-11-28T14:28:14.004Z · LW(p) · GW(p)
  1. Someone in your company gets fired, with no reason given, by a boss you don't know or particularly like
  2. You are mad at the boss and want the decision overturned
  3. You have a credible, attractive BATNA (the Microsoft offer)

These 3 items seem like they would be sufficient to cause something like the Open Letter to happen.

In most cases item 3 is not present, which I think is why we don't see things like this happen more often in other organisations.

None of this requires Sam to be hugely likeable or a particularly savvy political operator - just that people generally like him. People seem to suggest he was one or both, which just makes the letter more likely.

I'm sure this doesn't explain it all in OpenAI's case - some/many employees would also have been worried about AI safety which complicates the decision - but I suspect it is the underlying story.

answer by lemonhope (lukehmiles) · 2023-11-27T06:19:39.228Z · LW(p) · GW(p)

I think #5+#6. The people with the most stock tend to be the bosses of the others — the "social pressure" of your boss telling you to sign right now is quite persuasive.

6 comments

Comments sorted by top scores.

comment by jefftk (jkaufman) · 2023-11-27T13:37:52.344Z · LW(p) · GW(p)

#5 was quite concrete and short-term: there was a deal with Thrive where employees were about to be able to sell their stock at an $86B valuation, and that wasn't going to go through with a new company direction.

Replies from: o-o
comment by O O (o-o) · 2023-11-27T16:50:20.787Z · LW(p) · GW(p)

I'm confused why the board didn't just wait a few weeks and announce it after the sale. Seems like a huge blunder unless they were that pressed for time.

Replies from: gwern, JamesPayor
comment by gwern · 2023-11-27T18:06:57.019Z · LW(p) · GW(p)

unless they were that pressed for time.

They were, because they had an extremely fragile coalition and only a brief window of opportunity.

They certainly did not have the power to tell Altman they were going to fire him in several weeks and expect that to stick. None of them, Sutskever included, have ever struck me as that suicidally naive. And it looks like they had good reason to expect that they had little time given the Slack comments Sutskever saw.

Also, remember that Altman has many, many options available to him. Since people seem to think that the board could've just dicked around and had the luxury of waiting a long time, I will highlight one specific tactic that the board should have been very worried about, which possibility did not permit any warning or hint to Altman, and which required moving as fast as possible once reality sank in & they decided to not cede control over OA to Altman: (WSJ)

Some OpenAI executives told her [Helen Toner] that everything relating to their company makes its way into the press.

That is, Altman (or those execs) had the ability to deniably manufacture a Toner scandal at any second by calling up a friendly reporter at, say, The Information, to highlight the (public) paper, which about an hour later (depending on local Pacific Time), would then 'prove' him right about it and provide grounds for an emergency board meeting that day to vote on expelling Toner if she was too stubborn to 'resign'. After which, of course, they would need to immediately vote on new board members to fill out a far-too-small board with Toner gone, whether or not that had been on the official agenda, and this new board would, of course, have to approve of any prior major decisions like 'firing the CEO'. Now, Altman hadn't done this because Altman didn't want the cost of a public scandal, however much of a tempest-in-a-teapot-nothingburger it would be, he was very busy with other things which seemed higher priority and had been neglecting the board, and he didn't think he needed to pay that cost to get Toner off the board. But if he suddenly needed Toner off the board fast as his #1 priority...

The board did not have 'a few weeks'. (After all, once that complex and overwhelmingly important sale was wrapped up... Altman would be less busy and turning his attention to wrapping up other unfinished business he'd neglected.) They did not have days. For all they knew, they could even have had negative hours if Altman had gotten impatient & leaked an hour ago & the scandal had started while they were still discussing what to do. Regardless of whether Toner realized the implied threat at the time (she may have but been unable to do anything about it), once they had Sutskever, they needed to move as fast as possible.

Even if they had decided to take the risk of delay, the only point would have been to do something that would not alert Altman at all, which would be... what, exactly? What sort of meaningful preparation demanded by the board's critics could have been done under those constraints? (Giving Satya Nadella a heads-up? Altman would know within 10 minutes. Trying to recruit Brockman to stay on? 1 minute.)

So, they decided quickly to remove Altman and gave him roughly the minimum notice required by the bylaws of 48h*, without being able to do much besides talk to their lawyers and write the press release - and here we are.

* you may be tempted to reply 'then Altman couldn't've kicked Toner out that fast because he'd need that 48h notice too'; you are very clever, but note that the next section says they can all waive that required notice at the tap of a button, and if he called an 'emergency meeting' & they still believed in him, then they of course would do so - refusing to do so & insisting on 48h amounts to telling him that the jig is up. Whereas them sending him notice for an 'ordinary' meeting in 48h is completely normal and not suspicious, and he had no clue.

comment by James Payor (JamesPayor) · 2023-11-27T17:36:10.199Z · LW(p) · GW(p)

For one thing, this wouldn't be very kind to the investors.

For another, maybe there were some machinations involving the round like forcing the board to install another member or two, which would allow Sam to push out Helen + others?

I also wonder if the board signed some kind of NDA in connection with this fundraising that is responsible in part for their silence. If so, this was very well schemed...

This is all to say that I think the timing of the fundraising is probably very relevant to why they fired Sam "abruptly".

comment by mishka · 2023-11-27T06:44:13.449Z · LW(p) · GW(p)

It's a mixture of reasons...

But, first of all, a lot of people (not just people in OpenAI) love Sam on a personal level - that's very clear - and they love both what he is doing (with OpenAI, with Helion, with Retro Biosciences) and how he is presenting himself: what he is saying, his demeanor, and so on.

The next key factor was that any outcome besides Sam's return would have damaged the company a lot, at the worst possible moment: the company had a clear lead, was riding a huge wave of success, had the absolute best models, and so on. They all understood how crucial a role Sam had played in all that, and how crucial his role would be in the future too. So they were making the strongest possible play to prevent any outcome besides Sam's return. They expected to win, they were playing to maximize the chances of winning, and they did not expect to lose and then have to decide whether they really wanted to join MSFT (both having to join MSFT and having to stay in a semi-destroyed OpenAI would be bad compared to what they had).

But out of the factors listed: 1+2+3+(4 for many of them, not for all)+5+(6 for some of them)+(7 - not so much being afraid of "imploding", but more afraid of the company becoming a usual miserable corporate place, where one drags oneself to work instead of enjoying one's work).

comment by Mitchell_Porter · 2023-11-27T05:55:59.997Z · LW(p) · GW(p)

Following those demands would've put the entire organization under the control of 1 person with no accountability to anyone. That doesn't seem like what OpenAI employees wanted to be the case

The alternative looked like the outright destruction of the company: "We are unable to work for or with people that lack competence, judgement and care for our mission and employees."