OpenAI: Facts from a Weekend

post by Zvi · 2023-11-20T15:30:06.732Z · LW · GW · 130 comments



Approximately four GPTs and seven years ago, OpenAI’s founders brought forth on this corporate landscape a new entity, conceived in liberty, and dedicated to the proposition that all men might live equally when AGI is created.

Now we are engaged in a great corporate war, testing whether that entity, or any entity so conceived and so dedicated, can long endure.

What matters is not theory but practice. What happens when the chips are down?

So what happened? What prompted it? What will happen now?

To a large extent, even more than usual, we do not know. We should not pretend that we know more than we do.

Rather than attempt to interpret here or barrage with an endless string of reactions and quotes, I will instead do my best to stick to a compilation of the key facts.

(Note: All times stated here are eastern by default.)

Just the Facts, Ma’am

What do we know for sure, or at least close to sure?

Here is OpenAI’s corporate structure, giving the board of the 501(c)(3) the power to hire and fire the CEO. It is explicitly dedicated to its nonprofit mission, over and above any duties to shareholders of secondary entities. Investors were warned that there was zero obligation to ever turn a profit:

A block diagram of OpenAI's unusual structure, provided by OpenAI.

Here are the most noteworthy things we know happened, as best I can make out.

  1. On Friday afternoon at 3:28pm, the OpenAI board fired Sam Altman, appointing CTO Mira Murati as temporary CEO effective immediately. They did so over a Google Meet that did not include then-chairman Greg Brockman.
  2. Greg Brockman, Altman’s old friend and ally, was removed as chairman of the board but the board said he would stay on as President. In response, he quit.
  3. The board told almost no one. Microsoft got one minute of warning.
  4. Mira Murati is the only other person we know was told, which happened on Thursday night.
  5. From the announcement by the board: “Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.”
  6. In a statement, the board of directors said: “OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity. The board remains fully committed to serving this mission. We are grateful for Sam’s many contributions to the founding and growth of OpenAI. At the same time, we believe new leadership is necessary as we move forward. As the leader of the company’s research, product, and safety functions, Mira is exceptionally qualified to step into the role of interim CEO. We have the utmost confidence in her ability to lead OpenAI during this transition period.”
  7. OpenAI’s board of directors at this point: OpenAI chief scientist Ilya Sutskever, independent directors Quora CEO Adam D’Angelo, technology entrepreneur Tasha McCauley, and Georgetown Center for Security and Emerging Technology’s Helen Toner.
  8. Usually a 501(c)(3)’s board must have a majority of people not employed by the company. Instead, OpenAI said that a majority of its board did not have a stake in the company, due to Sam Altman having zero equity.
  9. In response to many calling this a ‘board coup’: “You can call it this way,” Sutskever said about the coup allegation. “And I can understand why you chose this word, but I disagree with this. This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity.” AGI stands for artificial general intelligence, a term that refers to software that can reason the way humans do. When Sutskever was asked whether “these backroom removals are a good way to govern the most important company in the world?” he answered: “I mean, fair, I agree that there is a not ideal element to it. 100%.”
  10. Other than that, the board said nothing in public. I am willing to outright say that, whatever the original justifications, the removal attempt was insufficiently considered and planned and massively botched. Either they had good reasons that justified these actions and needed to share them, or they didn’t.
  11. There had been various clashes between Altman and the board. We don’t know what all of them were. We do know the board felt Altman was moving too quickly, without sufficient concern for safety, with too much focus on building consumer products, while founding additional companies on the side. ChatGPT was a great consumer product, but supercharged AI development counter to OpenAI’s stated non-profit mission.
  12. OpenAI was previously planning an oversubscribed share sale at a valuation of $86 billion that was to close a few weeks later.
  13. Board member Adam D’Angelo said in Forbes in January: “There’s no outcome where this organization is one of the big five technology companies. This is something that’s fundamentally different, and my hope is that we can do a lot more good for the world than just become another corporation that gets that big.”
  14. Sam Altman on October 16: “4 times in the history of OpenAI––the most recent time was in the last couple of weeks––I’ve gotten to be in the room when we push the veil of ignorance back and the frontier of discovery forward. Getting to do that is the professional honor of a lifetime.” There was speculation that events were driven in whole or in part by secret capabilities gains within OpenAI, possibly from a system called Gobi, perhaps even related to the joking claim ‘AI has been achieved internally,’ but we have no concrete evidence of that.
  15. Ilya Sutskever co-leads the Superalignment Taskforce, has very short timelines for when we will get AGI, and is very concerned about AI existential risk.
  16. Sam Altman was involved in starting multiple new major tech companies. He was looking to raise tens of billions from Saudis to start a chip company. He was in other discussions for an AI hardware company.
  17. Sam Altman has stated time and again, including to Congress, that he takes existential risk from AI seriously. He was part of the creation of OpenAI’s corporate structure. He signed the CAIS letter. OpenAI spent six months on safety work before releasing GPT-4. He understands the stakes. One can question OpenAI’s track record on safety, many did including those who left to found Anthropic. But this was not a pure ‘doomer vs. accelerationist’ story.
  18. Sam Altman is very good at power games such as fights for corporate control. Over the years he earned the loyalty of his employees, many of whom moved in lockstep, using strong strategic ambiguity. Hand very well played.
  19. Essentially all of VC, tech, founder, financial Twitter united to condemn the board for firing Altman and for how they did it, as did many employees, calling upon Altman to either return to the company or start a new company and steal all the talent. The prevailing view online was that no matter its corporate structure, it was unacceptable to fire Altman, who had built the company, or to endanger OpenAI’s value by doing so. That it was good and right and necessary for employees, shareholders, partners and others to unite to take back control.
  20. Talk in those circles is that this will completely discredit EA or ‘doomerism’ or any concerns over the safety of AI, forever. Yes, they say this every week, but this time it was several orders of magnitude louder and more credible. New York Times somehow gets this backwards. Whatever else this is, it’s a disaster.
  21. By contrast, those concerned about existential risk, and some others, pointed out that the unique corporate structure of OpenAI was designed for exactly this situation. They also mostly noted that the board clearly handled decisions and communications terribly, but that there was much unknown, and tried to avoid jumping to conclusions.
  22. Thus we are now answering the question: What is the law? Do we have law? Where does the power ultimately lie? Is it the charismatic leader that ultimately matters? Who you hire and your culture? Can a corporate structure help us, or do commercial interests and profit motives dominate in the end?
  23. Great pressure was put upon the board to reinstate Altman. They were given two 5pm Pacific deadlines, on Saturday and Sunday, to resign. Microsoft’s aid, and that of its CEO Satya Nadella, was enlisted in this. We do not know what forms of leverage Microsoft did or did not bring to that table.
  24. Sam Altman tweets ‘I love the openai team so much.’ Many at OpenAI respond with hearts, including Mira Murati.
  25. Invited by employees including Mira Murati and other top executives, Sam Altman visited the OpenAI offices on Sunday. He tweeted ‘First and last time i ever wear one of these’ with a picture of his visitor’s pass.
  26. The board does not appear to have been at the building at the time.
  27. Press reported that the board had agreed to resign in principle, but that snags were hit over who the replacement board would be, and over whether or not they would need to issue a statement absolving Altman of wrongdoing, which could be legally perilous for them given their initial statement.
  28. Bloomberg reported on Sunday at 11:16pm that temporary CEO Mira Murati aimed to rehire Altman and Brockman, while the board sought an alternative CEO.
  29. OpenAI board hires former Twitch CEO Emmett Shear to be the new CEO. He issues his initial statement here. I know a bit about him. If the board needs to hire a new CEO from outside that takes existential risk seriously, he seems to me like a truly excellent pick; I cannot think of a clearly better one. The job set for him may or may not be impossible. From the PPS of Shear’s note: “Before I took the job, I checked on the reasoning behind the change. The board did *not* remove Sam over any specific disagreement on safety, their reasoning was completely different from that. I’m not crazy enough to take this job without board support for commercializing our awesome models.”
  30. New CEO Emmett Shear has made statements in favor of slowing down AI development, although not a stop. His p(doom) is between 5% and 50%. He has said ‘My AI safety discourse is 100% “you are building an alien god that will literally destroy the world when it reaches the critical threshold but be apparently harmless before that.”’ Here is a thread and video link with more, transcript here or a captioned clip. Here he is tweeting a 2×2 faction chart a few days ago.
  31. Microsoft CEO Satya Nadella posts 2:53am Monday morning: We remain committed to our partnership with OpenAI and have confidence in our product roadmap, our ability to continue to innovate with everything we announced at Microsoft Ignite, and in continuing to support our customers and partners. We look forward to getting to know Emmett Shear and OAI’s new leadership team and working with them. And we’re extremely excited to share the news that Sam Altman and Greg Brockman, together with colleagues, will be joining Microsoft to lead a new advanced AI research team. We look forward to moving quickly to provide them with the resources needed for their success.
  32. Sam Altman retweets the above with ‘the mission continues.’ Brockman confirms. Other leadership to include Jakub Pachocki, the GPT-4 lead, Szymon Sidor, and Aleksander Madry.
  33. Nadella continued in reply: I’m super excited to have you join as CEO of this new group, Sam, setting a new pace for innovation. We’ve learned a lot over the years about how to give founders and innovators space to build independent identities and cultures within Microsoft, including GitHub, Mojang Studios, and LinkedIn, and I’m looking forward to having you do the same.
  34. Ilya Sutskever posts at 8:15am Monday morning: “I deeply regret my participation in the board’s actions. I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company.” Sam retweets with three heart emojis. Jan Leike, the other head of the superalignment team, tweeted that he worked through the weekend on the crisis, and that the board should resign.
  35. Microsoft stock was down 1% after hours on Friday, and was back to roughly its previous value on Monday morning and at the open. All priced in. Neither Google nor the S&P 500 made major moves either.
  36. 505 of 700 employees of OpenAI, including Ilya Sutskever, sign a letter telling the board to resign and reinstate Altman and Brockman, threatening to otherwise move to Microsoft to work in the new subsidiary under Altman, which will have a job for every OpenAI employee. Full text of the letter that was posted:

      To the Board of Directors at OpenAI,

      OpenAI is the world’s leading AI company. We, the employees of OpenAI, have developed the best models and pushed the field to new frontiers. Our work on AI safety and governance shapes global norms. The products we built are used by millions of people around the world. Until now, the company we work for and cherish has never been in a stronger position.

      The process through which you terminated Sam Altman and removed Greg Brockman from the board has jeopardized all of this work and undermined our mission and company. Your conduct has made it clear you did not have the competence to oversee OpenAI.

      When we all unexpectedly learned of your decision, the leadership team of OpenAI acted swiftly to stabilize the company. They carefully listened to your concerns and tried to cooperate with you on all grounds. Despite many requests for specific facts for your allegations, you have never provided any written evidence. They also increasingly realized you were not capable of carrying out your duties, and were negotiating in bad faith.

      The leadership team suggested that the most stabilizing path forward – the one that would best serve our mission, company, stakeholders, employees and the public – would be for you to resign and put in place a qualified board that could lead the company forward in stability. Leadership worked with you around the clock to find a mutually agreeable outcome. Yet within two days of your initial decision, you again replaced interim CEO Mira Murati against the best interests of the company. You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”

      Your actions have made it obvious that you are incapable of overseeing OpenAI. We are unable to work for or with people that lack competence, judgement and care for our mission and employees. We, the undersigned, may choose to resign from OpenAI and join the newly announced Microsoft subsidiary run by Sam Altman and Greg Brockman. Microsoft has assured us that there are positions for all OpenAI employees at this new subsidiary should we choose to join. We will take this step imminently, unless all current board members resign, and the board appoints two new lead independent directors, such as Bret Taylor and Will Hurd, and reinstates Sam Altman and Greg Brockman.

      1. Mira Murati
      2. Brad Lightcap
      3. Jason Kwon
      4. Wojciech Zaremba
      5. Alec Radford
      6. Anna Makanju
      7. Bob McGrew
      8. Srinivas Narayanan
      9. Che Chang
      10. Lilian Weng
      11. Mark Chen
      12. Ilya Sutskever
  37. There is talk that OpenAI might completely disintegrate as a result, that ChatGPT might not work a few days from now, and so on.
  38. It is very much not over, and still developing.
  39. There is still a ton we do not know.
  40. This weekend was super stressful for everyone. Most of us, myself included, sincerely wish none of this had happened. Based on what we know, there are no villains in the actual story that matters here. Only people trying their best under highly stressful circumstances with huge stakes and wildly different information and different models of the world and what will lead to good outcomes. In short, to all who were in the arena for this on any side, or trying to process it, rather than spitting bile: ❤.

Later, when we know more, I will have many other things to say, many reactions to quote and react to. For now, everyone please do the best you can to stay sane and help the world get through this as best you can.


Comments sorted by top scores.

comment by gwern · 2023-11-22T03:00:13.481Z · LW(p) · GW(p)

The key news today: Altman had attacked Helen Toner (HN, Zvi [LW · GW]), which explains everything if you recall board structures and voting.

Altman and the board had been unable to appoint new directors because there was an even balance of power, so during the deadlock/low-grade cold war, the board had attrited down to hardly any people. He thought he had Sutskever on his side, so he moved to expel Helen Toner from the board. He would then be able to appoint new directors of his choice. This would have irrevocably tipped the balance of power towards Altman. But he didn't have Sutskever like he thought he did, and they had, briefly, enough votes to fire Altman before he broke Sutskever (as he did yesterday), and they went for the last-minute hail-mary with no warning to anyone.

As always, "one story is good, until another is told"...

comment by gwern · 2023-11-25T15:25:48.361Z · LW(p) · GW(p)

The WSJ has published additional details about the Toner fight, filling in the other half of the story. The NYT merely mentions the OA execs 'discussing' it, but the WSJ reports much more specifically that the exec discussion of Toner was a Slack channel that Sutskever was in, and that approximately 2 days before the firing and 1 day before Mira was informed* (ie. the exact day Ilya would have flipped if they had then fired Altman about as fast as possible to schedule meetings 48h before & vote), he saw them say that the real problem was EA and that they needed to get rid of EA associations.

The specter of effective altruism had loomed over the politics of the board and company in recent months, particularly after the movement’s most famous adherent, Sam Bankman-Fried, the founder of FTX, was found guilty of fraud in a highly public trial.

Some of those fears centered on Toner, who previously worked at Open Philanthropy. In October, she published an academic paper touting the safety practices of OpenAI’s competitor, Anthropic, which didn’t release its own AI tool until ChatGPT’s emergence. “By delaying the release of Claude until another company put out a similarly capable product, Anthropic was showing its willingness to avoid exactly the kind of frantic corner-cutting that the release of ChatGPT appeared to spur,” she and her co-authors wrote in the paper. Altman confronted her, saying she had harmed the company, according to people familiar with the matter. Toner told the board that she wished she had phrased things better in her writing, explaining that she was writing for an academic audience and didn’t expect a wider public one. Some OpenAI executives told her that everything relating to their company makes its way into the press.

OpenAI leadership and employees were growing increasingly concerned about being painted in the press as “a bunch of effective altruists,” as one of them put it. Two days before Altman’s ouster, they were discussing these concerns on a Slack channel, which included Sutskever. One senior executive wrote that the company needed to “uplevel” its “independence”—meaning create more distance between itself and the EA movement.

OpenAI had lost three board members over the past year, most notably Reid Hoffman [who turns out to have been forced out by Altman over 'conflicts of interest', triggering the stalemate], the LinkedIn co-founder and OpenAI investor who had sold his company to Microsoft and been a key backer of the plan to create a for-profit subsidiary. Other departures were Shivon Zilis, an executive at Neuralink, and Will Hurd, a former Texas congressman. The departures left the board tipped toward academics and outsiders less loyal to Altman and his vision.

So this answers the question everyone has been asking: "what did Ilya see?" It wasn't Q*, it was OA execs letting the mask down and revealing Altman's attempt to get Toner fired was motivated by reasons he hadn't been candid about. In line with Ilya's abstract examples of what Altman was doing, Altman was telling different board members (allies like Sutskever vs enemies like Toner) different things about Toner.

This answers the "why": because it yielded a hard, screenshottable-with-receipts case of Altman manipulating the board in a difficult-to-explain-away fashion - why not just tell the board that "the EA brand is now so toxic that you need to find safety replacements without EA ties"? Why deceive and go after them one by one without replacements proposed to assure them about the mission being preserved? (This also illustrates the "why not" tell people about this incident: these were private, confidential discussions among rich powerful executives who would love to sue over disparagement or other grounds.) Previous Altman instances were either done in-person or not documented, but Altman has been so busy this year traveling and fundraising that he has had to do a lot of things via 'remote work', one might say, where conversations must be conducted on-the-digital-record. (Really, Matt Levine will love all this once he catches up.)

This also answers the "why now?" question: because Ilya saw that conversation on 15 November 2023, and not before.

This eliminates any role for Q*: sure, maybe it was an instance of lack of candor or a capabilities advance that put some pressure on the board, but unless something Q*-related also happened that day, there is no longer any explanatory role. (But since we can now date Sutskever's flip to 15 November 2023, we can answer the question of "how could the board be deceived about Q* when Sutskever would be overseeing or intimately familiar with every detail?" Because he was still acting as part of the Altman faction - he might well be telling the safety board members covertly, depending on how disaffected he became earlier on, but he wouldn't be overtly piping up about Q* in meetings or writing memos to the board about it unless Altman wanted him to. A single board member knowing != "the board candidly kept in the loop".)

This doesn't quite answer the 'why so abruptly?' question. If you don't believe that a board should remove a CEO as fast as possible when they believe the CEO has been systematically deceiving them for a year and manipulating the board composition to remove all oversight permanently, then this still doesn't directly explain why they had to move so fast. It does give one strong clue: Altman was trying to wear down Toner, but he had other options - if there was not any public scandal about the paper (which there was not, no one had even noticed it), well, there's nothing easier to manufacture for someone so well connected, as some OA executives informed Toner:

Some OpenAI executives told her that everything relating to their company makes its way into the press.

This presumably sounded like a well-intended bit of advice at the time, but takes on a different set of implications in retrospect. Amazing how journalists just keep hearing things about OA from little birds, isn't it? And they write those articles and post them online or on Twitter so quickly, too, within minutes or hours of the original tip. And Altman/Brockman would, of course, have to call an emergency last-minute board meeting to deal with this sudden crisis which, sadly, proved him right about Toner. If only the board had listened to him earlier! But they can fix it now...

Unfortunately, this piecemeal description by WSJ leaves out the larger conversational context of that Slack channel, which would probably clear up a lot. For example, the wording is consistent with them discussing how to fire just Toner, but it's also consistent with that being just the first step in purging all EA-connected board members & senior executives - did they? If they did, that would be highly alarming and justify a fast move: eg. firing people is a lot easier than unfiring them, and would force a confrontation they might lose and would wind up removing Altman even if they won. (Particularly if we do not give in to hindsight bias and remember that in the first day, everyone, including insiders, thought the firing would stick and so Altman - who had said the board should be able to fire him and personally designed OA that way - would simply go do a rival startup elsewhere.)

Emmett Shear apparently managed to insist on an independent investigation, and I expect that this Slack channel discussion will be a top priority of a genuine investigation. As Slack has regulator & big-business-friendly access controls, backups, and logs, it should be hard for them to scrub all the traces now; any independent investigation will look for deletions by the executives and draw adverse inferences.

(The piecemeal nature of the Toner revelations, where each reporter seems to be a blind man groping one part of the elephant, suggests to me that the NYT & WSJ are working from leaks based on a summary rather than the originals or a board member leaking the whole story to them. Obviously, the flip-flopped Sutskever and the execs in question, who are the only ones who would have access post-firing, are highly unlikely to be leaking private Slack channel discussions, so this information is likely coming from before the firing, so board discussions or documents, where there might be piecemeal references or quotes. But I could be wrong here. Maybe they are deliberately being cryptic to protect their source, or something, and people are just too ignorant to read between the lines. Sort of like Umbridge's speech on a grand scale.)

* note that this timeline is consistent with what Habryka [LW(p) · GW(p)] says about Toner still scheduling low-priority ordinary meetings like normal just a few days before - which implies she had no idea things were about to happen.

comment by gwern · 2023-12-01T17:49:12.995Z · LW(p) · GW(p)

The NYer has confirmed that Altman's attempted coup was the cause of the hasty firing (HN):

...Some members of the OpenAI board had found Altman an unnervingly slippery operator. For example, earlier this fall he’d confronted one member, Helen Toner, a director at the Center for Security and Emerging Technology, at Georgetown University, for co-writing a paper that seemingly criticized OpenAI for “stoking the flames of AI hype.” Toner had defended herself (though she later apologized to the board for not anticipating how the paper might be perceived). Altman began approaching other board members, individually, about replacing her. When these members compared notes about the conversations, some felt that Altman had misrepresented them as supporting Toner’s removal. “He’d play them off against each other by lying about what other people thought”, the person familiar with the board’s discussions told me. “Things like that had been happening for years.” (A person familiar with Altman’s perspective said that he acknowledges having been “ham-fisted in the way he tried to get a board member removed”, but that he hadn’t attempted to manipulate the board.)

...His tactical skills were so feared that, when 4 members of the board---Toner, D’Angelo, Sutskever, and Tasha McCauley---began discussing his removal, they were determined to guarantee that he would be caught by surprise. “It was clear that, as soon as Sam knew, he’d do anything he could to undermine the board”, the person familiar with those discussions said...Two people familiar with the board’s thinking say that the members felt bound to silence by confidentiality constraints...But whenever anyone asked for examples of Altman not being “consistently candid in his communications”, as the board had initially complained, its members kept mum, refusing even to cite Altman’s campaign against Toner.

...The dismissed board members, meanwhile, insist that their actions were wise. “There will be a full and independent investigation, and rather than putting a bunch of Sam’s cronies on the board we ended up with new people who can stand up to him”, the person familiar with the board’s discussions told me. “Sam is very powerful, he’s persuasive, he’s good at getting his way, and now he’s on notice that people are watching.” Toner told me, “The board’s focus throughout was to fulfill our obligation to OpenAI’s mission.” (Altman has told others that he welcomes the investigation---in part to help him understand why this drama occurred, and what he could have done differently to prevent it.)

Some A.I. watchdogs aren’t particularly comfortable with the outcome. Margaret Mitchell, the chief ethics scientist at Hugging Face, an open-source A.I. platform, told me, “The board was literally doing its job when it fired Sam. His return will have a chilling effect. We’re going to see a lot less of people speaking out within their companies, because they’ll think they’ll get fired---and the people at the top will be even more unaccountable.”

Altman, for his part, is ready to discuss other things. “I think we just move on to good governance and good board members and we’ll do this independent review, which I’m super excited about”, he told me. “I just want everybody to move on here and be happy. And we’ll get back to work on the mission”.

comment by gwern · 2023-12-04T16:48:58.592Z · LW(p) · GW(p)

I left a comment over on EAF [EA(p) · GW(p)] which has gone a bit viral, describing the overall picture of the runup to the firing as I see it currently.

The summary is: evaluations of the Board's performance in firing Altman generally ignore that Altman made OpenAI and set up all of the legal structures, staff, and the board itself; the Board could, and should, have assumed good faith of Altman, because if he hadn't been sincere, why would he have proven his sincerity in such extremely costly and unnecessary ways? But, as it happened, OA recently became such a success that Altman changed his mind about the desirability of all that, and now equally sincerely believes that the mission requires him to be in total control; this is why he started to undermine the board. The recency of this change of heart is why it was so hard for them to recognize it, develop common knowledge about it, or coordinate to remove him given his historical track record - but that historical track record was also why, if they were going to act against him at all, it needed to be as fast & final as possible. This led to the situation becoming a powder keg, and when proof of Altman's duplicity in the Toner firing became undeniable to the Board, it exploded.

comment by Scott Alexander (Yvain) · 2023-11-28T06:03:55.522Z · LW(p) · GW(p)

Thanks, this makes more sense than anything else I've seen, but one thing I'm still confused about:

If the factions were Altman-Brockman-Sutskever vs. Toner-McCauley-D'Angelo, then even assuming Sutskever was an Altman loyalist, any vote to remove Toner would have been tied 3-3. I can't find anything about tied votes in the bylaws - do they fail? If so, Toner should be safe. And in fact, Toner knew she (secretly) had Sutskever on her side, and it would have been 4-2. If Altman manufactured some scandal, the board could have just voted to ignore it.

So I still don't understand "why so abruptly?" or why they felt like they had to take such a drastic move when they held all the cards (and were pretty stable even if Ilya flipped).

Other loose ends:

  • Toner got on the board because of OpenPhil's donation. But how did McCauley get on the board?
  • Is D'Angelo a safetyist?
  • Why wouldn't they tell anyone, including Emmett Shear, the full story?
Replies from: gwern, faul_sname, daniel-glasscock, Mitchell_Porter
comment by gwern · 2023-11-28T16:19:40.451Z · LW(p) · GW(p)

I can't find anything about tied votes in the bylaws - do they fail?

I can't either, so my assumption is that the board was frozen ever since Hoffman/Hurd left for that reason.

And there wouldn't've been a vote at all. I've explained it before [LW(p) · GW(p)] but - while we wait for phase 3 of the OA war to go hot - let me take another crack at it, since people seem to keep getting hung up on this and seem to imagine that it's a perfectly normal state of a board to be in a deathmatch between two opposing factions indefinitely, and so confused why any of this happened.

In phase 1, a vote would be pointless, and neither side could nor wanted to force it to a vote. After all, such a vote (regardless of the result) is equivalent to admitting that you have gone from simply "some strategic disagreements among colleagues all sharing the same ultimate goals and negotiating in good faith about important complex matters on which reasonable people of goodwill often differ" to "cutthroat corporate warfare where it's-them-or-us everything-is-a-lie-or-fog-of-war fight-to-the-death there-can-only-be-one". You only do such a vote in the latter situation; in the former, you just keep negotiating until you reach a consensus or find a compromise that'll leave everyone mad.

That's not a switch to make lightly or lazily. You do not flip the switch from 'ally' to 'enemy' casually, and then do nothing and wait for them to find out and make the first move.

Imagine Altman showing up to the board and going "hi guys I'd like to vote right now to fire Toner - oh darn a tie, never mind" - "dude what the fuck?!"

As I read it, the board still hoped Altman was basically aligned (and it was all headstrongness or scurrilous rumors) right up until the end, when Sutskever defected with the internal Slack receipts revealing that the war had already started and Altman's switch had apparently flipped a while ago.

So I still don't understand "why so abruptly?" or why they felt like they had to take such a drastic move when they held all the cards (and were pretty stable even if Ilya flipped).

The ability to manufacture a scandal at any time is a good way to motivate non-procrastination, pace Dr Johnson about the wonderfully concentrating effects of being scheduled to hang. As I pointed out, it gives Altman a great pretext to, at any time, push for Toner's resignation while - if their switch has not been flipped, as he still believed it had not - still looking to the board like the good guy who is definitely not doing a coup and is just, sadly and regretfully, breaking the tie because of the emergency scandal that the careless, disloyal Toner has caused them all, just as he had been warning the board all along. (Won't she resign and help minimize the damage, and free herself to do her academic research without further concern? If not, surely D'Angelo or McCauley appreciate how much damage she's done and can now see that, if she's so selfish & stubborn & can't sacrifice herself for the good of OA, she really needs to be replaced right now...?) End result: Toner resigns or is fired. It took way less than that to push out Hoffman or Zillis, after all. And Altman means so well and cares so much about OA's public image, and is so vital to the company, and has a really good point about how badly Toner screwed up, so at least one of you three has to give it to him. And that's all he needs.

(How well do you think Toner, McCauley, and D'Angelo all know each other? Enough to trust that none of the other two would ever flip on the other, or be susceptible to leverage, or scared, or be convinced?)

Of course, their switch having been flipped at this point, the trio could just vote 'no' 3-3 and tell Altman to go pound sand and adamantly refuse to ever vote to remove Toner... but such an 'unreasonable' response reveals their switch has been flipped. (And having Sutskever vote alongside them 4-2, revealing his new loyalty, would be even more disastrous.)

Why wouldn't they tell anyone, including Emmett Shear, the full story?

How do you know they didn't? Note that what they wouldn't provide Shear was a "written" explanation. (If Shear was so unconvinced, why was an independent investigation the only thing he negotiated for aside from the new board? His tweets since then also don't sound like someone who looked behind the curtain, found nothing, and is profoundly disgusted with & hates the old board for their profoundly incompetent malicious destruction.)

comment by faul_sname · 2023-11-28T18:42:15.953Z · LW(p) · GW(p)

I note that the articles I have seen have said things like

New CEO Emmett Shear has so far been unable to get written documentation of the board’s detailed reasoning for firing Altman, which also hasn’t been shared with the company’s investors, according to people familiar with the situation

(emphasis mine).

If Shear had been unable to get any information about the board's reasoning, I very much doubt that they would have included the word "written".

comment by Daniel (daniel-glasscock) · 2023-11-28T15:28:45.885Z · LW(p) · GW(p)

If the factions were Altman-Brockman-Sutskever vs. Toner-McCauley-D'Angelo, then even assuming Sutskever was an Altman loyalist, any vote to remove Toner would have been tied 3-3.

A 3-3 tie between the CEO founder of the company, the president founder of the company, and the chief scientist of the company vs. three people with completely separate day jobs who never interact with rank-and-file employees is not a stable equilibrium. There are ways to leverage this sort of soft power into breaking the formal deadlock - as we saw last week.

comment by Mitchell_Porter · 2023-11-28T09:30:18.923Z · LW(p) · GW(p)

how did McCauley get on the board?

I have envisaged a scenario in which the US intelligence community has an interagency working group on AI, and Toner and McCauley were its de facto representatives on the OpenAI board, Toner for CIA, McCauley for NSA. Maybe someone who has studied the history of the board can tell me whether that makes sense, in terms of its shifting factions.

Replies from: wassname
comment by wassname · 2023-11-28T12:35:32.348Z · LW(p) · GW(p)

Why would Toner be related to the CIA, and how is McCauley NSA?

If OpenAI is running out of money, and is too dependent on Microsoft, defense/intelligence/government is not the worst place for them to look for money. There are even possible futures where they are partially nationalised in a crisis. Or perhaps they will help with regulatory assessment. This possibility certainly casts the Larry Summers appointment in a different light, given his ties not only to Microsoft but also to the government.

Replies from: David Hornbein, Mitchell_Porter
comment by David Hornbein · 2023-11-28T16:24:38.356Z · LW(p) · GW(p)

Toner's employer, the Center for Security and Emerging Technology (CSET), was founded by Jason Matheny. Matheny was previously the Director of the Intelligence Advanced Research Projects Activity (IARPA), and is currently CEO of the RAND Corporation. CSET is currently led by Dewey Murdick, who previously worked at the Department of Homeland Security and at IARPA. Much of CSET's initial staff was former (or "former") U.S. intelligence analysts, although IIRC they were from military intelligence rather than the CIA specifically. Today many of CSET's researchers list prior experience with U.S. civilian intelligence, military intelligence, or defense intelligence contractors. Given the overlap in staff and mission, U.S. intelligence clearly and explicitly has a lot of influence at CSET, and it's reasonable to suspect a stronger connection than that.

I don't see it for McCauley though.

comment by Mitchell_Porter · 2023-11-28T14:58:38.230Z · LW(p) · GW(p)

Why would Toner be related to the CIA, and how is McCauley NSA?

Toner's university has a long history of association with the CIA. Just google "georgetown cia" and you'll see more than I can summarize. 

As for McCauley, well, I did call this a "scenario"... The movie maker Oliver Stone rivals Chomsky as the voice of an elite political counterculture who are deadly serious in their opposition to what the American deep state gets up to, and whose ranks include former insiders who became leakers, whistleblowers, and ideological opponents of the system. When Stone, already known as a Wikileaks supporter, decided to turn his attention to NSA's celebrity defector Edward Snowden, he ended up casting McCauley's actor boyfriend as the star. 

My hunch, my scenario, is that people associated with the agency, or formerly associated with the agency, put him forward for the role, with part of the reason being that he was already dating one of their own. What we know about her CV - robotics, geographic information systems, speaks Arabic, mentored by Alan Kay - obviously doesn't prove anything, but it's enough to make this scenario work, as a possibility. 

comment by lc · 2023-11-22T04:27:22.035Z · LW(p) · GW(p)

We shall see. I'm just ignoring the mainstream media spins at this point.

Replies from: TrevorWiesinger
comment by trevor (TrevorWiesinger) · 2023-11-22T08:55:42.272Z · LW(p) · GW(p)

For those of us who don't know yet, criticizing the accuracy of mainstream Western news outlets is NOT a strong Bayesian update against someone's epistemics, especially on a site like LessWrong (it doesn't matter how many idiots you might remember ranting about "mainstream media" on other sites; the numbers are completely different here).

There is a well-known dynamic called Gell-Mann Amnesia, where people strongly lose trust in mainstream Western news outlets on a topic they are an expert on, but routinely forget about this loss of trust when they read coverage on a topic that they can't evaluate accuracy on. Western news outlets Goodhart readers by depicting themselves as reliable instead of prioritizing reliability.

If you read major Western news outlets, or are new to them because of people linking to them on LessWrong recently, some basic epistemic prep can be found in Scott Alexander's The Media Very Rarely Lies and, if it's important, the follow-up posts.

comment by Lukas_Gloor · 2023-11-22T03:24:45.109Z · LW(p) · GW(p)

Yeah, that makes sense and does explain most things, except that if I were Helen, I don't currently see why I wouldn't have just explained that part of the story early on?* Even so, I still think this sounds very plausible as part of the story.

*Maybe I'm wrong about how people would react to that sort of justification. Personally, I think the CEO messing with the board constitution to gain de facto ultimate power is clearly very bad and any good board needs to prevent that. I also believe that it's not a reason to remove a board member if they publish a piece of research that's critical of or indirectly harmful for your company. (Caveat that we're only reading a secondhand account of this, and maybe what actually happened would make Altman's reaction seem more understandable.) 

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2023-11-22T03:57:29.369Z · LW(p) · GW(p)

Hm, to add a bit more nuance, I think it's okay at a normal startup for a board to be comprised of people who are likely to almost always side with the CEO, as long as they are independent thinkers who could vote against the CEO if the CEO goes off the rails. So, it's understandable (or even good/necessary) for CEOs to care a lot about having "aligned" people on the board, as long as they don't just add people who never think for themselves.

It gets more complex in OpenAI's situation where there's more potential for tensions between CEO and the board. I mean, there shouldn't necessarily be any tensions, but Altman probably had less of a say over who the original board members were than a normal CEO at a normal startup, and some degree of "norms-compliant maneuvering" to retain board control feels understandable because any good CEO cares a great deal about how to run things. So, it actually gets a bit murky and has to be judged case-by-case. (E.g., I'm sure Altman feels like what happened vindicated him wanting to push Helen off the board.) 

comment by Ben Pace (Benito) · 2023-11-22T05:03:07.290Z · LW(p) · GW(p)

I was confused about the counts, but I guess this makes sense if Helen cannot vote on her own removal. Then it's Altman/Brockman/Sutskever v Tasha/D'Angelo.

Pretty interesting that Sutskever/Tasha/D'Angelo would be willing to fire Altman just to prevent Helen from going. They instead could have negotiated someone to replace her. Wouldn't you just remove Altman from the Board, or maybe remove Brockman? Why would they be willing to decapitate the company in order to retain Helen?

Replies from: gwern, Zvi, chess-teacher
comment by gwern · 2023-11-22T16:47:47.722Z · LW(p) · GW(p)

They instead could have negotiated someone to replace her.

Why do they have to negotiate? They didn't want her gone, he did. Why didn't Altman negotiate a replacement for her, if he was so very upset about the damages she had supposedly done OA...?

"I understand we've struggled to agree on any replacement directors since I kicked Hoffman out, and you'd worry even more about safety remaining a priority if she resigns. I totally get it. So that's not an obstacle, I'll agree to let Toner nominate her own replacement - just so long as she leaves soon."

When you understand why Altman would not negotiate that, you understand why the board could not negotiate that.

I was confused about the counts, but I guess this makes sense if Helen cannot vote on her own removal. Then it's Altman/Brockman/Sutskever v Tasha/D'Angelo.

Recusal or not, Altman didn't want to bring it to something as overt as a vote expelling her. Power wants to conceal itself and deny the coup. The point of the CSET paper pretext is to gain leverage and break the tie any way possible so it doesn't look bad or traceable to Altman; that's why this leaking is bad for Altman, as it shows him at his least fuzzy and PR-friendly. He could, obviously, have leaked the Toner paper at any time to a friendly journalist to manufacture a crisis and force the issue, but that was not - as far as he knew then - yet a tactic he needed to resort to. However, the clock was ticking, and the board surely knew that the issue could be forced at any time of Altman's choosing.

If he had outright naked control of the board, he would scarcely need to remove her nor would they be deadlocked over the new directors; but by organizing a 'consensus' among the OA executives (like Jakub Pachocki?) about Toner committing an unforgivable sin that can be rectified only by stepping down, and by lobbying in the background and calling in favors, and arguing for her recusal, Altman sets the stage for wearing down Toner (note what they did to Ilya Sutskever & how the Altman faction continues to tout Sutskever's flip without mentioning the how) and Toner either resigning voluntarily or, in the worst case, being fired. It doesn't matter which tactic succeeds, a good startup CEO never neglects a trick, and Altman knows them all - it's not for nothing that Paul Graham keeps describing Altman as the most brutally effective corporate fighter he's ever known and describes with awe how eg he manipulated Graham into appointing him president of YC, and eventually Graham had to fire him from YC for reasons already being foreshadowed in 2016. (Note how thoroughly and misogynistically Toner has been vilified on social media by OAer proxies, who, despite leaking to the media like Niagara Falls, somehow never felt this part about Altman organizing her removal to be worth mentioning; every tactic has been employed in the fight so far: they even have law enforcement pals opening an 'investigation'. Needless to say, there's zero chance of it going anywhere, it's just power struggles, similar to the earlier threats to sue the directors personally.) Note: if all this can go down in like 3 days with Altman outside the building and formally fired and much of the staff gone on vacation, imagine what he could have done with 3 months and CEO access/resources/credibility and all the OAers back?

The board was tolerating all this up to the point where firing Toner came up, because it seemed like Sam was just aw-shucks-being-Sam - being an overeager go-getter was the whole point of the CEO, wasn't it? it wasn't like he was trying to launch a coup or anything, surely not - but when he opened fire on Toner for such an incredibly flimsy pretext without, say, proposing to appoint a specific known safety person to replace Toner and maintain the status quo, suddenly, everything changed. (What do you think a treacherous turn looks like IRL? It looks like that.) The world in which Altman is just an overeager commercializer who otherwise agrees with the board and there's just been a bunch of misunderstandings and ordinary conflicts is a different world from the world in which he doesn't care about safety unless it's convenient & regularly deceives and manipulates & has been maneuvering the entire time to irrevocably take over the board to remove his last check. And if you realize you have been living in the second world and that you have the slimmest possible majority, which will crack as soon as Altman realizes he's overplayed his hand and moves overtly to deploy his full arsenal before he forces a vote...

So Altman appears to have made two key mistakes here, because he was so personally overstretched and 2023 has been such a year: first, taking Sutskever for granted. (WSJ: "Altman this weekend was furious with himself for not having ensured the board stayed loyal to him and regretted not spending more time managing its various factions, people familiar with his thinking said.") Then second, making his move with such a flimsy pretext that it snapped the suspension of disbelief of the safety faction. Had he realized Sutskever was a swing vote, he would have worked on him much harder and waited for better opportunities to move against Toner or McCauley. Well, live and learn - he's a smart guy; he won't make the same mistakes twice with the next OA board.

(If you find any of this confusing or surprising, I strongly suggest you read up more on how corporate infighting works. You may not be interested in corporate governance or power politics, but they are now interested in you, and this literature is only going to get more relevant. Some LWer-friendly starting points here would be Bad Blood on narcissist Elizabeth Holmes, Steve Jobs - Altman's biggest hero - and his ouster, the D&D coup, the classic Barbarians at the Gate, the many contemporary instances covered in Matt Levine's newsletter like the Papa Johns coup or most recently, Sculptor, The Gervais Principle, the second half of Breaking Bad, Zvi's many relevant essays on moral mazes/simulacra levels [? · GW]/corporate dynamics from his perspective as a hedge fund guy, and especially the in-depth reporting on how Harvey Weinstein covered everything up for so long which pairs well with Bad Blood.)

Replies from: habryka4
comment by habryka (habryka4) · 2023-11-22T19:14:51.856Z · LW(p) · GW(p)

I... still don't understand why the board didn't say anything? I really feel like a lot of things would have flipped if they had just talked openly to anyone, or taken advice from anyone. Like, I don't think it would have made them global heroes, and a lot of people would have been angry with them, but every time any plausible story about what happened came out, there was IMO a visible shift in public opinion, including on HN, and the board confirming any story or giving any more detail would have been huge. Instead they apparently "cited legal reasons" for not talking, which seems crazy to me.

Replies from: adam_scholl, Linch
comment by Adam Scholl (adam_scholl) · 2023-11-23T22:14:14.002Z · LW(p) · GW(p)

I can imagine it being the case that their ability to reveal this information is their main source of leverage (over e.g. who replaces them on the board).

comment by Linch · 2023-11-22T23:49:07.865Z · LW(p) · GW(p)

My favorite low-probability theory is that he had blackmail material on one of the board members[1], who initially decided after much deliberation to go forward despite the blackmail, and then, when they realized they got outplayed by Sam not using the blackmail material, backpedaled and refused to dox themselves. And the other 2-3 didn't know what to do afterwards, because their entire strategy was predicated on optics management around said blackmail + blackmail material.

  1. ^

    Like something actually really bad.

comment by Zvi · 2023-11-22T18:14:12.545Z · LW(p) · GW(p)

It would be sheer insanity to have a rule that you can't vote on your own removal, I would think, or else a tied board will definitely shrink right away.

Replies from: MakoYass
comment by mako yass (MakoYass) · 2023-11-22T18:28:36.484Z · LW(p) · GW(p)

Wait, simple majority is an insane place to put the threshold for removal in the first place. Majoritarian shrinking is still basically inevitable if the threshold for removal is 50%; it should be higher than that, maybe 62%.

And generally, if 50% of a group thinks A and 50% thinks ¬A, that tells you that the group is not ready to make a decision about A.

comment by Chess3D (chess-teacher) · 2023-11-22T05:34:47.629Z · LW(p) · GW(p)

It is not clear, under a non-profit's board structure, that Helen cannot vote on her own removal.

The vote to remove Sam may have been some trickery around holding a quorum meeting without notifying Sam or Greg.

Replies from: Linch
comment by Linch · 2023-11-22T23:51:05.074Z · LW(p) · GW(p)

I think it was most likely unanimous among the remaining 4, otherwise one of the dissenters would've spoken out by now.

comment by Tristan Wegner (tristan-wegner) · 2023-11-22T07:18:31.931Z · LW(p) · GW(p)

Mr. Altman, the chief executive, recently made a move to push out one of the board’s members because he thought a research paper she had co-written was critical of the company.

Here's the paper:

Some more recent (Nov/Oct 2023) publications from her here:

comment by faul_sname · 2023-11-22T06:01:23.415Z · LW(p) · GW(p)

Manifold says 23% (*edit: link doesn't link directly to that option, it shows up if you search "Helen") on

Sam tried to compromise the independence of the independent board members by sending an email to staff “reprimanding” Helen Toner

as "a significant factor for why Sam Altman was fired". It would make sense as a motivation, though it's a bit odd that the board would say that Sam was "not consistently candid" and not "trying to undermine the governance structure of the organization" in that case.

comment by JenniferRM · 2023-11-20T17:53:14.944Z · LW(p) · GW(p)

When I read this part of the letter, the authors seem to be throwing it in the face of the board like it is a damning accusation, but actually, as I read it, it seems very prudent and speaks well for the board.

You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”

Maybe I'm missing some context, but wouldn't it be better for Open AI as an organized entity to be destroyed than for it to exist right up to the point where all humans are destroyed by an AGI that is neither benevolent nor "aligned with humanity" (if we are somehow so objectively bad as to not deserve care by a benevolent powerful and very smart entity).

This reminds me a lot of a blockchain project I served as an ethicist for, which started out as a "project" interested in advancing a "movement" and ended up with a bunch of people whose only real goal was to cash big paychecks for a long time (at which point I handled my residual duties to the best of my ability and resigned, with lots of people expressing extreme confusion and asking why I was acting "foolishly" or "incompetently" (except for a tiny number who got angry at me for not causing a BIGGER explosion than just leaving to let a normally venal company be normally venal without me)).

In my case, I had very little formal power. I bitterly regretted not having insisted, "as the ethicist," on having a right to be informed of any board meeting >=36 hours in advance, to attend every one of them, and to speak at them.

(Maybe it is a continuing flaw of "not thinking I need POWER", to say that I retrospectively should have had a vote on the Board? But I still don't actually think I needed a vote. Most of my job was to keep saying things like "lying is bad" or "stealing is wrong" or "fairness is hard to calculate but bad to violate if clear violations of it are occurring" or "we shouldn't proactively serve states that run gulags, we should prepare defenses, such that they respect us enough to explicitly request compliance first". You know, the obvious stuff, that people only flinch from endorsing because a small part of each one of us, as a human, is a very narrowly selfish coward by default, and it is normal for us, as humans, to need reminders of context sometimes when we get so much tunnel vision during dramatic moments that we might commit regrettable evils through mere negligence.)

No one ever said that it is narrowly selfishly fun or profitable to be in Gethsemane and say "yes to experiencing pain if the other side who I care about doesn't also press the 'cooperate' button".

But to have "you said that ending up on the cross was consistent with being a moral leader of a moral organization!" flung in one's face as an accusation suggests to me that the people making the accusation don't actually understand that sometimes objective de re altruism hurts.

Maturely good people sometimes act altruistically, at personal cost, anyway because they care about strangers.

Clearly not everyone is "maturely good". 

That's why we don't select political leaders at random, if we are wise.

Now you might argue that AI is no big deal, and you might say that getting it wrong could never "kill literally everyone".

Also, it is easy to imagine a lot of normally venal corporate people lying and saying "AI might kill literally everyone" when they don't believe it, to people who do claim to believe it, if a huge paycheck will be given to them for their moderately skilled work contingent on saying that...

...but if the stakes are really that big, then NOT acting like someone who really DID believe that "AI might kill literally everyone" is much, much worse than driving past a lady on the side of the road looking helplessly at her broken car. That's just one lady! The stakes there are much smaller!

The big things are MORE important to get right. Not LESS important.

To get the "win condition for everyone" would justify taking larger risks and costs than just parking by the side of the road and being late for where-ever you planned on going when you set out on the journey.

Maybe a person could say: "I don't believe that AI could kill literally everyone, I just think that creating it is an opportunity to make a lot of money and secure power, and use that to survive the near-term liquidation of the proletariat when rambunctious human wage slaves are replaced by properly mind-controlled AI slaves."

Or you could say something like "I don't believe that AI is even that big a deal. This is just hype, and the stock valuations are gonna be really big but then they'll crash and I urgently want to sell into the hype to greater fools because I like money and I don't mind selling stuff I don't believe in to other people."

Whatever. Saying whatever you actually think is one of three legs in the best definition of integrity that I currently know of.

(The full three criteria: non-impulsiveness, fairness, honesty.)

OpenAI was founded as a non-profit in 2015 with the core mission of ensuring that artificial general intelligence benefits all of humanity... Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.

(Sauce. Italics and bold not in original.)

Compare this again:

You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”

The board could just be right about this. 

It is an object level question about a fuzzy future conditional event, that ramifies through a lot of choices that a lot of people will make in a lot of different institutional contexts.

If Open AI's continued existence ensures that artificial intelligence benefits all of humanity then its continued existence would be consistent with the mission. 

If not, not.

What is the real fact of the matter here?

It's hard to say, because it is about the future, but one way to figure out what a group will pursue is to look at what they are proud of, and what they SAY they will pursue.

Look at how the people fleeing into Microsoft argue in defense of themselves:

We, the employees of OpenAI, have developed the best models and pushed the field to new frontiers. Our work on AI safety and governance shapes global norms. The products we built are used by millions of people around the world. Until now, the company we work for and cherish has never been in a stronger position.

This is all MERE IMPACT. This is just the Kool-Aid that startup founders want all their employees to pretend to believe is the most important thing, because they want employees who work hard for low pay.

This is all just "stuff you'd put in your promo packet to get promoted at a FAANG in the mid teens when they were hiring like crazy, even if it was only 80% true, that 'everyone around here' agrees with (because everyone on your team is ALSO going for promo)".

Their statement didn't mention "humanity" even once.

Their statement didn't mention "ensuring" that "benefits" go to "all of humanity" even once.

Microsoft's management has made no similar promise about benefiting humanity in the formal text of its founding, and gives every indication of having no particular scruples or principles or goals larger than a stock price and maybe some executive bonuses or stock buy-back deals.

As is valid in a capitalist republic! That kind of culture, and that kind of behavior, does have a place in it for private companies that manufacture and sell private goods to individuals who can freely choose to buy those products.

You don't have to be very ethical to make and sell hammers or bananas or toys for children.

However, it is baked into the structure of Microsoft's legal contracts and culture that it will never purposefully make a public good that it knowingly loses a lot of money on SIMPLY because "the benefits to everyone else (even if Microsoft can't charge for them) are much much larger".

Open AI has a clear telos and Microsoft has a clear telos as well. 

I admire the former more than the latter, especially for something as important as possibly creating a Demon Lord, or a Digital Leviathan, or "a replacement for nearly all human labor performed via arm's length transactional relations", or whatever you want to call it.

There are few situations in normal everyday life where the plausible impacts are not just economic, and not just political, not EVEN "just" evolutionary!

This is one of them. Most complex structures in the solar system right now were created, ultimately, by evolution. After AGI, most complex structures will probably be created by algorithms.

Evolution itself is potentially being overturned.

Software is eating the world. 

"People" are part of the world. "Things you care about" are part of the world. 

There is no special carveout for cute babies, or picnics, or choirs, or waltzing with friends, or 20th wedding anniversaries, or taking ecstasy at a rave, or ANYTHING HUMAN.

All of those things are in the world, and unless something interrupts the natural course of normal events: software will eventually eat them too.

I don't see Microsoft, or the people fleeing to Microsoft, taking that seriously, using serious language that endorses coherent moral ideals in ways that can be directly related to the structural features of institutional arrangements that cause good outcomes for humanity on purpose.

Maybe there is a deeper wisdom there?

Maybe they are secretly saying petty things, even as they secretly plan to do something really importantly good for all of humanity?

Most humans are quite venal and foolish, and highly skilled impression management is a skill that politicians and leaders would be silly to ignore.

But it seems reasonable to me to take both sides at their word.

One side talks and walks like a group that is self-sacrificingly willing to do what it takes to ensure that artificial general intelligence benefits all of humanity and the other side is just straightforwardly not.

Replies from: dr_s, ryan_b, xpym
comment by dr_s · 2023-11-20T17:57:09.733Z · LW(p) · GW(p)

Maybe I'm missing some context, but wouldn't it be better for OpenAI as an organized entity to be destroyed than for it to exist right up to the point where all humans are destroyed by an AGI that is neither benevolent nor "aligned with humanity" (if we are somehow so objectively bad as to not deserve care by a benevolent, powerful, and very smart entity)?

The problem, I suspect, is that people just can't get out of the typical "FOR THE SHAREHOLDERS" mindset. A company that is literally willing to commit suicide rather than be hijacked for purposes antithetical to its mission, like a cell dying by apoptosis rather than going cancerous, can be a very good thing; if only there were more of this. You can't beat Moloch if you're not willing to precommit to this sort of action. And let's face it, no one involved here is facing homelessness and soup kitchens even if OpenAI crashes tomorrow. They'll be a little worse off for a while, their careers will take a hit, and then they'll pick themselves up. If this was about the safety of humanity it would be a no-brainer that you should be ready to sacrifice that much.

Replies from: michael-thiessen, gerald-monroe
comment by Michael Thiessen (michael-thiessen) · 2023-11-21T16:09:40.295Z · LW(p) · GW(p)

Sam's latest tweet suggests he can't get out of the "FOR THE SHAREHOLDERS" mindset.

"satya and my top priority remains to ensure openai continues to thrive

we are committed to fully providing continuity of operations to our partners and customers"

This does sound antithetical to the charter and might be grounds to replace Sam as CEO.

Replies from: dr_s
comment by dr_s · 2023-11-21T16:28:18.576Z · LW(p) · GW(p)

I feel like, not unlike the situation with SBF and FTX, the delusion that OpenAI could possibly avoid this trap maps on the same cognitive weak spot among EA/rationalists of "just let me slip on the Ring of Power this once bro, I swear it's just for a little while bro, I'll take it off before Moloch turns me into his Nazgul, trust me bro, just this once".

This is honestly entirely unsurprising. Rivers flow downhill, and companies that are part of a capitalist economy, producing stuff with tremendous potential economic value, converge on making a profit.

Replies from: Sune
comment by Sune · 2023-11-21T17:16:45.661Z · LW(p) · GW(p)

The corporate structure of OpenAI was set up as an answer to concerns (about AGI and control over AGIs) which were raised by rationalists. But I don't think rationalists believed that this structure was a sufficient solution to the problem, any more than non-rationalists believed it. The rationalists that I have been speaking to were generally mostly sceptical about OpenAI.

Replies from: dr_s
comment by dr_s · 2023-11-21T17:21:18.650Z · LW(p) · GW(p)

Oh, I mean, sure, scepticism about OpenAI was already widespread, no question. But in general it seems to me like there have been too many attempts to be too clever by half from people at least adjacent in ways of thinking to rationalism/EA (like Elon) that go "I want to avoid X-risk but also develop aligned friendly AGI for myself", and the result is almost invariably that it just advances capabilities more than safety. I just think sometimes there's a tendency to underestimate the pull of incentives and how you often can't just have your cake and eat it. I remain convinced that if one wants to avoid X-risk from AGI the safest road is probably to just strongly advocate for not building AGI, and putting it in the same bin as "human cloning" as a fundamentally unethical technology. It's not a great shot, but it's probably the best one at stopping it. Being wishy-washy doesn't pay off.

Replies from: Seth Herd
comment by Seth Herd · 2023-11-21T17:29:13.457Z · LW(p) · GW(p)

I think you're in the majority in this opinion around here. I am noticing I'm confused about the lack of enthusiasm for developing alignment methods for the types of AGI that are being developed. Trying to get people to stop building it would be ideal, but I don't see a path to it. The actual difficulty of alignment seems mostly unknown, so potentially vastly more tractable. Yet such efforts make up a tiny part of x-risk discussion.

This isn't an argument for building AGI, but for aligning the specific AGI others build.

Replies from: dr_s
comment by dr_s · 2023-11-21T18:14:34.553Z · LW(p) · GW(p)

Personally I am fascinated by the problems of interpretability and I would consider "no more GPTs for you guys until you figure out at least the main functioning principles of GPT-3" a healthy exercise in actual ML science to pursue, but I also have to acknowledge that such an understanding would make distillation far more powerful and thus also lead to a corresponding advance in capabilities. I am honestly stumped at what "I want to do something" looks like that doesn't somehow end up backfiring. It may be that the problem is just thinking this way in the first place, and this really is just a, shudder, political problem, and tech/science can only make it worse.

Replies from: Seth Herd
comment by Seth Herd · 2023-11-21T23:54:34.712Z · LW(p) · GW(p)

That all makes sense.

Except that this is exactly what I'm puzzled by: a focus on solutions that probably won't work ("no more GPTs for you guys" is approximately impossible), instead of solutions that still might - working on alignment, and trading off advances in alignment for advances in AGI.

It's like the field has largely given up on alignment, and we're just trying to survive a few more months by making sure to not contribute to AGI at all.

But that makes no sense. MIRI gave up on aligning a certain type of AGI for good reasons. But nobody has seriously analyzed prospects for aligning the types of AGI we're likely to get: language model agents or loosely brainlike collections of deep nets. When I and a few others write about plans for aligning those types of AGI, we're largely ignored. The only substantive comments are "well there are still ways those plans could fail", but not arguments that they're actually likely to fail. Meanwhile, everyone is saying we have no viable plans for alignment, and acting like that means it's impossible. I'm just baffled by what's going on in the collective unspoken beliefs of this field.

Replies from: dr_s
comment by dr_s · 2023-11-22T08:36:51.602Z · LW(p) · GW(p)

I'll be real, I don't know what everyone else thinks, but personally I can say I wouldn't feel comfortable contributing to anything AGI-related at this point because I have very low trust that even aligned AGI would result in a net good for humanity, with this kind of governance. I can imagine maybe amidst all the bargains with the Devil there is one that will genuinely pay off and is the lesser evil, but I can't tell which one. I think the wise thing to do would be just not to build AGI at all, but that's not a realistically open path. So yeah, my current position is that literally any action I could take advances the kind of future I would want by an amount that is at best below the error margin of my guesses, and at worst negative. It's not a super nice spot to be in but it's where I'm at and I can't really lie to myself about it.

comment by Gerald Monroe (gerald-monroe) · 2023-11-20T20:13:20.606Z · LW(p) · GW(p)

In the cancer case, every cell in the human body begins aligned with the body. Anthropically this has to function until breeding age, plus enough offspring to beat losses.

And yes, if faulty cells self-destruct instead of continuing, this is good; there are cancer treatments that try to gene-edit in clean copies of specific genes (p53, as I recall) that mediate this (works in rats...).

However, the corporate world / international competition world has many more actors, and they are adversarial. OAI self-destructing leaves the world's best AI researchers unemployed and removes them from competing in the next round of model improvements; whoever makes a GPT-5 at a competitor will have the best model outright.

Coordination is hard. Consider the consequences if an entire town decided to stop consuming fossil fuels. They pay the extra costs and rebuild the town to be less car dependent.

However the consequence is this lowers the market price of fossil fuels. So others use more. (Demand elasticity makes the effect still slightly positive)

Replies from: dr_s
comment by dr_s · 2023-11-20T23:30:34.517Z · LW(p) · GW(p)

I mean, yes, a company self-destructing doesn't stop much if their knowledge isn't also actively deleted - and even then, it's just a setback of a few months. But also, by going "oh well we need to work inside the system to fix it somehow" at some point all you get is just another company racing with all others (and in this case, effectively being a pace setter). However you put it, OpenAI is more responsible than any other company for how close we may be to AGI right now, and despite their stated mission, I suspect they did not advance safety nearly as much as capability. So in the end, from the X-risk viewpoint, they mostly made things worse.

comment by ryan_b · 2023-11-21T16:03:04.944Z · LW(p) · GW(p)

I agree with all of this in principle, but I am hung up on the fact that it is so opaque. Up until now the board has determinedly remained opaque.

If corporate seppuku is on the table, why not be transparent? How does being opaque serve the mission?

Replies from: JenniferRM
comment by JenniferRM · 2023-11-21T23:56:58.058Z · LW(p) · GW(p)

I wrote a LOT of words in response to this, talking about personal professional experiences that are not something I coherently understand myself as having a duty (or timeless permission?) to share, so I have reduced my response to something shorter and more general. (Applying my own logic to my own words, in realtime!)

There are many cases (arguably stupid or counter-productive cases, but cases) that come up more and more when deals and laws and contracts become highly entangling.

It's illegal to "simply" ask people for money in exchange for giving them a transferable right to future dividends on a money-making project, sealed with a handshake. The SEC commands silence sometimes, and will put you in a cage if you don't comply.

You get elected to local office and suddenly the Brown Act (which I'd repeal as part of my reboot of the Californian Constitution had I the power) forbids you from talking with your co-workers (other elected officials) about work (the city government) at a party. 

A Confessor [LW · GW] is forbidden certain kinds of information leaks.

Fixing <all of this (gesturing at nearly all of human civilization)> isn't something that we have the time or power to do before we'd need to USE the "fixed world" to handle AGI sanely or reasonably, because AGI is coming so fast, and the world is so broken.

That there is so much silence associated with unsavory actors is a valid and concerning contrast, but if you look into it, you'll probably find that every single OpenAI employee has an NDA already.

OpenAI's "business arm", locking its employees down with NDAs, is already defecting on the "let all the info come out" game.

If the legal system will continue to often be a pay-to-win game and full of fucked up compromises with evil, then silences will probably continue to be common, both (1) among the machiavellians and (2) among the cowards, and (3) among the people who were willing to promise reasonable silences as part of hanging around nearby doing harms reduction. (This last is what I was doing as a "professional ethicist".)

And IT IS REALLY SCARY to try to stand up for what you think you know is true about what you think is right when lots of people (who have a profit motive for believing otherwise) loudly insist otherwise.

People used to talk a lot about how someone would "go mad" and when I was younger it always made me slightly confused, why "crazy" and "angry" were conflated. Now it makes a lot of sense to me.

I've seen a lot of selfish people call good people "stupid", and once the non-selfish person realizes just how venal and selfish and blind the person calling them stupid is, it isn't hard to call that person "evil", and then you get a classic "evil vs stupid" (or "selfish vs altruistic") fight. As they fight they become more "mindblind" to each other? Or something? (I'm working on an essay on this, but it might not be ready for a week or a month or a decade. It's a really knotty subject on several levels.)

Good people know they are sometimes fallible, and often use peer validation to check their observations, or check their proofs, or check their emotional calibration, and when those "validation services" get withdrawn for (hidden?) venal reasons, it can be emotionally and mentally disorienting.

(And of course in issues like this one a lot of people are automatically going to have a profit motive when a decision arises about whether to build a public good or not. By definition: the maker of a public good can't easily charge money for such a thing. (If they COULD charge money for it then it'd be a private good or maybe a club good.))

The Board of OpenAI might be personally sued by a bunch of Machiavellian billionaires, or their allies, and if that happens, everything the board was recorded as saying will be gone over with a fine-toothed comb, looking for tiny little errors.

Every potential quibble is potentially more lawyer time. Every bit of lawyer time is a cost that functions as a financial reason to settle instead of keep fighting for what is right. Making your attack surface larger is much easier than making an existing attack surface smaller.

If the board doesn't already have insurance for that extenuating circumstance, then I commit hereby to donate at least $100 to their legal defense fund, if they start one, which I hope they never need to do.

And in the meantime, I don't think they owe me much of anything, except for doing their damned best to ensure that artificial general intelligence benefits all humanity.

comment by xpym · 2023-11-21T08:36:25.137Z · LW(p) · GW(p)

Maybe I’m missing some context, but wouldn’t it be better for Open AI as an organized entity to be destroyed than for it to exist right up to the point where all humans are destroyed by an AGI that is neither benevolent nor “aligned with humanity” (if we are somehow so objectively bad as to not deserve care by a benevolent powerful and very smart entity).

This seems to presuppose that there is a strong causal effect from OpenAI's destruction to avoiding creation of an omnicidal AGI, which doesn't seem likely? The real question is whether OpenAI was, on the margin, a worse front-runner than its closest competitors, which is plausible, but then the board should have made that case loudly and clearly, because, entirely predictably, their silence has just made the situation worse.

comment by Amalthea (nikolas-kuhn) · 2023-11-20T17:45:23.231Z · LW(p) · GW(p)

Whatever else, there were likely mistakes from the side of the board, but man does the personality cult around Altman make me uncomfortable. 

Replies from: daniel-glasscock, dr_s
comment by Daniel (daniel-glasscock) · 2023-11-20T19:41:05.953Z · LW(p) · GW(p)

It reminds me of the loyalty successful generals like Caesar and Napoleon commanded from their men. The engineers building GPT-X weren't loyal to The Charter, and they certainly weren't loyal to the board. They were loyal to the projects they were building and to Sam, because he was the one providing them resources to build and pumping the value of their equity-based compensation.

Replies from: Sune, dr_s, tristan-wegner
comment by Sune · 2023-11-21T11:41:24.675Z · LW(p) · GW(p)

They were not loyal to the board, but it is not clear if they were loyal to The Charter since they were not given any concrete evidence of a conflict between Sam and the Charter.

comment by dr_s · 2023-11-21T09:59:27.781Z · LW(p) · GW(p)

Feels like an apt comparison, given that what we're finding out now is what happens when some kind of Senate tries to cut the upstart general down to size and the latter basically goes "you and what army?".

comment by Tristan Wegner (tristan-wegner) · 2023-11-21T08:55:42.755Z · LW(p) · GW(p)

From your last link:

Another key difference is that the growth is currently capped at 10x. Similar to their overall company structure, the PPUs are capped at a growth of 10 times the original value.

As the company was doing well recently, with ongoing talks about an investment implying a market cap of $90B, this would mean many employees might have hit their 10x already: the highest payout they would ever get. So they have every incentive to cash out now (or as soon as the 2-year lock will allow), and zero financial incentive to care about long-term value.

This seems worse at aligning employee interest with the long-term interest of the company even compared to regular (unlimited-growth) equity, where each employee might hope that the valuation could get even higher.


It’s important to reiterate that the PPUs inherently are not redeemable for value if OpenAI does not turn a profit

So it seems the growth cap actually encourages short term thinking, which seems against their long term mission.

Do you also understand these incentives this way? 

comment by dr_s · 2023-11-20T17:49:48.945Z · LW(p) · GW(p)

It's not even a personality cult. Until the other day Altman was a despicable doomer and decel, advocating for regulations that would clip humanity's wings. As soon as he was fired and the "what did Ilya see" narrative emerged (I don't even think it was all serious at the beginning), the immediate response from the e/acc crowd was to elevate him to the status of martyr in minutes and recast the Board as some kind of reactionary force for evil that wants humanity to live in misery forever rather than bask in the Glorious AI Future.

Honestly even without the doom stuff I'd be extremely worried about this being the cultural and memetic environment in which AI gets developed anyway. This stuff is pure poison.

Replies from: nikolas-kuhn
comment by Amalthea (nikolas-kuhn) · 2023-11-20T17:59:58.011Z · LW(p) · GW(p)

It doesn't seem to me like e/acc has contributed a whole lot to this beyond commentary. The rallying of OpenAI employees behind Altman is quite plausibly his general popularity + ability to gain control of a situation. 

At least that seems likely if Paul Graham's assessment of him as a master persuader is to be believed (and why wouldn't it?). 

Replies from: dr_s
comment by dr_s · 2023-11-20T18:03:11.646Z · LW(p) · GW(p)

I mean, the employees could be motivated by a more straightforward sense that the firing is arbitrary and threatens the functioning of OpenAI and thus their immediate livelihood. I'd be curious to understand how much of this is calculated self-interest and how much indeed personal loyalty to Sam Altman, which would make this incident very much a crossing of the Rubicon.

Replies from: michael-thiessen
comment by Michael Thiessen (michael-thiessen) · 2023-11-20T18:52:06.808Z · LW(p) · GW(p)

I do find it quite surprising that so many who work at OpenAI are so eager to follow Altman to Microsoft - I guess I assumed the folks at OpenAI valued not working for big tech (that's more(?) likely to disregard safety) more than it appears they actually did.

Replies from: chess-teacher
comment by Chess3D (chess-teacher) · 2023-11-20T23:35:07.652Z · LW(p) · GW(p)

My guess is they feel that Sam and Greg (and maybe even Ilya) will provide enough of a safety net (compared to a randomized Board overlord), but there's also a large dose of self-interest once the exodus gains steam and you know many of your coworkers will leave.

comment by orthonormal · 2023-11-20T17:22:18.022Z · LW(p) · GW(p)

The most likely explanation I can think of, for what look like about-faces by Ilya and Jan this morning, is realizing that the worst plausible outcome is exactly what we're seeing: Sam running a new OpenAI at Microsoft, free of that pesky charter. Any amount of backpedaling, and even resigning in favor of a less safety-conscious board, is preferable to that.

They came at the king and missed.

Replies from: Lukas_Gloor, Lblack, tachikoma
comment by Lukas_Gloor · 2023-11-20T18:13:47.149Z · LW(p) · GW(p)

Yeah, but if this is the case, I'd have liked to see a bit more balance than just retweeting the tribal-affiliation slogan ("OpenAI is nothing without its people") and saying that the board should resign (or, in Ilya's case, implying that he regrets and denounces everything he initially stood for together with the board). Like, I think it's a defensible take to think that the board should resign after how things went down, but the board was probably pointing to some real concerns that won't get addressed at all if the pendulum now swings way too much in the opposite direction. So I would have at least hoped for something like "the board should resign, but here are some things that I think they had a point about, which I'd like to see not get swept under the carpet after the counter-revolution."

Replies from: orthonormal
comment by orthonormal · 2023-11-20T18:27:56.778Z · LW(p) · GW(p)

It's too late for a conditional surrender now that Microsoft is a credible threat to get 100% of OpenAI's capabilities team; Ilya and Jan are communicating unconditional surrender because the alternative is even worse.

Replies from: Seth Herd
comment by Seth Herd · 2023-11-20T23:04:10.026Z · LW(p) · GW(p)

I'm not sure this is an unconditional surrender. They're not talking about changing the charter, just appointing a new board. If the new board isn't much less safety conscious, then a good bit of the organization's original purpose and safeguards are preserved. So the terms of surrender would be negotiated in picking the new board.

Replies from: Linch
comment by Linch · 2023-11-21T00:29:40.495Z · LW(p) · GW(p)

AFAICT the only formal power the board has is in firing the CEO, so if we get a situation where whenever the board wants to fire Sam, Sam comes back and fires the board instead, well, it's not exactly an inspiring story for OpenAI's governance structure.

Replies from: TLK
comment by TLK · 2023-11-21T17:30:01.008Z · LW(p) · GW(p)

This is a very good point. It is strange, though, that the Board was able to fire Sam without the Chair agreeing to it. It seems like something as big as firing the CEO should have required at least a conversation with the Chair, if not the affirmative vote of the Chair. The way this was handled was a big mistake. There need to be new rules in place to prevent big mistakes like this.

comment by Lucius Bushnaq (Lblack) · 2023-11-20T20:06:00.099Z · LW(p) · GW(p)

If actually enforcing the charter leads to them being immediately disempowered, it's not worth anything in the first place. We were already in the "worst case scenario". Better to be honest about it. Then at least, the rest of the organisation doesn't get to keep pointing to the charter and the board as approving their actions when they don't.

The charter it is the board's duty to enforce doesn't say anything about how the rest of the document doesn't count if investors and employees make dire enough threats, I'm pretty sure.

Replies from: faul_sname, orthonormal, nikolas-kuhn
comment by faul_sname · 2023-11-20T23:55:20.434Z · LW(p) · GW(p)

If actually enforcing the charter leads to them being immediately disempowered, it's not worth anything in the first place.

If you pushed for fire sprinklers to be installed, then yell "FIRE", and turn on the fire sprinklers, causing a bunch of water damage, and then refuse to tell anyone where you thought the fire was and why you thought that, I don't think you should be too surprised when people contemplate taking away your ability to trigger the fire sprinklers.

Keep in mind that the announcement was not something like

After careful consideration and strategic review, the Board of Directors has decided to initiate a leadership transition. Sam Altman will be stepping down from his/her role, effective November 17, 2023. This decision is a result of mutual agreement and understanding that the company's long-term strategy and core values require a different kind of leadership moving forward.

Instead, the board announced

Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.

That is corporate speak for "Sam Altman was a lying liar about something big enough to put the entire project at risk, and as such we need to cut ties with him immediately and also warn everyone who might work with him that he was a lying liar." If you make accusations like that, and don't back them up, I don't think you get to be outraged that people start doubting your judgement.

Replies from: aphyer
comment by aphyer · 2023-11-21T14:34:29.131Z · LW(p) · GW(p)

If you pushed for fire sprinklers to be installed, then yell "FIRE", and turn on the fire sprinklers, causing a bunch of water damage, and then refuse to tell anyone where you thought the fire was and why you thought that, I don't think you should be too surprised when people contemplate taking away your ability to trigger the fire sprinklers.


The situation is actually even less surprising than this, because the thing people actually initially contemplated doing in response to the board's actions was not even 'taking away your ability to trigger the fire sprinklers' but 'going off and living in a new building somewhere else that you can't flood for lulz'.

As I'm understanding the situation OpenAI's board had and retained the legal right to stay in charge of OpenAI as all its employees left to go to Microsoft.  If they decide they would rather negotiate from their starting point of 'being in charge of an empty building' to 'making concessions' this doesn't mean that the charter didn't mean anything!  It means that the charter gave them a bunch of power which they wasted.

comment by orthonormal · 2023-11-20T20:13:55.024Z · LW(p) · GW(p)

If they thought this would be the outcome of firing Sam, they would not have done so.

The risk they took was calculated, but man, are they bad at politics.

Replies from: dr_s, chess-teacher
comment by dr_s · 2023-11-21T07:38:58.856Z · LW(p) · GW(p)

I keep being confused by them not revealing their reasons. Whatever they are, there's no way that saying them out loud wouldn't give some ammo to those defending them, unless somehow between Friday and now they swung from "omg this is so serious we need to fire Altman NOW" to "oops looks like it was a nothingburger, we'll look stupid if we say it out loud". Do they think it's a literal infohazard or something? Is it such a serious accusation that it would involve the police to state it out loud?

Replies from: faul_sname
comment by faul_sname · 2023-11-21T19:35:38.178Z · LW(p) · GW(p)

At this point I'm beginning to wonder if a gag order is involved.

comment by Chess3D (chess-teacher) · 2023-11-20T23:31:45.320Z · LW(p) · GW(p)

Interesting! Bad at politics is a good way to put it. So you think this was purely a political power move to remove Sam, and they were so bad at projecting the outcomes that all of them thought Greg would stay on board as President and employees would largely accept the change.

Replies from: orthonormal
comment by orthonormal · 2023-11-21T20:02:39.319Z · LW(p) · GW(p)

No, I don't think the board's motives were power politics; I'm saying that they failed to account for the kind of political power moves that Sam would make in response.

comment by Amalthea (nikolas-kuhn) · 2023-11-20T20:11:31.581Z · LW(p) · GW(p)

It's hard to know for sure, but I think this is a reasonable and potentially helpful perspective. Some of the perceived repercussions on the state of AI safety might be "the band-aid being ripped off". 

comment by Tachikoma (tachikoma) · 2023-11-20T18:14:23.091Z · LW(p) · GW(p)

The important question is: why now? And why with so little evidence to back up such an extreme action?

comment by Alex A (alex-a-1) · 2023-11-21T14:02:46.815Z · LW(p) · GW(p)

RE: the board’s vague language in their initial statement

Smart people who have an objective of accumulating and keeping control, and who are skilled at persuasion and manipulation, will often leave little trace of wrongdoing. They're optimizing for alibis and plausible deniability. Being around them and trying to collaborate with them is frustrating. If you're self-aware enough, you can recognize that your contributions are being twisted, that your voice is going unheard, and that critical information is being withheld from you, but it's not easy. And when you try to bring up concerns, they are very good at convincing you that those concerns are actually your fault.

I can see a world where the board was able to recognize that Sam's behaviors did not align with OpenAI's mission, while not having a smoking-gun example to pin on him. Being unskilled politicians with only a single lever to push (who were probably morally opposed to other political tactics), the board did the only thing they could think of, after trying to get Sam to listen to their concerns. Did it play out well? No.

It’s clear that EA has a problem with placing people who are immature at politics in key political positions. I also believe there may be a misalignment in objectives between the politically skilled members of EA and the rest of us—politically skilled members may be withholding political advice/training from others out of fear that they will be outmaneuvered by those they advise. This ends up working against the movement as a whole.

Replies from: lc, faul_sname
comment by lc · 2023-11-21T20:00:28.749Z · LW(p) · GW(p)

Feels sometimes like all of the good EAs are bad at politics and everybody on our side that's good at politics is not a good EA.

Replies from: Thane Ruthenis
comment by Thane Ruthenis · 2023-11-21T21:02:35.848Z · LW(p) · GW(p)

Yeah, I'm getting that vibe. EAs keep going "hell yeah, we got an actual competent mafioso on our side, but they're actually on our side!", and then it turns out the mafioso wasn't on their side, any more than any other mafioso in history had ever been on anyone's side.

comment by faul_sname · 2023-11-21T19:38:38.503Z · LW(p) · GW(p)

Ok, but then why the statement implying severe misconduct rather than a generic "the board has decided that the style of leadership that Mr. Altman provides is not what OpenAI needs at this time"?

comment by orthonormal · 2023-11-21T20:18:31.471Z · LW(p) · GW(p)

I'm surprised that nobody has yet brought up the development that the board offered Dario Amodei the position as a merger with Anthropic (and Dario said no!).

(There's no additional important content in the original article by The Information, so I linked the Reuters paywall-free version.)

Crucially, this doesn't tell us in what order the board made this offer to Dario and the other known figures (GitHub CEO Nat Friedman and Scale AI CEO Alex Wang) before getting Emmett Shear, but it's plausible that merging with Anthropic was Plan A all along. Moreover, I strongly suspect that the bad blood between Sam and the Anthropic team was strong enough that Sam had to be ousted in order for a merger to be possible.

So under this hypothesis, the board decided it was important to merge with Anthropic (probably to slow the arms race), booted Sam (using the additional fig leaf of whatever lies he's been caught in), immediately asked Dario and were surprised when he rejected them, did not have an adequate backup plan, and have been scrambling ever since.

P.S. Shear is known to be very much on record worrying that alignment is necessary and not likely to be easy; I'm curious what Friedman and Wang are on record as saying about AI x-risk.

Replies from: JamesPayor, Lukas_Gloor
comment by James Payor (JamesPayor) · 2023-11-22T00:01:33.031Z · LW(p) · GW(p)

Has this one been confirmed yet? (Or is there more evidence than this one report that something like this happened?)

comment by Lukas_Gloor · 2023-11-22T00:42:50.502Z · LW(p) · GW(p)

Having a "plan A" requires detailed advance-planning. I think it's much more likely that their decision was reactive rather than plan-based. They felt strongly that Altman had to go based on stuff that happened, and so they followed procedures – appoint an interim CEO and do a standard CEO search. Of course, it's plausible – I'd even say likely – that an "Anthropic merger" was on their mind as something that could happen as a result of this further down the line. But I doubt (and hope not) that this thought made a difference to their decision.


  • If they had a detailed plan that was motivating their actions (as opposed to reacting to a new development and figuring out what to do as things go on), they would probably have put in a bit more time gathering more potentially incriminating evidence or trying to form social alliances. 
    For instance, even just, in the months or weeks before, visiting OpenAI and saying hi to employees, introducing themselves as the board, etc., would probably have improved staff's perception of how this went down. Similarly, gathering more evidence by, e.g., talking to people close to Altman but sympathetic to safety concerns, asking whether they feel heard in the company, etc., could have unearthed more ammunition. (It's interesting that even the safety-minded researchers at OpenAI basically sided with Altman here, or, at the very least, none of them came to the board's aid by speaking up against Altman on similar counts. [Though I guess it's hard to speak up "on similar counts" if people don't even really know the board's primary concerns apart from the vague "not always candid."])
  • If the thought of an Anthropic merge did play a large role in their decision-making (in the sense of "making the difference" to whether they act on something across many otherwise-similar counterfactuals), that would constitute a bad kind of scheming/plotting. People who scheme like that are probably less likely than baseline to underestimate power politics and the difficulty of ousting a charismatic leader, and more likely than baseline to prepare well for the fight. Like, if you think your actions are perfectly justified per your role as board member (i.e., if you see yourself as acting as a good board member), that's exactly the situation in which you're most likely to overlook the possibility that Altman may just go "fuck the board!" and ignore your claim to legitimacy. By contrast, if you're kind of aware that you're scheming and using the fact that you're a board member merely opportunistically, it might more readily cross your mind that Altman might scheme back at you and use the fact that he knows everyone at the company and has a great reputation in the Valley at large.
  • It seems like the story feels overall more coherent if the board perceived themselves to be acting under some sort of time-pressure (I put maybe 75% on this).
    • Maybe they felt really anxious or uncomfortable with the 'knowledge' or 'near-certainty' (as it must have felt to them, if they were acting as good board members) that Altman is a bad leader, so they sped things up because it was psychologically straining to deal with the uncertain situation.
    • Maybe Altman approaching investors made them worry that if he succeeds, he'd acquire too much leverage.
    • Maybe Ilya approached them with something and prompted them to react to it and do something, and in the heat of the moment, they didn't realize that it might be wise to pause and think things through and see if Ilya's mood is a stable one.
    • Maybe there was a capabilities breakthrough and the board and Ilya were worried the new system may not be safe enough especially considering that once the weights leak, people anywhere on the internet can tinker with the thing and improve it with tweaks and tricks. 
    • [Many other possibilities I'm not thinking of.]
    • [Update – I posted this list before gwern's comment, but didn't realize until he said it that this one is waaay more likely to be the case than the others.] I read a rumor in a new article about talks about how to replace another board member, so maybe there was time pressure before Altman and Brockman could appoint a new board member who would always side with them. 

were surprised when he rejected them

I feel like you're not really putting yourself into the shoes of the board members if you think they were surprised that, by the time they were asking around for CEOs, someone like Dario (with the reputation of his entire company at risk) would reject them. At that point, the whole situation was such a mess that they must have felt extremely bad and desperate going around frantically asking for someone to come in and help save the day. (But probably you just phrased it like that because you suspect that, in their initial plan where Altman just accepts defeat, their replacement CEO search would go over smoothly. That makes sense to me conditional on them having formed such a detailed-but-naive "plan A.")

Edit: I feel confident in my stance, but not massively so, so I reserve maybe 14% for a hypothesis more like the one you suggested, partly updating towards habryka's cynicism, which I unfortunately think has had a somewhat good track record recently.

comment by Michael Thiessen (michael-thiessen) · 2023-11-20T16:02:51.625Z · LW(p) · GW(p)

"Before I took the job, I checked on the reasoning behind the change. The board did *not* remove Sam over any specific disagreement on safety, their reasoning was completely different from that. I'm not crazy enough to take this job without board support for commercializing our awesome models."

Replies from: Zvi
comment by Zvi · 2023-11-20T16:22:47.995Z · LW(p) · GW(p)

Yeah, should have put that in the main, forgot. Added now.

comment by Htarlov (htarlov) · 2023-11-21T07:21:16.248Z · LW(p) · GW(p)

The most likely explanation is the simplest one that fits:

  • The board had been angry about the lack of communication for some time, but with internal disagreement (Greg, Ilya)
  • Things sped up lately. Ilya thought it might be good to change CEO to someone who would slow down and look more into safety, as Altman says a lot about safety but speeds up anyway. So he gave a green light on his side (accepting the change)
  • Then the board made the moves that they made
  • Then the new CEO wanted to try to hire back Altman, so they replaced her
  • Then the petition/letter started rolling, because prominent people saw those moves as harmful to the company and the goal
  • Ilya also saw that the outcome was bad both for the company and for the goal of slowing down, and that if the letter got more signatures it would be even worse, so he changed his mind and also signed

Take note of the language Ilya uses. He didn't say they wronged Altman or that the decision was bad. He said he changed his mind because the consequences were harmful to the company.

Replies from: angmoh
comment by angmoh · 2023-11-21T11:29:52.139Z · LW(p) · GW(p)

This seems about right. Sam is a bit of a cowboy and probably doesn't bother involving the board more than he absolutely has to.

comment by Lukas_Gloor · 2023-11-22T02:55:33.341Z · LW(p) · GW(p)

One thing I've realized more in the last 24h: 

  • It looks like Sam Altman is using a bunch of "tricks" now trying to fight his way back into more influence over OpenAI. I'm not aware of anything I'd consider unethical (at least if one has good reasons to believe one has been unfairly attacked), but it's still the sort of stuff that wouldn't come naturally to a lot of people and wouldn't feel fair to a lot of people (at least if there's a strong possibility that the other side is acting in good faith too).
  • Many OpenAI employees have large monetary incentives on the line and there's levels of peer pressure that are off the charts, so we really can't read too much into who tweeted how many hearts or signed the letter or whatever. 

Maybe the extent of this was obvious to most others, but for me, while I was aware that this was going on, I feel like I underestimated the extent of it. One thing that put things into a different light for me was this tweet.

Which makes me wonder, could things really have gone down a lot differently? Sure, smoking-gun-type evidence would've helped the board immensely. But is it their fault that they don't have it? Not necessarily – not if they had (1) time pressure (for one reason or another – hard to know at this point) and (2) enough 'soft' evidence to justify drastic actions. With (1) and (2) together, it could have made sense to risk intervening even without smoking-gun-type evidence.

(2) might be a crux for some people, but I believe that there are situations where it's legitimate for a group of people to become convinced that someone else is untrustworthy without being in a position to easily and quickly convince others. NDAs in play could be one reason, but also just "the evidence is of the sort that 'you had to be there'" or "you need all this other context and individual data points only become compelling if you also know about all these other data points that together help rule out innocuous/charitable interpretations about what happened."

In any case, many people highlighted the short notice with which the board announced their decision and commented that this implies that the board acted in an outrageous way and seems inexperienced. However, having seen what Altman managed to mobilize in just a couple of days, it's now obvious that, if you think he's scheming and deceptive in a genuinely bad way (as opposed to "someone knows how to fight power struggles and is willing to fight them when he feels like he's undeservedly under attack" – which isn't by itself a bad thing), then you simply can't give him a headstart. 

So, while I still think the board made mistakes, I today feel a bit less confident that these mistakes were necessarily as big as I initially thought. I now think it's possible – but far from certain – that we're in a world where things are playing out the way they have mostly because it's a really tough situation for the board to be in even when they are right. And, sure, that would've been a reason to consider not starting this whole thing, but obviously that's very costly as well, so, again, tough situation.

I guess a big crux is "how common is it that you justifiably think someone is bad but it'll be hard to convince others?" My stance is that, if you're right, you should eventually be able to convince others if the others are interested in the truth and you get a bunch of time and the opportunity to talk to more people who may have extra info. But you might not be able to succeed if you only have a few days and then you're out if you don't sound convincing enough.

My opinions have been fluctuating a crazy amount recently (I don't think I've ever been in a situation where my opinions have gone up and down like this!), so, idk, I may update quite a bit in the other direction again tomorrow.

Replies from: chess-teacher, Siebe
comment by Chess3D (chess-teacher) · 2023-11-22T04:16:04.047Z · LW(p) · GW(p)

The board could (justifiably, based on Sam's incredible mobilization these past days**) believe that they have little to no chance of winning the war of public opinion, and so focus on doing everything privately, since that is where they feel on equal footing.

This doesn't explain fully why they haven't stated reasons in private, but it does seem they provided at least something to Emmett Shear as he said he had a reason from the board that wasn't safety or commercialization (PPS of

** Very few fired employees would even consider pushing back, but to be this successful this quickly is impressive. Not taking a side on it being good or evil, just stating the fact of his ability to fight back after things seemed gloomy (betting markets were down below 10%)

Replies from: chess-teacher
comment by Chess3D (chess-teacher) · 2023-11-22T12:01:49.036Z · LW(p) · GW(p)

Well, seems like the board did provide zero evidence in private, too!

Quite the saga: glad it is over, and I think Larry Summers is a great independent thinker who could help the board make some smart expansion decisions

Replies from: dr_s
comment by dr_s · 2023-11-22T12:10:43.079Z · LW(p) · GW(p)

I feel like at this point the only truly rational comment is:

what the absolute fuck.

comment by Siebe · 2023-11-22T14:49:27.967Z · LW(p) · GW(p)

This Washington Post article supports the 'Scheming Sam' hypothesis: anonymous reports, mostly from his time at Y Combinator

comment by gilch · 2023-11-22T06:13:53.389Z · LW(p) · GW(p)

He's back. Again. Maybe.

We have reached an agreement in principle for Sam [Altman] to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D'Angelo.

We are collaborating to figure out the details. Thank you so much for your patience through this.

Anyone know how Larry or Bret feel about x-risk?

Replies from: TrevorWiesinger, gilch
comment by trevor (TrevorWiesinger) · 2023-11-22T07:38:55.463Z · LW(p) · GW(p)

The Verge article is better; it shows tweets by Toner and Nadella confirming that it wasn't just someone getting access to the OpenAI twitter/x account (unless of course someone acquired access to all the accounts, which doesn't seem likely).

comment by Ben Pace (Benito) · 2023-11-20T19:47:29.007Z · LW(p) · GW(p)

Fun story.

I met Emmett Shear once at a conference, and have read a bunch of his tweeting.

On Friday I turned to a colleague and asked for Shear's email, so that I could email him suggesting he try to be CEO, as he's built a multi-billion company before and has his head screwed on about x-risk.

My colleague declined, I think they thought it was a waste of time (or didn't think it was worth their social capital).

Man, I wish I had done it, that would have been so cool to have been the one to suggest it to him.

comment by dr_s · 2023-11-20T16:17:15.651Z · LW(p) · GW(p)

Man, Sutskever's back and forth is so odd. Hard to make obvious sense of, especially if we believe Shear's claim that this was not about disagreements on safety. Any chance that it was Annie Altman's accusations towards Sam that triggered this whole thing? It seems strange since you'd expect it to only happen if public opinion built up to unsustainable levels.

Replies from: David Hornbein
comment by David Hornbein · 2023-11-20T17:37:55.261Z · LW(p) · GW(p)

My guess: Sutskever was surprised by the threatened mass exodus. Whatever he originally planned to achieve, he no longer thinks he can succeed. He now thinks that falling on his sword will salvage more of what he cares about than letting the exodus happen.

Replies from: dr_s
comment by dr_s · 2023-11-20T17:47:08.388Z · LW(p) · GW(p)

This would be very consistent with the problem being about safety (Altman at MSFT is worse for that than Altman at OAI), but then Shear is lying (understandable that he might have to, for political reasons). Or I suppose anything that involved the survival of OpenAI, which at this point is threatened anyway.

Replies from: David Hornbein
comment by David Hornbein · 2023-11-20T17:55:30.985Z · LW(p) · GW(p)

Maybe Shear was lying. Maybe the board lied to Shear, and he truthfully reported what they told him. Maybe "The board did *not* remove Sam over any specific disagreement on safety" but did remove him over a *general* disagreement which, in Sutskever's view, affects safety. Maybe Sutskever wanted to remove Altman for a completely different reason which also can't be achieved after a mass exodus. Maybe different board members had different motivations for removing Altman.

Replies from: orthonormal
comment by orthonormal · 2023-11-20T18:09:38.579Z · LW(p) · GW(p)

I agree, it's critical to have a very close reading of "The board did *not* remove Sam over any specific disagreement on safety".

This is the kind of situation where every qualifier in a statement needs to be understood as essential—if the statement were true without the word "specific", then I can't imagine why that word would have been inserted.

Replies from: Kaj_Sotala, DanielFilan, ryan_b
comment by Kaj_Sotala · 2023-11-21T19:47:58.739Z · LW(p) · GW(p)

To elaborate on that, Shear is presumably saying exactly as much as he is allowed to say in public. This implies that if the removal had nothing to do with safety, then he would say "The board did not remove Sam over anything to do with safety". His inserting of that qualifier implies that he couldn't make a statement that broad, and therefore that safety considerations were involved in the removal.

Replies from: michael-thiessen
comment by Michael Thiessen (michael-thiessen) · 2023-11-21T20:16:06.421Z · LW(p) · GW(p)

According to Bloomberg, "Even CEO Shear has been left in the dark, according to people familiar with the matter. He has told people close to OpenAI that he doesn’t plan to stick around if the board can’t clearly communicate to him in writing its reasoning for Altman’s sudden firing."

Evidence that Shear simply wasn't told the exact reason, though the "in writing" part is suspicious. Maybe he was told not in writing and wants them to write it down so they're on the record.

comment by DanielFilan · 2023-11-20T21:15:07.216Z · LW(p) · GW(p)

He was probably kinda sleep deprived and rushed, which could explain inessential words being added.

comment by ryan_b · 2023-11-20T19:13:15.971Z · LW(p) · GW(p)

I would normally agree with this, except it does not seem to me like the board is particularly deliberate about their communication so far. If they are conscientious enough about their communication to craft it down to the word, why did they handle the whole affair in the way they seem to have so far?

I feel like a group of people who did not see fit to provide context or justifications to either their employees or largest shareholder when changing company leadership and board composition probably also wouldn't weigh each word carefully when explaining the situation to a total outsider.

We still benefit from a very close reading, mind you; I just believe there's a lot more wiggle room here than we would normally expect from corporate boards operating with legal advice based on the other information we have.

Replies from: orthonormal
comment by orthonormal · 2023-11-20T20:10:38.338Z · LW(p) · GW(p)
  1. The quote is from Emmett Shear, not a board member.
  2. The board is also following the "don't say anything literally false" policy by saying practically nothing publicly.
  3. Just as I infer from Shear's qualifier that the firing did have something to do with safety, I infer from the board's public silence that their reason for the firing isn't one that would win back the departing OpenAI members (or would only do so at a cost that's not worth paying). 
  4. This is consistent with it being a safety concern shared by the superalignment team (who by and large didn't sign the statement at first) but not by the rest of OpenAI (who view pushing capabilities forward as a good thing, because like Sam they believe the EV of OpenAI building AGI is better than the EV of unilaterally stopping). That's my current main hypothesis.
Replies from: ryan_b, dr_s
comment by ryan_b · 2023-11-21T14:01:03.683Z · LW(p) · GW(p)

Ah, oops! My expectations are reversed for Shear; him I strongly expect to be as exact as humanly possible.

With that update, I'm inclined to agree with your hypothesis.

comment by dr_s · 2023-11-21T07:43:35.263Z · LW(p) · GW(p)

(or would only do so at a cost that's not worth paying)

That's the part that confuses me most. An NDA wouldn't be strong enough reason at this point. As you say, safety concerns might, but that seems pretty wild unless they literally already have AGI and are fighting over what to do with it. The other thing is anything that if said out loud might involve the police, so revealing the info would be itself an escalation (and possibly mutually assured destruction, if there's criminal liability on both sides). I got nothing.

comment by Andrew_Clough · 2023-11-21T15:01:26.863Z · LW(p) · GW(p)

The facts very strongly suggest that the board is not a monolithic entity. Its inability to tell a sensible story about the reasons for Sam's firing might be because no single comprehensible story exists: different board members may have had different motives that let them agree on the firing initially, but ultimately not on a story that they could jointly endorse.

comment by Odd anon · 2023-11-20T20:09:18.326Z · LW(p) · GW(p)

There's... too many things here. Too many unexpected steps, somehow pointing at too specific an outcome. If there's a plot, it is horrendously Machiavellian.

(Hinton's quote, which keeps popping into my head: "These things will have learned from us by reading all the novels that ever were and everything Machiavelli ever wrote, that how to manipulate people, right? And if they're much smarter than us, they'll be very good at manipulating us. You won't realise what's going on. You'll be like a two year old who's being asked, do you want the peas or the cauliflower? And doesn't realise you don't have to have either. And you'll be that easy to manipulate. And so even if they can't directly pull levers, they can certainly get us to pull levers. It turns out if you can manipulate people, you can invade a building in Washington without ever going there yourself.")

(And Altman: "i expect ai to be capable of superhuman persuasion well before it is superhuman at general intelligence, which may lead to some very strange outcomes")

If an AI were to spike in capabilities specifically relating to manipulating individuals and groups of people, this is roughly how I would expect the outcome to look. Maybe not even that goal-focused or agent-like, given that GPT-4 wasn't particularly lucid. Such an outcome would likely have initially resulted from deliberate probing by safety testing people, asking it if it could say something to them which would, by words alone, result in dangerous outcomes for their surroundings.

I don't think this is that likely. But I don't think I can discount it as a real possibility anymore.

Replies from: Seth Herd, Odd anon, chess-teacher
comment by Seth Herd · 2023-11-20T23:12:22.118Z · LW(p) · GW(p)

I think we can discount it as a real possibility, while accepting Altman's "i expect ai to be capable of superhuman persuasion well before it is superhuman at general intelligence, which may lead to some very strange outcomes". I think it might be weakly superhuman at persuasion for things like "buy our products", but that doesn't imply being superhuman at working out complex consequences of political maneuvering. Doing that would firmly imply a generally superhuman intelligence, I think.

So I think if this has anything to do with internal AI breakthroughs, it's tangential at most.

Replies from: dr_s
comment by dr_s · 2023-11-21T07:46:58.034Z · LW(p) · GW(p)

I mean, this would not be too hard, though. It could be achieved by the simple trick of appearing smarter to some people and then dumber in subsequent interactions with others, scaring the safety-conscious and then making them look insane for being scared.

I don't think that's what's going on (why would even an AGI model they made be already so cleverly deceptive and driven? I would expect OAI to not be stupid enough to build the most straightforward type of maximizer) but it wouldn't be particularly hard to think up or do.

comment by Odd anon · 2023-11-21T22:12:37.035Z · LW(p) · GW(p)

Time for some predictions. If this is actually from AI developing social manipulation superpowers, I would expect:

  1. We never find out any real reasonable-sounding reason for Altman's firing.
  2. OpenAI does not revert to how it was before.
  3. More instances of people near OpenAI's safety people doing bizarre unexpected things that have stranger outcomes.
  4. Possibly one of the following:
    1. Some extreme "scissors statements" pop up which divide AI groups into groups that hate each other to an unreasonable degree.
    2. An OpenAI person who directly interacted with some scary AI suddenly either commits suicide or becomes a vocal flat-earther or similar who is weirdly convincing to many people.
    3. An OpenAI person skyrockets to political power, suddenly finding themselves in possession of narratives and phrases which convince millions to follow them.

(Again, I don't think it's that likely, but I do think it's possible.)

Replies from: faul_sname
comment by faul_sname · 2023-11-22T01:42:26.029Z · LW(p) · GW(p)

Things might be even weirder than that if this is a narrowly superhuman AI that is specifically superhuman at social manipulation, but still has the same inability to form new gears-level models exhibited by current LLMs (e.g. if they figured out how to do effective self-play on the persuasion task, but didn't actually crack AGI).

comment by Chess3D (chess-teacher) · 2023-11-20T23:40:41.422Z · LW(p) · GW(p)

While I don't think this is true, it's a fun thought (and can also be pointed at Altman himself, rather than an AGI). Neither is true, but fun to think about.

comment by Eli Tyre (elityre) · 2023-11-21T20:32:23.649Z · LW(p) · GW(p)

I love how short this post is! Zvi, you should do more posts like this (in addition to your normal massive-post fare).

comment by jimrandomh · 2023-11-22T01:56:58.793Z · LW(p) · GW(p)

Adam D'Angelo retweeted a tweet implying that hidden information still exists and will come out in the future:

Have known Adam D’Angelo for many years and although I have not spoken to him in a while, the idea that he went crazy or is being vindictive over some feature overlap or any of the other rumors seems just wrong. It’s best to withhold judgement until more information comes out.

comment by Bill Benzon (bill-benzon) · 2023-11-20T18:21:52.291Z · LW(p) · GW(p)

#14: If there have indeed been secret capability gains, so that Altman was not joking about reaching AGI internally (it seems likely that he was joking, though given the stakes, it's probably not the sort of thing to joke about), then the way I read their documents, the board should make that determination:

Fifth, the board determines when we've attained AGI. Again, by AGI we mean a highly autonomous system that outperforms humans at most economically valuable work. Such a system is excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology.

Once they've made that determination, then Microsoft will not have access to the AGI technology. Given the possible consequences, I doubt that Microsoft would have found such a joke very amusing.

Replies from: dr_s
comment by dr_s · 2023-11-21T07:51:13.043Z · LW(p) · GW(p)

Honestly this does seem... possible. A disagreement on whether GPT-5 counts as AGI would have this effect. The most safety minded would go "ok, this is AGI, we can't give it to Microsoft". The more business oriented and less conservative would go "no, this isn't AGI yet, it'll make us a fuckton of money though". There would be conflict. But for example seeing how now everyone might switch to Microsoft and simply rebuild the thing from scratch there, Ilya despairs and decides to do a 180 because at least this way he gets to supervise the work somehow.

comment by trevor (TrevorWiesinger) · 2023-11-20T20:00:46.562Z · LW(p) · GW(p)

This conflict has inescapably taken place in the context of US-China competition over AI, as leaders in both countries are well known to pursue AI acceleration for applications like autonomous low-flying nuclear cruise missiles (e.g. in contingencies where military GPS networks fail), economic growth faster than the US/China/rest of the world, and information warfare [LW · GW].

I think I could confidently bet against Chinese involvement, that seems quite reasonable. I can't bet so confidently against US involvement; although I agree that it remains largely unclear, it's also plausible that this situation has a usual suspect [LW · GW] and we could have seen it coming. Either way, victory/conquest by ludicrously powerful orgs like Microsoft seem like the obvious default outcome.

comment by Hoagy · 2023-11-21T11:49:49.567Z · LW(p) · GW(p)

There had been various clashes between Altman and the board. We don’t know what all of them were. We do know the board felt Altman was moving too quickly, without sufficient concern for safety, with too much focus on building consumer products, while founding additional other companies. ChatGPT was a great consumer product, but supercharged AI development counter to OpenAI’s stated non-profit mission.

Does anyone have proof of the board's unhappiness about speed, lack of safety concern, and disagreement with founding other companies? All seem plausible, but I've seen basically nothing concrete.

comment by rotatingpaguro · 2023-11-20T21:39:48.772Z · LW(p) · GW(p)

The theory that my mind automatically generates seeing these happenings is that Ilya was in cahoots with Sam&Greg, and the pantomime was a plot to oust external members of the board.

However, I like to think I'm wise enough to give this 5% probability on reflection.

comment by nem · 2023-11-22T15:04:06.840Z · LW(p) · GW(p)

Is there any chance that Altman himself triggered this? Did something that he knew would cause the board to turn on him, with knowledge that Microsoft would save him?

comment by Campbell Hutcheson (campbell-hutcheson-1) · 2023-11-20T22:46:57.634Z · LW(p) · GW(p)

I'm 90% sure that the issue here was an inexperienced board, with a Chief Scientist who didn't understand the human dimension of leadership. 

Most independent board members usually have a lot of management experience and so understand that their power on paper is less than their actual power. They don't have day-to-day factual knowledge about the business of the company and don't have a good grasp of relationships between employees. So, they normally look to management to tell them what to do.

Here, two of the board members lacked the organizational experience to know that this was the case; any normal board would have tried to take the temperature of the employees before removing the CEO. I think this shows that creating a board for OAI to oversee the development of AGI is an incredibly hard task, because they need to both understand AGI and understand the organizational dynamics.

comment by dr_s · 2023-11-20T16:44:08.368Z · LW(p) · GW(p)

What about this?

We can definitely say that the board's decision was not made in response to malfeasance or anything related to our financial, business, safety, or security/privacy practices. This was a breakdown in communication between Sam and the board.

If considered reliable (and not a lie), this would significantly narrow the space of possible reasons.

comment by gh4n · 2023-11-20T16:18:53.891Z · LW(p) · GW(p)

Mira Mutari.

Typo here and below: Murati

Replies from: Zvi
comment by Zvi · 2023-11-20T16:22:59.257Z · LW(p) · GW(p)


comment by Eli Tyre (elityre) · 2023-11-21T20:30:29.943Z · LW(p) · GW(p)

Jan Leike, the other head of the superalignment team, Tweeted that he worked through the weekend on the crisis, and that the board should resign.

No link for this one?

Replies from: tristan-wegner
comment by Tristan Wegner (tristan-wegner) · 2023-11-22T07:58:44.067Z · LW(p) · GW(p)

"I have been working all weekend with the OpenAI leadership team to help with this crisis." — Jan Leike, Nov 20

"I think the OpenAI board should resign." — Jan Leike, Nov 20

comment by AVoropaev · 2023-11-20T16:03:25.577Z · LW(p) · GW(p)

What's the source of that 505-employee letter? The contents aren't too crazy, but isn't it strange that the only thing we have is a screenshot of the first page?

Replies from: Robert_AIZI, Zvi
comment by Robert_AIZI · 2023-11-20T16:27:10.774Z · LW(p) · GW(p)

It was covered by Axios, who also link to it as a separate PDF with all 505 signatories.

Replies from: Zvi
comment by Zvi · 2023-11-20T16:27:54.988Z · LW(p) · GW(p)

They now claim that it's up to 650/770.

Replies from: Robert_AIZI
comment by Robert_AIZI · 2023-11-20T17:18:07.879Z · LW(p) · GW(p)

That link is broken for me; did you mean to link to this Lilian Weng tweet?

comment by Zvi · 2023-11-20T16:19:12.125Z · LW(p) · GW(p)

Initially I saw it from Kara Swisher (~1mm views), then I saw it from a BB employee. I presume it is genuine.

comment by MadHatter · 2023-11-21T04:50:24.644Z · LW(p) · GW(p)

So much drama!

I predict that in five years we are all just way more zen about everything than we are at this point in time. Heck, maybe in six months.

If AI were going to instantly kill us all upon existing, maybe we'd already be dead?

Replies from: dr_s
comment by dr_s · 2023-11-21T07:52:35.105Z · LW(p) · GW(p)

That's just Yud's idea of a fast takeoff. Personally I'm much more worried about a slow takeoff that doesn't look like that but is still bad for either most or all of humanity. I don't expect AGI to instantly foom, though.

comment by dr_s · 2023-11-20T16:43:44.919Z · LW(p) · GW(p)
comment by Burny · 2023-11-21T13:17:48.939Z · LW(p) · GW(p)