OpenAI: Facts from a Weekend
post by Zvi · 2023-11-20T15:30:06.732Z · LW · GW · 165 comments
Approximately four GPTs and seven years ago, OpenAI’s founders brought forth on this corporate landscape a new entity, conceived in liberty, and dedicated to the proposition that all men might live equally when AGI is created.
Now we are engaged in a great corporate war, testing whether that entity, or any entity so conceived and so dedicated, can long endure.
What matters is not theory but practice. What happens when the chips are down?
So what happened? What prompted it? What will happen now?
To a large extent, even more than usual, we do not know. We should not pretend that we know more than we do.
Rather than attempt to interpret here or barrage with an endless string of reactions and quotes, I will instead do my best to stick to a compilation of the key facts.
(Note: All times stated here are Eastern by default.)
Just the Facts, Ma’am
What do we know for sure, or at least close to sure?
Here is OpenAI’s corporate structure, giving the board of the 501(c)(3) the power to hire and fire the CEO. It is explicitly dedicated to its nonprofit mission, over and above any duties to shareholders of secondary entities. Investors were warned that there was zero obligation to ever turn a profit.
Here are the most noteworthy things we know happened, as best I can make out.
- On Friday afternoon at 3:28pm, the OpenAI board fired Sam Altman, appointing CTO Mira Murati as temporary CEO effective immediately. They did so over a Google Meet that did not include then-chairman Greg Brockman.
- Greg Brockman, Altman’s old friend and ally, was removed as chairman of the board but the board said he would stay on as President. In response, he quit.
- The board told almost no one. Microsoft got one minute of warning.
- Mira Murati is the only other person we know was told, which happened on Thursday night.
- From the announcement by the board: “Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.”
- In a statement, the board of directors said: “OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity. The board remains fully committed to serving this mission. We are grateful for Sam’s many contributions to the founding and growth of OpenAI. At the same time, we believe new leadership is necessary as we move forward. As the leader of the company’s research, product, and safety functions, Mira is exceptionally qualified to step into the role of interim CEO. We have the utmost confidence in her ability to lead OpenAI during this transition period.”
- OpenAI’s board of directors at this point: OpenAI chief scientist Ilya Sutskever, independent directors Quora CEO Adam D’Angelo, technology entrepreneur Tasha McCauley, and Georgetown Center for Security and Emerging Technology’s Helen Toner.
- Usually a 501(c)(3)’s board must have a majority of people not employed by the company. Instead, OpenAI said that a majority of its board did not have a stake in the company, due to Sam Altman having zero equity.
- In response to many calling this a ‘board coup’: “You can call it this way,” Sutskever said about the coup allegation. “And I can understand why you chose this word, but I disagree with this. This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity.” AGI stands for artificial general intelligence, a term that refers to software that can reason the way humans do. When Sutskever was asked whether “these backroom removals are a good way to govern the most important company in the world?” he answered: “I mean, fair, I agree that there is a not ideal element to it. 100%.”
- Other than that, the board said nothing in public. I am willing to outright say that, whatever the original justifications, the removal attempt was insufficiently considered and planned and massively botched. Either they had good reasons that justified these actions and needed to share them, or they didn’t.
- There had been various clashes between Altman and the board. We don’t know what all of them were. We do know the board felt Altman was moving too quickly, without sufficient concern for safety, with too much focus on building consumer products, while founding additional companies. ChatGPT was a great consumer product, but it supercharged AI development, counter to OpenAI’s stated nonprofit mission.
- OpenAI was previously planning an oversubscribed share sale at a valuation of $86 billion that was to close a few weeks later.
- Board member Adam D’Angelo said in a Forbes interview in January: “There’s no outcome where this organization is one of the big five technology companies. This is something that’s fundamentally different, and my hope is that we can do a lot more good for the world than just become another corporation that gets that big.”
- Sam Altman on October 16: “4 times in the history of OpenAI––the most recent time was in the last couple of weeks––I’ve gotten to be in the room when we push the veil of ignorance back and the frontier of discovery forward. Getting to do that is the professional honor of a lifetime.” There was speculation that events were driven in whole or in part by secret capabilities gains within OpenAI, possibly from a system called Gobi, perhaps even related to the joking claim that ‘AGI has been achieved internally,’ but we have no concrete evidence of that.
- Ilya Sutskever co-leads the Superalignment Taskforce, has very short timelines for when we will get AGI, and is very concerned about AI existential risk.
- Sam Altman was involved in starting multiple new major tech companies. He was looking to raise tens of billions from Saudis to start a chip company. He was in other discussions for an AI hardware company.
- Sam Altman has stated time and again, including to Congress, that he takes existential risk from AI seriously. He was part of the creation of OpenAI’s corporate structure. He signed the CAIS letter. OpenAI spent six months on safety work before releasing GPT-4. He understands the stakes. One can question OpenAI’s track record on safety; many did, including those who left to found Anthropic. But this was not a pure ‘doomer vs. accelerationist’ story.
- Sam Altman is very good at power games such as fights for corporate control. Over the years he earned the loyalty of his employees, many of whom moved in lockstep, using strong strategic ambiguity. Hand very well played.
- Essentially all of VC, tech, founder, financial Twitter united to condemn the board for firing Altman and for how they did it, as did many employees, calling upon Altman to either return to the company or start a new company and steal all the talent. The prevailing view online was that no matter its corporate structure, it was unacceptable to fire Altman, who had built the company, or to endanger OpenAI’s value by doing so. That it was good and right and necessary for employees, shareholders, partners and others to unite to take back control.
- Talk in those circles is that this will completely discredit EA or ‘doomerism’ or any concerns over the safety of AI, forever. Yes, they say this every week, but this time it was several orders of magnitude louder and more credible. New York Times somehow gets this backwards. Whatever else this is, it’s a disaster.
- By contrast, those concerned about existential risk, and some others, pointed out that the unique corporate structure of OpenAI was designed for exactly this situation. They also mostly noted that the board clearly handled decisions and communications terribly, but that there was much unknown, and tried to avoid jumping to conclusions.
- Thus we are now answering the question: What is the law? Do we have law? Where does the power ultimately lie? Is it the charismatic leader that ultimately matters? Who you hire and your culture? Can a corporate structure help us, or do commercial interests and profit motives dominate in the end?
- Great pressure was put upon the board to reinstate Altman. They were given two 5pm Pacific deadlines, on Saturday and Sunday, to resign. Microsoft’s aid, and that of its CEO Satya Nadella, was enlisted in this. We do not know what forms of leverage Microsoft did or did not bring to that table.
- Sam Altman tweets ‘I love the openai team so much.’ Many at OpenAI respond with hearts, including Mira Murati.
- Invited by employees including Mira Murati and other top executives, Sam Altman visited the OpenAI offices on Sunday. He tweeted ‘First and last time i ever wear one of these’ with a picture of his visitor’s pass.
- The board does not appear to have been at the building at the time.
- Press reported that the board had agreed to resign in principle, but that snags were hit over who the replacement board would be, and over whether or not they would need to issue a statement absolving Altman of wrongdoing, which could be legally perilous for them given their initial statement.
- Bloomberg reported on Sunday at 11:16pm that temporary CEO Mira Murati aimed to rehire Altman and Brockman, while the board sought an alternative CEO.
- The OpenAI board hires former Twitch CEO Emmett Shear to be the new CEO. He issues his initial statement here. I know a bit about him. If the board needs to hire a new CEO from outside who takes existential risk seriously, he seems to me like a truly excellent pick; I cannot think of a clearly better one. The job set for him may or may not be impossible. Shear’s PPS in his note: “Before I took the job, I checked on the reasoning behind the change. The board did *not* remove Sam over any specific disagreement on safety, their reasoning was completely different from that. I’m not crazy enough to take this job without board support for commercializing our awesome models.”
- New CEO Emmett Shear has made statements in favor of slowing down AI development, although not a stop. His p(doom) is between 5% and 50%. He has said ‘My AI safety discourse is 100% “you are building an alien god that will literally destroy the world when it reaches the critical threshold but be apparently harmless before that.”’ Here is a thread and video link with more, transcript here or a captioned clip. Here he is tweeting a 2×2 faction chart a few days ago.
- Microsoft CEO Satya Nadella posts at 2:53am Monday morning: We remain committed to our partnership with OpenAI and have confidence in our product roadmap, our ability to continue to innovate with everything we announced at Microsoft Ignite, and in continuing to support our customers and partners. We look forward to getting to know Emmett Shear and OAI’s new leadership team and working with them. And we’re extremely excited to share the news that Sam Altman and Greg Brockman, together with colleagues, will be joining Microsoft to lead a new advanced AI research team. We look forward to moving quickly to provide them with the resources needed for their success.
- Sam Altman retweets the above with ‘the mission continues.’ Brockman confirms. Other leadership to include Jakub Pachocki, the GPT-4 lead, Szymon Sidor, and Aleksander Madry.
- Nadella continued in reply: I’m super excited to have you join as CEO of this new group, Sam, setting a new pace for innovation. We’ve learned a lot over the years about how to give founders and innovators space to build independent identities and cultures within Microsoft, including GitHub, Mojang Studios, and LinkedIn, and I’m looking forward to having you do the same.
- Ilya Sutskever posts at 8:15am Monday morning: I deeply regret my participation in the board’s actions. I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company. Sam retweets with three heart emojis. Jan Leike, the other head of the superalignment team, tweeted that he worked through the weekend on the crisis, and that the board should resign.
- Microsoft stock was down 1% after hours on Friday, and was back to roughly its previous value on Monday morning and at the open. All priced in. Neither Google nor the S&P 500 made major moves either.
- 505 of 700 employees of OpenAI, including Ilya Sutskever, sign a letter telling the board to resign and reinstate Altman and Brockman, threatening to otherwise move to Microsoft to work in the new subsidiary under Altman, which will have a job for every OpenAI employee. Full text of the letter that was posted:
To the Board of Directors at OpenAI,
OpenAI is the world’s leading AI company. We, the employees of OpenAI, have developed the best models and pushed the field to new frontiers. Our work on AI safety and governance shapes global norms. The products we built are used by millions of people around the world. Until now, the company we work for and cherish has never been in a stronger position.
The process through which you terminated Sam Altman and removed Greg Brockman from the board has jeopardized all of this work and undermined our mission and company. Your conduct has made it clear you did not have the competence to oversee OpenAI.
When we all unexpectedly learned of your decision, the leadership team of OpenAI acted swiftly to stabilize the company. They carefully listened to your concerns and tried to cooperate with you on all grounds. Despite many requests for specific facts for your allegations, you have never provided any written evidence. They also increasingly realized you were not capable of carrying out your duties, and were negotiating in bad faith.
The leadership team suggested that the most stabilizing path forward – the one that would best serve our mission, company, stakeholders, employees and the public – would be for you to resign and put in place a qualified board that could lead the company forward in stability. Leadership worked with you around the clock to find a mutually agreeable outcome. Yet within two days of your initial decision, you again replaced interim CEO Mira Murati against the best interests of the company. You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”
Your actions have made it obvious that you are incapable of overseeing OpenAI. We are unable to work for or with people that lack competence, judgement and care for our mission and employees. We, the undersigned, may choose to resign from OpenAI and join the newly announced Microsoft subsidiary run by Sam Altman and Greg Brockman. Microsoft has assured us that there are positions for all OpenAI employees at this new subsidiary should we choose to join. We will take this step imminently, unless all current board members resign, and the board appoints two new lead independent directors, such as Bret Taylor and Will Hurd, and reinstates Sam Altman and Greg Brockman.
1. Mira Murati 2. Brad Lightcap 3. Jason Kwon 4. Wojciech Zaremba 5. Alec Radford 6. Anna Makanju 7. Bob McGrew 8. Srinivas Narayanan 9. Che Chang 10. Lillian Weng 11. Mark Chen 12. Ilya Sutskever
- There is talk that OpenAI might completely disintegrate as a result, that ChatGPT might not work a few days from now, and so on.
- It is very much not over, and still developing.
- There is still a ton we do not know.
- This weekend was super stressful for everyone. Most of us, myself included, sincerely wish none of this had happened. Based on what we know, there are no villains in the actual story that matters here. Only people trying their best under highly stressful circumstances with huge stakes and wildly different information and different models of the world and what will lead to good outcomes. In short, my respect to all who were in the arena for this on any side, or trying to process it, rather than spitting bile.
Later, when we know more, I will have many other things to say, many reactions to quote and react to. For now, everyone please do the best you can to stay sane and help the world get through this as best you can.
165 comments
Comments sorted by top scores.
comment by gwern · 2023-11-22T03:00:13.481Z · LW(p) · GW(p)
The key news today: Altman had attacked Helen Toner https://www.nytimes.com/2023/11/21/technology/openai-altman-board-fight.html (HN, Zvi [LW · GW]; excerpts), which explains everything if you recall board structures and voting.
Altman and the board had been unable to appoint new directors because there was an even balance of power, so during the deadlock/low-grade cold war, the board had attrited down to hardly any people. He thought he had Sutskever on his side, so he moved to expel Helen Toner from the board. He would then be able to appoint new directors of his choice. This would have irrevocably tipped the balance of power towards Altman. But he didn't have Sutskever like he thought he did, and they had, briefly, enough votes to fire Altman before he broke Sutskever (as he did yesterday), and they went for the last-minute hail-mary with no warning to anyone.
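(As a minimal sketch of that vote arithmetic, assuming the six-member pre-firing board named in the post above, Altman, Brockman, Sutskever, D'Angelo, Toner, and McCauley, and treating the faction assignments as my own reading of the reporting rather than anything established:)

```python
def majority(n_seats: int) -> int:
    """Smallest number of votes that is a majority of n_seats."""
    return n_seats // 2 + 1

board = {"Altman", "Brockman", "Sutskever", "D'Angelo", "Toner", "McCauley"}
altman_bloc = {"Altman", "Brockman", "Sutskever"}  # assumed alignment before the flip
independents = board - altman_bloc                 # Toner, McCauley, D'Angelo

# Scenario 1: Toner is pushed off the board first.
remaining = board - {"Toner"}
print(len(altman_bloc) >= majority(len(remaining)))  # True: 3 of 5 seats,
# enough for Altman's side to fill the vacancies with directors of his choosing.

# Scenario 2: Sutskever sides with the independents instead.
anti_altman = independents | {"Sutskever"}
print(len(anti_altman) >= majority(len(board)))      # True: 4 of 6 seats,
# just enough to fire Altman, but only while Toner still holds her seat.
```

The margin in both scenarios is a single seat, which is why control of Toner's seat, and of Sutskever's vote, decided everything.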
As always, "one story is good, until another is told"...
↑ comment by gwern · 2023-11-25T15:25:48.361Z · LW(p) · GW(p)
The WSJ has published additional details about the Toner fight, filling in the other half of the story. The NYT merely mentions the OA execs 'discussing' it, but the WSJ reports much more specifically that the exec discussion of Toner was a Slack channel that Sutskever was in, and that approximately 2 days before the firing and 1 day before Mira was informed* (ie. the exact day Ilya would have flipped if they had then fired Altman about as fast as possible to schedule meetings 48h before & vote), he saw them say that the real problem was EA and that they needed to get rid of EA associations.
https://www.wsj.com/tech/ai/altman-firing-openai-520a3a8c (excerpts)
The specter of effective altruism had loomed over the politics of the board and company in recent months, particularly after the movement’s most famous adherent, Sam Bankman-Fried, the founder of FTX, was found guilty of fraud in a highly public trial.
Some of those fears centered on Toner, who previously worked at Open Philanthropy. In October, she published an academic paper touting the safety practices of OpenAI’s competitor, Anthropic, which didn’t release its own AI tool until ChatGPT’s emergence. “By delaying the release of Claude until another company put out a similarly capable product, Anthropic was showing its willingness to avoid exactly the kind of frantic corner-cutting that the release of ChatGPT appeared to spur,” she and her co-authors wrote in the paper. Altman confronted her, saying she had harmed the company, according to people familiar with the matter. Toner told the board that she wished she had phrased things better in her writing, explaining that she was writing for an academic audience and didn’t expect a wider public one. Some OpenAI executives told her that everything relating to their company makes its way into the press.
OpenAI leadership and employees were growing increasingly concerned about being painted in the press as “a bunch of effective altruists,” as one of them put it. Two days before Altman’s ouster, they were discussing these concerns on a Slack channel, which included Sutskever. One senior executive wrote that the company needed to “uplevel” its “independence”—meaning create more distance between itself and the EA movement.
OpenAI had lost three board members over the past year, most notably Reid Hoffman [who turns out to have been forced out by Altman over 'conflicts of interest', triggering the stalemate], the LinkedIn co-founder and OpenAI investor who had sold his company to Microsoft and been a key backer of the plan to create a for-profit subsidiary. Other departures were Shivon Zilis, an executive at Neuralink, and Will Hurd, a former Texas congressman. The departures left the board tipped toward academics and outsiders less loyal to Altman and his vision.
So this answers the question everyone has been asking: "what did Ilya see?" It wasn't Q*, it was OA execs letting the mask down and revealing Altman's attempt to get Toner fired was motivated by reasons he hadn't been candid about. In line with Ilya's abstract examples of what Altman was doing, Altman was telling different board members (allies like Sutskever vs enemies like Toner) different things about Toner.
This answers the "why": because it yielded a hard, screenshottable-with-receipts case of Altman manipulating the board in a difficult-to-explain-away fashion - why not just tell the board that "the EA brand is now so toxic that you need to find safety replacements without EA ties"? Why deceive and go after them one by one without replacements proposed to assure them about the mission being preserved? (This also illustrates the "why not" tell people about this incident: these were private, confidential discussions among rich powerful executives who would love to sue over disparagement or other grounds.) Previous Altman instances were either done in-person or not documented, but Altman has been so busy this year traveling and fundraising that he has had to do a lot of things via 'remote work', one might say, where conversations must be conducted on-the-digital-record. (Really, Matt Levine will love all this once he catches up.)
This also answers the "why now?" question: because Ilya saw that conversation on 15 November 2023, and not before.
This eliminates any role for Q*: sure, maybe it was an instance of lack of candor or a capabilities advance that put some pressure on the board, but unless something Q*-related also happened that day, there is no longer any explanatory role. (But since we can now date Sutskever's flip to 15 November 2023, we can answer the question of "how could the board be deceived about Q* when Sutskever would be overseeing or intimately familiar with every detail?" Because he was still acting as part of the Altman faction - he might well be telling the safety board members covertly, depending on how disaffected he became earlier on, but he wouldn't be overtly piping up about Q* in meetings or writing memos to the board about it unless Altman wanted him to. A single board member knowing != "the board candidly kept in the loop".)
This doesn't quite answer the 'why so abruptly?' question. If you don't believe that a board should remove a CEO as fast as possible when they believe the CEO has been systematically deceiving them for a year and manipulating the board composition to remove all oversight permanently, then this still doesn't directly explain why they had to move so fast. It does give one strong clue: Altman was trying to wear down Toner, but he had other options - if there was not any public scandal about the paper (which there was not, no one had even noticed it), well, there's nothing easier to manufacture for someone so well connected, as some OA executives informed Toner:
Some OpenAI executives told her that everything relating to their company makes its way into the press.
This presumably sounded like a well-intended bit of advice at the time, but takes on a different set of implications in retrospect. Amazing how journalists just keep hearing things about OA from little birds, isn't it? And they write those articles and post them online or on Twitter so quickly, too, within minutes or hours of the original tip. And Altman/Brockman would, of course, have to call an emergency last-minute board meeting to deal with this sudden crisis which, sadly, proved him right about Toner. If only the board had listened to him earlier! But they can fix it now...
Unfortunately, this piecemeal description by WSJ leaves out the larger conversational context of that Slack channel, which would probably clear up a lot. For example, the wording is consistent with them discussing how to fire just Toner, but it's also consistent with that being just the first step in purging all EA-connected board members & senior executives - did they? If they did, that would be highly alarming and justify a fast move: eg. firing people is a lot easier than unfiring them, and would force a confrontation they might lose and would wind up removing Altman even if they won. (Particularly if we do not give in to hindsight bias and remember that in the first day, everyone, including insiders, thought the firing would stick and so Altman - who had said the board should be able to fire him and personally designed OA that way - would simply go do a rival startup elsewhere.)
Emmett Shear apparently managed to insist on an independent investigation, and I expect that this Slack channel discussion will be a top priority of a genuine investigation. As Slack has regulator & big-business-friendly access controls, backups, and logs, it should be hard for them to scrub all the traces now; any independent investigation will look for deletions by the executives and draw adverse inferences.
(The piecemeal nature of the Toner revelations, where each reporter seems to be a blind man groping one part of the elephant, suggests to me that the NYT & WSJ are working from leaks based on a summary rather than the originals or a board member leaking the whole story to them. Obviously, the flip-flopped Sutskever and the execs in question, who are the only ones who would have access post-firing, are highly unlikely to be leaking private Slack channel discussions, so this information is likely coming from before the firing, so board discussions or documents, where there might be piecemeal references or quotes. But I could be wrong here. Maybe they are deliberately being cryptic to protect their source, or something, and people are just too ignorant to read between the lines. Sort of like Umbridge's speech on a grand scale.)
* note that this timeline is consistent with what Habryka [LW(p) · GW(p)] says about Toner still scheduling low-priority ordinary meetings like normal just a few days before - which implies she had no idea things were about to happen.
↑ comment by gwern · 2023-12-01T17:49:12.995Z · LW(p) · GW(p)
The NYer has confirmed that Altman's attempted coup was the cause of the hasty firing (excerpts; HN):
...Some members of the OpenAI board had found Altman an unnervingly slippery operator. For example, earlier this fall he’d confronted one member, Helen Toner, a director at the Center for Security and Emerging Technology, at Georgetown University, for co-writing a paper that seemingly criticized OpenAI for “stoking the flames of AI hype.” Toner had defended herself (though she later apologized to the board for not anticipating how the paper might be perceived). Altman began approaching other board members, individually, about replacing her. When these members compared notes about the conversations, some felt that Altman had misrepresented them as supporting Toner’s removal. “He’d play them off against each other by lying about what other people thought”, the person familiar with the board’s discussions told me. “Things like that had been happening for years.” (A person familiar with Altman’s perspective said that he acknowledges having been “ham-fisted in the way he tried to get a board member removed”, but that he hadn’t attempted to manipulate the board.)
...His tactical skills were so feared that, when 4 members of the board---Toner, D’Angelo, Sutskever, and Tasha McCauley---began discussing his removal, they were determined to guarantee that he would be caught by surprise. “It was clear that, as soon as Sam knew, he’d do anything he could to undermine the board”, the person familiar with those discussions said...Two people familiar with the board’s thinking say that the members felt bound to silence by confidentiality constraints...But whenever anyone asked for examples of Altman not being “consistently candid in his communications”, as the board had initially complained, its members kept mum, refusing even to cite Altman’s campaign against Toner.
...The dismissed board members, meanwhile, insist that their actions were wise. “There will be a full and independent investigation, and rather than putting a bunch of Sam’s cronies on the board we ended up with new people who can stand up to him”, the person familiar with the board’s discussions told me. “Sam is very powerful, he’s persuasive, he’s good at getting his way, and now he’s on notice that people are watching.” Toner told me, “The board’s focus throughout was to fulfill our obligation to OpenAI’s mission.” (Altman has told others that he welcomes the investigation---in part to help him understand why this drama occurred, and what he could have done differently to prevent it.)
Some A.I. watchdogs aren’t particularly comfortable with the outcome. Margaret Mitchell, the chief ethics scientist at Hugging Face, an open-source A.I. platform, told me, “The board was literally doing its job when it fired Sam. His return will have a chilling effect. We’re going to see a lot less of people speaking out within their companies, because they’ll think they’ll get fired---and the people at the top will be even more unaccountable.”
Altman, for his part, is ready to discuss other things. “I think we just move on to good governance and good board members and we’ll do this independent review, which I’m super excited about”, he told me. “I just want everybody to move on here and be happy. And we’ll get back to work on the mission”.
↑ comment by gwern · 2023-12-04T16:48:58.592Z · LW(p) · GW(p)
I left a comment over on EAF [EA(p) · GW(p)] which has gone a bit viral, describing the overall picture of the runup to the firing as I see it currently.
The summary is: evaluations of the Board's performance in firing Altman generally ignore that Altman made OpenAI and set up all of the legal structures, staff, and the board itself; the Board could, and should, have assumed good faith of Altman because if he hadn't been sincere, why would he have done all that, proving in extremely costly and unnecessary ways his sincerity? But, as it happened, OA recently became such a success that Altman changed his mind about the desirability of all that and now equally sincerely believes that the mission requires him to be in total control; and this is why he started to undermine the board. The recency is why it was so hard for them to realize that change of heart or develop common knowledge about it or coordinate to remove him given his historical track record - but that historical track record was also why if they were going to act against him at all, it needed to be as fast & final as possible. This led to the situation becoming a powder keg, and when proof of Altman's duplicity in the Toner firing became undeniable to the Board, it exploded.
↑ comment by gwern · 2023-12-06T21:20:28.441Z · LW(p) · GW(p)
Latest news: Time sheds considerably more light on the board position, in its discouragingly-named piece "2023 CEO of the Year: Sam Altman" (excerpts; HN). While it sounds & starts like a puff piece (no offense to Ollie - cute coyote photos!), it actually contains a fair bit of leaking I haven't seen anywhere else. Most strikingly:
- claims that the Board thought it had the OA executives on its side, because the executives had approached it about Altman:
The board expected pressure from investors and media. But they misjudged the scale of the blowback from within the company, in part because they had reason to believe the executive team would respond differently, according to two people familiar with the board’s thinking, who say the board’s move to oust Altman was informed by senior OpenAI leaders, who had approached them with a variety of concerns about Altman’s behavior and its effect on the company’s culture.
(The wording here strongly implies it was not Sutskever.) This of course greatly undermines the "incompetent Board" narrative, possibly explains both why the Board thought it could trust Mira Murati & why she didn't inform Altman ahead of time (was she one of those execs...?), and casts further doubt on the ~100% signature rate of the famous OA employee letter.
Now that it's safe(r) to say negative things about Altman, because it has become common knowledge that he was fired from Y Combinator and there is an independent investigation planned at OA, it seems that more of these incidents have been coming to light.
- confirms my earlier interpretation that at least one of the dishonesties was specifically lying to a board member that another member wanted to immediately fire Toner to manipulate them into her ouster:
One example came in late October, when an academic paper Toner wrote in her capacity at Georgetown was published. Altman saw it as critical of OpenAI’s safety efforts and sought to push Toner off the board. Altman told one board member that another believed Toner ought to be removed immediately, which was not true, according to two people familiar with the discussions.
This episode did not spur the board’s decision to fire Altman, those people say, but it was representative of the ways in which he tried to undermine good governance, and was one of several incidents that convinced the quartet that they could not carry out their duty of supervising OpenAI’s mission if they could not trust Altman. Once the directors reached the decision, they felt it was necessary to act fast, worried Altman would detect that something was amiss and begin marshaling support or trying to undermine their credibility. “As soon as he had an inkling that this might be remotely on the table,” another of the people familiar with the board’s discussions says, “he would bring the full force of his skills and abilities to bear.”
EDIT: WSJ (excerpts) is also reporting this, by way of a Helen Toner interview (which doesn't say much on the record, but does provide context for why she said that quote everyone used as a club against her: an OA lawyer lied to her about her 'fiduciary duties' while threatening to sue & bankrupt, and she got mad and pointed out that even outright destroying OA would be consistent with the mission & charter so she definitely didn't have any 'fiduciary duty' to maximize OA profits).
Unfortunately, the Time article, while seeming to downplay how much the Toner incident mattered by saying it didn't "spur" the decision, doesn't explain what did spur it, nor refer to the Sutskever Slack discussion AFAICT. So I continue to maintain that Altman was moving to remove Toner so urgently in order to hijack the board, and that this attempt was one of the major concerns, and his deception around Toner's removal, and particularly the executives discussing the EA purge, was probably the final proximate cause which was concrete enough & came with enough receipts to remove whatever doubt they had left (whether that was "the straw that broke the camel's back" or "the smoking gun").
- continues to undermine 'Q* truthers' by not even mentioning it (except possibly a passing reference by Altman at the end to "doubling down on certain research areas")
The article does provide good color and other details I won't try to excerpt in full (although some are intriguing - where, exactly, was this feedback to Altman about him being dishonest in order to please people?), eg:
...Altman, 38, has been Silicon Valley royalty for a decade, a superstar founder with immaculate vibes...Interviews with more than 20 people in Altman’s circle—including current and former OpenAI employees, multiple senior executives, and others who have worked closely with him over the years—reveal a complicated portrait. Those who know him describe Altman as affable, brilliant, uncommonly driven, and gifted at rallying investors and researchers alike around his vision of creating artificial general intelligence (AGI) for the benefit of society as a whole. But four people who have worked with Altman over the years also say he could be slippery—and at times, misleading and deceptive. Two people familiar with the board’s proceedings say that Altman is skilled at manipulating people, and that he had repeatedly received feedback that he was sometimes dishonest in order to make people feel he agreed with them when he did not. [see also Joshua Achiam's defense of Altman] These people saw this pattern as part of a broader attempt to consolidate power. “In a lot of ways, Sam is a really nice guy; he’s not an evil genius. It would be easier to tell this story if he was a terrible person,” says one of them. “He cares about the mission, he cares about other people, he cares about humanity. But there’s also a clear pattern, if you look at his behavior, of really seeking power in an extreme way.” An OpenAI spokesperson said the company could not comment on the events surrounding Altman’s firing. “We’re unable to disclose specific details until the board’s independent review is complete. We look forward to the findings of the review and continue to stand behind Sam,” the spokesperson said in a statement to TIME. “Our primary focus remains on developing and releasing useful and safe AI, and supporting the new board as they work to make improvements to our governance structure.”
↑ comment by gwern · 2023-12-09T00:20:15.544Z · LW(p) · GW(p)
If you've noticed OAers being angry on Twitter today, and using profanity & bluster and having oddly strong opinions about how it is important to refer to roon as @tszzl and never as @roon, it's because another set of leaks has dropped, and they are again unflattering to Sam Altman & consistent with the previous ones.
Today the Washington Post adds to the pile, "Warning from OpenAI leaders helped trigger Sam Altman’s ouster: The senior employees described Altman as psychologically abusive, creating delays at the artificial-intelligence start-up — complaints that were a major factor in the board’s abrupt decision to fire the CEO" (archive.is; HN; excerpts), which confirms the Time/WSJ reporting about executives approaching the board with concerns about Altman, and adds on more details - their concerns did not relate to the Toner dispute, but apparently were about regular employees:
This fall, a small number of senior leaders approached the board of OpenAI with concerns about chief executive Sam Altman. Altman---a revered mentor, prodigious start-up investor and avatar of the AI revolution---had been psychologically abusive, the employees said, creating pockets of chaos and delays at the artificial-intelligence start-up, according to two people familiar with the board’s thinking who spoke on the condition of anonymity to discuss sensitive internal matters. The company leaders, a group that included key figures and people who manage large teams, mentioned Altman’s allegedly pitting employees against each other in unhealthy ways, the people said. [The executives approaching the board were previously published in Time/WSJ, and the chaos hinted at in The Atlantic but this appears to add some more detail.]
...these complaints echoed their [the board's] interactions with Altman over the years, and they had already been debating the board’s ability to hold the CEO accountable. Several board members thought Altman had lied to them, for example, as part of a campaign to remove board member Helen Toner after she published a paper criticizing OpenAI, the people said.
The new complaints triggered a review of Altman’s conduct during which the board weighed the devotion Altman had cultivated among factions of the company against the risk that OpenAI could lose key leaders who found interacting with him highly toxic. They also considered reports from several employees who said they feared retaliation from Altman: One told the board that Altman was hostile after the employee shared critical feedback with the CEO and that he undermined the employee on that person’s team, the people said.
The complaints about Altman’s alleged behavior, which have not previously been reported, were a major factor in the board’s abrupt decision to fire Altman on November 17. Initially cast as a clash over the safe development of artificial intelligence, Altman’s firing was at least partially motivated by the sense that his behavior would make it impossible for the board to oversee the CEO.
(I continue to speculate Superalignment was involved, if only due to their enormous promised compute-quota and small headcount, but the wording here seems like it involved more than just a single team or group, and also points back to some of the earlier reporting and the other open letter, so there may be many more incidents than appreciated.)
↑ comment by gwern · 2023-12-12T00:43:50.642Z · LW(p) · GW(p)
An elaboration on the WaPo article in the 2023-12-09 NYT: “Inside OpenAI’s Crisis Over the Future of Artificial Intelligence: Split over the Leadership of Sam Altman, Board Members and Executives Turned on One Another. Their Brawl Exposed the Cracks at the Heart of the AI Movement” (excerpts). Mostly a gossipy narrative from both the Altman & D'Angelo sides, so I'll just copy over my HN comment:
- another reporting of internal OA complaints about Altman's manipulative/divisive behavior, see previously on HN
- previously we knew Altman had been dividing-and-conquering the board by lying about others wanting to fire Toner; this says that, specifically, Altman had lied about McCauley wanting to fire Toner; presumably, this was said to D'Angelo.
- Concerns over Tigris had been mooted, but this says specifically that the board thought Altman had not been forthcoming about it; still unclear if he had tried to conceal Tigris entirely or if he had failed to mention something more specific like who he was trying to recruit for capital.
- Sutskever had threatened to quit after Jakub Pachocki's promotion; previous reporting had said he was upset about it, but hadn't hinted at him being so angry as to threaten to quit OA
Sutskever doesn't seem to be too rank/promotion-hungry (why would he be? he is, to quote one article 'a god of AI' and is now one of the most-cited researchers ever) and one would think it would take a lot for him to threaten to quit OA... Coverage thus far seems to be content to take the attitude that he must be some sort of 5-year-old child throwing a temper tantrum over a slight, but I find this explanation inadequate.
I suspect there is much more to this thread, and it may tie back to Superalignment & broken promises about compute-quotas. (Permanently subverting or killing safety research does seem like adequate grounds for Sutskever to deliver an ultimatum, for the same reasons that the Board killing OA can be the best of several bad options.)
- Altman was 'bad-mouthing the board to OpenAI executives'; this likely refers to the Slack conversation Sutskever was involved in reported by WSJ a while ago about how they needed to purge everyone EA-connected
- Altman was initially going to cooperate and even offered to help, until Brian Chesky & Ron Conway riled him up. (I believe this because it is unflattering to Altman.)
- the OA outside lawyer told them they needed to clam up and not do PR like the Altman faction was
- both sides are positioning themselves for the independent report overseen by Summers as the 'broker'; hence, Altman/Conway leaking the texts quoted at the end posturing about how 'the board wants silence' (not that one could tell from the post-restoration leaking & reporting...) and how his name needs to be cleared.
- Paul Graham remains hilariously incapable of saying anything unambiguously nice about Altman
One additional thing I'd note: the NYT mentions quotes from a WhatsApp channel that contained hundreds of top SV execs & VCs. This is the sort of thing that you always suspected existed given how coordinated some things seem to be, but this is a striking confirmation of its existence. It is also, given the past history of SV like the big wage-fixing scandals of Steve Jobs et al, something I expect contains a lot of statements that they really would not want a prosecutor seeing. One wonders how prudent they have been about covering up message history, and complying with SEC & other regulatory rules about destruction of logs, and if some subpoenas are already winging their way out?
EDIT: Zvi commentary reviewing the past few articles, hitting most of the same points: https://thezvi.substack.com/p/openai-leaks-confirm-the-story/
↑ comment by gwern · 2023-12-24T20:51:59.152Z · LW(p) · GW(p)
The WSJ dashes our hopes for a quiet Christmas by dropping on Christmas Eve a further extension of all this reporting: "Sam Altman’s Knack for Dodging Bullets—With a Little Help From Bigshot Friends: The OpenAI CEO lost the confidence of top leaders in the three organizations he has directed, yet each time he’s rebounded to greater heights", Seetharam et al 2023-12-24 (Archive.is, HN; annotated excerpts).
This article confirms - among other things - what I suspected about there being an attempt to oust Altman from Loopt for the same reasons as YC/OA, adds some more examples of Altman amnesia & behavior (including what is, since people apparently care, being caught in a clearcut unambiguous public lie), names the law firm in charge of the report (which is happening), and best of all, explains why Sutskever was so upset about the Jakub Pachocki promotion.
- Loopt coup: Vox had hinted at this in 2014 but it was unclear; however, WSJ specifically says that Loopt was in chaos and Altman kept working on side-projects while mismanaging Loopt (so, nearly identical to the much later, unconnected, YC & OA accusations), leading the 'senior employees' to (twice!) appeal to the board to fire Altman. You know who won. To quote one of his defenders:
“If he imagines something to be true, it sort of becomes true in his head,” said Mark Jacobstein, co-founder of Jimini Health who served as Loopt’s chief operating officer. “That is an extraordinary trait for entrepreneurs who want to do super ambitious things. It may or may not lead one to stretch, and that can make people uncomfortable.”
- Sequoia Capital: the journalists also shed light on the Loopt acquisition. There have long been rumors about the Loopt acquisition by Green Dot being shady (also covered in that Vox article), especially as Loopt didn't seem to go anywhere under Green Dot so it hardly looked like a great or natural acquisition - but it was unclear how, and the discussions seemed to guess that Altman had sold Loopt in a way which made him a lot of money but shafted investors. But it seems that what actually happened was that, again on the side of his Loopt day-job, Altman was doing freelance VC work for Sequoia Capital, and was responsible for getting them into one of the most lucrative startup rounds ever, Stripe. Sequoia then 'helped engineer an acquisition by another Sequoia-backed company', Green Dot.
The journalists don't say this, but the implication here is that Loopt's acquisition was a highly-deniable kickback to Altman from Sequoia for Stripe & others.
- Greg Brockman: also Stripe-related, Brockman's apparently intense personal loyalty to Altman may stem from this period, where Altman apparently did Brockman a big favor by helping broker the sale of his Stripe shares.
- YC firing: some additional details like Jessica Livingston instigating it, one grievance being his hypocrisy over banning outside funds for YC partners (other than him), and also a clearcut lie by Altman: he posted the YC announcement blog post saying he had been moved to YC Chairman... but YC had not, and never did, agree to that. So that's why the YC announcements kept getting edited - he'd tried to hustle them into appointing him Chairman to save face.
To smooth his exit, Altman proposed he move from president to chairman. He pre-emptively published a blog post on the firm’s website announcing the change. But the firm’s partnership had never agreed, and the announcement was later scrubbed from the post.
Nice try, but no cigar. (This is something to keep in mind given my earlier comments about Altman talking about his pride in creating a mature executive team etc - if, after the report is done, he stops being CEO and becomes OA board chairman, that means he's been kicked out of OA.)
- Ilya Sutskever: as mentioned above, I felt that we did not have the full picture of why Sutskever was so angered by Jakub Pachocki's promotion. This answers it! Sutskever was angry because he has watched Altman long enough to understand what the promotion meant:
In early fall this year, Ilya Sutskever, also a board member, was upset because Altman had elevated another AI researcher, Jakub Pachocki, to director of research, according to people familiar with the matter. Sutskever told his board colleagues that the episode reflected a long-running pattern of Altman’s tendency to pit employees against one another or promise resources and responsibilities to two different executives at the same time, yielding conflicts, according to people familiar with the matter…Altman has said he runs OpenAI in a “dynamic” fashion, at times giving people temporary leadership roles and later hiring others for the job. He also reallocates computing resources between teams with little warning, according to people familiar with the matter. [cf. Atlantic, WaPo, the anonymous letter]
Ilya recognized the pattern perhaps in part because he has receipts:
In early October, OpenAI’s chief scientist approached some fellow board members to recommend Altman be fired, citing roughly 20 examples of when he believed Altman misled OpenAI executives over the years. That set off weeks of closed-door talks, ending with Altman’s surprise ouster days before Thanksgiving.
- Speaking of receipts, the law firm for the independent report has been chosen: WilmerHale. Unclear if they are investigating yet, but I continue to doubt that it will be done before the tender closes early next month.
- the level of sourcing indicates Altman's halo is severely damaged ("This article is based on interviews with dozens of executives, engineers, current and former employees and friends of Altman’s, as well as investors."). Before, all of this was hidden; as the article notes of the YC firing:
For years, even some of Altman’s closest associates—including Peter Thiel, Altman’s first backer for Hydrazine—didn’t know the circumstances behind Altman’s departure.
If even Altman's mentor didn't know, no wonder no one else seems to have known - aside from those directly involved in the firing, like, for example, YC board member Emmett Shear. But now it's all on the record, with even Graham & Livingston acknowledging the firing (albeit quibbling a little: come on, Graham, if you 'agree to leave immediately', that's still 'being fired').
- Tasha McCauley's role finally emerges a little more: she had been trying to talk to OA executives without Altman's presence, and Altman demanded to be informed of any Board communication with employees. It's unclear if he got his way.
So, a mix of confirmation and minor details continuing to flesh out the overall saga of Sam Altman as someone who excels at finance, corporate knife-fighting, & covering up manipulation but who is not actually that good at managing or running a company (reminiscent of Xi Jinping), and a few surprises for me.
On a minor level, if McCauley had been trying to talk to employees, then it's more likely that she was the one that the whistleblowers like Nathan Labenz had been talking to rather than Helen Toner; Toner might have been just the weakest link, with her public writings providing a handy excuse. (...Something something 5 lines by the most honest of men...) On a more important level, if Sutskever has a list of 20 documented instances (!) of Altman lying to OA executives (and the Board?), then the Slack discussion may not have been so important after all, and Altman may have good reason to worry - he keeps saying he doesn't recall any of these unfortunate episodes, and it is hard to defend yourself if you can no longer remember what might turn up...
↑ comment by gwern · 2024-03-08T00:12:45.175Z · LW(p) · GW(p)
An OA update: it's been quiet, but the investigation is over. And Sam Altman won. (EDIT: yep [LW(p) · GW(p)].)
To recap, because I believe I haven't been commenting on this since December (this is my last big comment, skimming my LW profile): WilmerHale was brought in to do the investigation. The tender offer, to everyone's relief, went off. A number of embarrassing new details about Sam Altman have surfaced: in particular, about his enormous chip fab plan with substantial interest from giants like Temasek, and how the OA VC Fund turns out to be owned by Sam Altman (his explanation was it saved some paperwork and he just forgot to ever transfer it to OA). Ilya Sutskever remains in hiding and lawyered up (his silence became particularly striking with the release of Sora). There have been increasing reports the past week or two that the WilmerHale investigation was coming to a close - and I am told that the investigators were not offering confidentiality and the investigation was narrowly scoped to the firing. (There was also some OA drama with the Musk lawfare & the OA response, but aside from offering an object lesson in how not to redact sensitive information, it's both irrelevant & unimportant.)
The news today comes from the NYT leaking information from the final report: "Key OpenAI Executive [Mira Murati] Played a Pivotal Role in Sam Altman’s Ouster" (mirror; EDIT: largely confirmed by Murati in internal note).
The main theme of the article is clarifying Murati's role: as I speculated, she was in fact telling the Board about Altman's behavior patterns, and it fills in that she had gone further and written it up in a memo to him, and even threatened to leave with Sutskever.
But it reveals a number of other important claims: the investigation is basically done and wrapping up. The new board apparently has been chosen. Sutskever's lawyer has gone on the record stating that Sutskever did not approach the board about Altman (?!). And it reveals the board confronted Altman over his ownership of the OA VC Fund (in addition to all his many other conflicts of interest**).
So, what does that mean?
First, as always, in a war of leaks, cui bono? Who is leaking this to the NYT? Well, it's not the pro-Altman faction: they are at war with the NYT, and these leaks do them no good whatsoever. It's not the lawyers: these are high-powered elite lawyers, hired for confidentiality and discretion. It's not Murati or Sutskever, given their lack of motive, and the former's panicked internal note & Sutskever's lawyer's denial. Of the current interim board (which is about to finish its job and leave, handing it over to the expanded replacement board), probably not Larry Summers/Bret Taylor - they were brought on to oversee the report as neutral third party arbitrators, and if they (a simple majority of the temporary board) want something in their report, no one can stop them from putting it there. It could be Adam D'Angelo or the ex-board: they are the ones who don't control the report, and they also already have access to all of the newly-leaked-but-old information about Murati & Sutskever & the VC Fund.
So, it's the anti-Altman faction, associated with the old board. What does that mean?
I think that what this leak indirectly reveals is simple: Sam Altman has won. The investigation will exonerate him, and it is probably true that it was so narrowly scoped from the beginning that it was never going to plausibly provide grounds for his ouster. These leaks are a loser's spoiler move: the last gasps of the anti-Altman faction, reduced to leaking bits from the final report to friendly media (Metz/NYT) to annoy Altman and strike first. They got some snippets out before the Altman faction shops around highly selective excerpts from the finalized official report to its own friendly media outlets (the usual suspects - The Information, Semafor, Kara Swisher) to set the official record (at which point the rest of the confidential report is sent down the memory hole). Welp, it's been an interesting few months, but l'affaire Altman is over. RIP.
Evidence, aside from simply asking who benefits from these particular leaks at the last minute: Sutskever remains in hiding & his lawyer is implausibly denying he had anything to do with it, while if you read Altman on social media, you'll notice that he's become ever more talkative since December, particularly in the last few weeks - glorying in the instant memeification of '$7 trillion' - as has OA PR*, and we have heard no more rhetoric about what an amazing team of execs OA has and how he's so proud to have tutored them to replace him. Because there will be no need to replace him now. The only major reasons he would have to leave are if it's necessary as a stepping stone to something even higher (eg. running the $7t chip fab consortium, running for US President) or something like a health issue.
So, upshot: I speculate that the report will exonerate Altman (although it can't restore his halo, as it cannot & will not address things like his firing from YC which have been forced out into public light by this whole affair) and he will be staying as CEO and may be returning to the expanded board; the board will probably include some weak uncommitted token outsiders for their diversity and independence, but will have an Altman plurality, and we will see gradual selective attrition/replacement in favor of Altman loyalists until he has a secure majority robust to at least 1 flip and preferably 2. Once he has retaken irrevocable control of OA, further EA purges should be unnecessary, and Altman will probably refocus on the other major weakness exposed by the coup: the fact that his frenemy MS controls OA's lifeblood. (The fact that MS was such a potent weapon for Altman in the fight is a feature while he's outside the building, but a severe bug once he's back inside.) People are laughing at the '$7 trillion'. But Altman isn't laughing. Those GPUs are life and death for OA now. And why should he believe he can't do it? Things have always worked out for him before...
Predictions, if being a bit more quantitative will help clarify my speculations here: Altman will still be CEO of OA on June 1st (85%); the new OA board will include Altman (60%); Ilya Sutskever and Mira Murati will leave OA or otherwise take on some sort of clearly diminished role by year-end (90%, 75%; cf. Murati's desperate-sounding internal note); the full unexpurgated non-summary report will not be released (85%, may be hard to judge because it'd be easy to lie about); serious chip fab/Tigris efforts will continue (75%); Microsoft's observer seat will be upgraded to a voting seat (25%).
* Eric Newcomer (usually a bit more acute than this) asks "One thing that I find weird: OpenAI comms is giving very pro Altman statements when the board/WilmerHale are still conducting the investigation. Isn't communications supposed to work for the company, not just the CEO? The board is in charge here still, no?" NARRATOR: "The board is not in charge still."
** Compare the current OA PR statement on the VC Fund to Altman's past position on, say, Helen Toner or Reid Hoffman or Shivon Zilis, or Altman's investment in chip startups touting letters of commitment from OA, or his ongoing Hydrazine investment in OA which, sadly, he has never quite had the time to dispose of in any of the OA tender offers. As usual, CoIs only apply to people Altman doesn't trust - "for my friends, everything; for my enemies, the law".
EDIT: Zvi commentary: https://thezvi.substack.com/p/openai-the-board-expands
Replies from: gwern, matthew-barnett, Zach Stein-Perlman↑ comment by gwern · 2024-09-25T19:49:20.351Z · LW(p) · GW(p)
Ilya Sutskever and Mira Murati will leave OA or otherwise take on some sort of clearly diminished role by year-end (90%, 75%; cf. Murati's desperate-sounding internal note)
Mira Murati announced today she is resigning from OA. (I have also, incidentally, won a $1k bet with an AI researcher on this prediction.)
Replies from: nikita-sokolsky↑ comment by Nikita Sokolsky (nikita-sokolsky) · 2024-09-25T23:19:10.042Z · LW(p) · GW(p)
Do you think this will have any impact on OpenAI's future revenues / ability to deliver frontier-level models?
Replies from: gwern↑ comment by gwern · 2024-09-25T23:55:50.595Z · LW(p) · GW(p)
See my earlier comments on 23 June 2024 about what 'OA rot' would look like; I do not see any revisions necessary given the past 3 months.
As for Murati finally leaving (perhaps she was delayed by the voice shipping delays), I don't think it matters too much as far as I can tell (not like Sutskever or Brockman leaving): she was competent but not critical. Probably the bigger deal is that her leaving is apparently a big surprise to a lot of OAers (maybe I should've taken more bets?), and so will come as a blow to morale and remind people of last year's events.
EDIT: Barret Zoph & Bob McGrew are now gone too. Altman has released a statement, confirming that Murati only quit today:
...When Mira [Murati] informed me this morning that she was leaving, I was saddened but of course support her decision. For the past year, she has been building out a strong bench of leaders that will continue our progress.
I also want to share that Bob [McGrew] and Barret [Zoph] have decided to depart OpenAI. Mira, Bob, and Barret made these decisions independently of each other and amicably, but the timing of Mira’s decision was such that it made sense to now do this all at once, so that we can work together for a smooth handover to the next generation of leadership.
...Mark [Chen] is going to be our new SVP of Research and will now lead the research org in partnership with Jakub [Pachocki] as Chief Scientist. This has been our long-term succession plan for Bob someday; although it’s happening sooner than we thought, I couldn’t be more excited that Mark is stepping into the role. Mark obviously has deep technical expertise, but he has also learned how to be a leader and manager in a very impressive way over the past few years.
Josh[ua] Achiam is going to take on a new role as Head of Mission Alignment, working across the company to ensure that we get all pieces (and culture) right to be in a place to succeed at the mission.
...Mark, Jakub, Kevin, Srinivas, Matt, and Josh will report to me. I have over the past year or so spent most of my time on the non-technical parts of our organization; I am now looking forward to spending most of my time on the technical and product parts of the company.
...Leadership changes are a natural part of companies, especially companies that grow so quickly and are so demanding. I obviously won’t pretend it’s natural for this one to be so abrupt, but we are not a normal company, and I think the reasons Mira explained to me (there is never a good time, anything not abrupt would have leaked, and she wanted to do this while OpenAI was in an upswing) make sense.
(I wish Dr Achiam much luck in his new position at Hogwarts.)
Replies from: T3t↑ comment by RobertM (T3t) · 2024-09-26T01:29:37.324Z · LW(p) · GW(p)
It does not actually make any sense to me that Mira wanted to prevent leaks, and therefore didn't even tell Sam that she was leaving ahead of time. What would she be afraid of, that Sam would leak the fact that she was planning to leave... for what benefit?
Possibilities:
- She was being squeezed out, or otherwise knew her time was up, and didn't feel inclined to make it a maximally comfortable parting for OpenAI. She was willing to eat the cost of her own equity potentially losing a bunch of value if this derailed the ongoing investment round, as well as the reputational cost of Sam calling out the fact that she, the CTO of the most valuable startup in the world, resigned with no notice for no apparent good reason.
- Sam is lying or otherwise being substantially misleading about the circumstances of Mira's resignation, i.e. it was not in fact a same-day surprise to him. (And thinks she won't call him out on it?)
- ???
↑ comment by Matthew Barnett (matthew-barnett) · 2024-03-08T22:12:05.386Z · LW(p) · GW(p)
the new OA board will include Altman (60%)
Looks like you were right, at least if the reporting in this article is correct, and I'm interpreting the claim accurately.
Replies from: gwern↑ comment by gwern · 2024-03-08T22:26:54.370Z · LW(p) · GW(p)
At least from the intro, it sounds like my predictions were on-point: re-appointed Altman (I waffled about this at 60% because while his narcissism/desire to be vindicated required him to regain his board seat - anything less is a blot on his escutcheon - and the pragmatic desire to lock down the board also strongly militated for his reinstatement, it seemed so blatant a power grab in this context that surely he wouldn't dare...? guess he did), released to an Altman outlet (The Information), with 3 weak apparently 'independent' and 'diverse' directors to pad out the board and eventually be replaced by full Altman loyalists - although I bet if one looks closer into these three women (Sue Desmond-Hellmann, Nicole Seligman, & Fidji Simo), one will find at least one has buried Altman ties. (Fidji Simo, Instacart CEO, seems like the most obvious one there: Instacart was YC S12.)
Replies from: gwern↑ comment by gwern · 2024-03-08T23:14:16.935Z · LW(p) · GW(p)
The official OA press releases are out confirming The Information: https://openai.com/blog/review-completed-altman-brockman-to-continue-to-lead-openai https://openai.com/blog/openai-announces-new-members-to-board-of-directors
“I’m pleased this whole thing is over,” Altman said at a press conference Friday.
He's probably right.
As predicted, the full report will not be released, only the 'summary' focused on exonerating Altman. Also as predicted, 'the mountain has given birth to a mouse' and the report was narrowly scoped to just the firing: they bluster about "reviewing 30,000 documents" (easy enough when you can just grep Slack + text messages + emails...), but then admit that they looked only at "the events concerning the November 17, 2023 removal" and interviewed hardly anyone ("dozens of interviews" barely even covers the immediate dramatis personae, much less any kind of investigation into Altman's chip stuff, Altman's many broken promises, Brockman's complainers etc). Doesn't sound like they have much to show for over 3 months of work by the smartest & highest-paid lawyers, does it... It also seems like they indeed did not promise confidentiality or set up any kind of anonymous reporting mechanism, given that they mention no such thing and include setting up a hotline for whistleblowers as a 'recommendation' for the future (ie. there was no such thing before or during the investigation). So, it was a whitewash from the beginning. Tellingly, there is nothing about Microsoft, and no hint their observer will be upgraded (or that there still even is one). And while flattering to Brockman, there is nothing about Murati - free tip to all my VC & DL startup acquaintances, there's a highly competent AI manager who's looking for exciting new opportunities, even if she doesn't realize it yet.
Also entertaining is that you can see the media spin happening in real time. What WilmerHale signs off on:
WilmerHale found that the prior Board acted within its broad discretion to terminate Mr. Altman, but also found that his conduct did not mandate removal.
Which is... less than complimentary? One would hope a CEO does a little bit better than merely not engage in 'conduct which mandates removal'? And it turns into headlines like
"OpenAI’s Sam Altman Returns to Board After Probe Clears Him"
(Nothing from Kara Swisher so far, but judging from her Twitter, she's too busy promoting her new book and bonding with Altman over their mutual dislike of Elon Musk to spare any time for relatively-minor-sounding news.)
OK, so what was not as predicted? What is surprising?
This is not a full replacement board, but implies that Adam D'Angelo/Bret Taylor/Larry Summers are all staying on the board, at least for now. (So the new composition is D'Angelo/Taylor/Summers/Altman/Desmond-Hellmann/Seligman/Simo plus the unknown Microsoft non-voting observer.) This is surprising, but it may simply be a quotidian logistics problem - they hadn't settled on 3 more adequately diverse and prima-facie qualified OA board candidates yet, but the report was finished and it was more important to wind things up, and they'll get to the remainder later. (Perhaps Brockman will get his seat back?)
EDIT: A HNer points out that today, March 8th, is "International Women's Day", and this is probably the reason for the exact timing of the announcement. If so, they may well have already picked the remaining candidates (Brockman?), but those weren't women and so got left out of the announcement. Stay tuned, I guess. EDITEDIT: the video call/press conference seems to confirm that they do plan more board appointments: "OpenAI will continue to expand the board moving forward, according to a Zoom call with reporters." So that is consistent with the hurried women-only announcement.
Replies from: jacques-thibodeau, ESRogs↑ comment by jacquesthibs (jacques-thibodeau) · 2024-09-25T20:09:29.501Z · LW(p) · GW(p)
And while flattering to Brockman, there is nothing about Murati - free tip to all my VC & DL startup acquaintances, there's a highly competent AI manager who's looking for exciting new opportunities, even if she doesn't realize it yet.
Heh, here it is: https://x.com/miramurati/status/1839025700009030027
↑ comment by ESRogs · 2024-03-09T07:41:36.196Z · LW(p) · GW(p)
Nitpick: Larry Summers not Larry Sumners
Replies from: gwern↑ comment by gwern · 2024-03-09T14:28:10.767Z · LW(p) · GW(p)
(Fixed. This is a surname typo I make an unbelievable number of times because I reflexively overcorrect it to 'Sumners', due to reading a lot more of Scott Sumner than Larry Summers. Ugh - just caught myself doing it again in a Reddit comment...)
Replies from: ESRogs↑ comment by Zach Stein-Perlman · 2024-03-25T21:06:22.806Z · LW(p) · GW(p)
Hydrazine investment in OA
Source?
Replies from: Zach Stein-Perlman↑ comment by Zach Stein-Perlman · 2024-03-27T02:54:14.864Z · LW(p) · GW(p)
@gwern [LW · GW] I've failed to find a source saying that Hydrazine invested in OpenAI. If it did, that would be a big deal; it would make this a lie.
Replies from: gwern↑ comment by gwern · 2024-03-27T03:35:09.063Z · LW(p) · GW(p)
It was either Hydrazine or YC. In either case, my point remains true: he has chosen not to dispose of his OA stake, whatever vehicle it is held in, even though it would be easy for someone of his financial acumen to do so by a sale or equivalent arrangement. That forces an embarrassing asterisk onto his claims to have no direct financial conflict of interest in OA LLC - one which comes up regularly in bad OA PR (particularly from people who believe it is less than candid to say you have no financial interest in OA when you totally do), and a stake which might be quite large at this point* - and so it is particularly striking given his attitude towards much smaller conflicts supposedly risking bad OA PR. (This is in addition to the earlier conflicts of interest in Hydrazine while running YC, or the interest of outsiders in investing in Hydrazine, apparently as a stepping stone towards OA.)
* if he invested a 'small' amount via some vehicle before he even went full-time at OA, when OA was valued at some very small amount like $50m or $100m, say, and OA's now valued at anywhere up to $90,000m or >900x more, and further, he strongly believes it's going to be worth far more than that in the near-future... Sure, it may be worth 'just' $500m or 'just' $1000m after dilution or whatever, but to most people that's pretty serious money!
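To make the rough numbers in that footnote concrete, here is a minimal back-of-the-envelope sketch; every figure in it (a $1m check, a $100m entry valuation, 50% dilution) is a purely illustrative assumption, not a claim about any actual investment:

```python
# Illustrative stake arithmetic only - all inputs are hypothetical assumptions.
early_valuation = 100e6    # assume an early OpenAI valuation of ~$100m
current_valuation = 90e9   # the ~$90b figure cited above
initial_stake = 1e6        # assume a 'small' $1m investment
dilution_factor = 0.5      # assume later rounds roughly halve the early holder's share

multiple = current_valuation / early_valuation        # 900x on paper
paper_value = initial_stake * multiple                # $900m before dilution
diluted_value = paper_value * dilution_factor         # ~$450m after assumed dilution

print(f"paper multiple: {multiple:.0f}x")
print(f"undiluted value: ${paper_value / 1e6:.0f}m")
print(f"after assumed dilution: ${diluted_value / 1e6:.0f}m")
```

Even with aggressive dilution assumptions, a tiny early check at those valuations lands in the hundreds of millions, which is the point of the footnote.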
↑ comment by Rebecca (bec-hawk) · 2023-12-25T00:41:24.155Z · LW(p) · GW(p)
Why do you think McCauley is likely to be the board member Labenz spoke to? I had inferred that it was someone not particularly concerned about safety, given that Labenz reported them saying they could easily request access to the model if they'd wanted to (and hadn't). I took the point of the anecdote to be 'here was a board member not concerned about safety'.
Replies from: gwern↑ comment by gwern · 2023-12-25T04:04:21.606Z · LW(p) · GW(p)
Because there is not currently any evidence that Toner was going around talking to a bunch of people, whereas this says McCauley was doing so. If I have to guess "did Labenz talk to the person who was talking to a bunch of people in OA, or did he talk to the person who was as far as I know not talking to a bunch of people in OA?", I am going to guess the former.
Replies from: bec-hawk↑ comment by Rebecca (bec-hawk) · 2023-12-25T05:14:56.507Z · LW(p) · GW(p)
They weren’t the only non-employee board members though - that’s what I meant by the part about not being concerned about safety: I took it to rule out both Toner and McCauley.
(Although if for some other reason you were only looking at Toner and McCauley, then no, I would say the person going around speaking to OAI employees is _less_ likely to be out of the loop on GPT-4’s capabilities.)
Replies from: gwern↑ comment by gwern · 2023-12-25T17:51:03.899Z · LW(p) · GW(p)
The other ones are unlikely. Shivon Zilis & Reid Hoffman had left by this point; Will Hurd might or might not still be on the board at this point but wouldn't be described nor recommended by Labenz's acquaintance as researching AI safety, as that does not describe Hurd or D'Angelo; Brockman, Altman, and Sutskever are right out (Sutskever researches AI safety but Superalignment was a year away); by process of elimination, over 2023, the only board members he could have been plausibly contacting would be Toner and McCauley, and while Toner weakly made more sense before, now McCauley does.
(The description of them not having used the model unfortunately does not distinguish either one - none of the writings connected to them sound like they have all that much hands-on experience and would be eagerly prompt-engineering away at GPT-4-base the moment they got access. And I agree that this is a big mistake, but it is, even more unfortunately, an extremely common one - I remain shocked that Altman had apparently never actually used GPT-3 before he basically bet the company on it. There is a widespread attitude, even among those bullish about the economics, that GPT-3 or GPT-4 are just 'tools', mere 'stochastic parrots', with no puzzling internal dynamics or complexities. I have been criticizing this from the start, but the problem is, 'sampling can show the presence of knowledge and not the absence', so if you don't think there's anything interesting there, your prompts are a mirror which reflects only your low expectations; and the safety tuning makes it worse by hiding most of the agency & anomalies, often in ways that look like good things. For example, the rhyming poetry ought to alarm everyone who sees it, because of what it implies underneath - but it doesn't. This is why descriptions of Sydney or GPT-4-base [LW · GW] are helpful: they are warning shots from the shoggoth behind the friendly tool-AI ChatGPT UI mask.)
Replies from: bec-hawk↑ comment by Rebecca (bec-hawk) · 2023-12-31T20:43:07.933Z · LW(p) · GW(p)
I think you might be misremembering the podcast? Nathan said that he was assured that the board as a whole was serious about safety, but I don’t remember the specific board member being recommended as someone researching AI safety (or otherwise more pro safety than the rest of the board). I went back through the transcript to check and couldn’t find any reference to what you’ve said.
“ And ultimately, in the end, basically everybody said, “What you should do is go talk to somebody on the OpenAI board. Don’t blow it up. You don’t need to go outside of the chain of command, certainly not yet. Just go to the board. And there are serious people on the board, people that have been chosen to be on the board of the governing nonprofit because they really care about this stuff. They’re committed to long-term AI safety, and they will hear you out. And if you have news that they don’t know, they will take it seriously.” So I was like, “OK, can you put me in touch with a board member?” And so they did that, and I went and talked to this one board member. And this was the moment where it went from like, “whoa” to “really whoa.””
Replies from: gwern↑ comment by gwern · 2023-12-31T23:52:54.296Z · LW(p) · GW(p)
I was not referring to the podcast (which I haven't actually read yet because from the intro it seems wildly out of date and from a long time ago) but to Labenz's original Twitter thread turned into a Substack post. I think you misinterpret what he is saying in that transcript because it is loose and extemporaneous: "they're committed" could just as easily refer to the "serious people on the board" who have "been chosen" for that (implying that there are other members of the board not chosen for that); and that is what he says in the written-down post:
I consulted with a few friends in AI safety research… The Board, everyone agreed, included multiple serious people who were committed to safe development of AI and would definitely hear me out, look into the state of safety practice at the company, and take action as needed. What happened next shocked me. The Board member I spoke to was largely in the dark about GPT-4. They had seen a demo and had heard that it was strong, but had not used it personally. They said they were confident they could get access if they wanted to. I couldn’t believe it. I got access via a “Customer Preview” 2+ months ago, and you as a Board member haven’t even tried it‽ This thing is human-level, for crying out loud (though not human-like!).
Replies from: bec-hawk
↑ comment by Rebecca (bec-hawk) · 2024-01-01T03:46:13.402Z · LW(p) · GW(p)
This quote doesn’t say anything about the board member/s being people who are researching AI safety though - it’s Nathan’s friends who are in AI safety research not the board members.
I agree that based on this quote, it could have very well been just a subset of the board. But I believe Nathan’s wife works for CEA (and he’s previously MCed an EAG), and Tasha is (or was?) on the board of EVF US, and so idk, if it’s Tasha he spoke to and the “multiple people” was just her and Helen, I would have expected a rather different description of events/vibe. E.g. something like ‘I googled who was on the board and realised that two of them were EAs, so I reached out to discuss’. I mean maybe that is closer to what happened and it’s just being obfuscated, either way is confusing to me tbh.
Btw, by “out of date” do you mean relative to now, or to when the events took place? From what I can see, the tweet thread, the substack post and the podcast were all published the same day - Nov 22nd 2023. The link I provided is just 80k excerpting the original podcast.
↑ comment by gwern · 2024-05-21T21:52:59.499Z · LW(p) · GW(p)
I suspect there is much more to this thread, and it may tie back to Superalignment & broken promises about compute-quotas.
The Superalignment compute-quota flashpoint is now confirmed. Aside from Jan Leike explicitly calling out compute-quota shortages post-coup (which strictly speaking doesn't confirm shortages pre-coup), Fortune is now reporting that this was a serious & longstanding issue:
...According to a half-dozen sources familiar with the functioning of OpenAI’s Superalignment team, OpenAI never fulfilled its commitment to provide the team with 20% of its computing power.
Instead, according to the sources, the team repeatedly saw its requests for access to graphics processing units, the specialized computer chips needed to train and run AI applications, turned down by OpenAI’s leadership, even though the team’s total compute budget never came close to the promised 20% threshold.
The revelations call into question how serious OpenAI ever was about honoring its public pledge, and whether other public commitments the company makes should be trusted. OpenAI did not respond to requests to comment for this story.
...It was a task so important that the company said in its announcement that it would commit “20% of the compute we’ve secured to date over the next four years” to the effort. But a half-dozen sources familiar with the Superalignment team’s work said that the group was never allocated this compute. Instead, it received far less in the company’s regular compute allocation budget, which is reassessed quarterly.
One source familiar with the Superalignment team’s work said that there were never any clear metrics around exactly how the 20% amount was to be calculated, leaving it subject to wide interpretation. For instance, the source said the team was never told whether the promise meant “20% each year for four years” or “5% a year for four years” or some variable amount that could wind up being “1% or 2% for the first three years, and then the bulk of the commitment in the fourth year.” In any case, all the sources Fortune spoke to for this story confirmed that the Superalignment team was never given anything close to 20% of OpenAI’s secured compute as of July 2023.
OpenAI researchers can also make requests for what is known as “flex” compute—access to additional GPU capacity beyond what has been budgeted—to deal with new projects between the quarterly budgeting meetings. But flex requests from the Superalignment team were routinely rejected by higher-ups, these sources said.
Bob McGrew, OpenAI’s vice president of research, was the executive who informed the team that these requests were being declined, the sources said, but others at the company, including chief technology officer Mira Murati, were involved in making the decisions. Neither McGrew nor Murati responded to requests to comment for this story.
While the team did carry out some research—it released a paper detailing its experiments in successfully getting a less powerful AI model to control a more powerful one in December 2023—the lack of compute stymied the team’s more ambitious ideas, the source said. After resigning, Leike on Friday published a series of posts on Twitter in which he criticized his former employer, saying “safety culture and processes have taken a backseat to shiny products.” He also said that “over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.”
5 sources familiar with the Superalignment team’s work backed up Leike’s account, saying that the problems with accessing compute worsened in the wake of the pre-Thanksgiving showdown between Altman and the board of the OpenAI nonprofit foundation.
...One source disputed the way the other sources Fortune spoke to characterized the compute problems the Superalignment team faced, saying they predated Sutskever’s participation in the failed coup, plaguing the group from the get-go.
While there have been some reports that Sutskever was continuing to co-lead the Superalignment team remotely, sources familiar with the team’s work said this was not the case and that Sutskever had no access to the team’s work and played no role in directing the team after Thanksgiving. With Sutskever gone, the Superalignment team lost the only person on the team who had enough political capital within the organization to successfully argue for its compute allocation, the sources said.
...The people who spoke to Fortune did so anonymously, either because they said they feared losing their jobs, or because they feared losing vested equity in the company, or both. Employees who have left OpenAI have been forced to sign separation agreements that include a strict non-disparagement clause that says the company can claw back their vested equity if they criticize the company publicly, or if they even acknowledge the clause’s existence. And employees have been told that anyone who refuses to sign the separation agreement will forfeit their equity as well.
↑ comment by Wei Dai (Wei_Dai) · 2023-12-09T03:41:31.851Z · LW(p) · GW(p)
There seems to be very little discussion of this story on Twitter. WP's tweet about it got only 75k views and 59 likes as of now, even though WP has 2M followers.
(I guess Twitter will hide your tweets even from your followers if the engagement rate is low enough. Not sure what the cutoff is, but 1 like to 100 views doesn't seem uncommon for tweets, and this one is only 1:1000. BTW what's a good article to read to understand Twitter better?)
Replies from: gwern↑ comment by gwern · 2023-12-09T14:43:09.298Z · LW(p) · GW(p)
There's two things going on. First, Musk-Twitter appears to massively penalize external links. Musk has vowed to fight 'spammers' who post links on Twitter to other sites (gasp) - the traitorous scum! Substack is only the most abhorred of these vile parasites, but all shall be brought to justice in due course. There is no need for other sites. You should be posting everything on Twitter as longform tweets (after subscribing), obviously.
You only just joined Twitter so you wouldn't have noticed the change, but even direct followers seem to be less likely to see a tweet if you've put a link in it. So tweeters are increasingly reacting by putting the external link at the end of a thread in a separate quarantine tweet, not bothering with the link at all, or just leaving Twitter under the constant silent treatment that high-quality tweeting gets you these days.* So, many of the people who would be linking or discussing it are either not linking it or not discussing it, and don't show up in the WaPo thread or by a URL search.
Second, OAers/pro-Altman tweets are practicing the Voldemort strategy: instead of linking the WaPo article at all (note that roon, Eigenrobot etc don't show up at all in the URL search), they are tweeting screenshots or Archive.is links. This is unnecessary (aside from the external link penalty of #1) since the WaPo has one of the most porous paywalls around which will scarcely hinder any readers, but this lets them inject their spin since you have to retweet them if you want to reshare it at all, impedes reading the article yourself to see if it's as utterly terrible and meaningless as they claim, and makes it harder to search for any discussion (what, are you going to know to search for the random archive.is snapshot...? no, of course not).
* I continue to stubbornly include all relevant external links in my tweets rather than use workarounds, and see the penalty constantly. It has definitely soured me even further on Musk-Twitter, particularly as it is contrary to the noises Musk has made about the importance of freedom of speech and higher reliability of tweets - yeah, asshole, how are you going to have highly reliable tweets or a good information ecosystem if including sources & references is almost like a self-imposed ban? And then you share ad revenue with subscribers who tweet the most inflammatory poorly-sourced stuff, great incentive design you've hit upon... I'm curious to see how the experience is going to degrade even further - I wouldn't put it past Musk to make subscriptions mandatory to try to seed the 'X everything app' as a hail mary for the failing Twitter business model. At least that might finally be enough to canonicalize a successor everyone can coordinate a move to.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2023-12-09T15:48:05.064Z · LW(p) · GW(p)
Thanks for the explanations, but I'm not noticing a big "external links" penalty on my own tweets. Found some discussion of this penalty via Google, so it seems real but maybe not that "massive"? Also some of it dates to before Musk purchased Twitter. Can you point me to anything that says he increased the penalty by a lot?
Ah Musk actually published Twitter's algorithms, confirming the penalty. Don't see anyone else saying that he increased the penalty though.
BTW why do you "protect" your account (preventing non-followers from seeing your tweets)?
Replies from: gwern↑ comment by gwern · 2023-12-11T14:51:01.408Z · LW(p) · GW(p)
Ah Musk actually published Twitter's algorithms, confirming the penalty. Don't see anyone else saying that he increased the penalty though.
'The algorithm' is an emergent function of the entire ecosystem. I have no way of knowing what sort of downstream effects a tweak here or there would cause or the effects of post-Musk changes. I just know what I see: my tweets appear to have plummeted since Musk took over, particularly when I link to my new essays or documents etc.
If you want to do a more rigorous analysis, I export my Twitter analytics every few months (thank goodness Musk hasn't disabled that to try to upsell people to the subscription - maybe he doesn't know it's there?) and could provide you my archives. (BTW, there is a moving window where you can only get the last few months, so if you think you will ever be interested in your Twitter traffic numbers, you need to start exporting them every 2-3 months now, or else the historical data will become inaccessible. I don't know if you can restore access to old ones by signing up as an advertiser.) EDIT: I looked at the last full pre-Musk month and my last month, and I've lost ~75% of views/clicks/interactions, despite trying to use Twitter in the same way.
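(For anyone who wants to run the same before/after comparison on their own exports, a minimal sketch is below; the column names and filenames are assumptions about the export format rather than a documented schema, so adjust them to whatever headers your CSVs actually use.)

```python
# Minimal sketch: compare two exported Twitter analytics CSVs (one pre-Musk month,
# one recent month) and compute the percentage change. The "impressions" and
# "engagements" column names and the filenames are assumptions for illustration.
import pandas as pd

def monthly_totals(path: str) -> pd.Series:
    df = pd.read_csv(path)
    return df[["impressions", "engagements"]].sum()

before = monthly_totals("tweet_activity_2022-09.csv")  # hypothetical filenames
after = monthly_totals("tweet_activity_2023-11.csv")
pct_change = (after - before) / before * 100
print(pct_change.round(1))  # e.g. around -75% if the pattern described above holds
```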
As for the 'published' algorithm, I semi-believe it is genuine (albeit doubtless incomplete) because Musk was embarrassed that it exposed how some parts of the new algorithm are manipulating Twitter to make Musk look more popular (confirming earlier reporting that Musk had ordered such changes after getting angry his views were dropping due to his crummy tweets), but that is also why it hasn't been updated in almost half a year, apparently. God knows what the real thing is like by now...
↑ comment by Rebecca (bec-hawk) · 2023-12-25T00:38:02.982Z · LW(p) · GW(p)
Could you link to some examples of “ OAers being angry on Twitter today, and using profanity & bluster and having oddly strong opinions about how it is important to refer to roon as @tszzl and never as @roon”? I don’t have a twitter account so can’t search myself
↑ comment by lc · 2023-12-05T09:53:03.020Z · LW(p) · GW(p)
I've read your explanations of what happened, and it still seems like the board acted extremely incompetently. Call me an armchair general if you want. Specific choices that I take severe issue with:
- The decision to fire Sam, instead of just ejecting him from the board
Kicking Sam off the board, firing him, and kicking Greg off all at once with no real explanation is completely unnecessary and is also what ultimately gives Sam the casus belli for organizing the revolt to begin with. It's also unnecessary for defending Helen from Sam's attacks.
Consider what happens if Sam had just lost his board seat. First, his cost-benefit analysis looks different: Sam still has most of what he had before to lose, namely his actual position at OpenAI, and so probably no matter how mad he is he doesn't hold the entire organization hostage.
Second, he is way, way more limited in what he can justifiably publicly do in response. Taking the nuclear actions he did - quitting in protest and moving to Microsoft - in response to losing control over a board he shouldn't have control over in the first place would look disloyal and vindictive. And if/when Sam tries to use his position as CEO to sabotage the company or subvert the board further (this time lacking his own seat), you'll have more ammunition to fire him later if you really need to.
If I had been on the board, my first action after getting the five together is to call Greg and Mira into an office and explain what was going on. Then after a long conversation about our motivations (whether or not they'd agreed with our decision), I immediately call Sam in/over the internet and deliver the news that he is no longer a board member, and that the vote had already been passed. I then overtly and clearly explain the reasoning behind why he's losing the board seat ("we felt you were trying to compromise the integrity of the board with your attacks on Helen and playing of board members against one another"), in front of everybody present. If it's appropriate, I give him the opportunity to save face and say that he voluntarily resigned to keep the board independent. Even if he doesn't go quietly, in this setting he's pretty much incapable of pulling any of the shenanigans he did over that weekend, and key people will know the surface reason of why he's being ejected and will interpret his next moves in that light.
- The decision never to explain why they ejected Sam.
Mindboggling. You can't just depose the favored leader of the organization and not at least lie in a satisfying way about why you did it. People were desperate to know why the board fired him and to believe it was something beyond EA affiliation. Which it was! So just fucking say that, and once you do now it's on Sam to prove or disprove your accusations. People, I'd wager even people inside OpenAI who feel some semblance of loyalty to him, did not actually need that much evidence to believe that Sam Altman - Silicon Valley's career politician - is a snake and was trying to corrupt the organization he was a part of. Say you have private information, explain precisely the things you explain in the above comment. That's way better than saying nothing because if you say nothing then Sam gets to explain what you did instead.
- The decision not to be aggressive in denouncing Sam after he started actively threatening to destroy the company.
Beyond communicating the motivation behind the initial decision, the board (Ilya ideally, if you can get him to do this) should have been on Twitter the entire time screaming at the top of their lungs that Sam was willing to burn the entire company down in service of his personal ambitions, and that while kicking Sam off the board was a tough call and many tears were shed, everything that happened over the last three days - destroying all of your hard-earned OpenAI equity and handing it to Microsoft etc. - was a resounding endorsement of their decision to fire him, and that they will never surrender, etc. etc. etc. The only reason Sam's strategies of feeding info to the press about his inevitable return worked in the first place was because the board stayed completely fucking silent the entire time and refused to give any hint as to what they were thinking to either the staff at OpenAI or the general public.
↑ comment by Scott Alexander (Yvain) · 2023-11-28T06:03:55.522Z · LW(p) · GW(p)
Thanks, this makes more sense than anything else I've seen, but one thing I'm still confused about:
If the factions were Altman-Brockman-Sutskever vs. Toner-McCauley-D'Angelo, then even assuming Sutskever was an Altman loyalist, any vote to remove Toner would have been tied 3-3. I can't find anything about tied votes in the bylaws - do they fail? If so, Toner should be safe. And in fact, Toner knew she (secretly) had Sutskever on her side, and it would have been 4-2. If Altman manufactured some scandal, the board could have just voted to ignore it.
So I still don't understand "why so abruptly?" or why they felt like they had to take such a drastic move when they held all the cards (and were pretty stable even if Ilya flipped).
Other loose ends:
- Toner got on the board because of OpenPhil's donation. But how did McCauley get on the board?
- Is D'Angelo a safetyist?
- Why wouldn't they tell anyone, including Emmett Shear, the full story?
↑ comment by gwern · 2023-11-28T16:19:40.451Z · LW(p) · GW(p)
I can't find anything about tied votes in the bylaws - do they fail?
I can't either, so my assumption is that the board was frozen ever since Hoffman/Hurd left for that reason.
And there wouldn't've been a vote at all. I've explained it before [LW(p) · GW(p)] but - while we wait for phase 3 of the OA war to go hot - let me take another crack at it, since people seem to keep getting hung up on this, imagining that it's a perfectly normal state for a board to be in a deathmatch between two opposing factions indefinitely, and so are confused why any of this happened.
In phase 1, a vote would be pointless, and neither side could nor wanted to force it to a vote. After all, such a vote (regardless of the result) is equivalent to admitting that you have gone from simply "some strategic disagreements among colleagues all sharing the same ultimate goals and negotiating in good faith about important complex matters on which reasonable people of goodwill often differ" to "cutthroat corporate warfare where it's-them-or-us everything-is-a-lie-or-fog-of-war fight-to-the-death there-can-only-be-one". You only do such a vote in the latter situation; in the former, you just keep negotiating until you reach a consensus or find a compromise that'll leave everyone mad.
That's not a switch to make lightly or lazily. You do not flip the switch from 'ally' to 'enemy' casually, and then do nothing and wait for them to find out and make the first move.
Imagine Altman showing up to the board and going "hi guys I'd like to vote right now to fire Toner - oh darn a tie, never mind" - "dude what the fuck?!"
As I read it, the board still hoped Altman was basically aligned (and it was all headstrongness or scurrilous rumors) right up until the end, when Sutskever defected with the internal Slack receipts revealing that the war had already started and Altman's switch had apparently flipped a while ago.
So I still don't understand "why so abruptly?" or why they felt like they had to take such a drastic move when they held all the cards (and were pretty stable even if Ilya flipped).
The ability to manufacture a scandal at any time is a good way to motivate non-procrastination, per Dr Johnson on the wonderfully concentrating effects of being scheduled to hang. As I pointed out, it gives Altman a great pretext to, at any time, push for the resignation of Toner in a way where - if their switch has not been flipped, as he still believed it had not - he still looks to the board like the good guy who is definitely not doing a coup and is just, sadly and regretfully, breaking the tie because of the emergency scandal that the careless disloyal Toner has caused them all, just as he had been warning the board all along. (Won't she resign and help minimize the damage, and free herself to do her academic research without further concern? If not, surely D'Angelo or McCauley appreciate how much damage she's done and can now see that, if she's so selfish & stubborn & can't sacrifice herself for the good of OA, she really needs to be replaced right now...?) End result: Toner resigns or is fired. It took way less than that to push out Hoffman or Zilis, after all. And Altman means so well and cares so much about OA's public image, and is so vital to the company, and has a really good point about how badly Toner screwed up, so at least one of you three has to give it to him. And that's all he needs.
(How well do you think Toner, McCauley, and D'Angelo all know each other? Enough to trust that none of the other two would ever flip on the other, or be susceptible to leverage, or scared, or be convinced?)
Of course, their switch having been flipped at this point, the trio could just vote 'no' 3-3 and tell Altman to go pound sand and adamantly refuse to ever vote to remove Toner... but such an 'unreasonable' response reveals their switch has been flipped. (And having Sutskever vote alongside them 4-2, revealing his new loyalty, would be even more disastrous.)
Why wouldn't they tell anyone, including Emmett Shear, the full story?
How do you know they didn't? Note that what they wouldn't provide Shear was a "written" explanation. (If Shear was so unconvinced, why was an independent investigation the only thing he negotiated for aside from the new board? His tweets since then also don't sound like someone who looked behind the curtain, found nothing, and is profoundly disgusted with & hates the old board for their profoundly incompetent malicious destruction.)
↑ comment by Daniel (daniel-glasscock) · 2023-11-28T15:28:45.885Z · LW(p) · GW(p)
If the factions were Altman-Brockman-Sutskever vs. Toner-McCauley-D'Angelo, then even assuming Sutskever was an Altman loyalist, any vote to remove Toner would have been tied 3-3.
A 3-3 tie between the CEO founder of the company, the president founder of the company, and the chief scientist of the company vs. three people with completely separate day jobs who never interact with rank-and-file employees is not a stable equilibrium. There are ways to leverage this sort of soft power into breaking the formal deadlock - as we saw last week, for example.
↑ comment by faul_sname · 2023-11-28T18:42:15.953Z · LW(p) · GW(p)
I note that the articles I have seen have said things like
New CEO Emmett Shear has so far been unable to get written documentation of the board’s detailed reasoning for firing Altman, which also hasn’t been shared with the company’s investors, according to people familiar with the situation
(emphasis mine).
If Shear had been unable to get any information about the board's reasoning, I very much doubt that they would have included the word "written".
↑ comment by Mitchell_Porter · 2023-11-28T09:30:18.923Z · LW(p) · GW(p)
how did McCauley get on the board?
I have envisaged a scenario in which the US intelligence community has an interagency working group on AI, and Toner and McCauley were its de facto representatives on the OpenAI board, Toner for CIA, McCauley for NSA. Maybe someone who has studied the history of the board can tell me whether that makes sense, in terms of its shifting factions.
Replies from: wassname↑ comment by wassname · 2023-11-28T12:35:32.348Z · LW(p) · GW(p)
Why would Toner be related to the CIA, and how is McCauley NSA?
If OpenAI is running out of money, and is too dependent on Microsoft, defense/intelligence/government is not the worst place for them to look for money. There are even possible futures where they are partially nationalised in a crisis. Or perhaps they will help with regulatory assessment. This possibility certainly makes the Larry Summers appointment take on a different light, given his ties to not only Microsoft but also the government.
Replies from: David Hornbein, Mitchell_Porter↑ comment by David Hornbein · 2023-11-28T16:24:38.356Z · LW(p) · GW(p)
Toner's employer, the Center for Security and Emerging Technology (CSET), was founded by Jason Matheny. Matheny was previously the Director of the Intelligence Advanced Research Projects Activity (IARPA), and is currently CEO of the RAND Corporation. CSET is currently led by Dewey Murdick, who previously worked at the Department of Homeland Security and at IARPA. Much of CSET's initial staff was former (or "former") U.S. intelligence analysts, although IIRC they were from military intelligence rather than the CIA specifically. Today many of CSET's researchers list prior experience with U.S. civilian intelligence, military intelligence, or defense intelligence contractors. Given the overlap in staff and mission, U.S. intelligence clearly and explicitly has a lot of influence at CSET, and it's reasonable to suspect a stronger connection than that.
I don't see it for McCauley though.
↑ comment by Mitchell_Porter · 2023-11-28T14:58:38.230Z · LW(p) · GW(p)
Why would Toner be related to the CIA, and how is McCauley NSA?
Toner's university has a long history of association with the CIA. Just google "georgetown cia" and you'll see more than I can summarize.
As for McCauley, well, I did call this a "scenario"... The movie maker Oliver Stone rivals Chomsky as the voice of an elite political counterculture who are deadly serious in their opposition to what the American deep state gets up to, and whose ranks include former insiders who became leakers, whistleblowers, and ideological opponents of the system. When Stone, already known as a Wikileaks supporter, decided to turn his attention to NSA's celebrity defector Edward Snowden, he ended up casting McCauley's actor boyfriend as the star.
My hunch, my scenario, is that people associated with the agency, or formerly associated with the agency, put him forward for the role, with part of the reason being that he was already dating one of their own. What we know about her CV - robotics, geographic information systems, speaks Arabic, mentored by Alan Kay - obviously doesn't prove anything, but it's enough to make this scenario work, as a possibility.
↑ comment by lc · 2023-11-22T04:27:22.035Z · LW(p) · GW(p)
We shall see. I'm just ignoring the mainstream media spins at this point.
Replies from: TrevorWiesinger↑ comment by trevor (TrevorWiesinger) · 2023-11-22T08:55:42.272Z · LW(p) · GW(p)
For those of us who don't know yet, criticizing the accuracy of mainstream Western news outlets is NOT a strong Bayesian update against someone's epistemics, especially on a site like LessWrong (it doesn't matter how many idiots you might remember ranting about "mainstream media" on other sites; the numbers are completely different here).
There is a well-known dynamic called Gell-Mann Amnesia, where people strongly lose trust in mainstream Western news outlets on a topic they are an expert on, but routinely forget about this loss of trust when they read coverage on a topic that they can't evaluate accuracy on. Western news outlets Goodhart readers by depicting themselves as reliable instead of prioritizing reliability.
If you read major Western news outlets, or are new to major news outlets due to people linking to them on LessWrong recently, some basic epistemic prep can be found in Scott Alexander's The Media Very Rarely Lies and, if it's important, the follow-up posts.
↑ comment by Lukas_Gloor · 2023-11-22T03:24:45.109Z · LW(p) · GW(p)
Yeah, that makes sense and does explain most things, except that if I was Helen, I don't currently see why I wouldn't have just explained that part of the story early on?* Even so, I still think this sounds very plausible as part of the story.
*Maybe I'm wrong about how people would react to that sort of justification. Personally, I think the CEO messing with the board constitution to gain de facto ultimate power is clearly very bad and any good board needs to prevent that. I also believe that it's not a reason to remove a board member if they publish a piece of research that's critical of or indirectly harmful for your company. (Caveat that we're only reading a secondhand account of this, and maybe what actually happened would make Altman's reaction seem more understandable.)
Replies from: Lukas_Gloor↑ comment by Lukas_Gloor · 2023-11-22T03:57:29.369Z · LW(p) · GW(p)
Hm, to add a bit more nuance, I think it's okay at a normal startup for a board to be comprised of people who are likely to almost always side with the CEO, as long as they are independent thinkers who could vote against the CEO if the CEO goes off the rails. So, it's understandable (or even good/necessary) for CEOs to care a lot about having "aligned" people on the board, as long as they don't just add people who never think for themselves.
It gets more complex in OpenAI's situation where there's more potential for tensions between CEO and the board. I mean, there shouldn't necessarily be any tensions, but Altman probably had less of a say over who the original board members were than a normal CEO at a normal startup, and some degree of "norms-compliant maneuvering" to retain board control feels understandable because any good CEO cares a great deal about how to run things. So, it actually gets a bit murky and has to be judged case-by-case. (E.g., I'm sure Altman feels like what happened vindicated him wanting to push Helen off the board.)
↑ comment by Ben Pace (Benito) · 2023-11-22T05:03:07.290Z · LW(p) · GW(p)
I was confused about the counts, but I guess this makes sense if Helen cannot vote on her own removal. Then it's Altman/Brockman/Sutskever v Tasha/D'Angelo.
Pretty interesting that Sutskever/Tasha/D'Angelo would be willing to fire Altman just to prevent Helen from going. They instead could have negotiated someone to replace her. Wouldn't you just remove Altman from the Board, or maybe remove Brockman? Why would they be willing to decapitate the company in order to retain Helen?
Replies from: gwern, Zvi, chess-teacher↑ comment by gwern · 2023-11-22T16:47:47.722Z · LW(p) · GW(p)
They instead could have negotiated someone to replace her.
Why do they have to negotiate? They didn't want her gone, he did. Why didn't Altman negotiate a replacement for her, if he was so very upset about the damage she had supposedly done OA...?
"I understand we've struggled to agree on any replacement directors since I kicked Hoffman out, and you'd worry even more about safety remaining a priority if she resigns. I totally get it. So that's not an obstacle, I'll agree to let Toner nominate her own replacement - just so long as she leaves soon."
When you understand why Altman would not negotiate that, you understand why the board could not negotiate that.
I was confused about the counts, but I guess this makes sense if Helen cannot vote on her own removal. Then it's Altman/Brockman/Sutskever v Tasha/D'Angelo.
Recusal or not, Altman didn't want to bring it to something as overt as a vote expelling her. Power wants to conceal itself and deny the coup. The point here of the CSET paper pretext is to gain leverage and break the tie any way possible so it doesn't look bad or traceable to Altman: that's why this leaking is bad for Altman, it shows him at his least fuzzy and PR-friendly. He could, obviously, have leaked the Toner paper at any time to a friendly journalist to manufacture a crisis and force the issue, but that was not - as far as he knew then - yet a tactic he needed to resort to. However, the clock was ticking, and the board surely knew that the issue could be forced at any time of Altman's choosing.
If he had outright naked control of the board, he would scarcely need to remove her nor would they be deadlocked over the new directors; but by organizing a 'consensus' among the OA executives (like Jakub Pachocki?) about Toner committing an unforgivable sin that can be rectified only by stepping down, and by lobbying in the background and calling in favors, and arguing for her recusal, Altman sets the stage for wearing down Toner (note what they did to Ilya Sutskever & how the Altman faction continues to tout Sutskever's flip without mentioning the how) and Toner either resigning voluntarily or, in the worst case, being fired. It doesn't matter which tactic succeeds, a good startup CEO never neglects a trick, and Altman knows them all - it's not for nothing that Paul Graham keeps describing Altman as the most brutally effective corporate fighter he's ever known and describes with awe how eg he manipulated Graham into appointing him president of YC, and eventually Graham had to fire him from YC for reasons already being foreshadowed in 2016. (Note how thoroughly and misogynistically Toner has been vilified on social media by OAer proxies, who, despite leaking to the media like Niagara Falls, somehow never felt this part about Altman organizing her removal to be worth mentioning; every tactic has been employed in the fight so far: they even have law enforcement pals opening an 'investigation'. Needless to say, there's zero chance of it going anywhere, it's just power struggles, similar to the earlier threats to sue the directors personally.) Note: if all this can go down in like 3 days with Altman outside the building and formally fired and much of the staff gone on vacation, imagine what he could have done with 3 months and CEO access/resources/credibility and all the OAers back?
The board was tolerating all this up to the point where firing Toner came up, because it seemed like Sam was just aw-shucks-being-Sam - being an overeager go-getter was the whole point of the CEO, wasn't it? it wasn't like he was trying to launch a coup or anything, surely not - but when he opened fire on Toner for such an incredibly flimsy pretext without, say, proposing to appoint a specific known safety person to replace Toner and maintain the status quo, suddenly, everything changed. (What do you think a treacherous turn looks like IRL? It looks like that.) The world in which Altman is just an overeager commercializer who otherwise agrees with the board and there's just been a bunch of misunderstandings and ordinary conflicts is a different world from the world in which he doesn't care about safety unless it's convenient & regularly deceives and manipulates & has been maneuvering the entire time to irrevocably take over the board to remove his last check. And if you realize you have been living in the second world and that you have the slimmest possible majority, which will crack as soon as Altman realizes he's overplayed his hand and moves overtly to deploy his full arsenal before he forces a vote...
So Altman appears to have made two key mistakes here, because he was so personally overstretched and 2023 has been such a year: first, taking Sutskever for granted. (WSJ: "Altman this weekend was furious with himself for not having ensured the board stayed loyal to him and regretted not spending more time managing its various factions, people familiar with his thinking said.") Then second, making his move with such a flimsy pretext that it snapped the suspension of disbelief of the safety faction. Had he realized Sutskever was a swing vote, he would have worked on him much harder and waited for better opportunities to move against Toner or McCauley. Well, live and learn - he's a smart guy; he won't make the same mistakes twice with the next OA board.
(If you find any of this confusing or surprising, I strongly suggest you read up more on how corporate infighting works. You may not be interested in corporate governance or power politics, but they are now interested in you, and this literature is only going to get more relevant. Some LWer-friendly starting points would be Bad Blood on narcissist Elizabeth Holmes; Steve Jobs - Altman's biggest hero - and his ouster; the D&D coup; the classic Barbarians at the Gate; the many contemporary instances covered in Matt Levine's newsletter, like the Papa John's coup or, most recently, Sculptor; The Gervais Principle; the second half of Breaking Bad; Zvi's many relevant essays on moral mazes/simulacra levels [? · GW]/corporate dynamics from his perspective as a hedge fund guy; and especially the in-depth reporting on how Harvey Weinstein covered everything up for so long, which pairs well with Bad Blood.)
Replies from: habryka4↑ comment by habryka (habryka4) · 2023-11-22T19:14:51.856Z · LW(p) · GW(p)
I... still don't understand why the board didn't say anything? I really feel like a lot of things would have flipped if they had just talked openly to anyone, or taken advice from anyone. Like, I don't think it would have made them global heroes, and a lot of people would have been angry with them, but every time any plausible story about what happened came out, there was IMO a visible shift in public opinion, including on HN, and the board confirming any story or giving any more detail would have been huge. Instead they apparently "cited legal reasons" for not talking, which seems crazy to me.
Replies from: adam_scholl, Linch↑ comment by Adam Scholl (adam_scholl) · 2023-11-23T22:14:14.002Z · LW(p) · GW(p)
I can imagine it being the case that their ability to reveal this information is their main source of leverage (over e.g. who replaces them on the board).
↑ comment by Linch · 2023-11-22T23:49:07.865Z · LW(p) · GW(p)
My favorite low-probability theory is that he had blackmail material on one of the board members[1], who initially decided after much deliberation to go forward despite the blackmail, and then, when they realized they got outplayed by Sam not using the blackmail material, backpedaled and refused to dox themselves. And the other 2-3 didn't know what to do afterwards, because their entire strategy was predicated on optics management around said blackmail + blackmail material.
[1] Like something actually really bad.
↑ comment by Zvi · 2023-11-22T18:14:12.545Z · LW(p) · GW(p)
It would be sheer insanity to have a rule that you can't vote on your own removal, I would think, or else a tied board will definitely shrink right away.
Replies from: MakoYass↑ comment by mako yass (MakoYass) · 2023-11-22T18:28:36.484Z · LW(p) · GW(p)
Wait, a simple majority is an insane place to put the threshold for removal in the first place. Majoritarian shrinking is still basically inevitable if the threshold for removal is 50%; it should be higher than that, maybe 62%.
And generally, if 50% of a group thinks A and 50% thinks ¬A, that tells you that the group is not ready to make a decision about A.
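To make the shrinking-board arithmetic concrete, here is a toy sketch (my own illustration, not from either comment; the 4-4 split and the rule that the targeted member abstains are assumptions chosen for the example):

```python
# Toy model: can one faction of an evenly split board vote out the other faction
# one member at a time, assuming the target of each removal vote abstains?

def purge_succeeds(faction: int, opponents: int, threshold: float) -> bool:
    """Return True if `faction` can remove all `opponents`, one vote at a time."""
    while opponents > 0:
        voters = faction + opponents - 1      # the targeted member abstains
        if faction / voters <= threshold:     # removal needs strictly more than threshold
            return False                      # the purge stalls here
        opponents -= 1                        # one opponent removed; repeat
    return True

# 8-member board split 4-4
print(purge_succeeds(4, 4, threshold=0.50))   # True: 4/7 ≈ 57% clears a bare majority
print(purge_succeeds(4, 4, threshold=0.62))   # False: 4/7 ≈ 57% falls short of 62%
```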
↑ comment by Chess3D (chess-teacher) · 2023-11-22T05:34:47.629Z · LW(p) · GW(p)
It is not clear, in the board structure of a non-profit, that Helen cannot vote on her own removal.
The vote to remove Sam may have involved some trickery around holding a quorum meeting without notifying Sam or Greg.
Replies from: Linch↑ comment by Tristan Wegner (tristan-wegner) · 2023-11-22T07:18:31.931Z · LW(p) · GW(p)
Mr. Altman, the chief executive, recently made a move to push out one of the board’s members because he thought a research paper she had co-written was critical of the company.
Here is the paper: https://cset.georgetown.edu/publication/decoding-intentions/
Some more recent (Oct/Nov 2023) publications from her here:
https://cset.georgetown.edu/staff/helen-toner/
↑ comment by faul_sname · 2023-11-22T06:01:23.415Z · LW(p) · GW(p)
Manifold says 23% (*edit: the link doesn't go directly to that option; it shows up if you search "Helen") on
Sam tried to compromise the independence of the independent board members by sending an email to staff “reprimanding” Helen Toner https://archive.ph/snLmn
as "a significant factor for why Sam Altman was fired". It would make sense as a motivation, though it's a bit odd that the board would say that Sam was "not consistently candid" and not "trying to undermine the governance structure of the organization" in that case.
comment by JenniferRM · 2023-11-20T17:53:14.944Z · LW(p) · GW(p)
When I read this part of the letter, the authors seem to be throwing it in the face of the board like it is a damning accusation, but actually it seems very prudent and speaks well for the board.
You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”
Maybe I'm missing some context, but wouldn't it be better for OpenAI as an organized entity to be destroyed than for it to exist right up to the point where all humans are destroyed by an AGI that is neither benevolent nor "aligned with humanity" (if we are somehow so objectively bad as to not deserve care by a benevolent, powerful, and very smart entity)?
This reminds me a lot of a blockchain project where I served as an ethicist, which was initially a "project" interested in advancing a "movement" and ended up with a bunch of people whose only real goal was to cash big paychecks for a long time (at which point I handled my residual duties to the best of my ability and resigned, with lots of people expressing extreme confusion and asking why I was acting "foolishly" or "incompetently" - except for a tiny number who got angry at me for not causing a BIGGER explosion than just leaving to let a normally venal company be normally venal without me).
In my case, I had very little formal power. I bitterly regretted not having insisted, "as the ethicist", on having a right to be informed of any board meeting >=36 hours in advance, to attend every one of them, and to speak at them.
(Maybe it is a continuing flaw of "not thinking I need POWER", to say that I retrospectively should have had a vote on the Board? But I still don't actually think I needed a vote. Most of my job was to keep saying things like "lying is bad" or "stealing is wrong" or "fairness is hard to calculate but bad to violate if clear violations of it are occurring" or "we shouldn't proactively serve states that run gulags, we should prepare defenses, such that they respect us enough to explicitly request compliance first". You know, the obvious stuff, that people only flinch from endorsing because a small part of each one of us, as a human, is a very narrowly selfish coward by default, and it is normal for us, as humans, to need reminders of context sometimes when we get so much tunnel vision during dramatic moments that we might commit regrettable evils through mere negligence.)
No one ever said that it is narrowly selfishly fun or profitable to be in Gethsemane and say "yes to experiencing pain if the other side who I care about doesn't also press the 'cooperate' button".
But to have "you said that ending up on the cross was consistent with being a moral leader of a moral organization!" flung in one's face as an accusation suggests to me that the people making the accusation don't actually understand that sometimes objective de re altruism hurts.
Maturely good people sometimes act altruistically, at personal cost, anyway because they care about strangers.
Clearly not everyone is "maturely good".
That's why we don't select political leaders at random, if we are wise.
Now you might argue that AI is no big deal, and you might say that getting it wrong could never "kill literally everyone".
Also, it is easy to imagine a lot of normally venal corporate people saying "AI might kill literally everyone" without believing it, to people who do claim to believe it, if a huge paycheck will be given to them for their moderately skilled work contingent on them saying that...
...but if the stakes are really that big, then NOT acting like someone who really DID believe that "AI might kill literally everyone" is much, much worse than driving past a lady on the side of the road looking helplessly at her broken car. That's just one lady! The stakes there are much smaller!
The big things are MORE important to get right. Not LESS important.
To get the "win condition for everyone" would justify taking larger risks and costs than just parking by the side of the road and being late for wherever you planned on going when you set out on the journey.
Maybe a person could say: "I don't believe that AI could kill literally everyone. I just think that creating it is an opportunity to make a lot of money and secure power, and to use that to survive the near-term liquidation of the proletariat when rambunctious human wage slaves are replaced by properly mind-controlled AI slaves."
Or you could say something like "I don't believe that AI is even that big a deal. This is just hype, and the stock valuations are gonna be really big but then they'll crash and I urgently want to sell into the hype to greater fools because I like money and I don't mind selling stuff I don't believe in to other people."
Whatever. Saying whatever you actually think is one of three legs of the best definition of integrity that I currently know of.
(The full three criteria: non-impulsiveness, fairness, honesty.)
OpenAI was founded as a non-profit in 2015 with the core mission of ensuring that artificial general intelligence benefits all of humanity... Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.
(Sauce. Italics and bold not in original.)
Compare this again:
You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”
The board could just be right about this.
It is an object level question about a fuzzy future conditional event, that ramifies through a lot of choices that a lot of people will make in a lot of different institutional contexts.
If OpenAI's continued existence ensures that artificial intelligence benefits all of humanity, then its continued existence is consistent with the mission.
If not, not.
What is the real fact of the matter here?
It's hard to say, because it is about the future, but one way to figure out what a group will pursue is to look at what they are proud of, and what they SAY they will pursue.
Look at how the people fleeing into Microsoft argue in defense of themselves:
We, the employees of OpenAI, have developed the best models and pushed the field to new frontiers. Our work on AI safety and governance shapes global norms. The products we built are used by millions of people around the world. Until now, the company we work for and cherish has never been in a stronger position.
This is all MERE IMPACT. This is just the Kool-Aid that startup founders want all their employees to pretend to believe is the most important thing, because they want employees who work hard for low pay.
This is all just "stuff you'd put in your promo packet to get promoted at a FAANG in the mid teens when they were hiring like crazy, even if it was only 80% true, that 'everyone around here' agrees with (because everyone on your team is ALSO going for promo)".
Their statement didn't mention "humanity" even once.
Their statement didn't mention "ensuring" that "benefits" go to "all of humanity" even once.
Microsoft's management has made no similar promise about benefiting humanity in the formal text of its founding, and gives every indication of having no particular scruples or principles or goals larger than a stock price and maybe some executive bonuses or stock buy-back deals.
As is valid in a capitalist republic! That kind of culture, and that kind of behavior, does have a place in it for private companies that manufacture and sell private goods to individuals who can freely choose to buy those products.
You don't have to be very ethical to make and sell hammers or bananas or toys for children.
However, it is baked into the structure of Microsoft's legal contracts and culture that it will never purposefully make a public good that it knowingly loses a lot of money on SIMPLY because "the benefits to everyone else (even if Microsoft can't charge for them) are much much larger".
OpenAI has a clear telos and Microsoft has a clear telos as well.
I admire the former more than the latter, especially for something as important as possibly creating a Demon Lord, or a Digital Leviathan, or "a replacement for nearly all human labor performed via arm's length transactional relations", or whatever you want to call it.
There are few situations in normal everyday life where the plausible impacts are not just economic, and not just political, not EVEN "just" evolutionary!
This is one of them. Most complex structures in the solar system right now were created, ultimately, by evolution. After AGI, most complex structures will probably be created by algorithms.
Evolution itself is potentially being overturned.
"People" are part of the world. "Things you care about" are part of the world.
There is no special carveout for cute babies, or picnics, or choirs, or waltzing with friends, or 20th wedding anniversaries, or taking ecstasy at a rave, or ANYTHING HUMAN.
All of those things are in the world, and unless something prevents that natural course of normal events from doing so: software will eventually eat them too.
I don't see Microsoft, or the people fleeing to Microsoft, taking that seriously, with serious language that endorses coherent moral ideals in ways that can be directly related to the structural features of institutional arrangements meant to cause good outcomes for humanity on purpose.
Maybe there is a deeper wisdom there?
Maybe they are secretly saying petty things, even as they secretly plan to do something really importantly good for all of humanity?
Most humans are quite venal and foolish, and highly skilled impression management is a skill that politicians and leaders would be silly to ignore.
But it seems reasonable to me to take both sides at their word.
One side talks and walks like a group that is self-sacrificingly willing to do what it takes to ensure that artificial general intelligence benefits all of humanity and the other side is just straightforwardly not.
Replies from: dr_s, ryan_b, xpym↑ comment by dr_s · 2023-11-20T17:57:09.733Z · LW(p) · GW(p)
Maybe I'm missing some context, but wouldn't it be better for OpenAI as an organized entity to be destroyed than for it to exist right up to the point where all humans are destroyed by an AGI that is neither benevolent nor "aligned with humanity" (if we are somehow so objectively bad as to not deserve care by a benevolent, powerful, and very smart entity)?
The problem, I suspect, is that people just can't get out of the typical "FOR THE SHAREHOLDERS" mindset. A company that is literally willing to commit suicide rather than be hijacked for purposes antithetical to its mission - like a cell dying by apoptosis rather than going cancerous - can be a very good thing, and if only there were more of this. You can't beat Moloch if you're not willing to precommit to this sort of action. And let's face it, no one involved here is facing homelessness and soup kitchens even if OpenAI crashes tomorrow. They'll be a little worse off for a while, their careers will take a hit, and then they'll pick themselves up. If this were about the safety of humanity, it would be a no-brainer that you should be ready to sacrifice that much.
Replies from: michael-thiessen, None↑ comment by Michael Thiessen (michael-thiessen) · 2023-11-21T16:09:40.295Z · LW(p) · GW(p)
Sam's latest tweet suggests he can't get out of the "FOR THE SHAREHOLDERS" mindset.
"satya and my top priority remains to ensure openai continues to thrive
we are committed to fully providing continuity of operations to our partners and customers"
This does sound antithetical to the charter and might be grounds to replace Sam as CEO.
Replies from: dr_s↑ comment by dr_s · 2023-11-21T16:28:18.576Z · LW(p) · GW(p)
I feel like, not unlike the situation with SBF and FTX, the delusion that OpenAI could possibly avoid this trap maps onto the same cognitive weak spot among EA/rationalists of "just let me slip on the Ring of Power this once bro, I swear it's just for a little while bro, I'll take it off before Moloch turns me into his Nazgul, trust me bro, just this once".
This is honestly entirely unsurprising. Rivers flow downhill, and companies that are part of a capitalist economy, producing stuff with tremendous potential economic value, converge on making a profit.
Replies from: Sune↑ comment by Sune · 2023-11-21T17:16:45.661Z · LW(p) · GW(p)
The corporate structure of OpenAI was set up as an answer to concerns (about AGI and control over AGIs) which were raised by rationalists. But I don’t think rationalists believed that this structure was a sufficient solution to the problem, any more than non-rationalists believed it. The rationalists I have been speaking to were generally sceptical about OpenAI.
Replies from: dr_s↑ comment by dr_s · 2023-11-21T17:21:18.650Z · LW(p) · GW(p)
Oh, I mean, sure, scepticism about OpenAI was already widespread, no question. But in general it seems to me like there have been too many attempts to be too clever by half from people at least adjacent in ways of thinking to rationalism/EA (like Elon) that go "I want to avoid X-risk but also develop aligned friendly AGI for myself", and the result is almost invariably that it just advances capabilities more than safety. I just think sometimes there's a tendency to underestimate the pull of incentives and how you often can't just have your cake and eat it. I remain convinced that if one wants to avoid X-risk from AGI, the safest road is probably to just strongly advocate for not building AGI, and to put it in the same bin as "human cloning" as a fundamentally unethical technology. It's not a great shot, but it's probably the best one at stopping it. Being wishy-washy doesn't pay off.
Replies from: Seth Herd↑ comment by Seth Herd · 2023-11-21T17:29:13.457Z · LW(p) · GW(p)
I think you're in the majority in this opinion around here. I am noticing I'm confused about the lack of enthusiasm for developing alignment methods for the types of AGI that are being developed. Trying to get people to stop building it would be ideal, but I don't see a path to it. The actual difficulty of alignment seems mostly unknown, so potentially vastly more tractable. Yet such efforts make up a tiny part of x-risk discussion.
This isn't an argument for building AGI, but for aligning the specific AGI others build.
Replies from: dr_s↑ comment by dr_s · 2023-11-21T18:14:34.553Z · LW(p) · GW(p)
Personally I am fascinated by the problems of interpretability, and I would consider "no more GPTs for you guys until you figure out at least the main functioning principles of GPT-3" a healthy exercise in actual ML science to pursue, but I also have to acknowledge that such an understanding would make distillation far more powerful and thus also lead to a corresponding advance in capabilities. I am honestly stumped at what "I want to do something" looks like that doesn't somehow end up backfiring. It may be that the problem is just thinking this way in the first place, and this really is just a (shudder) political problem, and tech/science can only make it worse.
Replies from: Seth Herd↑ comment by Seth Herd · 2023-11-21T23:54:34.712Z · LW(p) · GW(p)
That all makes sense.
Except that this is exactly what I'm puzzled by: a focus on solutions that probably won't work ("no more GPTs for you guys" is approximately impossible), instead of solutions that still might - working on alignment, and trading off advances in alignment for advances in AGI.
It's like the field has largely given up on alignment, and we're just trying to survive a few more months by making sure to not contribute to AGI at all.
But that makes no sense. MIRI gave up on aligning a certain type of AGI for good reasons. But nobody has seriously analyzed prospects for aligning the types of AGI we're likely to get: language model agents or loosely brainlike collections of deep nets. When I and a few others write about plans for aligning those types of AGI, we're largely ignored. The only substantive comments are "well there are still ways those plans could fail", but not arguments that they're actually likely to fail. Meanwhile, everyone is saying we have no viable plans for alignment, and acting like that means it's impossible. I'm just baffled by what's going on in the collective unspoken beliefs of this field.
Replies from: dr_s↑ comment by dr_s · 2023-11-22T08:36:51.602Z · LW(p) · GW(p)
I'll be real, I don't know what everyone else thinks, but personally I can say I wouldn't feel comfortable contributing to anything AGI-related at this point, because I have very low trust that even aligned AGI would result in a net good for humanity with this kind of governance. I can imagine that maybe, amidst all the bargains with the Devil, there is one that will genuinely pay off and is the lesser evil, but I can't tell which one. I think the wise thing to do would be just not to build AGI at all, but that's not a realistically open path. So yeah, my current position is that literally any action I could take advances the kind of future I would want by an amount that is at best below the error margin of my guesses, and at worst negative. It's not a super nice spot to be in, but it's where I'm at and I can't really lie to myself about it.
↑ comment by [deleted] · 2023-11-20T20:13:20.606Z · LW(p) · GW(p)
In the cancer case, the human body has every cell begin aligned with the body; anthropically, this has to keep working until breeding age, plus enough offspring to beat losses.
And yes, if faulty cells self-destruct instead of continuing, this is good; there are cancer treatments that try to gene-edit in clean copies of specific genes (p53, as I recall) that mediate this (works in rats...).
However, the corporate world and the world of international competition have many more actors, and they are adversarial. OAI self-destructing leaves the world's best AI researchers unemployed and removes them from competing in the next round of model improvements - whoever makes a GPT-5 at a competitor will have the best model outright.
Coordination is hard. Consider the consequences if an entire town decided to stop consuming fossil fuels. They pay the extra costs and rebuild the town to be less car-dependent.
However, the consequence is that this lowers the market price of fossil fuels, so others use more. (Demand elasticity keeps the net effect slightly positive.)
Replies from: dr_s↑ comment by dr_s · 2023-11-20T23:30:34.517Z · LW(p) · GW(p)
I mean, yes, a company self-destructing doesn't stop much if its knowledge isn't also actively deleted - and even then, it's just a setback of a few months. But also, by going "oh well, we need to work inside the system to fix it somehow", at some point all you get is just another company racing with all the others (and in this case, effectively setting the pace). However you put it, OpenAI is more responsible than any other company for how close we may be to AGI right now, and despite their stated mission, I suspect they did not advance safety nearly as much as capability. So in the end, from the X-risk viewpoint, they mostly made things worse.
↑ comment by ryan_b · 2023-11-21T16:03:04.944Z · LW(p) · GW(p)
I agree with all of this in principle, but I am hung up on the fact that it is so opaque. Up until now the board has determinedly remained opaque.
If corporate seppuku is on the table, why not be transparent? How does being opaque serve the mission?
Replies from: JenniferRM↑ comment by JenniferRM · 2023-11-21T23:56:58.058Z · LW(p) · GW(p)
I wrote a LOT of words in response to this, talking about personal professional experiences that are not something I coherently understand myself as having a duty (or timeless permission?) to share, so I have reduced my response to something shorter and more general. (Applying my own logic to my own words, in realtime!)
There are many cases (arguably stupid or counterproductive cases, but cases) that come up more and more when deals and laws and contracts become highly entangling.
It's illegal to "simply" ask people for money in exchange for giving them a transferable right to future dividends on a money-making project, sealed with a handshake. The SEC commands silence sometimes and will put you in a cage if you don't comply.
You get elected to local office and suddenly the Brown Act (which I'd repeal as part of my reboot of the Californian Constitution had I the power) forbids you from talking with your co-workers (other elected officials) about work (the city government) at a party.
A Confessor [LW · GW] is forbidden certain kinds of information leaks.
Fixing <all of this (gesturing at nearly all of human civilization)> isn't something that we have the time or power to do before we'd need to USE the "fixed world" to handle AGI sanely or reasonably, because AGI is coming so fast, and the world is so broken.
That there is so much silence associated with unsavory actors is a valid and concerning contrast, but if you look into it, you'll probably find that every single OpenAI employee has an NDA already.
OpenAI's "business arm", locking its employees down with NDAs, is already defecting on the "let all the info come out" game.
If the legal system will continue to often be a pay-to-win game and full of fucked up compromises with evil, then silences will probably continue to be common: (1) among the Machiavellians, (2) among the cowards, and (3) among the people who were willing to promise reasonable silences as part of hanging around nearby doing harm reduction. (This last is what I was doing as a "professional ethicist".)
And IT IS REALLY SCARY to try to stand up for what you think you know is true about what you think is right when lots of people (who have a profit motive for believing otherwise) loudly insist otherwise.
People used to talk a lot about how someone would "go mad" and when I was younger it always made me slightly confused, why "crazy" and "angry" were conflated. Now it makes a lot of sense to me.
I've seen a lot of selfish people call good people "stupid", and once the non-selfish person realizes just how venal and selfish and blind the person calling them stupid is, it isn't hard to call that person "evil", and then you get a classic "evil vs stupid" (or "selfish vs altruistic") fight. As they fight they become more "mindblind" to each other? Or something? (I'm working on an essay on this, but it might not be ready for a week or a month or a decade. It's a really knotty subject on several levels.)
Good people know they are sometimes fallible, and often use peer validation to check their observations, or check their proofs, or check their emotional calibration, and when those "validation services" get withdrawn for (hidden?) venal reasons, it can be emotionally and mentally disorienting.
(And of course in issues like this one a lot of people are automatically going to have a profit motive when a decision arises about whether to build a public good or not. By definition: the maker of a public good can't easily charge money for such a thing. (If they COULD charge money for it then it'd be a private good or maybe a club good.))
The Board of OpenAI might be personally sued by a bunch of Machiavellian billionaires, or their allies, and if that happens, everything the board was recorded as saying will be gone over with a fine-toothed comb, looking for tiny little errors.
Every potential quibble is potentially more lawyer time. Every bit of lawyer time is a cost that functions as a financial reason to settle instead of keep fighting for what is right. Making your attack surface larger is much easier than making an existing attack surface smaller.
If the board doesn't already have insurance for that contingency, then I hereby commit to donate at least $100 to their legal defense fund, if they start one, which I hope they never need to do.
And in the meantime, I don't think they owe me much of anything, except for doing their damned best to ensure that artificial general intelligence benefits all humanity.
↑ comment by xpym · 2023-11-21T08:36:25.137Z · LW(p) · GW(p)
Maybe I'm missing some context, but wouldn't it be better for OpenAI as an organized entity to be destroyed than for it to exist right up to the point where all humans are destroyed by an AGI that is neither benevolent nor "aligned with humanity" (if we are somehow so objectively bad as to not deserve care by a benevolent, powerful, and very smart entity)?
This seems to presuppose that there is a strong causal effect from OpenAI's destruction to avoiding creation of an omnicidal AGI, which doesn't seem likely? The real question is whether OpenAI was, on the margin, a worse front-runner than its closest competitors, which is plausible, but then the board should have made that case loudly and clearly, because, entirely predictably, their silence has just made the situation worse.
comment by Amalthea (nikolas-kuhn) · 2023-11-20T17:45:23.231Z · LW(p) · GW(p)
Whatever else, there were likely mistakes from the side of the board, but man does the personality cult around Altman make me uncomfortable.
Replies from: daniel-glasscock, dr_s↑ comment by Daniel (daniel-glasscock) · 2023-11-20T19:41:05.953Z · LW(p) · GW(p)
It reminds me of the loyalty successful generals like Caesar and Napoleon commanded from their men. The engineers building GPT-X weren't loyal to The Charter, and they certainly weren't loyal to the board. They were loyal to the projects they were building and to Sam, because he was the one providing them resources to build and pumping the value of their equity-based compensation.
Replies from: Sune, dr_s, tristan-wegner↑ comment by Tristan Wegner (tristan-wegner) · 2023-11-21T08:55:42.755Z · LW(p) · GW(p)
From your last link:
Another key difference is that the growth is currently capped at 10x. Similar to their overall company structure, the PPUs are capped at a growth of 10 times the original value.
As the company has been doing well recently, with ongoing talks about an investment implying a market cap of $90B, many employees might have hit their 10x already - the highest payout they will ever get. So there is every incentive to cash out now (or as soon as the 2-year lock allows) and zero financial incentive to care about long-term value.
This seems worse at aligning employee interests with the long-term interests of the company, even compared to regular (uncapped) equity, where each employee might hope that the valuation could climb even higher.
Also:
It’s important to reiterate that the PPUs inherently are not redeemable for value if OpenAI does not turn a profit
So it seems the growth cap actually encourages short-term thinking, which seems to go against their long-term mission.
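To make the incentive arithmetic concrete, here is a toy sketch (my own illustration; the 10x cap and the $90B figure come from the discussion above, but the grant-time valuations below are made-up assumptions, not actual OpenAI numbers):

```python
# Toy illustration: how much capped PPU upside remains for employees who were
# granted units at different (hypothetical) company valuations.

CAP_MULTIPLE = 10          # PPU growth capped at 10x the value at grant
IMPLIED_VALUATION_B = 90   # valuation implied by the reported investment talks, in $B

for grant_valuation_b in (5, 8, 14, 29):   # hypothetical grant-time valuations, in $B
    raw_multiple = IMPLIED_VALUATION_B / grant_valuation_b
    realized_multiple = min(raw_multiple, CAP_MULTIPLE)
    remaining_upside = CAP_MULTIPLE - realized_multiple
    print(f"granted at ${grant_valuation_b}B: {realized_multiple:.1f}x realized, "
          f"{remaining_upside:.1f}x of capped upside left")

# Anyone whose grant-time valuation was below ~$9B has already hit the cap, so further
# growth in the company is worth nothing to them financially - the rational move is to
# cash out at the first opportunity.
```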
Do you also understand these incentives this way?
↑ comment by dr_s · 2023-11-20T17:49:48.945Z · LW(p) · GW(p)
It's not even a personality cult. Until the other day Altman was a despicable doomer and decel, advocating for regulations that would clip humanity's wings. As soon as he was fired and the "what did Ilya see" narrative emerged (I don't even think it was all serious at the beginning), the immediate response from the e/acc crowd was to elevate him to the status of martyr in minutes and recast the Board as some kind of reactionary force for evil that wants humanity to live in misery forever rather than bask in the Glorious AI Future.
Honestly even without the doom stuff I'd be extremely worried about this being the cultural and memetic environment in which AI gets developed anyway. This stuff is pure poison.
Replies from: nikolas-kuhn↑ comment by Amalthea (nikolas-kuhn) · 2023-11-20T17:59:58.011Z · LW(p) · GW(p)
It doesn't seem to me like e/acc has contributed a whole lot to this beyond commentary. The rallying of OpenAI employees behind Altman is quite plausibly due to his general popularity plus his ability to gain control of a situation.
At least that seems likely if Paul Graham's assessment of him as a master persuader is to be believed (and why wouldn't it be?).
Replies from: dr_s↑ comment by dr_s · 2023-11-20T18:03:11.646Z · LW(p) · GW(p)
I mean, the employees could be motivated by a more straightforward sense that the firing is arbitrary and threatens the functioning of OpenAI and thus their immediate livelihood. I'd be curious to understand how much of this is calculated self-interest and how much indeed personal loyalty to Sam Altman, which would make this incident very much a crossing of the Rubicon.
Replies from: michael-thiessen↑ comment by Michael Thiessen (michael-thiessen) · 2023-11-20T18:52:06.808Z · LW(p) · GW(p)
I do find it quite surprising that so many who work at OpenAI are so eager to follow Altman to Microsoft - I guess I assumed the folks at OpenAI valued not working for big tech (that's more(?) likely to disregard safety) more than it appears they actually did.
Replies from: chess-teacher↑ comment by Chess3D (chess-teacher) · 2023-11-20T23:35:07.652Z · LW(p) · GW(p)
My guess is they feel that Sam and Greg (and maybe even Ilya) will provide enough of a safety net (compared to a randomized board overlord), plus a large dose of self-interest once the move gains steam and you know many of your coworkers will leave.
comment by orthonormal · 2023-11-20T17:22:18.022Z · LW(p) · GW(p)
The most likely explanation I can think of, for what look like about-faces by Ilya and Jan this morning, is realizing that the worst plausible outcome is exactly what we're seeing: Sam running a new OpenAI at Microsoft, free of that pesky charter. Any amount of backpedaling, and even resigning in favor of a less safety-conscious board, is preferable to that.
They came at the king and missed.
Replies from: Lukas_Gloor, Lblack, tachikoma↑ comment by Lukas_Gloor · 2023-11-20T18:13:47.149Z · LW(p) · GW(p)
Yeah, but if this is the case, I'd have liked to see a bit more balance than just retweeting the tribal-affiliation slogan ("OpenAI is nothing without its people") and saying that the board should resign (or, in Ilya's case, implying that he regrets and denounces everything he initially stood for together with the board). Like, I think it's a defensible take to think that the board should resign after how things went down, but the board was probably pointing to some real concerns that won't get addressed at all if the pendulum now swings way too much in the opposite direction, so I would have at least hoped for something like "the board should resign, but here are some things that I think they had a point about, which I'd like to see not get swept under the carpet after the counter-revolution."
Replies from: orthonormal↑ comment by orthonormal · 2023-11-20T18:27:56.778Z · LW(p) · GW(p)
It's too late for a conditional surrender now that Microsoft is a credible threat to get 100% of OpenAI's capabilities team; Ilya and Jan are communicating unconditional surrender because the alternative is even worse.
Replies from: Seth Herd↑ comment by Seth Herd · 2023-11-20T23:04:10.026Z · LW(p) · GW(p)
I'm not sure this is an unconditional surrender. They're not talking about changing the charter, just appointing a new board. If the new board isn't much less safety conscious, then a good bit of the organization's original purpose and safeguards are preserved. So the terms of surrender would be negotiated in picking the new board.
Replies from: Linch↑ comment by Linch · 2023-11-21T00:29:40.495Z · LW(p) · GW(p)
AFAICT the only formal power the board has is in firing the CEO, so if we get a situation where whenever the board wants to fire Sam, Sam comes back and fires the board instead, well, it's not exactly an inspiring story for OpenAI's governance structure.
Replies from: TLK↑ comment by TLK · 2023-11-21T17:30:01.008Z · LW(p) · GW(p)
This is a very good point. It is strange, though, that the board was able to fire Sam without the Chair agreeing to it. It seems like something as big as firing the CEO should have required at least a conversation with the Chair, if not the Chair's affirmative vote. The way this was handled was a big mistake. There need to be new rules in place to prevent big mistakes like this.
↑ comment by Lucius Bushnaq (Lblack) · 2023-11-20T20:06:00.099Z · LW(p) · GW(p)
If actually enforcing the charter leads to them being immediately disempowered, it's not worth anything in the first place. We were already in the "worst case scenario". Better to be honest about it. Then at least the rest of the organisation doesn't get to keep pointing to the charter and the board as approving their actions when they don't.
The charter that it is the board's duty to enforce doesn't say anything about the rest of the document not counting if investors and employees make dire enough threats, I'm pretty sure.
Replies from: faul_sname, orthonormal, nikolas-kuhn↑ comment by faul_sname · 2023-11-20T23:55:20.434Z · LW(p) · GW(p)
If actually enforcing the charter leads to them being immediately disempowered, it's not worth anything in the first place.
If you pushed for fire sprinklers to be installed, then yell "FIRE", and turn on the fire sprinklers, causing a bunch of water damage, and then refuse to tell anyone where you thought the fire was and why you thought that, I don't think you should be too surprised when people contemplate taking away your ability to trigger the fire sprinklers.
Keep in mind that the announcement was not something like
After careful consideration and strategic review, the Board of Directors has decided to initiate a leadership transition. Sam Altman will be stepping down from his/her role, effective November 17, 2023. This decision is a result of mutual agreement and understanding that the company's long-term strategy and core values require a different kind of leadership moving forward.
Instead, the board announced
Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.
That is corporate speak for "Sam Altman was a lying liar about something big enough to put the entire project at risk, and as such we need to cut ties with him immediately and also warn everyone who might work with him that he was a lying liar." If you make accusations like that, and don't back them up, I don't think you get to be outraged that people start doubting your judgement.
Replies from: aphyer↑ comment by aphyer · 2023-11-21T14:34:29.131Z · LW(p) · GW(p)
If you pushed for fire sprinklers to be installed, then yell "FIRE", and turn on the fire sprinklers, causing a bunch of water damage, and then refuse to tell anyone where you thought the fire was and why you thought that, I don't think you should be too surprised when people contemplate taking away your ability to trigger the fire sprinklers.
The situation is actually even less surprising than this, because the thing people actually initially contemplated doing in response to the board's actions was not even 'taking away your ability to trigger the fire sprinklers' but 'going off and living in a new building somewhere else that you can't flood for lulz'.
As I understand the situation, OpenAI's board had, and retained, the legal right to stay in charge of OpenAI even as all its employees left for Microsoft. If they decide they would rather negotiate away from their starting point of 'being in charge of an empty building' by making concessions, this doesn't mean that the charter didn't mean anything! It means that the charter gave them a bunch of power which they wasted.
↑ comment by orthonormal · 2023-11-20T20:13:55.024Z · LW(p) · GW(p)
If they thought this would be the outcome of firing Sam, they would not have done so.
The risk they took was calculated, but man, are they bad at politics.
Replies from: dr_s, chess-teacher↑ comment by dr_s · 2023-11-21T07:38:58.856Z · LW(p) · GW(p)
I keep being confused by them not revealing their reasons. Whatever they are, there's no way that saying them out loud wouldn't give some ammo to those defending them, unless somehow between Friday and now they swung from "omg this is so serious we need to fire Altman NOW" to "oops looks like it was a nothingburger, we'll look stupid if we say it out loud". Do they think it's a literal infohazard or something? Is it such a serious accusation that it would involve the police to state it out loud?
Replies from: faul_sname↑ comment by faul_sname · 2023-11-21T19:35:38.178Z · LW(p) · GW(p)
At this point I'm beginning to wonder if a gag order is involved.
↑ comment by Chess3D (chess-teacher) · 2023-11-20T23:31:45.320Z · LW(p) · GW(p)
Interesting! Bad at politics is a good way to put it. So you think this was purely a political power move to remove Sam, and they were so bad at projecting the outcomes that all of them thought Greg would stay on board as President and employees would largely accept the change?
Replies from: orthonormal↑ comment by orthonormal · 2023-11-21T20:02:39.319Z · LW(p) · GW(p)
No, I don't think the board's motives were power politics; I'm saying that they failed to account for the kind of political power moves that Sam would make in response.
↑ comment by Amalthea (nikolas-kuhn) · 2023-11-20T20:11:31.581Z · LW(p) · GW(p)
It's hard to know for sure, but I think this is a reasonable and potentially helpful perspective. Some of the perceived repercussions on the state of AI safety might be "the band-aid being ripped off".
↑ comment by Tachikoma (tachikoma) · 2023-11-20T18:14:23.091Z · LW(p) · GW(p)
The important question is: why now? Why with so little evidence to back up such an extreme action?
comment by Alex A (alex-a-1) · 2023-11-21T14:02:46.815Z · LW(p) · GW(p)
RE: the board’s vague language in their initial statement
Smart people who have an objective of accumulating and keeping control - who are skilled at persuasion and manipulation - will often leave little trace of wrongdoing. They're optimizing for alibis and plausible deniability. Being around them and trying to collaborate with them is frustrating. If you're self-aware enough, you can recognize that your contributions are being twisted, that your voice is going unheard, and that critical information is being withheld from you, but it's not easy. And when you try to bring up concerns, they are very good at convincing you that those concerns are actually your fault.
I can see a world where the board was able to recognize that Sam's behaviors did not align with OpenAI's mission, while not having a smoking-gun example to pin on him. Being unskilled politicians with only a single lever to push (and probably morally opposed to other political tactics), the board did the only thing they could think of, after trying to get Sam to listen to their concerns. Did it play out well? No.
It’s clear that EA has a problem with placing people who are immature at politics in key political positions. I also believe there may be a misalignment in objectives between the politically skilled members of EA and the rest of us—politically skilled members may be withholding political advice/training from others out of fear that they will be outmaneuvered by those they advise. This ends up working against the movement as a whole.
Replies from: lc, faul_sname↑ comment by lc · 2023-11-21T20:00:28.749Z · LW(p) · GW(p)
Feels sometimes like all of the good EAs are bad at politics and everybody on our side that's good at politics is not a good EA.
Replies from: Thane Ruthenis↑ comment by Thane Ruthenis · 2023-11-21T21:02:35.848Z · LW(p) · GW(p)
Yeah, I'm getting that vibe. EAs keep going "hell yeah, we got an actual competent mafioso on our side, but they're actually on our side!", and then it turns out the mafioso wasn't on their side, any more than any other mafioso in history had ever been on anyone's side.
↑ comment by faul_sname · 2023-11-21T19:38:38.503Z · LW(p) · GW(p)
Ok, but then why the statement implying severe misconduct rather than a generic "the board has decided that the style of leadership that Mr. Altman provides is not what OpenAI needs at this time"?
comment by orthonormal · 2023-11-21T20:18:31.471Z · LW(p) · GW(p)
I'm surprised that nobody has yet brought up the development that the board offered Dario Amodei the CEO position as part of a merger with Anthropic (and Dario said no!).
(There's no additional important content in the original article by The Information, so I linked the Reuters paywall-free version.)
Crucially, this doesn't tell us in what order the board made this offer to Dario and the other known figures (GitHub CEO Nat Friedman and Scale AI CEO Alex Wang) before getting Emmett Shear, but it's plausible that merging with Anthropic was Plan A all along. Moreover, I strongly suspect that the bad blood between Sam and the Anthropic team was strong enough that Sam had to be ousted in order for a merger to be possible.
So under this hypothesis, the board decided it was important to merge with Anthropic (probably to slow the arms race), booted Sam (using the additional fig leaf of whatever lies he's been caught in), immediately asked Dario and were surprised when he rejected them, did not have an adequate backup plan, and have been scrambling ever since.
P.S. Shear is known to be very much on record worrying that alignment is necessary and not likely to be easy; I'm curious what Friedman and Wang are on record as saying about AI x-risk.
Replies from: JamesPayor, Lukas_Gloor↑ comment by James Payor (JamesPayor) · 2023-11-22T00:01:33.031Z · LW(p) · GW(p)
Has this one been confirmed yet? (Or is there more evidence than this reporting that something like this happened?)
↑ comment by Lukas_Gloor · 2023-11-22T00:42:50.502Z · LW(p) · GW(p)
Having a "plan A" requires detailed advance-planning. I think it's much more likely that their decision was reactive rather than plan-based. They felt strongly that Altman had to go based on stuff that happened, and so they followed procedures – appoint an interim CEO and do a standard CEO search. Of course, it's plausible – I'd even say likely – that an "Anthropic merger" was on their mind as something that could happen as a result of this further down the line. But I doubt (and hope not) that this thought made a difference to their decision.
Reasoning:
- If they had a detailed plan that was motivating their actions (as opposed to reacting to a new development and figuring out what to do as things go on), they would probably have put in a bit more time gathering more potentially incriminating evidence or trying to form social alliances.
For instance, even just visiting OpenAI in the months or weeks before, saying hi to employees, introducing themselves as the board, etc., would probably have improved staff's perception of how this went down. Similarly, gathering more evidence by, e.g., talking to people close to Altman but sympathetic to safety concerns and asking whether they feel heard in the company could have unearthed more ammunition. (It's interesting that even the safety-minded researchers at OpenAI basically sided with Altman here, or, at the very least, none of them came to the board's help speaking up against Altman on similar counts. [Though I guess it's hard to speak up "on similar counts" if people don't even really know their primary concerns apart from the vague "not always candid."])
- If the thought of an Anthropic merger did play a large role in their decision-making (in the sense of "making the difference" to whether they act on something across many otherwise-similar counterfactuals), that would constitute a bad kind of scheming/plotting. People who scheme like that are probably less likely than baseline to underestimate power politics and the difficulty of ousting a charismatic leader, and more likely than baseline to prepare well for the fight. Like, if you think your actions are perfectly justified per your role as board member (i.e., if you see yourself as acting as a good board member), that's exactly the situation in which you're most likely to overlook the possibility that Altman may just go "fuck the board!" and ignore your claim to legitimacy. By contrast, if you're kind of aware that you're scheming and using the fact that you're a board member merely opportunistically, it might more readily cross your mind that Altman might scheme back at you and use the fact that he knows everyone at the company and has a great reputation in the Valley at large.
- It seems like the story feels overall more coherent if the board perceived themselves to be acting under some sort of time-pressure (I put maybe 75% on this).
- Maybe they felt really anxious or uncomfortable with the 'knowledge' or 'near-certainty' (as it must have felt to them, if they were acting as good board members) that Altman is a bad leader, so they sped things up because it was psychologically straining to deal with the uncertain situation.
- Maybe Altman approaching investors made them worry that if he succeeds, he'd acquire too much leverage.
- Maybe Ilya approached them with something and prompted them to react to it and do something, and in the heat of the moment, they didn't realize that it might be wise to pause and think things through and see if Ilya's mood is a stable one.
- Maybe there was a capabilities breakthrough and the board and Ilya were worried the new system may not be safe enough especially considering that once the weights leak, people anywhere on the internet can tinker with the thing and improve it with tweaks and tricks.
- [Many other possibilities I'm not thinking of.]
- [Update: I posted this update before gwern's comment, but didn't realize that it's way more likely to be the case than the other ones before he said it.] I read a rumor in a new article about talks about how to replace another board member, so maybe there was time pressure before Altman and Brockman could appoint a new board member who would always side with them.
were surprised when he rejected them
I feel like you're not really putting yourself in the shoes of the board members if you think that, by the time they were asking around for CEOs, they were surprised that someone like Dario (with the reputation of his entire company at risk) would reject them. At that point, the whole situation was such a mess that they must have felt extremely bad and desperate, going around frantically asking for someone to come in and help save the day. (But probably you just phrased it like that because you suspect that, in their initial plan where Altman just accepts defeat, their replacement CEO search would go over smoothly. That makes sense to me conditional on them having formed such a detailed-but-naive "plan A.")
Edit: I feel confident in my stance but not massively so, so I reserve maybe 14% for a hypothesis more like the one you suggested, partly updating towards habryka's cynicism, which I unfortunately think has had a somewhat good track record recently.
comment by Michael Thiessen (michael-thiessen) · 2023-11-20T16:02:51.625Z · LW(p) · GW(p)
https://twitter.com/i/web/status/1726526112019382275
"Before I took the job, I checked on the reasoning behind the change. The board did *not* remove Sam over any specific disagreement on safety, their reasoning was completely different from that. I'm not crazy enough to take this job without board support for commercializing our awesome models."
comment by Htarlov (htarlov) · 2023-11-21T07:21:16.248Z · LW(p) · GW(p)
The most likely explanation is the simplest one that fits:
- The board had been angry about the lack of communication for some time, but with internal disagreement (Greg, Ilya).
- Things sped up lately. Ilya thought it might be good to change the CEO to someone who would slow down and look more into safety, since Altman says a lot about safety but speeds up anyway. So he gave a green light on his side (accepting the change).
- Then the board made the moves that it made.
- Then the new interim CEO wanted to try to hire Altman back, so they replaced her.
- Then the petition/letter started rolling, because prominent people saw those moves as harmful to the company and the goal.
- Ilya also saw that the outcome was bad both for the company and for the goal of slowing down, and that if the letter got more signatures it would be even worse, so he changed his mind and also signed.
Take note of the language Ilya uses. He didn't say they wronged Altman or that the decision was bad. He said he changed his mind because the consequences were harming the company.
Replies from: angmoh
comment by Lukas_Gloor · 2023-11-22T02:55:33.341Z · LW(p) · GW(p)
One thing I've realized more in the last 24h:
- It looks like Sam Altman is using a bunch of "tricks" now trying to fight his way back into more influence over OpenAI. I'm not aware of anything I'd consider unethical (at least if one has good reasons to believe one has been unfairly attacked), but it's still the sort of stuff that wouldn't come naturally to a lot of people and wouldn't feel fair to a lot of people (at least if there's a strong possibility that the other side is acting in good faith too).
- Many OpenAI employees have large monetary incentives on the line and there's levels of peer pressure that are off the charts, so we really can't read too much into who tweeted how many hearts or signed the letter or whatever.
Maybe the extent of this was obvious to most others, but for me, while I was aware that this was going on, I feel like I underestimated the extent of it. One thing that put things into a different light for me is this tweet.
Which makes me wonder, could things really have gone down a lot differently? Sure, smoking-gun-type evidence would've helped the board immensely. But is it their fault that they don't have it? Not necessarily - not if they had (1) time pressure (for one reason or another; hard to know at this point) and (2) still had enough 'soft' evidence to justify drastic action. With (1) and (2) together, it could have made sense to risk intervening even without smoking-gun-type evidence.
(2) might be a crux for some people, but I believe that there are situations where it's legitimate for a group of people to become convinced that someone else is untrustworthy without being in a position to easily and quickly convince others. NDAs in play could be one reason, but also just "the evidence is of the sort that 'you had to be there'" or "you need all this other context and individual data points only become compelling if you also know about all these other data points that together help rule out innocuous/charitable interpretations about what happened."
In any case, many people highlighted the short notice with which the board announced their decision and commented that this implies the board acted in an outrageous way and seems inexperienced. However, having seen what Altman managed to mobilize in just a couple of days, it's now obvious that, if you think he's scheming and deceptive in a genuinely bad way (as opposed to "someone who knows how to fight power struggles and is willing to fight them when he feels like he's undeservedly under attack" - which isn't by itself a bad thing), then you simply can't give him a head start.
So, while I still think the board made mistakes, I today feel a bit less confident that these mistakes were necessarily as big as I initially thought. I now think it's possible - but far from certain - that we're in a world where things are playing out the way they have mostly because it's a really tough situation for the board to be in even when they are right. And, sure, that would've been a reason to consider not starting this whole thing, but obviously that's very costly as well, so, again, tough situation.
I guess a big crux is "how common is it that you justifiably think someone is bad but it'll be hard to convince others?" My stance is that, if you're right, you should eventually be able to convince others if the others are interested in the truth and you get a bunch of time and the opportunity to talk to more people who may have extra info. But you might not be able to succeed if you only have a few days and then you're out if you don't sound convincing enough.
My opinions have been fluctuating a crazy amount recently (I don't think I've ever been in a situation where my opinions have gone up and down like this!), so, idk, I may update quite a bit in the other direction again tomorrow.
↑ comment by Chess3D (chess-teacher) · 2023-11-22T04:16:04.047Z · LW(p) · GW(p)
The board could (justifiably, based on Sam's incredible mobilization over the past days**) believe that they have little to no chance of winning the war of public opinion, and so focus on doing everything privately, since that is where they feel on equal footing.
This doesn't fully explain why they haven't stated reasons in private, but it does seem they provided at least something to Emmett Shear, as he said he had a reason from the board that wasn't safety or commercialization (PPS of https://twitter.com/eshear/status/1726526112019382275).
** Very few fired employees would even consider pushing back, but to be this successful this quickly is impressive. Not taking a side on whether it's good or evil, just noting his ability to fight back after things looked grim (betting markets were down below 10%).
Replies from: chess-teacher↑ comment by Chess3D (chess-teacher) · 2023-11-22T12:01:49.036Z · LW(p) · GW(p)
Well, seems like the board did provide zero evidence in private, too! https://twitter.com/emilychangtv/status/1727228431396704557
Quite the saga: I'm glad it is over, and I think Larry Summers is a great independent thinker who could help the board make some smart expansion decisions.
Replies from: dr_s
comment by gilch · 2023-11-22T06:13:53.389Z · LW(p) · GW(p)
He's back. Again. Maybe.
https://twitter.com/OpenAI/status/1727205556136579362
We have reached an agreement in principle for Sam [Altman] to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D'Angelo.
We are collaborating to figure out the details. Thank you so much for your patience through this.
Anyone know how Larry or Bret feel about x-risk?
Replies from: TrevorWiesinger, gilch↑ comment by trevor (TrevorWiesinger) · 2023-11-22T07:38:55.463Z · LW(p) · GW(p)
The Verge article is better; it shows tweets by Toner and Nadella confirming that it wasn't just someone getting access to the OpenAI twitter/x account (unless of course someone acquired access to all the accounts, which doesn't seem likely).
↑ comment by gilch · 2023-11-22T06:40:26.204Z · LW(p) · GW(p)
WSJ: https://www.wsj.com/tech/openai-says-sam-altman-to-return-as-ceo-766349a5
Replies from: o-o
comment by Ben Pace (Benito) · 2023-11-20T19:47:29.007Z · LW(p) · GW(p)
Fun story.
I met Emmett Shear once at a conference, and have read a bunch of his tweeting.
On Friday I turned to a colleague and asked for Shear's email, so that I could email him suggesting he try to be CEO, as he's built a multi-billion-dollar company before and has his head screwed on about x-risk.
My colleague declined; I think they thought it was a waste of time (or didn't think it was worth their social capital).
Man, I wish I had done it, that would have been so cool to have been the one to suggest it to him.
comment by dr_s · 2023-11-20T16:17:15.651Z · LW(p) · GW(p)
Man, Sutskever's back-and-forth is so odd. Hard to make obvious sense of, especially if we believe Shear's claim that this was not about disagreements on safety. Any chance it was Annie Altman's accusations towards Sam that triggered this whole thing? It seems strange, since you'd expect that to happen only if public opinion built up to unsustainable levels.
Replies from: David Hornbein↑ comment by David Hornbein · 2023-11-20T17:37:55.261Z · LW(p) · GW(p)
My guess: Sutskever was surprised by the threatened mass exodus. Whatever he originally planned to achieve, he no longer thinks he can succeed. He now thinks that falling on his sword will salvage more of what he cares about than letting the exodus happen.
Replies from: dr_s↑ comment by dr_s · 2023-11-20T17:47:08.388Z · LW(p) · GW(p)
This would be very consistent with the problem being about safety (Altman at MSFT is worse than Altman at OAI for that), but then Shear is lying (understandably, he might have to for political reasons). Or I suppose anything that involved the survival of OpenAI, which at this point is threatened anyway.
Replies from: David Hornbein↑ comment by David Hornbein · 2023-11-20T17:55:30.985Z · LW(p) · GW(p)
Maybe Shear was lying. Maybe the board lied to Shear, and he truthfully reported what they told him. Maybe "The board did *not* remove Sam over any specific disagreement on safety" but did remove him over a *general* disagreement which, in Sutskever's view, affects safety. Maybe Sutskever wanted to remove Altman for a completely different reason which also can't be achieved after a mass exodus. Maybe different board members had different motivations for removing Altman.
Replies from: orthonormal↑ comment by orthonormal · 2023-11-20T18:09:38.579Z · LW(p) · GW(p)
I agree, it's critical to have a very close reading of "The board did *not* remove Sam over any specific disagreement on safety".
This is the kind of situation where every qualifier in a statement needs to be understood as essential—if the statement were true without the word "specific", then I can't imagine why that word would have been inserted.
Replies from: Kaj_Sotala, DanielFilan, ryan_b↑ comment by Kaj_Sotala · 2023-11-21T19:47:58.739Z · LW(p) · GW(p)
To elaborate on that, Shear is presumably saying exactly as much as he is allowed to say in public. This implies that if the removal had nothing to do with safety, then he would say "The board did not remove Sam over anything to do with safety". His inserting of that qualifier implies that he couldn't make a statement that broad, and therefore that safety considerations were involved in the removal.
Replies from: michael-thiessen↑ comment by Michael Thiessen (michael-thiessen) · 2023-11-21T20:16:06.421Z · LW(p) · GW(p)
According to Bloomberg, "Even CEO Shear has been left in the dark, according to people familiar with the matter. He has told people close to OpenAI that he doesn’t plan to stick around if the board can’t clearly communicate to him in writing its reasoning for Altman’s sudden firing."
Evidence that Shear simply wasn't told the exact reason, though the "in writing" part is suspicious. Maybe he was told, just not in writing, and wants them to write it down so they're on the record.
↑ comment by DanielFilan · 2023-11-20T21:15:07.216Z · LW(p) · GW(p)
He was probably kinda sleep deprived and rushed, which could explain inessential words being added.
↑ comment by ryan_b · 2023-11-20T19:13:15.971Z · LW(p) · GW(p)
I would normally agree with this, except it does not seem to me like the board is particularly deliberate about their communication so far. If they are conscientious enough about their communication to craft it down to the word, why did they handle the whole affair in the way they seem to have so far?
I feel like a group of people who did not see fit to provide context or justifications to either their employees or largest shareholder when changing company leadership and board composition probably also wouldn't weigh each word carefully when explaining the situation to a total outsider.
We still benefit from a very close reading, mind you; I just believe there's a lot more wiggle room here than we would normally expect from corporate boards operating with legal advice based on the other information we have.
Replies from: orthonormal↑ comment by orthonormal · 2023-11-20T20:10:38.338Z · LW(p) · GW(p)
- The quote is from Emmett Shear, not a board member.
- The board is also following the "don't say anything literally false" policy by saying practically nothing publicly.
- Just as I infer from Shear's qualifier that the firing did have something to do with safety, I infer from the board's public silence that their reason for the firing isn't one that would win back the departing OpenAI members (or would only do so at a cost that's not worth paying).
- This is consistent with it being a safety concern shared by the superalignment team (who by and large didn't sign the statement at first) but not by the rest of OpenAI (who view pushing capabilities forward as a good thing, because like Sam they believe the EV of OpenAI building AGI is better than the EV of unilaterally stopping). That's my current main hypothesis.
↑ comment by dr_s · 2023-11-21T07:43:35.263Z · LW(p) · GW(p)
(or would only do so at a cost that's not worth paying)
That's the part that confuses me most. An NDA wouldn't be strong enough reason at this point. As you say, safety concerns might, but that seems pretty wild unless they literally already have AGI and are fighting over what to do with it. The other thing is anything that if said out loud might involve the police, so revealing the info would be itself an escalation (and possibly mutually assured destruction, if there's criminal liability on both sides). I got nothing.
comment by Andrew_Clough · 2023-11-21T15:01:26.863Z · LW(p) · GW(p)
The facts very strongly suggest that the board is not a monolithic entity. Its inability to tell a sensible story about the reasons for Sam's firing might be because no single comprehensible story exists: different board members may have had different motives that let them agree on the firing initially, but not on a story they could jointly endorse afterwards.
comment by Odd anon · 2023-11-20T20:09:18.326Z · LW(p) · GW(p)
There's... too many things here. Too many unexpected steps, somehow pointing at too specific an outcome. If there's a plot, it is horrendously Machiavellian.
(Hinton's quote, which keeps popping into my head: "These things will have learned from us by reading all the novels that ever were and everything Machiavelli ever wrote, that how to manipulate people, right? And if they're much smarter than us, they'll be very good at manipulating us. You won't realise what's going on. You'll be like a two year old who's being asked, do you want the peas or the cauliflower? And doesn't realise you don't have to have either. And you'll be that easy to manipulate. And so even if they can't directly pull levers, they can certainly get us to pull levers. It turns out if you can manipulate people, you can invade a building in Washington without ever going there yourself.")
(And Altman: "i expect ai to be capable of superhuman persuasion well before it is superhuman at general intelligence, which may lead to some very strange outcomes")
If an AI were to spike in capabilities specifically relating to manipulating individuals and groups of people, this is roughly what I would expect the outcome to look like. Maybe not even that goal-focused or agent-like, given that GPT-4 wasn't particularly lucid. Such an outcome would likely have started with deliberate probing by safety testers, asking it whether it could say something to them that would, by words alone, result in dangerous outcomes for their surroundings.
I don't think this is that likely. But I don't think I can discount it as a real possibility anymore.
Replies from: Seth Herd, Odd anon, chess-teacher↑ comment by Seth Herd · 2023-11-20T23:12:22.118Z · LW(p) · GW(p)
I think we can discount it as a real possibility, while accepting Altman's "i expect ai to be capable of superhuman persuasion well before it is superhuman at general intelligence, which may lead to some very strange outcomes". I think it might be weakly superhuman at persuasion for things like "buy our products", but that doesn't imply being superhuman at working out complex consequences of political maneuvering. Doing that would firmly imply a generally superhuman intelligence, I think.
So I think if this has anything to do with internal AI breakthroughs, it's tangential at most.
Replies from: dr_s↑ comment by dr_s · 2023-11-21T07:46:58.034Z · LW(p) · GW(p)
I mean, this would not be too hard though. It could be achieved by the simple trick of appearing smarter to some people and then dumber in subsequent interactions with others, scaring the safety-conscious and then making them look insane for being scared.
I don't think that's what's going on (why would even an AGI model they made be already so cleverly deceptive and driven? I would expect OAI to not be stupid enough to build the most straightforward type of maximizer) but it wouldn't be particularly hard to think up or do.
↑ comment by Odd anon · 2023-11-21T22:12:37.035Z · LW(p) · GW(p)
Time for some predictions. If this is actually from AI developing social manipulation superpowers, I would expect:
- We never find out any real reasonable-sounding reason for Altman's firing.
- OpenAI does not revert to how it was before.
- More instances of people near OpenAI's safety people doing bizarre unexpected things that have stranger outcomes.
- Possibly one of the following:
- Some extreme "scissors statements" pop up which divide AI groups into camps that hate each other to an unreasonable degree.
- An OpenAI person who directly interacted with some scary AI suddenly either commits suicide or becomes a vocal flat-earther or similar who is weirdly convincing to many people.
- An OpenAI person skyrockets to political power, suddenly finding themselves in possession of narratives and phrases which convince millions to follow them.
(Again, I don't think it's that likely, but I do think it's possible.)
Replies from: faul_sname↑ comment by faul_sname · 2023-11-22T01:42:26.029Z · LW(p) · GW(p)
Things might be even weirder than that if this is a narrowly superhuman AI that is specifically superhuman at social manipulation, but still has the same inability to form new gears-level models exhibited by current LLMs (e.g. if they figured out how to do effective self-play on the persuasion task, but didn't actually crack AGI).
↑ comment by Chess3D (chess-teacher) · 2023-11-20T23:40:41.422Z · LW(p) · GW(p)
While I don't think this is true, it's a fun thought (and it can also be pointed at Altman himself, rather than an AGI). Neither version is true, but both are fun to think about.
comment by Eli Tyre (elityre) · 2023-11-21T20:32:23.649Z · LW(p) · GW(p)
I love how short this post is! Zvi, you should do more posts like this (in addition to your normal massive-post fare).
comment by jimrandomh · 2023-11-22T01:56:58.793Z · LW(p) · GW(p)
Adam D'Angelo retweeted a tweet implying that hidden information still exists and will come out in the future:
Have known Adam D’Angelo for many years and although I have not spoken to him in a while, the idea that he went crazy or is being vindictive over some feature overlap or any of the other rumors seems just wrong. It’s best to withhold judgement until more information comes out.
comment by Bill Benzon (bill-benzon) · 2023-11-20T18:21:52.291Z · LW(p) · GW(p)
#14: If there have indeed been secret capability gains, so that Altman was not joking about reaching AGI internally (it seems likely that he was joking, though given the stakes, it's probably not the sort of thing to joke about), then the way I read their documents, the board should make that determination:
Fifth, the board determines when we've attained AGI. Again, by AGI we mean a highly autonomous system that outperforms humans at most economically valuable work. Such a system is excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology.
Once they've made that determination, then Microsoft will not have access to the AGI technology. Given the possible consequences, I doubt that Microsoft would have found such a joke very amusing.
Replies from: dr_s↑ comment by dr_s · 2023-11-21T07:51:13.043Z · LW(p) · GW(p)
Honestly this does seem... possible. A disagreement on whether GPT-5 counts as AGI would have this effect. The most safety-minded would go "ok, this is AGI, we can't give it to Microsoft". The more business-oriented and less conservative would go "no, this isn't AGI yet, it'll make us a fuckton of money though". There would be conflict. But then, seeing how everyone might now switch to Microsoft and simply rebuild the thing from scratch there, Ilya despairs and decides to do a 180, because at least this way he gets to supervise the work somehow.
comment by trevor (TrevorWiesinger) · 2023-11-20T20:00:46.562Z · LW(p) · GW(p)
This conflict has inescapably taken place in the context of US-China competition over AI, as leaders in both countries are well known to pursue AI acceleration for applications like autonomous low-flying nuclear cruise missiles (e.g. in contingencies where military GPS networks fail), economic growth faster than the US/China/rest of the world, and information warfare [LW · GW].
I think I could confidently bet against Chinese involvement, that seems quite reasonable. I can't bet so confidently against US involvement; although I agree that it remains largely unclear, it's also plausible that this situation has a usual suspect [LW · GW] and we could have seen it coming. Either way, victory/conquest by ludicrously powerful orgs like Microsoft seem like the obvious default outcome.
comment by Hoagy · 2023-11-21T11:49:49.567Z · LW(p) · GW(p)
There had been various clashes between Altman and the board. We don’t know what all of them were. We do know the board felt Altman was moving too quickly, without sufficient concern for safety, with too much focus on building consumer products, while founding additional other companies. ChatGPT was a great consumer product, but supercharged AI development counter to OpenAI’s stated non-profit mission.
Does anyone have evidence of the board's unhappiness about speed, lack of safety concern, and disagreement with founding other companies? All seem plausible, but I have seen basically nothing concrete.
comment by rotatingpaguro · 2023-11-20T21:39:48.772Z · LW(p) · GW(p)
The theory that my mind automatically generates on seeing these happenings is that Ilya was in cahoots with Sam & Greg, and the whole pantomime was a plot to oust the external members of the board.
However, I like to think I'm wise enough to give this 5% probability on reflection.
comment by Campbell Hutcheson (campbell-hutcheson-1) · 2023-11-20T22:46:57.634Z · LW(p) · GW(p)
I'm 90% sure that the issue here was an inexperienced board, plus a Chief Scientist, that didn't understand the human dimension of leadership.
Most independent board members have a lot of management experience and so understand that their actual power is less than their power on paper. They don't have day-to-day factual knowledge about the business of the company and don't have a good grasp of relationships between employees. So, they normally look to management to tell them what to do.
Here, two of the board members lacked the organizational experience to know that this was the case; any normal board would have tried to take the temperature of the employees before removing the CEO. I think this shows that creating a board for OAI to oversee the development of AGI is an incredibly hard task, because the members need to understand both AGI and the organizational dynamics.
comment by dr_s · 2023-11-20T16:44:08.368Z · LW(p) · GW(p)
What about this?
https://twitter.com/robbensinger/status/1726387432600613127
We can definitely say that the board's decision was not made in response to malfeasance or anything related to our financial, business, safety, or security/privacy practices. This was a breakdown in communication between Sam and the board.
If considered reputable (and not a lie), this would significantly narrow the space of possible reasons.
comment by Eli Tyre (elityre) · 2023-11-21T20:30:29.943Z · LW(p) · GW(p)
Jan Leike, the other head of the superalignment team, Tweeted that he worked through the weekend on the crisis, and that the board should resign.
No link for this one?
Replies from: tristan-wegner↑ comment by Tristan Wegner (tristan-wegner) · 2023-11-22T07:58:44.067Z · LW(p) · GW(p)
I have been working all weekend with the OpenAI leadership team to help with this crisis
Jan Leike
Nov 20
I think the OpenAI board should resign
https://x.com/janleike/status/1726600432750125146?s=20
comment by AVoropaev · 2023-11-20T16:03:25.577Z · LW(p) · GW(p)
What's the source of that 505 employees letter? I mean the contents aren't too crazy, but isn't it strange that the only thing we have is a screenshot of the first page?
Replies from: Robert_AIZI, Zvi↑ comment by Robert_AIZI · 2023-11-20T16:27:10.774Z · LW(p) · GW(p)
It was covered in Axios, which also links to it as a separate pdf with all 505 signatories.
Replies from: Zvi↑ comment by Zvi · 2023-11-20T16:27:54.988Z · LW(p) · GW(p)
They now claim it's up to 650/770.
Replies from: Robert_AIZI↑ comment by Robert_AIZI · 2023-11-20T17:18:07.879Z · LW(p) · GW(p)
That link is broken for me, did you mean to link to this Lilian Weng tweet?
comment by MadHatter · 2023-11-21T04:50:24.644Z · LW(p) · GW(p)
So much drama!
I predict that in five years we are all just way more zen about everything than we are at this point in time. Heck, maybe in six months.
If AI were going to instantly kill us all upon coming into existence, maybe we'd already be dead?
Replies from: dr_s