OpenAI: Altman Returns

post by Zvi · 2023-11-30T14:10:05.469Z · 12 comments

Contents

  Sam Altman’s Statement
    So what’s next?
  Bret Taylor’s Statement
  Larry Summers’s Statement
  Helen Toner’s Statement
  OpenAI Needs a Strong Board That Can Fire Its CEO
  Some Board Member Candidates
  A Question of Valuation
  A Question of Optics

As of this morning, the new board is in place and everything else at OpenAI is officially back to the way it was before.

Events seem to have gone as expected. If you have read my previous two posts on the OpenAI situation, nothing here should surprise you.

Still seems worthwhile to gather the postscripts, official statements and reactions into their own post for future ease of reference.

What will the ultimate result be? We will likely only find out gradually over time, as we await both the investigation and the composition and behavior of the new board.

I do not believe Q* played a substantive role in events, so it is not included here. Nor do I discuss how good or bad Altman has been for safety.

Sam Altman’s Statement

Here is the official OpenAI statement from Sam Altman. He was magnanimous towards all, the classy and also smart move no matter the underlying facts. As he has throughout, he has let others spread hostility, work the press narrative and shape public reaction, while he himself almost entirely offers positivity and praise. Smart.

Before getting to what comes next, I’d like to share some thanks.

I love and respect Ilya, I think he’s a guiding light of the field and a gem of a human being. I harbor zero ill will towards him. While Ilya will no longer serve on the board, we hope to continue our working relationship and are discussing how he can continue his work at OpenAI.

I am grateful to Adam, Tasha, and Helen for working with us to come to this solution that best serves the mission. I’m excited to continue to work with Adam and am sincerely thankful to Helen and Tasha for investing a huge amount of effort in this process.

Thank you also to Emmett who had a key and constructive role in helping us reach this outcome. Emmett’s dedication to AI safety and balancing stakeholders’ interests was clear.

Mira did an amazing job throughout all of this, serving the mission, the team, and the company selflessly throughout. She is an incredible leader and OpenAI would not be OpenAI without her. Thank you.

Greg and I are partners in running this company. We have never quite figured out how to communicate that on the org chart, but we will. In the meantime, I just wanted to make it clear. Thank you for everything you have done since the very beginning, and for how you handled things from the moment this started and over the last week.

The leadership team–Mira, Brad, Jason, Che, Hannah, Diane, Anna, Bob, Srinivas, Matt, Lilian, Miles, Jan, Wojciech, John, Jonathan, Pat, and many more–is clearly ready to run the company without me. They say one way to evaluate a CEO is how you pick and train your potential successors; on that metric I am doing far better than I realized. It’s clear to me that the company is in great hands, and I hope this is abundantly clear to everyone. Thank you all.

Let that last paragraph sink in. The leadership team ex-Greg is clearly ready to run the company without Altman.

That means that whatever caused the board to fire Altman, and to whatever degree Altman then forced the board’s hand, OpenAI would have been fine if everyone involved had chosen to continue without him. We can choose whether to believe Altman’s claim in his Verge interview that he only considered returning after the board called him on Saturday, and we can speculate about what he otherwise did behind the scenes during that time. We do not know. We can of course guess, but we do not know.

He then talks about his priorities.

So what’s next?

We have three immediate priorities.

  1. Advancing our research plan and further investing in our full-stack safety efforts, which have always been critical to our work. Our research roadmap is clear; this was a wonderfully focusing time. I share the excitement you all feel; we will turn this crisis into an opportunity! I’ll work with Mira on this.

  2. Continuing to improve and deploy our products and serve our customers. It’s important that people get to experience the benefits and promise of AI, and have the opportunity to shape it. We continue to believe that great products are the best way to do this. I’ll work with Brad, Jason and Anna to ensure our unwavering commitment to users, customers, partners and governments around the world is clear.

  3. Bret, Larry, and Adam will be working very hard on the extremely important task of building out a board of diverse perspectives, improving our governance structure and overseeing an independent review of recent events. I look forward to working closely with them on these crucial steps so everyone can be confident in the stability of OpenAI.

I am so looking forward to finishing the job of building beneficial AGI with you all—best team in the world, best mission in the world.

Research, then product, then board. Such statements cannot be relied upon, but this was as good as such a statement can be. We must keep watch and see if such promises are kept. What will the new board look like? Will there indeed be a robust independent investigation into what happened? Will Ilya and Jan Leike be given the resources and support they need for OpenAI’s safety efforts?

Altman gave an interview to The Verge. Like the board, he (I believe wisely and honorably) sidesteps all questions about what caused the fight with the board and looks forward to the inquiry. In Altman’s telling, it was not his idea to come back, instead he got a call Saturday morning from some of the board asking him about potentially coming back.

He says that getting back on the board is not his focus, but that the governance structure clearly has a problem that will take a while to fix.

Q: What does “improving our governance structure” mean? Is the nonprofit holding company structure going to change?

Altman: It’s a better question for the board members, but also not right now. The honest answer is they need time and we will support them in this to really go off and think about it. Clearly our governance structure had a problem. And the best way to fix that problem is gonna take a while. And I totally get why people want an answer right now. But I also think it’s totally unreasonable to expect it.

Oh, just because designing a really good governance structure, especially for such an impactful technology is not a one week question. It’s gonna take a real amount of time for people to think through this, to debate, to get outside perspectives, for pressure testing. That just takes a while.

Yes. It is good to see this highly reasonable timeline and expectation-setting, as opposed to the previous tactics involving artificial deadlines and crises.

Murati confirms in the interview that OpenAI’s safety approach is not changing, and that this had nothing to do with safety.

Altman also made a good statement about Adam D’Angelo’s potential conflicts of interest, saying he actively wants customer representation on the board and is excited to work with him again. Altman also spent several hours with D’Angelo.

Bret Taylor’s Statement

We also have the statement from Bret Taylor. We know little about him, so reading his first official statement carefully seems wise.

On behalf of the OpenAI Board, I want to express our gratitude to the entire OpenAI community, especially all the OpenAI employees, who came together to help find a path forward for the company over the past week. Your efforts helped enable this incredible organization to continue to serve its mission to ensure that artificial general intelligence benefits all of humanity. We are thrilled that Sam, Mira and Greg are back together leading the company and driving it forward. We look forward to working with them and all of you. 

As a Board, we are focused on strengthening OpenAI’s corporate governance. Here’s how we plan to do it:

  • We will build a qualified, diverse Board of exceptional individuals whose collective experience represents the breadth of OpenAI’s mission – from technology to safety to policy. We are pleased that this Board will include a non-voting observer for Microsoft.
  • We will further stabilize the OpenAI organization so that we can continue to serve our mission.  This will include convening an independent committee of the Board to oversee a review of the recent events.
  • We will enhance the governance structure of OpenAI so that all stakeholders – users, customers, employees, partners, and community members – can trust that OpenAI will continue to thrive.

OpenAI is a more important institution than ever before. ChatGPT has made artificial intelligence a part of daily life for hundreds of millions of people. Its popularity has made AI – its benefits and its risks – central to virtually every conversation about the future of governments, business, and society.

We understand the gravity of these discussions and the central role of OpenAI in the development and safety of these awe-inspiring new technologies. Each of you plays a critical part in ensuring that we effectively meet these challenges.  We are committed to listening and learning from you, and I hope to speak with you all very soon.

We are grateful to be a part of OpenAI, and excited to work with all of you.

Mostly this is Bret Taylor properly playing the role of chairman of the board, which tells us little other than that he knows the role well, which we already knew.

Microsoft will get only a non-voting observer on the board, and other investors presumably will not get seats either. That is good news, matching reporting from The Information.

What does ‘enhance the governance structure’ mean here? It could be exactly what we need, it could be a rubber stamp, it could be anything in between. We do not know what the central result will be.

The statement on a review of recent events here is weaker than I would like. It raises the probability that the new board does not get or share a true explanation.

He mentions safety multiple times. Based on what I know about Taylor, my guess is he is unfamiliar with such questions, and does not actually know what that means in context, or what the stakes truly are. Not that he is dismissive or skeptical, rather that he is encountering all this for the first time.

Larry Summers’s Statement

Here is the announcement via Twitter from board member Larry Summers, which raises the bar in having exactly zero content. So we still know very little here.

Larry Summers: I am excited and honored to have just been named as an independent director of @OpenAI. I look forward to working with board colleagues and the OpenAI team to advance OpenAI’s extraordinarily important mission.

First steps, as outlined by Bret and Sam in their messages, include building out an exceptional board, enhancing governance procedures and supporting the remarkable OpenAI community.

Helen Toner’s Statement

Here is Helen Toner’s full Twitter statement upon resigning from the board.

Helen Toner (11/29): Today, I officially resigned from the OpenAI board. Thank you to the many friends, colleagues, and supporters who have said publicly & privately that they know our decisions have always been driven by our commitment to OpenAI’s mission.

Much has been written about the last week or two; much more will surely be said. For now, the incoming board has announced it will supervise a full independent review to determine the best next steps.

To be clear: our decision was about the board’s ability to effectively supervise the company, which was our role and responsibility. Though there has been speculation, we were not motivated by a desire to slow down OpenAI’s work.

When I joined OpenAI’s board in 2021, it was already clear to me and many around me that this was a special organization that would do big things. It has been an enormous honor to be part of the organization as the rest of the world has realized the same thing.

I have enormous respect for the OpenAI team, and wish them and the incoming board of Adam, Bret and Larry all the best. I’ll be continuing my work focused on AI policy, safety, and security, so I know our paths will cross many times in the coming years.

Many outraged people continue to demand clarity on why the board fired Altman. I believe that most of them are quietly thrilled that Toner and the others continue not to share the details, allowing the situation outside the board to return to the status quo ante.

There will supposedly be an independent investigation. Until then, I believe we have a relatively clear picture of what happened. Toner’s statement hints at some additional details.

OpenAI Needs a Strong Board That Can Fire Its CEO

Roon gets it. The board needs to keep its big red button going forward, but still must account for its actions if it wants that button to stick.

Roon: The board has a red button but also must explain why its decisions benefit humanity. If it fails to do so then it will face an employee, customer, partner revolt. OpenAI currently creates a massive amount of value for humanity and by default should be defended tooth and nail. The for-profit would not have been able to unanimously move elsewhere if there was even a modicum of respect or good reasoning given.

The danger is that if we are not careful, we will learn the wrong lessons.

Toby Ord: The last few days exploded the myth that Sam Altman’s incredible power faces any accountability. He tells us we shouldn’t trust him, but we now know the board *can’t* fire him. I think that’s important.

Rob Bensinger: We didn’t learn “they can’t fire him”. We did learn that the organization’s staff has enough faith in Sam that the staff won’t go along with the board’s wishes absent some good supporting arguments from the board. (Whether they’d have acceded to good arguments is untested.)

I just want us to be clear that the update about the board’s current power shouldn’t be a huge one, because it’s possible that staff would have accepted the board’s decision in this case if the board had better explained its reasoning and the reasoning had seemed stronger.

Quite so. From our perspective, the board botched its execution, and its members made relatively easy rhetorical targets. That is true even if the board had good reasons for its decision. If the board had not botched its execution, and had shown more gravitas? I think things go differently.

If, after an investigation, Summers, D’Angelo, and Taylor all decide to fire Altman again (which I very much do not expect), I assure you they will handle it very differently, and I would predict a very different outcome.

One of the best things about Sam Altman is his frankness that we should not trust him. Most untrustworthy people say the other thing. Same thing with Altman’s often very good statements about existential risk and the need for safety. When people bring clarity and are being helpful, we should strive to reward that, not hold it against them.

I also agree with Andrew Critch here, that it was good and right for the board to pull the plug on a false signal of supervision. If the CEO makes the board unable to supervise them, or otherwise moves against the board, then it is the duty of the board to bring things to a head, even if there are no other issues present.

Good background, potentially influential in the thinking of several board members including Helen Toner: Former OpenAI board member Holden Karnofsky’s old explanation of why and exactly how Nonprofit Boards are Weird, and how best to handle it.

Some Board Member Candidates

Eliezer Yudkowsky proposes Paul Graham for the board of OpenAI. I see the argument, especially because Graham clearly cares a lot about his kids. My worries are that he would be too steerable by Altman, and he would be too inclined to view OpenAI as essentially a traditional business, and let that overrule other questions even if he knew it shouldn’t.

If he is counted as an Altman ally, as he presumably should be, then he’s a great pick. On top of the benefits to OpenAI, the seat would provide valuable insider information to Graham. Eliezer clarifies that his motivation is that he gives Graham a good chance of figuring out a true thing when it matters, which also sounds right.

Emmett Shear also seems like a clearly great consensus pick.

One concern is that the optics of the board matter. You would be highly unwise to choose a set of nine white guys. See Taylor’s statement about the need for diverse perspectives.

A Question of Valuation

Matt Levine covers developments since Tuesday, especially that the valuation of OpenAI in its upcoming share sale did not change, as private markets can stubbornly refuse to move their prices. In my model, private valuations like this are rather arbitrary: they are based on what social story everyone involved can tell, on everyone’s relative negotiating position, and on what will generate the right momentum for the company, rather than on a fair estimate of value. Also, everyone involved is highly underinvested or overinvested, has no idea what fair value actually is, and mostly wants some form of social validation so they don’t feel too cheated on price. Thus investors often get away with absurdly low prices, and other times get tricked into very high ones.

Gary Marcus says OpenAI was never worth $86 billion. I not only disagree, I would (oh boy is this not investment advice!) happily invest at $86 billion right now if I had that ability (which I don’t) and thought it was an ethical thing to do. Grok very much does not ‘replicate most of’ GPT-4; GPT-4 is instead holding up quite well considering how long OpenAI initially sat on it.

OpenAI is nothing without its people. That does not mean OpenAI lacks all manner of secret sauce. In valuation terms I am bullish. Would the valuation have survived without Altman? No. But in the counterfactual scenario where Altman stepped aside due to health issues with an orderly succession, I would definitely have thought $86 billion remained cheap.

A Question of Optics

A key question in all this is the extent to which the board’s mistake was that its optics were bad. So here is a great example of Paul Graham advocating for excellent principles.

Paul Graham: When people criticize an action on the grounds of the “optics,” they’re almost always full of shit. All they’re really saying is “What you did looks bad.” But if they phrased it that way, they’d have to answer the question “Was it actually bad, or not?”

If someone did something bad, you don’t need to talk about “optics.” And if they did something that seems bad but that you know isn’t, why are you criticizing it at all? You should instead be explaining why it’s not as bad as it seems.

Bad optics can cause bad things to happen. So can claims that the optics are bad, or worries that others will think the optics are bad, or claims that you are generally bad at optics.

You have two responses.

  1. That means it had bad consequences, which means it was actually bad.
  2. Nobly stand up for right actions over what would ‘look good.’

Consider the options in light of recent events. We all want it to be one way. Often it is the other way.

12 comments


comment by cousin_it · 2023-11-30T18:37:39.921Z

OpenAI currently creates a massive amount of value for humanity and by default should be defended tooth and nail.

Interesting how perspectives differ on this. From what I see around me, if tomorrow a lightning bolt from God destroyed all AI technology, there'd be singing and dancing in the streets.

comment by Linch · 2023-11-30T19:29:41.312Z

Out of curiosity, is "around you" a rationalist-y crowd, or a different one?

comment by cousin_it · 2023-11-30T19:51:20.530Z

No, just regular people.

comment by Linch · 2023-11-30T21:10:50.101Z

Interesting! That does align better with the survey data than what I see on e.g. Twitter.

comment by Liam Donovan (liam-donovan-1) · 2023-11-30T20:10:18.318Z

What do they have against AI? Seems like the impact on regular people has been pretty minimal. Also, if GPT-4 level technology was allowed to fully mature and diffuse to a wide audience without increasing in base capability, it seems like the impact on everyone would be hugely beneficial.

comment by cousin_it · 2023-12-01T12:00:11.407Z

They think (correctly) that AI will take away many jobs, and that AI companies care only about money and aren't doing anything to prevent or mitigate job loss.

comment by Michael Thiessen (michael-thiessen) · 2023-11-30T22:09:10.331Z

In my non-tech circles people mostly complain about AI stealing jobs from artists, companies making money off of other people's work, etc.

People are also just scared of losing their own jobs.

comment by omegastick (isaac-poulton) · 2023-11-30T21:22:07.908Z

Some people have strong negative priors toward AI in general.

When the GPT-3 API first came out, I built a little chatbot program to show my friends/family. Two people (out of maybe 15) flat out refused to put in a message because they just didn't like the idea of talking to an AI.

I think it's more of an instinctual reaction than something thought through. There's probably a deeper psychological explanation, but I don't want to speculate.

comment by Wei Dai (Wei_Dai) · 2023-12-01T23:04:46.682Z

Just saw a poll result that's consistent with this.

comment by jefftk (jkaufman) · 2023-11-30T18:00:18.981Z

The leadership team–Mira, Brad, Jason, Che, Hannah, Diane, Anna, Bob, Srinivas, Matt, Lilian, Miles, Jan, Wojciech, John, Jonathan, Pat, and many more–is clearly ready to run the company without me. They say one way to evaluate a CEO is how you pick and train your potential successors; on that metric I am doing far better than I realized. It’s clear to me that the company is in great hands, and I hope this is abundantly clear to everyone. Thank you all.

I read this as saying to investors: "don't worry: even if the new board's review process does determine I should no longer be CEO and I leave, the company will continue to run very well". I don't think he thinks that's a likely outcome of the review, but what matters is what investors might expect.

comment by Michael Thiessen (michael-thiessen) · 2023-11-30T16:00:37.823Z

Let that last paragraph sink in. The leadership team ex-Greg is clearly ready to run the company without Altman.

I'm struggling to interpret this, so your guesses as to what this might mean would be helpful. It seems he clearly wanted to come back - is he threatening to leave again if he doesn't get his way?

Also note that Ilya is not included in the leadership team.

While Ilya will no longer serve on the board, we hope to continue our working relationship and are discussing how he can continue his work at OpenAI.

This statement also really stood out to me - if there really was no ill will, why would they have to discuss how Ilya can continue his work? Clearly there's something more going on here. Sounds like Ilya's getting the knife.

comment by Michael Thiessen (michael-thiessen) · 2023-11-30T17:46:19.888Z

Also, his statements in The Verge are so bizarre to me:

"SA: I learned that the company can truly function without me, and that’s a very nice thing. I’m very happy to be back, don’t get me wrong on that. But I come back without any of the stress of, “Oh man, I got to do this, or the company needs me or whatever.” I selfishly feel good because either I picked great leaders or I mentored them well. It’s very nice to feel like the company will be totally fine without me, and the team is ready and has leveled up."

Two business days away and the company is ready to blow up if you don't come back, and your takeaway is that it can function without you? I get that this is PR spin, but usually there's at least some amount of plausibility.

Maybe these are all attempts to signal to investors that everything is fine, that even if Sam were to leave it would still all be fine. But at some point, if I'm an investor, I have to wonder whether, given how hard Sam is trying to make it look like everything is fine, things are very much not fine.