Sam Altman fired from OpenAI

post by LawrenceC (LawChan) · 2023-11-17T20:42:30.759Z · LW · GW · 75 comments

This is a link post for https://openai.com/blog/openai-announces-leadership-transition

Basically just the title, see the OAI blog post for more details.

Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.

In a statement, the board of directors said: “OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity. The board remains fully committed to serving this mission. We are grateful for Sam’s many contributions to the founding and growth of OpenAI. At the same time, we believe new leadership is necessary as we move forward. As the leader of the company’s research, product, and safety functions, Mira is exceptionally qualified to step into the role of interim CEO. We have the utmost confidence in her ability to lead OpenAI during this transition period.”


EDIT:

Also, Greg Brockman is stepping down from his board seat:

As a part of this transition, Greg Brockman will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO.

The remaining board members are:

OpenAI chief scientist Ilya Sutskever, independent directors Quora CEO Adam D’Angelo, technology entrepreneur Tasha McCauley, and Georgetown Center for Security and Emerging Technology’s Helen Toner.


EDIT 2:

Sam Altman tweeted the following.

i loved my time at openai. it was transformative for me personally, and hopefully the world a little bit. most of all i loved working with such talented people. 

will have more to say about what’s next later. 

🫡 

Greg Brockman has also resigned.

75 comments

Comments sorted by top scores.

comment by Zach Stein-Perlman · 2023-11-18T00:46:42.894Z · LW(p) · GW(p)

Update: Greg Brockman quit.

Update: Sam and Greg say:

Sam and I are shocked and saddened by what the board did today.

Let us first say thank you to all the incredible people who we have worked with at OpenAI, our customers, our investors, and all of those who have been reaching out.

We too are still trying to figure out exactly what happened. Here is what we know:

- Last night, Sam got a text from Ilya asking to talk at noon Friday. Sam joined a Google Meet and the whole board, except Greg, was there. Ilya told Sam he was being fired and that the news was going out very soon.

- At 12:19pm, Greg got a text from Ilya asking for a quick call. At 12:23pm, Ilya sent a Google Meet link. Greg was told that he was being removed from the board (but was vital to the company and would retain his role) and that Sam had been fired. Around the same time, OpenAI published a blog post.

- As far as we know, the management team was made aware of this shortly after, other than Mira who found out the night prior.

The outpouring of support has been really nice; thank you, but please don’t spend any time being concerned. We will be fine. Greater things coming soon.

Update: three more resignations including Jakub Pachocki.

Update:

Sam Altman's firing as OpenAI CEO was not the result of "malfeasance or anything related to our financial, business, safety, or security/privacy practices" but rather a "breakdown in communications between Sam Altman and the board," per an internal memo from chief operating officer Brad Lightcap seen by Axios.

Update: Sam is planning to launch something (no details yet).

Update: Sam may return as OpenAI CEO.

Update: Tigris.

Update: talks with Sam and the board.

Update: Mira wants to hire Sam and Greg in some capacity; board still looking for a permanent CEO.

Update: Emmett Shear is interim CEO; Sam won't return.

Update: lots more resignations (according to an insider).

Update: Sam and Greg leading a new lab in Microsoft.

Update: total chaos.

Replies from: OliverHayman
comment by OliverHayman · 2023-11-18T10:05:36.362Z · LW(p) · GW(p)

Perhaps worth noting: one of the three resignations, Aleksander Madry, was head of the preparedness team, which is responsible for preventing risks from AI such as self-replication.

Replies from: Buck, o-o
comment by Buck · 2023-11-18T20:26:36.179Z · LW(p) · GW(p)

Note that Madry only just started, iirc.

comment by O O (o-o) · 2023-11-18T10:53:02.768Z · LW(p) · GW(p)

Also: Jakub Pachocki, who was the director of research.

comment by Max H (Maxc) · 2023-11-17T20:48:51.901Z · LW(p) · GW(p)

Also seems pretty significant:

As a part of this transition, Greg Brockman will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO.

The remaining board members are:

OpenAI chief scientist Ilya Sutskever, independent directors Quora CEO Adam D’Angelo, technology entrepreneur Tasha McCauley, and Georgetown Center for Security and Emerging Technology’s Helen Toner.

Has anyone collected their public statements on various AI x-risk topics anywhere?

Replies from: nikolas-kuhn, Zach Stein-Perlman, person-1, LawChan
comment by Amalthea (nikolas-kuhn) · 2023-11-17T22:24:33.485Z · LW(p) · GW(p)

Adam D'Angelo via X:

Oct 25

This should help access to AI diffuse throughout the world more quickly, and help those smaller researchers generate the large amounts of revenue that are needed to train bigger models and further fund their research.

Oct 25

We are especially excited about enabling a new class of smaller AI research groups or companies to reach a large audience, those who have unique talent or technology but don’t have the resources to build and market a consumer application to mainstream consumers.

Sep 17

This is a pretty good articulation of the unintended consequences of trying to pause AI research in the hope of reducing risk: [citing Nora Belrose's tweet linking her article]

Aug 25

We (or our artificial descendants) will look back and divide history into pre-AGI and post-AGI eras, the way we look back at prehistoric vs "modern" times today.

Aug 20

It’s so incredible that we are going to live through the creation of AGI. It will probably be the most important event in the history of the world and it will happen in our lifetimes.

comment by Zach Stein-Perlman · 2023-11-17T21:02:27.441Z · LW(p) · GW(p)

Has anyone collected their public statements on various AI x-risk topics anywhere?

A bit, not shareable.

Helen is an AI safety person. Tasha is on the Effective Ventures board. Ilya leads superalignment. Adam signed the CAIS statement.

Replies from: shay-ben-moshe, Benito
comment by ShayBenMoshe (shay-ben-moshe) · 2023-11-17T21:09:54.741Z · LW(p) · GW(p)

For completeness - in addition to Adam D’Angelo, Ilya Sutskever and Mira Murati signed the CAIS statement as well.

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2023-11-17T23:39:28.448Z · LW(p) · GW(p)

Didn't Sam Altman also sign it?

Replies from: habryka4
comment by habryka (habryka4) · 2023-11-17T23:51:36.421Z · LW(p) · GW(p)

Yes, Sam has also signed it.

Replies from: evhub
comment by evhub · 2023-11-18T09:06:13.096Z · LW(p) · GW(p)

Notably, of the people involved in this, Greg Brockman did not sign the CAIS statement, and I believe that was a purposeful choice.

comment by Ben Pace (Benito) · 2023-11-17T21:51:19.674Z · LW(p) · GW(p)

Also D'Angelo is on the board of Asana, Moskovitz's company (Moskovitz who funds Open Phil).

Replies from: nikolas-kuhn
comment by Amalthea (nikolas-kuhn) · 2023-11-17T22:12:13.095Z · LW(p) · GW(p)

Judging from his tweets, D'Angelo seems significantly unconcerned with AI risk, so I was quite taken aback to find out he was on the OpenAI board. Though I might be misinterpreting his views based on vibes.

comment by Person (person-1) · 2023-11-17T20:56:14.544Z · LW(p) · GW(p)

I couldn't remember where from, but I know that Ilya Sutskever at least takes x-risk seriously. I remember him recently going public about how failing alignment would essentially mean doom. I think it was published as an article on a news site rather than an interview, which is what he usually does. Someone with a way better memory than me could find it.

EDIT: Nevermind, found them.

comment by LawrenceC (LawChan) · 2023-11-17T22:12:19.732Z · LW(p) · GW(p)

Thanks, edited.

comment by Burny · 2023-11-18T06:07:57.978Z · LW(p) · GW(p)

"OpenAI’s ouster of CEO Sam Altman on Friday followed internal arguments among employees about whether the company was developing AI safely enough, according to people with knowledge of the situation.

Such disagreements were high on the minds of some employees during an impromptu all-hands meeting following the firing. Ilya Sutskever, a co-founder and board member at OpenAI who was responsible for limiting societal harms from its AI, took a spate of questions.

At least two employees asked Sutskever—who has been responsible for OpenAI’s biggest research breakthroughs—whether the firing amounted to a “coup” or “hostile takeover,” according to a transcript of the meeting. To some employees, the question implied that Sutskever may have felt Altman was moving too quickly to commercialize the software—which had become a billion-dollar business—at the expense of potential safety concerns."

Kara Swisher also tweeted:

"More scoopage: sources tell me chief scientist Ilya Sutskever was at the center of this. Increasing tensions with Sam Altman and Greg Brockman over role and influence and he got the board on his side."

"The developer day and how the store was introduced was in inflection moment of Altman pushing too far, too fast. My bet: [Sam will] have a new company up by Monday."

Apparently Microsoft was also blindsided by this and didn't find out until moments before the announcement.

"You can call it this way," Sutskever said about the coup allegation. "And I can understand why you chose this word, but I disagree with this. This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAl builds AGI that benefits all of humanity." AGI stands for artificial general intelligence, a term that refers to software that can reason the way humans do. 
When Sutskever was asked whether "these backroom removals are a good way to govern the most important company in the world?" he answered: "I mean, fair, I agree that there is a not ideal element to it. 100%." 

https://twitter.com/AISafetyMemes/status/1725712642117898654

comment by Wei Dai (Wei_Dai) · 2023-11-18T01:35:53.883Z · LW(p) · GW(p)

https://twitter.com/karaswisher/status/1725678898388553901 Kara Swisher @karaswisher

Sources tell me that the profit direction of the company under Altman and the speed of development, which could be seen as too risky, and the nonprofit side dedicated to more safety and caution were at odds. One person on the Sam side called it a “coup,” while another said it was the right move.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2023-11-18T01:42:16.750Z · LW(p) · GW(p)

Came across this account via a random lawyer I'm following on Twitter (for investment purposes), who commented, "Huge L for the e/acc nerds tonight". Crazy times...

Replies from: TrevorWiesinger
comment by trevor (TrevorWiesinger) · 2023-11-18T02:17:11.267Z · LW(p) · GW(p)

I think this makes sense as an incentive for AI acceleration: even if someone is trying to accelerate AI for altruistic reasons, e.g. differential tech development (maybe they calculate that LLMs have better odds of interpretability succeeding because they think in English), they should still lose access to their AI lab shortly after accelerating AI.

They get so much personal profit from accelerating AI, so only people prepared to personally lose it all within 3 years are prepared to sacrifice enough to do something as extreme as burning the remaining timeline [LW · GW].

I'm generally not on board with leadership shakeups in the AI safety community, because the disrupted alliance webs create opportunities for resourceful outsiders to worm their way in [LW · GW]. I worry especially about incentives for the US natsec community [LW · GW] to do this. But when I look at it from the game theory/moloch perspective, it might be worth the risk, if it means setting things up so that the people who accelerate AI always fail to be the ones who profit off of it, and therefore can only accelerate because they think it will benefit the world.

comment by ThirdSequence · 2023-11-18T23:38:34.241Z · LW(p) · GW(p)

Looks like Sam Altman might return as CEO. 

OpenAI board in discussions with Sam Altman to return as CEO - The Verge

Replies from: Sune, Nathan Young
comment by Sune · 2023-11-19T20:08:03.079Z · LW(p) · GW(p)

It seems the sources are supporters of Sam Altman. I have not seen any indication of this from the board's side.

Replies from: Sune
comment by Sune · 2023-11-19T21:18:52.906Z · LW(p) · GW(p)

Ok, looks like he was invited into OpenAI's office for some reason at least: https://twitter.com/sama/status/1726345564059832609

comment by Nathan Young · 2023-11-19T12:32:50.764Z · LW(p) · GW(p)

This seems to suggest a huge blunder.

comment by Nathan Young · 2023-11-18T15:45:14.633Z · LW(p) · GW(p)
Replies from: TrevorWiesinger
comment by trevor (TrevorWiesinger) · 2023-11-18T16:24:15.569Z · LW(p) · GW(p)

This is the market itself, not a screenshot! Click one of the "bet" buttons. An excellent feature.

Replies from: lahwran, TheBayesian
comment by TheBayesian · 2023-11-19T04:24:58.822Z · LW(p) · GW(p)

Note: Those are two different markets. Nathan's market is this one and Sophia Wisdom's market (currently the largest one by far) is this one.

comment by O O (o-o) · 2023-11-18T03:30:39.495Z · LW(p) · GW(p)

I expect investors will take the non-profit status of these companies more seriously going forwards.

I hope Ilya et al. realize what they’ve done.

Edit: I think I've been vindicated a bit. As I expected, money would just flock to for-profit AGI labs, as it is poised to do right now. I hope OpenAI remains a non-profit, but I think Ilya played with fire.

Replies from: o-o
comment by O O (o-o) · 2023-11-19T05:16:23.982Z · LW(p) · GW(p)

So, Meta disbanded its responsible AI team. I hope this story reminds everyone about the dangers of acting rashly.

Firing Sam Altman was really a one time use card.

Microsoft probably threatened to pull its investments and compute, which would let Sam Altman's new competitor pull ahead regardless, as OpenAI would be in an eviscerated state in terms of both funding and human capital. This move makes sense if you're at the precipice of AGI, but not before that.

Replies from: quetzal_rainbow, Siebe
comment by quetzal_rainbow · 2023-11-19T10:43:56.225Z · LW(p) · GW(p)

Their Responsible AI team was in pretty bad shape after recent lay-offs. I think Facebook just decided to cut costs.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2023-11-20T11:00:04.601Z · LW(p) · GW(p)

It was weird anyway that they had LeCun in charge and a thing called a "Responsible AI team" in the same company. No matter what one thinks about Sam Altman now, the things he said about AI risks sounded 100 times more reasonable than what LeCun says.

comment by Siebe · 2023-11-19T10:11:02.376Z · LW(p) · GW(p)

Meta's actions seem unrelated?

comment by [deactivated] (Yarrow Bouchard) · 2023-11-17T21:43:40.855Z · LW(p) · GW(p)

Now he’s free to run for governor of California in 2026:

I was thinking about it because I think the state is in a very bad place, particularly when it comes to the cost of living and specifically the cost of housing. And if that doesn’t get fixed, I think the state is going to devolve into a very unpleasant place. Like one thing that I have really come to believe is that you cannot have social justice without economic justice, and economic justice in California feels unattainable. And I think it would take someone with no loyalties to sort of very powerful interest groups. I would not be indebted to other groups, and so maybe I could try a couple of variable things, just on this issue.

...

I don’t think I’d have enough experience to do it, because maybe I could do like a few things that would be really good, but I wouldn’t know how to deal with the thousands of things that also just needed to happen.

And more importantly than that to me personally, I wanted to spend my time trying to make sure we get artificial intelligence built in a really good way, which I think is like, to me personally, the most important problem in the world and not something I was willing to set aside to run for office.

Prediction market: https://manifold.markets/firstuserhere/will-sam-altman-run-for-the-governo

comment by Taisia Terumi (taisia-terumi) · 2023-11-17T21:29:18.129Z · LW(p) · GW(p)

Aside from obvious questions about how it will impact the alignment approach of OpenAI and whether or not it is a factional war of some sort, I really hope this has nothing to do with Sama's sister. Both options—"she is wrong but something convinced the OpenAI leadership that she's right" and "she is actually right and finally gathered some proof of her claims"—are very bad. ...On the other hand, as cynical and grim as that is, sexual harassment probably won't spell a disaster down the line, unlike a power struggle at the top of an AGI-pursuing company.

Replies from: zby, Ninety-Three, Viliam
comment by zby · 2023-11-17T23:41:14.176Z · LW(p) · GW(p)

Speculation on the available info: They must have questioned him on that. Discovering that he was not entirely candid with them would be a good explanation of this announcement. And shadowbanning would be the most discoverable here.

comment by Ninety-Three · 2023-11-18T00:29:22.865Z · LW(p) · GW(p)

Surely they would use different language than "not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities" to describe a #metoo firing.

Replies from: taisia-terumi
comment by Taisia Terumi (taisia-terumi) · 2023-11-18T01:21:35.880Z · LW(p) · GW(p)

Yeah, I also think this is very unlikely. Just had to point out the possibility for completeness' sake.

In other news, someone on Twitter (a leaker? not sure) said that there probably will be more firings and that this is a struggle between the for-profit and non-profit sides of the company, with Sama representing the for-profit side.

Replies from: Daniel_Eth
comment by Daniel_Eth · 2023-11-18T05:25:00.946Z · LW(p) · GW(p)

I think they said that there were more departures to come. I assumed that was referring to people quitting because they disagreed with the decision.

comment by Viliam · 2023-11-20T09:14:39.213Z · LW(p) · GW(p)

I really hope this has nothing to do with Sama's sister

That reminds me of the post [LW · GW] we had here a month ago. When I asked how exactly we are supposed to figure out the truth about something that happened in private many years ago, I was told that:

  • OP is conducting research; we should wait for the conclusions (should I keep holding my breath?)
  • we should wait whether other victims come forward, and update accordingly (did we actually?)

Now I wonder whether Less Wrong was used as a part of a character-assassination campaign designed to make people less likely to defend Sam Altman in case of a company takeover. And we happily played along.

(This is unrelated to whether firing Sam Altman was good or bad from the perspective of AI safety.)

comment by dentalperson · 2023-11-18T12:14:48.608Z · LW(p) · GW(p)

How surprising is this to alignment community professionals (e.g. people at MIRI, Redwood Research, or similar)? From an outside view, the volatility/flexibility and movement away from pure growth and commercialization seem unexpected and could be to alignment researchers' benefit (although it's difficult to see the repercussions at this point). It is surprising to me because I don't know the inner workings of OpenAI, but I'm also surprised that it seems similarly surprising to the LW/alignment community.

Perhaps the insiders are still digesting and formulating a response, or want to keep hot takes to themselves for other reasons. If not, I'm curious whether there is actually so little information flowing between alignment communities and companies like OpenAI that this would be as surprising to them as it is to an outsider. For example, there seem to be many people at Anthropic who are directly in or culturally aligned with LW/rationality, and I expected the same to be true to a lesser extent for OpenAI.

I understood there was a real distance between groups, but still, I had a more connected model in my head that is challenged by this news and the response in the first day.

Replies from: Sune, DanielFilan
comment by Sune · 2023-11-18T12:25:16.178Z · LW(p) · GW(p)

It seems this was a surprise to almost everyone even at OpenAI, so I don’t think it is evidence that there isn’t much information flow between LW and OpenAI.

comment by DanielFilan · 2023-11-18T19:49:20.970Z · LW(p) · GW(p)

I'm at CHAI and it's shocking to me, but I'm not the most plugged-in person.

comment by RHollerith (rhollerith_dot_com) · 2023-11-18T03:47:21.165Z · LW(p) · GW(p)

Someone writes anonymously, "I feel compelled as someone close to the situation to share additional context about Sam and company. . . ."

https://www.reddit.com/r/OpenAI/comments/17xoact/comment/k9p7mpv/

Replies from: Chris_Leong
comment by Chris_Leong · 2023-11-18T06:37:25.786Z · LW(p) · GW(p)

I read their other comments and I'm skeptical. The tone is wrong.

Replies from: Benito
comment by Ben Pace (Benito) · 2023-11-18T17:26:19.821Z · LW(p) · GW(p)

It read like propaganda to me, whether the person works at the company or not.

comment by MiguelDev (whitehatStoic) · 2023-11-18T01:20:50.391Z · LW(p) · GW(p)

I wonder what changes will happen after Sam and Greg's exit. I hope they install a better direction towards AI safety.

Replies from: whitehatStoic
comment by MiguelDev (whitehatStoic) · 2023-11-18T01:22:02.841Z · LW(p) · GW(p)

I expect Sam to open up a new AI company.

Replies from: mishka
comment by mishka · 2023-11-18T05:44:53.088Z · LW(p) · GW(p)

Yeah... On one hand, I am excited about Sam and Greg hopefully trying more interesting things than just scaling Transformer LLMs, especially considering Sam's answer to the last question on Nov. 1 at Cambridge Union, 1:01:45 in https://www.youtube.com/watch?v=NjpNG0CJRMM, where he seems to think that more than Transformer-based LLMs are needed for AGI/ASI (in particular, he correctly says that "true AI" must be able to discover new physics, and he doubts LLMs are good enough for that).

On the other hand, I was hoping for a single clear leader in the AI race, and I thought that Ilya Sutskever was one of the best possible leaders for an AI safety project. And now Ilya on one side and Sam and Greg Brockman on the other are enemies, https://twitter.com/gdb/status/1725736242137182594, and if Sam and Greg do find a way to beat OpenAI, would they be able to be sufficiently mindful about safety?

Replies from: whitehatStoic, mishka
comment by MiguelDev (whitehatStoic) · 2023-11-18T06:26:47.368Z · LW(p) · GW(p)

Hmmm. The way Sam behaves, I can't see a path of him leading an AI company towards safety. The way I interpreted his world tour (22 countries?) talking about OpenAI and AI in general is as him trying to occupy the mindspace of those countries. The CEO I wish OpenAI had is someone who stays at the office, ensuring that we are on track to safely steer arguably the most revolutionary tech ever created, not someone promoting the company or the tech. I think a world tour is unnecessary if one is doing AI development and deployment safely.

(But I could be wrong too. Well, let's all see what's going to happen next.)

comment by mishka · 2023-11-18T06:22:13.353Z · LW(p) · GW(p)

Interesting, how sharply people disagree...

It would be good to be able to attribute this disagreement to a particular part of the comment. Is that about me agreeing with Sam about "True AI" needing to be able to do novel physics? Or about me implicitly supporting the statement that LLMs would not be good enough (I am not really sure; I think LLMs would probably be able to create non-LLM-based AIs, so even if they are not good enough to achieve the level of "True AI" directly, they might be able to get there by creating differently-architected AIs)?

Or about having a single clear leader being good for safety? Or about Ilya being one of the best safety project leaders, based on the history of his thinking and his qualification? Or about Sam and Greg having a fighting chance against OpenAI? Or about me being unsure of them being able to do adequate safety work on the level which Ilya is likely to provide?

I am curious which of these seem to cause disagreement...

Replies from: whitehatStoic
comment by MiguelDev (whitehatStoic) · 2023-11-18T06:31:15.113Z · LW(p) · GW(p)

I did not press the disagreement button but here is where I disagree:

Yeah... On one hand, I am excited about Sam and Greg hopefully trying more interesting things than just scaling Transformer LLMs,

Replies from: mishka
comment by mishka · 2023-11-18T06:43:07.579Z · LW(p) · GW(p)

Do you mean this in the sense that this would be particularly bad safety-wise, or do you mean this in the sense they are likely to just build huge LLMs like everyone else is doing, including even xAI?

Replies from: whitehatStoic
comment by MiguelDev (whitehatStoic) · 2023-11-18T06:50:55.277Z · LW(p) · GW(p)

I'm still figuring out Elon's xAI. 

But with regards to how Sam behaves - if he doesn't improve his framing[1] of what AI could be for the future of humanity - I expect the same results.

 

  1. ^

    (I think he frames it with himself as the main person steering the tech rather than an organisation or humanity steering the tech - that's how it feels to me, given the way he behaves.)

Replies from: mishka
comment by mishka · 2023-11-18T07:04:03.600Z · LW(p) · GW(p)

I'm still figuring out Elon's xAI.

They released a big LLM, the "Grok". With their crew of stars I hoped for a more interesting direction, but an LLM as a start is not unreasonable (one does need a performant LLM as a component).

I think he frames it with him as the main person that steers the tech

Yeah... I thought he deferred to Ilya and to the new "superalignment team" Ilya has been co-leading safety-wise...

But perhaps he was not doing that consistently enough...

Replies from: whitehatStoic
comment by MiguelDev (whitehatStoic) · 2023-11-18T07:29:37.787Z · LW(p) · GW(p)

They released a big LLM, the "Grok". With their crew of stars I hoped for a more interesting direction, but an LLM as a start is not unreasonable (one does need a performant LLM as a component).
 

I haven't played around with Grok so I'm not sure how capable or safe it is. But I hope Elon and his team of experts get the safety problem right, as he has created companies with extraordinary achievements. At least Elon has demonstrated his aspirations to better humanity in other fields of science (internet/satellites, space exploration, and EVs), and I hope that translates to xAI and Twitter.

Yeah... I thought he deferred to Ilya and to the new "superalignment team" Ilya has been co-leading safety-wise...

I felt differently about Ilya co-leading; it seemed to me that something was happening inside OpenAI. When Ilya needed to co-lead the new safety direction, it felt like: "something feels weird inside OpenAI and Ilya needed to co-lead the safety direction." So maybe the announcement today is related to that too.

Pretty sure there will be new info from OpenAI next week or two weeks from now. Hoping it favors more safety-focused directions, long term.

Replies from: mishka
comment by mishka · 2023-11-18T07:55:02.133Z · LW(p) · GW(p)

I haven't played around with Grok so I'm not sure how capable or safe it is.

I expect safety of that to be at zero (they don't think GPT-3.5-level LLMs are a problem in this sense; besides, they market it almost as an "anything goes, anti-censorship LLM").

But that's not really the issue; when a system starts being capable of writing code reasonably well, one starts getting a problem... I hope that when they come to that, to approaching AIs which can create better AIs, they'll start taking safety seriously... Otherwise, we'll be in trouble...

Ilya co-leading

I thought he was the appropriately competent person (he was probably the #1 AI scientist in the world). The right person for the most important task in the world...

And the "superalignment" team at OpenAI was... not very strong. The original official "superalignment" approach was unrealistic and hence not good enough. I made a transcript of some of his thoughts, https://www.lesswrong.com/posts/TpKktHS8GszgmMw4B/ilya-sutskever-s-thoughts-on-ai-safety-july-2023-a [LW · GW], and it was obvious that his thinking was different from the previous OpenAI "superalignment" approach and much better (as in, "actually had a chance to succeed")...

Of course, now, since it looks like the "coup" has mostly been his doing, I am less sure that this is the leadership OpenAI and OpenAI safety needs. The manner of that has certainly been too erratic. Safety efforts should not evoke the feel of "last minute emergency"...

Replies from: Kaj_Sotala, whitehatStoic
comment by Kaj_Sotala · 2023-11-18T20:40:54.208Z · LW(p) · GW(p)

I expect safety of that to be at zero

At least it refuses to give you instructions for making cocaine.

Replies from: Thane Ruthenis, mishka
comment by Thane Ruthenis · 2023-11-18T21:33:34.253Z · LW(p) · GW(p)

Well. If nothing else, the sass is refreshing after the sycophancy of all the other LLMs.

comment by mishka · 2023-11-18T22:09:43.378Z · LW(p) · GW(p)

That's good! So, at least a bit of safety fine-tuning is there...

Good to know...

comment by MiguelDev (whitehatStoic) · 2023-11-18T08:09:27.399Z · LW(p) · GW(p)

But that's not really the issue; when a system starts being capable of writing code reasonably well, one starts getting a problem... I hope that when they come to that, to approaching AIs which can create better AIs, they'll start taking safety seriously... Otherwise, we'll be in trouble...

Yeah, let's see where will they steer Grok.

And the "superalignment" team at OpenAI was... not very strong. The original official "superalignment" approach was unrealistic and hence not good enough. I made a transcript of some of his thoughts, https://www.lesswrong.com/posts/TpKktHS8GszgmMw4B/ilya-sutskever-s-thoughts-on-ai-safety-july-2023-a [LW · GW], and it was obvious that his thinking was different from the previous OpenAI "superalignment" approach and much better (as in, "actually had a chance to succeed")...

Yeah, I agree with your analysis of the superalignment agenda; I think it's not a good use of the 20% of compute resources that they have. I even think a 20% resource allocation to AI safety doesn't go deep enough into the problem, as I think a 100% allocation[1] is necessary.

I haven't had much time to study Ilya, but I like the way he explains his arguments. I hope they (Ilya, the board, and Mira or a new CEO) will be better at expanding the tech than Sam is. Let's see.

 

  1. ^

    I think the safest AI will be the most profitable technology, as everyone will want to promote and build on top of it.

comment by Mitchell_Porter · 2023-11-18T07:21:18.545Z · LW(p) · GW(p)

So I guess OpenAI will keep pushing ahead on both safety and capabilities, but not so much on commercialization? 

comment by ryan_ryan · 2023-11-17T21:26:00.127Z · LW(p) · GW(p)

Typical speculations: 

  • Annie Altman charges
  • Undisclosed financial interests (AGI, Worldcoin, or YC)
comment by O O (o-o) · 2023-11-18T16:18:48.973Z · LW(p) · GW(p)

Potentially relevant information: 

OpenAI insiders seem to also be blindsided and apparently angry at this move.

I personally think there were likely better ways for Ilya's faction to get Sam's faction to negotiate with them, but this firing makes sense based on some reviews of this company having issues with communication as a whole and potentially having a toxic work environment.

 

edit: link source now available in replies

Replies from: TrevorWiesinger
comment by trevor (TrevorWiesinger) · 2023-11-18T16:48:06.733Z · LW(p) · GW(p)

The human brain seems to be structured such that

  1. Factional lines are often drawn splitting up large groups like corporations, government agencies, and nonprofits, with the lines tracing networks of alliances and also retaliatory commitments that are often used to harden factions and individuals against removal by rivals.
  2. People are nonetheless occasionally purged along these lines rather than more efficient decision theory like values handshakes [? · GW].
  3. These conflicts and purges are followed by harsh rhetoric, since people feel urges to search languagespace and find combinations of words [LW · GW] that optimize for retaliatory harm against others.

I would be very grateful for sufficient evidence that the new leadership at OpenAI is popular or unpopular among a large portion of the employees, rather than among a small number of anonymous people who might have been allied with the purged people.

I think it might be better to donate that info (e.g. message LW mods via the intercom feature in the lower right corner) than to post it publicly.

Replies from: o-o
comment by O O (o-o) · 2023-11-18T16:51:23.692Z · LW(p) · GW(p)

There are certainly factions in most large groups, often in conflict with each other, but this sort of coup is unprecedented. I think in the majority of cases, factions tend to cooperate or come to a resolution. If factions couldn't cooperate, most corporations would be fairly dysfunctional. If the solution were a coup, governments would be even more dysfunctional.

This is public information, so is there a particular reason I should not have posted it?

Replies from: TrevorWiesinger
comment by trevor (TrevorWiesinger) · 2023-11-18T16:55:27.933Z · LW(p) · GW(p)

Can you please link to it or say what app or website this is?

Replies from: o-o
comment by O O (o-o) · 2023-11-18T16:56:11.080Z · LW(p) · GW(p)

Here it is:

"Sam Altman’s reputation among OpenAI researchers (Tech Industry)" https://www.teamblind.com/us/s/Ji1QX120

comment by O O (o-o) · 2023-11-17T21:06:18.885Z · LW(p) · GW(p)

Can someone from OpenAI anonymously spill the 🍵?

Replies from: magfrump
comment by magfrump · 2023-11-17T21:18:00.179Z · LW(p) · GW(p)

Not from OpenAI, but the language sounds like this could be the board protecting themselves against securities fraud committed by Altman.

Replies from: Holly_Elmore, None
comment by Holly_Elmore · 2023-11-18T03:42:49.769Z · LW(p) · GW(p)

What kind of securities fraud could he have committed? 

Replies from: DanielFilan
comment by DanielFilan · 2023-11-18T19:51:43.945Z · LW(p) · GW(p)

I'm just a guy but the impression I get from occasionally reading the Money Stuff newsletter is that basically anything bad you do at a public company is securities fraud, because if you do a bad thing and don't tell investors, then people who buy the securities you offer are doing so without full information because of you.

comment by [deleted] · 2023-11-19T00:10:37.430Z · LW(p) · GW(p)

I doubt the reason for his ousting was fraud-related, but if it was I think it's unlikely to be viewed as securities fraud simply because OpenAI hasn't issued any public securities. I'm not a securities lawyer, but my hunch is even if you could prosecute Altman for defrauding e.g. Microsoft shareholders, it would be far easier to sue directly for regular fraud.

Replies from: faul_sname
comment by faul_sname · 2023-11-19T05:51:30.887Z · LW(p) · GW(p)

MSFT market cap dropped about $40B in a 15 minute period on the news, so maybe someone can argue securities fraud on that basis? I dunno, I look forward to the inevitable Matt Levine article.

comment by Michael Roe (michael-roe) · 2023-11-19T16:31:15.168Z · LW(p) · GW(p)

A wild (probably wrong) theory: Sam Altman announcing custom GPTs was the thing that pushed the board to fire him.

 

customizable AI -> user can override RLHF (maybe, probably) -> we are at risk from AIs that have been fine-tuned by bad actors

comment by Michael Roe (michael-roe) · 2023-11-18T16:35:53.798Z · LW(p) · GW(p)

If he was fired for some form of sexual misconduct, we wouldn't change our views on AI risk. But the betting seems to be that it wasn't that.

 

On the other hand, if the reason for his firing was something like he had access to a concerning test result, and was concealing it from the board and the government (illegal as per the executive order) then we're going to worry about what that test result was, and how bad it is for AI risk.

 

Worst case: this is an AI preventing itself from being shut down, by getting the board members sympathetic to itself to fire the board members most likely to shut it down. (The "surely you could just switch it off" argument is lacking in imagination as to how an AGI could prevent shutdown.) Personally, I put low probability on this option.