Ilya Sutskever and Jan Leike resign from OpenAI [updated]

post by Zach Stein-Perlman · 2024-05-15T00:45:02.436Z · LW · GW · 95 comments

Contents

  Friday May 17:
  Saturday May 18:

Ilya Sutskever and Jan Leike have resigned. They led OpenAI's alignment work. Superalignment will now be led by John Schulman, it seems. Jakub Pachocki replaced Sutskever as Chief Scientist.

Reasons are unclear (as usual when safety people leave OpenAI).

The NYT piece (archive) and others I've seen don't really have details.

OpenAI announced Sutskever's departure in a blogpost.

Sutskever and Leike confirmed their departures in tweets.


Updates:

Friday May 17:

Superalignment dissolves.

Leike tweets, including:

I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point.

I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics.

These problems are quite hard to get right, and I am concerned we aren't on a trajectory to get there.

Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.

Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity.

But over the past years, safety culture and processes have taken a backseat to shiny products.

Daniel Kokotajlo talks to Vox:

“I joined with substantial hope that OpenAI would rise to the occasion and behave more responsibly as they got closer to achieving AGI. It slowly became clear to many of us that this would not happen,” Kokotajlo told me. “I gradually lost trust in OpenAI leadership and their ability to responsibly handle AGI, so I quit.” 

Kelsey Piper says:

I have seen the extremely restrictive off-boarding agreement that contains nondisclosure and non-disparagement provisions former OpenAI employees are subject to. It forbids them, for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a violation of it.

More.

TechCrunch says:

requests for . . . compute were often denied, blocking the [Superalignment] team from doing their work [according to someone on the team].

Piper is back:

OpenAI . . . says that going forward, they *won't* strip anyone of their equity for not signing the secret NDA.

(This is slightly good but OpenAI should free all past employees from their non-disparagement obligations.)

Saturday May 18:

OpenAI leaders Sam Altman and Greg Brockman tweet a response to Leike. It doesn't really say anything.

Separately, Altman tweets:

we have never clawed back anyone's vested equity, nor will we do that if people do not sign a separation agreement (or don't agree to a non-disparagement agreement). vested equity is vested equity, full stop.

there was a provision about potential equity cancellation in our previous exit docs; although we never clawed anything back, it should never have been something we had in any documents or communication. this is on me and one of the few times i've been genuinely embarrassed running openai; i did not know this was happening and i should have.

the team was already in the process of fixing the standard exit paperwork over the past month or so. if any former employee who signed one of those old agreements is worried about it, they can contact me and we'll fix that too. very sorry about this.

This seems to contradict various claims, including (1) OpenAI threatened to take all of your equity if you don't sign the non-disparagement agreement when you leave—the relevant question for evaluating OpenAI's transparency/integrity isn't whether OpenAI actually took people's equity, it's whether OpenAI threatened to—and (2) Daniel Kokotajlo gave up all of his equity. (Note: OpenAI equity isn't really equity, it's "PPUs," and I think the relevant question isn't whether you own the PPUs but rather whether you're allowed to sell them.)

No comment from OpenAI on freeing everyone from non-disparagement obligations.

It's surprising that Altman says he "did not know this was happening." I think e.g. Gwern [LW(p) · GW(p)] has been aware of this for a while. Surely Altman knew that people leaving were signing non-disparagement agreements and would rather not... Oh, maybe he is talking narrowly about vested equity and OpenAI pseudo-equity is such that he's saying something technically true.

95 comments

Comments sorted by top scores.

comment by Zach Stein-Perlman · 2024-05-17T17:19:58.177Z · LW(p) · GW(p)

Added updates to the post. Updating it as stuff happens. Not paying much attention; feel free to DM me or comment with more stuff.

comment by Tenoke · 2024-05-15T06:31:01.724Z · LW(p) · GW(p)

Reasons are unclear 

This is happening exactly 6 months after the November fiasco (the vote to remove Altman was on Nov 17th), which is likely the length of his notice period, especially if he hasn't been in the office since then.

Are the reasons really that unclear? The specifics of why he wanted Altman out might be, but he is ultimately clearly leaving because he didn't think Altman should be in charge, while Altman thinks otherwise.

Replies from: gwern, alex-mallen
comment by gwern · 2024-05-15T18:52:09.851Z · LW(p) · GW(p)

But this is also right after GPT-4o, which, like Sora not that long ago, is a major triumph of the Sutskeverian vision of just scaling up sequence prediction for everything, and which OA has been researching for years (at least since CLIP/DALL-E 1, and possibly this effort for 2-3 years as 'Gobi'). I don't find it so hard to believe that he's held off until Sora & GPT-4o were out. These are the achievements of not just his lifetime, but hundreds of other people's lives (look at the contributor list). He's not going to quit anywhere before it. (Especially since by all accounts he's been gone the entire time, so what's a few more days or weeks of silence?)

Is there a particular reason to think that he would have had an exactly 6-month notice from the vote to remove Altman? And why would he have submitted notice then, exactly? The logical day to submit your quitting notice would be when the investigation report was submitted and was a complete Altman victory, which was not 6 months ago.

Replies from: OMNerd, Tenoke, danten
comment by Arthur Malone (OMNerd) · 2024-05-16T12:11:56.924Z · LW(p) · GW(p)

Pure speculation: The timing of these departures being the day after the big, attention-grabbing GPT-4o release makes me think that there was a fixed date for Ilya and Jan to leave, and OpenAI lined up the release and PR to drown out coverage. Especially in light of Ilya not (apparently) being very involved with GPT-4o.

Replies from: OMNerd
comment by Arthur Malone (OMNerd) · 2024-05-17T17:50:18.007Z · LW(p) · GW(p)

It also occurs to me that the causality could go the other way: Ilya and Jan may have timed their departure to coincide with the 4o release for a number of reasons. If they go on to launch a new safety org soon, for example, I'd be more inclined to think that the timing of the two events was a result of Ilya/Jan trying to use the moment to their advantage.

comment by Tenoke · 2024-05-15T20:23:16.121Z · LW(p) · GW(p)

The 21st, when Altman was reinstated, is a logical date for the resignation, and it's within a week of 6 months now, which is why a notice period, an agreement to wait ~half a year, or something similar is the first thing I thought of, since obviously the ultimate reason he is quitting is rooted in what happened around then.

Is there a particular reason to think that he would have had an exactly 6-month notice

You are right, there isn't, but 1, 3, 6 months is where I would have put the highest probability a priori.

Sora & GPT-4o were out.

Sora isn't out out, or at least not in the way 4o is out, and Ilya isn't listed as a contributor on it in any form (compared to being an 'additional contributor' for GPT-4 or 'additional leadership' for GPT-4o); in general, I doubt it had that much to do with the timing.

GPT-4o, of course, makes a lot of sense timing-wise (it's literally the next day!), and he is listed on it (though not as one of the many contributors or leads). But if he wasn't in the office during that time (or is that just a rumor?), it's just not clear to me whether he was actually participating in getting it out as his final project (which, yes, is very plausible) or whether he was just asked not to announce his departure until after the release, given that the two happen to be so close in time.

Replies from: Jacob L
comment by Jacob L · 2024-05-16T16:20:41.957Z · LW(p) · GW(p)

If a six-month notice period was the key driver of the timing, I would have very much expected to see the departure announced slightly more than six months after the notable events, rather than (very) slightly less. Given that Ilya was voting in the majority on November 17th, it seems unlikely he would have already resigned six months before the public announcement.

Replies from: Tenoke, RedMan
comment by Tenoke · 2024-05-17T07:36:21.275Z · LW(p) · GW(p)

When considering that, my thinking was that I'd expect the last day to be slightly after, but the announcement can be slightly before, since it doesn't need to fall exactly on the last day and can, and often would, come a little earlier - e.g. on the first day of his last week.

comment by RedMan · 2024-05-17T12:50:18.387Z · LW(p) · GW(p)

November 17 to May 16 is 180 days.

Pay periods often end on the 15th and end of the month, though at that level, I doubt that's relevant.

comment by danten · 2024-05-16T11:30:24.116Z · LW(p) · GW(p)

GPT-4o... like Sora not that long ago, is a major triumph of the Sutskeverian vision of just scaling up sequence prediction for everything, and which OA has been researching for years

Could you elaborate as to why you see GPT-4o as continuous with the scaling strategy? My understanding is this is a significantly smaller model than 4, designed to reduce latency and cost, which is then "compensated for" with multimodality and presumably many other improvements in architecture/data/etc.

Isn't GPT-4o a clear break (temporary, I assume) with the singular focus on scaling of the past few years?

comment by Alex Mallen (alex-mallen) · 2024-05-16T00:18:08.710Z · LW(p) · GW(p)

You seem to be talking primarily about Ilya, but it seems especially unclear why Jan is also leaving now.

Replies from: o-o
comment by O O (o-o) · 2024-05-16T00:29:30.577Z · LW(p) · GW(p)

He is an ally of Ilya

comment by Thane Ruthenis · 2024-05-16T08:32:57.785Z · LW(p) · GW(p)

That's good news.

There was a brief moment, back in 2023, when OpenAI's actions made me tentatively optimistic [LW(p) · GW(p)] that the company was actually taking alignment seriously, even if its model of the problem was broken.

Everything that happened since then has made it clear that this is not the case; that all these big flashy commitments like Superalignment were just safety-washing [LW · GW] and virtue signaling. They were only going to do alignment work inasmuch as that didn't interfere with racing full-speed towards greater capabilities.

So these resignations don't negatively impact my p(doom) in the obvious way. The alignment people at OpenAI were already powerless to do anything useful regarding changing the company direction.

On the other hand, what these resignations do is showcase that fact. Inasmuch as Superalignment was a virtue-signaling move meant to paint OpenAI as caring deeply about AI Safety, so many people working on it resigning or getting fired starkly signals the opposite.

And it's good to have that more in the open; it's good that OpenAI loses its pretense.

Oh, and it's also good that OpenAI is losing talented engineers, of course.

Replies from: Wei_Dai, Erich_Grunewald
comment by Wei Dai (Wei_Dai) · 2024-05-17T20:23:31.097Z · LW(p) · GW(p)

So these resignations don’t negatively impact my p(doom) in the obvious way. The alignment people at OpenAI were already powerless to do anything useful regarding changing the company direction.

How were you already sure of this before the resignations actually happened? I of course had my own suspicions that this was the case, but was uncertain enough that the resignations are still a significant negative update.

ETA: Perhaps worth pointing out here that Geoffrey Irving recently left Google DeepMind to be Research Director at UK AISI, but seemingly on good terms (since Google DeepMind recently reaffirmed its intention to collaborate with UK AISI).

Replies from: Thane Ruthenis
comment by Thane Ruthenis · 2024-05-18T22:22:54.157Z · LW(p) · GW(p)

How were you already sure of this before the resignations actually happened?

OpenAI enthusiastically commercializing AI + the "Superalignment" approach being exactly the approach [LW(p) · GW(p)] I'd expect someone doing safety-washing to pick + the November 2023 drama + the stated trillion-dollar plans to increase worldwide chip production (which are directly at odds [LW · GW] with the way OpenAI previously framed its safety concerns).

Some of the preceding resignations (chiefly, Daniel Kokotajlo's) also played a role here, though I didn't update off of them much either.

comment by Erich_Grunewald · 2024-05-18T20:46:40.204Z · LW(p) · GW(p)

Everything that happened since then has made it clear that this is not the case; that all these big flashy commitments like Superalignment were just safety-washing [LW · GW] and virtue signaling. They were only going to do alignment work inasmuch as that didn't interfere with racing full-speed towards greater capabilities.

It's not clear to me that it was just safety-washing and virtue signaling. I think a better model is something like: there are competing factions within OAI that have different views, that have different interests, and that, as a result, prioritize scaling/productization/safety/etc. to varying degrees. Superalignment likely happened because (a) the safety faction (Ilya/Jan/etc.) wanted it, and (b) the Sam faction also wanted it, or tolerated it, or agreed to it due to perceived PR benefits (safety-washing), or let it happen as a result of internal negotiation/compromise, or something else, or some combination of these things.

If OAI as a whole was really only doing anything safety-adjacent for pure PR or virtue signaling reasons, I think its activities would have looked pretty different. For one, it probably would have focused much more on appeasing policymakers than on appeasing the median LessWrong user. (The typical policymaker doesn't care about the superalignment effort, and likely hasn't even heard of it.) It would also not be publishing niche (and good!) policy/governance research. Instead, it would probably spend that money on actual PR (e.g., marketing campaigns) and lobbying.

I do think OAI has been tending more in that direction (that is, in the direction of safety-washing, and/or in the direction of just doing less safety stuff period). But it doesn't seem to me like it was predestined. I.e., I don't think it was "only going to do alignment work inasmuch as that didn't interfere with racing full-speed towards greater capabilities". Rather, it looks to me like things have tended that way as a result of external incentives (e.g., looming profit, Microsoft) and internal politics (in particular, the safety faction losing power). Things could have gone quite differently, especially if the board battle had turned out differently. Things could still change, the trend could still reverse, even though that seems improbable right now.

Replies from: Thane Ruthenis
comment by Thane Ruthenis · 2024-05-18T22:10:01.022Z · LW(p) · GW(p)

Superalignment likely happened because (a) the safety faction (Ilya/Jan/etc.) wanted it, and (b) the Sam faction also wanted it, or tolerated it, or agreed to it due to perceived PR benefits (safety-washing), or let it happen as a result of internal negotiation/compromise, or something else, or some combination of these things.

Sure, that's basically my model as well. But if the faction (b) only cares about alignment due to perceived PR benefits or in order to appease faction (a), and faction (b) turns out to have overriding power such that it can destroy or drive out faction (a) and then curtail all the alignment efforts, I think it's fair to compress all that into "OpenAI's alignment efforts are safety-washing". If (b) has the real power within OpenAI, then OpenAI's behavior and values can be approximately rounded off to (b)'s behavior and values, and (a) is a rounding error.

If OAI as a whole was really only doing anything safety-adjacent for pure PR or virtue signaling reasons, I think its activities would have looked pretty different

Not if (b) is concerned about fortifying OpenAI against future challenges, such as hypothetical futures in which the AGI Doomsayers get their way and the government/the general public wakes up and tries to nationalize or ban AGI research. In that case, having a prepared, well-documented narrative of going above and beyond to ensure that their products are safe, well before any other parties woke up to the threat, will ensure that OpenAI is much more well-positioned to retain control over its research.

(I interpret Sam Altman's behavior at Congress as evidence for this kind of longer-term thinking. He didn't try to downplay the dangers of AI, which would be easy and what someone myopically optimizing for short-term PR would do. He proactively brought up the concerns that future AI progress might awaken, getting ahead of it, and thereby established OpenAI as taking them seriously and put himself into the position to control/manage these concerns.)

And it's approximately what I would do, at least, if I were in charge of OpenAI and had a different model of AGI Ruin.

And this is the potential plot whose partial failure I'm currently celebrating.

comment by Alex_Altair · 2024-05-15T01:38:05.681Z · LW(p) · GW(p)

Note that the NYT article is by Cade Metz.

Replies from: teun-van-der-weij, robo
comment by Teun van der Weij (teun-van-der-weij) · 2024-05-15T04:57:50.975Z · LW(p) · GW(p)

I am lacking context, why is this important?

Replies from: habryka4
comment by habryka (habryka4) · 2024-05-15T05:00:42.970Z · LW(p) · GW(p)

Cade Metz was the NYT journalist who doxxed Scott Alexander. IMO he has also displayed a somewhat questionable understanding of journalistic competence and integrity, and seems to be quite into narrativizing things in a weirdly adversarial way (I don't think it's obvious how this applies to this article, but it seems useful to know when modeling the trustworthiness of the article).

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2024-05-16T19:04:04.840Z · LW(p) · GW(p)

This, and see also Gwern's comment here [LW(p) · GW(p)].

comment by robo · 2024-05-15T07:28:48.942Z · LW(p) · GW(p)

My comment here is not cosmically important and I may delete it if it derails the conversation.

There are times when I would really want a friend to tap me on the shoulder and say "hey, from the outside the way you talk about <X> seems way worse than normal.  Are you hungry/tired/too emotionally close?".  They may be wrong, but often they're right.
If you (general reader you) would deeply want someone to tap you on the shoulder, read on, otherwise this comment isn't for you.

If you burn at NYT/Cade Metz's intolerable hostile garbage, and haven't taken into account how defensive tribal instincts can cloud judgements, then, um, <tap tap>?

Replies from: RobbBB, mesaoptimizer, Alex_Altair
comment by Rob Bensinger (RobbBB) · 2024-05-16T03:41:00.506Z · LW(p) · GW(p)

FWIW, Cade Metz was reaching out to MIRI and some other folks in the x-risk space back in January 2020, and I went to read some of his articles and came to the conclusion in January that he's one of the least competent journalists -- like, most likely to misunderstand his beat and emit obvious howlers -- that I'd ever encountered. I told folks as much at the time, and advised against talking to him just on the basis that a lot of his journalism is comically bad and you'll risk looking foolish if you tap him.

This was six months before Metz caused SSC to shut down and more than a year before his hit piece on Scott came out, so it wasn't in any way based on 'Metz has been mean to my friends' or anything like that. (At the time he wasn't even asking around about SSC or Scott, AFAIK.)

(I don't think this is an idiosyncratic opinion of mine, either; I've seen other non-rationalists I take seriously flag Metz as someone unusually out of his depth and error-prone for a NYT reporter, for reporting unrelated to SSC stuff.)

comment by mesaoptimizer · 2024-05-15T08:46:12.258Z · LW(p) · GW(p)

I think it is useful for someone to tap me on the shoulder and say "Hey, this information you are consuming, it's from <this source that you don't entirely trust and have a complex causal model of>".

Enforcing social norms to prevent scapegoating also destroys information that is valuable for accurate credit assignment and causally modelling reality. I haven't yet found a third alternative, and until then, I'd recommend people both encourage and help people in their community to not scapegoat or lose their minds in 'tribal instincts' (as you put it), while not throwing away valuable information.

You can care about people while also seeing their flaws and noticing how they are hurting you and others you care about.

comment by Alex_Altair · 2024-05-15T19:44:22.816Z · LW(p) · GW(p)

I really appreciate this comment!

And yeah, that's why I said only "Note that...", and not something like "don't trust this guy". I think the content of the article is probably true, and maybe it's Metz who wrote it just because AI is his beat. But I do also hold tiny models that say "maybe he dislikes us" and also something about the "questionable understanding" etc that habryka mentions below. AFAICT I'm not internally seething or anything, I just have a yellow-flag attached to this name.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2024-05-16T03:45:09.436Z · LW(p) · GW(p)

FWIW I do think "don't trust this guy" is warranted; I don't know that he's malicious, but I think he's just exceptionally incompetent relative to the average tech reporter you're likely to see stories from.

Like, in 2018 Metz wrote a full-length article on smarter-than-human AI that included the following frankly incredible sentence:

During a recent Tesla earnings call, Mr. Musk, who has struggled with questions about his company’s financial losses and concerns about the quality of its vehicles, chastised the news media for not focusing on the deaths that autonomous technology could prevent — a remarkable stance from someone who has repeatedly warned the world that A.I. is a danger to humanity.

comment by Sodium · 2024-05-15T01:32:03.591Z · LW(p) · GW(p)

So out of the twelve people on the weak-to-strong generalization paper, four have since left OpenAI? (Leopold, Pavel, Jan, and Ilya)

Other recent safety related departures that come to mind are Daniel Kokotajlo and William Saunders.

Am I missing anyone else?

Replies from: Quadratic Reciprocity, Zach Stein-Perlman, LGS, UnexpectedValues
comment by Quadratic Reciprocity · 2024-05-15T02:28:14.419Z · LW(p) · GW(p)

Cullen O'Keefe also no longer at OpenAI (as of last month)

comment by Zach Stein-Perlman · 2024-05-15T05:40:18.108Z · LW(p) · GW(p)

Two other executives left two weeks ago, but that's not obviously safety-related.

Replies from: nathan-helm-burger
comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-05-15T17:28:36.693Z · LW(p) · GW(p)

 the executives— 

Diane Yoon: vice president of people 

Chris Clark: head of nonprofit and strategic initiatives

left the company earlier this week, a company spokesperson said.

comment by LGS · 2024-05-15T19:10:30.315Z · LW(p) · GW(p)

Evan Morikawa?

https://twitter.com/E0M/status/1790814866695143696

comment by Mateusz Bagiński (mateusz-baginski) · 2024-05-15T02:43:43.051Z · LW(p) · GW(p)

After returning to OpenAI just five days after he was ousted, Mr. Altman reasserted his control and continued its push toward increasingly powerful technologies that worried some of his critics. Dr. Sutskever remained an OpenAI employee, but he never returned to work.

Had this been known before now? I didn't know he "didn't get back to work", although admittedly I wasn't tracking the issue very closely.

Replies from: LawChan, nathan-helm-burger, benjamin-sturgeon
comment by LawrenceC (LawChan) · 2024-05-15T04:22:30.224Z · LW(p) · GW(p)

Yeah, it's been a bit of a meme ("where is Ilya?"). See e.g. Gwern's comment thread here. [LW(p) · GW(p)]

Replies from: mateusz-baginski
comment by Mateusz Bagiński (mateusz-baginski) · 2024-05-15T04:29:21.628Z · LW(p) · GW(p)

Yeah, that meme did reach me. But I was just assuming Ilya got back (was told to get back) to doing the usual Ilya superalignment things and decided (was told) not to stick his neck out.

Replies from: wassname
comment by wassname · 2024-05-15T07:20:52.618Z · LW(p) · GW(p)

He might have returned to work, but agreed to no external comms.

comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-05-15T17:56:56.155Z · LW(p) · GW(p)

Interesting to watch Sam Altman talk about it here at timestamp 18:40: 

Replies from: TrevorWiesinger
comment by trevor (TrevorWiesinger) · 2024-05-15T18:12:56.370Z · LW(p) · GW(p)

Notably, this interview was on March 18th, and afaik the highest-level interview Altman has had to give his two cents since the incident. There's a transcript here. (There was also this podcast a couple days ago).

I think a Dwarkesh-Altman podcast would be more likely to arrive at more substance from Altman's side of the story. I'm currently pretty confident that Dwarkesh and Altman are sufficiently competent to build enough trust to make sane and adequate pre-podcast agreements (e.g. don't be an idiot who plays tons of one-shot games just because podcast cultural norms are more vivid in your mind than game theory), but I might be wrong about this; trailblazing the frontier of making-things-happen, like Dwarkesh and Altman are, is a lot harder than thinking about the frontier of making-things-happen.

Replies from: Seth Herd
comment by Seth Herd · 2024-05-15T22:16:05.763Z · LW(p) · GW(p)

There's also this podcast from just yesterday. It's really good. Sam continues to say all the right things; in fact, I think this is the most reassuring he's ever been on taking the societal risks seriously, if not necessarily the existential risks. Which leaves me baffled. He's certainly a skilled enough social engineer to lie convincingly, but he sounds so dang sincere. I'm weakly concluding for the moment that he just doesn't think the alignment problem is that hard. I think that's wrong, but the matter is curiously murky, so it's not necessarily an irrational opinion to hold. Getting more meaningful discussion between optimistic and pessimistic alignment experts would help close that gap.

Replies from: roha, baynaynas
comment by roha · 2024-05-15T23:58:37.151Z · LW(p) · GW(p)

He has a stance towards risk that is a necessary condition for becoming the CEO of a company like OpenAI, but doesn't give you a high probability of building a safe ASI:

comment by baynaynas · 2024-05-17T16:51:14.336Z · LW(p) · GW(p)

There are a couple of reasons why, even though Sam is surrounded by people like Ilya and Jan warning him of the consequences, he's currently unwilling to change course:

  1. Staring down AGI ruin is just fundamentally a deeply scary abyss. Doubly so when it would've been Sam himself at fault.
  2. Like many political leaders, he's unable to let go of power because he believes that he himself is most capable of wielding it, even when that belief causes leaders to take actions that are worse than either would take independently if they weren't trying to wrest power away from the other.
  3. More than most, Sam envisions the upside. He's a vegetarian and cares a lot about animal suffering ("someday eating meat is going to get cancelled, and we are going to look back upon it with horror. we have a collective continual duty to find our way to better morals"), seeing the power of AGI to stop that suffering. He talks up curing cancer and objectively good tech progress a lot. He probably sees it as: heads, I live forever and am the greatest person for bringing utopia; tails, I'm dead anyway like I would be in 50 years.

Ultimately, I think Sama sees the coin flip (but believes utopia AGI is the more likely outcome) and, being the ever-optimist, is willing to flip it for the good things, because it's too scary to believe that the abyss side would happen.

comment by Benjamin Sturgeon (benjamin-sturgeon) · 2024-05-18T06:40:49.547Z · LW(p) · GW(p)

Actually, as far as I know, this is wrong. He simply hasn’t been back to the offices but has been working remotely.

https://www.vox.com/future-perfect/2024/5/17/24158403/openai-resignations-ai-safety-ilya-sutskever-jan-leike-artificial-intelligence

This article goes into some detail and seems quite good.

comment by Jelle Donders (jelle-donders) · 2024-05-15T13:59:08.211Z · LW(p) · GW(p)

Guaranteeing all the safety people who left OpenAI that any legal fees from breaking their NDA would be fully covered might be a very effective intervention.

Replies from: tristan-wegner, LiamLaw
comment by Tristan Wegner (tristan-wegner) · 2024-05-15T14:51:36.720Z · LW(p) · GW(p)

On first order, this might have a good effect on safety.

On second order, it might have negative effects, because it increases the risk to such companies of hiring people who openly worry about AI X-Risk, and therefore lowers the rate at which they do so.

Replies from: Daphne_W
comment by Daphne_W · 2024-05-17T18:59:57.990Z · LW(p) · GW(p)

On third order, people who openly worry about X-Risk may get influenced by their environment, becoming less worried as a result of staying with a company whose culture denies X-Risk, which could eventually even cause them to contribute negatively to AI Safety. Preventing them from getting hired prevents this.

comment by PhilosophicalSoul (LiamLaw) · 2024-05-17T08:18:51.911Z · LW(p) · GW(p)

In my opinion, a class action filed by all employees allegedly prejudiced (I say allegedly here, reserving the right to change 'prejudiced' in the event that new information arises) by the NDAs and gag orders would be very effective.

Were they to seek termination of these agreements on the basis of public interest in an arbitral tribunal, rather than a court or internal bargaining [LW · GW], the ex-employees are far more likely to get compensation. The litigation costs of legal practitioners there also tend to be far less.

Again, this assumes that the agreements they signed didn't also waive the right to class action arbitration. If OpenAI does have agreements this cumbersome, I am worried about the ethics of everything else they are pursuing.


For further context, see:

comment by mike_hawke · 2024-05-17T20:40:34.513Z · LW(p) · GW(p)

Even acknowledging that the NDA exists is a violation of it.

This sticks out pretty sharply to me.

Was this explained to the employees during the hiring process? What kind of precedent is there for this kind of NDA? 

Replies from: evhub
comment by evhub · 2024-05-17T22:10:38.355Z · LW(p) · GW(p)

Was this explained to the employees during the hiring process? What kind of precedent is there for this kind of NDA?

See Kelsey's follow-up reporting on this.

Replies from: mike_hawke
comment by mike_hawke · 2024-05-18T02:08:16.581Z · LW(p) · GW(p)

Thanks for the source.

I've intentionally made it difficult for myself to log into twitter. For the benefit of others who avoid Twitter, here is the text of Kelsey's tweet thread:

I'm getting two reactions to my piece about OpenAI's departure agreements: "that's normal!" (it is not; the other leading AI labs do not have similar policies) and "how is that legal?" It may not hold up in court, but here's how it works:

OpenAI like most tech companies does salaries as a mix of equity and base salary. The equity is in the form of PPUs, 'Profit Participation Units'. You can look at a recent OpenAI offer and an explanation of PPUs here: https://t.co/t2J78V8ee4

Many people at OpenAI get more of their compensation from PPUs than from base salary. PPUs can only be sold at tender offers hosted by the company. When you join OpenAI, you sign onboarding paperwork laying all of this out.

And that onboarding paperwork says you have to sign termination paperwork with a 'general release' within sixty days of departing the company. If you don't do it within 60 days, your units are cancelled. No one I spoke to at OpenAI gave this little line much thought.

And yes this is talking about vested units, because a separate clause clarifies that unvested units just transfer back to the control of OpenAI when an employee undergoes a termination event (which is normal).

There's a common legal definition of a general release, and it's just a waiver of claims against each other. Even someone who read the contract closely might be assuming they will only have to sign such a waiver of claims.

But when you actually quit, the 'general release'? It's a long, hardnosed, legally aggressive contract that includes a confidentiality agreement which covers the release itself, as well as arbitration, nonsolicitation and nondisparagement and broad 'noninterference' agreement.

And if you don't sign within sixty days your units are gone. And it gets worse - because OpenAI can also deny you access to the annual events that are the only way to sell your vested PPUs at their discretion, making ex-employees constantly worried they'll be shut out.

Finally, I want to make it clear that I contacted OpenAI in the course of reporting this story. So did my colleague SigalSamuel. They had every opportunity to reach out to the ex-employees they'd pressured into silence and say this was a misunderstanding. I hope they do.

Replies from: wassname, Dweomite
comment by wassname · 2024-05-18T05:48:48.047Z · LW(p) · GW(p)

Thanks, but this doesn't really give insight into whether this is normal or enforceable. So I wanted to point out that we don't know if it's enforceable, and we have not seen a single legal opinion.

comment by Dweomite · 2024-05-21T22:52:06.325Z · LW(p) · GW(p)

I am not a lawyer, and my only knowledge of this agreement comes from the quote above, but...if the onboarding paperwork says you need to sign "a" general release, but doesn't describe the actual terms of that general release, then it's hard for me to see an interpretation that isn't either toothless or crazy:

  1. If you interpret it to mean that OpenAI can write up a "general release" with absolutely any terms they like, and you have to sign that or lose your PPUs, then that seems like it effectively means you only keep your PPUs at their sufferance, because they could simply make the terms unconscionable.  (In general, any clause that requires you to agree to "something" in the future without specifying the terms of that future agreement is a blank check.)
  2. If you interpret it to mean either that the employee can choose the exact terms, or that the terms must be the bare minimum that would meet the legal definition of "a general release", then that sounds like OpenAI has no actual power to force the non-disclosure or non-disparagement terms--although they could very plausibly trick employees into thinking they do, and threaten them with costly legal action if they resist.  (And once the employee has fallen for the trick and signed the NDA, the NDA itself might be enforceable?)
  3. Where else are the exact terms of the "general release" going to come from, if they weren't specified in advance and neither party has the right to choose them?
comment by jacquesthibs (jacques-thibodeau) · 2024-05-19T15:36:20.269Z · LW(p) · GW(p)

In case people missed this, another safety researcher recently left OpenAI: Ryan Lowe.

I don't know Ryan's situation, but he was a "research manager working on AI alignment."

comment by Rebecca (bec-hawk) · 2024-05-19T01:25:33.882Z · LW(p) · GW(p)

Noting that while Sam describes the provision as being "about potential equity cancellation", the actual wording says 'shall be cancelled', not 'may be cancelled', as per this tweet from Kelsey Piper: https://x.com/KelseyTuoc/status/1791584341669396560

comment by Linch · 2024-05-15T06:44:55.058Z · LW(p) · GW(p)

It'll be interesting to see if OpenAI will keep going with their compute commitments now that the two main superalignment leads have left. 

Replies from: Zach Stein-Perlman, rhollerith_dot_com
comment by Zach Stein-Perlman · 2024-05-15T07:00:45.984Z · LW(p) · GW(p)

The commitment—"20% of the compute we've secured to date" (in July 2023), to be used "over the next four years"—may be quite little in 2027, with compute use increasing exponentially. I'm confused about why people think it's a big commitment.
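
A rough, hypothetical sketch of why that 2023 pledge shrinks in relative terms: if OpenAI's total compute grows at, say, 3x per year (an assumed rate, not a reported one) and the pledged 20% of mid-2023 compute is split evenly over four years, the Superalignment share of each later year's compute falls off quickly.

```python
# Hypothetical illustration only: the 3x/year growth rate and the naive even
# split of the pledge over four years are assumptions, not figures from OpenAI.
growth_per_year = 3.0                  # assumed growth in total compute
pledged_fraction = 0.20                # 20% of compute secured as of mid-2023
yearly_budget = pledged_fraction / 4   # even split across four years

for offset in range(4):
    total = growth_per_year ** offset  # total compute relative to mid-2023
    share = yearly_budget / total      # pledge's share of that year's compute
    print(f"{2023 + offset}: ~{share:.1%} of that year's total compute")
```

Under those assumptions the pledge works out to roughly 5% of total compute in 2023 but well under 1% by 2026, which is the sense in which the commitment may be quite little by then.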

Replies from: gwern, Linch
comment by gwern · 2024-05-17T16:26:12.224Z · LW(p) · GW(p)

It seems like it was a big commitment because there were several hints during the OpenAI coup reporting that Superalignment was not getting the quota as OA ran very short on compute in 2023, creating major internal stress (particularly from Sam Altman telling people different things or assigning the same job) and that was one of the reasons for Altman sidelining Ilya Sutskever in favor of Jakub Pachocki. What sounded good & everyone loved initially turned out to be a bit painful to realize. (Sort of like designing the OA LLC so the OA nonprofit board could fire the CEO.)

EDIT: speak of the devil: https://x.com/janleike/status/1791498178346549382 Note Leike has to be very cautious in his wording. EDITEDIT: and further confirmed as predating the coup [LW(p) · GW(p)] and thus almost certainly contributing to why Ilya flipped.

comment by Linch · 2024-05-17T02:55:41.274Z · LW(p) · GW(p)

I agree it's not a large commitment in some absolute sense. I think it'd still be instructive to see whether they're able to hit this (not very high) bar.

comment by RHollerith (rhollerith_dot_com) · 2024-05-15T12:32:24.425Z · LW(p) · GW(p)

Instead of the compute, can we have some extra time instead, i.e., a pause in capabilities research?

comment by Tyler Tracy (tyler-tracy) · 2024-05-15T12:11:21.947Z · LW(p) · GW(p)

It is hard to pinpoint motivation here. If you are a top researcher at a top lab working on alignment and you disagree with something within the company, I see two categories of options you can take to try to fix things

  • Stay and try to use your position of power to do good. Better that someone who deeply cares about AI risk is in charge than someone who doesn't
  • Leave in protest to try to sway public opinion into thinking that your organization is unsafe and that we should not trust it

Jan and Ilya left but haven't said much about how they lost confidence in OpenAI. I expect we will see them making more damning statements about OpenAI in the future

Or is there a possible motivation I'm missing here?

Replies from: Lukas_Gloor, Charlie Steiner, quila, roha
comment by Lukas_Gloor · 2024-05-15T13:00:28.010Z · LW(p) · GW(p)

It seems likely (though not certain) that they signed non-disparagement agreements, so we may not see more damning statements from them even if that's how they feel. Also, Ilya at least said some positive things in his leaving announcement, so that indicates either that he caved in to pressure (or too high agreeableness towards former co-workers) or that he's genuinely not particularly worried about the direction of the company and that he left more because of reasons related to his new project. 

Replies from: tristan-wegner
comment by Tristan Wegner (tristan-wegner) · 2024-05-15T14:48:13.211Z · LW(p) · GW(p)

Someone serious about alignment who sees dangers had better do what is safe and not be influenced by a non-disparagement agreement. It might cost them some job prospects, money, and possible lawsuits, but if the history of Earth is on the line? Especially since such a well-known AI genius would find plenty of support from people who backed such an open move.

So I hope he has judged that talking right NOW is not strategically worth it. E.g. he might want to increase his chance of being hired by a semi-safety-serious company (more serious than OpenAI, but not serious enough to hire a proven whistleblower), where he can use his position better.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2024-05-15T15:17:43.908Z · LW(p) · GW(p)

I agree with what you say in the first paragraph. If you're talking about Ilya, which I think you are, I can see what you mean in the second paragraph, but I'd flag that even if he had some sort of plan here, it seems pretty costly and also just bad norms for someone with his credibility to say something that indicates that he thinks OpenAI is on track to do well at handling their great responsibility, assuming he were to not actually believe this. It's one thing to not say negative things explicitly; it's a different thing to say something positive that rules out the negative interpretations. I tend to take people at their word if they say things explicitly, even if I can assume that they were facing various pressures. If I were to assume that Ilya is saying positive things that he doesn't actually believe, that wouldn't reflect well on him, IMO. 

If we consider Jan Leike's situation, I think what you're saying applies more easily, because him leaving without comment already reflects poorly on OpenAI's standing on safety, and maybe he just decided that saying something explicitly doesn't really add a ton of information (esp. since maybe there are other people who might be in a better position to say things in the future). Also, I'm not sure it affects future employment prospects too much if someone leaves a company, signs a non-disparagement agreement, and goes "no comment" to indicate that there was probably dissatisfaction with some aspects of the company. There are many explanations for this and if I was making hiring decisions at some AI company, even if it's focused on profits quite a bit, I wouldn't necessarily interpret this as a negative signal. 

That said, signing non-disparagement agreements certainly feels like it has costs and constrains option value, so it seems like a tough choice.

comment by Charlie Steiner · 2024-05-15T14:19:49.532Z · LW(p) · GW(p)

Well, one big reason is if they were prevented from doing the things they thought would constitute using their position of power to do good, or were otherwise made to feel that OpenAI wasn't a good environment for them.

comment by quila · 2024-05-15T13:57:05.062Z · LW(p) · GW(p)

Leaving to dissuade others within the company is another possibility

comment by roha · 2024-05-16T00:03:58.620Z · LW(p) · GW(p)

I assume they can't make a statement and that their choice of next occupation will be the clearest signal they can and will send out to the public.

comment by Jelle Donders (jelle-donders) · 2024-05-15T14:31:03.649Z · LW(p) · GW(p)

How many safety-focused people have left since the board drama now? I count 7, but I might be missing more. Ilya Sutskever, Jan Leike, Daniel Kokotajlo, Leopold Aschenbrenner, Cullen O'Keefe, Pavel Izmailov, William Saunders.

This is a big deal. A bunch of the voices that could raise safety concerns at OpenAI when things really heat up are now gone. Idk what happened behind the scenes, but they judged now is a good time to leave.

Possible effective intervention: Guaranteeing that if these people break their NDAs, all their legal fees will be covered. No idea how sensible this is, so agree/disagree voting encouraged.

comment by mishka · 2024-05-15T02:21:18.402Z · LW(p) · GW(p)

Ilya's departure is momentous.

What do we know about those other departures? The NYT article has this:

Jan Leike, who ran the Super Alignment team alongside Dr. Sutskever, has also resigned from OpenAI. His role will be taken by John Schulman, another company co-founder.

I have not been able to find any other traces of this information yet.

We do know that Pavel Izmailov has joined xAI: https://izmailovpavel.github.io/

Leopold Aschenbrenner still lists OpenAI as his affiliation everywhere I see. The only recent traces of his activity seem to be likes on Twitter: https://twitter.com/leopoldasch/likes

Replies from: mishka, mishka, pseudobison
comment by mishka · 2024-05-15T12:48:29.441Z · LW(p) · GW(p)

Jan Leike confirms: https://twitter.com/janleike/status/1790603862132596961

Dwarkesh is supposed to release his podcast with John Schulman today, so we can evaluate the quality of his thinking more closely (he is mostly known for reinforcement learning, https://scholar.google.com/citations?user=itSa94cAAAAJ&hl=en, although he has some track record of safety-related publications, including Unsolved Problems in ML Safety, 2021-2022, https://arxiv.org/abs/2109.13916 and Let's Verify Step by Step, https://arxiv.org/abs/2305.20050 which includes Jan Leike and Ilya Sutskever among its co-authors).

No confirmation of him becoming the new head of Superalignment yet...

Replies from: mishka
comment by mishka · 2024-05-15T17:26:12.453Z · LW(p) · GW(p)

The podcast is here: https://www.dwarkeshpatel.com/p/john-schulman?initial_medium=video

From reading the first 29 min of the transcript, my impression is: he is strong enough to lead an org to an AGI (it seems many people are strong enough to do this from our current level; the conversation does seem to show that we are pretty close), but I don't get the feeling that he is strong enough to deal with issues related to AI existential safety. At least, that's my initial impression :-(

Replies from: johnvon, nathan-helm-burger
comment by johnvon · 2024-05-19T21:41:36.119Z · LW(p) · GW(p)

This interview was terrifying to me (and I think to Dwarkesh as well): Schulman continually demonstrates that he hasn't really thought about the AGI future scenarios in that much depth and sort of handwaves away any talk of future dangers.

Right off the bat he acknowledges that they reasonably expect AGI in 1-5 years or so, and even though Dwarkesh pushes him he doesn't present any more detailed plan for safety than "Oh we'll need to be careful and cooperate with the other companies...I guess..."

Replies from: Zvi
comment by Zvi · 2024-05-21T17:25:37.031Z · LW(p) · GW(p)

Here is my coverage of it. Given this is a 'day minus one' interview of someone in a different position, and given everything else we already know about OpenAI, I thought this went about as well as it could have. I don't want to see false confidence in that kind of spot, and the failure of OpenAI to have a plan for that scenario is not news.

comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-05-15T18:33:01.517Z · LW(p) · GW(p)

I have so much more confidence in Jan and Ilya. Hopefully they go somewhere to work on AI alignment together. The critical time seems likely to be soon. See this clip from an interview with Jan: https://youtube.com/clip/UgkxFgl8Zw2bFKBtS8BPrhuHjtODMNCN5E7H?si=JBw5ZUylexeR43DT 

[Edit: watched the full interview with John and Dwarkesh. John seems kinda nervous, caught a bit unprepared to answer questions about how OpenAI might work on alignment. Most of the interesting thoughts he put forward for future work were about capabilities. Hopefully he does delve deeper into alignment work if he's going to remain in charge of it at OpenAI.]

comment by mishka · 2024-05-27T16:20:09.751Z · LW(p) · GW(p)

We do know that Pavel Izmailov has joined xAI: https://izmailovpavel.github.io/

He removed the mention of xAI, and listed Anthropic as his next job (in all his platforms).

His CV says that he is doing Scalable Oversight at Anthropic https://izmailovpavel.github.io/files/CV.pdf

His LinkedIn also states the end of his OpenAI employment as May 2024: https://www.linkedin.com/in/pavel-izmailov-8b012b258/

Leopold Aschenbrenner still lists OpenAI as his affiliation everywhere I see.

His twitter does say "ex-superalignment" now: https://x.com/leopoldasch?lang=en

comment by gabrielrecc (pseudobison) · 2024-05-15T07:51:21.266Z · LW(p) · GW(p)

Leopold and Pavel were out ("fired for allegedly leaking information") in April. https://www.silicon.co.uk/e-innovation/artificial-intelligence/openai-fires-researchers-558601

comment by Zach Stein-Perlman · 2024-05-18T21:30:38.525Z · LW(p) · GW(p)

Edit: nevermind; maybe this tweet is misleading and narrow and just about restoring people's vested equity; I'm not sure what that means in the context of OpenAI's pseudo-equity but possibly this tweet isn't a big commitment.

@gwern [LW · GW] I'm interested in your take on this new Altman tweet:

we have never clawed back anyone's vested equity, nor will we do that if people do not sign a separation agreement (or don't agree to a non-disparagement agreement). vested equity is vested equity, full stop.

there was a provision about potential equity cancellation in our previous exit docs; although we never clawed anything back, it should never have been something we had in any documents or communication. this is on me and one of the few times i've been genuinely embarrassed running openai; i did not know this was happening and i should have.

the team was already in the process of fixing the standard exit paperwork over the past month or so. if any former employee who signed one of those old agreements is worried about it, they can contact me and we'll fix that too. very sorry about this.

In particular "i did not know this was happening"

Replies from: bec-hawk
comment by Rebecca (bec-hawk) · 2024-05-19T00:48:26.677Z · LW(p) · GW(p)

What do you mean by pseudo-equity?

Replies from: Linch
comment by Linch · 2024-05-19T00:55:20.225Z · LW(p) · GW(p)

OpenAI has something called PPUs ("Profit Participation Units"), which in theory are supposed to act like RSUs, albeit with capped profit and no voting rights, but in practice are entirely a new legal invention, and we don't really know how they work.

Replies from: bec-hawk
comment by Rebecca (bec-hawk) · 2024-05-19T00:59:35.824Z · LW(p) · GW(p)

Is that not what Altman is referring to when he talks about vested equity? My understanding was employees had no other form of equity besides PPUs, in which case he’s talking non-misleadingly about the non-narrow case of vested PPUs, ie the thing people were alarmed about, right?

Replies from: JamesPayor
comment by James Payor (JamesPayor) · 2024-05-20T01:25:18.340Z · LW(p) · GW(p)

It may be that talking about "vested equity" is avoiding some lie that would occur if he made the same claim about the PPUs. If he did mean to include the PPUs as "vested equity" presumably he or a spokesperson could clarify, but I somehow doubt they will.

Replies from: Linch
comment by Linch · 2024-05-20T08:52:39.773Z · LW(p) · GW(p)

I'd be a bit surprised if that's the answer; if OpenAI doesn't offer any vested equity, that half-truth feels overly blatant to me.

Replies from: JamesPayor
comment by James Payor (JamesPayor) · 2024-05-20T15:39:07.452Z · LW(p) · GW(p)

Fwiw I will also be a bit surprised, because yeah.

My thought is that the strategy Sam uses with stuff is to only invoke the half-truth if it becomes necessary later. Then he can claim points for candor if he doesn't go down that route. This is why I suspect (50%) that they will avoid clarifying that he means PPUs, and that they also won't state that they will not try to stop ex-employees from exercising them, and etc. (Because it's advantageous to leave those paths open and to avoid having clearly lied in those scenarios.)

I think of this as a pattern with Sam, e.g. "We are not training GPT-5" at the MIT talk and senate hearings, which turns out was optimized to mislead and got no further clarification iirc.

There is a mitigating factor in this case which is that any threat to equity lights a fire under OpenAI staff, which I think is a good part of the reason that Sam responded so quickly.

Replies from: JamesPayor
comment by James Payor (JamesPayor) · 2024-05-22T22:55:06.118Z · LW(p) · GW(p)

Okay I guess the half-truth is more like this:

By announcing that someone who doesn’t sign the restrictive agreement is locked out of all future tender offers, OpenAI effectively makes that equity, valued at millions of dollars, conditional on the employee signing the agreement — while still truthfully saying that they technically haven’t clawed back anyone’s vested equity, as Altman claimed in his tweet on May 18.

https://www.vox.com/future-perfect/351132/openai-vested-equity-nda-sam-altman-documents-employees

comment by danbmil99 · 2024-05-16T01:37:12.334Z · LW(p) · GW(p)

Putting aside the fact that OpenAI drama seems to always happen in a world-is-watching fishbowl, this feels very much like the pedestrian trope of genius CTO getting sidelined as the product succeeds and business people pushing business interests take control. On his own, Ilya can raise money for anything he wants, hire anyone he wants, and basically just have way more freedom than he does at OpenAI.

I do think there is a basic p/doom vs e/acc divide which has probably been there all along, but as the tech keeps accelerating it becomes more and more of a sticking point.

I suspect in the depths of their souls, SA and Brock and the rest of that crowd do not really take the idea of existential threat to humanity seriously. Giving Ilya a "Safety and alignment" role probably now looks like a sop to A) shut the p-doomers up and B) signal some level of concern. But when push comes to shove, SA and team do what they know how to do -- push product out the door. Move fast and risk extinction.

One CEO I worked with summed up his attitude thusly: "Ready... FIRE! - aim."

comment by Anders Lindström (anders-lindstroem) · 2024-05-18T23:45:49.786Z · LW(p) · GW(p)

Without resorting to exotic conspiracy theories, is it that unlikely to assume that Altman et al. are under tremendous pressure from the military and  intelligence agencies to produce results to not let China or anyone else win the race for AGI? I do not for a second believe that Altman et al. are reckless idiots that do not understand what kind of fire they might be playing with, that they would risk wiping out humanity just to beat Google on search. There must be bigger forces at play here, because that is the only thing that makes sense when reading Leike's comment and observing Open AI's behavior.

comment by mike_hawke · 2024-05-21T19:42:57.835Z · LW(p) · GW(p)

I'm out of the loop. Did Daniel Kokotajlo lose his equity or not? If the NDA is not being enforced, are there now some disclosures being made?

comment by HONEST_ANNIE · 2024-05-18T07:55:19.174Z · LW(p) · GW(p)

Organizational structure is an alignment mechanism. 

While I sympathize with the stated intentions, I just can't wrap my head around the naivety. OpenAI's corporate structure was a recipe for bad corporate governance. "We are the good guys here; the structure is needed to make others align with us" - an organization where ethical people can rule as benevolent dictators is the same mistake committed socialists made when they had power.

If it was that easy, AI alignment would be solved by creating ethical AI committed to alignment and giving it as much power as possible. 

Altruists are normal humans. Nothing changes priorities faster than large sums of money. Any mix of ideals and profit-making must be arranged so that the concerns don't mix. People in charge of ethics who stand to make life-changing money can't be expected to make those hard ethics-over-money decisions.

A bad organizational structure repeatedly puts well-meaning humans like Sutskever into a position where they must choose between vast sums of money and their ethical commitments.

comment by cubefox · 2024-05-15T14:39:29.718Z · LW(p) · GW(p)

For Jan Leike to leave OpenAI I assume there must be something bad happening internally and/or he got a very good job offer elsewhere.

Replies from: mateusz-baginski
comment by Mateusz Bagiński (mateusz-baginski) · 2024-05-15T14:54:32.417Z · LW(p) · GW(p)

I find it hard to imagine a job offer that Jan Leike judged more attractive than OA superalignment. (unless he made an update similar to Kokotajlo's or something?)


comment by Adam Hup (adam-hup) · 2024-05-15T00:47:01.360Z · LW(p) · GW(p)

we're fucked

Replies from: mesaoptimizer, idlikeaporsche
comment by mesaoptimizer · 2024-05-15T08:51:46.162Z · LW(p) · GW(p)

I love the score of this comment as of writing: -1 karma points, 23 agree points.

Replies from: tristan-wegner
comment by Tristan Wegner (tristan-wegner) · 2024-05-15T15:01:18.409Z · LW(p) · GW(p)

Actually a great example of people using the voting system right. It does not contribute anything substantial to the conversation, but just expresses something most of us obviously feel.

I had to order the 2 votes into the 4 prototypes to make sure I voted sensibly:

High Karma - Agree: A well expressed opinion I deeply share

High Karma - Disagree: A well argued counterpoint that I would never use myself / It did not convince me.

Low Karma - Agree: Something obvious/trivial/repeated that I agree with, but not worth saying here.

Low Karma - Disagree: low quality rest bucket

Also, pure factual contributions (helpful links, context, etc.) should get karma votes only, as no opinion to disagree with is expressed.

comment by idlikeaporsche · 2024-05-15T13:09:21.566Z · LW(p) · GW(p)

In Gen-Z I think we're cooked

comment by roha · 2024-05-16T00:15:43.078Z · LW(p) · GW(p)

In case anyone got worried, OpenAI's blog post Introducing Superalignment on July 5, 2023 contained two links for recruiting, one still working and the other not. From this we can deduce that superalignment has been reduced to an engineering problem, and therefore scientists like Ilya and Jan were able to move on to new challenges, such as spending the last normal summer in a nice location with close friends and family.

"Please apply for our research engineer and research scientist positions."