OpenAI Staff (including Sutskever) Threaten to Quit Unless Board Resigns
post by Seth Herd · 2023-11-20T14:20:33.539Z · LW · GW · 28 comments
This is a link post for https://www.wired.com/story/openai-staff-walk-protest-sam-altman/
More drama. Perhaps this will prevent the spawning of a new, competent, well-funded AI org at Microsoft?
28 comments
Comments sorted by top scores.
comment by Robert_AIZI · 2023-11-20T15:28:26.121Z · LW(p) · GW(p)
I'm noticing my confusion about the level of support here. Kara Swisher says that these are 505/700 employees, but the OpenAI publication I'm most familiar with is the autointerpretability paper, and none (!) of the core research contributors to that paper signed this letter. Why is a large fraction of the company anti-board/pro-Sam except for 0/6 of this team (discounting Henk Tillman because he seems to work for Apple instead of OpenAI)? The only authors on that paper that signed the letter are Gabriel Goh and Ilya Sutskever. So is the alignment team unusually pro-board/anti-Sam, or are the 505 just not that large a faction in the company?
[Editing to add a link to the pdf of the letter, which is how I checked for who signed https://s3.documentcloud.org/documents/24172246/letter-to-the-openai-board-google-docs.pdf ]
Replies from: arabaga, MakoYass, ryan_b
↑ comment by arabaga · 2023-11-20T23:30:01.906Z · LW(p) · GW(p)
There is an updated list of 702 who have signed the letter (as of the time I'm writing this) here: https://www.nytimes.com/interactive/2023/11/20/technology/letter-to-the-open-ai-board.html (direct link to pdf: https://static01.nyt.com/newsgraphics/documenttools/f31ff522a5b1ad7a/9cf7eda3-full.pdf)
Nick Cammarata left OpenAI ~8 weeks ago, so he couldn't have signed the letter.
Out of the remaining 6 core research contributors:
- 3/6 have signed it: Steven Bills, Dan Mossing, and Henk Tillman
- 3/6 have still not signed it: Leo Gao, Jeff Wu, and William Saunders
Out of the non-core research contributors:
- 2/3 signed it: Gabriel Goh and Ilya Sutskever
- 1/3 still have not signed it: Jan Leike
That being said, it looks like Jan Leike has tweeted that he thinks the board should resign: https://twitter.com/janleike/status/1726600432750125146
And that tweet was liked by Leo Gao: https://twitter.com/nabla_theta/likes
Still, it is interesting that this group is clearly underrepresented among people who have actually signed the letter.
Edit: Updated to note that Nick Cammarata is no longer at OpenAI, so he couldn't have signed the letter. For what it's worth, he has liked at least one tweet that called for the board to resign: https://twitter.com/nickcammarata/likes
Replies from: gwern
↑ comment by gwern · 2023-11-20T23:54:34.137Z · LW(p) · GW(p)
4/7 have still not signed it: Nick Cammarata
Cammarata says he quit OA ~8 weeks ago, and therefore couldn't have signed it: https://twitter.com/nickcammarata/status/1725939131736633579
Replies from: arabaga
↑ comment by arabaga · 2023-11-21T00:59:26.042Z · LW(p) · GW(p)
Ah, nice catch, I'll update my comment.
Replies from: M. Y. Zuo
↑ comment by M. Y. Zuo · 2023-11-21T02:58:28.037Z · LW(p) · GW(p)
So it's been falsified? Isn't that a pretty big strike against the source, or against whoever purports the letter to be 100% genuine?
Replies from: MakoYass, M. Y. Zuo, arabaga
↑ comment by mako yass (MakoYass) · 2023-11-21T04:04:58.349Z · LW(p) · GW(p)
I believe Nick was initially mentioned as someone who wasn't on the letter
↑ comment by arabaga · 2023-11-21T04:39:55.181Z · LW(p) · GW(p)
No, the letter has not been falsified.
Just to clarify: ~700 out of ~770 OpenAI employees have signed the letter (~90%)
Out of the 10 authors of the autointerpretability paper, only 5 have signed the letter. This is much lower than the average rate. One out of the 10 is no longer at OpenAI, so couldn't have signed it, so it makes sense to count this as 5/9 rather than 5/10. Either way, it's still well below the average rate.
↑ comment by mako yass (MakoYass) · 2023-11-20T23:38:13.988Z · LW(p) · GW(p)
The only evidence I've seen so far that this is real is Kara Swisher's (who?) word, plus not having heard a refutation yet; neither of those things is very reassuring, given that Kara's thread bears The Mark:
comment by mako yass (MakoYass) · 2023-11-21T07:54:56.075Z · LW(p) · GW(p)
I was annoyed by the lack of actual proof that the petition to follow Sam was signed by anyone from OpenAI; all it would really take is a link to a single endorsement (or just an acknowledgement of its existence) from a signatory. So I asked around, and here is one such tweet: https://twitter.com/E0M/status/1726743918023496140
↑ comment by RomanHauksson (r) · 2023-11-20T19:47:45.531Z · LW(p) · GW(p)
A research team's ability to design a robust corporate structure doesn't necessarily predict their ability to solve a hard technical problem. Maybe there's some overlap, but machine learning and philosophy are different fields than business. Also, I suspect that the people doing the AI alignment research at OpenAI are not the same people who designed the corporate structure (but this might be wrong).
Welcome to LessWrong! Sorry for the harsh greeting. Standards of discourse here are higher than in other places on the internet, so quips usually aren't well-tolerated (even if they have some element of truth).
comment by Mitchell_Porter · 2023-11-20T14:56:22.052Z · LW(p) · GW(p)
I have devised a conspiracy theory that this is all a planned double-cross of EAs by the deep state. EAs thought they had a governance structure that could stop a company from developing dangerous frontier AI. Instead, the board's actions have provided an opportunity to reel in the unpredictable OpenAI and bring its research activities under the aegis of trusted government partner Microsoft, while the need to do this can be blamed on EA zealotry rendering an independent OpenAI untenable.
I am not particularly committed to this theory, but it might be an echo of the truth. As @trevor keeps saying, events at the AI frontier are not something that the national security elite can just allow to unfold unattended.
Replies from: TrevorWiesinger, ErickBall, ryan_b, lahwran, o-o
↑ comment by trevor (TrevorWiesinger) · 2023-11-20T18:10:59.185Z · LW(p) · GW(p)
This is the wrong way to think about this, and an even worse way to write about this.
I think this comment is closing the Overton window on extremely important concepts, in a really bad way. There probably isn't a "deep state": we can infer that the US natsec establishment is probably decentralized, a la lc's Don't take the organization chart literally [LW · GW], due to the structure of the human brain. I've argued that competent networks of people emerge stochastically based on technological advances [LW · GW] like better lie-detection technology, whereas Gwern argued that competence mainly emerges based on proximity to the core leadership and leadership's wavering focus [LW(p) · GW(p)].
I've argued pretty persuasively that there's good odds that excessively powerful people at tech companies and intelligence agencies might follow the national interest [LW · GW], and hijack or damage the AI safety community [LW · GW] in order to get better positioned to access AI tech.
I'm really glad to see people take this problem seriously, and I'm frankly pretty disturbed that the conflict I sketched out might have ended up happening so soon after I first went public about it a month ago [LW · GW]. I think your comment also gets big parts of the problem right e.g. "events at the AI frontier are not something that the national security elite can just allow to unfold unattended" which is well-put.
But everything is on fire right now, and people have unfairly high standards for getting everything right the first time (it is even more unfair for the world which is ending). Like learning a language, a system as obscure and complicated as this required me to make tons of mistakes before I had something ready for presentation.
Replies from: None
↑ comment by [deleted] · 2023-11-20T18:31:16.827Z · LW(p) · GW(p)
It's an Occam's razor violation. The simpler explanation is that the current chaos comes from human beings making mistakes. The OAI board detonating their own company does not achieve their stated goal of developing safe AGI.
If the ultimate reason here had to do with AI safety, a diaspora from OAI spreads the knowledge around to all the competition.
As for national security, the current outcomes are chaos. Why not just send the FBI to seize everything if the US government decided that AI is a security issue?
That ultimately has to be what happens. In an alternate history without a Manhattan Project, imagine GE's energy research division in 1950 playing with weapons-grade U-235.
Eventually some researcher inside realizes you could make a really powerful bomb, not just compact nuclear power sources, and the government would have to seize all the U-235/H100s.
Finishing the analogy, the government would probably wait to act until GE causes a nuclear fizzle (a low yield nuclear explosion) or some other clear evidence that the danger is real becomes available.
Replies from: Mitchell_Porter
↑ comment by Mitchell_Porter · 2023-11-20T22:59:05.344Z · LW(p) · GW(p)
if the US government decided that AI is a security issue
The US government considered AI a national security issue long before ChatGPT. But when it comes to American companies developing technologies of strategic interest, of course they'd prefer to work behind the scenes. If the powers deem that a company needs closer oversight, they would prefer the resulting changes to be viewed as purely a matter of corporate decision-making.
Whatever is actually going on with OpenAI, you can be sure that there are national security factions who care greatly about what happens to its technologies. The relevant decisions are not just business decisions.
Replies from: Seth Herd
↑ comment by Seth Herd · 2023-11-20T23:36:07.553Z · LW(p) · GW(p)
That makes sense. It's worth noting that the AI OpenAI leads in is very different from the type the military is most concerned with. So whoever in the government is concerned with this transition is less likely to be focused on the military implications, and more on the economic and social impacts.
It's also interesting, relative to government interest, that generative AI tools have thus far been deployed globally right away, so the relative impact on competitive status is negligible.
Thus, we might actually hope that to the extent the relevant government apparatus is composed of sensible individuals, its concerns with this type of AI might actually be similar to those of this community: economic disruption and the real if strange risk of disempowering humanity permanently.
Of course, hoping that the government is collectively sensible is highly questionable. But the security factions you're addressing might be the more pragmatic and sensible parts of government.
↑ comment by ErickBall · 2023-11-20T15:22:53.476Z · LW(p) · GW(p)
You are attributing a lot more deviousness and strategic boldness to the so-called deep state than the US government is organizationally capable of. The CIA may have tried a few things like this in banana republics but there's just no way anybody could pull it off domestically.
Replies from: TrevorWiesinger
↑ comment by trevor (TrevorWiesinger) · 2023-11-20T18:26:50.587Z · LW(p) · GW(p)
This is a good point, that much of the data we have comes from leaked operations in South America (e.g. the Church Hearings), and CIA operations are probably much easier there than on American soil.
However, there are also different kinds of systems pointed inward which look more like normal power games e.g. FBI informants, or lobbyists forming complex agreements/arrangements (like how their lawyer counterparts develop clever value-handshake-like agreements/arrangements to settle out-of-court). It shouldn't be surprising that domestic ops are more complicated and look like ordinary domestic power plays (possibly occasionally augmented by advanced technology).
The profit motive alone could motivate Microsoft execs to leverage their access to advanced technology [LW · GW] to get a better outcome for Microsoft. I was pretty surprised by the possibility that silicon valley VCs alone could potentially set up sophisticated operations e.g. using pre-established connections to journalists to leak false information or access to large tech companies with manipulation capabilities (e.g. Andreessen Horowitz's access to Facebook's manipulation research).
↑ comment by ryan_b · 2023-11-20T18:53:53.478Z · LW(p) · GW(p)
I don't understand the mechanism of the double-cross here. How would they get the pro-EA and safety side to trigger the crisis? And why would the safety/EA people, who are trying to make everything more predictable and controllable, be the ones who are purged from influence?
Replies from: TrevorWiesinger, Mitchell_Porter
↑ comment by trevor (TrevorWiesinger) · 2023-11-20T19:32:50.530Z · LW(p) · GW(p)
One explanation that comes to mind is that AI already offers extremely powerful manipulation capabilities [LW · GW] and governments are already racing to acquire these capabilities [LW · GW].
I'm very confused about the events that have been taking place, but one thing I have very little doubt about is that the NSA has acquired access to smartphone operating systems and smartphone microphones throughout the OpenAI building (it's just one building, and a really important one, so it's reasonably likely that it's also been bugged). Whether they were doing anything with that access is much less clear.
↑ comment by Mitchell_Porter · 2023-11-20T22:27:38.165Z · LW(p) · GW(p)
The EAs on the board have national security ties.
↑ comment by the gears to ascension (lahwran) · 2023-11-20T17:57:33.965Z · LW(p) · GW(p)
Seems unlikely they'd need to communicate at all. The stock market is plenty of pressure on Microsoft.
↑ comment by O O (o-o) · 2023-11-20T16:50:46.396Z · LW(p) · GW(p)
Seeing repeated theories about the "deep state" from supposed EA members is not a good look for EA, especially now.
The mundane explanation: a bunch of inexperienced people, some of whom probably should never have been on the board, severely miscalculated the consequences of their actions due to a combination of goodwill, ego, and greed.
Replies from: lahwran, Seth Herd
↑ comment by the gears to ascension (lahwran) · 2023-11-20T17:58:01.868Z · LW(p) · GW(p)
Nothing wrong with hypothesizing about it, as long as one doesn't over-update on it.
↑ comment by Seth Herd · 2023-11-20T17:23:48.556Z · LW(p) · GW(p)
This does seem vastly more likely. Why would "the deep state" be anti-EA or anti-AI safety? Or organizing complex shenanigans to pursue those values?
I never attribute to malice what is explainable by foolishness.
Replies from: TrevorWiesinger
↑ comment by trevor (TrevorWiesinger) · 2023-11-20T18:46:46.649Z · LW(p) · GW(p)
The US Natsec community (which is probably decentralized [LW · GW] and not a "deep state") has a very strong interest in accelerating AI faster than China and Russia [LW · GW] e.g. for use in military hardware like cruise missiles, economic growth in an era of technological stagnation, and for defending/counteracting/mitigating SOTA foreign influence operations e.g. Russian botnets that use AI and user data for targeted manipulation. Current-gen AI is pretty well known to be highly valuable for these uses.
This is what makes "the super dangerous people who already badly want AI" one major hypothesis, but not at all the default explanation. Considering who seems to be benefiting the most, Microsoft (which, AFAIK, probably has the strongest ties to the military of the big-five tech companies), the hypothesis is pretty clearly worth consideration.
Replies from: rhollerith_dot_com
↑ comment by RHollerith (rhollerith_dot_com) · 2023-11-21T15:57:43.753Z · LW(p) · GW(p)
The US NatSec community doesn't know that the US (and Britain) are, with probability .99, at least 8 years ahead of China and Russia in AI?