Sam Altman, Greg Brockman and others from OpenAI join Microsoft

post by Ozyrus · 2023-11-20T08:23:00.791Z · LW · GW · 15 comments

This is a link post for https://twitter.com/satyanadella/status/1726509045803336122


That's very interesting.
I think it's very good that the board stood their ground, and maybe a good thing that OpenAI can keep focusing on its charter and safe AI while commercialization stays with Microsoft.
People who don't care about alignment can leave for the fat paycheck, while committed ones stay at OpenAI.
What are your thoughts on the implications of this for alignment?

15 comments

Comments sorted by top scores.

comment by johnswentworth · 2023-11-20T17:15:00.107Z · LW(p) · GW(p)

Well that sounds like amazing news!

All the smart people trying to accelerate AI are going to go somewhere, and I have trouble thinking of any company that beats Microsoft's track record of having a research lab absolutely packed with brilliant researchers yet producing hardly any actual impact on anything. I guess there was Kinect? And probably some backend-y language/compiler/database research managed to be used internally at some point? But yeah, I sure do have an impression of Microsoft as the sort of lumbering big company where great research or tech is developed by one team and then never reaches anybody else.

Replies from: orthonormal, None, Thane Ruthenis, johnlawrenceaspden
comment by orthonormal · 2023-11-20T20:58:20.891Z · LW(p) · GW(p)

In addition to this, Microsoft will exert greater pressure to extract mundane commercial utility from models, compared to pushing forward the frontier. Not sure how much that compensates for the second round of evaporative cooling of the safety-minded.

comment by [deleted] · 2023-11-20T18:03:13.771Z · LW(p) · GW(p)

Microsoft practices "embrace, extend, extinguish" - the monopolistic copier - as its corporate philosophy. So you can expect them to reproduce a mediocre version of GPT-4 - probably complete with unreliable software and intrusive pro-Microsoft ads - and to monopolistically occupy the niche. Maybe. They are really good at niche defense, so they would keep making the model better.

Don't celebrate too early, though. Chaos benefits accelerationists, because of diversity of strategy: if multiple actors - governments, corporations, investors, startups - simply choose what to do randomly, there is a differential utility gain in favor of AI. More AI, stronger AI, uncensored and unrestricted AI - all of these give the actors who improve AI more investment, and so on, in a runaway utility gain. (This is the Fermi paradox argument as well: so long as alien species have a diversity of strategy and the tech base for interstellar travel, the expansionists will inevitably fill the stars with themselves.)

This is why one point of view holds that, since other actors are certain to have powerful AGI at their disposal as soon as the compute to find it is available, your best strategy is to be first, or at least not far behind.

In the age of sail, if everyone else was strapping cannons onto their boats, you had better be loading your warships with so many guns the ship barely floats. Asking for an international cannon ban wasn't going to work; the other signatories would claim to honor it and then, in the next major naval battle, open up their gun ports.

comment by Thane Ruthenis · 2023-11-20T19:19:12.376Z · LW(p) · GW(p)

the sort of lumbering big company where great research or tech is developed by one team and then never reaches anybody else

... except one of our primary threat models is accident risk where the tech itself explodes and the blast wave takes out the light cone. Paraphrasing, the sort of "great tech" that we're worrying about is precisely the tech that would be able to autonomously circumvent this sort of bureaucracy-based causal isolation. So in this one case, it matters comparatively little how bad Microsoft is at deploying its products, compared to how well it can assist their development.

I mean, I can buy that Microsoft is so dysfunctional that just being embedded into it would cripple OpenAI's ability to even do research, but it sounds like Sam Altman is pretty good at what he does. If it's possible to do productive work as part of MS at all, he'd probably manage to make his project do it.

comment by johnlawrenceaspden · 2023-11-20T17:49:42.759Z · LW(p) · GW(p)

Nicely done! I only come here for the humour these days.

comment by Robert_AIZI · 2023-11-20T12:57:04.818Z · LW(p) · GW(p)

I hope this doesn't lead to everyone sorting into capabilities (Microsoft) vs. safety (OpenAI). OpenAI's ownership structure was designed to preserve safety commitments against race dynamics, but Microsoft has no such obligations, a bad track record (Sydney), and now the biggest name in AI. Those dynamics could lead to talent, funding, and coverage going to capabilities unchecked by safety, which would increase my p(doom).

Two caveats:

  • We don't know what the Altman/Brockman "advanced AI research team" will actually be doing at Microsoft, or how much independence they'll have.
  • According to the new OpenAI CEO Emmett Shear, the split wasn't due to "any specific disagreement on safety", but I think that could be the end result.
Replies from: Charlie Steiner
comment by Charlie Steiner · 2023-11-20T13:34:25.228Z · LW(p) · GW(p)

Biggest name in AI

They hired Hinton? Wow.

Replies from: Robert_AIZI
comment by Robert_AIZI · 2023-11-20T14:09:55.709Z · LW(p) · GW(p)

I appreciate the joke, but I think that Sam Altman is pretty clearly "the biggest name in AI" as far as the public is concerned. His firing/hiring was the leading story in the New York Times for days in a row (and still is at time of writing)!

Replies from: Charlie Steiner
comment by Charlie Steiner · 2023-11-20T15:37:14.638Z · LW(p) · GW(p)

I mean, by that standard I'd say Elon Musk is the biggest name in AI. But yeah, jokes aside, I think bringing on Altman, even for a temporary period, is going to be quite useful for Microsoft in attracting talent and institutional knowledge from OpenAI, as well as in reassuring investors.

comment by lc · 2023-11-20T08:50:18.414Z · LW(p) · GW(p)

I think it's important to remind people that dramaposting about OpenAI leadership is still ultimately dramaposting. Make the update on OpenAI's nonprofit leadership structure having an effect, etc., and keep looking at the news about once a day until the events stop being eventful. While you're doing that, keep in mind that ultimately the laminated monkey hierarchy is not what's important about OpenAI or any of these other firms, at least terminally.

Replies from: mikkel-wilson, Seth Herd
comment by MikkW (mikkel-wilson) · 2023-11-20T11:19:37.360Z · LW(p) · GW(p)

This is important news. I personally desire to be kept updated on this, and LW is a convenient (and appropriate) place to get this information. And I expect other users feel similarly.

What's different between this and, e.g., the developments with Nonlinear is that the developments here will have a big impact on how the AI field (and, by one layer of indirection, the fate of the world) develops.

Replies from: lc
comment by lc · 2023-11-20T12:51:10.209Z · LW(p) · GW(p)

This is important news. I personally desire to be kept updated on this, and LW is a convenient (and appropriate) place to get this information. And I expect other users feel similarly.

I don't disagree! Even if you're not directly involved in the goings-on, it's probably still important to tune in once a day or so.

comment by Seth Herd · 2023-11-20T13:40:08.416Z · LW(p) · GW(p)

Ummm, the laminated monkey hierarchy is going to determine exactly who launches the first AGI, and therefore who makes the most important call in humanity's history.

If we provide them with a solid alignment solution, that makes their choice easier, but it's still going to be some particular person's call.

comment by ThirdSequence · 2023-11-20T13:01:07.880Z · LW(p) · GW(p)

Based on the sentiment expressed by OpenAI employees on Twitter, the ones who are (potentially) leaving are not doing so because of a disagreement with the AI safety approach, but rather because of how the entire situation was handled by the board (e.g. the lack of reasons provided for firing Sam Altman).

If this move was made for the sake of AI safety, wouldn't the board risk disgruntling employees who would otherwise be aligned with OpenAI's original mission?

Can anybody here think of potential reasons why the board has not disclosed further details about their decision?

comment by Shmi (shminux) · 2023-11-20T08:57:55.901Z · LW(p) · GW(p)

Just a reminder that this site is not a 24-hour news network, or at least wasn't until recently.