OpenAI-Microsoft partnership
post by Zach Stein-Perlman · 2023-10-03T20:01:44.795Z
OpenAI has a strong partnership with Microsoft. The details are opaque, as far as I know. It tentatively seems that OpenAI is required to share its models (and some other IP) with Microsoft until OpenAI attains "a highly autonomous system that outperforms humans at most economically valuable work."[1] This is concerning because AI systems could cause a catastrophe with capabilities below that threshold.
(OpenAI may substantially depend on Microsoft; in particular, Microsoft Azure is "OpenAI's exclusive cloud provider." Microsoft's power over OpenAI may make it harder for OpenAI to refuse to share dangerous systems with Microsoft. But mostly this seems moot if OpenAI is just straightforwardly required to share its models with Microsoft.)
If so, then (given that Microsoft is worse on safety than OpenAI, and assuming OpenAI is leading near the end) it mostly doesn't matter whether OpenAI does good alignment work between training and deployment and then deploys cautiously, because whether unsafe AI is deployed will be determined by Microsoft's decisions?
[Edit: I don't think Microsoft has full real-time access to OpenAI's models, given that they launched Bing Chat after OpenAI had RLHF'd GPT-4 but Bing Chat wasn't based on that version of GPT-4, as well as some other reporting. But it's very unclear what access Microsoft does have, or why OpenAI and Microsoft aren't transparent about this.]
(The OpenAI-Microsoft relationship seems like a big deal. Why haven't I heard more about this?)
Update: Microsoft and OpenAI have a joint Deployment Safety Board (https://blogs.microsoft.com/on-the-issues/2023/10/26/microsofts-ai-safety-policies/, https://openai.com/global-affairs/our-approach-to-frontier-risk, https://www.theverge.com/23733388/microsoft-kevin-scott-open-ai-chat-gpt-bing-github-word-excel-outlook-copilots-sydney). The details are opaque.
[1] OpenAI says:
by AGI we mean a highly autonomous system that outperforms humans at most economically valuable work. Such a system is excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology.
It's not clear whether OpenAI has to share everything besides AGI with Microsoft.
19 comments
Comments sorted by top scores.
comment by Sheikh Abdur Raheem Ali (sheikh-abdur-raheem-ali) · 2023-10-03T23:30:19.657Z
(MS Employee)
I share your concerns. On Thursday, I'm meeting with a Product Manager at Microsoft who works on the Azure OpenAI team. The agenda is to discuss AI safety. Unfortunately, I don't have bandwidth to collaborate officially, but let me know if you have specific questions/feedback. I have sent you my work email over a direct message.
comment by mic (michael-chen) · 2023-10-04T00:25:24.673Z
It's worth keeping in mind that before Microsoft launched the GPT-4 Bing chatbot that ended up threatening and gaslighting users, OpenAI advised against launching so early because the model didn't seem ready. Microsoft went ahead anyway, apparently in part out of resentment that OpenAI stole its "thunder" by releasing ChatGPT in November 2022. In principle, there's nothing stopping Microsoft from doing the same thing with future AI models: taking OpenAI's base model, fine-tuning it in a less robustly safe manner, and releasing it prematurely. Perhaps dangerous capability evaluations are not just about convincing OpenAI or Anthropic to adhere to higher safety standards and potentially pause, but also Microsoft.
↑ comment by Noosphere89 (sharmake-farah) · 2023-10-04T13:28:48.413Z
That's concerning to me, as it could imply that Microsoft won't apply alignment techniques, or will reverse them, out of resentment, endangering people solely out of spite.
This is not good at all, and that's saying something, since I'm usually the optimist and am quite optimistic about AI safety working out.
Now I worry that Microsoft will create a potentially dangerous/misaligned AI by reversing OpenAI's alignment techniques.
I'm happy that alignment and safety were restored before Bing Chat launched, but next time let's not reverse alignment techniques, so that we don't have to deal with more dangerous things later on.
↑ comment by mic (michael-chen) · 2023-10-05T00:17:25.672Z
To be clear, I don't think Microsoft deliberately reversed OpenAI's alignment techniques, but rather that Microsoft probably received the base model of GPT-4 and fine-tuned it separately from OpenAI.
Microsoft's post "Building the New Bing" says:
Last Summer, OpenAI shared their next generation GPT model with us, and it was game-changing. The new model was much more powerful than GPT-3.5, which powers ChatGPT, and a lot more capable to synthesize, summarize, chat and create. Seeing this new model inspired us to explore how to integrate the GPT capabilities into the Bing search product, so that we could provide more accurate and complete search results for any query including long, complex, natural queries.
This seems to correspond to when GPT-4 "finished training in August of 2022". OpenAI says it spent six months fine-tuning the model with human feedback before releasing it in March 2023. I would guess that Microsoft did its own fine-tuning of the August 2022 version of GPT-4, separately from OpenAI. Especially given Bing's tendency to repeat itself, it doesn't feel like a version of GPT-3.5/4 fine-tuned with OpenAI's RLHF, but rather more like a base model.
↑ comment by Noosphere89 (sharmake-farah) · 2023-10-05T03:04:40.519Z
That's good news, but I'm still not happy that Microsoft ignored OpenAI's warnings.
comment by Algon · 2023-10-03T21:01:56.513Z
I thought there was a cap on the value OpenAI had to provide to Microsoft, past which it could remove Microsoft's influence. Like, $100B-$1T. Is this not true?
↑ comment by Zach Stein-Perlman · 2023-10-03T21:11:21.561Z
My impression is:
- OpenAI pays most of its profits to Microsoft until it has repaid Microsoft's investment
- Then Microsoft still has capped-profit equity
- Microsoft has no control, but gets OpenAI's IP until AGI
But I'm not sure.
↑ comment by gwern · 2023-10-05T01:40:21.926Z
Then Microsoft still has capped-profit equity. Microsoft has no control, but gets OpenAI's IP until AGI.
Only until the capped-profit equity is canceled entirely, that being the point of the capped-profit part. So, even assuming that the IP sharing is baked into the equity rather than part of the repayment, it would only include any 'AGI' if OA made a relatively small amount of money on the way to AGI. Which, the way things are going, is plausible but certainly not guaranteed.
↑ comment by Zach Stein-Perlman · 2023-10-05T01:43:28.004Z
(Based on this, AGI is explicitly excluded from the IP sharing.)
↑ comment by Noosphere89 (sharmake-farah) · 2023-10-05T00:49:20.488Z
Microsoft has no control, but gets OpenAI's IP until AGI
That seems like the ability to keep the IP forever, since I see no reason to assume that the release condition will ever be satisfied by making more powerful AI.
↑ comment by Algon · 2023-10-03T21:21:28.166Z
I was going to say that's worse than what I thought was the case, but I'm not actually sure about that. Partly because my thoughts were fuzzy, and partly because if Microsoft had access to OpenAI's stuff until it was paid $100B, then that may give it ~the same level of AI access. Both seem bad.
↑ comment by Quinn (quinn-dougherty) · 2023-10-03T22:27:17.674Z
Microsoft has no control, but gets OpenAI's IP until AGI
This could be cashed out a few ways: does it mean "we can't make decisions about how to utilize core tech in downstream product offerings or verticals, but we take a hefty cut of anything OpenAI then ships"? If so, there may be multiple ways of interpreting it (e.g., Microsoft could accuse OpenAI of very costly negligence for neglecting a particular vertical, product idea, or application, or it couldn't).
↑ comment by Zach Stein-Perlman · 2023-10-03T23:56:16.839Z
I meant Microsoft doesn't get to tell OpenAI what models to make, but gets copies of whatever models it does make. See WSJ.
comment by Zach Stein-Perlman · 2024-06-27T21:45:18.353Z
Mustafa Suleyman [CEO of Microsoft AI] has been doing the unthinkable: looking under the hood at OpenAI’s crown jewels — its secret algorithms behind foundation models like GPT-4, people familiar with the matter said. . . .
Technically, Microsoft has access to OpenAI products that have launched, according to people familiar with the deal, but not its top secret research projects. But practically, Microsoft often sees OpenAI products long before they are publicly available, because the two companies work together closely to bring the products to market at scale. . . .
Microsoft has rights to OpenAI products, but only when they are officially launched as products, and can’t look at experimental research far from the commercialization phase, according to people at both companies.
Source. It's not clear what this means. Other reporting from this journalist on the frontier AI labs has sometimes been misleading and lacked context, but this is vague anyway.
comment by trevor (TrevorWiesinger) · 2023-10-05T00:03:34.631Z
Why haven't I heard more about this?
I have a ton of answers to this, but I'm not willing to talk about most of them except in person and not near any smartphones. Are you in the DC area?
comment by Julian Bradshaw · 2023-10-04T01:19:42.215Z
(The OpenAI-Microsoft relationship seems like a big deal. Why haven't I heard more about this?)
It is a big deal, but it's been widely reported on and discussed here for years, and particularly within the last year, given that Microsoft keeps releasing AI products based on OpenAI tech. Not sure why you haven't heard about it.
↑ comment by Zach Stein-Perlman · 2023-10-04T01:35:36.236Z
No, I've seen the existing discussion, but I'm suggesting that Microsoft's safety practices are as important as OpenAI's, which the general discourse doesn't reflect.
↑ comment by M. Y. Zuo · 2023-10-04T20:48:42.165Z
... but I'm suggesting that Microsoft's safety practices are as important as OpenAI's...
Who has claimed, or even implied, otherwise?
EDIT: I'm assuming at least one alternative viewpoint was encountered previously, so it doesn't seem like a big ask to spend 20 seconds writing down when that occurred, for the benefit of the passing reader.
Perhaps I misread the implication and Zach has literally never encountered a different viewpoint on this.
↑ comment by Zach Stein-Perlman · 2023-10-04T22:08:03.158Z
I don't have anything specific to point to; in my experience, AI safety folks pay lots more attention to OpenAI than Microsoft (because OpenAI is much better at making powerful models).