A concerning observation from media coverage of AI industry dynamics
post by Justin Olive · 2023-03-05T21:38:18.473Z · LW · GW · 3 comments
tl;dr: there are indications that ML engineers will migrate to environments with less AI governance in place, which has implications for the tech industry and for global AI governance efforts.
=========================
I just wanted to raise something to the community's attention about how AI companies are being covered in the media. The source is 'The Information', a tech-business-focused online news outlet: https://www.theinformation.com/. I'll also note that their articles are (to my knowledge) all behind a paywall.
The first article in question is titled "Alphabet Needs to Replace Sundar Pichai".
It outlines how Alphabet's stock has stagnated in 2023 compared to other tech stocks such as Meta's.
Here's their mention of Google's actions throughout GPT-mania:
"The other side of this equation is the performance of Alphabet management. Most recently, the company’s bungling of its AI efforts—allowing Microsoft to get the jump on rolling out an AI-powered search engine—was the latest sign of how Alphabet’s lumbering management style is holding it back. (Symbolically, as The Information reported, Microsoft was helped by former Google AI employees!)."
This brings us to the second article: "OpenAI’s Hidden Weapon: Ex-Google Engineers"
"As OpenAI’s web chatbot became a global sensation in recent months, artificial intelligence practitioners and investors have wondered how a seven-year-old startup beat Google to the punch.
...
After it hoovered up much of the world’s machine-learning talent, Google is now playing catch-up in launching AI-centric products to the public. On the one hand, Google’s approach was deliberate, reflecting the company’s enormous reach and high stakes in case something went wrong with the nascent technology. It also costs more to deliver humanlike answers from a chatbot than it does classic search results. On the other hand, startups including OpenAI have taken some of the AI research advances Google incubated and, unlike Google, have turned them into new types of revenue-generating services, including chatbots and systems that generate images and videos based on text prompts. They’re also grabbing some of Google’s prized talent.
Two people who recently worked at Google Brain said some staff felt the unit’s culture had become lethargic, with product initiatives marked by excess caution and layers of red tape. That has prompted some employees to seek opportunities elsewhere, including OpenAI, they said."
Although there are many concerning themes here, I think the key point is in this last paragraph.
I've heard speculation in the EA / tech community that AI will trend towards alignment & safety because technology companies will be risk-averse enough to build alignment into their practices.
I think the articles show that this dynamic is playing out to some degree - Google, at least, seems to be taking a more risk-averse approach to deploying AI systems.
The concerning observation is that there has been a two-pronged backlash against Google's 'conservative' approach. Not only is the stock market punishing Google for 'lagging' behind the competition (despite its equal or better capability to deploy similar systems); according to this article, elite machine-learning talent is also pushing back against this approach.
To me this is doubly concerning. The 'excess caution and layers of red tape' mentioned in the article are potentially examples of measures that AI safety proponents would deem to be useful and necessary. Regardless, it appears that the engineers themselves are willing to jump ship in order to circumvent these safety measures.
Although further evidence would be valuable, it seems that a trend might be unfolding whereby firms are not only punished by financial markets but also forced to weigh up the risk of being unable to retain ML engineers who would rather work for firms with fewer AI governance measures.
From my limited understanding of industrial economics, this dynamic makes sense; I recall reading in Porter's 'Competitive Advantage' that lower-ranked firms are more likely to take actions that damage an overall industry in order to advance their own position in the short term. In this instance, it means Microsoft is pushing the rate of AI deployment in ways that Google considers risky.
Overall, this trend seems to provide another counter-argument to the hypothesis that market incentives will provide sufficient levels of alignment. There are also concerning implications for AI governance in the global AI ecosystem: in the case that some nations are able to implement effective AI governance policies, will this simply cause a migration of AI talent towards lower-governance zones?
I'd enjoy hearing what other thinking and research has been done on this topic, as it appears to add a new dimension to the already tremendously complex issue of AI safety.
// Note: this was cross-posted from the EA forum after a recommendation to do so
3 comments
comment by Dave Orr (dave-orr) · 2023-03-05T23:31:28.437Z · LW(p) · GW(p)
There's a subtlety here around the term risk.
Google has been, IMO, very unwilling to take product risk, or risk a PR backlash of the type that Blenderbot or Sydney have gotten. Google has also been very nervous about perceived and actual bias in deployed models.
When people talk about red tape, it's not the kind of red tape that might be useful for AGI alignment, it's instead the kind aimed at minimizing product risks. And when Google says they are willing to take on more risk, here they mean product and reputational risk.
Maybe the same processes that would help with product risk would also help with AGI alignment risk, but frankly I'm skeptical. I think the problems are different enough that they need a different kind of thinking.
I think Google is better on the big risks than others, at least potentially, since they have some practice at understanding nonobvious secondary effects as applied to search or YouTube ranking.
Note that I'm at Google, but opinions here are mine, not Google's.
↑ comment by Justin Olive · 2023-03-06T00:30:56.043Z · LW(p) · GW(p)
Hi Dave, thanks for the great input from the insider perspective.
Do you have any thoughts on whether risk-aversion (either product-related or misalignment-risk) might be contributing to a migration of talent towards lower-governance zones?
If so, are there any effective ways to combat this that don't translate to accepting higher levels of risk?
↑ comment by Dave Orr (dave-orr) · 2023-03-06T21:57:16.973Z · LW(p) · GW(p)
For sure, product risk aversion leads people who don't want pure research roles to move to where they can have some impact. I think this is basically fine -- I don't think that product risk is all that concerning, at least for now.
Misalignment risk would be a different story but I'm not aware of cases where people moved because of it. (I might not have heard, of course.)