NYT: Google will “recalibrate” the risk of releasing AI due to competition with OpenAI
post by Michael Huang · 2023-01-22T08:38:46.886Z · LW · GW · 2 comments
This is a link post for https://www.nytimes.com/2023/01/20/technology/google-chatgpt-artificial-intelligence.html
Cross-posted from the EA Forum [EA · GW]
The New York Times: Sundar Pichai, CEO of Alphabet and Google, is trying to speed up the release of AI technology by taking on more risk.
Mr. Pichai has tried to accelerate product approval reviews, according to the presentation reviewed by The Times.
The company established a fast-track review process called the “Green Lane” initiative, pushing groups of employees who try to ensure that technology is fair and ethical to more quickly approve its upcoming A.I. technology.
The company will also find ways for teams developing A.I. to conduct their own reviews, and it will “recalibrate” the level of risk it is willing to take when releasing the technology, according to the presentation.
This change is a response to OpenAI's public release of ChatGPT, and it is evidence that the race between Google/DeepMind and Microsoft/OpenAI is eroding ethics and safety standards.
Demis Hassabis, CEO of DeepMind, urged caution in his recent interview in Time:
He says AI is now “on the cusp” of being able to make tools that could be deeply damaging to human civilization, and urges his competitors to proceed with more caution than before.
“When it comes to very powerful technologies—and obviously AI is going to be one of the most powerful ever—we need to be careful,” he says.
“Not everybody is thinking about those things. It’s like experimentalists, many of whom don’t realize they’re holding dangerous material.”
Worse still, Hassabis points out, we are the guinea pigs.
Alphabet/Google is trying to accelerate a technology that its own subsidiary says is powerful and dangerous.
2 comments
comment by Julian Bradshaw · 2023-01-23T04:32:05.453Z · LW(p) · GW(p)
Just quoting from the NYT article:
The consequences of Google’s more streamlined approach are not yet clear. Its technology has lagged OpenAI’s self-reported metrics when it comes to identifying content that is hateful, toxic, sexual or violent, according to an analysis that Google compiled. In each category, OpenAI bested Google tools, which also fell short of human accuracy in assessing content.
Google listed copyright, privacy and antitrust as the primary risks of the technology in the slide presentation. It said that actions, such as filtering answers to weed out copyrighted material and stopping A.I. from sharing personally identifiable information, are needed to reduce those risks.
The way I'm reading this, Google is behind on RLHF and worried about getting blasted by EU fines. Honestly, those aren't humanity-dooming concerns, and it's not a huge deal if they brush them off. However, you're right that this is exactly the race dynamic AI safety has warned about for years. It would be good if the labs could agree on exactly what requirements have to be met before we cross the "actually dangerous, do not rush past" line. Something like OpenAI's Charter:
We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”
Maybe there ought to be a push for a multilateral agreement of this sort sooner rather than later? Would be good to do before trust starts breaking down.
comment by Evan R. Murphy · 2023-01-22T17:56:43.841Z · LW(p) · GW(p)
It's somewhat surprising to me the way this is shaking out. I would expect DeepMind's and OpenAI's AGI research to be competing with one another*. But here it looks like Google is the engine of competition, motivated less by any future-focused ideas about AGI than by the fact that its core search/ad business model appears to be threatened by OpenAI's AGI research.
*And hopefully cooperating with one another too.
(Cross-posted this comment from the EA Forum)