[Linkpost] TIME article: DeepMind’s CEO Helped Take AI Mainstream. Now He’s Urging Caution
post by Akash (akash-wasil) · 2023-01-21T16:51:09.586Z · LW · GW · 2 comments
This is a link post for https://time.com/6246119/demis-hassabis-deepmind-interview/?utm_campaign=Artificial%2BIntelligence%2BWeekly&utm_medium=web&utm_source=Artificial_Intelligence_Weekly_313
TIME recently published an article about DeepMind, including some quotes from CEO Demis Hassabis. Posting some relevant quotes below (bolding added by me).
You might also be interested in a recent interview with Sam Altman [LW · GW].
Hassabis on safety:
It is in this uncertain climate that Hassabis agrees to a rare interview, to issue a stark warning about his growing concerns. “I would advocate not moving fast and breaking things,” he says, referring to an old Facebook motto that encouraged engineers to release their technologies into the world first and fix any problems that arose later. The phrase has since become synonymous with disruption. That culture, subsequently emulated by a generation of startups, helped Facebook rocket to 3 billion users. But it also left the company entirely unprepared when disinformation, hate speech, and even incitement to genocide began appearing on its platform. Hassabis sees a similarly worrying trend developing with AI. He says AI is now “on the cusp” of being able to make tools that could be deeply damaging to human civilization, and urges his competitors to proceed with more caution than before. “When it comes to very powerful technologies—and obviously AI is going to be one of the most powerful ever—we need to be careful,” he says. “Not everybody is thinking about those things. It’s like experimentalists, many of whom don’t realize they’re holding dangerous material.” Worse still, Hassabis points out, we are the guinea pigs.
The choice to be acquired by Google (as opposed to Facebook):
The following year, Google acquired DeepMind for some $500 million. Hassabis turned down a bigger offer from Facebook. One reason, he says, was that, unlike Facebook, Google was “very happy to accept” DeepMind’s ethical red lines “as part of the acquisition.” (There were reports at the time that Google agreed to set up an independent ethics board to ensure these lines were not crossed.) The founders of the fledgling AI lab also reasoned that the megacorporation’s deep pockets would allow them access to talent and computing power that they otherwise couldn’t afford.
Differences between DeepMind and OpenAI:
While DeepMind, Google, and other AI labs had been working on similar research behind closed doors, OpenAI was more willing to let the public use its tools. In late 2022 it launched DALL·E 2, which can generate an image of almost any search term imaginable, and the chatbot ChatGPT.
Hassabis on the decision to release Chinchilla:
But despite Hassabis’s calls for the AI race to slow down, it appears DeepMind is not immune from the competitive pressures. In early 2022, the company published a blueprint for a faster engine. The piece of research, called Chinchilla, showed that many of the industry’s most cutting-edge models had been trained inefficiently, and explained how they could deliver more capability with the same level of computing power. Hassabis says DeepMind’s internal ethics board discussed whether releasing the research would be unethical given the risk that it could allow less scrupulous firms to release more powerful technologies without firm guardrails. One of the reasons they decided to publish it anyway was because “we weren’t the only people to know” about the phenomenon.
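For readers unfamiliar with the Chinchilla result, here is a minimal sketch of its rule of thumb (my illustration, not from the article or the paper's exact fitted curves): compute-optimal training uses roughly 20 tokens per model parameter, under the common approximation that training cost is about 6·N·D FLOPs for N parameters and D tokens.

```python
# Minimal sketch of the Chinchilla compute-optimal rule of thumb
# (illustrative, not the paper's exact fitted scaling laws):
# training compute C ~= 6 * N * D FLOPs, and loss is roughly
# minimized when D ~= 20 * N (about 20 tokens per parameter).

def chinchilla_optimal(compute_flops: float, tokens_per_param: float = 20.0):
    """Return (params, tokens) that roughly use a FLOP budget optimally."""
    # Substitute D = k * N into C = 6 * N * D  =>  N = sqrt(C / (6 * k))
    n_params = (compute_flops / (6.0 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

# Hypothetical example: a Gopher-scale budget (280B params x 300B tokens).
budget = 6 * 280e9 * 300e9  # ~5e23 FLOPs
n, d = chinchilla_optimal(budget)
print(f"~{n / 1e9:.0f}B params trained on ~{d / 1e12:.1f}T tokens")
# -> roughly 65B params on ~1.3T tokens, close to Chinchilla's actual
#    70B-parameter model trained on 1.4T tokens.
```

This arithmetic is what made the paper competitively sensitive: it tells any lab, including "less scrupulous firms," how to get more capability out of the same compute budget.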
Hassabis on concerns about information-sharing & freeloaders:
He says that DeepMind is also considering releasing its own chatbot, called Sparrow, for a “private beta” some time in 2023. (The delay is in order for DeepMind to work on reinforcement learning-based features that ChatGPT lacks, like citing its sources. “It’s right to be cautious on that front,” Hassabis says.) But he admits that the company may soon need to change its calculus. “We’re getting into an era where we have to start thinking about the freeloaders, or people who are reading but not contributing to that information base,” he says. “And that includes nation states as well.” He declines to name which states he means—“it’s pretty obvious, who you might think”—but he suggests that the AI industry’s culture of publishing its findings openly may soon need to end.
Relationship between DeepMind and Alphabet:
Hassabis wants the world to see DeepMind as a standard bearer of safe and ethical AI research, leading by example in a field full of others focused on speed. DeepMind has published “red lines” against unethical uses of its technology, including surveillance and weaponry. But neither DeepMind nor Alphabet has publicly shared what legal power DeepMind has to prevent its parent—a surveillance empire that has dabbled in Pentagon contracts—from pursuing those goals with the AI DeepMind builds. In 2021, Alphabet ended yearslong talks with DeepMind about the subsidiary’s setting up an independent legal structure that would prevent its AI being controlled by a single corporate entity, the Wall Street Journal reported. Hassabis doesn’t deny DeepMind made these attempts, but downplays any suggestion that he is concerned about the current structure being unsafe. When asked to confirm or deny whether the independent ethics board rumored to have been set up as part of the Google acquisition actually exists, he says he can’t, because it’s “all confidential.” But he adds that DeepMind’s ethics structure has “evolved” since the acquisition “into the structures that we have now.”
Ethics committee at DeepMind:
Hassabis says both DeepMind and Alphabet have committed to public ethical frameworks and build safety into their tools from the very beginning. DeepMind has its own internal ethics board, the Institutional Review Committee (IRC), with representatives from all areas of the company, chaired by its chief operating officer, Lila Ibrahim. The IRC meets regularly, Ibrahim says, and any disagreements are escalated to DeepMind’s executive leaders for a final decision. “We operate with a lot of freedom,” she says. “We have a separate review process: we have our own internal ethics review committee; we collaborate on best practices and learnings.” When asked what happens if DeepMind’s leadership team disagrees with Alphabet’s, or if its “red lines” are crossed, Ibrahim only says, “We haven’t had that issue yet.”
I'm grateful to Jeffrey Ladish for telling me about the article and _will_ for posting it to the EA Forum [EA · GW].
2 comments
Comments sorted by top scores.
comment by trevor (TrevorWiesinger) · 2023-01-21T22:31:16.142Z · LW(p) · GW(p)
I think it's worth noting that, if anyone is setting up a database of information about world modelling and AI policy (public or private), this post should probably be in that database.
Replies from: TrevorWiesinger
↑ comment by trevor (TrevorWiesinger) · 2023-01-21T23:28:13.266Z · LW(p) · GW(p)
Of course it's also important to note that these are interviews, and in general, most people in business see press interviews as a cheap way to ease public doubts without actually doing anything (i.e. talk is cheap). This is extremely common in Congress.