Request to AGI organizations: Share your views on pausing AI progress
post by Akash (akash-wasil), simeon_c (WayZ) · 2023-04-11T17:30:46.707Z · LW · GW · 11 comments
A few observations from the last few weeks:
- On March 22, FLI published an open letter calling for a six-month moratorium on frontier AI progress.
- On March 29, Eliezer Yudkowsky published a piece in TIME calling for an indefinite moratorium.
- To our knowledge, none of the top AI organizations (OpenAI, DeepMind, Anthropic) have released a statement responding to these pieces.
We offer a request to AGI organizations: Determine what you think about these requests for an AI pause (possibly with uncertainties acknowledged), write up your beliefs in some form, and publicly announce your position.
We believe statements from labs could improve discourse, coordination, and transparency on this important and timely topic.
Discourse: We believe labs are well-positioned to contribute to dialogue around whether (or how) to slow AI progress, making it more likely for society to reach true and useful positions.
Coordination: Statements from labs could make coordination more likely. For example, lab A could say “we would support a pause under X conditions with Y implementation details”. Alternatively, lab B could say “we would be willing to pause if lab C agreed to Z conditions.”
Transparency: Transparency helps others build accurate models of labs, their trustworthiness, and their future actions. This is especially important for labs that seek to receive support from specific communities, policymakers, or the general public. You have an opportunity to show the world how you reason about one of the most important safety-relevant topics.
We would be especially excited about statements that are written or endorsed by lab leadership. We would also be excited to see labs encourage employees to share their (personal) views on the requests for moratoriums.
Sometimes, silence is the best strategy. There may be attempts at coordination that are less likely to succeed if people transparently share their worldviews. If this is the case, we request that AI organizations make this clear (example: "We have decided to avoid issuing public statements about X for now, as we work on Y. We hope to provide an update within Z weeks.")
At the time of this post, the FLI letter has been signed by 7 DeepMind research scientists/engineers, probably 0 OpenAI research scientists [LW(p) · GW(p)] and 0 Anthropic employees.
See also:
- Let's think about slowing down AI [LW · GW]
- A challenge for AGI organizations, and a challenge for readers [LW · GW]
- Six dimensions of operational adequacy in AGI projects [LW · GW]
11 comments
Comments sorted by top scores.
comment by TW123 (ThomasWoodside) · 2023-04-12T01:10:51.932Z · LW(p) · GW(p)
At the time of this post, the FLI letter has been signed by 1 OpenAI research scientist, 7 DeepMind research scientists/engineers, and 0 Anthropic employees.
"1 OpenAI research scientist" felt weird to me on priors. 0 makes sense, if the company gave some guidance (e.g. legal) to not sign, or if the unanimous opinion was that it's a bad idea to sign. 7 makes sense too -- it's about what I'd expect from DeepMind and shows that there's a small contingent of people really worried about risk. Exactly 1 is really weird -- there are definitely multiple risk conscious people at OpenAI, but exactly one of them decided to sign?
I see a "Yonas Kassa" listed as an OpenAI research scientist, but it's very unclear who this person is. I don't see any LinkedIn or Google Scholar profile with this name associated with OpenAI. I know many of the signatures were previously found to be inaccurate, so I wonder if this one is, too?
Anyway, my guess is that actually zero OpenAI researchers signed, and that both OpenAI and Anthropic employees have decided (as a collective? because of a top-down directive? for legal reasons? I have no idea) not to sign.
Replies from: Evan R. Murphy
↑ comment by Evan R. Murphy · 2023-04-14T01:37:48.897Z · LW(p) · GW(p)
There are actually 3 signatories now claiming to work for OpenAI.
comment by WilliamKiely · 2023-04-11T18:41:48.891Z · LW(p) · GW(p)
Demis Hassabis answered the question "Do you think DeepMind has a responsibility to hit pause at any point?" in 2022:
https://www.lesswrong.com/posts/vEJAFpatEq4Fa2smp/hooray-for-stepping-out-of-the-limelight?commentId=x8DZswktu3WtfyzFR [LW(p) · GW(p)]
comment by Raemon · 2023-04-12T13:12:59.038Z · LW(p) · GW(p)
I actually don't know that I think this is helpful to push for now.
I do wish a "good version" of this would happen soon, but I think the version you'd be likely to get is one shaped by weird reputational concerns, where they don't want to be seen by their investors as failing to race ahead to make progress as fast as possible (since their investors don't understand the degree of danger involved).
(There's also the fact that they're labs pursuing AI in the first place, which means that (in my opinion) leadership would probably just have takes on pausing that I think don't make sense.)
And then, once having written a public statement on it, they'd be more likely to stick to that public statement, even if nonsensical.
I do generally wish more orgs would speak more freely (even when I disagree with them), and I separately wish something about their strategic thinking process was different (though I'm not sure exactly what their thought process is at the moment so not sure how I wish it were different). But both of those things seem like causal-nodes further up a chain than "whether they engage publicly on this particular issue."
Replies from: Raemon
↑ comment by Raemon · 2023-04-12T13:26:43.804Z · LW(p) · GW(p)
The related thing that I do wish orgs would issue statements on is "what are the circumstances in which it would make sense to pause unilaterally, even though all the race conditions still apply, because your work has gotten too dangerous?" I.e., even if you think it's actually relatively safe to continue research and deployment now, if you're taking x-risk seriously as a concern, there should be some point at which an AGI model would be unsafe to deploy to the public, and a point at which it's unsafe even to be running new training runs.
Each org should have some model of when that point likely is, and I think even with my cynical-political-world-goggles on, it should be to their benefit to say that publicly.
comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-04-11T19:06:28.738Z · LW(p) · GW(p)
I signed the letter because I think that such things are a useful way of moving the Overton window. In this case, I want the government and the public to start thinking about whether and how to regulate AI development. I might not have signed the letter if I thought that it would actually result in a pause, since I don't think that would be the wisest strategic move at this point. I explain why here: https://www.lesswrong.com/posts/GxzEnkSFL5DnQEAsZ/paulfchristiano-s-shortform?commentId=hEQL7rzDedGWhFQye [LW(p) · GW(p)]
comment by kdbscott · 2023-04-11T18:28:48.719Z · LW(p) · GW(p)
I think it makes sense that the orgs haven't commented, as it would possibly run afoul of antitrust laws.
See for example when some fashion clothing companies talked about trying to slow down fashion cycles to produce less waste / carbon emissions, which led to antitrust regulators raiding their headquarters.
Replies from: JamesPayor
↑ comment by James Payor (JamesPayor) · 2023-04-11T19:46:24.885Z · LW(p) · GW(p)
Huh, does this apply to employees too? (a la "these are my views and do not represent those of my employer")
comment by ChristianKl · 2023-04-12T18:06:41.360Z · LW(p) · GW(p)
On March 22, FLI published an open letter calling for a six-month moratorium on frontier AI progress.
I think it's a mistake to claim that, given that the call is not for a moratorium on "frontier AI progress" overall but only on a subset of progress.
The call is for "Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium."
It explicitly says "AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal", not that all AI research and development should stop.
comment by WilliamKiely · 2023-04-11T18:46:00.684Z · LW(p) · GW(p)
I strongly agree with this request.
If companies don't want to be the first to issue such a statement then I suggest they coordinate and share draft statements with each other privately before publishing simultaneously.
comment by Jan Kulveit (jan-kulveit) · 2023-04-12T08:48:47.275Z · LW(p) · GW(p)
I think silence is a clearly sensible strategy for obvious reasons.