New US Senate Bill on X-Risk Mitigation [Linkpost]
post by Evan R. Murphy · 2022-07-04T01:25:57.108Z · LW · GW · 12 comments
This is a link post for https://www.hsgac.senate.gov/media/majority-media/peters-introduces-bipartisan-bill-to-ensure-federal-government-is-prepared-for-catastrophic-risks-
Two US Senators have introduced a bipartisan bill specifically focused on x-risk mitigation, including from AI. From the post on Senate.gov (bold mine):
WASHINGTON, D.C. – U.S. Senator Gary Peters (MI), Chairman of the Homeland Security and Governmental Affairs Committee, introduced a bipartisan bill to ensure our nation is better prepared for high-consequence events, regardless of the low probability, such as new strains of disease, biotechnology accidents, or naturally occurring risks such as super volcanoes or solar flares that, though unlikely, would be exceptionally lethal if they occurred.
“Making sure our country is able to function during catastrophic events will improve national security, and help make sure people in Michigan and across the country who are affected by these incidents get the help they need from the federal government,” said Senator Peters. “Though these threats may be unlikely, they are also hard to foresee, and this bipartisan bill will help ensure our nation is prepared to address cataclysmic incidents before it’s too late.”
[The legislation] will establish an interagency committee for risk assessment that would report on the adequacy of continuity of operations (COOP) and continuity of government (COG) plans for the risks identified. The bipartisan legislation would also help counter the risk of artificial intelligence (AI), and other emerging technologies from being abused in ways that may pose a catastrophic risk.
[...]
It's interesting the term 'abused' was used with respect to AI. It makes me wonder if the authors have misalignment risks in mind at all or only misuse risks.
I haven't been able to locate the text of the bill yet. If someone finds it, please share in the comments.
Cross-posted to EA Forum [EA · GW]. Credit to Jacques Thibodeau for posting a link on Slack that made me aware of this.
12 comments
comment by Davidmanheim · 2022-07-04T07:53:22.153Z · LW(p) · GW(p)
The text is now available, here: https://www.congress.gov/bill/117th-congress/senate-bill/4488
comment by Catherine Low (catherine-low) · 2022-07-05T03:00:28.699Z · LW(p) · GW(p)
This bill does seem very important. It is hard to know what will help or hinder the political process, so I recommend that folks in the EA and LW communities don't make a public, coordinated effort to influence the content or outcome of this proposed bill - at least for now.
My understanding is that the people involved in drafting this bill are aware of the EA and LW community, so they know they can reach out when and if they think that would be helpful.
↑ comment by Jeff Rose · 2022-07-05T05:14:57.811Z · LW(p) · GW(p)
In general, expressions of support for the bill will (modestly) help its passage. So, if you think this is a good bill and you live in the United States (1) call or write your Senators and urge them to support it; and (2) call or write your member of Congress, mention the Senate bill and urge them to introduce such a bill in the House.
If you think there are issues with the bill, do the same thing but note those issues, and also write or call Senator Peters' Washington Office with the same message.
Replies from: Davidmanheim
↑ comment by Davidmanheim · 2022-07-10T09:04:20.420Z · LW(p) · GW(p)
Please don't do this. The bill is unlikely to pass on its own, but could be included in other bills as long as it's not a hot-button topic, and bringing attention to it is almost certainly counterproductive.
comment by GeneSmith · 2022-07-04T04:03:40.890Z · LW(p) · GW(p)
How well have these types of inter-agency committees tended to work in the past? Is this a good way to actually get things done or does it just add more bureaucracy?
Replies from: Evan R. Murphy
↑ comment by Evan R. Murphy · 2022-07-04T18:19:35.973Z · LW(p) · GW(p)
Good question. I'm not sure about these types of committees in particular, but:
One reason this might not go terribly is that unlike many issues government deals with, there probably isn't a mess of competing interests they'll have to cater to in this case. Voters don't feel strongly about obscure catastrophic risks, and I can't think of any powerful companies that would be investing in lobbying around this (they mostly care about short-term issues).
So if the senators care about this issue and have good guidance on it, they will be relatively unencumbered to follow their experts' advice. They won't have to, e.g., contort their plans to sound good to their constituents and then hollow them out to please their campaign donors.
comment by Shiroe · 2022-07-04T05:34:26.361Z · LW(p) · GW(p)
It's interesting the term 'abused' was used with respect to AI. It makes me wonder if the authors have misalignment risks in mind at all or only misuse risks.
I would be very surprised if they had anything like the Yudkowskian paradigm in mind when they were thinking of this.
Replies from: RobbBB
↑ comment by Rob Bensinger (RobbBB) · 2022-07-05T04:49:31.988Z · LW(p) · GW(p)
Why? ~All the other gov stuff I'm aware of that talks about "GCR" or that talks about AI in the context of "high-consequence [catastrophic] events, regardless of the low probability" cites Bostrom, MIRI, Ord, or Stuart Russell.
Replies from: RobbBB, Shiroe
↑ comment by Rob Bensinger (RobbBB) · 2022-07-05T04:54:45.121Z · LW(p) · GW(p)
(But I agree they're likely to have views closer to Superintelligence, Human Compatible, or The Precipice, rather than AGI Ruin. I just think of those views as pretty close to the Yudkowskian paradigm -- eg, Bostrom is big on paperclippers and foom.)
↑ comment by Shiroe · 2022-07-05T22:06:13.724Z · LW(p) · GW(p)
Bostrom and MIRI being cited is pretty cool. I would have thought they'd be outside the Overton window. EDIT: Do you know when the earliest citations occurred?
Replies from: RobbBB
↑ comment by Rob Bensinger (RobbBB) · 2022-07-06T01:07:03.067Z · LW(p) · GW(p)
E.g., Preparing for the Future of Artificial Intelligence and Wired in 2016.
comment by Daniel_Eth · 2022-07-06T02:10:43.201Z · LW(p) · GW(p)
It's interesting the term 'abused' was used with respect to AI. It makes me wonder if the authors have misalignment risks in mind at all or only misuse risks.
A separate press release says, "It is important that the federal government prepare for unlikely, yet catastrophic events like AI systems gone awry" (emphasis added), so my sense is they have misalignment risks in mind.