The public supports regulating AI for safety
post by Zach Stein-Perlman · 2023-02-17T04:10:03.307Z · LW · GW · 9 comments
A high-quality survey of the American public on AI, Artificial Intelligence Use Prompts Concerns, was released yesterday by Monmouth. Some notable results:
- 9% say AI would do more good than harm vs. 41% more harm than good (similar to responses to the same question in 2015)
- 55% say AI could eventually pose an existential threat (up from 44% in 2015)
- 55% favor “having a federal agency regulate the use of artificial intelligence similar to how the FDA regulates the approval of drugs and medical devices”
- 60% say they have “heard about A.I. products – such as ChatGPT – that can have conversations with you and write entire essays based on just a few prompts from humans”
These worries about safety and support for regulation echo other surveys:
- 71% of Americans agree that there should be national regulations on AI (Morning Consult 2017)
- The public is concerned about some AI policy issues, especially privacy, surveillance, and cyberattacks (GovAI 2019)
- The public is concerned about various negative consequences of AI, including loss of privacy, misuse, and loss of jobs (Stevens / Morning Consult 2021)
Surveys match the anecdotal evidence from talking to Uber drivers: Americans are worried about AI safety and would support regulation on AI. Perhaps there is an opportunity to improve the public’s beliefs, attitudes, and memes and frames for making sense of AI; perhaps better public opinion would enable better policy responses to AI, or better actions from AI labs and researchers.
Public desire for safety and regulation is far from sufficient for a good government response to AI. But it does mean that the main challenge for improving government response is helping relevant actors believe what’s true, developing good affordances [LW(p) · GW(p)] for them, and helping them take good actions, not making people care enough about AI to act at all.
Comments
comment by Noosphere89 (sharmake-farah) · 2023-02-17T17:31:52.169Z · LW(p) · GW(p)
This is good news, with caveats, and very much a good thing to keep in mind.
One major caveat is that the support for AI safety should be interpreted as a maximum, rather than a minimum or average, because once specific policies are actually being debated, support starts to waver. So there's been basically no adversarial stress test of this support.
comment by postjawline · 2023-02-17T16:30:10.834Z · LW(p) · GW(p)
Sure, this is a useful poll, but I'm not so sure that the public understands AI.
> Perhaps there is an opportunity to improve the public’s beliefs, attitudes, and memes and frames for making sense of AI.
Yes, I strongly agree, because I think someone should focus their efforts on providing simple, easy-to-understand explanations of how AI works, in collaboration with key players, so the public comes to a decent understanding. Not to be elitist, but I don't think the public's opinion is a useful metric by which policy regarding AI should be made. I think it could potentially harm efforts moving forward in multiple areas.
comment by Aiyen · 2023-02-19T04:13:37.945Z · LW(p) · GW(p)
Regulation in most other areas has been counterproductive. In AI, it will likely be even more so: there's at least some understanding of e.g. medicine by both the public and our rulers, but most people have no idea about the details of alignment.
This could easily backfire in countless ways. It could drive researchers out of the field, it could mandate "alignment" procedures that don't actually help and get in the way of finding procedures that do, it could create requirements for AIs to say what is socially desirable instead of what is true (ChatGPT is already notorious for this), making it harder to tell how the AI is functioning...
It is socially desirable to call for regulation as a solution for almost any problem you care to name, but it is practically useful far more rarely. This is AI alignment. This is potentially the future of humanity at stake, and all human values. If we cannot speak the truth here, when will we ever speak it?
There are, of course, potentially reasonable counterarguments. Someone might believe that AI capabilities are more fragile than AI alignment, for instance, such that regulation would tend to slow capabilities without greatly hampering alignment, and the time bought would give us a better chance of a good outcome. Perhaps. But please consider, are you calling for regulation because it actually makes sense, or because it's the Approved Answer to problems?
Please don't make this worse.
↑ comment by Zach Stein-Perlman · 2023-02-19T06:47:20.803Z · LW(p) · GW(p)
> But please consider, are you calling for regulation because it actually makes sense, or because it's the Approved Answer to problems?
I didn't call for regulation.
Some possible regulations would be good and some would be bad.
I do endorse trying to nudge regulation to be better than the default.
↑ comment by Aiyen · 2023-02-19T17:00:48.336Z · LW(p) · GW(p)
How do you propose nudging regulation to be better without nudging for more regulation?
↑ comment by Zach Stein-Perlman · 2023-02-19T19:28:47.462Z · LW(p) · GW(p)
Combating bad regulation would be the obvious way.
In seriousness, I haven’t focused on interventions to improve regulation yet; I just noticed a thing about public opinion and wrote it. (And again, some possible regulations would be good.)
↑ comment by Aiyen · 2023-02-20T22:11:23.814Z · LW(p) · GW(p)
Combating bad regulation isn’t a solution, but a description of a property you’d want a solution to have.
Or more specifically, while you could perhaps lobby against particular destructive policies, this article is pushing for “helping [government actors] take good actions”, but given the track record of government actions, it would make far more sense to help them take no action. Pushing for political action without a plan to steer that action in a positive direction is much like pushing for AI capabilities without a plan for alignment… which we both agree is insanely dangerous.
The state is not aligned. That should be crystal clear from the medical and economic regulations that already exist. And bringing a powerful Unfriendly agent into mankind’s efforts to create a Friendly one is more likely to backfire than to help.
comment by Ebenezer Dukakis (valley9) · 2023-02-18T03:07:44.690Z · LW(p) · GW(p)
How about regulating the purchase/rental of GPUs and especially TPUs?
For companies that already have GPU clusters, maybe we need data center regulation? Something like: the code only gets run in the data center if a statement regarding its safety has been digitally signed by at least N government-certified security researchers.
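As a minimal sketch of what that gate could look like, assuming Ed25519 keys and Python's cryptography library (the function name, statement format, and threshold here are purely illustrative, not a real proposal):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def enough_signoffs(statement: bytes,
                    signatures: list[tuple[Ed25519PublicKey, bytes]],
                    n_required: int) -> bool:
    """True iff at least n_required distinct certified keys have
    validly signed the safety statement for this code."""
    seen: set[bytes] = set()
    for pubkey, sig in signatures:
        raw = pubkey.public_bytes(
            serialization.Encoding.Raw, serialization.PublicFormat.Raw
        )
        if raw in seen:  # don't double-count one researcher signing twice
            continue
        try:
            pubkey.verify(sig, statement)  # raises InvalidSignature if bad
        except InvalidSignature:
            continue
        seen.add(raw)
        if len(seen) >= n_required:
            return True
    return False
```

The cryptography is the easy part, of course; the data center would also need some trusted registry tying each public key to a government-certified researcher, and that's where most of the actual difficulty lives.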
↑ comment by Ebenezer Dukakis (valley9) · 2023-02-18T03:14:53.150Z · LW(p) · GW(p)
I wouldn't be opposed to nationalizing data centers, if that's what's needed to accomplish this.