I asked my senator to slow AI
post by Omid · 2023-04-06T18:18:08.272Z · LW · GW
All my favorite Twitter accounts agree that AI capabilities are advancing too fast.
One way to slow down AI progress is to ask politicians for help. Politicians are generally bad at solving existential engineering problems. But politicians have an almost superhuman talent for obstructing economic growth, especially when they can justify it with a crisis. Just ask the nuclear power industry. I don't think politicians can solve the alignment problem, but I do think they can obstruct the profitability of AI capabilities research through methods like:
- Raising taxes on big tech
- Passing a GDPR-esque law that complicates AI research and requires lawyers to be consulted
- Summoning tech leaders to public hearings that embarrass them and waste time
There are reasons this might be a bad idea. The biggest is that we may break a fragile alliance between the AI capabilities industry and the AI notkilleveryoneism community. However, it seems to me that both the capabilities industry and the notkilleveryoneism community are already defecting from that alliance. And there may be new opportunities for alliances. An established AI company might welcome a regulatory regime that increased its expenses, as long as that regime also made it expensive for new competitors to enter the AI game.
To that end, I called my senator and asked him to make AI less profitable. Here's the sweaty, nasal video.
And here's the transcript:
Hi, my name is Gwen...[last name and where I live].
So, I am just a little concerned with how good artificial intelligence is getting. Um, you might have heard of ChatGPT, maybe Bing Chat, or Google has one called Bard. And all of these artificial intelligence models are incredibly powerful. Um, they can generate pornography, they can design biological weapons, they can talk, they can tell lies, they can manipulate and I just think it's getting a little bit out of hand. They can even write their own software now. And I just think that we need to slow down, um, and however you--Congress wants to do that whether that's with new regulations, new taxes, maybe some new lawsuits…whatever Congress can do to make artificial intelligence more expensive and less profitable I think is going to be for the interest of Americans and American safety. So thank you very much.
This wasn't an elegant message. But I don't think it matters. I don't have to convince my senator to care about AI. I just have to convince him that I care.[1]
If you think the government should slow down AI, please call your politicians. American readers can find their members of Congress at congress.gov/members/find-your-member. If you live in a tech hub like California or Washington, consider calling your local representatives as well. If you live outside the US, you can still lobby your government to restrict access to American AIs.
Speaking for myself, I get nervous when I talk to normies about AI ruin. I feel like a weird person talking about weird things, so I tend to avoid it. If you're one of those people who think it would be weird to call your senator about AI, I hope seeing my mildly embarrassing video makes the experience less awkward. Let's be weird together.
There are 7.8 billion people on Earth. If the collective weight of anti-AI lobbying can delay the apocalypse by even one day, that will be the equivalent of saving 21 million life-years[2]. A six-month pause would be equivalent to saving 50 million newborn babies, each of whom would go on to enjoy another 78 years of life. Even for a doomer like me, that's something to hope for.
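For the curious, here is the back-of-the-envelope arithmetic behind those figures, taking the post's own numbers (7.8 billion people, a 78-year lifespan) as given:

$$\frac{7.8\times 10^{9}\ \text{people}\times 1\ \text{day}}{365\ \text{days/year}}\approx 2.1\times 10^{7}\ \text{life-years}$$

$$\frac{7.8\times 10^{9}\ \text{people}\times 0.5\ \text{years}}{78\ \text{years/life}}= 5.0\times 10^{7}\ \text{lives}$$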
[1] In fact, odds are that my senator won't hear from me directly at all. Most politicians will have an aide screen their calls and keep a tally of how many people call about each issue. 10 mediocre pitches will easily beat 1 brilliant pitch. Just add your name to the tally.
[2] Quality unadjusted, but still!
Comments
comment by trevor (TrevorWiesinger) · 2023-04-09T06:34:10.648Z · LW(p) · GW(p)
I don't think this is a very helpful approach, and you're not doing yourself justice by taking it. Calling/writing Congress might be several orders of magnitude more effective than voting, but it's still at least one order of magnitude below most things people could be doing (that take the same amount of effort as calling Congress).
There are tons of examples of things to do that aren't this. I've been interested in using imagery to make it much easier to explain AI risk to someone for the first time [LW · GW]. Raemon has repeatedly endorsed Scott Alexander's Superintelligence FAQ post [LW · GW] as the best known layperson-friendly introduction to AI safety (he also endorsed this post [LW · GW] for ML engineers). Anyone can research and write a post like Lessons learned from talking with 100 academics about AI safety [EA · GW] to the same effect, because everyone has access to a sample of people who don't know about AI safety yet (in person; don't use social media, or your data will be crap and you'll drag everyone down with you).
↑ comment by Daniel Uebele · 2023-04-09T06:44:57.208Z · LW(p) · GW(p)
Is the goal of all this persuasion to get people to fire off a letter like the one above?
↑ comment by trevor (TrevorWiesinger) · 2023-04-09T07:22:56.494Z · LW(p) · GW(p)
The goal of the explanation is to give people a fair chance of understanding AI risk. You can either give someone a fair chance to model the world correctly, or you can fail to give them that fair chance. More fairness is better.
I could tell from the post that Omid did not feel confident in their ability to give someone a fair chance at understanding AI risk.
comment by Daniel Uebele · 2023-04-09T06:01:08.992Z · LW(p) · GW(p)
Thanks, I've fired off a version of this to my three representatives. I'm going to pass out copies to my friends and family.
comment by irving (judith) · 2023-04-08T06:18:36.802Z · LW(p) · GW(p)
This is exactly the kind of praxis we need to see more of.