Campaign for AI Safety: Please join me
post by Nik Samoylov (nik-samoylov) · 2023-04-01T09:32:11.907Z · LW · GW · 9 comments
I will start by saying that I generally agree with Yudkowsky's position on AI. We must proceed with extreme caution. We must radically slow down AI capability advancement. We must invest unfathomable amounts of resources in AI alignment research. We need to enact laws and treaties that will help keep it all together for as long as possible, so that we hopefully figure things out in time.
The laughter at the recent White House press conference, in response to a question about Yudkowsky's argument, indicates how far public debate is from a sensible position of caution.
But I am hopeful that we can change that. Few people laugh at nuclear weapons now. We are a species capable of cooperation and of taking things seriously. As the saying goes:
"First they ignore you, then they laugh at you, then they fight you, then you win."
What is missing is public understanding of the dangers of misaligned or unaligned AI. Democracy does not work in darkness. People must know the dangers, the uncertainty, and the ways they can contribute.
That's why I am proposing a campaign on public awareness of x-risk from AI. So far, it's just me and my wife. Please join me, especially if you work in advertising, marketing, PR, activism, politics, law, etc., if you know how to make a website, if you want to create PR materials, meet journalists, do accounting, fund-raising, etc.
Please share this with people who do not read Less Wrong but who are freaked out and want to do something.
I do not know exactly how this campaign will run or which countries to focus on. I am myself only human and can contribute very little of the total required effort. My background is in consulting and market research, and I run a market research company. Personally, at this stage, I can best contribute by coordinating and facilitating operations.
We need people, money, expertise, patience, etc. Please join: https://campaignforaisafety.org/.
Comments sorted by top scores.
comment by Ruby · 2023-04-01T19:30:39.365Z · LW(p) · GW(p)
I'd urge a lot of caution here. If you've recently updated towards AI being a very big deal, then it can feel like you have to do something right now. But there can be both upside and serious downside if you do this wrong. There are a lot of people thinking about this, so I'd work to reach them and coordinate rather than succumbing to the unilateralist's curse.
↑ comment by Nik Samoylov (nik-samoylov) · 2023-04-02T08:49:40.272Z · LW(p) · GW(p)
Thank you for your words of caution, @the gears to ascension [LW · GW], @Ruby [LW · GW], @Chris_Leong [LW · GW].
Indeed, I have just recently updated on AI. I lived happily believing AGI was just nonsense, after seeing gimmick after gimmick and slow progress on anything general. This all came as a rude shock a couple of weeks ago.
I will heed your advice on consulting with others.
I am, however, of the firm opinion that AI alignment is not going to be solved any time soon. The best thing is just to shut down progress on new capabilities indefinitely. I do not see this being done without the force of law, and politics will inevitably be at play.
↑ comment by Paul Crowley (ciphergoth) · 2023-04-08T15:36:24.741Z · LW(p) · GW(p)
Don't let your firm opinion get in the way of talking to people before you act. It was Elon's determination to act before talking to anyone that led to the creation of OpenAI, which seems to have sealed humanity's fate.
↑ comment by simeon_c (WayZ) · 2023-04-15T13:15:34.515Z · LW(p) · GW(p)
I think it's misleading to state it that way. There were definitely dinners and discussions with people around the creation of OpenAI.
https://timelines.issarice.com/wiki/Timeline_of_OpenAI
Months before OpenAI was created, there was a discussion about starting it that included Chris Olah, Paul Christiano, and Dario Amodei: "Sam Altman sets up a dinner in Menlo Park, California to talk about starting an organization to do AI research. Attendees include Greg Brockman, Dario Amodei, Chris Olah, Paul Christiano, Ilya Sutskever, and Elon Musk."
↑ comment by Paul Crowley (ciphergoth) · 2023-04-17T00:51:47.123Z · LW(p) · GW(p)
Thanks, that's useful. Sad to see no Eliezer, no Nate, nor anyone else from MIRI or with a similar perspective, though :(
comment by Chris_Leong · 2023-04-01T14:07:26.631Z · LW(p) · GW(p)
I'd encourage you to consult widely with people in AI Safety/governance before running a large public awareness campaign. The AI Safety Governance course is likely a good place to start in terms of skilling up/better understanding this issue. I think it's possible for a public relations campaign to move the needle, but it's also very important to guard against downside risks and to think very carefully and strategically about the path to impact.
For example, if we ask for the government to sponsor research, how do we ensure the money actually goes towards alignment rather than people who just frame their research in terms of alignment?
Or, for example, with "critical decisions regarding AI safety must not be taken by small groups of AI researchers"? I agree we would like to avoid a small group of researchers making decisions without consulting anyone else, but at the same time, I'd much rather have decisions made by researchers than by politicians, who would most likely be clueless and too focused on appearance rather than substance.
comment by the gears to ascension (lahwran) · 2023-04-01T11:00:20.010Z · LW(p) · GW(p)
Have you looked at the ai alignment fieldbuilding [? · GW] tag at all? It seems to me likely that the approach you're using will result in engaging with standard political information flows in ways that activate unintelligent parts of people's fact evaluation. Your immediate steps are instrumental steps which campaigns often use, and it's not obvious at all that your campaign is ready to succeed. I am, in general, enthusiastic about networking and organizing, but not enthusiastic about campaigning for attention or advertising without a communicative goal. Your site doesn't seem inherently terrible, and it's quite possible I'm simply wrong. But this is my first impression.
To be clear, I'd love to see more folks thinking carefully about how to communicate issues in ways others can digest. I'm a big fan of this dude's video on soft language (text version) in how to communicate about intense topics in large-scale communication projects. So maybe the thing to take away from my comment is just that some rando in the field was slightly hesitant, rather than that I'm some sort of authority who can tell you your approach sucks. Try what you know how to do, after all.
comment by Paul Crowley (ciphergoth) · 2023-04-08T15:37:02.398Z · LW(p) · GW(p)
The lack of names on the website seems very odd.
comment by [deleted] · 2023-04-13T11:31:03.427Z · LW(p) · GW(p)
Thank you for doing this. I'm thinking that at this point, there needs to be an organisation with the singular goal of pushing for a global moratorium on AGI development. Anyone else interested in this? Have DM'd.