Fighting For Our Lives - What Ordinary People Can Do
post by TinkerBird · 2023-02-21T20:36:32.579Z · LW · GW · 18 comments
tl;dr: Get organized and start brainstorming
___________________________________________________________
If you're reading this, you are the resistance.
I've always wanted to say that. But seriously, if you're reading this, then you may be one of the few people who recognize the dangers we're facing, and therefore one of the only people who can help.
Nothing Matters Other Than This
No, really. Climate change, cancer and war don't matter anymore when AGI is about to arrive.
To oversimplify:
- Aligned AGI = heaven, at some point
- Misaligned AGI = we all die
Whether there's a 1% chance of doom or a 99% chance of doom, I don't think those scenarios should be treated that differently. Even a 1% chance of doom is far too much and calls for humanity's every effort to reduce it. And it's not 1%.
Even if you don't think that doom is likely, even if you're convinced that the right people are taking the safety issue seriously, even if you put a lot of stock in the idea of AI being helpful in solving alignment, you will either be dead in 20 years or death will be cured in 20 years. Are you sure that your day job still matters when you know that your time can help improve our chances?
And if you think that Yudkowsky is justified in his predictions... then what are you doing just sitting there?
This is the only important thing we will ever do.
The Problem
The problem can be broken down into two parts:
- Slowing down capability research
- Speeding up alignment research
(Though granted, there may be some crossover between these two elements.)
The sub-problems:
- It's very hard to get governments to do things
- It's very hard to motivate corporations to do anything other than make money
- It's very hard to get people with power or status to support an unpopular cause
- It's very hard to get AI capability researchers to sabotage their own work or switch professions
- It's very hard to paint a believable picture of doom for people (ironically, all of the previous cries that the end of the world was coming have hurt us greatly here)
- Other things that I haven't thought of yet
The Solution - Raising Awareness
The main goal in one sentence is to scream about the problem as loudly and convincingly as we can. We want the researchers to stop working on AI capability and to focus more on alignment.
We're in a positive feedback loop with this. The more awareness we raise, the more people we have to help raise awareness.
The first idea that comes to mind is getting Yudkowsky out there. Ideally we could find people even more persuasive than him, but we should reach out to anyone we can whose platform would be suitable for him to speak on publicly. Anyone else with a voice who cares about doom should be helped in speaking up as well.
Second, we should come up with better ways to convince the masses that this problem exists and that it matters. I'm sure that AI itself will be invaluable in doing this over the next few years as jobs start disappearing and deepfakes, AIs you can have spoken conversations with, and chatbots that can simulate your friends make their debut.
AI may or may not be helpful in alignment directly, but it will hopefully help us in other ways.
We should also focus on directly convincing prominent AI researchers, and anyone with power or a platform, that the possibility of doom matters. Recruiting authorities in the field of AI and people who are naturally charismatic and convincing may be the optimal way to do this.
As I'm writing this, it occurs to me that we may wish to observe how cults have spread their own messages in the past.
No Individual Is Smarter than All of Us Together
If the alignment researchers need time and resources, then that's what we'll give them, and that's something that anyone can help with.
Sadly, it will take some very, very creative solutions to get them what they need.
But fortunately...
When properly organized, large groups of people are often amazing at creative problem solving, far better than what any individual can do.
Whether in person or over the internet, people working together are great at coming up with ideas, dividing up to test them, elevating the better-sounding ideas and tossing out the ones that don't work.
There are many stories of dedicated groups working together online (mostly through sites like 4chan) to track down criminals, terrorist organizations, animal abusers, ordinary people who stirred up overblown outrage, and even Shia LaBeouf - all faster than any government organization could do it. These were ordinary people with too much time on their hands who did amazing things.
Remember that time that Yudkowsky was outmatched by the HPMOR subreddit working together during the Final Exam?
If 4Chan can find terrorist groups better than the US military can, then we can be a lot more creative than that when our lives are on the line.
A Long Shot Idea - Can Non-Programmers Help in Solving Alignment?
Not being a programmer, this is where experts will have to chime in.
It doesn't seem likely, but that said... Foldit.
Foldit is an online game created by the University of Washington that turned protein folding into a competitive sport, designed for people with little to no scientific background. Players used a set of tools to manipulate a protein's structure and try to find the most stable and efficient way of folding it, and the game's scoring system rewarded the players who did this best. People collaborated on it as they would on any game, and it went amazingly well.
Foldit players have been credited with numerous scientific discoveries, including solving the structure of a protein involved in the transmission of the AIDS virus - a problem that had stumped researchers for over a decade - and the design of a new enzyme that can break down plastic waste.
As I say, ordinary people working together are pretty much unbeatable.
If there is ANY possible way of doing this for alignment research, even in a small and strange way, it may be worth pursuing.
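To make the pattern concrete, here is a minimal sketch of the Foldit loop: players submit candidate solutions, an automatic objective function scores them, and a leaderboard surfaces the best attempts for others to build on. Everything here (`score_candidate`, `Leaderboard`, `Submission`) is a hypothetical illustration of the general shape, not anything from the actual game.

```python
# A minimal sketch of the Foldit pattern: crowdsourced candidates,
# automatic scoring, and a leaderboard. All names are illustrative.
from dataclasses import dataclass


@dataclass(order=True)
class Submission:
    score: float
    player: str
    candidate: str  # e.g. a serialized protein fold


def score_candidate(candidate: str) -> float:
    """Hypothetical objective function; in Foldit this role is played by an energy model."""
    return float(len(set(candidate)))  # placeholder heuristic for the sketch


class Leaderboard:
    def __init__(self, top_n: int = 10):
        self.top_n = top_n
        self.entries: list[Submission] = []

    def submit(self, player: str, candidate: str) -> float:
        score = score_candidate(candidate)
        self.entries.append(Submission(score, player, candidate))
        self.entries.sort(reverse=True)   # best score first
        del self.entries[self.top_n:]     # keep only the top_n attempts
        return score

    def best(self) -> Submission:
        return self.entries[0]


if __name__ == "__main__":
    board = Leaderboard()
    board.submit("alice", "fold-attempt-AAAB")
    board.submit("bob", "fold-XYZQ")
    print(board.best())  # the current highest-scoring submission
```

The hard part for alignment would obviously be finding anything that plays the role of `score_candidate`; the point of the sketch is only that the crowd-plus-scoreboard scaffolding itself is simple.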
Getting Everybody Working Together
We're currently not organized or trying very hard to save our own lives, and no other online group seems to be either.
It's frankly embarrassing that LessWrong collaborated so well on figuring out the ending to a Harry Potter fanfiction and yet we won't work together to save the world. But hey, it might not be too late yet.
So would anyone like to get organized?
___________________________________________________________
Naturally, if anyone can think of any good additions or edits for this post, do let me know.
18 comments
Comments sorted by top scores.
comment by Richard_Kennaway · 2023-02-22T10:50:38.393Z · LW(p) · GW(p)
> And if you think that Yudkowsky is justified in his predictions... then what are you doing just sitting there?
I am sitting here having no idea how to solve the problem.
There are two parts to the problem. One is getting people to realise how dangerous the fire is that we're playing with right now, both the generality of people and the people working on making more fire. I can't see that happening short of a few actual but not world-destroying fires, compared with which Sydney is a puff of smoke. Starting such fires deliberately would be counterproductive: the response would be "well then, don't start fires, dum-dum". The other part is finding a way to safely make and use that fire. I have no good ideas about that, and my impression is that a lot of people have only bad ideas.
Finding nothing for me to do about this, I do nothing.
comment by Alex Power (alex-power) · 2023-02-21T23:13:52.658Z · LW(p) · GW(p)
No, "raising awareness" is not a solution. Saying "all we need is awareness" is a lazy copout, somewhere between an appeal to magic and a pyramid scheme.
If other people here agree with this, I will have to add it to https://www.newslettr.com/p/contra-lesswrong-on-agi
Replies from: lahwran, TinkerBird
↑ comment by the gears to ascension (lahwran) · 2023-02-21T23:44:01.977Z · LW(p) · GW(p)
Upvote, disagree: Raising productively useful awareness looks like exactly the post you made. Insufficiently detailed awareness of a problem that just looks like "hey everyone, panic about a thing!" is useless, yeah. And folks who make posts here have warned about that kind of raising awareness before as well.
(Even as a lot of folks make posts to the ai safety fieldbuilding [? · GW] tag. Some of them are good raising awareness, a lot of them are annoying and unhelpful, if you ask me. The ones that are like "here's yet another way to argue that there's a problem!" often get comments like yours, and those comments often get upvoted. Beware confirmation bias when browsing those; the key point I'm making here is that there isn't any sort of total consensus, not that you're wrong that some people push for a sort of bland and useless "let's raise awareness".)
Interesting recent posts that are making the point you are or related ones:
- first, they posted https://www.lesswrong.com/posts/FqSQ7xsDAGfXzTND6/stop-posting-prompt-injections-on-twitter-and-calling-it [LW · GW] but then they followed it up with https://www.lesswrong.com/posts/guGNszinGLfm58cuJ/on-second-thought-prompt-injections-are-probably-examples-of [LW · GW]
- https://www.lesswrong.com/posts/nExb2ndQF5MziGBhe/should-we-cry-wolf [LW · GW]
- link specifically to a previous comment of mine, but I really mean the whole post - https://www.lesswrong.com/posts/AvQR2CqxjaNFK7J22/how-seriously-should-we-take-the-hypothesis-that-lw-is-just?commentId=JmhiskFRhYJc3vyXk [LW(p) · GW(p)]
If you ask me (which you didn't): There's real reason to be concerned about the trajectory of AI. There's real reason to invite more people to help. And yet you're quite right; just yelling "hey, help with this problem!!" is not a strategy we should make reputable. Science is hard and requires evidence. Especially for extraordinary claims.
Also, I think plenty of evidence exists that it's a larger than 5% risk. And the ai safety fieldbuilding tag does have posts that go over it. I'd suggest opening a bunch of them and reading them fast, closing the ones that don't seem to make useful points; I'm sure you will, on net, mostly think most of the arguments suck. If you don't think you can sift good from bad arguments and then still take the insights of the good arguments home, like, shrug, I guess don't let your brain be hacked by reading those posts, but I think there are some good points in both directions floating around.
↑ comment by TinkerBird · 2023-02-22T13:27:20.184Z · LW(p) · GW(p)
When did I say that raising awareness is all that we need to do?
comment by Jonathan Claybrough (lelapin) · 2023-02-22T10:24:43.532Z · LW(p) · GW(p)
Cool that you wanna get involved! I recommend the most important thing to do is coordinate with other people already working on AI safety, because they might have plans and projects already going on that you can help with, and to avoid the unilateralist's curse.
So, a bunch of places to look into to both understand the field of AI safety better and find people to collaborate with:
http://aisafety.world/tiles/ (lists different people and institutions working on AI safety)
https://coda.io/@alignmentdev/alignmentecosystemdevelopment (lists AI safety communities, you might join some international ones or local ones near you)
I have an agenda around outreach (convincing relevant people to take AI safety seriously) and think it can be done productively, though it wouldn't look like 'screaming on the rooftops', but more expert discussion with relevant evidence.
I'm happy to give an introduction to the field and initial advice on promising directions; anyone interested, DM me and we can schedule that.
comment by the gears to ascension (lahwran) · 2023-02-21T23:01:11.865Z · LW(p) · GW(p)
> No, really. Climate change, cancer and war don't matter anymore when AGI is about to arrive.
Disagree, sort of - those are all instances of the same underlying generalized problem. Yeah, maybe we don't need to solve them all literally exactly the same way - but I think a complex systems view of learning gives me hunches that the climate system, body immune systems, geopolitical systems, reinforcement learning ai systems, and supervised learning systems all share key information theoretic properties.
I know that sounds kind of crazy if you don't already have a mechanistic model of what I'm trying to describe, and it might be hard to identify what I'm pointing at. I've been collecting resources intended to make it easier for "ordinary people" to get involved.
(Though nobody is ordinary, humans are very efficient, even by modern ML standards; current smart models just saw a ton of data. And, yeah, maybe you don't have time to see as much data as they saw.)
In general I think some of your claims about how to raise awareness are oversimplified; in general shouting at people is not a super effective way to change their views and hadn't been working until evidence showed up that was able to change people's views without intense shouting. Calmly showing examples and describing the threat seems to me to be more effective. But, it is in fact important to (gently) activate your urgency and agency, because ensuring that we can create universalized <?microsolidarity/coprotection/peer-free-will-protection?> across not just social groups but species is not gonna be trivial. We gotta figure out how to make hypergrowth superplanners not eat literally all beings, including today's AIs, and a key part of that is figuring out how to create a big enough and open enough co-protection network.
As with most things I post, this was written in a hurry. Feedback welcome. I'm available to talk in realtime on discord at your whim.
comment by [deleted] · 2023-02-23T08:02:40.401Z · LW(p) · GW(p)
How do you address the "capabilities overhang" argument?
If you keep capabilities research at the same pace, or speed it up, capable AGIs will be built at an earlier date.
Iff they are dangerous/uncontrollable, there will be direct evidence of this. Perhaps science fiction scenarios where the AGIs commit murders before getting caught and shut down - something that clearly demonstrates the danger. (I hope no one is ever killed by rogue AI, but in the past, most of the rules for industrial safety were written using the blood of killed workers as the ink. I'm not sure there are OSHA rules that aren't directly from someone dying.)
Right now you are telling people about a wolf that doesn't even exist yet.
If the AGIs are built in this earlier era, it means there is less compute available. Right now, it might require an unfeasible amount of compute to train and host an AI - easily tens, maybe hundreds of millions of dollars in hardware just to host 1 instance. This makes "escapes" difficult if there are only a few sets of the necessary hardware on the planet.
There is also less robotics available - and with the recent tech company funding cuts to robotics efforts, even less than that.
Replies from: TinkerBird
↑ comment by TinkerBird · 2023-02-23T08:07:47.597Z · LW(p) · GW(p)
> How do you address the "capabilities overhang" argument?
I don't, I'm just not willing to bet the Earth that a world-ender will be preceded by mere mass murders. It might be, but let's not risk it.
As I say, if there's a 1% chance of doom, then we should treat it as nearly 100%.
Replies from: None
↑ comment by [deleted] · 2023-02-23T08:15:36.757Z · LW(p) · GW(p)
By working on capabilities, "you" could do two things:
- Create a trail, a path to a working system that is probably safe. This creates evolutionary lock-in - people are going to build on the existing code and explore around it, they won't start over. Example: Linus Torvalds banned the use of C++ in the Linux kernel, a decision that still holds today.
- Accelerate slightly the date to the first AGI, which has the advantages mentioned above.
By obstructing capabilities, "you":
1. Leave it to the other people who create the AGI instead to decide the approach
2. Delay slightly the date of the first AGI, to a time when more compute and robotics will be available for bad outcomes.
By "you" I mean every user of lesswrong who is in a position knowledge/career wise combined, so a few thousand people.
Are broader outcomes even possible? If EY really did spur the creation of OpenAI, he personally subtracted several years from the time until first AGI...
↑ comment by TinkerBird · 2023-02-23T08:23:52.203Z · LW(p) · GW(p)
All good points. I suspect the best path into the future looks like: everyone's optimistic and then a 'survivable disaster' happens with AI. Ideally, we'd want all the panic to happen in one big shot - it's the best way to motivate real change.
Replies from: None
↑ comment by [deleted] · 2023-02-23T08:34:54.903Z · LW(p) · GW(p)
Yeah. Around here, there's Zvi, EY himself, and many others who are essentially arguing that capabilities research is itself evil.
The problem with that take is
(1) alignment research will probably never achieve success without capabilities to test its theories on. Why wasn't alignment worked on since 1955, when AI research began? Because there was no credible reason to think it was a threat.
(2) the world we live in has a number of terribad things happening to people by the millions, with that nasty virus that went around being only a recent, particularly bad example, and we have a bunch of problems that we humans are probably not capable of solving. Too many independent variables, too stochastic, correct theories are probably too complex for a human to keep the entire theory "in their head" at once. Examples: medical problems, how the economy works.
Replies from: TinkerBird
↑ comment by TinkerBird · 2023-02-23T08:41:57.779Z · LW(p) · GW(p)
I wouldn't call it evil, but I would say that it's playing with fire.
comment by kerry · 2023-02-23T04:17:07.577Z · LW(p) · GW(p)
There is not currently a stock solution for convincing someone who is not already willfully engaged of the -realistic- dangers of AI. There are many distracting stories about AI, which are worse than nothing. But describing a bad AI ought to be a far easier problem than aligning AI. I believe we should be focused, paradoxically, perhaps dangerously, on finding and illustrating very clearly the shortest, most realistic and most impactful path to disaster.
The most common misconception I think that people make is to look at the past, and our history of surviving disasters. But most disasters thus far in planetary history have been heavily based in physics and biology - wars, plagues, asteroids. An intelligence explosion would likely have only trivial constraints in either of these domains.
I would try to start in illustrating that point.
Replies from: TinkerBird
↑ comment by TinkerBird · 2023-02-23T08:02:45.778Z · LW(p) · GW(p)
This is why I'm crossing my fingers for a 'survivable disaster' - an AI that merely kills a lot of people instead of everyone. Maybe then people would take it seriously.
Coming up with a solution for spreading awareness of the problem is a difficult and important problem that ordinary people can actually tackle, and that's what I want to try.
Replies from: kerry
↑ comment by kerry · 2023-02-27T21:32:10.460Z · LW(p) · GW(p)
maybe some type of oppositional game can help in this regard?
Along the same lines as the AI Box experiment. We have one group "trying to be the worst case AI" starting right at this moment. Not a hypothetical "worst case" but one taken from this moment in time, as if you were an engineer trying to facilitate the worst AI possible.
The Worst Casers propose one "step" forward in engineering. Then we have some sort of Reality Checking team (maybe just a general crowd vote?) that tries to disprove the feasibility of the step, given the conditions that exist in the scenario so far. Anyone else can submit a "worse-Worst Case" if it is easier / faster / larger magnitude than the standing one.
Over time the goal is to crowdsource the shortest credible path to the worst possible outcome, which, if done very well, might actually reach the realm of colloquial communicability.
I've started coding editable logic trees like this as web apps before, so if that makes any sense I could make it public while I work on it.
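For what it's worth, a rough sketch of the data model that kind of game implies (in Python rather than a web app): a tree of proposed worst-case steps, feasibility votes from the reality-checking crowd, and a search for the shortest chain of steps that survives the challenges. All of the names (`Step`, `challenge`, `shortest_credible_path`) are made up for illustration, not taken from any existing tool.

```python
# A rough sketch of the game described above: a tree of proposed worst-case
# engineering steps, each challengeable on feasibility, with the crowd trying
# to find the shortest credible path to a bad outcome. All names are illustrative.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Step:
    description: str                         # one proposed engineering step
    parent: Optional["Step"] = None
    children: list["Step"] = field(default_factory=list)
    feasibility_votes: list[int] = field(default_factory=list)  # +1 / -1 votes

    def propose(self, description: str) -> "Step":
        """A Worst Caser proposes the next step in the scenario."""
        child = Step(description, parent=self)
        self.children.append(child)
        return child

    def challenge(self, vote: int) -> None:
        """The reality-checking team (or a crowd vote) rates the step."""
        self.feasibility_votes.append(vote)

    def credible(self, threshold: int = 0) -> bool:
        return sum(self.feasibility_votes) >= threshold


def credible_paths(node: Step) -> list[list[Step]]:
    """All root-to-leaf chains in which every step survived the reality check."""
    if not node.credible():
        return []
    if not node.children:
        return [[node]]
    paths = []
    for child in node.children:
        for tail in credible_paths(child):
            paths.append([node] + tail)
    return paths


def shortest_credible_path(root: Step) -> Optional[list[Step]]:
    """The crowd's current best answer: the shortest surviving scenario."""
    paths = credible_paths(root)
    return min(paths, key=len) if paths else None


if __name__ == "__main__":
    root = Step("Model gains unsupervised access to its own training cluster")
    nxt = root.propose("Model exfiltrates its weights to rented cloud compute")
    root.challenge(+1)
    nxt.challenge(+1)  # the crowd finds both steps plausible so far
    print([s.description for s in shortest_credible_path(root)])
```

A real version would need the web front end and some guard against vote gaming, but the underlying tree-and-votes structure is small enough to live in a single database table keyed by parent ID.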
Another possibility is to get Steven Spielberg to make a movie but force him to have Yud as the script writer.
Replies from: TinkerBird
↑ comment by TinkerBird · 2023-02-27T23:22:49.518Z · LW(p) · GW(p)
Based on a few of his recent tweets, I'm hoping for a serious way to turn Elon Musk back in the direction he used to be facing and get him to publicly go hard on the importance of the field of alignment. It'd be too much to hope for, though, to get him to actually fund any researchers. Maybe someone else.
comment by Foyle (robert-lynn) · 2023-02-22T11:24:13.945Z · LW(p) · GW(p)
Have just watched EY's "Bankless" interview.
I don't disagree with his stance, but am struck that he sadly just isn't an effective promoter for people outside of his peer group. His messaging is too disjointed and rambling.
This is, in the short term, clearly an (existential) political rather than technical problem, and needs to be solved politically rather than technically to buy time. It is almost certainly solvable in the political sphere, at least.
As an existence proof, we have a significant percentage of the Western world's population stressing about (comparatively) unimportant environmental issues (generally 5-15% vote Green in Western elections), and they have built up an industry that is collecting and spending hundreds of billions a year on mitigation activities - equivalent to something on the order of a million workers' efforts directed toward it.
That psychology could certainly be redirected to the true existential threat of AI-mageddon - there is clearly a large fraction of humans with the patterns of belief needed to take on this and other existential issues as a major cause if they have it explained in a compelling way. Currently Eliezer appears to lack the charismatic, down-to-earth conversational skills to promote this (maybe media training could fix that), but if a lot of money were directed towards buying effective communicators/influencers with large reach into youth markets to promote the issue, it would likely quickly gain traction. Elon would be an obvious person to ask for such financial assistance. And there are any number of elite influencers who would likely take a paycheck to push this.
Laws can be implemented if there are enough people pushing for them, elected politicians follow the will of the people - if they put their money where their mouths are, and rogue states can be economically and militarily pressured into compliance. A real Butlerian Jihad.
Replies from: None
↑ comment by [deleted] · 2023-02-23T08:06:42.956Z · LW(p) · GW(p)
Rather grimly, none of that green activism is likely helping on a planetary scale. It raises the costs of manufacturing, so China just does it all burning coal.
The only meaningful 'green' outcome is that decades of government funding for solar panel, wind, EV and battery research have helped make alternatives cheaper, and people vote with their wallets.
Similarly, laws restricting AI research would just push it to other countries who will have no laws on it.