In defense of flailing, with foreword by Bill Burr
post by lc · 2022-06-17T16:40:32.152Z · LW · GW · 6 comments
I don't give a [expletive] who you are, if the world is ending, and you're getting chased by zombies, you're not running around saying "oh golly gee. Oh heck... Aw jiminy cricket!"...You know?... It's the end of the world with zombies. From the beginning, once they discover the zombies, to the end of that movie where they hopefully solve the problem, it should be a bunch of people - no wait - 85% of the people going "[array of sorted expletives] WHAT ARE WE GONNA DO OH MY GOD" and then the other 15% should be grabbing them by the shoulders going "FOR CHRIST'S SAKE GET A [expletive] HOLD OF YOURSELF!"... That should be most of the dialogue in that movie.
- Comedy man on existential risk
CFAR has a notion of "flailing". Alone on a desert island, if you injure yourself, you'll probably think fast about how to solve the problem. But if you injure yourself around friends, you're more likely to "flail": you'll lean into things that demonstrate your pain/trouble to others.
Flailing can in fact be counterproductive. I will admit that if you're a MIRI engineer, and you start to flail in front of the other MIRI engineers, that is probably in poor taste. Everyone at MIRI already knows you are in trouble and is doing their best. Many of them are as scared as you are, and are trying to suppress their own instinct to flail.
But, just like crying on the floor, flailing does in fact serve legitimate personal and social purposes. People evolved shared abilities to signal distress for a reason. Before March of this year, on the rare occasion I saw the Utterly Horrifying Situation explicitly acknowledged, the guy on my laptop screen devoting his life to the problem would always follow up by saying: "Don't Panic. Do not alert the other people next to you that you might be panicking, if you are. Don't honestly convey how worried you are about the problem. Deliver your argument in reasoned, unemotional tones, like you are considering a hypothetical, not like you are explaining that an asteroid is headed towards Earth. If you must break down, do it quietly and in a separate room, because otherwise people might think you're not Rational, and by extension that your community is not Rational."
We are not on a desert island. A large fraction of the basic coordination problem stems from the complete absence of public and institutional understanding of how serious the situation has gotten. It clearly wouldn't solve everything, but a world in which most people understand that sans serious coordination the world is going to end seems closer to organizing a solution than one where almost nobody does. And if you're a concerned citizen trying to alert others, you should realize there's a distinct problem with suppressing your emotions.
That problem is: people hear philosophical-sounding arguments as to why we might be doing something very bad all the time. Normies take a brief look at these arguments, and then check out the emotional state and behavior of the person delivering them. The way you act and speak telegraphs to the other person how to react to the problem and the degree to which you yourself care about it.
If you are a programmer at FaceGoog who really does sound like they read something very scary on the internet, but puts 40% of their income into their 401k, bystanders who know these facts about you will probably not do anything about your pet doomsday scenario either, no matter how solid your reasoning is, because you are not behaving how they'd expect someone concerned about doomsday to behave. If you're an AI safety researcher who has a job working on the problem but you explain it to others like you're reading a weather forecast, they will probably be even less inclined to believe you, because people in the Real World will assume you should have an emotional attachment to the significance of your pet cause and have had lots of time to justify it to yourself. "This just sounds like a LARP, why aren't you doing anything about it then" is by far the most common unmanageable reaction I get after an hour-long conversation trying to raise the alarm with people I know, and I have little to no good explanation except "I dunno, maybe I'm a terrible/illogical person".
In fact, since the "MIRI announces new "Death With Dignity" strategy" [LW · GW] post, I've had a bit of time to reflect on why I've been working on a miscellaneous startup for years, even though I would've told anybody who asked me that the most likely outcome for myself is death before 30 years old, and why I suddenly now feel like I have to wake the fuck up. The reason I've suddenly had these misgivings is probably that Eliezer started flailing. I don't think "Death with Dignity" is actually a helpful way, psychologically, of looking at the problem, but the post highlighted Eliezer's internal world model to me in a second-order way that flipped a switch. It was the way he communicated, how his tone was consistent with what my hindbrain expects from the alarmed, not the content, that started to get me to think more seriously about what on Earth I was trying to do with my life. Same thing with the AGI Ruin [LW · GW] post. As Zvi puts it: "One could also propose making it not full of rants, but I don’t think that would be an improvement. The rants are important. The rants contain data. They reveal Eliezer’s cognitive state and his assessment of the state of play. Not ranting would leave important bits out and give a meaningfully misleading impression."
There's probably a line beyond which panicking or showing too much emotion about how the world is ending makes you look crazy, and where that line sits certainly differs depending on how well you personally know whoever you're talking to. However, for this particular issue it seems very rare, in practice, for people to cross that line. On the contrary, it seems like most people pretend to be calm or evade the actual subject of pending doom to a point that completely deflates their attempt at mobilizing or convincing others. Eliezer@pre-2022 will give a completely dry explanation of the problem or the factors surrounding the problem, and no matter how safely inside the Overton window he tries to be, anybody who dislikes him will just make up some psychoanalytic nonsense about how he's doing it because he wants to validate the importance of his IQ. So unless you sound incoherent to the people who were going to believe you anyways, I think you should be honestly expressing how you feel and the degree to which the problem concerns you. If you believe with 90% probability that everyone is going to die in the next fifteen years, no one seems to understand that after talking with you about the problem, and it's not your deliberate intention to hide your beliefs, then you're not being explicit enough.
6 comments
Comments sorted by top scores.
comment by Steven Byrnes (steve2152) · 2022-06-17T20:35:01.958Z · LW(p) · GW(p)
I wonder whether “Person P is freaking out” might have mixed effects: Maybe for people who were previously inclined to like P and/or agree with P, it would move them in the right direction, and for people who were previously inclined to dislike P and/or disagree with P, it would move them in the wrong direction.
Like, I think that I feel disinclined to listen to (person who I think is unreasonable), but that I feel much more disinclined to listen to (person who I think is unreasonable and is being very rant-y and emotional about it).
↑ comment by Aryeh Englander (alenglander) · 2022-06-17T23:35:43.853Z · LW(p) · GW(p)
It also depends on your target audience. (Which is basically what you said, just in slightly different words.) If you want to get Serious Researchers to listen to you and they aren't already within the sub-sub-culture that is the rationality community and its immediate neighbors, then in many (most?) cases ranting and freaking out is probably going to be actively counterproductive to your cause. Same if you're trying to build a reputation as a Serious Researcher, with a chance that decision makers who listen to Serious Researchers might listen to you. On the other hand, if your target audience is people who already trust you or who are already in your immediate sub-sub-tribe, and you don't mind risking being labeled a crackpot by the wider world, then I can see why visibly freaking out could be helpful.
[Also, it goes without saying that not everybody agrees with Eliezer's probability-of-doom estimates. Depending on your relative probabilities it might make perfect sense to work in a random startup, have a 401k, not visibly freak out, etc.]
comment by Jiro · 2022-06-18T18:00:35.346Z · LW(p) · GW(p)
If you flail because it convinces people how serious the problem is, then in the long term, you're Goodharting flailing; flailing ceases to be an indicator of how serious the problem is if you do it for strategic reasons.
↑ comment by Rob Bensinger (RobbBB) · 2022-06-19T00:00:21.126Z · LW(p) · GW(p)
Yeah, I think this is a large part of the problem with flailing. Also, the issue where you flail so much that you don't spend enough time seriously strategizing and acting on the object-level.
That said, I think the OP is a good argument that some versions of flailing can be good. In particular, I already thought that candor, blurting, and venting are good, and these can overlap with flailing.
comment by Nicholas / Heather Kross (NicholasKross) · 2022-06-17T17:52:57.913Z · LW(p) · GW(p)
Can confirm, Eliezer's recent posts (especially "AGI Ruin") have kinda "woken me up" to the urgency of the problem, and how far behind we are relative to where we should (could?) be.
comment by [deleted] · 2022-06-18T05:29:03.976Z · LW(p) · GW(p)
↑ comment by lc · 2022-06-18T05:36:20.661Z · LW(p) · GW(p)
I did not see it. I've had this post in my drafts for a few days now and only got around to publishing it now, as the LW mods can confirm (I asked them to review). Funny coincidence.
Edit: Appears it was posted earlier than that, but yeah. We just came up with the same title independently.