Most people should probably feel safe most of the time
post by Kaj_Sotala · 2023-05-09T09:35:11.911Z · LW · GW · 28 comments
There is an idea that I’ve sometimes heard around rationalist and EA circles, that goes something like “you shouldn’t ever feel safe, because nobody is actually ever safe”. I think there are at least two major variations of this:
- You shouldn’t ever feel safe, because something bad could happen at any time. To think otherwise is an error of rationality.
- You shouldn’t ever feel safe, because AI timelines might be short and we might be about to die soon.[1] Thus, to think that you’re safe is to make an error of rationality.
I’m going to argue against both of these. If you already feel like both of these are obviously wrong, you might not need the rest of this post.
Note that I only intend to dispute the intellectual argument that these are making. It’s possible to accept on an intellectual level that it would make sense to feel safe most of the time, but still not feel safe. That kind of emotional programming requires different kinds of tools [LW · GW] to deal with. I’m mostly intending to say that if you feel safe, you don’t need to feel bad about that. You don’t need to make yourself feel unsafe; for most people, it’s perfectly rational to feel safe.
I do expect some of the potential readers of this post to live in a very unsafe environment - e.g. in parts of current-day Ukraine, or together with someone abusive - where they are actually in constant danger. For them, it may make sense to feel unsafe all the time. (If you are one of them, I genuinely hope things get better for you soon.) But these are clearly situations where something has gone badly wrong; the feeling that one has in those situations shouldn’t be something that one was actively striving for. I think that any reader who doesn’t live in an actively horrendous situation would do better to feel safe most of the time. (Short timelines don't count as a horrendous situation, for reasons that I'll get into.)
As I interpret it, the core logic in both of the “you shouldn’t ever feel safe” claims goes as follows:
- To feel safe implies a belief that nothing bad is going to happen to you
- But something bad can happen to you at any time, even when you don’t expect it. In the case of AI, we even have reasons to put a significant probability on this in fact happening soon.
- Thus, feeling safe requires having an incorrect belief, and the rational course of action is to not feel safe.
One thing that you might notice from looking at this argument is that one could easily construct an exactly opposite one as well.
- To feel unsafe implies a belief that things aren’t going to go well for you.
- But things can go well for you, even when you don’t expect it. In the case of AI, we even have reasons to put a significant probability on things going well.
- Thus, feeling unsafe requires having an incorrect belief, and the rational course of action is to feel safe.
That probably looks obviously fallacious - just because things can go well, doesn’t mean that it would be warranted to always feel safe. But why then would it be warranted to feel unsafe in the case where things just can go badly?
To help clarify our thinking, let's take a moment to look at how the US military orients to the question of being safe or not. More specifically, to the question of whether a given military unit is reasonably safe or whether it should prepare for an imminent battle.
Readiness Condition levels are a series of standardized levels that a unit’s commander uses to adjust the unit’s readiness to move and fight. Here’s an abridged summary of them:
- REDCON-1. Full alert; unit ready to move and fight. The unit’s equipment and NBC alarms are stowed, soldiers at observation posts are pulled in. All personnel are alert and mounted on vehicles. Weapons are manned, engines are started, company team is ready to move immediately.
- REDCON-2. Full alert; unit ready to fight. Equipment except for NBC alarms is stowed. Precombat checks are complete, all personnel are alert and mounted in vehicles. Weapons are manned but engines are not started. Company team is ready to move within 15 minutes of notification.
- REDCON-3. Reduced alert. Fifty percent of the unit executes work and rest plans. Company team is ready to move within 30 minutes of notification.
- REDCON-4. Minimum alert. Observation posts are manned; one soldier per platoon designated to monitor radio and man turret weapons. Company team is ready to move within one hour of notification.
Now, why are all units not always kept on REDCON-1, or at least on REDCON-2? After all, there could always be an unexpected need for the units to mobilize or fight on immediate notice. Even units based on the US mainland might be called in to deal with a terrorist attack (as happened on 9/11) or natural disaster at any time.
The obvious answer is that a higher REDCON burns resources and makes the unit incapable of doing the tasks that it would carry out at lower readiness levels. The soldiers get tired, running the engines consumes fuel, soldiers who would be at observation posts aren’t carrying out observations, and work and rest tasks aren’t being carried out.
Even though the military realizes that the unit could be needed at any time, setting their readiness condition isn’t just a question of whether it’s possible for the unit to be needed on short notice. It’s also a question of whether that’s likely enough to make the cost of maintaining a high state of readiness worth it in expectation.
I think that it makes sense for an individual person to also orient to the question of being safe or unsafe in a similar way. If someone claims that you shouldn’t ever feel safe, they are presumably saying that because they expect the feeling to translate to actions. They are saying that you should act as if you weren’t safe. But there is an opportunity cost to that; frequently thinking about possible threats burns mental cycles that could be used on something else, it makes it harder to rest and relax, and it biases the kind of information that you pay attention to.
In fact, I’m going to take the analogy a step further. I think that a person’s sense of (un)safety is in fact their subjective experience of an internal variable [LW · GW] that tracks something analogous to the readiness condition level of that person’s body and brain.
This quote from Cosmides & Tooby (2000) describes some of the effects they suggest may be triggered when a person is alone at night and feels a fear of being stalked; or in this analogy, the kinds of responses that the body and brain associate with being at their equivalent of REDCON 1 or 2:
(1) There are shifts in perception and attention: You may suddenly hear with far greater clarity sounds that bear on the hypothesis that you are being stalked, but that ordinarily you would not perceive or attend to, such as creaks or rustling. Are the creaks footsteps? Is the rustling caused by something moving stealthily through the bushes? Signal detection thresholds shift: Less evidence is required before you respond as if there were a threat, and more true positives will be perceived at the cost of a higher rate of false alarms.
(2) Goals and motivational weightings change: Safety becomes a far higher priority. Other goals and the computational systems that subserve them are deactivated: You are no longer hungry; you cease to think about how to charm a potential mate; practicing a new skill no longer seems rewarding. Your planning focus narrows to the present: worries about yesterday and tomorrow temporarily vanish. Hunger, thirst, and pain are suppressed.
(3) Information-gathering programs are redirected: Where is my baby? Where are others who can protect me? Is there somewhere I can go where I can see and hear what is going on better?
(4) Conceptual frames shift, with the automatic imposition of categories such as "dangerous" or "safe". Walking a familiar and usually comfortable route may now be mentally tagged as "dangerous". Odd places that you normally would not occupy - a hallway closet, the branches of a tree - suddenly may become salient as instances of the category "safe" or "hiding place".
(5) Memory processes are directed to new retrieval tasks: Where was that tree I climbed before? Did my adversary and his friend look at me furtively the last time I saw them?
(6) Communication processes change: Depending on the circumstances, decision rules might cause you to emit an alarm cry, or be paralyzed and unable to speak. Your face may automatically assume a species-typical fear expression.
(7) Specialized inference systems are activated: Information about a lion's trajectory or eye direction might be fed into systems for inferring whether the lion saw you. If the inference is yes, then a program automatically infers that the lion knows where you are; if no, then the lion does not know where you are (the "seeing-is-knowing" circuit identified by Baron-Cohen 1995, and inactive in autistics). This variable may automatically govern whether you freeze in terror or bolt. Are there cues in the lion's behavior that indicate whether it has eaten recently, and so is unlikely to be predatory in the near future? (Savanna ungulates, such as zebras and wildebeests, commonly make this kind of judgment; Marks, 1987).
(8) Specialized learning systems are activated, as the large literature on fear conditioning indicates (e.g., LeDoux, 1995; Mineka & Cook, 1993; Pitman & Orr, 1995). If the threat is real, and the ambush occurs, the victim may experience an amygdala-mediated recalibration (as in post-traumatic stress disorder) that can last for the remainder of his or her life (Pitman & Orr, 1995).
(9) Physiology changes: Gastric mucosa turn white as blood leaves the digestive tract (another concomitant of motivational priorities changing from feeding to safety); adrenalin spikes; heart rate may go up or down (depending on whether the situation calls for flight or immobility), blood rushes to the periphery, and so on (Cannon, 1929; Tomaka, Blascovich, Kibler, & Ernst, 1997); instructions to the musculature (face, and elsewhere) are sent (Ekman, 1982). Indeed, the nature of the physiological response can depend in detailed ways on the nature of the threat and the best response option (Marks, 1987).
(10) Behavioral decision rules are activated: Depending on the nature of the potential threat, different courses of action will be potentiated: hiding, flight, self-defense, or even tonic immobility (the latter is a common response to actual attacks, both in other animals and in humans). Some of these responses may be experienced as automatic or involuntary.
From this list, it’s pretty clear that it would be a bad idea to maintain this state all the time. Even if there were circumstances where it was theoretically possible for a person to be stalked, maintaining a constant state of fear would prevent them from digesting their food, relaxing, or for that matter thinking about anything other than how to get to safety.
And if everywhere were classified as unsafe (as it is when you say one should never feel safe), then the priority of “get somewhere safe” couldn’t do anything useful. The person would just be stuck in a constant anxiety loop that demanded constant running or fighting but never recognized a state where those responses could even temporarily be wound down.
In a more recent paper, Levy & Schiller (2020) discuss our subjective experience of threat as being linked to a series of unconscious computations about the expected distance to physical danger, such as a predator. They describe a series of threat levels that one could see as being analogous to the readiness conditions of a military unit (extra paragraph breaks and bolding of the different stages added):
Encountering a life-threatening situation engages neural computations that consider information about the environment and the source of threat. These computations design defensive policies and select adaptive responses for execution. The threat imminence continuum model [2], which maps defensive behaviors onto levels of threat imminence (how far a predator is in time and space), provides a platform of prey-predator relations for assessing the neural circuits and computations for survival [3, 4].
In the first ‘safe’ stage, there is no threat; an encounter with a predator may occur in the distant future. Individuals may experience occasional anxiety, and flash forward toward possible future threats. Cognitive control and emotion regulation could keep this process in check.
Next is the ‘pre-encounter threat’– the predator is not present, but may surface at any moment. Individuals may experience anticipatory anxiety and exhibit vigilance and preparatory behaviors.
In the more dangerous ‘post-encounter threat’ the prey, not yet detected, observes the predator. This step generates encounter anxiety, involving close inspection and anticipation of the predator’s moves, strategic freezing to avoid detection and gather information, and avoidance estimation.
Finally, the prey is under most extreme danger during the ‘circa-strike’ phase, when the predator is attacking. Within that attack mode, the predator could be distant enough to allow a feeling of fear and rapid thoughts examining the situation and assessing escape routes. Fight or flight ensues as the predator gets closer yet without contact. The final point of contact provokes hard-wired, fast, often poorly-executed reactions of freezing and panic [5].
If this model is right, it then implies that “feeling safe” does not imply an assessment of zero probability of threat. Feeling safe is the subjective experience of the brain assigning a low probability to a threat, and emphasizing the kinds of behavioral and biological priorities that make the best use of a low-threat situation.
Meanwhile, “feeling unsafe” implies an assessed level of at least ‘pre-encounter threat’, meaning at least a moderate probability of an immediate physical threat. “Moderate” in this context is a bit of a fuzzy term, given that even something like a 1% probability of there being a predator around could plausibly make it justified to maintain this level of readiness.
Still, a 99+% probability of being safe isn’t that high of a bar; extreme probabilities are common [LW · GW]. If a person’s risk of being physically attacked was 1% per day, they would have a 97% chance of being attacked within a year. Some of the people reading this post may live in a location where their risk of being assaulted is around that order of magnitude, but if they don’t, then constantly feeling unsafe implies that their subconscious isn’t calculating the probabilities correctly. (Probably warranting a diagnosis of an anxiety disorder.)
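As a quick sanity check on that figure (a sketch of mine, not from the original sources; it assumes the daily risk is independent across days), the compounding works out like this:

```python
# Sanity check: a 1% daily risk of being attacked, assumed independent
# across days, compounds to roughly a 97% annual risk.
def annual_risk(daily_risk: float, days: int = 365) -> float:
    """Probability of at least one attack over `days` independent days."""
    return 1 - (1 - daily_risk) ** days

print(f"{annual_risk(0.01):.1%}")  # prints "97.4%"
```

The same formula shows why feeling safe is usually calibrated: at a more typical assault risk of, say, 0.001% per day, the annual figure stays under half a percent.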
There is also the consideration that feeling unsafe doesn’t only imply a moderate probability of something bad happening; it implies a classification of the bad thing as “the kind of thing that this type of readiness response is useful for dealing with”. To extend the analogy to the military’s readiness levels, having your weapons manned and being ready to fight isn’t very useful if the threat being faced is an infectious disease, or Congress deliberating a funding cut that involves the unit in question being decommissioned.
Likewise, it isn’t very useful to activate behavioral responses that are evolved for the purpose of getting to safety from an imminent threat, if the threat involves unaligned AI possibly being developed within the next decade or so. There isn’t anywhere that you could run away to, nor a concrete enemy you could defeat, so this response would be stuck trying to do an impossible task. The mindset necessary for solving the problem is clear and calm thinking, and an ability to relax and get rest as well. So this is an instance of a situation where it might be warranted to feel safe even if you intellectually acknowledge that you might not be safe.
So to recap, I think that “you should never feel safe” is an incorrect argument for several reasons:
- It implies at least a moderate probability of an immediate threat
- It assumes that the kinds of threats the person is facing are ones that are effectively dealt with by keeping the mind and body in a constant state of low-grade anxiety
- It assumes that it’s possible for those anxiety responses to achieve something useful even when every possible state you can be in is classified as unsafe, and their very purpose is to get you somewhere safe
If you are somewhere where there is an actual tangible threat against you - then yes, feel unsafe! If you are approached by someone you know to be violent or abusive, or if you are out on a walk and you think that you might actually be stalked - then yes, feeling unsafe may very well be the right response.
But if those criteria aren’t met, you are probably better off feeling safe, and harnessing the resources that that state grants you.
[1] Malcolm Ocean describes a form of this experience:
I recount how in 2019, I heard a podcast where Esther Perel says to a client about his partner who has PTSD flashbacks, “you can tell him ‘you’re safe now’” and I found myself thinking “that’s not okay. I can’t feel that I’m safe in this moment. AI could eat the world, and I’m not doing enough about it. I can’t feel safe until we’ve figured it out.”
28 comments
Comments sorted by top scores.
comment by Razied · 2023-05-09T11:03:46.985Z · LW(p) · GW(p)
There is an idea that I’ve sometimes heard around rationalist and EA circles, that goes something like “you shouldn’t ever feel safe, because nobody is actually ever safe”.
Wait, really?! If this is true then I had severely overestimated the sanity minimum of rationalists. The objections in your post are all true, of course, but they should also pop out in a sane person's mind within like 15 seconds of actually hearing that statement...
↑ comment by Kaj_Sotala · 2023-05-09T13:52:23.091Z · LW(p) · GW(p)
It's the kind of thought that one might have if they have a (possibly low-grade) anxiety issue: you feel anxious and like the world isn't safe and you need to be alert all the time, so then your mind takes that observation as an axiom and generates intellectual reasoning to justify it. And I think there's a subset of rationalists who were driven to rationality because they were anxious; Eliezer even has an old post [LW · GW] suggesting that in order to be really dedicated to rationality, you need to have undergone trauma that broke your basic trust in people:
Of the people I know who are reaching upward as rationalists, who volunteer information about their childhoods, there is a surprising tendency to hear things like: "My family joined a cult and I had to break out," or "One of my parents was clinically insane and I had to learn to filter out reality from their madness."
My own experience with growing up in an Orthodox Jewish family seems tame by comparison... but it accomplished the same outcome: It broke my core emotional trust in the sanity of the people around me.
Until this core emotional trust is broken, you don't start growing as a rationalist. I have trouble putting into words why this is so. Maybe any unusual skills you acquire—anything that makes you unusually rational—requires you to zig when other people zag. Maybe that's just too scary, if the world still seems like a sane place unto you.
Or maybe you don't bother putting in the hard work to be extra bonus sane, if normality doesn't scare the hell out of you.
In retrospect, it's not too surprising that people might develop anxious and maladaptive thought patterns if "normality scares the hell out of" them.
↑ comment by havequick · 2023-05-09T23:50:21.542Z · LW(p) · GW(p)
Re "they should also pop out in a sane person's mind within like 15 seconds of actually hearing that statement" I agree with that in the abstract; few people will say that a state of high physiological alertness/vigilance is Actually A Good Idea to cultivate for threats/risks not usefully countered by the effects of high physiological alertness.
Being able to reason about that in the abstract doesn't necessarily transfer to actually stopping doing that. Like personally, I feel like being told something along the line of "you're working yourself up into a counterproductive state of high physiological alertness about the risks of [risk] and counterproductively countering that with incredibly abstract thought disconnected from useful action" is not something I am very good at hearing from most people when I am in that sort of extraordinarily afraid state. It can really feel like someone wants to manipulate me into thinking that [risk] is not a big deal, or discourage me from doing anything about [risk], or that they're seeking to make me more vulnerable to [risk]. These days this is rarely the case; but the heuristic still sticks around. Maybe I should find its commanding officer so it can be told by someone it trusts that it's okay to stand down...
With the military analogy; it's like you'd been asked to keep an eye out for a potential threat, and your commanding officer tells you on the radio to get on REDCON 1. Later on you hear an unfamiliar voice on the radio which doesn't authenticate itself, and it keeps telling you that your heightened alertness is actually counterproductive and that you should stand down.
Would you stand down? No, you'd be incredibly suspicious! Interfering with the enemy's communications is fair game in war. Are there situations where you would indeed obey the order from the unfamiliar voice? Perhaps! Maybe your commanding officer's vehicle got destroyed, or more prosaically, maybe his radio died. But it would have to be a situation where you're confident the voice represents legitimate military authority. That would be a high bar to clear, since if you do stand down and it was an enemy ruse, you're in a very bad situation regardless of whether you get captured by the enemy or court-martialed for disobeying orders. If it seems like standing down makes zero tactical/strategic sense, your threshold would be even higher! In the extreme, nothing short of your commanding officer showing up in person would be enough.
All of this is totally consistent with the quoted section in OP that mentions "Goals and motivational weightings change", "Information-gathering programs are redirected", "Conceptual frames shift", etc. The high physiological alertness program has to be a bit sticky, otherwise a predator stalking you could turn it off by sitting down and you'd be like "oh, I guess I'm not in danger anymore". If you've been successfully tricked by a predator into thinking that it broke off the hunt when it really was finding a better position to attack you from, the program's gonna be a bit stickier, since its job is to keep you from becoming food.
To get away from the analogies, I really appreciate this piece and how it was written. I specifically appreciate it because it doesn't feel like it is an attempt to make me more vulnerable to something bad. Also I think it might have helped me get a bit of a felt sense shift [LW · GW].
↑ comment by Kaj_Sotala · 2023-05-10T18:50:36.471Z · LW(p) · GW(p)
To get away from the analogies, I really appreciate this piece and how it was written. I specifically appreciate it because it doesn't feel like it is an attempt to make me more vulnerable to something bad. Also I think it might have helped me get a bit of a felt sense shift [LW · GW].
Thank you for sharing that, I'm happy to hear it. :)
↑ comment by Noosphere89 (sharmake-farah) · 2023-06-16T19:19:26.656Z · LW(p) · GW(p)
I want to mention here that the war example is an adversarial scenario, and applying an adversarial frame is usually not the correct thing to do. Importantly, since the most perverse adversarial scenarios usually can't be dealt with anyway (without exotic physics, for computational complexity reasons), you usually shouldn't focus on them. Here Kaj Sotala is very, very correct in this post.
↑ comment by MondSemmel · 2023-05-10T09:51:48.027Z · LW(p) · GW(p)
This logic can be taken too far - I don't see the point of feeling constantly anxious - but at least on an intellectual level, I think it does make a certain amount of sense. It's hard to notice the insanity or inadequacy of the world until it affects you personally. Some examples of this:
- People buy insurance to be safe from <disaster>, but insurance companies often don't want to pay out. So when you buy insurance, you might incorrectly feel safe, but only notice that you weren't if a disaster actually happens.
- If you've never been ill, then it's easy to believe that if you got ill, you could just go to the doctor and be healed. Sometimes things do work that way. At other times, you might learn that reality is more complicated, and civilization less competent, than previously thought.
- I think the Covid pandemic, and the (worldwide!) inadequate policy response, should've been at least a bit traumatizing to every person on this planet. Not necessarily on an emotional level, but certainly on an intellectual level. There's a kind of trust one can only have in institutions one knows ~nothing about (related: Gell-Mann amnesia), and the pandemic is the kind of event that should've deservedly broken this kind of trust.
↑ comment by Kaj_Sotala · 2023-05-10T13:25:46.361Z · LW(p) · GW(p)
Agree. (I'm not saying that losing one's trust in civilizational adequacy is necessarily a bad thing on net, just that it can also lead to some maladaptive thought patterns.)
comment by teradimich · 2023-05-12T18:55:56.255Z · LW(p) · GW(p)
I do expect some of the potential readers of this post to live in a very unsafe environment - e.g. parts of current-day Ukraine, or if they live together with someone abusive - where they are actually in constant danger.
I live ~14 kilometers from the front line, in Donetsk. Yeah, it's pretty... stressful.
But I think I'm much more likely to be killed by an unaligned superintelligence than an artillery barrage.
Most people survive urban battles, so I have a good chance.
And in fact, many people worry even less than I do! People get tired of feeling in danger all the time.
comment by nim · 2023-05-09T17:12:07.581Z · LW(p) · GW(p)
You shouldn’t ever feel safe, because something bad could happen at any time. To think otherwise is an error of rationality.
I'm curious, do you hear this as often from those with the emotional literacy to usefully differentiate "think" or "assume" from "feel"?
Usually there's little harm done from failing to clearly differentiate assumptions from feelings, but this is an interesting edge case where the framing "you should never assume you're totally safe" seems obviously useful and correct, but it's easy to conflate with the obviously unhelpful and incorrect "you should never feel safe".
↑ comment by Kaj_Sotala · 2023-05-09T18:07:57.929Z · LW(p) · GW(p)
Good question, I think often there's been a failure to differentiate going on. Though it's been quite a while since I spoke to some of the people I was thinking of, so my recollection of them might be misleading (and others I've only heard about through second-hand accounts).
comment by dr_s · 2023-05-10T16:12:14.728Z · LW(p) · GW(p)
Agreed. Honestly this feels like one of those Bell curve memes, where most people would know perfectly well at a gut level what "safe" means, then rationalists tried being disruptive and provocative by suggesting a seeming deviation from common sense ("you are actually always unsafe!"), and then we get the explanation in rationalist terms of precisely what other people instinctively do when deciding whether to feel safe or unsafe.
Which isn't necessarily a bad thing: examining your unconscious assumptions and elevating them to the conscious level is good!
Nor do I necessarily agree with the average risk level that people seem to consider as safe. I was particularly frustrated by how COVID was declared solved essentially not by lowering the risk past what the most obvious measure (the vaccine) could do, but by raising the risk tolerance of everyone via shaming and peer pressure (roughly speaking, calling everyone who didn't go along with it a boring party pooper). But disagreement on the specific level of course doesn't change the fact that there has to be a level. I can't be at all times as aware and on high readiness as I would be if I was being stalked by a psycho axe murderer, or my worst enemy would neither be COVID nor the axe murderer: it would be the inevitable aneurysm or heart attack I'd get out of sheer stress.
comment by CronoDAS · 2023-05-09T17:24:00.363Z · LW(p) · GW(p)
Is "driving a car (especially in bad road conditions)" a situation in which some degree of feeling unsafe is useful?
↑ comment by Celarix · 2023-05-10T13:36:17.424Z · LW(p) · GW(p)
I'd say kind of... you definitely have to keep your attention and wits about you on the road, but if you're relying on anxiety and unease to help you drive, you're probably actually doing a bit worse than optimal safety - too quick to assume that something bad will happen, likely to overcorrect and possibly cause a crash.
↑ comment by jimmy · 2023-05-10T21:50:00.744Z · LW(p) · GW(p)
Adding onto this, an important difference between "anxiety" and "heightened attentiveness" is that anxiety has a lot to do with not knowing what to do. If you have a lot of experience driving cars and losing traction, and life or death scenarios, then when it happens you know what to do and just focus on doing it. If you're full of anxiety, it's likely that you don't actually have any good responses ready if the tires do lose traction, and beyond not having a good response to enact you can't even focus on performing the best response you do have because your attention is also being tugged towards "I don't have a good way to respond and this is a problem!".
↑ comment by Kaj_Sotala · 2023-05-09T17:33:00.716Z · LW(p) · GW(p)
I don't have a driving license so this isn't a situation I'd have personal experience with, but I imagine that it would be useful to have some degree of unsafeness to focus your attention more strongly on the driving.
comment by tcheasdfjkl · 2023-05-09T15:35:08.460Z · LW(p) · GW(p)
I think there's also a constructive kind of "not feeling totally safe" where you know that the future is unknown and you could lose the things you have and it is worth both putting in some effort to make that less likely and to cherish and enjoy what you have now. But yeah, it shouldn't be a high-alert state, and I'm not really sure how to better describe the thing that it is instead.
comment by Dagon · 2023-05-09T14:59:57.764Z · LW(p) · GW(p)
There's some good advice in there, but I don't much like the framing about how one should feel, as opposed to how one should think about risks.
The truth is, nobody's actually perfectly safe, ever. The likely outcome for every current individual is eventual death. One can have a reasonable belief that it's a long way off, and that there's even some chance for it to be a VERY long way off. And there are much shorter-term risks as well, some of which can be mitigated, and some can't. How one feels about that is less important than how one integrates it into their framework for action.
Thinking about and internalizing the threat imminence continuum idea is good. It probably does lead to better emotional stability - not "feeling safe" but "accepting and mitigating risks" - though that's not directly based on feelings; it's upstream of them.
↑ comment by xiann · 2023-05-09T17:15:06.473Z · LW(p) · GW(p)
Feeling unsafe is probably not a free action though; as far as we can tell cortisol has a deleterious effect on both physical health & mental ability over time, and it becomes more pronounced w/ continuous exposure. So the cost of feeling unsafe all the time, particularly if one feels less safe/more readiness than the situation warrants, is to hurt your prospects in situations where the threat doesn't come to pass (the majority outcome).
The most extreme examples of this are preppers; if society collapses they do well for themselves, but in most worlds they simply have an expensive, presumably unfun hobby and inordinate amounts of stress about an event that doesn't come to pass.
Replies from: CronoDAS
↑ comment by CronoDAS · 2023-05-09T18:06:21.030Z · LW(p) · GW(p)
Yeah, things close to full-blown doomsday don't happen very often. The most common is probably literal war (as in Ukraine and Syria), and the best response to that on an individual level is usually "get the hell away from where the fighting is." Many of the worst natural disasters are also best handled by simply evacuating. If you don't have to evacuate, or didn't have time to, and you don't die in the immediate aftermath, your worst problems might be the local utilities shutting down for a while and needing to find alternative sources of water and heat until they're fixed.
The potential natural disasters for which I think doomsday-level prepping might actually make a difference are volcanoes and geomagnetic storms, because they could cause problems on a continent-wide or global scale and "go somewhere unaffected" or "endure the short-term disruptions until things go back to normal" might not work. Volcanoes can block the sun and cripple global agriculture, and a giant electromagnetic pulse could cause enough damage to both the power grid and to natural gas pipelines that it could take years to rebuild them. (Impacts from space might also be on the list, depending on the severity.)
↑ comment by Kaj_Sotala · 2023-05-09T17:40:45.617Z · LW(p) · GW(p)
Is your model that our thoughts come first, and feelings second?
I think that there are cases where that's true, but that generally our emotional state exerts a strong influence on what kinds of thoughts we're capable of having. So feeling safe (or at least not feeling unsafe) may be a prerequisite for being able to think clearly about risks.
(Though this gets complicated because there are influences going in both directions - if I thought that intellectual ideas had zero influence on feelings, it would have been pointless for me to write this post.)
Replies from: Dagon
↑ comment by Dagon · 2023-05-09T18:28:42.663Z · LW(p) · GW(p)
Is your model that our thoughts come first, and feelings second?
Not exactly - there's more feedback loop than that. I fully agree with "this gets complicated".
I would say that intentional changes to mind-state tend to be thoughts-first. I don't know if that's tautological from the nature of "intentional", but it does seem common enough to make it the best starting point for most people.
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2023-05-10T18:27:53.899Z · LW(p) · GW(p)
Right, that makes sense.
And to clarify, as I tried to say in the introduction, the post is mostly intended to counter the thought that "I shouldn't feel safe". So if someone is having thoughts that it's wrong to feel safe and they should stop doing so, then the intent of the post isn't to say "here's how you should feel". Rather, it's just to say "if you do feel safe, I don't think you need to take a metaphorical hammer and hit yourself with it until you feel unsafe (nor do you need to believe people who say that you should); here's why I think you can stop doing that".
So if you're saying that one should focus on how one thinks about risks, and I'm offering one way to think about them, then I think we agree?
Replies from: Dagon
comment by Aorou (Adnll) · 2023-05-10T14:44:32.974Z · LW(p) · GW(p)
Downvoted because there is no « disagree » button.
I strongly disagree with the framing that one could control their emotions (both from the EA quote and from OP). I’m also surprised that most comments don’t go against the post in that regard.
To be specific, I’m pointing to language like « should feel », « rational to feel » etc.
Replies from: Kaj_Sotala, dr_s
↑ comment by Kaj_Sotala · 2023-05-10T18:44:27.643Z · LW(p) · GW(p)
As the other comment pointed out, I'm not assuming that one could control their emotions - I actually lean towards thinking that attempts to control one's emotions are often harmful, though of course there's also a place for healthy emotion regulation.
To be specific, I’m pointing to language like « should feel », « rational to feel » etc.
This clarification [LW(p) · GW(p)] seems relevant.
Also in general, I don't think that considering some feelings more rational than others requires an ability to control one's feelings. A feeling can be instrumentally rational if it helps bring about the kinds of outcomes the person cares about, and epistemically rational if the implicit beliefs it's based on are correct ones. That can be true regardless of how much control we have over our feelings.
Of course, if we had absolutely no influence over our feelings, then this might be pointless to talk about. But people can certainly do things that affect their feelings, from listening to music that puts them in a certain mood to (more relevantly for this post) telling themselves that they are wrong to feel safe. Also, even where directly controlling feelings is impossible, it's possible to bring to light the beliefs underlying the feelings and to update any incorrect beliefs that some of the feelings might be based on (I discussed that in this post [LW · GW] among others).
Replies from: Adnll
↑ comment by Aorou (Adnll) · 2023-05-11T21:17:31.820Z · LW(p) · GW(p)
Thanks for pointing to your clarification. I find it a lot clearer than the OP.
↑ comment by dr_s · 2023-05-10T16:16:13.889Z · LW(p) · GW(p)
The post says that explicitly:
Note that I only intend to dispute the intellectual argument that these are making. It's possible to accept on an intellectual level that it would make sense to feel safe most of the time, but still not feel safe.
I think it's absolutely sensible to believe there are emotions that we shouldn't feel - as in, emotions we get no benefit from feeling and don't want to feel. I don't want to feel sudden homicidal anger, or have suicidal thoughts, or be afraid of leaving my own room. All of those are possible feelings that I definitely believe I should not feel, and will do my best to remove if I ever have them! Of course that's not easy, but the notion that all feelings are equally good by virtue of simply being feelings, and thus no "should" applies to them, is ridiculous.
Replies from: jimmy
↑ comment by jimmy · 2023-05-10T21:46:21.559Z · LW(p) · GW(p)
It's not that a feeling is "necessarily good and something you should act on" just because it's what you feel; it's that it's not "necessarily bad and something you shouldn't feel" just because that's what you think. Maybe it is, and maybe it isn't - but you're always going to be fallible on both fronts, so it makes sense to check.
And that is actually how you can make sure to "not feel" this kind of inappropriate feeling, by the way. The mental move of "I don't want to feel this. I shouldn't feel this" is the very mental move that leads people to be stuck with feelings which don't make sense, since it is an avoidance of bringing them into contact with reality.
If you find yourself stuck with an "irrational" fear, and go to a therapist saying "I shouldn't feel afraid of dogs", they're likely to suggest "exposure therapy" which is basically a nice way of saying "Lol at your idea that you shouldn't feel this, how about we do the exact opposite, make you feel it more, and refrain from trying not to?". In order to do exposure therapy, you have to set aside your preconceived ideas about whether the fear is appropriate and actually find out. When the dog visibly isn't threatening you, and you're actually looking at the fact that there's nothing scary, then you tend to start feeling less afraid. That's really all there is to it, and so if you can maintain a response to fear of "Oh wow, this is scary. I wonder if it's actually dangerous?" even as you feel fear, then you never develop a divergence between your feelings and what you feel is appropriate to feel, and therefore no problem that calls for a therapist or "shoulding" at yourself.
It's easier said than done, of course, but the point is that "I shouldn't feel this" doesn't actually work either instrumentally or epistemically.