The true degree of our emotional disconnect

post by siIver · 2016-10-31T19:07:42.333Z · LW · GW · Legacy · 12 comments

If I said that human fears are irrational, because you are probably more afraid of sleeping in an abandoned house than of driving to work, I would hardly be covering new ground. I myself thought I had understood this well before finding LessWrong: some threats are programmed by evolution to be scary, so we are greatly afraid of them; some threats aren't, so we are only a little afraid of those. Simple enough.

But is that actually true? Am I, in fact, afraid of those threats? Am I actually afraid, at all, of dying while traveling, of Climate Change, nuclear war, or unfriendly AI?

The answers are no, a little bit, just barely, and nope, and the reason for that 'barely' has nothing to do with the actual scope of the problem, but rather with the ability to roughly visualize the event (accurately or not), thanks to its depiction in the media. As for Climate Change, the sole reason I am somewhat afraid is that I have been telling myself for the better part of my life that it is by far humanity's biggest problem.

In truth, the scope of a problem doesn't seem to have a small impact on our sensitivities; rather, it seems to have none. And this is a symptom of a far more fundamental problem. The inspiration for writing this came when I pondered the causes of Signaling. Kaj_Sotala opens his article The Curse of Identity with the following quote:

So what you probably mean is, "I intend to do school to improve my chances on the market". But this statement is still false, unless it is also true that "I intend to improve my chances on the market". Do you, in actual fact, intend to improve your chances on the market?

I expect not. Rather, I expect that your motivation is to appear to be the sort of person who you think you would be if you were ambitiously attempting to improve your chances on the market... which is not really motivating enough to actually DO the work.

The reason for doing this, I realized, is not that the motivation of Signaling – to appear to be the sort of person who does certain things – is stronger than I had thought, but that the motivation to actually do the thing it is based on is virtually non-existent outside the cognitive level. If I visualize a goal I have right now, I don't seem to feel any emotional drive to work on it. At all. It is really a bit scary.

The common approach to dealing with Signaling seems to be either to overrule emotional instincts with cognitive choices, or to compromise, finding ways to reward status-seeking instincts with actions that also advance the respective cognitive goal. But, if it is true that we are starting from zero, why not instead try to create emotional attachment, as I did with Climate Change?

I will briefly raise the question of whether being more afraid of significant threats is actually a good thing. I have heard the argument that it is bad, since fear causes irrationality and hasty decision-making; I'd assess that to be true in a very limited context, but not when applied to life decisions made with sufficient time. As with every problem of map and territory, I think it would be nice if the degree to which one is afraid had some correlation with reality, which often enough isn't the case. A higher amount of rational fear may also cause a decrease in irrational fear. Maybe. I don't know. If you have no interest in raising your fear of rational threats, I'd advise skipping the final paragraph.

Take a moment to try to visualize what will happen in the case of unfriendly AI – or another X-risk of your choice. Do it in a concrete way. Think through the steps that might occur, that would result in your death. Would you have time to notice it? Would there be panic? An uprising? Chaos? You may be noticing now how hard it is to be afraid, even if you are trying, and even if the threat is so real. Or maybe you succeeded. Maybe it can be a source of motivation for you. Because the other way doesn't work: attempting to connect a goal's end to an emotional reward fails due to the goal's distance. You want to achieve the goal, not the first step that would lead you there. But fear doesn't have this problem. Fear will motivate you immediately, without caring that the road is long.

12 comments

comment by Gunnar_Zarncke · 2016-10-31T21:29:47.052Z · LW(p) · GW(p)

Maybe indeed we shouldn't try to instill fear. Especially not the immediate fear that leads to panic. But instead imagine Something to Protect. Imagine protecting family, friends, all humans from UFAI or atomic war. It is not you yourself who is in danger, but all your loved ones. Does that make a difference? What other emotions could contribute in a comparable way?

comment by Lumifer · 2016-11-01T14:38:45.834Z · LW(p) · GW(p)

why not instead try to create emotional attachment, as I did with Climate Change?

Off the top of my head I see a problem: you are locking yourself into a particular position.

Basically, emotional attachments are more permanent than evaluations of evidence. If you change your mind about the danger of, say, global warming, your emotional attachment will not automatically go away. In fact, that attachment will be a considerable obstacle in the way of adjusting your beliefs.

And in general I would be cautious about deliberately instilling a phobia in oneself.

comment by Gyrodiot · 2016-10-31T20:31:51.470Z · LW(p) · GW(p)

I argue that the reason we don't have an instant fear of distant scenarios is exactly why we shouldn't rely on fear to deal with them.

Our fear is indeed disconnected from distant threats. The function of fear is to react to immediate threats, and we can conjure the reaction of fear if we vividly picture a situation. However, the reaction induced by fear may not be what you want. Say you use a virtual reality headset to show someone the consequences of climate change, or X-risk. If the simulation is vivid enough, the subject will think "I must act on this threat now as if my life depended on it", but that would be trading inaction for panic.

Yes, your life depends on the resolution of these distant problems, but we shouldn't fear them as if they were happening right now. Attempting to connect a distant threat to an immediate emotion fails, of course, which means we must devise other means to deal with the threat.

Conversely, saying a threat isn't important because it doesn't feel threatening is a dangerous fallacy, I agree. The next step is to make distant threats credible by means other than emotion, because emotions don't trigger the kind of reaction we need to manage said threats.

Replies from: siIver
comment by siIver · 2016-11-01T06:17:04.132Z · LW(p) · GW(p)

However, the reaction induced by fear may not be what you want. Say you use a virtual reality headset to show someone the consequences of climate change, or X-risk. If the simulation is vivid enough, the subject will think "I must act on this threat now as if my life depended on it", but that would be trading inaction for panic.

This is exactly where I disagree. Sure, you might panic for a few minutes. And then you're going to calm down, no matter how big the risk actually is. TheClimateMobilization writes extensively about how the fear during WW2 ramped up productivity in the United States, and how it triggered what they call the emergency mode, which is different from panic. I don't see any reason why the same effect wouldn't take place on a smaller scale.

Replies from: Gyrodiot
comment by Gyrodiot · 2016-11-01T09:35:02.073Z · LW(p) · GW(p)

WW2 is an excellent example of a situation where large numbers of people must be made aware of a single threat. Fear and panic make the threat move up your priority list. In emergency mode, you can still vividly remember what caused the panic.

It works well for one threat, but it doesn't help when you have multiple distant threats. How do you manage mobilization against climate change and X-risk, through fear, at the same time?

Should we make separate groups of people care about separate concerns, each fearing a single distant threat? That would be effective, with each person having one priority they are very concerned about (one caveat: they would probably see the other groups as blissfully ignorant).

Replies from: siIver
comment by siIver · 2016-11-01T13:40:09.909Z · LW(p) · GW(p)

It works well for one threat, but it doesn't help when you have multiple distant threats

I don't understand why you think this is true. Fear pushes you to take action. Why should it matter whether it is a single threat or multiple threats?

Replies from: Gyrodiot, Pimgd
comment by Gyrodiot · 2016-11-02T20:20:17.211Z · LW(p) · GW(p)

I have several (cached) assumptions behind this. I need to do some extra research on the matter to be confident about them.

First, the brain is very bad at estimating actual danger based on fear. If you picture death from a 1-in-a-million-years event, the reaction will be the same as for a 1-in-a-century event. When you calm down, you may be able to sort out which event is more threatening, but you will have a better time figuring out calmly which problem is the most significant and only then triggering a fear reaction about it.

Second, fear about threat A can warp perception of threat B even if the two aren't related, because immediate action is needed towards solving A, and whatever threat B is cannot be as important at the moment. I exaggerate a bit; I have a hard time seeing how anyone could manage having to take potentially conflicting actions against several threats at once.

Multiple threats matter because you need to split your resources between them: does fear help in that case, given that it pushes you not only to take action, but to take immediate and probably miscalibrated action to squash the threat?

Replies from: siIver
comment by siIver · 2016-11-03T13:06:26.038Z · LW(p) · GW(p)

I can only speculate at this point.

It doesn't work like that for me. I find that I am actually kind of flexible in where exactly I direct the urge to do something, whether it's out of fear of guilt or something else. But that could be because I sort of put everything I do on a linear scale of usefulness and have no problem doing rough maths with just about everything.

Out of curiosity, since you say you need to do research: what sources do you think are credible on this topic? I am fairly skeptical about the existence of such sources.

comment by Pimgd · 2016-11-02T11:52:00.837Z · LW(p) · GW(p)

(Not sure, just a guess)

Does being scared take willpower? Or maybe acting on being scared takes willpower? With too many threats, I visualize the same apathy / not caring that you get from having spent too much willpower on things.

comment by truth_hunter · 2016-11-11T15:29:47.759Z · LW(p) · GW(p)

But, if it is true that we are starting from zero, why not instead try to create emotional attachment, as I did with Climate Change?

As you suggest, it seems to be much harder than just telling ourselves that, for example, our behaviour will lead to a problem in the future. Maybe if we could connect it with a fear or an imminent threat, but that is hard most of the time because we are in a very cozy and privileged position.

If you could emotionally attach yourself to an idea or threat of your choosing, why wouldn't most of us just create an emotional attachment to a more rational lifestyle and to becoming more of a critical thinker? There should be enough fears we could use in our favor that would help us realize how threatening it is if we don't go this route soon.

Replies from: siIver
comment by siIver · 2016-11-11T16:27:22.710Z · LW(p) · GW(p)

I think this is a point largely separate from the one I was making, though reading your post I realize that I haven't quite thought this through. Nonetheless, I definitely think that I am already attached to something which I label a "rational lifestyle," and I suspect most of us are. The issue is probably more the part about doing stuff. You can be a rational/critical thinker and correctly identify a lot of problems, but not bother to do anything about them. In fact, this is a pattern that I see frustratingly often.

comment by ingive · 2016-11-01T17:39:40.930Z · LW(p) · GW(p)

Have you tried the following inquiry: https://logicnation.org/wiki/A_simple_click ?

It seems similar, addressing some of the questions you bring up here, but I am not 100% certain.