The deeper solution to the mystery of moralism: believing in morality and free will is hazardous to your mental health

post by metaphysicist · 2012-10-14T13:21:36.086Z · LW · GW · Legacy · 20 comments

[Crossposted.]

The complex relationship between Systems 1 and 2 and construal level

The distinction between pre-attentive and focal-attentive mental processes has dominated cognitive psychology for some 35 years. In the past decade, another cognitive dichotomy has arisen, this one specific to social psychology: abstract construal (far cognition) versus concrete construal (near cognition). This essay theorizes about the relationship between these dichotomies to clarify further how believing in the existence of free will and in the objective existence of morality can thwart reason by causing you to choose what you don’t want.

The state of the art on pre-attentive and focal-attentive processes is Daniel Kahneman’s book Thinking, Fast and Slow, where he calls pre-attentive processes System 1 and focal-attentive processes System 2. The reification of processes into fictional systems resembles Freud’s System Cs (Conscious) and System Pcs (Preconscious). I’ll adopt the language of System 1 and System 2, but readers can substitute their preferred dichotomy: conscious–preconscious, pre-attentive–focal-attentive, or automatic–controlled processes. They all name the same distinction, in which System 1 consists of processes occurring quickly and effortlessly, in parallel, outside awareness, while System 2 consists of processes occurring slowly and effortfully, in sequence, within awareness, where “awareness” refers in this context to the contents of working memory rather than to raw experience.

To integrate Systems 1 and 2 with construal-level theory, note that System 2, the conscious part of our minds, can perform any of three routines when deciding whether to take some action. Consider whether to vote in an election, a good example not only for its timeliness but also for its linkage to our main concern with morality: voting is a clear case of an action without tangible benefit. The potential voter might:

Case 1. Make a conscious decision to vote based on applying the principle that citizens owe a duty to vote in elections.
Case 2. Decide to be open to the candidates’ substantive positions and vote only if either candidate seems worthy of support.
Case 3. Experience a change of mind between Cases 1 and 2.

The preceding were examples of the three routines System 2 can perform:

Case 1. Make the choice.
Case 2. “Program” System 1 to make the choice based on automatic criteria that don’t require sequential thinking.
Case 3. Interrupt System 1 in the face of anomalies.

When System 2 initiates action, whether it retains the power to decide or passes it to System 1 is the difference between concrete and abstract construal. The second routine is key to understanding how Systems 1 and 2 work together to produce the effects construal-level theory predicts. Keep in mind that the unconscious, automatic System 1 includes not just hardwired patterns but also skilled habits. Meanwhile, System 2 is notoriously “lazy,” unwilling to interrupt System 1 as in Case 3. Letting System 1 have its way produces the perennial biases that plague it, yet the highest levels of expertise also reside in System 1.

A delegated System 1 operates with the potentially complex holistic patterns that typify far cognition. This pattern counts as far because we offload distant matters to System 1 but exercise sequential control under System 2 as immediacy looms, although there are many exceptions. It is critical to distinguish far cognition from the lazy failure of System 2 to perform properly in Case 3; such failure isn’t specific to either mode. Far cognition, System 1 acting as delegate for System 2, is a narrower concept than automatic cognition, but all far cognition is automatic cognition. Near cognition admits no such easy cross-classification.

Belief in free will and moral realism undermines our “fast and frugal heuristics”

The two most important recent books on the cognitive psychology of judgment and decision are Thinking, Fast and Slow by Daniel Kahneman and Gut Feelings: The Intelligence of the Unconscious by Gerd Gigerenzer, and both insist on the contrast between their positions, although the conflicts aren’t obvious. Kahneman explains System 1 biases as mechanisms deployed outside their range of evolutionary usefulness; Gigerenzer describes “fast and frugal heuristics” that sometimes misfire to produce biases. Where these half-empty and half-full positions on heuristics and biases really differ is in their overall appraisal of near and far processes: Gigerenzer is a far thinker and Kahneman a near thinker, and each is naturally biased toward his preferred mode. Far thought shows more confidence in fast-and-frugal heuristics, since it offloads to System 1, whose province is to employ them.

The fast-and-frugal-heuristics way of thinking is particularly useful for understanding the effect of moral realism and free will: they cause System 2 to supplant System 1 in decision-making. When we apply principles of integrity to regulate our conduct, we sometimes do better in far mode, where System 2 offloads the task of determining compliance to System 1. By contrast, if you hold a principle of integrity that includes an absolute obligation to vote, you act as in Case 1: on a conscious decision. But principles of integrity do not really take this absolute form; that is an illusion begotten by moral realism. A principle of integrity flexible enough for actual use might favor voting (based, say, on a general obligation to perform civic duties) but disfavor it as “lowering the bar” when the only choice is between the lesser of evils. Applying such principles objectively depends on an honest appraisal of the strength of your commitment to each virtue. System 2 is incapable of this feat; when it can be accomplished, it is due to System 1’s automatic skills, operating unconsciously. Principles of integrity are applied more accurately in far mode than in near mode. [Hat tip to Overcoming Bias for these convenient phrases.]

But belief in moral realism and free will impels moral actors to apply their principles in near mode. Objective morality and moral realism imply that compliance with morality results from freely willed acts. I’m not going to defend this premise thoroughly here, but this thought experiment might carry some persuasive weight. Read the following in near mode, and introspect your emotions:

Sexual predator Jerry Sandusky will serve his time in a minimum-security prison, where he’s allowed groups of visitors five days a week.

Some readers will experience a sense of outrage. Then remind yourself: There’s no free will. If you believe the reminder, your outrage will subside; if you’ve long been a convinced and consistent determinist, you might not need to remind yourself. Morality inculpates based on acts of free will: morality and free will are inseparable.

A point I must emphasize because of its novelty: it’s System 1 that ordinarily determines what you want. System 2 doesn’t ordinarily deliberate about the subject directly; it deliberates about relevant facts, but in the end, you can only intuit your volition. You can’t deduce it.

What belief in moral realism and free will does is nothing less than change the architecture of decision-making. When we practice and internalize principles of integrity, they and nonmoral considerations co-determine our System 1 judgments. According to moral realism and free will, by contrast, moral good is the product of conscious free choice, so System 2 contrasts its moral opinion with System 1’s intuition and compensates for the difference, usually overcompensating. The voter had to weigh the imperative of the duty to vote against the duty to avoid “lowering the bar” when both candidates are ideologically and programmatically distasteful. System 2 can prime and program System 1 by studying the issues, but the multifaceted decision itself is best made by System 1. What happens when System 2 tries to decide these propositions? System 2 makes the qualitative judgment that System 1 is biased one way or the other and corrects it. This implicates the overcompensation bias, in which conscious attempts to counteract biases usually overcorrect. A voter who thinks correction is needed for a bias toward shirking duty will vote when not really wanting to, all things considered; a voter who thinks correction is needed for a bias toward “lowering the bar” will be excessively purist. Whatever standard the voter uses will be taken too far.
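The overcompensation dynamic described above can be put as a toy numerical sketch. Everything here is a hypothetical illustration, not anything from the post: System 1 emits an intuition contaminated by an unconscious bias, and System 2’s conscious correction applies a gain greater than one, overshooting the voter’s true all-things-considered preference.

```python
# Toy sketch (all functions and numbers hypothetical) of the
# overcompensation claim: System 2 estimates System 1's bias and
# subtracts an overcorrected version of it, overshooting the target.

def system1_judgment(true_value: float, bias: float) -> float:
    """System 1's fast intuition: the true all-things-considered
    preference plus an unconscious bias."""
    return true_value + bias

def system2_correction(intuition: float, estimated_bias: float,
                       gain: float = 1.5) -> float:
    """System 2 consciously subtracts its estimate of the bias.
    A gain > 1 models overcompensation: the correction overshoots."""
    return intuition - gain * estimated_bias

# A voter whose true inclination is mildly against voting (-0.2),
# with a System 1 bias toward shirking duty (-0.5).
true_value, bias = -0.2, -0.5
intuition = system1_judgment(true_value, bias)   # -0.2 + -0.5 = -0.7
corrected = system2_correction(intuition, bias)  # -0.7 + 0.75 = 0.05

print(intuition < 0)   # System 1 alone says: don't vote
print(corrected > 0)   # overcorrected System 2 says: vote anyway
```

With a gain of exactly 1.0 the correction would land back on the true preference (-0.2); the point of the sketch is only that any systematic gain above 1.0 makes the voter vote "when not really wanting to, all things considered."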

Belief in moral realism and free will biases practical reasoning

This essay presents the third of three ways that belief in objective morality and free will can cause people to do what they don’t want to do:

  1. It retards people in adaptively changing their principles of integrity.
  2. It prevents people from questioning their so-called foundations.
  3. It systematically exaggerates the compellingness of moral claims.

Some will be tempted to think the third claim is either contrary to experience or socially desirable. It is neither. In moralism, an exaggerated subjective sense of duty and an excessive sense of guilt coexist with unresponsiveness to morality’s practical demands.

20 comments

Comments sorted by top scores.

comment by MixedNuts · 2012-10-14T11:47:42.589Z · LW(p) · GW(p)

Summary:

  • System 1 follows non-moral heuristics, System 2 deliberates on and follows morality.

  • Far mode uses System 2, Near mode uses System 1 unless something unusual happens and System 2 takes over.

  • People who believe in moral realism want System 2 to be in charge all the time.

  • This is unpleasant, because

    • It takes effort to switch System 2 on.
    • System 2 tries to compensate for System 1's non-moral behavior, but can never fully succeed. The belief that it always succeeds is called "free will" in this article.
    • Moral principles are not optimized for convenience.
  • Nothing unpleasant is true, therefore moral realism is false.

Read the following in near mode, and introspect your emotions:

I don't have perfect control of which mode I'm working in. You're the writer; it's your job to write the sentence in a way that makes your reader feel "This heinous rapist bastard is getting away with barely a slap on the wrist, and laughing at how he conned us into thinking that was fair" rather than "A member of the reference class of imprisoned criminals is being treated as humanely as is compatible with minimizing danger to society".

Sexual predator Jerry Sandusky will serve his time in a minimum-security prison, where he’s allowed groups of visitors five days a week.

Some readers will experience a sense of outrage. Then remind yourself: There’s no free will.

What? No! I'm okay with Sandusky receiving visitors because prisons are for time-out, not for revenge. (If asked to justify that, I'll switch to Far mode and talk about deterrence.)

I do not believe that Sandusky, or any human short of heavy mind alteration, was inevitably led to rape the way water is led downhill ("no free will" in the traditional sense). A kid begging you not to hurt him is a typical thing that should trigger the "horror and remorse" script, or System 1 confusion and switch to Far mode, and that's only in the case you never noticed "Huh, Near-mode-me wants to rape kids, I better never give it occasion to then".

Neither do I believe that Sandusky suffered a failure of willpower ("no free will" in the "finite willpower" sense this article uses). That's certainly a thing that happens to humans. But Sandusky didn't turn himself in or run away, he kept raping his victims. His choice was as real as choices get, and he chose to rape.

In moralism, an exaggerated subjective sense of duty and excessive sense of guilt co-exist with unresponsiveness to morality’s practical demands.

That's partially true, but behavior is not perfectly unresponsive. Most people have wanted to do something, thought "It would be wrong", and refrained. System 1 can be trained to mirror System 2; people can read abstract arguments about the death penalty, be convinced, and start feeling revulsion at the death penalty. (I suppose it works the other way too; if you're attracted to kids, you pretty much have to switch your philosophical position from "People who are attracted to kids are horrible monsters" to "People who are attracted to kids need lots of support to help fight their urges".)

Well, that was System 2 speaking. System 1 says:

Your philosophical sophistry would have us coddle rapists! What any douchebag feels like doing is law, and morality is for chumps. And you call yourself good?

comment by magfrump · 2012-10-13T20:48:41.927Z · LW(p) · GW(p)

I have not read this article, because I find it to be visually hideous. Just scrolling past it made me notice a huge amount of difference between it and most articles posted on Less Wrong; a tinted background, changed font colors, an extremely long title, and it doesn't seem to have any references in it.

While I remain open to the possibility that this is a great post which is worth cross-posting, I would ask that when you cross-post something to Less Wrong, you format it in a style standard for Less Wrong.

Replies from: BerryPick6
comment by BerryPick6 · 2012-10-13T21:44:24.632Z · LW(p) · GW(p)

Agreed. Also, adding a summary to the end would be something that I, personally, would find helpful.

comment by Vladimir_Nesov · 2012-10-14T13:21:47.123Z · LW(p) · GW(p)

Moved to Discussion, removed non-standard background color and paragraph style.

comment by Vladimir_Nesov · 2012-10-13T21:32:45.705Z · LW(p) · GW(p)

As usually understood on LW, "free will" is a relatively simple property of reasoning during decision-making. As such, it's not clear what the statement "there is no free will" means, and correspondingly how the belief in this statement works, if using the conventional-on-LW sense of "free will". If this is not the sense you are working under, you should introduce the sense that you do use in the post.

comment by Kindly · 2012-10-14T14:16:34.605Z · LW(p) · GW(p)

Some readers will experience a sense of outrage. Then remind yourself: There’s no free will. If you believe the reminder, your outrage will subside; if you’ve long been a convinced and consistent determinist, you might not need to remind yourself. Morality inculpates based on acts of free will: morality and free will are inseparable.

They're perfectly separable. My hypothetical outrage is based on seeing a heinous monster not get what's coming to him. If the heinous monster's actions were in principle predictable if we looked at the entire state of the universe ten years prior, then... so what?

comment by Bugmaster · 2012-10-13T21:00:14.334Z · LW(p) · GW(p)

What happened to this article ? Did a unicorn explode all over it ?

All the crazy colors are making it very hard to read.

Replies from: Cloppy
comment by Cloppy · 2012-10-14T03:23:55.894Z · LW(p) · GW(p)

That reminds me of Rainbow Splash, the Fourier-transformed alter-ego of Rainbow Dash in "Momentum Space".

comment by nshepperd · 2012-10-14T04:10:46.407Z · LW(p) · GW(p)

Having read the article, I can now confirm that it is approximately as worthless as its poor formatting and excessive wordiness would suggest. Downvoted.

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2012-10-14T04:53:35.030Z · LW(p) · GW(p)

However irritating the formatting might be (and believe me, it did irritate me), I think you still have an intellectual obligation to provide some kind of justification for making such a harsh judgment on someone else's contribution. What made the article worthless, in your estimation? (You don't need to write a detailed review; a few lines should suffice.)

Replies from: nshepperd, Oscar_Cunningham
comment by nshepperd · 2012-10-14T13:32:03.671Z · LW(p) · GW(p)

The article (as well as being poorly formatted and wordy):

  1. Appears to equate moral realism with some kind of deontology, or virtue theory, it's hard to tell, since the term is only defined through describing what bad things it causes.

  2. Seems to have been written in ignorance of the notion of compatibilist free will, as well as lacking justification for why free will would even be relevant anyway.

  3. In general seems to be a long string of arguments with little justification of their validity, and

  4. Mainly, appears to have been written for the purpose of proving wrong the author's political opponent "moralism", and as such to have written the bottom line first.

I could be more specific, but I don't really think it would be worthwhile.

Replies from: BerryPick6, Pablo_Stafforini
comment by BerryPick6 · 2012-10-14T14:01:05.431Z · LW(p) · GW(p)

I agree with these points, and I also think that he is using certain terms in either unique or unfamiliar ways. This, in itself, is fine, but I'm not seeing any specific place in which these words are defined as being used in an unusual manner, and it left me very confused as I was reading.

comment by Pablo (Pablo_Stafforini) · 2012-10-14T14:54:55.501Z · LW(p) · GW(p)

Many thanks.

comment by Oscar_Cunningham · 2012-10-14T16:52:30.432Z · LW(p) · GW(p)

I think you still have an intellectual obligation to provide some kind of justification for making such a harsh judgment on someone else's contribution.

I disagree. There's too many idiots out there for me to provide an explanation to everyone I dismiss.

Replies from: thomblake, Will_Newsome, Bruno_Coelho
comment by thomblake · 2012-10-15T17:31:48.265Z · LW(p) · GW(p)

There's too many idiots out there for me to provide an explanation to everyone I dismiss.

That's fine - if you're strapped for time, you can "dismiss" them without posting anything. But if you have the time to complain, then you should make the complaints helpful.

comment by Will_Newsome · 2012-10-14T17:12:27.309Z · LW(p) · GW(p)

Careful with that axe, Eugene.

comment by Bruno_Coelho · 2012-10-15T20:55:23.880Z · LW(p) · GW(p)

Unjustified assertions would be more productive if not made. Creating fuss about idiots makes unnecessary noise. However, we could think this dismissiveness informs us about the status of the post.

comment by dspeyer · 2012-10-14T02:03:38.013Z · LW(p) · GW(p)

according to moral realism and free will, moral good is the product of conscious free choice

This seems to be the core of your argument. Where are you getting it from? I think most moral realists would disagree.

comment by Manfred · 2012-10-14T22:45:16.739Z · LW(p) · GW(p)

For some reason I'm seeing this pop up at the top of the discussion section repeatedly. That's bad :(

comment by Furcas · 2012-10-13T21:46:16.414Z · LW(p) · GW(p)

Read the sequences.