Dark Side Epistemology

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-10-17T23:55:22.000Z · LW · GW · Legacy · 155 comments

If you once tell a lie, the truth is ever after your enemy.

I have discussed the notion that lies are contagious. If you pick up a pebble from the driveway, and tell a geologist that you found it on a beach—well, do you know what a geologist knows about rocks? I don’t. But I can suspect that a water-worn pebble wouldn’t look like a droplet of frozen lava from a volcanic eruption. Do you know where the pebble in your driveway really came from? Things bear the marks of their places in a lawful universe; in that web, a lie is out of place.1

What sounds like an arbitrary truth to one mind—one that could easily be replaced by a plausible lie—might be nailed down by a dozen linkages to the eyes of greater knowledge. To a creationist, the idea that life was shaped by “intelligent design” instead of “natural selection” might sound like a sports team to cheer for. To a biologist, plausibly arguing that an organism was intelligently designed would require lying about almost every facet of the organism. To plausibly argue that “humans” were intelligently designed, you’d have to lie about the design of the human retina, the architecture of the human brain, the proteins bound together by weak van der Waals forces instead of strong covalent bonds . . .

Or you could just lie about evolutionary theory, which is the path taken by most creationists. Instead of lying about the connected nodes in the network, they lie about the general laws governing the links.

And then to cover that up, they lie about the rules of science—like what it means to call something a “theory,” or what it means for a scientist to say that they are not absolutely certain.

So they pass from lying about specific facts, to lying about general laws, to lying about the rules of reasoning. To lie about whether humans evolved, you must lie about evolution; and then you have to lie about the rules of science that constrain our understanding of evolution.

But how else? Just as a human would be out of place in a community of actually intelligently designed life forms, and you have to lie about the rules of evolution to make it appear otherwise, so too beliefs about creationism are themselves out of place in science—you wouldn’t find them in a well-ordered mind any more than you’d find palm trees growing on a glacier. And so you have to disrupt the barriers that would forbid them.

Which brings us to the case of self-deception.

A single lie you tell yourself may seem plausible enough, when you don’t know any of the rules governing thoughts, or even that there are rules; and the choice seems as arbitrary as choosing a flavor of ice cream, as isolated as a pebble on the shore . . .

. . . but then someone calls you on your belief, using the rules of reasoning that they’ve learned. They say, “Where’s your evidence?”

And you say, “What? Why do I need evidence?”

So they say, “In general, beliefs require evidence.”

This argument, clearly, is a soldier fighting on the other side, which you must defeat. So you say: “I disagree! Not all beliefs require evidence. In particular, beliefs about dragons don’t require evidence. When it comes to dragons, you’re allowed to believe anything you like. So I don’t need evidence to believe there’s a dragon in my garage.”

And the one says, “Eh? You can’t just exclude dragons like that. There’s a reason for the rule that beliefs require evidence. To draw a correct map of the city, you have to walk through the streets and make lines on paper that correspond to what you see. That’s not an arbitrary legal requirement—if you sit in your living room and draw lines on the paper at random, the map’s going to be wrong. With extremely high probability. That’s as true of a map of a dragon as it is of anything.”

So now this, the explanation of why beliefs require evidence, is also an opposing soldier. So you say: “Wrong with extremely high probability? Then there’s still a chance, right? I don’t have to believe if it’s not absolutely certain.”

Or maybe you even begin to suspect, yourself, that “beliefs require evidence.” But this threatens a lie you hold precious; so you reject the dawn inside you, push the Sun back under the horizon.

Or you’ve previously heard the proverb “beliefs require evidence,” and it sounded wise enough, and you endorsed it in public. But it never quite occurred to you, until someone else brought it to your attention, that this proverb could apply to your belief that there’s a dragon in your garage. So you think fast and say, “The dragon is in a separate magisterium.”

Having false beliefs isn’t a good thing, but it doesn’t have to be permanently crippling—if, when you discover your mistake, you get over it. The dangerous thing is to have a false belief that you believe should be protected as a belief—a belief-in-belief, whether or not accompanied by actual belief.

A single Lie That Must Be Protected can block someone’s progress into advanced rationality. No, it’s not harmless fun.

Just as the world itself is more tangled by far than it appears on the surface, so too there are stricter rules of reasoning, constraining belief more strongly, than the untrained would suspect. The world is woven tightly, governed by general laws, and so are rational beliefs.

Think of what it would take to deny evolution or heliocentrism—all the connected truths and governing laws you wouldn’t be allowed to know. Then you can imagine how a single act of self-deception can block off the whole meta level of truth-seeking, once your mind begins to be threatened by seeing the connections. Forbidding all the intermediate and higher levels of the rationalist’s Art. Creating, in its stead, a vast complex of anti-law, rules of anti-thought, general justifications for believing the untrue.

Steven Kaas said, “Promoting less than maximally accurate beliefs is an act of sabotage. Don’t do it to anyone unless you’d also slash their tires.” Giving someone a false belief to protect—convincing them that the belief itself must be defended from any thought that seems to threaten it—well, you shouldn’t do that to someone unless you’d also give them a frontal lobotomy.

Once you tell a lie, the truth is your enemy; and every truth connected to that truth, and every ally of truth in general; all of these you must oppose, to protect the lie. Whether you’re lying to others, or to yourself.

You have to deny that beliefs require evidence, and then you have to deny that maps should reflect territories, and then you have to deny that truth is a good thing . . .

Thus comes into being the Dark Side.

I worry that people aren’t aware of it, or aren’t sufficiently wary—that as we wander through our human world, we can expect to encounter systematically bad epistemology.

The “how to think” memes floating around, the cached thoughts of Deep Wisdom—some of it will be good advice devised by rationalists. But other notions were invented to protect a lie or self-deception: spawned from the Dark Side.

“Everyone has a right to their own opinion.” When you think about it, where was that proverb generated? Is it something that someone would say in the course of protecting a truth, or in the course of protecting themselves from the truth? But people don’t perk up and say, “Aha! I sense the presence of the Dark Side!” As far as I can tell, it’s not widely realized that the Dark Side is out there.

But how else? Whether you’re deceiving others, or just yourself, the Lie That Must Be Protected will propagate recursively through the network of empirical causality, and the network of general empirical rules, and the rules of reasoning themselves, and the understanding behind those rules. If there is good epistemology in the world, and also lies or self-deceptions that people are trying to protect, then there will come into existence bad epistemology to counter the good. We could hardly expect, in this world, to find the Light Side without the Dark Side; there is the Sun, and that which shrinks away and generates a cloaking Shadow.

Mind you, these are not necessarily evil people. The vast majority who go about repeating the Deep Wisdom are more duped than duplicitous, more self-deceived than deceiving. I think.

And it’s surely not my intent to offer you a Fully General Counterargument, so that whenever someone offers you some epistemology you don’t like, you say: “Oh, someone on the Dark Side made that up.” It’s one of the rules of the Light Side that you have to refute the proposition for itself, not by accusing its inventor of bad intentions.

But the Dark Side is out there. Fear is the path that leads to it, and one betrayal can turn you. Not all who wear robes are either Jedi or fakes; there are also the Sith Lords, masters and unwitting apprentices. Be warned; be wary.

As for listing common memes that were spawned by the Dark Side—not random false beliefs, mind you, but bad epistemology, the Generic Defenses of Fail—well, would you care to take a stab at it, dear readers?

1. Actually, a geologist in the comments says that most pebbles in driveways are taken from beaches, so they couldn’t tell the difference between a driveway pebble and a beach pebble, but they could tell the difference between a mountain pebble and a driveway/beach pebble (http://lesswrong.com/lw/uy/dark_side_epistemology/4xbv). Case in point . . .

155 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Daniel_Franke · 2008-10-18T00:20:23.000Z · LW(p) · GW(p)

The most dangerous dark side meme I can think of is the idea of sinful thoughts: that questioning one's faith is itself a sin even if not acted upon. A close second is "don't try to argue with the devil -- he has more experience at it than you".

Replies from: lockeandkeynes
comment by lockeandkeynes · 2010-07-06T07:05:36.011Z · LW(p) · GW(p)

Especially when it's explicitly enforced, a la death penalty for leaving Islam in Islamic countries.

comment by TGGP4 · 2008-10-18T01:32:19.000Z · LW(p) · GW(p)

"Not all who wear robes are either Jedi or fakes"

What do you mean by "wear robes"? Could we move away from references to fictional stories?

Replies from: asgardiator
comment by asgardiator · 2015-05-17T06:23:12.315Z · LW(p) · GW(p)

Are you trying to argue against the use of metaphor for argument? The fact that Star Wars is a fiction doesn't make analogies made with its concepts wrong.

To clarify the phrase that you take issue with, "robes" from what I can gather signifies memetic authority, like scientists or priests or marketers who have dominion over a region of thought patterns - as the Jedi wield the Force.

comment by Roland2 · 2008-10-18T01:37:34.000Z · LW(p) · GW(p)

Eliezer,

I agree with you as regards people deceiving themselves. But I disagree regarding people who deceive others on purpose. Some of these people can be very smart and know very well what they are doing and which biases they are playing on. They have elevated the art of deception to a science, ohhh yes; read marketing books as an example. Otherwise a superintelligence would become stupid in the process of lying to the human operator with the intention of getting out of the box.

comment by PK · 2008-10-18T01:41:24.000Z · LW(p) · GW(p)

- Faith: i.e., unconditional belief is good. It's like loyalty. Questioning beliefs is like betrayal.
- The saying "Stick to your guns": changing your mind is like deserting your post in a war. Sticking to a belief is like being a heroic soldier.
- The faithful: i.e., us; we are the best; God is on our side.
- The infidels: i.e., them; sinners; barely human, or not even.
- God: infinitely powerful alpha male. Treat him as such, with all the implications...
- The devil and his agents: they are always trying to seduce you to sin. Any doubt is evidence the devil is seducing you to sin and succeeding. Anyone opposed to your beliefs is cooperating with/being influenced by the devil.
- Assassination fatwas: whacking people who are anti-Islam is the will of Allah.
- A sexually satisfying lifestyle is bad: this makes people more angsty (especially young men). This angst is your fault and it's sin. To be less angsty you should be less sinful, ergo fight your sexual urges. And so the cycle of desire, guilt, angst, and confusion continues.
- No masturbation: see above.
- You are born in debt to Jesus because he died for your sins 2000 years ago.

That's all I could think of right now.

comment by Carl_Shulman · 2008-10-18T01:43:20.000Z · LW(p) · GW(p)

The endorsement of information cascades: claiming that X is indisputably true in the name of philosophical majoritarianism, and thus biasing research and statements to foster belief in X is desirable as a way to foster true beliefs (where the majority only exists because of such biased efforts).

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-10-18T01:45:35.000Z · LW(p) · GW(p)

Just to be clear, I'm not looking for random false beliefs defended by Dark Side epistemology, I'm looking for Dark Side epistemology itself - the Generic Defenses of Fail.

Roland, these are the Sith masters.

comment by Peter3 · 2008-10-18T01:58:07.000Z · LW(p) · GW(p)

In general, beliefs require evidence.

In general? Which beliefs don't?

Think of what it would take to deny evolution or heliocentrism

Or what it would take to prove that the Moon doesn't exist.

As for listing common memes that were spawned by the Dark Side - would you care to take a stab at it, dear readers?

- Cultural relativity.
- Such-and-such is unconstitutional.
- The founding fathers never intended... (various appeals to stick to the founding fathers' original vision)
- Be reasonable (moderate).
- Show respect for your elders.
- It's my private property.
- _ is human nature.
- Don't judge me.
- _ is unnatural and therefore wrong.
- _ is natural and therefore right.
- We need to switch to alternative energies such as wind, solar, and tidal.
- The poor are lazy.
- The entire American political vocabulary (bordering on Orwellian).
- Animal rights.

.. much more.

Replies from: simplicio, PlacidPlatypus, DanielLC, yurii-burak
comment by simplicio · 2010-03-06T05:17:27.141Z · LW(p) · GW(p)

"'In general, beliefs require evidence.' In general? Which beliefs don't?"

This is a language problem. "In general" or "generally" to a scientist/mathematician/engineer means "always," whereas in everyday speech it means "sometimes."

For example, I could tell you that a fence with 2 sections has 3 posts ( I=I=I ), or I could tell you that "in general" a fence with N sections has N+1 posts.
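(An illustrative sketch in Python, mine rather than part of the original comment, of the straight-fence rule, plus the closed-loop case raised in the replies below:)

```python
def fence_posts(sections: int, closed: bool = False) -> int:
    """Posts needed for a fence with `sections` sections.

    A straight fence needs one more post than it has sections:
    I=I=I is 2 sections, 3 posts. A fence that closes in on
    itself reuses its first post, so posts == sections.
    """
    return sections if closed else sections + 1

assert fence_posts(2) == 3               # I=I=I
assert fence_posts(4, closed=True) == 4  # e.g. a square paddock
```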

Replies from: wedrifid, dmitrii-zelenskii
comment by wedrifid · 2010-03-06T09:57:21.914Z · LW(p) · GW(p)

Where N >= 3 the fence can (and often does) have N posts.

Replies from: simplicio
comment by simplicio · 2010-03-06T16:26:27.018Z · LW(p) · GW(p)

Ya, if it wraps in on itself, for sure.

Or if the farmer uses a tree instead. ;)

Replies from: kpreid
comment by kpreid · 2010-03-06T17:18:37.181Z · LW(p) · GW(p)

“How many posts does a fence have, if you call the tree a post?”

comment by Дмитрий Зеленский (dmitrii-zelenskii) · 2019-08-15T21:56:38.965Z · LW(p) · GW(p)

"In general" does not mean "always", it means "by default". It is not the same thing. Rectangles, in general, do not have equal sides with a common dot - except the squares which do. However, there must be reasons for excluding something from a default - and a random false belief is unlikely to find such reasons (not to mention the very going from belief to finding such reasons is backwards).

comment by PlacidPlatypus · 2012-05-27T16:24:56.543Z · LW(p) · GW(p)

"We need to switch to alternative energies such as wind, solar, and tidal. The poor are lazy ... Animal rights"

I don't think these fit. Regardless of whether you agree with them, they are specific assertions, not general claims about reasoning with consistently anti-epistemological effects.

Replies from: Eliyoole
comment by Elias (Eliyoole) · 2021-05-04T08:42:45.246Z · LW(p) · GW(p)

Actually, "the poor are lazy" and "animal rights" seem to fit, to me. Animal rights were a hard sell for me, but thinking about it, I had to come to the conclusion that the bottom line "we should treat animals well" was probably motivated either by "I don't want to eat sick food" or by "Awww, cuute!". Not by "I believe that animals in general need rights, because..." Because of what? They react faster to stimuli than plants? They show complex behaviour? In that case, do you not kill mosquitos? Do you want rights for some fungi as well? How about programs that show complex behaviour? It seems like this was written after the bottom line.

Similarly, since we do not live in an equal world, simply saying that the poor are lazy makes sense if your motivation is not to feel guilty about not trying to help them.

Alternative energies, however... I think time proved our dear OP wrong on that front. We may not need any one of these specifically, but we need to get away from fossil fuels, and until we have fusion or solar farms in orbit, alternative energies are the longest-term option. Even nuclear runs out of fuel in a relatively short amount of time.

comment by DanielLC · 2012-09-26T03:46:47.847Z · LW(p) · GW(p)

In general? Which beliefs don't?

The probability is the prior times the evidence ratio, so the higher the prior probability, the less evidence you need. If there's a lottery with one million numbers, and you have no evidence for anything, you'll think there's a 0.0001% chance of it getting 839772 exactly, a 50% chance of it getting 500000 or less, and a 99.9999% chance of it getting something other than 839772. Thus, you can be pretty sure it won't land on 839772 even without evidence.
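(To make the arithmetic concrete, a minimal Python sketch; mine, not the commenter's:)

```python
# Uniform prior over a one-million-number lottery: near-certainty
# about what *won't* happen requires no evidence at all.
n = 1_000_000
p_exact = 1 / n                # P(draw == 839772) = 0.000001, i.e. 0.0001%
p_half = 500_000 / n           # P(draw <= 500000) = 0.5
p_not_exact = 1 - p_exact      # P(draw != 839772) = 0.999999

# Bayes in odds form: posterior odds = prior odds * likelihood ratio.
# With no evidence, the likelihood ratio is 1, so the prior stands.
prior_odds = p_exact / p_not_exact
posterior_odds = prior_odds * 1.0
print(p_exact, p_half, p_not_exact, posterior_odds)
```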

Replies from: hannahelisabeth, VAuroch
comment by hannahelisabeth · 2012-11-11T19:25:47.893Z · LW(p) · GW(p)

I think knowing a prior constitutes evidence. If you know that the lottery has one million numbers, that is a piece of evidence.

Replies from: DanielLC
comment by DanielLC · 2012-11-11T20:44:59.192Z · LW(p) · GW(p)

You need a prior to take evidence into account. If the prior is evidence, then what is the prior?

Replies from: hannahelisabeth, liliet-b
comment by hannahelisabeth · 2012-11-11T21:20:12.991Z · LW(p) · GW(p)

Hm... You make a good point. I'm not sure I understand this conceptually well enough to have any sort of coherent response.

comment by Liliet B (liliet-b) · 2019-12-07T13:29:28.509Z · LW(p) · GW(p)

The ultimate prior is maximum entropy, aka "idk", aka "50/50: either happens or not". We never actually have it, because we start gathering evidence about how the world is before our brains have even formed enough to draw any links within it.

Replies from: Gurkenglas
comment by Gurkenglas · 2019-12-07T15:50:43.781Z · LW(p) · GW(p)

That prior doesn't work when there is a countably infinite number of hypotheses, aka "I've picked a number from {0,1,2,...}. Which?" or "Given that the laws of physics can be described by a computer program, which?".
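(A minimal sketch, my illustration rather than the commenter's, of why no uniform prior exists over countably many hypotheses, and one standard decaying alternative:)

```python
# A uniform prior over {0, 1, 2, ...} is impossible: any constant
# p > 0 sums to infinity, and p = 0 sums to 0; neither sums to 1.
# A decaying prior works, e.g. geometric: P(n) = (1 - r) * r**n,
# which sums to 1 and favors smaller ("simpler") numbers; the same
# idea underlies Solomonoff-style priors over computer programs.
r = 0.5
prior = [(1 - r) * r**n for n in range(50)]
print(sum(prior))  # ~1.0, approaching 1 as more terms are added
```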

comment by VAuroch · 2013-11-10T21:23:10.543Z · LW(p) · GW(p)

Your knowledge of the rules of probability is evidence. It's not evidence specific to this question, but it is evidence for this question, among others.

comment by Polkovnik_TrueForm (yurii-burak) · 2023-12-07T15:25:11.980Z · LW(p) · GW(p)

Your link is broken.

Well, cultural relativity is a fact, as there is no morality; people either justify any of their actions via tradition, or simply follow it when they don't want to think. Universal life rights would be great (no less than human rights, at least; I'm one personality legalist and one personality ecocentrist, who wants sentience to remain in order to save the biosphere from the geological and astronomical events that are coming sooner than a new Homo sapiens may emerge through evolution if the current one goes extinct before making AGI). Everything else, I upvote.

comment by Dave5 · 2008-10-18T01:59:02.000Z · LW(p) · GW(p)

"Everyone has a right to their own opinion. When you think about it, where was that proverb generated?"

In the words of the great sage Emo Phillips, "I used to think that the brain was the most wonderful organ in my body. Then I realized who was telling me this."

comment by PK · 2008-10-18T02:01:24.000Z · LW(p) · GW(p)

I thought of some more.

- There is a destiny/God's plan/reason for everything: i.e., some powerful force is making things the way they are, and it all makes sense (in human terms, not cold heartless math). That means you are safe, but don't fight the status quo.
- Everything is connected with "energy" (mystically): you or special/chosen people might be able to tap into this "energy". You might glean information you normally shouldn't have, or gain some kind of special powers.
- Scientists/professionals/experts are "elitists".
- Mystery is good: it makes life worthwhile. Appreciating it makes us human. As opposed to destroying it being good.

That's it for now.

comment by Dave5 · 2008-10-18T02:07:14.000Z · LW(p) · GW(p)

I'm looking for Dark Side epistemology itself - the Generic Defenses of Fail.

Relax. It will be over soon.

We're past that now.

X is supernatural.

X is natural.

You're correct, but it will make people uncomfortable.

You're smart. You should go to college.

Replies from: VAuroch
comment by VAuroch · 2013-11-10T21:24:09.659Z · LW(p) · GW(p)

Why do you consider

You're smart. You should go to college.

among these? It seems like the odd one out.

Replies from: Polymeron
comment by Polymeron · 2014-02-04T05:33:26.433Z · LW(p) · GW(p)

I've had forms of this said to me; it basically means "I'm losing the debate because you personally are smart, not because I'm wrong. Whichever authority I listen to in order to reinforce my existing beliefs would surely crush all your arguments. So stop assailing me with logic..."

It's Dark Side because it surrenders personal understanding to authority, and treats it as a default epistemological position.

Replies from: wedrifid
comment by wedrifid · 2014-02-04T10:29:44.190Z · LW(p) · GW(p)

It's Dark Side because it surrenders personal understanding to authority, and treats it as a default epistemological position.

Dark side or not, it is quite often valid. People who do not trust their ability to filter bullshit from knowledge should not defer to whatever powerful debater attempts to influence them.

It is no error to assign a low value to p(the conclusion expressed is valid | I find the argument convincing).

Replies from: Polymeron, Fronken
comment by Polymeron · 2014-02-04T19:55:31.038Z · LW(p) · GW(p)

No, an argument from authority can be a useful heuristic in certain cases, but at least you'd want to take away the one or two arguments you found most compelling and check them out later. In that sense, this is borderline.

Usually, however, this tactic is employed by people who are just looking for an excuse to flee into the warm embrace of an unassailable authority, often after scores of arguments they made were easily refuted. It is a mistake to give a low value to p(my position is mistaken | 10 arguments I have made have been refuted to my satisfaction in short order).

comment by Fronken · 2014-02-05T19:40:29.480Z · LW(p) · GW(p)

Isn't "Dark Side" approximately "effective, but dangerous"?

comment by michael_vassar3 · 2008-10-18T02:07:57.000Z · LW(p) · GW(p)

I'm pretty confident that "Everyone has a right to their own opinion" was generated by people trying to protect themselves from people who were trying to protect themselves from the truth.

We really need some talk about what the consequences of an AI with access to its own source code and self-protecting beliefs would be.

comment by Peter3 · 2008-10-18T02:08:01.000Z · LW(p) · GW(p)

I'm looking for Dark Side epistemology itself - the Generic Defenses of Fail.

In that case - association, essentialism, popularity, the scientific method, magic, and what I'll call Past-ism.

Replies from: raptortech97
comment by raptortech97 · 2012-04-19T19:04:14.942Z · LW(p) · GW(p)

Wait a second - the scientific method? How? It may not be the most efficient way to get the truth, and it may not take into account Bayes' theorem, which could speed it up, but I don't see how the scientific method is epistemologically (is that a word?) wrong.

Replies from: thomblake
comment by thomblake · 2012-04-19T19:09:19.889Z · LW(p) · GW(p)

Wait a second

Too late - it's been 3 and a half years.

(is that a word?

"epistemologically " is a word, but it's hard to tell when to instead say "epistemically".

Replies from: Vaniver
comment by Vaniver · 2012-04-19T19:35:00.249Z · LW(p) · GW(p)

Too late - it's been 3 and a half years.

Somewhat amusing, but it should not be surprising that most of the commentary on old sequence posts is people reading them and engaging with the ideas for the first time.

Replies from: thomblake, JustinMElms
comment by thomblake · 2012-04-19T19:46:46.204Z · LW(p) · GW(p)

Yes, it's the time language that got me.

comment by JustinMElms · 2016-06-22T21:54:35.636Z · LW(p) · GW(p)

That's ridiculous: whenever I want to comment, I always observe that I am reading 4-year-old arguments and keep on scrolling.

Replies from: Vaniver, Good_Burning_Plastic
comment by Vaniver · 2016-06-23T11:03:52.903Z · LW(p) · GW(p)

An interesting claim I came across recently is that most people view the Internet as opening up the past, but that isn't quite right--the past was always accessible, through books and stories and so on. What the internet does that is strange is extend the present into the past, so that content created in 2001 or 2012 or so on can be indistinguishable from content created in 2016, if the formatting, context, or dynamics are the same.

That is, one doesn't expect Jane Austen to return any fan letters, but sometimes when you respond to a four-year old comment, you get a response within a day.

Replies from: Lumifer
comment by Lumifer · 2016-06-23T14:27:53.483Z · LW(p) · GW(p)

the past was always accessible, through books and stories and so on.

I don't know if that's right. The past was always accessible to some degree, but never before as an overwhelming exhaustive array of minutiae. It's precisely because of that level of detail that this past looks so much like the present.

comment by Good_Burning_Plastic · 2016-06-23T15:42:50.027Z · LW(p) · GW(p)

Necro-commenting isn't usually frowned upon around here.

Replies from: Caperu_Wesperizzon
comment by Caperu_Wesperizzon · 2022-08-29T05:22:52.905Z · LW(p) · GW(p)

I'd say rules against necro-commenting are a good tool for the Dark Side, ensuring no discussion progresses beyond a single burst of activity and wasting a lot of time repeating the same arguments again and again.

comment by PK · 2008-10-18T02:28:23.000Z · LW(p) · GW(p)

We are missing something. Humans are ultimately driven by emotions. We should look at which emotions beliefs tap into in order to understand why people seek or avoid certain beliefs.

Replies from: slicedtoad
comment by slicedtoad · 2015-07-09T20:24:53.280Z · LW(p) · GW(p)

I'm not sure what emotion it is, but I would hypothesize that it comes from tribal survival habits. Group cohesion was existentially important in the tribal prehuman/early-human era. Being accurate and correct with your beliefs was important, but not as important as sharing the same beliefs as the tribe.

So we developed methods of fitting into our tribes despite it requiring us to believe paradoxical and irrational things that should be causing cognitive dissonance.

comment by outofculture · 2008-10-18T03:32:56.000Z · LW(p) · GW(p)

A particular flavor of "if it ain't broke, don't fix it" that points to established traditions as "having worked for ages". Playing off the fear of the unknown? The meme of traditions in general adds weight to many of these.

I second "cultural relativity" as being an extension of "everyone having a right to their opinion", but in both cases point to them as also being tools to find things in one's own life that are arbitrary and in need of evaluation on a more objective basis.

comment by Nominull3 · 2008-10-18T03:35:44.000Z · LW(p) · GW(p)

Isn't the scientific method a servant of the Light Side, even if it is occasionally a little misguided?

Replies from: Jotto999
comment by Jotto999 · 2015-11-20T15:32:39.673Z · LW(p) · GW(p)

What kind of thing do you mean by "occasionally a little misguided"? Are you referring to something bad about it because humans (and all our mental frailties) were using it, or something bad that would happen no matter what kind of creature tried to use it, even ones that had ways around human-like mental frailties?

(I see this comment is from 7 years ago, and I will understand completely if no response comes.)

comment by Roland2 · 2008-10-18T03:47:03.000Z · LW(p) · GW(p)

@Eliezer: Roland, these are the Sith masters.

Ok, got your point. One thing I worry about, though, is how much those movie analogies end up inducing biases in you and others.

comment by Roland2 · 2008-10-18T03:56:06.000Z · LW(p) · GW(p)

@Eliezer:

To drive home my earlier point: the whole idea of Jedi vs. Sith reflects a Manichaean worldview (good vs. bad). Isn't this a simplification?

comment by Peter3 · 2008-10-18T03:56:09.000Z · LW(p) · GW(p)

Isn't the scientific method a servant of the Light Side, even if it is occasionally a little misguided?

Too restrictive. Science is not synonymous with the hypothetico-deductive method, nor is there any sort of thing called the "scientific method" from which scientists draw their authority on a subject. Neither is it a historically accurate description of how science has done its work. Read up on Feyerabend.

Science is inherently structureless and chaotic. It's whatever works.

Replies from: Mat
comment by Mat · 2012-08-20T22:12:54.525Z · LW(p) · GW(p)

Read up on Feyerabend

Aehm, was Feyerabend a scientist?

comment by Richard_Hollerith2 · 2008-10-18T04:20:40.000Z · LW(p) · GW(p)

Eliezer writes, "In general, beliefs require evidence."

To which Peter replies, "In general? Which beliefs don't?"

Normative beliefs (beliefs about what should be) don't, IMHO. What would count as evidence for or against a normative belief?

Replies from: robert-miles, savageorange
comment by Robert Miles (robert-miles) · 2011-10-07T23:38:27.113Z · LW(p) · GW(p)

What would count as evidence for or against a normative belief?

In isolation, almost certainly nothing, but you can play normative beliefs against one another. If you can demonstrate that a person's normative belief is inconsistent with another of their normative beliefs, that demonstrates that one of them must be 'false'. You can't check them against reality directly, but they must still be consistent.

comment by savageorange · 2014-03-04T00:44:31.664Z · LW(p) · GW(p)

Evidence that would substantially inform a simulation of the enforcement of those beliefs. For example, history provides pretty clear evidence of the ultimate result of fascist states/dictatorships, partisan behaviour, and homogeneous group membership. The qualities found in this projected result are highly likely to conflict with other preferences and beliefs.

At that point, the person may still say 'Shut up, I believe what I want to believe.' But that would only mean they are rejecting the evidence, not that the evidence doesn't apply.

comment by celeriac · 2008-10-18T04:25:52.000Z · LW(p) · GW(p)

How about "Comparing Apples and Oranges," or "How Dare you Compare," a misrepresentation of the scope of analogies. For a recent example, see the response to John Lewis's drawing an analogy between certain aspects of the McCain campaign and those of George Wallace -- the response is not a consideration of the scope and aptness of the analogy but a rejection that any analogy at all can be drawn between two subjects when one is so generally recognized to be Evil. The McCain campaign does not attempt to differentiate the aspects under analogy (rhetoric and its potential for the fomentation of violence) from those of Wallace, but rather condemns the idea that the analogy can be considered at all. Under the epistemology of Fail, any difference between two subjects of comparison is enough to reject its validity, regardless the relevance of the distinction to the actual comparison being drawn. See also: Godwin's Law.

Some self-entitled males like to use this one, particularly in defense of the notion that one has an inviolable right to make sexual advances toward other people regardless of circumstance or outward sign. Sooner or later, after demonstrating how each of their justifications also justifies sexual assault, it leads to "how dare you compare me to a rapist," which is where the fun begins. After I am done epistemologically belittling them, I point out that the obvious fact that sexual assault is known to be bad is a manifestation of general principles of ethical interaction among humans, and not a special case handed down from a God who says that everything not expressly forbidden by a law is good.

Replies from: anon895
comment by anon895 · 2011-07-17T18:32:40.898Z · LW(p) · GW(p)

Somehow I doubt that "regardless of circumstance or outward sign" is their wording and not yours.

(Edit) Also, the converse of "not everything that is not expressly forbidden by a law is good" is "not everything that causes the slightest incidental harm is unforgivable babyeating evil".

comment by O3 · 2008-10-18T04:42:03.000Z · LW(p) · GW(p)

Animal rights???

You're smart. You should go to college???

Essentialism???

comment by Peter3 · 2008-10-18T04:52:43.000Z · LW(p) · GW(p)

Normative beliefs (beliefs about what should be) don't [require evidence], IMHO. What would count as evidence for or against a normative belief?

That's correct if you don't consider pure reason to be evidence - but I consider it to be so. So morality and ethics and all these normative things are, in fact, based on evidence, although it is a mix of abstract evidence (reason) with concrete evidence (empirical data). If you base your morality, or any normative theory (how the world should be), on anything other than how things actually are (including mathematics), you necessarily have to ascribe some supernatural property to it.

comment by Marcello · 2008-10-18T05:26:50.000Z · LW(p) · GW(p)

One giant category of dark side reasoning looks like "That idea is _" Where the idea is an "is" (not a "should") and _ is any negative affect word with a meaning other than "untrue".

Examples include {unpatriotic, communist, capitalist, liberal, conservative, provincial, any-demonym-goes-here, cultish, religious, atheistic, sinful, evil, dangerous, repugnant, elitist, condescending, out-of-touch, politically incorrect, offensive, argumentative, hateful, cowardly, fool-hardy, inappropriate, indecent, unsettling, lewd, silly, idiotic, new-fangled, old-fashioned, staid, dead, uncool, too simple, too complicated} and many more.

Important note: the exception to this rule is if the speaker goes on to show how _ is evidence about the truth of the proposition. If you can say why something is idiotic, that's fine. A seasoned scientist has the right to say "that theory looks too complicated" if they have many examples of surprisingly simple theories explaining things well, but a creationist doesn't earn the right to accuse the theory of evolution of being "too complicated" until they explain what whatever they mean by "too complicated" has to do with the idea being wrong.

To avoid concluding that an idea is true, the Dark Side's first line of defense is to avoid even considering whether the idea is true. Those who are good enough at suppressing contradictions can simply save themselves the trouble of building up "a vast complex of anti-law, rules of anti-thought". After all, building such a complex is a risky business from the standpoint of protecting the precious belief. The larger the complex gets, the more close scrapes it could have with real sensory experience.

Just as a murderer ties the corpse of his victim to a heavy stone before throwing it into the water, so too do victims of the Dark Side tie ideas they want to dispose of to negative affect words. It really does make them less likely to resurface.

The same caution applies to tying positive affect words to desired ideas.

Replies from: Caperu_Wesperizzon
comment by Caperu_Wesperizzon · 2022-08-29T05:39:44.807Z · LW(p) · GW(p)

Examples include [...] politically incorrect

Ideas are also often dismissed for being politically correct, by concluding the speaker is a hypocrite. I suppose you can count that as a particular case of cowardly.

comment by JamesAndrix · 2008-10-18T06:43:07.000Z · LW(p) · GW(p)

Saying "There is lots of evidence for it" when in fact there is little to none. I guess the epistemology is "It is ok to believe something if you believe there is evidence to support it."

Creationists are told the fossil record supports X and Y, and they run with it.

comment by Bo2 · 2008-10-18T07:44:37.000Z · LW(p) · GW(p)

The concept of different epistemological magisteria. Eliezer gave an example of it in this post (and also in the post about scientists outside the laboratory), but his example is just the tip of the iceberg. This failure of rationality doesn't manifest itself explicitly most of the time, but is engaged in implicitly by almost everybody I know who isn't into hardcore rationality.

It's definitely engaged in by people who are into, or at least cheer for, science and (traditional) rationality and/or philosophy. It's the double standard between what epistemological standards you explicitly endorse and what are the actual beliefs on the basis of which you act: acting as if the sun will rise tomorrow even though you endorse radical scepticism, accepting what Richard Dawkins says on his authority while seeking out refutations for creationist arguments. I think one big reason for this is that people who are interested in this sort of thing are exposed too much to deductive reasoning and hardly at all to rigorous inductive reasoning. Inductive reasoning is the practical form of reasoning that actually works in the real world (many fallacies of deductive reasoning are actually valid probabilistic inferences), and we all have to engage in it explicitly or implicitly to cope in the world. But having been exposed only to the "way" of deductive rationality, and warned against its fallacies, people may come to experience a cognitive dissonance between which epistemological techniques are useful in real life and which epistemological techniques they ought to be using - and therefore to see science, rationality, and philosophy as disconnected from real life, things to be cheered for and entertaining diversions. Such people don't hold every part of their epistemological selves under the same level of scrutiny, because implicitly they believe that their methods of scrutinizing are imperfect. I recognize my past self in this, but not my present self, who knows about evo psych, inductive reasoning, etc., has seen that these methods actually work, and can therefore criticize his own epistemological habits using the full force of his own rationality...

This might concern mistaken, well-meaning people more than the actual Dark Side but it seems to me to be an important point anyway.

comment by Richard_Kennaway · 2008-10-18T08:13:31.000Z · LW(p) · GW(p)

A few general schemas:

"True for", as in, "That may be true for you, but not for me. We each choose our own truths."

"I feel that X." Every sentence of this form is false, because X is an assertion about the world, not a feeling. Someone saying "I feel that X" in fact believes X, but calling it a feeling instead of a belief protects it from refutation. Try replying "No you don't", and watch the explosion. "How dare you try to tell me what I'm feeling!"

Write obscurely.

Never explicitly state your beliefs. Hint at them in terms that the faithful will pick up and applaud, but which give nothing for the enemy to attack. Attack the enemy by stating their beliefs in terms that the faithful will boo, while giving the enemy nothing to dispute.

Ignore the entire machinery of rationality. Treat all human interaction as nothing more than social grooming or status games in a tribe of apes.

Replies from: buybuydandavis, DPiepgrass, Caperu_Wesperizzon
comment by buybuydandavis · 2013-04-02T06:41:36.585Z · LW(p) · GW(p)

Never explicitly state your beliefs.

Argument by innuendo. Politicians love this. Imply, then deny. "I never said that."

comment by DPiepgrass · 2019-06-02T05:49:14.745Z · LW(p) · GW(p)
Write obscurely. Never explicitly state your beliefs. Ignore the entire machinery of rationality.

All good stuff. Perhaps dark side epistemology is mainly about behaviors, not beliefs? A list of behaviors I noticed while speaking to climate science deniers:

  • First and foremost, they virtually never admit that they got anything wrong, not even little things. (If you spot someone admitting they were wrong about something, congrats! You may have stumbled upon a real skeptic!)
  • They don't construct a map of the enemy's territory: they have a poor mental model of how the climate system works. After all, they are taught "models can't be trusted," even though all science is built on models of some sort. Instead they learn a list of stories, ideas, and myths, and they debate mainly by repeating items on their list.
  • They often ignore your most rock-solid arguments, as if you'd said nothing at all, and they attack whatever they perceive to be your weakest point.
  • They think they are "scientific". I was astonished at the ability of one of them to sound sciencey... but then I saw how GPT2 could say plausible things without really understanding what it was saying, and I saw Eliezer talking about the "literary genre" of science, so I guess that's the answer: certain people somehow pick up and mimic the literary genre of science without understanding or caring about its underlying substance.
  • They lack self-awareness. You'll never ever hear them say "Okay, I know this might sound crazy, but those thousands of climate scientists are all wrong. I can't blame you for agreeing with a supermajority, but if you'll just hear me out, I will explain how I, a non-scientist, can be certain the contrarians are right. Just let me know if I've made some mistake in my reasoning here..." (which reminds me of an interesting idea I had after reading about philosophical zombies... is it possible that people who seem to lack self-awareness literally lack self-awareness? That they are zombies?)
  • So, they are not introspective: they're not thinking about how they think. So they haven't thought about the Dunning-Kruger effect (meme!), and confirmation bias is something that happens to other people. "Motivated reasoning? Not me! So what if I do? Everybody does it…"
  • It's as if schoolyard irony is an important defense mechanism for them. They take accusations often used against them and toss them at detractors. They'll say you're in a "cult" or "religion" for believing humans cause warming, that you lie, fudge data, are "closed-minded", etc. One guy called me a "denier" (in denial that it's all a hoax) even though I had not called him a denier. In general you can expect attacks on your character even if you were careful not to attack them, yet these attacks will seem like plausible descriptions of the attacker. Similarly, they may dismiss talk of the scientific literature or consensus as "appeals to authority", apparently oblivious to the authorities (Rush Limbaugh, Roy Spencer, and many others) upon which their own opinion is based. Last but not least, they'll complain of "politicizing the science" while politicizing the science.
  • Lack of knowledge seems to satisfy them as a knowledge substitute — e.g. "I've not seen evidence for X, so I can safely assume X is false" or "I've not seen evidence against X, so I can safely assume X is true." Missing knowledge somehow provides not merely hope, but great confidence that the experts are wrong.
Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2019-06-02T07:27:04.868Z · LW(p) · GW(p)

When you have reached the point where you’re considering whether your opponents are literally zombies without any subjective consciousness… could it be time to consider whether your own thinking has gone wrong somewhere?

Replies from: DPiepgrass
comment by DPiepgrass · 2019-09-15T16:42:12.454Z · LW(p) · GW(p)

Lacking self-awareness (in the sense described above: habitually declining to engage in metacognitive thinking) is different from lacking consciousness/qualia. I am not claiming that they lack the latter. But, I do wonder if there have been any investigations into whether qualia are universal among humans, and I wonder how one would go about detecting qualia (it's vaguely like a Turing test; a human without qualia would likely not intentionally deceive the tester the way a computer might during a Turing test, but would of course be unaware that there is any difference between his/her experience and anyone else's, and can be expected to deny any difference exists.)

Replies from: jeronimo196
comment by jeronimo196 · 2020-03-08T22:34:36.856Z · LW(p) · GW(p)

I don't think the proponents of qualia as metaphysical would agree that such a test is possible even in theory - otherwise you could put someone in an MRI scanner, show him a red square, monitor for activity in his visual cortex, and wait for him to confirm he sees "the redness". This should be enough to conclude some "redness"-related experience has occurred in the subject's brain (since qualia are supposed to be individual, differences in experience are expected - it doesn't have to be exactly the same). And yet the question of philosophical zombies remains (at least according to some philosophers).

Replies from: DPiepgrass
comment by DPiepgrass · 2020-04-15T20:38:22.841Z · LW(p) · GW(p)

If I take a digital picture, I can convert the file to BMP format and extract the "red" bits, but this is no evidence that my phone has qualia of redness. An fMRI scanning a brain will have the same problem. The idea that everyone has qualia is inductive: I have qualia (I used to call it my "soul"), and I know others have it too since I learned about the word itself from them. I can deduce that maybe all humans have it, but it's doomed to be a "maybe". If someone were to invent a test for qualia, perhaps we couldn't even tell if it works properly without solving the hard problem of consciousness.

Replies from: jeronimo196
comment by jeronimo196 · 2020-04-19T22:36:43.084Z · LW(p) · GW(p)

To avoid semantic confusion, here is the Wikipedia definition of qualia: "In philosophy and certain models of psychology, qualia (/ˈkwɑːliə/ or /ˈkweɪliə/; singular form: quale) are defined as individual instances of subjective, conscious experience." https://en.m.wikipedia.org/wiki/Qualia

If I take a digital picture, I can convert the file to BMP format and extract the "red" bits, but this is no evidence that my phone has qualia of redness. An fMRI scanning a brain will have the same problem.

You are skipping the part where we receive confirmation from the patient that he sees the redness. This, combined with the fMRI, should be enough to prove the colour red has been experienced (i.e. processed) by the patient's brain.

Now one question remains - was this a conscious experience? (Thank you for making me clarify this, I missed it in my previous comment!)

I propose that any meaningful philosophical definition of consciousness related to humans should cover the medical state of consciousness (i.e. the patient follows a light, knows the day of the week, etc.). If it doesn't, I would rather taboo "consciousness" and discuss "the mental process of modeling the environment" instead.

Whatever the definition of consciousness, as long as it relates to the function of a healthy human brain, it entails qualia.

However, if the definition of consciousness doesn't include what's occurring in the human brain, why bother with it?

The idea that everyone has qualia is inductive: I have qualia (I used to call it my "soul"), and I know others have it too since I learned about the word itself from them. I can deduce that maybe all humans have it, but it's doomed to be a "maybe".

I've heard people speaking of a soul before - it did not convince me they (or I) have one. I would happily grant them consciousness instead.

If someone were to invent a test for qualia, perhaps we couldn't even tell if it works properly without solving the hard problem of consciousness.

Even without solving the hard problem of consciousness, as long as we agree that consciousness is a property the human mind has, the test can be administered by a paramedic with a flashlight.

We will need the solution when we try to answer if our phone/dog/A.I. is conscious, though.

(I recently worked out a rudimentary solution (most probably wrong), which relies heavily on Eliezer's writings on the question of free will later in the Sequences. I am reluctant to share it here, since it would spoil Eliezer's solution and he advises people to try working it out for themselves first. I could PM or ROT13 in case of interest.)

Replies from: DPiepgrass, TAG
comment by DPiepgrass · 2020-05-24T14:27:45.082Z · LW(p) · GW(p)

Again, as a non-illusionist, I disagree that physiological consciousness necessarily implies qualia (or that an AGI necessary has qualia). It seems merely to be a reasonable assumption (in the human case only).

Replies from: jeronimo196
comment by jeronimo196 · 2020-05-29T10:59:43.905Z · LW(p) · GW(p)

Ok. I am still unsure of your position. Do you think other people have experiences, but we cannot say if those are conscious experiences? Or are you of the opinion we cannot say anyone has any kind of experiences? Could you please taboo "qualia", so I know we are not talking about different things entirely?

Replies from: DPiepgrass
comment by DPiepgrass · 2020-06-06T15:59:11.520Z · LW(p) · GW(p)

Well, the phrase "something-it-is-like to be a thing" is sometimes used as a stand-in for qualia. What I am talking about when I use that word is "the element of experience which, according to the known laws of physics, does not exist". There is only one level of airplane [LW · GW], and it's quarks. It seems impossible for a quark (electron, atom) or photon to be aware it is inside a mind. So in the standard reductionist model, there is no meaningful difference between minds and airplanes; a mind cannot feel anything for the same reason an airplane or a computer cannot feel anything. The sun is constantly exploding while being crushed, but it is not in pain. A mind is simply a machine with unusual mappings from inputs to outputs. Redness, cool breezes, pleasure, and suffering are just words that represent states which are correlated with past inputs and moderate the mind's outputs. Many computer programs (intelligent or not) could be described in similar terms.

Suppose someone invents a shockingly human-like AGI and compiles it to run single-threaded. I run a copy on the same PC I'm using now, inside a GPU-accelerated VR simulation (maybe it runs extremely slowly, at 1/500 real time, but we can start it from a saved teenager-level model and speak to it immediately via a terminal in the VR). Some would claim this AGI is "phenomenally conscious"; I claim it is not, since the hardware can't "know" it's running an AGI any more than it "knows" it is running a text editor inside a web browser on lesswrong.com. It's just fetching and executing a sequence of instructions like "mov", "add", "push", "cmp", "bnz", just as it always has (and it doesn't know it's doing that, either). I claim that, associated with our minds, there is something additional, aside from the quarks, which can feel things or be aware of feelings. This something is not an abstraction (representing a collection of quarks which could be interpreted by another mind as a state that modulates the output of a neural network), but a primitive of some sort that exists in addition to the quarks that embody the state, and interacts with those quarks somehow. I expect this primitive will, like everything else in the universe, follow computable rules, so it will not associate itself with any arbitrary representation of a state, such as my single-threaded AGI or an arrangement of rocks. (by the way, I also assume that this primitive provides something useful to its host, otherwise animals would not evolve an attachment to them.)

Replies from: jeronimo196
comment by jeronimo196 · 2020-06-11T13:44:34.345Z · LW(p) · GW(p)

"something-it-is-like to be a thing"

Ok, I could decipher this as a vague stand-in for experience. I would much prefer something like "the ability to process information about the environment and link it to past memories", but to each their own.

"the element of experience which, according to the known laws of physics, does not exist".

Uhm... Are you banking on a revolution in the field of physics? And later you even show exactly how reductionism not only permits, but also explains our experiences.

So in the standard reductionist model, there is no meaningful difference between minds and airplanes;

Yes, there is. One has states of mind and the other doesn't. How meaningful this difference is depends on your position on nihilism.

a mind cannot feel anything for the same reason an airplane or a computer cannot feel anything.

Wrong! The end of your paragraph shows why this is a wrong description of reductionism.

A mind is simply a machine with unusual mappings from inputs to outputs. Redness, cool breezes, pleasure, and suffering are just words that represent states which are correlated with past inputs and moderate the mind's outputs.

Yes. Exactly. Pleasure and suffering are just words, but the states of mind they represent are very much real.

It seems impossible for a quark (electron, atom) or photon to be aware it is inside a mind.

Correct - particles lack the computational power to know anything. Minds, on the other hand, can know they are made of particles. This is not a problem for reductionism. Actually, explaining how simple particles' interactions lead to observed phenomena on the macro level is the entire point.

Some would claim this AGI is "phenomenally conscious"; I claim it is not, since the hardware can't "know" it's running an AGI any more than it "knows" it is running a text editor inside a web browser on lesswrong.com.

Yes, no one would call your GPU conscious. The AGI is the software, though. The AGI could entertain the hypothesis that it lives in a simulation, even before discovering any hard evidence. Much like we do. Depending on its code, it could have states of mind similar to a human's, and then I would not hesitate to call it conscious.

How willing would you be to put such an AGI in the state of mind described by reductionists as "pain", even if it is simply a program run on hardware?

but a primitive of some sort that exists in addition to the quarks that embody the state, and interacts with those quarks somehow.

If such a primitive does interact with quarks, we will find it.

I expect this primitive will, like everything else in the universe, follow computable rules

And then we have yet another particle. How is that different from reductionism?

so it will not associate itself with any arbitrary representation of a state, such as my single-threaded AGI or an arrangement of rocks.

Ah, it's a magical particle. It is smaller than an electron, yet it interacts with the quarks in the brain, but not those in the carbon of a diamond. Or is it actually big, remote and intelligent on its own (unlike electrons)? So intelligent it knows exactly what to interact with, and exactly when, so as to remain undetected?

If you are not postulating a god, you are at the very least postulating a soul under a new name.

See, once you step outside the boundaries of mundane physics, you get very close to theology very fast.

Replies from: DPiepgrass
comment by DPiepgrass · 2020-06-13T18:52:05.655Z · LW(p) · GW(p)
Yes, no one would call your GPU conscious.

I wasn't talking about the GPU. Using the word "yes" to disagree with me is off-putting.

How is that different from reductionism?

I never said I rejected reductionism. I reject illusionism.

Ah, it's a magical particle. It is smaller than an electron

Quite the opposite. A magical particle would be one that is inexplicably compatible with any and every representation of human-like consciousness (rocks, CPUs of arbitrary design) - with the term "human-like" also remaining undefined. I make no claims as to its size. I claim only that it is not an abstraction, and that therefore known physics does not seem to include it.

So intelligent it knows exactly what to interact with

I do not think it is intelligent, though it may augment intelligence somehow.

How willing would you be to put such an AGI in the state of mind described by reductionists as "pain"

I think it's fair to give illusionism a tiny probability of truth, which could make me hesitant (especially given its convincing screams), but I would be much more concerned about animal suffering than about my AMD Ryzen 5 3600X suffering.

By the way, where will the suffering be located? Is it in the decode unit? The scheduler? The ALU? The FPU? The BTB? The instruction L1 cache? The data L1 cache? Does the suffering extend to the L2 cache? The L3? Out to the chipset and the memory sticks? Is this a question that can be answered at all, and if so, how could one go about finding the answer?

Replies from: jeronimo196
comment by jeronimo196 · 2020-06-15T08:36:14.521Z · LW(p) · GW(p)

Using the word "yes" to disagree with me is off-putting.

Noted. Thank you for pointing this out.

I wasn't talking about the GPU.

Good to have that clarified.

... but I would be much more concerned about animal suffering than about my AMD Ryzen 5 3600X suffering.

Huh? I am now confused.

By the way, where will the suffering be located? Is it in the decode unit?...

Pain signals are processed by the brain and suffering happens in the mind. So, theoretically, the suffering would be happening in the mind running on top of the simulated cortex, inside the matrix. All the hardware would be necessary to run the simulation. The hardware would not be experiencing the simulation. Just as individual electrons are not seeing red.

I never said I rejected reductionism.

I misunderstood then - you do seem unhappy with the standard reductionist model's position on emotions and experiences as states of mind.

I reject illusionism.

What do you mean by "illusionism"? Is it only the belief that AGI or a mind upload could be conscious? Or is there more to it?

Quite the opposite. A magical particle would be one that is inexplicably compatible with any and every representation of human-like consciousness (rocks, CPUs of arbitrary design) - with the term "human-like" also remaining undefined. I make no claims as to its size. I claim only that it is not an abstraction, and that therefore known physics does not seem to include it.

And how do you know that? Why do you think this unknown particle is not compatible with rocks and CPUs? Is it because you get to define its behaviour precisely as you need to answer a philosophical question a certain way?

What evidence would it take to falsify your belief in this primitive particle? What predictions does it allow you to make? Does it pay rent in anticipation?

Replies from: DPiepgrass
comment by DPiepgrass · 2020-06-15T15:47:10.241Z · LW(p) · GW(p)
I am now confused.

I don't know why. I have an AMD Ryzen 5 CPU and my earlier premise should make sense if you know what "single-threaded" means.

Why do you think this unknown particle is not compatible with rocks and CPUs?

I thought it was obvious, but okay... let X be a nontrivial system or pattern with some specific mathematical properties. I can't conceive of a rule by which any arbitrary physical representation of X could be detected, let alone interacted with. If a particle (or indivisible entity) does something computationally impossible (or even just highly intelligent), I call it magic.

Does it pay rent in anticipation?

It pays rent in sensation. I have a first-person subjective experience and I am unable to believe that it is only an abstraction. (Otherwise I probably would have turned atheist [LW(p) · GW(p)] much sooner.)

Replies from: jeronimo196
comment by jeronimo196 · 2020-06-16T16:09:43.534Z · LW(p) · GW(p)

I think of consciousness as a process (software) run on our brains (wetware), with the theoretical potential to be run on other hardware. I thought you understood my position. Asking me to pinpoint the hardware component that would contain suffering tells me you don't.

To me, saying the CPU (or the GPU) is conscious sounds like saying the CPU is Linux - this is a type error. A PC can be running Linux. A PC cannot actually be Linux, even if "running" is often omitted.

But if one doesn't know that "running" is omitted, one could ask where the Linux-ness comes from, if neither the CPU nor the RAM is itself Linux.
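A toy sketch of the type error (made-up names; in a statically typed language the last comparison would not even type-check):

```python
# "Running" vs. "being": a PC can run Linux; it is not itself Linux.

class OperatingSystem:
    def __init__(self, name):
        self.name = name

class PC:
    def __init__(self):
        self.running = None  # the OS currently hosted, if any

    def boot(self, os):
        self.running = os

linux = OperatingSystem("linux")
pc = PC()
pc.boot(linux)

print(pc.running is linux)  # True: the PC is *running* Linux
print(pc is linux)          # False: the PC is not Linux - asking which
                            # component "is" Linux is a question about
                            # the wrong type of thing
```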

If a particle (or indivisible entity) does something computationally impossible (or even just highly intelligent), I call it magic.

But it does know to interact with mammals and not with trees and diamonds? ... Argh! You know what, screw it. This is like arguing about how many angels can sit on top of a needle. Occam's razor says not to.

Does it pay rent in anticipation?

It pays rent in sensation.

Without falsifiable predictions, we have no way to differentiate a true ad-hoc explanation from a false one. Also, a model with no predictive powers is useless. Its only "benefit" would be to provide peace of mind as a curiosity stopper. (See https://www.lesswrong.com/posts/a7n8GdKiAZRX86T5A/making-beliefs-pay-rent-in-anticipated-experiences.) [LW · GW]

I have a first-person subjective experience and I am unable to believe that it is only an abstraction.

I honestly don't see the disconnect. I don't think the existence of a conscious AGI would invalidate my subjective experiences in the slightest. The explanation is always mundane ("only an abstraction"?), but that doesn't detract from the beauty of the phenomenon. (See https://www.lesswrong.com/posts/x4dG4GhpZH2hgz59x/joy-in-the-merely-real [LW · GW]).

(Otherwise I probably would have turned atheist much sooner.)

I believe you are right. Many people cite subjective personal experiences as their reason for being religious. This does make me doubt our ability to draw correct conclusions from such experiences.

Replies from: DPiepgrass
comment by DPiepgrass · 2020-06-23T22:16:17.425Z · LW(p) · GW(p)

So, I think we've cleared up the distinction between illusionism and non-illusionism (not sure if the latter has its own name), yay for that. But note that Linux is a noun and "conscious" is an adjective—another type error—so your analogy doesn't communicate clearly.

But it does know to interact with mammals and not with trees and diamonds?

I can't be sure of that. AFAIK, you are correct that we have no falsifiable predictions as of yet—it's called the "hard problem" for a reason. But illusionism has its own problems. The most obvious problem—that there is no "objective" subjective experience, qualia, or clear boundaries on consciousness in principle (you could invent a definition that identifies a "boundary" or "experience", but surely someone else could invent another definition with different boundaries in edge cases)—tends not to be perceived as a problem by illusionists, which is mysterious to me. I think you're saying the suffering has no specific location (in my hypothetical scenario), but that it still exists, and that this makes sense and you're fine with it; I'm saying I don't get it.

But perhaps illusionism's consequences are a problem? In particular, in a future world filled with AGIs, I don't see how morality can be defined in a satisfactory way without an objective way to identify suffering. How could you ever tell if an AGI is suffering "more" than a human, or than another AGI with different code? (I'm not asking for an answer, just asserting that a problem exists.)

Replies from: jeronimo196
comment by jeronimo196 · 2020-06-25T22:29:35.635Z · LW(p) · GW(p)

But note that Linux is a noun and "conscious" is an adjective—another type error—so your analogy doesn't communicate clearly.

Linux is also an adjective - Linux game/shell/word processor.

Still, let me rephrase then - I don't need a wet CPU to simulate water. Why would I need a conscious CPU to simulate consciousness?

AFAIK, you are correct that we have no falsifiable predictions as of yet.

Do you expect this to change? Chalmers doesn't. In fact, expecting to have falsifiable predictions is itself a falsifiable prediction. So you should drop the "yet". Only then can you see your position for the null hypothesis it is.

The most obvious problem—that there is no "objective" subjective experience, qualia, or clear boundaries on consciousness in principle (you could invent a definition that identifies a "boundary" or "experience", but surely someone else could invent another definition with different boundaries in edge cases)—tends not to be perceived as a problem by illusionists, which is mysterious to me.

There is not a single concept that could not be redefined. If this is a problem, it is not unique to consciousness.

"A process currently running on human brains" -although far from being a complete definition, already gives us some boundaries.

I think you're saying the suffering has no specific location (in my hypothetical scenario), but that it still exists, and that this makes sense and you're fine with it; I'm saying I don't get it.

Suffering is a state of mind. The physical location is the brain.

By stimulating different parts of the brain, we can cause suffering (and even happiness).

Another way to think about it is this - where does visual recognition happen? How about arithmetic? Both required a biological brain for a long, long time.

And for the hypothetical scenario - let's say I am playing CS and I throw a grenade - where does it explode?

But perhaps illusionism's consequences are a problem? In particular, in a future world filled with AGIs, I don't see how morality can be defined in a satisfactory way without an objective way to identify suffering. How could you ever tell if an AGI is suffering "more" than a human, or than another AGI with different code? (I'm not asking for an answer, just asserting that a problem exists.)

That's only the central problem of all of ethics, is it not? Objective morality? How could you tell if a human is suffering more than another human?

I don't see how qualia helps you with that one. It would be pretty bold to exclude AGIs from your moral considerations before excluding trees (and qualia has not helped you exclude trees!).

Edit: I now realize your position has little to do with Chalmers. Since you are postulating a qualia particle which has causal effects, you are a substance dualist. But why rob your position of its falsifiable prediction? Namely - before the question of consciousness is solved, the qualia particle will be found.

Or am I misrepresenting you again?

Replies from: DPiepgrass
comment by DPiepgrass · 2020-06-26T17:56:55.075Z · LW(p) · GW(p)

"Car" isn't an adjective just because there's a "Car factory"; Consider: *"the factory is tall, car, and red".

Do you expect this to change?

Yes, but I expect it to take a long time because it's so hard to inspect living human brains non-destructively. But various people theorize about the early universe all the time despite our inability to see beyond the surface of last scattering... ideas about consciousness should at least be more testable than ideas about how the universe began. Hard problems often suffer delays; my favorite example is the delay between the Michelson–Morley experiment's negative result and the explanation of that negative result (Einstein's Special Relativity). Here, even knowing with certainty that something major was missing from physics, it still took 18 years to find an explanation (though I see here an ad-hoc explanation was given by George FitzGerald in 1889 which pointed in the right direction). Today we also have a long-standing paradox where quantum physics doesn't fit together with relativity, and dark matter and dark energy remain mysterious... just knowing there's a problem doesn't always quickly lead to a solution. So, while I directly sense a conflict between my experience and purely reductive consciousness, that doesn't mean I expect an easy solution. Assuming illusionism, I wouldn't expect a full explanation of that to be found anytime soon either.

postulating a qualia particle

It was just postulation. I wouldn't rule out panpsychism.

Chalmers seems not to believe in a consciousness without physical effects - see his 80,000 Hours interview. So Yudkowsky's description of Chalmers' beliefs [LW · GW] seems to be either flat-out wrong, or just outdated.

Namely - before the question of consciousness is solved, the qualia particle will be found.

I do hope we solve this before letting AGIs take over the world, since, if I'm right, they won't be "truly" conscious unless we can replicate whatever is going on in humans. Whether EAs should care about insect welfare, or even chicken welfare, also hinges on the answer to this question.

Replies from: jeronimo196
comment by jeronimo196 · 2020-06-29T07:38:18.219Z · LW(p) · GW(p)

Thank you for this discussion.

I was wrong about grammar and the views of Chalmers, which is worse. Since I couldn't be bothered to read him myself, I shouldn't have parroted the interpretations of someone else.

I now have a better understanding of your position, which is, in fact, falsifiable.

We do agree on the importance of the question of consciousness. And even if we expect the solution to have a different shape, we both expect it to be embedded in physics (old or new).

I hope I've somewhat clarified my own views. But if not, I don't expect to do better in future comments, so I will bow out.

Again, thank you for the discussion.

Replies from: DPiepgrass
comment by DPiepgrass · 2020-07-03T07:55:20.917Z · LW(p) · GW(p)

Yeah, this was a good discussion, though unfortunately I didn't understand your position beyond a simple level like "it's all quarks".

Yeah, this was a good discussion, though unfortunately I didn't understand your position beyond a simple level like "it's all quarks".
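A toy sketch of what I mean (made-up names, not the software I actually wrote): to the computer, a "grenade exploding" and a pixel changing colour are the same kind of event - a bit pattern changing.

```python
# A virtual "explosion" is just values changing in memory, structurally
# no different from a pixel changing from brown to red.

world = {"pixel_103_39": "brown", "player_health": 100}

def explode_grenade(world):
    world["player_health"] -= 80     # the "explosion"
    world["pixel_103_39"] = "red"    # the pixel change

explode_grenade(world)
print(world)  # {'pixel_103_39': 'red', 'player_health': 20}
# Both updates are the same kind of event to the machine.
```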

I intuit that at least one part* of human brains is different, and if I am wrong it seems that I must be wrong either in the direction of "nothing really matters: suffering is just an illusion" or, less likely, "pleasure and suffering do not require a living host, so they may be everywhere and pervade non-living matter", though I have no idea how this could be true.

* after learning about the computational nature of brains, I noticed that the computations my brain does are invisible to me. If I glance at an advertisement with a gray tube-nosed animal, the word "elephant" comes to mind; I cannot sense why I glanced at the ad, nor do I have any visibility into the processes of interpreting the image and looking up the corresponding word. What I feel, at the level of executive function, is only the output of my brain's computations: a holistic sense of elephant-ness (and I feel as though I "understand" this output—even though I don't understand what "understanding" is). I have no insight into what computations happened, nor how. My interpretation of this fact is that most of the brain is non-conscious computational machinery (just as a human hand or a computer is non-conscious) which is connected to a small kernel of "consciousness" that feels high-level outputs from these machines somehow, and has some kind of influence over how the machinery is subsequently used. Having seen the movie "Being John Malkovich", and having recently heard of the "thousand brains theory", I also suppose that consciousness may in fact consist of numerous particles which likely act identically under identical circumstances (like all other particles we know about) so that many particles might be functionally indistinguishable from one "huge" particle.

Replies from: TAG
comment by TAG · 2020-07-03T13:54:00.667Z · LW(p) · GW(p)

It's not true that particles behave identically under identical circumstances -- that would be determinism.

If it were true, it wouldn't only apply to consciousness, or mean that "consciousness is One" in some sense that doesn't apply to everything else.

There's a lot of information in N particles. If you want to conserve it all, your huge particle has to exist in 3*N-dimensional space. But a particle moving freely in 3*N-dimensional space would not generally behave locally when viewed in 3 dimensions, so you also need constraints to recover locality. Which is basically the argument for space really being 3-dimensional.

comment by TAG · 2020-05-24T16:54:03.107Z · LW(p) · GW(p)
 If someone were to invent a test for qualia, perhaps we couldn’t even tell if it works properly without solving the hard problem of consciousness.

Even without solving the hard problem of consciousness, as long as we agree that consciousness is a property the human mind has, the test can be administered by a paramedic with a flashlight.

Qualiaphiles don't think qualia are something other than a property the mind has; they think qualia are not open to any obvious third-party inspection, like shining a flashlight.

If you define consciousness as the thing EMTs can check with a flashlight, all you have done is leave qualia out of the definition: you haven't solved any problem of qualia.

Replies from: jeronimo196
comment by jeronimo196 · 2020-05-29T12:12:11.634Z · LW(p) · GW(p)

Yes. Once I define qualia as "conscious experience", I necessarily have to leave it out of the definition of "consciousness" (whatever that may be).

My point is that only the question of consciousness remains. And consciousness is worth talking about only if human brains exhibit it.

I am not trying to solve the question of qualia; I am trying to dissolve it as improper.

P.S. Do you mind tabooing "qualia" in any further discussion? This way I can be sure we are talking about the same thing.

comment by Caperu_Wesperizzon · 2022-08-29T06:10:37.918Z · LW(p) · GW(p)

Ignore the entire machinery of rationality. Treat all human interaction as nothing more than social grooming or status games in a tribe of apes.

Is there actually anything else to human interaction?

It makes no sense to expect people to engage the machinery of rationality when they don't believe it'll further their goals. Even if they benefit from being privately rational, it's not necessarily in their interest to share their rationality with you. Hence, if you haven't earned their respect, they'll conceal their wisdom from you, like the Spartans.

In fact, pretty much everything in Eliezer's post seems to apply only to the rare situation of two or more people who respect each other enough to actually feel a need to appear logically consistent and make their lies plausible. Usually at least one of the people is in no real need to convince the other of anything (i.e., they have higher status), so they won't waste any time or energy trying to. Therefore, their statements serve other purposes; mainly, to display their high status and to warn the underling when they're getting too close to a line they won't let them cross unpunished. Conspicuously wasting the interlocutor's time with nonsense serves this purpose very well.

Status, status, status. It gets (some of) us every time. There seems to be very little to life but status to a normal person.

comment by Richard_Kennaway · 2008-10-18T08:15:46.000Z · LW(p) · GW(p)

Daniel: A close second is "don't try to argue with the devil -- he has more experience at it than you".

Would you still disagree with that one if "the devil" was replaced by "a strong AI"?

comment by Alexandros3 · 2008-10-18T10:51:55.000Z · LW(p) · GW(p)

How about the notion of an insult as a first-order offence? "Don't insult God/Our Nation/The People/etc.". It is an explicit emotional fortress that reason cannot by definition scale. When it goes near there, all the 'intelligence defeating itself' mechanisms come into play. We take the fortress as our starting argument and start to think backwards until our agitated emotions are satisfied by our half-reasonable but beautiful explanation of why the fortress is safe and why what caused us to doubt it is either not so or can be explained some other way. Ergo, one step deeper into dark epistemology.

comment by Daniel_Franke · 2008-10-18T11:14:51.000Z · LW(p) · GW(p)

Would you still disagree with that one if "the devil" was replaced by "a strong AI"?

Yes. Suffice it to say I don't think I'd be a very reliable gatekeeper :-).

(Conversely, I don't think the AI's job in the box experiment is even hard, much less impossible. Last week, I posted a $15 offer to play the AI in a run of the experiment, but my post disappeared somehow.)

comment by Jef_Allbright · 2008-10-18T13:43:33.000Z · LW(p) · GW(p)

I'm in strong agreement with Peter's examples above. I would generalize by saying that the epistemic "dark side" tends to arise whenever there's an implicit discounting of the importance of increasing context. In other words, whenever, for the sake of expediency, "the truth", "the right", "the good", etc., is treated categorically rather than contextually (or equivalently, as if the context were fixed or fully specified).

comment by Caledonian2 · 2008-10-18T13:47:00.000Z · LW(p) · GW(p)
Too restrictive. Science is not synonymous with the hypothetico-deductive method, and nor is there any sort of thing called the "scientific method" from which scientists draw their authority on a subject. Neither is it a historically accurate description of how science has done its work. Read up on Feyerabend. Science is inherently structureless and chaotic. It's whatever works.

See, now there's a prime example of corrupted reasoning right there. Science is carefully structured chaos, ordered according to certain fundamental principles. Meeting those principles is what we mean when we talk about something 'working'.

The recognition of what 'working' is, and the tools that have been found useful in reaching that state, is what constitutes the scientific method.

Scientists do not concern themselves with what philosophers say about science -- it is my experience that they are actively contemptuous of such. Yet science goes on. Strange, isn't it? It's almost as though the philosophers didn't know what they were talking about.

(Additional: the central metaphor of this discussion is flawed - the Light and Dark sides define and require each other; contrastingly, both Jedi and Sith are corruptions and failures to properly represent the two sides of the Force. Accept one, and you reject the truth of things.)

Replies from: None, TheAncientGeek
comment by [deleted] · 2014-09-02T21:56:03.931Z · LW(p) · GW(p)

"Scientists do not concern themselves with what philosophers say about science -- it is my experience that they are actively contemptuous of such. Yet science goes on. Strange, isn't it? It's almost as though the philosophers didn't know what they were talking about."

This is a rather tribalistic disciplinary dogmatism, which is really quite out of step with your subsequent claim to universal monological truth (scientists think it works, so who cares what philosophers think) - a clear demonstration of Archimedean rationality...

Replies from: Keith_Coffman
comment by Keith_Coffman · 2014-09-03T12:48:05.420Z · LW(p) · GW(p)

Do scientists think it works, or does it work? The end result is a model for a particular phenomenon which can be tested for accuracy. When we use a cell phone we are seeing the application of our understanding of electromagnetism, among other things. It's not scientists saying that science works - it's just working.

Replies from: None
comment by [deleted] · 2014-09-03T16:44:48.797Z · LW(p) · GW(p)

Can you clarify what your point is?

My original objection, to which you responded, although not explicit, was that 'science going on' is not sufficient reason to conclude that philosophers of science 'don't know what they are talking about' - the entire post is puerile dogmatism.

Replies from: Keith_Coffman
comment by Keith_Coffman · 2014-09-03T17:07:22.860Z · LW(p) · GW(p)

My point was not really related to your discussion, I just wanted to clarify on your paraphrasing of "scientists think it works, so who cares what philosophers think."

I think it is slightly silly to worry about who thinks it works when the fact of the matter is that it works - this is not a point directly against your comments, just a point of clarification in general.

comment by TheAncientGeek · 2014-09-03T17:30:19.892Z · LW(p) · GW(p)

Scientists do not concern themselves with what philosophers say about science -- it is my experience that they are actively contemptuous of such.

These comments are largely true.

Yet science goes on. Strange, isn't it? It's almost as though the philosophers didn't know what they were talking about.

These comments don't follow from the above. Yes, scientists don't need philosophers to tell them how to do science, which they can do on a riding-a-bike basis. That doesn't mean the philosophers are wrong. Birds don't need scientists to tell them how to fly... that doesn't mean the scientists are wrong.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-10-18T14:06:30.000Z · LW(p) · GW(p)
Roland: The whole idea of jedis vs. siths reflects a Manichaeistic worldview (good vs. bad).

That was part of my point - that, in this one facet of human endeavor, and in modern times rather than ancient ones, it's remarkable the extent to which an actual Light Side Epistemology and Dark Side Epistemology have developed. Like the sort of contrast that naive people draw between Their Party and the Other Party, only in real life.

comment by Paul_Crowley2 · 2008-10-18T16:27:21.000Z · LW(p) · GW(p)
  • There's a huge conspiracy covering it up

  • Well, that's just what one of the Bad Guys would say, isn't it?

  • Why should I have to justify myself to you?

  • Oh, you with your book-learning, you think you're smarter than me?

  • They said that to Einstein and Galileo!

  • That's a very interesting question, let me show you the entire library that's been written about it (whereas if there were a satisfactory answer, it would be shortish)

  • How can you be so sure?

comment by Douglas_Knight3 · 2008-10-18T16:32:43.000Z · LW(p) · GW(p)

Marcello, I think your list generalizes too much. I see three main types of words on the list. The first type indicates in-group out-group distinction and seems pretty poisonous to me. The second are ad hominem arguments which are dangerous, but do apply sometimes. And then there are a few like "too complicated." You call those "negative affect words"? Surely it is better to say "that is too complicated to be true" than to say simply "that is not true"?

comment by IL · 2008-10-18T16:49:42.000Z · LW(p) · GW(p)

-You can't prove I'm wrong!

-Well, I'm an optimist.

-Millions of people believe it, how can they all be wrong?

-You're relying too much on cold rationality.

-How can you possibly reduce all the beauty in the world to a bunch of equations?

comment by Marcello · 2008-10-18T16:56:02.000Z · LW(p) · GW(p)

Douglas says: "And then there are a few like 'too complicated.' You call those 'negative affect words'? Surely it is better to say 'that is too complicated to be true' than to say simply 'that is not true'?"

Well, yes, but that's only when whatever you mean by "complicated" has something to do with being true. Some people, though, use the phrase "too complicated" just so they can avoid thinking about an idea, and in that context it really is an empty negative-affect phrase.

Of course, it is better for a scientist to say "that's too complicated to be true" rather than just "that's not true." You're not done by any means once you've made a claim about whether something is true or false; the claim still needs to be backed up. The point was simply that any characterization of an idea is bad unless that characterization really does have something to do with whether the idea is true.

comment by JamesAndrix · 2008-10-18T17:24:58.000Z · LW(p) · GW(p)

That was part of my point - that, in this one facet of human endeavor, and in modern times rather than ancient ones, it's remarkable the extent to which an actual Light Side Epistemology and Dark Side Epistemology have developed. Like the sort of contrast that naive people draw between Their Party and the Other Party, only in real life.

That sounds a lot more like you're being subject to the same bias. "Some people have this view, even though reality is more complex, but what's amazing is that in a subject area I care a lot about, that's what's there."

Yes, if you label the things you accept Light, and the things you reject Dark, you'll see that dichotomy, but why that grouping?

Is traditional rationality Light side? or just bayesianism?

The dark side might be more appropriately grouped into a few different schools.

There will be classes of similar rules that contain both light and dark members.

Both sides have always been around; some of the light side rules might be new, and it is new to group the light side together as the things that work best.

But they are not opposed to each other. Just as physics doesn't care if you suffer, logic doesn't care if you get the right answer. There is no battle for our minds. Humans argue about the origin of life, but all existing humans use a combination of light and dark thinking. Creationists can look for evidence and evolutionists can say irrational things for their own psychological defense. The 'sides' coexist quite peacefully, not at all like competing bands of primates.

And this might be a reason that it's so hard to get rid of bad thinking even in ourselves. The light side doesn't have any alarm bell defenses against the dark side.

comment by Alan_Crowe · 2008-10-18T17:53:23.000Z · LW(p) · GW(p)

"one man's modus ponens in another man's modus tollens."[1][2] is maxim that is easily weaponised by the Dark Side by taking it in a one sided way. One sees ones own implications as proving their consequents and the other sides implications as casting doubt on their antecedents.

comment by NancyLebovitz · 2008-10-18T18:20:01.000Z · LW(p) · GW(p)

If you once tell a lie, the truth is ever after your enemy.

That isn't true.

I told lies when I was a kid. If I got caught, I gave up rather than mounting an epistemological attack.

Richard Kennaway: "I feel that X." Every sentence of this form is false, because X is an assertion about the world, not a feeling. Someone saying "I feel that X" in fact believes X, but calling it a feeling instead of a belief protects it from refutation. Try replying "No you don't", and watch the explosion. "How dare you try to tell me what I'm feeling!"

If I say I feel something, I'm talking about an emotion. I don't intend it to be an objective statement about the world, and I'm not offended if someone says it doesn't apply to everyone else.

Replies from: dmitrii-zelenskii
comment by Дмитрий Зеленский (dmitrii-zelenskii) · 2019-08-15T22:08:41.124Z · LW(p) · GW(p)

"If you once tell a lie..." should, of course, read "If you once tell a lie then, until you give it up...".

comment by Richard_Kennaway · 2008-10-18T19:16:48.000Z · LW(p) · GW(p)

Nancy Lebovitz: If I say I feel something, I'm talking about an emotion.

That prohibits you from saying "I feel that X". No emotion is spoken of in saying "I feel that the Riemann hypothesis is true", or "I feel that a sequel to The Hobbit should never be made", or "I feel that there is no God but Jaynes and Eliezer (may he live forever) is His prophet", or in any other sentence of that form. "I feel" and "that X" cannot be put together and make a sensible sentence.

If someone finds themselves about to say "I feel that X", they should try saying "I believe that X" instead, and notice how it feels to say that. It will feel different. The difference is fear.

Replies from: buybuydandavis
comment by buybuydandavis · 2013-04-02T06:44:23.856Z · LW(p) · GW(p)

There is no God but Jaynes and Eliezer is His prophet

That's kind of catchy.

comment by Phil_Boncer · 2008-10-18T20:43:43.000Z · LW(p) · GW(p)

I believe that there are circumstances in which you can say "I feel that X". What that could rationally mean is that you yourself recognize that you do not have enough evidence or knowledge to justify a belief about X vs. not-X, but that without evidence you lean toward X because you like that alternative. You are admitting ignorance on the subject. Ideally, this would then also imply an openness with regard to forming a belief about X or not-X given some evidence -- that recognition that all you have is a feeling about it means a very weak attachment to the idea of X.

PhilB

comment by Peter3 · 2008-10-18T20:55:07.000Z · LW(p) · GW(p)

Caledonian: What fundamental principles? As far as I can tell the only fundamental principle is that it has to work. But I'm open to counterexamples, if you are.

The recognition of what 'working' is, and the tools that have been found useful in reaching that state, is what constitutes the scientific method.

The scientific method is actually pretty specific - and it is not a set of tools. There is no systematic method of advancing science, no set of rules/tools which are exclusively the means to attaining scientific knowledge.

Scientists do not concern themselves with what philosophers say about science -- it is my experience that they are actively contemptuous of such. . . It's almost as though the philosophers didn't know what they were talking about.

That's actually my point. Scientists do what works, and employ methodological diversity - the "scientific method" is not an actual description of how real scientists do their work, nor how real science has advanced. It's propaganda, made up by certain people who were/are absolutely horrified that science has no defining and fundamental underlying principles - which would throw their entire schema of epistemology into turmoil.

The "rules" of science, if they exist, are subject to change at any time. Science has physical reality at the input and useful models at the output - and no bona fide, tried and true, structure in between.

Replies from: Keith_Coffman
comment by Keith_Coffman · 2014-08-31T16:13:03.214Z · LW(p) · GW(p)

The "rules" of science, if they exist, are subject to change at any time.

Here's a rule of science: Your hypothesis must make testable predictions. It must be falsifiable. Is that "subject to change at any time"? I bet there are more.

While it may not perfectly describe how actual scientists do their work all the time, the scientific method is a description of the process of how we sort out good ideas/models from bad ones, which is the quintessential goal of science (the "advancement of science," if you will).

Just to be clear on what we are discussing, here is the Oxford English Dictionary definition (I don't like using dictionaries as authorities; I think it's stupid. This is just to have a working definition on the table): "A method or procedure... consisting in systematic observation, measurement, and experiment, and the formulation, testing, and modification of hypotheses."

In order for the scientific community to take a claim seriously, there are certain expectations that must be satisfied such as a reproducible experiment, peer reviewed publication, etc. When a hypothesis is proposed (assuming it has already met the baseline requirement of making testable predictions), it is thrust into the death pit of scientific inquiry where scientists do everything they can to test and falsify it. While the subject matter may span vastly different areas of science, this process is still generally followed.

Scientists who do science for a living may have gotten good at this process, so much so that they do it without belaboring each element as you would in a middle school science class, but they do it nevertheless. It is true that in the past, bad science happened, and even today lapses in scientific integrity happen; however, the reason science is given the authority it has is its strict adherence to the above process. (Also, as a disclaimer, there are many nuances to said process that I glossed over; I just wanted to get the general idea across.)

If I may go out on a limb here, it sounds to me like the chaos you are talking about is the unavoidably arbitrary nature of observing phenomena and the unavoidably arbitrary nature of proposing hypotheses. Oftentimes throughout history we have encountered entirely new areas of science by sheer accident. Likewise (unless they are making a phenomenological model), scientists have no better way to propose hypotheses than to guess at what the answer is based on the observations they currently have, and then make new observations/experiments to see if they were right.

So I definitely agree with you about the chaotic nature of our stumbling across new phenomena on our quest to understand reality, but to say that the process we go through to establish scientific knowledge is not systematic seems a bit extreme.

comment by Nick_Tarleton · 2008-10-18T21:17:59.000Z · LW(p) · GW(p)

You haven't earned the right to say X.

comment by Daniel_Franke · 2008-10-18T22:07:21.000Z · LW(p) · GW(p)

You haven't earned the right to say X.

I think that one is poorly phrased but defensible. You can think of it as shorthand for "Your life experiences have provided you with an insufficient collection of Bayesian priors to permit you to assert X with any reasonable certainty".

comment by JulianMorrison · 2008-10-18T22:40:30.000Z · LW(p) · GW(p)

The worst one is "this is my truth". The ultimate victory of map over territory. In the universe I create, rocks fall up. Forcing me to believe in "gravity" usurps my proper role as divine map-maker. Your "reason" and "evidence" are just a power grab. I choose not to believe the rock I'm about to drop on my toes will hurt. Ouch! You bastard, you contaminated my purity of self-definition.

comment by Thom_Blake · 2008-10-19T02:26:00.000Z · LW(p) · GW(p)

"Everyone has a right to their own opinion" is largely a product of its opposite. For a long period many people believed "If my neighbor has a different opinion than I do, then I should kill him". This led to a bad state of affairs and, by force, a less lethal meme took hold.

Replies from: orthonormal
comment by orthonormal · 2020-08-05T17:59:26.876Z · LW(p) · GW(p)

Exactly - it's not epistemics, it's a peace treaty.

comment by NancyLebovitz · 2008-10-19T07:25:07.000Z · LW(p) · GW(p)

To Richard Kennaway:

Your original point, which I didn't read carefully enough:

"I feel that X." Every sentence of this form is false, because X is an assertion about the world, not a feeling. Someone saying "I feel that X" in fact believes X, but calling it a feeling instead of a belief protects it from refutation. Try replying "No you don't", and watch the explosion. "How dare you try to tell me what I'm feeling!"

"No, you don't" sounds like a chancy move under the circumstances. Have you tried "How sure are you about X?" and if so, what happens?

More generally, statements usually imply more than one claim. If you negate a whole statement, you may think that which underlying claim you're disagreeing with is obvious, but if the person you're talking to thinks you're negating a different claim, it's very easy to end up talking past each other and probably getting angry at each other's obtuseness.

My reply: If I say I feel something, I'm talking about an emotion.

You again: That prohibits you from saying "I feel that X". No emotion is spoken of in saying "I feel that the Riemann hypothesis is true", or "I feel that a sequel to The Hobbit should never be made", or "I feel that there is no God but Jaynes and Eliezer (may he live forever) is His prophet", or in any other sentence of that form. "I feel" and "that X" cannot be put together and make a sensible sentence.

If someone finds themselves about to say "I feel that X", they should try saying "I believe that X" instead, and notice how it feels to say that. It will feel different. The difference is fear.

It sounds to me as though you've run into a community (perhaps representative of the majority of English speakers) with bad habits. I, and the people I prefer to hang out with, would be able to split "I feel that x" into a statement about emotions or intuitions and a statement about the perceived facts which give rise to the emotions or intuitions.

I believe that "I believe that a sequel to The Hobbit should never be made" is emotionally based. Why would someone say such a thing unless they believed that the sequel would be so bad that they'd hate it?

Here's something I wrote recently about the clash between trying to express the feeling that strong emotions indicate the truth and universality of their premises, and the fact that the real world is more complicated.

comment by pdf23ds · 2008-10-19T07:57:13.000Z · LW(p) · GW(p)

"I feel that X" really means, "I believe X, and accept that others will likely disagree." The purpose is to serve as a conversational marker showing that disagreement is expected. When used properly, this is simply to grease the wheels of discourse a bit, making it more likely that the respondent will have the proper idea about the attitude the speaker takes towards the idea, not to imply that the disagreement will be taken as unresolvable. It makes discourse more efficient. Of course, it can be misused in the way that Richard complains about, but I think he's being obtuse to be against the phrase in every manifestation, and especially obtuse in the way he frames his disagreement.

comment by Richard_Kennaway · 2008-10-19T09:52:12.000Z · LW(p) · GW(p)

I am being forthright, not obtuse. I say again that there is no statement of the form "I feel that X" that would not be rendered more accurate by replacing it with "I believe that X". That people use the word "feel" in this way does not make it a statement about feelings: it remains a statement about beliefs. Neither of those statements actually contains any expression of a feeling about X. Here is one that does: "I am angry that X". Compare "I feel that X" -- what is the feeling? It is not there. In a larger context, the listener may be able to tell, but if they can, they can do so equally well from "I believe that X".

I believe that "I believe that a sequel to The Hobbit should never be made" is emotionally based.

It might well be. But the emotions would not be communicated any better by using the word "feel". They are not communicated at all by either word. (I can think of other reasons why someone might object to a sequel: for example, some people have an ethical objection to fanfiction.)

And no, I've never actually responded to an "I feel that" with a blunt "No you don't". It would rarely help. But I do know people that would call me on it if I ever used the expression, as I would them. A lot of the time -- I am talking about actual, specific experience here, not vague generalisation -- people react emotionally to beliefs they are holding that they have never actually stated out loud as beliefs, and asked "Are these actually true?" Until you have noticed what you believe, you cannot update your beliefs. I-feel-thats avoid that confrontation.

To use "feel", as a couple of people suggested, to mean "tentative belief" changes only the map: there are still no actual feelings being expressed, just a word that has been blurred. This does not grease the wheels of discourse, it gums them up. Better to reserve "feel" for feelings and "believe" for beliefs, for it is a short step from calling them both by the same name to passing them off as the same thing, and then you are on the Dark Side, whether you know it or not. State something as a belief and you open yourself to the glorious possibility of being proved wrong. Call it a feeling and you give yourself a licence to ignore reality.

Replies from: mantis
comment by mantis · 2012-08-21T16:46:31.994Z · LW(p) · GW(p)

Probably silly to reply almost four years later, but what the heck. I think that in a lot of cases "I feel that X" is a statement of belief in belief. That is, what the person really means is "I believe that X should be true," or "I have an emotional need to believe that X is true regardless of whether it is or not." Since you're very unlikely to get someone who thinks "I feel that X" is a valid statement in support of X to admit what they really mean, it is indeed an excellent example of Dark epistemology.

comment by outofculture · 2008-10-19T18:12:12.000Z · LW(p) · GW(p)

Hyperbole as a perversion of projection: arguments like "...and next you'll be killing AI developers who disagree with FAI, to prevent them posing an existential threat" that contain both enough clear reasoning and enough unknowable elements to sound possible - plausible, even. This is used to discredit the original idea, not the fantastical extrapolation.

comment by Tiedemies2 · 2008-10-20T10:21:00.000Z · LW(p) · GW(p)

How about the all-time great, now better than ever:
This time it will be different

comment by alexandros2 · 2008-10-20T15:27:00.000Z · LW(p) · GW(p)

Another good candidate may be revealed in the following Dostoevsky quote:

"If someone were to prove to me that Christ is outside of the truth, and it were truly so, that the truth was outside of Christ, I would prefer to remain with Christ, rather than with the truth."
[http://books.google.co.uk/books?ct=result&q=%22Christ+is+outside+of+the+truth%22&btnG=Search+Books]

Substitute your favourite deity/belief system for 'Christ'. This was the epistemological line I was not able to cross during my Christian journey. Others may cross it, however, and once it is crossed, there may be little that can be done to rescue that person (other than perhaps pure shock and awe at the repercussions of such a departure from reality). If this is not the root of a 'dark side epistemology', it is certainly the pinnacle of it, the final lie that must be accepted to justify all the ones that came before it.

comment by Richard_Kennaway · 2008-10-21T06:17:00.000Z · LW(p) · GW(p)

An interesting contrast to that is C.S. Lewis (through one of his characters): "I’m on Aslan’s side even if there isn’t any Aslan to lead it. I’m going to live as like a Narnian as I can, even if there isn’t any Narnia."

comment by Jeff4 · 2008-10-29T02:28:00.000Z · LW(p) · GW(p)

I agree with Thom Blake: "Everyone has a right to their own opinion" is a defense against unreliable hardware. "Your opinion is wrong, so I must kill you and take your women," or even just "your opinion is wrong, so I must repress you."

comment by Chris_Wegford · 2008-11-01T12:50:00.000Z · LW(p) · GW(p)

For a long time, I've had problems with phrases that treat Pride as a good thing. i.e. "Take some pride in X" "Where is your pride?" "Have you no pride?"

I realize that in the past, Pride may have had many positive evolutionary values, but in modern times, we have more efficient and accurate ways to test for usefulness and prowess among our population.

Replies from: taryneast
comment by taryneast · 2011-01-30T16:05:11.130Z · LW(p) · GW(p)

From what I can tell - this is actually just the flip-side of shame. Shame is often used to coerce people into (or out of) certain behaviours.

Contrast with: "Where is your shame?", "Have you no shame?"

comment by AndySimpson · 2009-03-11T07:06:00.000Z · LW(p) · GW(p)

There are two of these Generic Defenses, iterations of this species of logical fallacy, that I've found particularly vile. They may collapse into one. First, the extension of "tolerance" to assertions, e.g. "Be tolerant of my creationist beliefs", which means "My creationist beliefs are immune to discourse or thought: they command respect simply because they are my assertions," but disguises itself in the syntax of a honeyed pluralistic truism like "Be tolerant of people who hold opinions that aren't yours."

The other is the notion of false balance, which is a palatable and pervasive trope of people who are talking nonsense, e.g. "There are two sides to the dinosaur debate: Some scientists believe in dinosaurs, and others think God has put fossils in the ground to test our faith in Him. Isn't it interesting to consider the arguments of both sides? I guess we'll never know the real answer!"

That stuff drives me mad.

comment by Fyrius · 2010-03-05T11:55:30.340Z · LW(p) · GW(p)

Arguably, another one is the adage that when people disagree on anything very strongly, "the truth is usually in the middle."

It's not entirely nonsensical to anticipate and correct for people's tendency to exaggerate away from their perceived enemy, but it's not a reliable rule of thumb at all. It's not all that hard to find situations where one side is just wrong.

Here's a better way to take polarisation into account: instead of concluding that "both sides are probably a bit right", it would be more realistic to say "both sides are probably wrong". Or better yet: "what both sides think is irrelevant, I'm just going to ignore the whole business and figure it out for myself."

comment by simplicio · 2010-03-06T05:29:02.329Z · LW(p) · GW(p)

The worst of them all is probably to judge an idea by some real or perceived characteristics of its proponents (e.g., "strident"). Taken to an extreme this leads to whining about issues like tone while ignoring content.

Sometimes jerks are right.

comment by Swimmy · 2010-10-15T01:30:10.857Z · LW(p) · GW(p)

"Cui bono?" Who benefits?

I believe the Dark Side coopted "cui bono?" because it has a valid usage: those who benefit from various policies may falsify or embellish their opinions, and "cui bono?" can sometimes identify faked opinions. (For instance, why do many businesses support minimum wage hikes?) A rationalist should count a suspect opinion as weaker evidence than a non-suspect opinion.

But the dark side uses it thus: if someone benefits, the belief is wrong and the evidence in its favor can be dismissed.

Example: "Who benefits from the story of the Holocaust? Israel. The Holocaust raises sympathy for Jews worldwide, and sympathizing voters and politicians in the United States and Europe enable Israel's continued existence."

This is 1) Not the rationalist use of "cui bono" and 2) COMPLETELY INSANE. Holocaust deniers use "cui bono?" to question whether the Holocaust actually happened. They figure that the fact that someone benefits is enough to support a worldwide, 65-year-long conspiracy theory. No matter how much suspicious motives may make us wary of someone, the independent lines of evidence leading to the historical event of the Holocaust blow them out of the water. "Cui bono?" is so weak in comparison that it can be completely ignored when estimating the likelihood of "The Holocaust happened."

This usage can probably be categorized as a subset of all Type M arguments.

comment by ata · 2011-01-05T00:52:51.956Z · LW(p) · GW(p)

An amusing Onion parody of anti-epistemology and crackpots: Rogue Scientist Has His Own Scientific Method

comment by quinsie · 2011-09-29T21:59:40.169Z · LW(p) · GW(p)

One method I've seen no mention of is distraction from the essence of an argument with pointless pedantry. The classical form is something along the lines of "My opponent used X as an example of Y. As an expert in X, which my opponent is not, I can assure you that X is not an example of Y. My opponent clearly has no idea how Y works and everything he says about it is wrong." - which only holds true if X and Y are in the same domain of knowledge.

A good example: Eliezer said in the first paragraph that a geologist could tell a pebble from the beach from one from a driveway. As a geologist, I know that most geologists, myself included, honestly couldn't tell the difference. Most pebbles found in concrete, driveways and so forth are taken from rivers and beaches, so a pebble that looks like a beach pebble wouldn't be surprising to find in someone's driveway. That doesn't mean that Eliezer's point is wrong, since he could have just as easily said "a pebble from a mountaintop" or "a pebble from under the ocean" and the actual content of this post wouldn't have changed a bit.

In a more general sense, this is an example of assuming an excessively convenient world in which to fight the enemy's arguments, but I think this specific form bears pointing out, since it's a bit less obvious than most objections of that sort.

comment by buybuydandavis · 2011-10-29T05:13:31.730Z · LW(p) · GW(p)

The dangerous thing is to have a false belief that you believe should be protected as a belief ...

Time for some Stirner:

No thought is sacred, for let no thought rank as "devotions"; no feeling is sacred (no sacred feeling of friendship, mother's feelings, etc.), no belief is sacred. They are all alienable, my alienable property, and are annihilated, as they are created, by me.

The Christian can lose all things or objects, the most loved persons, these "objects" of his love, without giving up himself (i.e., in the Christian sense, his spirit, his soul) as lost. The owner can cast from him all the thoughts that were dear to his heart and kindled his zeal, and will likewise "gain a thousandfold again," because he, their creator, remains.

comment by Sengachi · 2012-12-09T22:09:36.180Z · LW(p) · GW(p)

You know, the Jedi had bad epistemology, same as the Sith. For instance: "Only the Sith speak in absolutes!" ... Give it a moment. Think about it. "Only" is what kind of modifier again?

Replies from: dmitrii-zelenskii
comment by Дмитрий Зеленский (dmitrii-zelenskii) · 2019-08-15T22:15:21.628Z · LW(p) · GW(p)

Attributing Obi-Wan's highly emotional statement in the situation of the Order's destruction to all the Jedi is a no-go. They did have problems with their actions, but more of the kind of being, so to speak, "too careful".

comment by Rixie · 2012-12-22T00:01:00.819Z · LW(p) · GW(p)

I love this. I just . . . this is awesome. You rock. Thank you.

comment by ikrase · 2013-01-12T02:09:49.823Z · LW(p) · GW(p)

Here are some:

  • Your epistemology is just a ploy so that only the university-educated can defend opinions
  • If I credibly claim that I have suffered a sufficiently large wrong imposed at the national culture level, then I may dictate my proper place and anybody who questions me, even just to ask whether my claim is really credible, is participating in that national-culture-level wrong.
  • Endless hypothesis privileging.
comment by MarsColony_in10years · 2015-04-03T18:37:54.435Z · LW(p) · GW(p)

A popular sentiment is "I don't care about X!!!". Sometimes this even appears in memes proudly lauding the "fact" of their non-caring about whatever X happens to be. While it may be wise to take people's knee-jerk disapproval with a grain of salt, clearly we as humans are wired in such a way as to care what others think, for better or for worse. Instead of facing our emotions head-on and admitting that we do care, it is much easier not to reveal to the world how fragile we are.

An interesting specific case study (although I haven't been able to generalize a more broad category) would be the argument that Pluto is or should be a planet. People who argue this tend to know a couple of the reasons why it makes sense for scientists to use the term "planet" to refer to the larger bodies, while using the words "dwarf planet" to indicate objects like Pluto and Ceres with another set of characteristics. Their argument is interesting in that it doesn't seem to occur on the factual level at all, but purely on the emotional, gut reaction level. In other cases, the two usually get tangled up rapidly, with factual arguments on both sides, but this does not occur here. It's purely an argument about terminology, and the position that our terminology should not be optimized to reflect the facts of reality, but rather our own traditions and desires for their own sakes alone. There aren't even any arguments that such an awkward naming convention would be instrumentally valuable, say by letting scientists keep the same terminology they've been using for years. You see exactly those sorts of arguments for not switching to the Metric system, but not regarding Pluto for some reason.

But what might this tell us about how emotionally based opinions are formed? What's different in this case? Well, first of all, it is extremely obvious that the outcome of the disagreement will have no harmful impact on anyone anywhere. With political arguments, there's always a victim, no matter who wins. The question is whose grievance is worse, or which victim you identify with more. That's how you choose your side. So, when you need to prove that your side has it worse off, you need to bring facts into the discussion to prove it. We quickly get defensive, and then start rationalizing.

But I wouldn't call what's going on in the Pluto debate "rationalizing". It's not starting with a conclusion, and then trying to find evidence and arguments to justify it. It's not starting with evidence either though. It's the raw belief itself, without any supporting evidence or justifications tacked on. If you want a rationalization, you actually have to probe someone for it, and even then they may or may not respond with one. They are just as likely to respond with "but it's just really sad that Pluto is getting demoted". It's pure nostalgia, or personification of the inanimate, or some mix of other emotions with no logic attached.

No amount of facts or logic will win against this. Instead, the only winning move is to get them to love truth, to find beauty in the structure of logic, to admire the scientific method, to see elegance in simplicity, and to find Joy in the Merely Real. Failing to do so isn't itself a Dark Side Epistemology, but rather the gaping maw that all Dark Side Epistemology is trying to fill. It's the true cause of a Fake Justification. If we want to actually prevent systematic rationality failures from popping up, we need to know the true causal history that originated the need for such beliefs, not just the true causal history that originated the beliefs themselves. Dark Side Epistemology gives rise to the beliefs themselves, but the inability to find joy in the merely real gives rise to the need for such beliefs.

What's the solution, then? Well, there are already "beautiful engineering" memes, and some visualizations of mathematics, such as fractals, although the more abstract math is difficult to show that way. But there are plenty of quotes out there proclaiming the beauty of such things. "I Fucking Love Science" is popular, and Neil deGrasse Tyson brings the stars and planets to life fairly effectively. What seems to be missing is a social base promoting formal logic itself, or traits that limit self-deception. There are plenty of skeptics' groups, some of which advocate for something like reductionism, but that's only tangentially relevant to disproving UFO claims. Less Wrong seems to be the closest thing there is to this, but I wouldn't want to dilute this community down to a meme factory. Things like HPMOR are a big step in the right direction, but we need a true cultural movement to unfold if we want to change the way people think.

comment by Chriswaterguy · 2015-11-20T21:41:32.106Z · LW(p) · GW(p)

"I can't answer your questions about / criticisms of my belief, but if you ask my guru (or read his book), he'll definitely have the answers to all your questions."

(Or "her book," etc., but the examples I've come across have all used men as their infallible guru.)

Replies from: Jiro
comment by Jiro · 2015-11-21T05:06:27.739Z · LW(p) · GW(p)

Ayn Rand is an example at times.

comment by Bound_up · 2016-05-13T04:07:20.083Z · LW(p) · GW(p)

You're overthinking it

comment by DPiepgrass · 2019-06-02T06:14:24.704Z · LW(p) · GW(p)

The acronym FLICC describes techniques of science denial and alludes to a lot of dark side epistemology:

F - Fake Experts (and Magnified Minority): you've got your scientists and I've got mine (and even though There's No Consensus, mine are right and yours are wrong, that's for sure).

L - Logical fallacies

I - Impossible expectations. This refers to an unrealistically high standard of proof demanded before acting on evidence. It tends to be paired with very low demands of evidence for the contrary position (confirmation bias). The lopsidedness is often unnecessary: if the goal is inaction (e.g., don't bother to lower emissions or get vaccinated), you could just hold an unreasonable standard of proof for both sides and take no action as the default. Nevertheless, this heavily lopsided analysis occurs in practice.

C - Cherry picking of data (perhaps this is just another logical fallacy, but it is more central to science denial than other logical fallacies)

C - Conspiracy theories. One "dark side" thing about conspiracy theories is their self-sealing quality: evidence contrary to one's position can always be explained by assuming it was generated by the conspiracy, so the conspiracy theory tends to grow larger over time until it is a massive global conspiracy with untold thousands of actors hiding the hidden truth. An even more interesting and common dark-side trick, though, is to believe in a conspiracy without ever thinking about the conspiracy. Most people aren't dumb enough to believe in a massive global conspiracy, but they use an assumption of some amount of conspiracy as a "background belief": they rely mainly on FLIC, and just use Conspiracy Theory as a last resort, so Conspiracy serves as window dressing to cover any remaining issues that otherwise wouldn't make sense in their version of "the truth". Or maybe it just looks that way: the science denier may know that talking about their conspiracy theory would make them sound nutty, so they outwardly prefer to rely on other arguments and fall back on conspiracy as a last resort.

comment by DPiepgrass · 2019-06-02T13:13:49.539Z · LW(p) · GW(p)

Looking at Scott Alexander's Argument From My Opponent Believes Something, I guessed that the general Dark Side technique he's describing was misrepresentation born of sloppy analogical thinking. But at the end he points out that he has listed a set of Fully General Counterarguments, all of which are tools of the dark side, since they can attack any position and lead to any conclusion:

It is an unchallengeable orthodoxy that you should wear a coat if it is cold out. Day after day we hear shrill warnings from the high priests of this new religion practically seething with hatred for anyone who might possibly dare to go out without a winter coat on. But these ideologues don’t realize that just wearing more jackets can’t solve all of our society’s problems. Here’s a reality check – no one is under any obligation to put on any clothing they don’t want to, and North Face and REI are not entitled to your hard-earned money. All that these increasingly strident claims about jackets do is shame underprivileged people who can’t afford jackets, suggesting them as legitimate targets for violence. In conclusion, do we really want to say that people should be judged by the clothes they wear? Or can we accept the unjacketed human body to be potentially just as beautiful as someone bundled beneath ten layers of coats?
comment by jeronimo196 · 2020-03-08T23:44:14.366Z · LW(p) · GW(p)

Jordan Peterson's redefinition of truth comes to mind. During his first appearance on Sam Harris's podcast, he presented the following: "Nietzsche said that truth is useful (for humanity). Therefore, what is harmful for humanity cannot be 'true'. Example: if scientists discover how to create a new plague, that knowledge may be technically correct, but cannot be called 'true'. On the other hand, the bible is very useful. Like, extremely useful. So very useful that, even if not technically correct, the bible is nevertheless 'true'."

Of course, how to judge whether "E=mc^2" is "true" or only correct (before the Apocalypse!) is left to the listener. The important part is being able to say that the bible is "true"; everything else is secondary.

Replies from: DPiepgrass
comment by DPiepgrass · 2021-09-26T14:52:56.367Z · LW(p) · GW(p)

I think the problem you're pointing at is "using words to confuse the issue". Most people know what truth is, and don't need a definition (except to clarify which sense of the word we're talking about). But humans do a lot of linguistic reasoning. So if you introduce a new definition for a word, one that people don't normally use, you have a chance of confusing people into reasoning with the new definition and then applying the results of that reasoning to the original sense of the word.

Here, I don't know what Nietzsche said, but it does not follow from the phrase "truth is useful" that "it is not true that this is a discovery for creating a plague, because plagues are not useful". It seems, rather, that Peterson is misrepresenting Nietzsche by simply mislabeling usefulness as truth (and if Nietzsche actually did that, he was wrong too).

Another way to look at it is to observe that the word "is" is used in the same sense as "a sphere is rollable", which does not imply "if it is rollable, it must be a sphere". In the same way, "truth is useful" does not imply "if it is useful, it must be truth".
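To put the point symbolically (a minimal formalization for illustration only; the predicates True(x) and Useful(x) are labels I'm introducing here, not anything Nietzsche or Peterson actually wrote):

$$\forall x\,\big(\mathrm{True}(x) \Rightarrow \mathrm{Useful}(x)\big) \;\not\vdash\; \forall x\,\big(\mathrm{Useful}(x) \Rightarrow \mathrm{True}(x)\big)$$

A one-line countermodel: take x to be a comforting falsehood, so that Useful(x) holds while True(x) fails; the premise is still satisfied, and the converse is refuted.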

Either way, people make logical mistakes all the time, and therefore one mistake in isolation is not dark epistemology. But what if you had the chance to explain to Peterson what his logical mistake was, and he responded by (1) denying that he made any mistake or (2) ignoring your point entirely? Now that's what I call dark epistemology. Or what if Peterson makes the same mistake over and over and never seems to notice unless his opponents do it? More dark epistemology.

Replies from: jeronimo196
comment by jeronimo196 · 2021-09-28T07:51:42.493Z · LW(p) · GW(p)

I've heard Peterson accuse feminists of disregarding what is true in the name of ideology on many occasions.

Sam Harris initially spent an hour arguing against Peterson's redefinition of "truth" to include a "moral dimension". They've clashed about it since, with no effect. Afaik, "the bible is true because it is useful" is a central component of Peterson's worldview.

To be fair, I believe Peterson has managed to honestly delude himself on this point and is not outright lying about his beliefs.

Nevertheless, when prompted to think of a "General Defense of Fail", attempting to redefine the word "truth" in order to protect one's ideology came to mind very quickly.

comment by Ian Televan · 2021-03-08T20:45:57.402Z · LW(p) · GW(p)

Arguing against consistency itself. "I was trying to be consistent when I was younger, but now I'm more wise than that."

comment by Andrew Vlahos (andrew-vlahos) · 2021-03-30T03:14:48.805Z · LW(p) · GW(p)

Most lies are bad, but there are circumstances where lying is necessary and does not make truth the enemy: when telling the truth would cause immediate bad action.

When people in Germany were sheltering others during the Holocaust, and a Nazi official asked if they were hiding anyone, the correct response was "no", even though it was a lie. When someone doesn't believe in a religion, or is gay, or something similar, but would be cast out of the home or "honor-killed" if their parents found out, they should lie until they have a way to escape.

Replies from: Yoav Ravid
comment by Yoav Ravid · 2021-04-01T07:42:29.635Z · LW(p) · GW(p)

Yes, Eliezer agrees with that and wrote about it in Meta-Honesty: Firming Up Honesty Around Its Edge-Cases [LW · GW] (also using the example of hiding someone from a Nazi).

Replies from: Jake_NB
comment by Jake_NB · 2021-05-31T15:07:00.710Z · LW(p) · GW(p)

Eliezer also mentions it here, saying that if you're willing to lie to someone, you should be willing to slash their tires or lobotomize them. But I want to point out the Fallacy of Gray here: there are different degrees of lying, and different degrees of its implications. I may hide the truth from my teacher about my friend cheating on a test (trying to stop the friend is a different discussion, but I would), but I wouldn't go so far as outright violence to protect the secret.

Replies from: Caperu_Wesperizzon
comment by Caperu_Wesperizzon · 2022-08-29T07:06:34.245Z · LW(p) · GW(p)

I think the vast majority of people will gladly slash your tyres or lobotomize you without a second thought if the alternative is to go to the effort of debating you for any length of time with a genuinely truth-seeking attitude. Only if they fear you may they attempt to fake the latter.

comment by lesswronguser123 (fallcheetah7373) · 2024-03-31T09:20:05.065Z · LW(p) · GW(p)

The vast majority who go about repeating the Deep Wisdom are more duped than duplicitous, more self-deceived than deceiving.

“One of the saddest lessons of history is this: If we’ve been bamboozled long enough, we tend to reject any evidence of the bamboozle. We’re no longer interested in finding out the truth. The bamboozle has captured us. It’s simply too painful to acknowledge, even to ourselves, that we’ve been taken. Once you give a charlatan power over you, you almost never get it back.”

― Carl Sagan

comment by Self (CuriousMeta) · 2024-11-27T09:54:53.117Z · LW(p) · GW(p)

The “how to think” memes floating around, the cached thoughts of Deep Wisdom—some of it will be good advice devised by rationalists. But other notions were invented to protect a lie or self-deception: spawned from the Dark Side.

It's so unfortunate that "how to think" (the rules of proper belief) is not hardcoded in the system's firmware, and must instead be entered via the user-supplied data that the belief system is built to manage. I'd frame this post as being centrally about this user-caused variability in system behavior, and the implicit security flaw.

Another aspect: dominant memes, that is, memes that feel good, fair, and high-status, can nevertheless be functionally dysvirtuous and unilaterally damaging.