How to offend a rationalist (who hasn't thought about it yet): a life lesson
post by mszegedy · 2013-02-06T07:22:48.146Z · LW · GW · Legacy · 113 comments
Usually, I don't get offended at things that people say to me, because I can see at what points in their argument we differ, and what sort of counterargument I could make. I can't get mad at people for having beliefs I think are wrong, since I myself regularly have beliefs that I later realize were wrong. I can't get mad at the idea, either, since an idea is either right or wrong, and if it's wrong, I have the power to say why. And if it turns out I'm wrong, so be it; I'll adopt new, right beliefs. And so I never got offended at anything.
Until one day.
One day, I encountered a belief that should have been easy to refute. Or, rather, easy to dissect, to see whether there was anything wrong with it, and if there was, to formulate a counterargument. But for seemingly no reason at all, it frustrated me to great, great lengths. My experience was as follows:
I was asking a socially progressive friend's opinion on what they felt were the founding axioms of social justice, because I was having trouble thinking of them on my own. (They can be derived from any set of fundamental axioms that govern morality, but I wanted something that you could specifically use to describe who is being oppressed, and why.) They seemed to be having trouble understanding what I was saying, and it was hard to get an opinion out of them. They also got angry at me for dismissing Tumblr as a legitimate source of social justice. But eventually we got to the heart of the matter, and I discovered a basic disconnect between us: they asked, "Wait, you're seriously applying a math thing to social justice?" I pondered that for a moment and explained that it isn't restricted to math at all, and that an axiom in this context can be any belief that you use as a basis for your other beliefs. However, then the true problem came to light (after a comparison of me to misguided 18th-century philosophes): "Sorry if it offends you, I just don't think in general that you should apply this stuff to society. Like... no."
And that did it. For the rest of the day, I wreaked physical havoc, and emotionally alienated everyone I interacted with. I even seriously contemplated suicide. I wasn't angry at my friend in particular for having said that. For the first time, I was angry at an idea: that belief systems about certain things should not be internally consistent, should not follow logical rules. It was extremely difficult to construct an argument against, because all of my arguments had logically consistent bases, and were thus invalid in its face.
I'm glad that I encountered that belief, though, like all beliefs, since I was able to solve it in the end, and make peace with it. I came to the following conclusions:
- In order to make a rationalist extremely aggravated, you can tell them that you don't think that belief structures should be internally logically consistent. (After 12-24 hours, they acquire lifetime immunity to this trick.)
- Belief structures do not necessarily have to be internally logically consistent. However, consistent systems are better, for the following reason: belief systems are used for deriving actions to take. Many actions oriented towards the same goal will make progress in accomplishing that goal, and making progress towards goals is desirable. An inconsistent belief system will generate actions oriented towards non-constant goals, which interfere destructively with each other and do not make much progress. A consistent belief system will generate many actions oriented towards the same goal, and so will make much progress. (A toy sketch of this interference intuition follows the list below.) Therefore, assuming the first few statements, having an internally consistent belief system is desirable! Having reduced it to an epistemological problem (do people really desire progress? can actions actually accomplish things?), I now only have epistemological anarchism to deal with, which seems to work less well in practice than the scientific method, so I can ignore it.
- No matter how offended you are about something, thinking about it will still resolve the issue.
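To make the interference claim above concrete, here is a minimal toy sketch. The random-walk model and the name net_progress are illustrative assumptions of mine, not anything precise about belief systems: each action is treated as a unit step toward whatever goal the belief system happened to produce at that moment.

```python
import math
import random

# Toy model: a consistent system always generates steps toward the same goal
# direction; an inconsistent system generates steps toward randomly chosen goals.

def net_progress(n_actions, consistent, rng):
    x, y = 0.0, 0.0
    for _ in range(n_actions):
        angle = 0.0 if consistent else rng.uniform(0, 2 * math.pi)
        x += math.cos(angle)  # step toward the current goal
        y += math.sin(angle)
    return math.hypot(x, y)  # net distance actually covered

rng = random.Random(0)
print("consistent:  ", net_progress(100, True, rng))   # ~100: steps add up
print("inconsistent:", net_progress(100, False, rng))  # ~sqrt(100) = 10: steps mostly cancel
```

With 100 actions, the consistent system covers a distance of about 100, while the inconsistent one covers only about 10; that is the sense in which actions aimed at non-constant goals "interfere destructively."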
113 comments
Comments sorted by top scores.
comment by Qiaochu_Yuan · 2013-02-06T17:42:20.481Z · LW(p) · GW(p)
"Sorry if it offends you, I just don't think in general that you should apply this stuff to society. Like... no."
I don't understand what "this stuff" refers to in this sentence and it is far from clear to me that your interpretation of what your friend said is correct.
I also don't think it's a good idea to take an axiomatic approach to something like social justice. This approach:
- primes you in the wrong direction about complexity of value (when you hear "axioms" you think maybe 5 or 6, not 5,000 or 6,000)
- risks falling prey to confusion about the meaning of words
- risks confusing terminal and instrumental values
- risks assigning empirical statements probability 1.
Edit: Also, a general comment. Suppose you think that the optimal algorithm for solving a problem is X. It does not follow that making your algorithm look more like X will make it a better algorithm. X may have many essential parts, and making your algorithm look more like X by imitating some but not all of its essential parts may make it much worse than it was initially. In fact, a reasonably efficient algorithm which is reasonably good at solving the problem may look nothing like X.
This is to say that at the end of the day, the main way you should be criticizing your friend's approach to social justice is based on its results, not based on aesthetic opinions you have about its structure.
Replies from: Error
↑ comment by Error · 2013-02-08T14:28:52.539Z · LW(p) · GW(p)
This comment causes me unpleasant cognitive dissonance. I read the second party's statement as meaning something like "no, logical rigor is out of place in this subject, and that's how it should be." And I find that attitude, if not offensive, at least incredibly irritating and wrongheaded.
And yet I recognize that your argument has merit and I may need to update. I state this not so much because I have something useful to say for or against it, but to force it out of my head so I can't pretend the conflict isn't there.
Replies from: Qiaochu_Yuan
↑ comment by Qiaochu_Yuan · 2013-02-08T19:08:21.073Z · LW(p) · GW(p)
I read the second party's statement as meaning something like "no, logical rigor is out of place in this subject, and that's how it should be."
See this comment for a steelmanning of what I think the friend's point of view is.
comment by Emile · 2013-02-06T07:38:02.413Z · LW(p) · GW(p)
Reminds me of a comment by pjeby (holy cow, 100 upvotes!) in an old thread:
One of the things that I've noticed about this is that most people do not expect to understand things. For most people, the universe is a mysterious place filled with random events beyond their ability to comprehend or control. Think "guessing the teacher's password", but not just in school or knowledge, but about everything.
Such people have no problem with the idea of magic, because everything is magic to them, even science.
...
Replies from: Eugine_Nier, ikrase, byrnema
↑ comment by Eugine_Nier · 2013-02-07T03:25:46.697Z · LW(p) · GW(p)
I had the opposite problem: for a while I divided the world (or at least mathematics) into two categories, stuff I understand and stuff I will understand later. It was a big shock when I realized that for most things this wasn't going to happen.
↑ comment by byrnema · 2013-02-06T17:58:31.462Z · LW(p) · GW(p)
P̶j̶e̶b̶y̶ ̶a̶l̶s̶o̶ ̶h̶a̶s̶ ̶a̶ ̶r̶e̶a̶l̶l̶y̶ ̶g̶o̶o̶d̶ ̶p̶o̶s̶t̶ ̶a̶b̶o̶u̶t̶ ̶f̶i̶g̶u̶r̶i̶n̶g̶ ̶o̶u̶t̶ ̶w̶h̶y̶ ̶s̶o̶m̶e̶t̶h̶i̶n̶g̶ ̶o̶f̶f̶e̶n̶d̶s̶ ̶y̶o̶u̶.̶ ̶I̶'̶l̶l̶ ̶h̶u̶n̶t̶ ̶i̶t̶ ̶d̶o̶w̶n̶ ̶w̶h̶e̶n̶ ̶I̶ ̶g̶e̶t̶ ̶b̶a̶c̶k̶ ̶i̶f̶ ̶s̶o̶m̶e̶o̶n̶e̶ ̶e̶l̶s̶e̶ ̶d̶o̶e̶s̶n̶'̶t̶ ̶f̶i̶n̶d̶ ̶i̶t̶ ̶f̶i̶r̶s̶t̶.̶
(Perhaps Yvain, rather, and I can't find it.)
Found it. Was harder to find because I remembered it as a post but actually it was a comment.
Replies from: pjeby, prase
↑ comment by pjeby · 2013-02-09T00:47:35.180Z · LW(p) · GW(p)
Out of curiosity, did you ever have occasion to use the advice in that comment, and if so, what was the result?
Replies from: byrnema
↑ comment by byrnema · 2013-02-09T16:09:15.075Z · LW(p) · GW(p)
What I mainly took from your post was the need to identify the particular norm being violated each time I'm angry or offended. I've found (2 or 3 examples come to mind) that it really helps to do this, especially if the anger seems to keep simmering without progress. It typically takes a few tries to identify what I'm really upset about, but after I identify the reason, there is finally resolution because (i) I find that I finally agree with myself (the self-validation seems to be a very important step for moving on; I no longer feel a need to petulantly keep defending myself by protesting) and (ii) I usually find that my anger was a little bit misdirected or not appropriate for the context. In any case, I'm able to let it go.
I'm often surprised how primitive the 'norm' is that I felt was violated. Typically for me it's a basic need for love and acceptance that isn't being met (which seems strange when I'm a grown, independent adult).
The most recent example is that I was offended/upset by a critical remark from a health technician who matter-of-factly told me I needed to do something differently. Of course there was the initial sting of being criticized, but I was disproportionately angry. At first I thought I was upset because she "wasn't being professional" about other peripheral things, which is the first argument that came to mind because that's what people tend to say; mentally attacking her relatively lower level of education compared to the doctor was also distracting me from identifying the real reason.
It took a while, but I discovered I was upset because I wanted her to be loving and supportive, because I've been putting a lot of effort into this aspect of my health. As soon as I realized I was looking for positive feedback for my efforts, I (i) agreed with myself, it is true I ought to receive positive feedback for my efforts if I'm going to succeed in this, and (ii) realized my anger was misdirected; it would have been nice if there was some support coming from the technician, but once I realized this consciously, I didn't need to depend on it.
... for caring and support there are, fortunately, friends that I can call, but as soon as I identified the problem, it wasn't necessary. I know that I'm doing a good job and working harder than the technician gave me credit for, which is (for me) a relatively atypical occasion of self-validation.
Maybe in retrospect the reason was obvious, but I don't seem to be as strong in identifying the source of negative feelings, so identifying the 'norm being violated' is a very useful exercise for me.
Replies from: pjeby
↑ comment by pjeby · 2013-02-10T02:33:15.816Z · LW(p) · GW(p)
Thanks for the reply.
Typically for me it's a basic need for love and acceptance that isn't being met (which seems strange when I'm a grown, independent adult)
It's not that strange at all, actually. It's quite common for us to not learn how to take care of our own emotional needs as children. And in my case at least, it's been taking me a great deal of study to learn how to do it now. There are quite a lot of non-intuitive things about it, including the part where getting other people to love and accept you doesn't actually help, unless you're trying to use it as an example.
To put it another way, we don't have emotional problems because we didn't get "enough" love as kids, but because we didn't get enough examples of how to treat ourselves in a loving way, e.g. to approach our own thoughts and feelings with kindness instead of pushing them away or invalidating them (or whatever else we got as an example).
Or to put it yet another way, this is a matter of "emotional intelligence" being far more about nurture than nature.
But now I'm babbling. Anyway, from the rest of what you describe, you sound like you've actually got better skills than me in the area of the actual "taking care of your needs" part, so I wouldn't worry about it. I'm glad the specific tip about norm violations helped. Those are one of those things that our brains seem to do just out of conscious awareness, like "lost purposes", that you sort of have to explicitly ask yourself in order to do anything about the automatic reaction.
It also helps to get rid of the norm or expectation itself, if it's not a reasonable one. For example, expecting all of your colleagues to always treat you with love and acceptance might not be realistic, in which case "upgrading an addiction to a preference" (replacing the shoulds with like/prefer statements) can be helpful in preventing the need to keep running round the "get offended, figure out what's happening, address the specifics" loop every single time. If you stop expecting and start preferring, the anger or sense of offense doesn't arise in the first place.
↑ comment by prase · 2013-02-08T18:21:32.676Z · LW(p) · GW(p)
Out of curiosity, how did you make the strikethrough line which extends far to the right outside the comment box?
Replies from: byrnema
↑ comment by byrnema · 2013-02-08T18:38:04.172Z · LW(p) · GW(p)
I used the tool on this webpage.
It appears it added underscores between each letter... but the underscores are actually part of the font, I think.
e x a m p l e (with spaces)
comment by byrnema · 2013-02-06T16:20:15.061Z · LW(p) · GW(p)
Belief structures do not necessarily have to be internally logically consistent. However, consistent systems are better, for the following reason: belief systems are used for deriving actions to take.
I have a working hypothesis that most evil (from otherwise well-intentioned people) comes from forcing a very complex, context-dependent moral system into one that is "consistent" (i.e., defined by necessarily oversimplified rules that are global rather than context-dependent) and then committing to that system even in doubtful cases, since it seems better that it be consistent.
(There's no problem with looking for consistent rules or wanting consistent rules, the problem is settling on a system too early and applying or acting on insufficient, inadequate rules.)
Eliezer has written that religion can be an 'off-switch' for intuitively knowing what is moral ... religion is the common example of any ideology that a person can allow to trump their intuition in deciding how to act. My pet example is that, while I generally approve of the values of the religion I was brought up with, you can always find specific contexts (it's not too difficult, actually) where its decided rules of implementation are entirely contrary to the values they are supposed to espouse.
By the way, this comment has had nothing to say about your friend's comment. To relate to that, since I understand you were upset, my positive spin would be that (a) your friend's belief about the relationship between 'math' and social justice is not strong evidence on the actual relationship (though regardless your emotional reaction is an indication that this is an area where you need to start gathering evidence, as you are doing with this post) and (b) if your friend thought about it more, or thought about it more in the way you do (Aumann's theorem), I think they would agree that a consistent system would be "nicest".
comment by Qiaochu_Yuan · 2013-02-06T21:08:37.000Z · LW(p) · GW(p)
A comment from another perspective. To be blunt, I don't think you understand why you got upset. (I'm not trying to single you out here; I also frequently don't understand why I am upset.) Your analysis of the situation focuses too much on the semantic content of the conversation and ignores a whole host of other potentially relevant factors, e.g. your blood sugar, your friend's body language, your friend's tone of voice, what other things happened that day that might have upset you, etc.
My current understanding of the way emotions work is something like this: first you feel an emotion, then your brain guesses a reason why you feel that emotion. Your brain is not necessarily right when it does this. This is why people watch horror movies on dates (first your date feels an intense feeling caused by the horror movie, then hopefully your date misinterprets it as nervousness caused by attraction instead of fear). Introspection is unreliable.
When you introspected for a reason why you were upset, you settled on "I was upset because my friend was being so irrational" too quickly. This is an explanation that indicates you weren't trying very hard to explicitly model what was going on in your friend's head. Remember, your friend is not an evil mutant. The things they say make sense to them.
Replies from: mszegedy, OrphanWilde
↑ comment by mszegedy · 2013-02-06T23:02:41.544Z · LW(p) · GW(p)
It took me the whole day to figure even that out, really. Stress from other sources was definitely a factor, but what I observed is, whenever I thought about that idea, I got very angry, and got sudden urges to throw heavy things. When I didn't, I was less angry. I concluded later that I was angry at the idea. I wasn't sure why (I'm still not completely sure: why would I get angry at an idea, even if it was something that was truly impossible to argue against? a completely irrefutable idea is a very special one; I guess it was the fact that the implications of it being right weren't present in reality), but it seemed that the idea was making me angry, so I used the general strategy of feeling the idea for any weak points, and seeing whether I could substitute something more logical for inferences, and more likely for assumptions. Which is how I arrived at my conclusions.
Replies from: Qiaochu_Yuan, ChristianKl, bsterrett
↑ comment by Qiaochu_Yuan · 2013-02-06T23:22:43.152Z · LW(p) · GW(p)
Thanks for the explanation. I still think it is more likely that you got angry at, for example, your friend's dismissive attitude, and thinking about the idea reminded you of it.
why would I get angry at an idea
You are a human, and humans get angry for a lot of reasons, e.g. when other humans challenge their core beliefs.
even if it was something that was truly impossible to argue against?
1) I don't think your friend's point of view is impossible to argue against (as I mentioned in my other comment you can argue based on results), 2) it's not obvious to me that you've correctly understood your friend's point of view, 3) I still think you are focusing too much on the semantic content of the conversation.
Replies from: mszegedy
↑ comment by mszegedy · 2013-02-06T23:58:03.045Z · LW(p) · GW(p)
I don't think your friend's point of view is impossible to argue against (as I mentioned in my other comment you can argue based on results)
I'm talking hypothetically. I did allow myself to consider the possibility that the idea was not perfect. Actually, I assumed that until I could prove otherwise. It just seemed pretty hopeless, so I'm considering the extreme.
it's not obvious to me that you've correctly understood your friend's point of view
Maybe not. I'm not angry at my friend at all, nor was I before. I felt sort of betrayed, but my friend had reasons for thinking things. If (I think) the things or reasons are wrong, I can tell my friend, and then they'll maybe respond, and if they don't, then it's good enough for me that I have a reasonable interpretation of their argument, unless it is going to hurt them that they hold what I believe to be a wrong belief. Then there's a problem. But I haven't encountered that yet. But the point is that it, to me, is much more interesting/useful/not tedious to consider this idea that challenges rationality very fundamentally, than to try and argue against the idea that everybody who had tried to apply rationality to society had it wrong, which is a very long battle that needs to be fought using history books and citations.
I still think you are focusing too much on the semantic content of the conversation.
Then what else should I focus on?
You are a human, and humans get angry for a lot of reasons, e.g. when other humans challenge their core beliefs.
I like having my beliefs challenged, though. That's what makes me a rationalist in the first place.
Though, I have thought of an alternate hypothesis for why I was offended. My friend compared me to white supremacist philosophes from the early days of the Enlightenment. And when I said that I did not share their ideas, my friend said that it was not because of my ideas, but because I was trying to apply rationality to society. And maybe that offended me. Just because I was like them in that I was trying to apply rationality to society (which I had rational reasons for doing), I was as bad as a white supremacist. Again, I can't be mad at my friend, since that's just a belief they hold, and beliefs can change, or be justified. My friend had reasons for holding that belief, and it hadn't caused any harm to anybody. But maybe that was what was so offensive? That sounds at least equally likely.
Replies from: Qiaochu_Yuan
↑ comment by Qiaochu_Yuan · 2013-02-07T01:42:42.541Z · LW(p) · GW(p)
But the point is that it, to me, is much more interesting/useful/not tedious to consider this idea that challenges rationality very fundamentally
This is what I mean when I say I don't think you've correctly understood your friend's point of view. Here is a steelmanning of what I imagine your friend's point of view to be that has nothing to do with challenging rationality:
"Different domain experts use different kinds of frameworks for understanding their domains. Taking the outside view, someone who claims that a framework used in domain X is more appropriate for use in domain Y than what Y-experts themselves use is probably wrong, especially if X and Y are very different, and it would take a substantial amount of evidence to convince me otherwise. In the particular case that X = mathematics and Y = social justice, it seems like applying the methods of X to Y risks drastically oversimplifying the phenomena in Y."
My friend compared me to white supremacist philosophes from the early days of the Enlightenment. And when I said that I did not share their ideas, my friend said that it was not because of my ideas, but because I was trying to apply rationality to society.
You and your friend probably do not mean the same thing by "rationality." It seems plausible to me that your friend pattern-matched what it sounded like you were trying to do to scientific racism. Your friend may also have been thinking of the stupid things that Spock does and trying to say "don't be an idiot like Spock."
And maybe that offended me.
Yes, that sounds plausible.
Replies from: ChristianKl, mszegedy
↑ comment by ChristianKl · 2013-02-17T21:59:26.400Z · LW(p) · GW(p)
You and your friend probably do not mean the same thing by "rationality."
Why? They argued about whether it makes sense to base your moral philosophy on axioms and then logically deduce conclusions. There are plenty of people out there who disagree with that way of doing things.
Replies from: Qiaochu_Yuan
↑ comment by Qiaochu_Yuan · 2013-02-17T22:03:34.898Z · LW(p) · GW(p)
When you say the word "rationality" to most people, they are going to round it to the nearest common cliche, which is Spock-style thinking where you pretend that nobody has emotions and so forth. There's a nontrivial inferential gap that needs to be closed before you, as an LWer, can be sure that a person understands what LW means by "rationality."
Replies from: ChristianKl
↑ comment by ChristianKl · 2013-02-17T22:30:49.999Z · LW(p) · GW(p)
There's a nontrivial inferential gap that needs to be closed before you, as an LWer, can be sure that a person understands what LW means by "rationality."
I think you are making a mistake when you assume that the position that mszegedy argues is just LW-style rationality. mszegedy argued with his friend about using axiom based reasoning, where you start with axioms and then logically deduce your conclusions.
Replies from: Qiaochu_Yuan
↑ comment by Qiaochu_Yuan · 2013-02-17T22:33:53.906Z · LW(p) · GW(p)
I think the word rationality was also relevant to the argument. From one of mszegedy's comments:
My friend compared me to white supremacist philosophes from the early days of the Enlightenment. And when I said that I did not share their ideas, my friend said that it was not because of my ideas, but because I was trying to apply rationality to society.
Replies from: ChristianKl
↑ comment by ChristianKl · 2013-02-18T16:53:17.664Z · LW(p) · GW(p)
I think the word rationality was also relevant to the argument.
You make a mistake when you assume rationality to mean LW-style rationality. That's not what they argued about.
When mszegedy's friend accused him of applying rationality to society, he referred to mszegedy's argument that one should base social justice on axioms.
According to him, the problem with the white supremacists isn't that they chose the wrong axioms but that they focused on axioms in the first place. They were rationalists of the Enlightenment who had absolute confidence in their belief that certain things are right by axiom and others are wrong.
LW-style rationality allows the conclusion: "Rationality is about winning. Groups that based their moral philosophy on strong axioms didn't win. It's not rational to base your moral philosophy on strong axioms."
Mszegedy's friend got him into a situation where he had no rational argument why he shouldn't draw that conclusion. He is emotionally repulsed by that conclusion.
Mszegedy is emotionally attached to an Enlightenment ideal of rationality where you care about deducing your conclusions from proper axioms in an internally consistent way instead of just caring about winning.
↑ comment by mszegedy · 2013-02-07T02:04:29.900Z · LW(p) · GW(p)
Oh, okay. That makes sense. So then what's the rational thing to conclude at this point? I'm not going to go back and argue with my friend—they've had enough of it. But what can I take away from this, then?
(I was using the French term philosophe, not omitting a letter, though. That's how my history book used to write it, anyway.)
Replies from: Qiaochu_Yuan
↑ comment by Qiaochu_Yuan · 2013-02-07T03:13:35.984Z · LW(p) · GW(p)
But what can I take away from this, then?
I've mentioned various possible takeaways in my other comments. A specific thing you could do differently in the future is to practice releasing againstness during arguments.
↑ comment by ChristianKl · 2013-02-17T21:44:08.199Z · LW(p) · GW(p)
Humans are emotional creatures. We don't feel emotions for rational reasons.
The emotion you felt is called cognitive dissonance. It's something that humans feel when they come to a point where one of their fundamental beliefs is threatened but they don't have good arguments to back it up.
I think it's quite valuable to have a strong reference experience of what cognitive dissonance feels like. It makes it easier to recognize the feeling when you feel it in the future. Whenever you are feeling that feeling, take note of the beliefs in question and examine them more deeply in writing when you are at home.
↑ comment by bsterrett · 2013-02-14T19:08:10.720Z · LW(p) · GW(p)
I was recently reflecting on an argument I had with someone where they expressed an idea to me that made me very frustrated, though I don't think I was as angry as you described yourself after your own argument. I judged them to be making a very basic mistake of rationality and I was trying to help them to not make the mistake. Their response implied that they didn't think they had executed a flawed mental process like I had accused them of, and even if they had executed a mental process like the one I described, it would not necessarily be a mistake. In the moment, I took this response to be a complete rejection of rationality (or something like that), and I became slightly angry and very frustrated.
I realized afterwards that a big part of what upset me was that I was trying to do something that I felt would be helpful to this person and everyone around them and possibly the world at large, yet they were rejecting it for no reason that I could identify in the moment. (I know that my pushiness about rationality can make the world at large worse instead of better, but this was not on my mind in the moment.) I was thinking of myself as being charitable and nice, and I was thinking of them as inexplicably not receptive. On top of this, I had failed to liaise even decently on behalf of rationalists, and I had possibly turned this person off to the study of rationality. I think these things upset me more than I ever could have realized while the argument was still going on. Perhaps you felt some of this as well? I don't expect these considerations to account for all of the emotions you felt, but I would be surprised if they were totally uninvolved.
↑ comment by OrphanWilde · 2013-02-06T21:39:21.235Z · LW(p) · GW(p)
Do people's brains actually work this way? Other people's, I should say, because mine certainly doesn't.
Replies from: Qiaochu_Yuan
↑ comment by Qiaochu_Yuan · 2013-02-06T21:46:40.375Z · LW(p) · GW(p)
What are you referring to by "this way"?
Replies from: OrphanWilde
↑ comment by OrphanWilde · 2013-02-06T21:53:23.374Z · LW(p) · GW(p)
"first you feel an emotion, then your brain guesses a reason why you feel that emotion"
To explain why this is completely alien to me:
First, I rarely even notice emotions. To say I feel them would be stretching the concept of "feel" quite considerably. "Observe" would be closer to my relationship with my emotions. (Except in the case of -extremely- intense emotions, anyways; it's kind of like a fire a hundred yards away; I can see it when it's over there if I look in its direction, and only feel it when it's really quite significant) Second, I don't have any kind of... pointer, where I can automatically identify where an emotion came from; my brain isn't performing such a function at all.
I also haven't noticed in my relationships any indications that people do have any kind of intuitions about where their emotions come from. Indeed, it's my experience that a lot of other people also don't have any kind of direct knowledge of their own emotional state except in extreme situations, much less any intuitions about where that emotional state arises from. If we did have either of these things, I'd expect things like depression wouldn't go unnoticed by so many people for so long.
Replies from: pcm, Qiaochu_Yuan
↑ comment by pcm · 2013-02-07T02:27:05.262Z · LW(p) · GW(p)
There's a lot of variation in how aware people are of their emotions.
You might want to look into Alexithymia.
Replies from: None
↑ comment by Qiaochu_Yuan · 2013-02-06T22:06:47.024Z · LW(p) · GW(p)
I'm not inside your brain or your friend's brains, but that doesn't sound typical to me.
comment by Duncan · 2013-02-06T14:23:59.635Z · LW(p) · GW(p)
"Sorry if it offends you, I just don't think in general that you should apply this stuff to society. Like... no."
Let me translate: "You should do what I say because I said so." This is an attempt to overpower you and is quite common. Anyone who insists that you accept their belief without logical justification is simply demanding that you do what they say because they say so. My response, to people who can be reasoned with, is often just to point this out and point out that it is extremely offensive. If they cannot be reasoned with then you just have to play the political game humans have been playing for ages.
Replies from: AlexMennen, None, ChristianKl
↑ comment by AlexMennen · 2013-02-06T17:44:17.625Z · LW(p) · GW(p)
A more charitable translation would be "I strongly disagree with you and have not yet been able to formulate a coherent explanation for my objection, so I'll start off simply stating my disagreement." Helping them state their argument would be a much more constructive response than confronting them for not giving an argument initially.
Replies from: Duncan
↑ comment by Duncan · 2013-02-06T18:57:11.911Z · LW(p) · GW(p)
It is not as much that they haven't given an argument or stated their position. It is that they are telling you (forcefully) WHAT to do without any justification. From what I can tell of the OP's conversation this person has decided to stop discussing the matter and gone straight to telling the OP what to do. In my experience, when a conversation reaches that point, the other person needs to be made aware of what they are doing (politely if possible - assuming the discussion hasn't reached a dead end, which is often the case). It is very human and tempting to rush to the 'Are you crazy?!! You should __.' and skip all the hard thinking.
Replies from: AlexMennen
↑ comment by AlexMennen · 2013-02-06T23:13:05.295Z · LW(p) · GW(p)
It sounds like the generic "you" to me. So "you shouldn't apply this stuff to society" means "people shouldn't apply this stuff to society." I don't see anything objectionable about statements like that.
↑ comment by [deleted] · 2013-02-06T17:45:04.705Z · LW(p) · GW(p)
Let me offer a different translation: "You are proposing something that is profoundly inhuman to my sensibilities and is likely to have bad outcomes."
Rukifellth below has, I think, a much more likely reason for the reaction presented.
Replies from: Duncan
↑ comment by Duncan · 2013-02-06T18:46:33.910Z · LW(p) · GW(p)
Given the 'Sorry if it offends you' and the 'Like... no' I think your translation is in error. When a person says either of those things they are A. saying I no longer care about keeping this discussion civil/cordial and B. I am firmly behind (insert their position here). What you have written is much more civil and makes no demands on the other party as opposed to what they said "... you should ...."
That being said, it is often better to be more diplomatic. However, letting someone walk all over you isn't good either.
Replies from: AlexMennen
↑ comment by AlexMennen · 2013-02-07T19:55:03.811Z · LW(p) · GW(p)
"Like..." = "I'm about to explain myself, but need a filler word to give myself more time to formulate the sentence." "no" = "whoops, couldn't think of what to say quick enough to avoid an awkwardly long pause; I'd better tie off that sentence I just suggested I was about to start." I'm not quite sure what to make of "Sorry if it offends you", but I don't see how you can get from there to "I'm not even trying to be polite."
↑ comment by ChristianKl · 2013-02-17T21:23:57.620Z · LW(p) · GW(p)
Their conversation was longer than one sentence. If his discussion partner hadn't backed up his point in any way, I doubt mszegedy would have felt enough cognitive dissonance to contemplate suicide.
"You should do what I say because I said so.", generally doesn't make people feel cognitive dissonance that's that strong.
comment by Mestroyer · 2013-02-06T08:00:21.655Z · LW(p) · GW(p)
If you really contemplated suicide over this subject, I am afraid to discuss it with you.
Replies from: mszegedy
↑ comment by mszegedy · 2013-02-06T08:16:27.780Z · LW(p) · GW(p)
Oh. Well, that was a while ago, and I get over that stuff quickly. Very few people have that power over me, anyway; they were one of the only friends I had, and it was extremely unusual behavior coming from them. It was kind of devastating to me that there was a thought directed at me by a trusted source that was negative and that I couldn't explain... but I could, so now I'm all the more confident. This is a success story! I've never actually attempted suicide, and it was a combination of other stress factors as well that produced that response. I doubt that I actually would, in part because I have no painless means of doing so: when I actually contemplate the action, it's just logistically impossible to do the way I like. I've also gotten real good at talking myself out of it. Usually it's out of a "that'll show 'em" attitude, which I recognize immediately, and I also recognize that that would be both cruel and a detriment to society. So, I appreciate your concern for me a lot, but I don't think I'm in any danger of dying at all. Thanks a lot for caring, though!
comment by NancyLebovitz · 2013-02-06T12:19:35.899Z · LW(p) · GW(p)
Tetlock's foxes vs. hedgehogs (people without strong ideologies are somewhat better predictors than those who have strong ideologies, though still not very good predictors) suggests that a hunt for consistency in something as complex as politics leads to an excessively high risk of ignoring evidence.
Hedgehogs might have premises about how to learn more than about specific outcomes.
comment by Shmi (shminux) · 2013-02-06T17:15:13.541Z · LW(p) · GW(p)
I suspect that what frustrated you is not noticing your own confusion. You clearly had a case of lost purposes: "applying a math thing to social justice" is instrumental, not terminal. You discovered a belief "applying math is always a good thing" which is not obviously connected to your terminal goal "social justice is a good thing".
You are rationalizing your belief about applying math in your point 2:
An inconsistent belief system will generate actions that are oriented towards non-constant goals, and interfere destructively with each other, and not make much progress. A consistent belief system will generate many actions oriented towards the same goal, and so will make much progress.
How do you know that? Seems like an argument you have invented on the spot to justify your entrenched position. Your point 3 confirms it:
No matter how offended you are about something, thinking about it will still resolve the issue.
In other words, you resolved your cognitive dissonance by believing the argument you invented, without any updating.
If you feel like thinking about the issue some more, consider connecting your floating belief "math is good" to something grounded, like The Useful Idea of Truth:
True beliefs are more likely than false beliefs to make correct experimental predictions, so if we increase our credence in hypotheses that make correct experimental predictions, our model of reality should become incrementally more true over time.
This is reasonably uncontroversial, so the next step would be to ponder whether in order to be better at this social justice thing one has to be better at modeling reality. If so, you can proceed to the argument that a consistent model is better than an inconsistent one at this task. This may appear self-evident to you, but not necessarily to your "socially progressive" friend. Can you make a convincing case for it? What if s/he comes up with examples where someone following an inconsistent model (like, say, Mother Teresa) contributes more to social justice than those who study the issue for a living? Would you accept their evidence as a falsification of your meta-model "logical consistency is essential"? If not, why not?
Replies from: mszegedy, Qiaochu_Yuan, Giles
↑ comment by mszegedy · 2013-02-07T02:36:36.594Z · LW(p) · GW(p)
You're completely right. I tried, at first, to look for ways that "some areas shouldn't have consistent belief systems attached" could be a true statement, but that made me upset or something (wtf, me?), so I abandoned that, and resolved to attack the argument, and accept it if I couldn't find a fault with it. And that's clearly bad practice for a self-proclaimed rationalist! I'm ashamed. Well, I can sort of make the excuse of having experienced emotions, which made me forget my principles, but that's definitely not good enough.
I will be more careful next time.
EDIT: Actually, I'm not sure whether it's so cut-and-dry like that. I'll admit that I ended up rationalizing, but it's not as simple as "didn't notice confusion". I definitely did notice it. Just when I am presented with an opposing argument, what I'll do is that I'll try to figure out at what points it contradicts my own beliefs. Then I'll see whether those beliefs are well-founded. If they aren't, I'll throw them out and attempt to form new ones, adopting the foreign argument in the process. If I find that the beliefs it contradicts are well-founded, then I'll say that the argument is wrong because it contradicts these particular beliefs of mine. Then I'll go up to the other person and tell them where it contradicts my beliefs, and it will repeat until one of us can't justify our beliefs, or we find that we have contradictory basic assumptions. That is what I did here, too; I just failed to examine my beliefs closely enough, and ended up rationalizing as a result. Is this the wrong way to go about things? There's of course a lot to be said about actual beliefs about reality in terms of prior probability and such, so that can also be taken into account where it applies. But this was a mostly abstract argument, so that didn't apply, until I introduced an epistemological argument instead. But, so, is my whole process flawed? Or did I just misstep?
Replies from: shminux
↑ comment by Shmi (shminux) · 2013-02-07T06:55:08.366Z · LW(p) · GW(p)
From your original story, it doesn't look like you have noticed that your cached belief was floating. Presumably it's a one-off event for you, and the next time you feel frustrated like that, you will know what to look for.
Now, I am not a rationalist (IANAR?), I just sort of hang out here for fun, so I am probably not the best person to ask about methodology. That said, one of the approaches I have seen here and liked is steelmanning the opposing argument to the point where you can state it better than the person you are arguing with. Then you can examine it without the need to "win" (now it's your argument, not theirs) and separate the parts that work from those which don't. And, in my experience, there is a grain of truth in almost every argument, so it's rarely a wasted effort.
Replies from: Kawoomba
↑ comment by Qiaochu_Yuan · 2013-02-06T18:04:01.879Z · LW(p) · GW(p)
How do you know that? Seems like an argument you have invented on the spot to justify your entrenched position.
Agreed. Many people can act effectively starting from what might be regarded as inconsistent belief systems by compartmentalizing (e.g. religious scientists). There is also an underlying assumption in the post that beliefs are logical statements with truth values that is questionable. Many beliefs are probably "not even wrong."
↑ comment by Giles · 2013-02-06T18:50:52.591Z · LW(p) · GW(p)
This may appear self-evident to you, but not necessarily to your "socially progressive" friend. Can you make a convincing case for it?
Remember you have to make a convincing case without using stuff like logic
Replies from: shminux
↑ comment by Shmi (shminux) · 2013-02-06T20:21:19.438Z · LW(p) · GW(p)
Remember you have to make a convincing case without using stuff like logic
Hence what I said, start with something they both can agree on, like whether making accurate models of reality is important for effective social justice.
comment by JenniferRM · 2013-02-06T18:53:38.927Z · LW(p) · GW(p)
An inconsistent belief system will generate actions that are oriented towards non-constant goals, and interfere destructively with each other, and not make much progress. A consistent belief system will generate many actions oriented towards the same goal, and so will make much progress.
One way to model willpower is that it is a muscle that uses up brain energy to accomplish things. This is a common model but it is not my current working hypothesis for how things "really universally work in human brains". Rather, I see a need for "that which people vaguely gesture towards with the word willpower" as a sign that a person's total cognitive makeup contains inconsistent elements that are destructively interfering with each other. In other words, the argument against logically coherent beliefs is sort of an argument in favor of akrasia.
Some people seem to have a standard response to this idea that is consonant with the slogan "that which can be destroyed by the truth should be" and this is generally not my preferred response except as a fallback in cases of a poverty of alternative options. The problem I have with "destroy my akrasia with the truth" responses is roughly that they are sort of like censoring a part of yourself without proper justification for doing so. I generally expect constraints of inferential distances and patience to make the detailed reasoning here opaque, but for those interested, a useful place to start is to consider the analogy of "cognitive components as assets" and then play compare and contrast with modern portfolio theory (MPT).
However, explicitly learning about MPT appears not to be within the cognitive means of most people at the present time... which means that if the related set of insights is critical to optimal real life functioning as an epistemic agent then an implicit form of the same insights is likely to be embedded in people in "latent but effective form". It doesn't mean that such people are "bad" or "trying to dominate you" necessarily, it just means that they have a sort of in-theory-culturally-rectifiable disability in the context of something like "explicitly negotiated life optimization".
If this disability is emotionally affirmed as a desirable state and taken to logical extremes in a context of transhuman self modification abilities you might end up with something like dream apes:
Their ancestors stripped back the language centres to the level of higher primates. They still have stronger general intelligence than any other primate, but their material culture has been reduced dramatically – and they can no longer modify themselves, even if they want to. I doubt that they even understand their own origins any more.
Once you've reached the general ballpark of dream apes, the cognitive MPT insight has reached back around to touch on ethical questions that come up in daily life. You can imagine a sort of grid of social and political possibilities based on questions like: What if the dream ape is more (or less) ethical than me? What if a dream ape is more (or less) behaviorally effective than me, but in a "directly active" way (with learning and teaching perhaps expected to work by direct observation of gross body motions and direct inference of the justifications for those actions)? What if the dream ape has a benevolent (or hostile) attitude towards me right now? What if, relative to someone else, I'm the dream ape?
You can get an interesting intellectual puzzle by imagining that "becoming a god-like dream ape" (ie lesioning verbal processing but getting better at tools and science and ethics) turned out as "scientific fact" to be the morally and pragmatically correct outcome of the transhuman possibility. In that context, imagine that one of these "super awesome transhuman dream apes" runs into a person from a different virtue ethical clade who is (1) worth saving but (2) has tried (successfully or unsuccessfully) to totally close themselves to anything except verbally explicit forms of influence, and then (3) fallen into sin somehow. In this scenario, what does the angelic dream ape do to get a positive outcome?
EDITED: Ran into the comment length limit and trimmed the thought to a vaguely convenient stopping point.
Replies from: JenniferRM
↑ comment by JenniferRM · 2013-02-06T19:01:09.171Z · LW(p) · GW(p)
trying to delete
comment by William_Quixote · 2013-02-06T18:10:15.623Z · LW(p) · GW(p)
I sympathize, but I downvoted this post.
This is a personal story and a generalization from one person's experience. I think that as a category, that's not enough for a post on its own. It might be fine as a comment in an open thread or other less prominently placed content.
comment by V_V · 2013-02-06T19:19:41.211Z · LW(p) · GW(p)
And that did it. For the rest of the day, I wreaked physical havoc, and emotionally alienated everyone I interacted with. I even seriously contemplated suicide. I wasn't angry at my friend in particular for having said that. For the first time, I was angry at an idea: that belief systems about certain things should not be internally consistent, should not follow logical rules.
This emotional reaction seems abnormal. Seriously, somebody says something confusing and you contemplate suicide?
What are you, a Straw Vulcan computer that can be disabled with a Logic Bomb ?
Unless you are making this up, I suggest you consider seeking professional help.
It was extremely difficult to construct an argument against, because all of my arguments had logically consistent bases, and were thus invalid in its face.
Actually, it's rather easy: just tell them that ex falso quodlibet.
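(For concreteness, here is a minimal Lean 4 sketch of ex falso quodlibet, the principle of explosion; the example statement is illustrative and added here, not from the comment: from a contradiction, any proposition whatsoever follows.)

```lean
-- Ex falso quodlibet: given P and ¬P, any proposition Q can be derived.
example (P Q : Prop) (h : P ∧ ¬P) : Q :=
  absurd h.1 h.2
```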
Replies from: Nisan, mszegedy
↑ comment by mszegedy · 2013-02-07T02:43:56.051Z · LW(p) · GW(p)
True, I swear! I think I can summarize why I was so distraught: external factors, this was a trusted friend, also one of my only friends, and I was offended by related things they had said prior. I am seeking help, though.
Replies from: V_V
comment by Tenoke · 2013-02-06T11:51:38.976Z · LW(p) · GW(p)
And that did it. For the rest of the day, I wreaked physical havoc, and emotionally alienated everyone I interacted with. I even seriously contemplated suicide.
You never get offended, but this little thing brought you to the verge of suicide!? Did you recently become a rationalist? I am not sure how to read the situation.
Replies from: None
↑ comment by [deleted] · 2013-02-06T17:05:27.157Z · LW(p) · GW(p)
A tricky problem is that you can't really read the situation from a brief description. Here is an example of increasing suicidality to show why:
Monday: "Today was horrible... just horrible. I can't take this any more, I'm going to end it all."
Tuesday: "I am going to walk to that cliff near my house and jump off, that would do it. That would definitely be fatal." and then not doing anything or:
Wednesday: "Okay, I have a list of things I'm going to do before jumping off the cliff. Step 1, Eat a large meal." eats "Step 2: Write a Suicide Note:" types "In retrospect... I don't feel like jumping off the cliff anymore today." (Deletes note)
Thursday: Doing all of the above, actually walking to that cliff near your house, looking over the edge and only then thinking "You know, maybe I shouldn't jump. Not today. Maybe I'll jump if tomorrow is this bad too."
Friday: Standing on the edge as previously, but doing so until one of your friends finds you and pulls you away while you are saying "No, let me go, I need to do this!"
I have no idea what serious contemplation refers to (I'm assuming the verge would be either Thursday or Friday.) For instance, even in the past, on my worst days of depression, I don't think I've ever gotten past Wednesday on the list above.
If there is a more explicit metric for this, please let me know, I'm not finding one, and it would be great to have an easier way of communicating about some of this.
Replies from: Tenoke
↑ comment by Tenoke · 2013-02-06T18:35:45.337Z · LW(p) · GW(p)
Well, thanks for the distinction between degrees of suicidal intention, but I don't see how this is really relevant to what I said. In this example, 'on the verge of suicide' referred to:
And that did it. For the rest of the day, I wreaked physical havoc, and emotionally alienated everyone I interacted with. I even seriously contemplated suicide.
Seriously contemplating something is semi-synonymous with being on the verge of doing something. I can't really help you decipher how suicidal he was but if I had to guess he was just exaggerating.
Replies from: None, magfrump
↑ comment by [deleted] · 2013-02-06T19:09:36.540Z · LW(p) · GW(p)
Sorry about that. I was trying to break something that seemed unclear into concrete examples, but on looking at it again, I think it may have been a bit too much armchair psychology, and when I tried explaining what I was saying, my explanation sounded even more like armchair psychology (but this time I noticed before posting). Thank you for helping me see that problem more clearly.
↑ comment by magfrump · 2013-02-07T05:51:52.517Z · LW(p) · GW(p)
Seriously contemplating something is semi-synonymous with being on the verge of doing something.
From my perspective, Tuesday would feel like "seriously contemplating" from the inside; even late Monday night could too I think.
So I disagree with the quoted sentence.
EDIT addition for clarity: Had I personally felt like the "Tuesday" scenario described above, I could easily imagine myself describing the event as "seriously contemplating suicide," regardless of what other people think about the definition of "seriously contemplate." So it seems wise to me not to dismiss the possibility that when someone described their situation, it may be less serious than you personally think should be the definition of those words.
comment by Cthulhoo · 2013-02-06T08:46:48.143Z · LW(p) · GW(p)
"Sorry if it offends you, I just don't think in general that you should apply this stuff to society. Like... no."
I'm making a wild guess, but possibly it's the bold part that offended you... Because this is usually what irritates me (Yay for lovely generalization from one example... but see also Emile's comment quoting pjeby's comment).
Similar offenders are:
- "Come on, it's obvious!"
- "You can't seriously mean that /Are you playing dumb?"
- "Because everybody knows that!"
In general, what irritates me is the refusal to really discuss the subject, and the quick dismissal. If arguments are soldiers, this is like building a foxhole and declaring you won't move from there at any cost.
Replies from: Benito, whowhowho
↑ comment by Ben Pace (Benito) · 2013-02-06T14:52:03.247Z · LW(p) · GW(p)
"I mean, have you heard of cri... cry... cryonics? Hehe..."
"Yeah, I'm interested in it."
"...Like... no."
From conversation today.
comment by David_Gerard · 2013-02-06T08:38:08.683Z · LW(p) · GW(p)
Reason as memetic immune disorder by Phil Goetz.
comment by Antisuji · 2013-02-06T07:39:57.703Z · LW(p) · GW(p)
"Sorry if it offends you, I just don't think in general that you should apply this stuff to society. Like... no."
I felt offended reading this, even though I was expecting something along these lines and was determined not to be offended. I've come to interpret this feeling, on a 5-second level, as "Uh oh, someone's attacking my group." I'm sure I'd be a little flustered if someone said that to me in conversation. But after some time to think about it, I think my response would be "Why shouldn't math be applied to social justice?" And I really would be curious about the answer, if only because it would help me better understand people who hold this kind of viewpoint.
Also, I expect there are good reasons why it's dangerous to apply math to social justice, especially since most people aren't good at math.
Replies from: mszegedy
↑ comment by mszegedy · 2013-02-06T07:59:58.029Z · LW(p) · GW(p)
Well, the friend had counterexamples to "math as a basis for society is good". I sort of skipped over that. They mentioned those who rationalized bad things like racism, and also Engels. (We both agree that communism is not a successful philosophy.) Counterexamples aren't really enough to dismiss an idea unless they're stronger than the evidence that the idea is good, but I couldn't think of such evidence at the time, and I still can't think of anything particularly convincing. There's no successful society to point at that derived all of its laws and government axiomatically.
Replies from: Luke_A_Somers
↑ comment by Luke_A_Somers · 2013-02-06T16:42:58.687Z · LW(p) · GW(p)
Those are good examples that you need to be really careful applying math to society.
If you come up with a short list of axioms for a social group, and then use them to formulate policy, you're probably going to end up leaving the domain over which those axioms are valid. If you have a lot of power, this can be a really bad thing.
comment by Rukifellth · 2013-02-06T13:27:33.206Z · LW(p) · GW(p)
Almost no one these days regards axiom-compiling as a way of describing emotional phenomena such as altruism. The idea of describing such warm reasons in terms of axioms was so unintuitive that it caused your friend to think that you were looking for some other reason for social justice, other than a basic appeal to better human nature. He may have been disgusted at what he thought was an implicit disregard for the more altruistic reasons for social justice, as if they weren't themselves sufficient to do good things.
comment by Dahlen · 2013-02-06T21:02:07.256Z · LW(p) · GW(p)
1) I think your reaction to this situation owed more to your psychological peculiarities as a person (whichever they are) than to a characteristic that all people who identify as rationalists share. There's no reason to expect people with the same beliefs as yours to always keep their cool (at least not the first time) when talking to someone with an obviously incompatible belief system.
2)
It was extremely difficult to construct an argument against, because all of my arguments had logically consistent bases, and were thus invalid in its face.
It doesn't have to be like that, at least if you don't start off with consistent and false belief systems. The way I think about such issues while effectively avoiding epistemological crises is the following: an algorithm by which I arrived at conclusions which I can pretty confidently dub "knowledge" ends up being added to my cognitive toolbox. There are many things out there that look like they were designed to be added to people's cognitive toolboxes, but not all of them can be useful, can they? Some of them look like they were specifically designed to smash other such tools to pieces. So here's a good rule of thumb: don't add anything to your cognitive toolbox that looks like an "anti-tool" to a tool that is already inside of it. Anything that you suspect makes you know less, be dumber, or require you to forsake trustworthy tools is safe & recommendable to ignore. (In keeping with the social justice topic, a subcategory of bad beliefs to incorporate are those that cause you to succumb to, rather than resist, what you know to be flaws in your cognitive hardware, such as an ingroup-outgroup bias or affect heuristics -- that's why, I think, one should avoid getting too deep into the "privilege" crowd of social justice even if the arguments make sense to one.) Of course, you should periodically empty out the toolbox and see whether the tools are in a good state, or if there's an upgraded version available, or if you were simply using the wrong hammer all along -- but generally, rely on them.
3) You like to explore the implications of a premise, which is completely incompatible with your friend's "separate magisteria" approach (a technique directly out of the Official Handbook of Belief Conservation); unfortunately it is why you weren't able to abandon the train of thought before it derailed into emotional disturbance. You see someone saying you shouldn't use an obviously (to you) useful and relevant method for investigating something? That's a sign that says "Stop right here, there's no use in trying to extrapolate the consequences of this belief of theirs; they obviously haven't thought about it in sufficient detail to form opinions on it that you can make head or tails of." The knowledge and deepness of thought that it takes to see why math is relevant to understanding society is small enough that, if they failed to catch even that, they obviously went no further in establishing beliefs about math that could be either consistent or inconsistent with the pursuit of justice and equality. You went as far as seeing the implications and being horrified -- "How can anyone even think that?" -- but it is a thought they likely didn't get to think; the ramifications of their thought about math ended long before that, presumably at the point when it began to interfere with ideological belief conservation.
4) Get better friends. I know the type, and I've learned the hard way not to try to reason with them. Remember that one about playing chess with a pigeon?
Replies from: B_For_Bandana↑ comment by B_For_Bandana · 2013-02-07T00:10:03.011Z · LW(p) · GW(p)
So here's a good rule of thumb: don't add anything to your cognitive toolbox that looks like an "anti-tool" to a tool that is already inside of it. Anything that you suspect makes you know less, be dumber, or require you to forsake trustworthy tools is safe & recommendable to ignore. (In keeping with the social justice topic, a subcategory of bad beliefs to incorporate are those that cause you to succumb to, rather than resist, what you know to be flaws in your cognitive hardware, such as an ingroup-outgroup bias or affect heuristics -- that's why, I think, one should avoid getting too deep into the "privilege" crowd of social justice even if the arguments make sense to one.)
Why is privilege such a dangerous idea? I suspect that your answer is along the lines of "A main tenet of privilege theory is that privileged people do not understand how society really works (they don't experience discrimination, etc.), therefore it can make you despair of ever figuring anything out, and this is harmful." But reading about cognitive biases can have a similar effect. Why is learning about bias due to privilege especially harmful to your cognitive toolbox?
Replies from: Dahlen, Dahlen↑ comment by Dahlen · 2013-02-07T23:33:02.767Z · LW(p) · GW(p)
No, it's not that. It's that there are many bugs of the human mind which identity politics inadvertently exploits. For one, there's the fact that it provides convenient ingroups and outgroups for people to feel good and bad about, respectively -- the privileged and the oppressed -- and these groupings are based on innate characteristics. Being non-white, female, gay, etc. wins you points with the social justice crowd just as being white, male, straight, etc. loses you points. Socially speaking, how much a "social justice warrior" likes you is partly a function of how many disadvantaged groups you belong to. This shouldn't happen, maybe not even according to the more academic, theoretical side of social justice, but it does, because we're running on corrupted hardware and these theories fail to compensate for it.
Another very closely related problem is "collecting injustices". You can transform everything bad that happens to you because of what you perceive as your belonging to an oppressed group into debate ammunition against the other side; you can point to it to put yourself in a positive, sympathetic, morally superior light, and your opponents in a negative light. So there's a powerful rhetorical upside to being in a situation that otherwise can only be seen as a very shitty situation to be in. This incentivizes people, on some level, not to really seek to minimize these situations. Obviously people hate oppression and don't actually, honestly want to experience it, but winning debates automatically and gaining the right to pontificate feels good. So what to do? Lower the threshold for what counts as oppression, obviously. This has absolutely disastrous effects on their stated goals. If there's anything whatsoever that incentivizes you to find more oppression in the world around you, you can't sincerely pursue the goal of ending oppression.
Also, some of the local memes instruct people to lift all the responsibility for a civilized discussion off themselves and put it on the other side. Yvain had a post on his LJ which described this mode of discussion as a "superweapon". Also, see this page (a favorite of the internet social justice advocates I had the displeasure of running into) to get a good idea of the debate rights claimed by many of them, and the many responsibilities of which they absolve themselves. If that doesn't look like mindkilling, I don't know what does.
Simply put, many people like this ideology because it gives them an opportunity to revel in their self-righteousness. Of course, it's good for people to know whether they have their cultural blinders on in specific situations; it's also very bad for people to vilify an entire race or sex or whatever. The tricky thing to do is to clear your mind of your identity-induced biases without adopting an ideology that, overall, has a great chance of making you more irrational than before.
comment by asparisi · 2013-02-06T15:53:46.858Z · LW(p) · GW(p)
I usually turn to the Principle of Explosion to explain why one should have core axioms in their ethics (specifically, non-contradictory axioms). If some principle you use in deciding what is or is not ethical creates a contradiction, you can justify any action on the basis of that contradiction. If the axioms aren't explicit, the chance of a hidden contradiction is higher. The idea that every action could be ethically justified is something that very few people will accept, so explaining this usually helps.
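As a minimal sketch (not part of the original comment), assuming ordinary classical propositional logic, the standard ex falso quodlibet derivation behind the Principle of Explosion runs:

```latex
% From a contradiction P and not-P, an arbitrary Q follows.
\begin{align*}
  1.\ & P        && \text{premise} \\
  2.\ & \neg P   && \text{premise} \\
  3.\ & P \lor Q && \text{from 1, by disjunction introduction ($Q$ can be anything)} \\
  4.\ & Q        && \text{from 2 and 3, by disjunctive syllogism}
\end{align*}
```

Since Q was arbitrary, a single hidden contradiction among one's ethical premises is enough, in principle, to "justify" any action whatsoever.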
I try to understand that thinking this way is odd to a lot of people and that they may not have explicit axioms, and present the idea as "something to think about." I think this also helps me to deal with people not having explicit rules that they follow, since it A) helps me cut off the rhetorical track of "Well, I don't need principles" by extending the olive branch to the other person; and B) reminds me that many people haven't even tried to think about what grounds their ethics, much less what grounds what grounds their ethics.
I usually use the term "rule" or "principle" as opposed to "axiom," merely for the purpose of communication: most people will accept that there are core ethical rules or core ethical principles, but they may have never even used the word "axiom" before and be hesitant on that basis alone.
comment by Viliam_Bur · 2013-02-06T15:41:07.882Z · LW(p) · GW(p)
From your strong reaction I would guess that your friend's reaction somehow ruined the model of the world you had, in a way that was connected with your life goals. Therefore for some time your life goals seemed unattainable and the whole life meaningless. But gradually you found a way to connect your life goals with the new model.
Seems to me that your conclusion #2 is too abstract ("far") for a shock that I think had personal ("near") aspects. You write impersonal abstractions -- "do people really desire progress? can actions actually accomplish things?" -- but I guess it felt more specific than this; something like: "does X really desire Y? can I actually accomplish Z?" for some specific values of X, Y, and Z. Because humans usually don't worry about abstract things; they worry about specific consequences for themselves. (If this guess is correct, would you like to be more specific here?)
comment by TheOtherDave · 2013-02-06T15:20:59.546Z · LW(p) · GW(p)
I would add to this that if the domain of discourse is one where we start out with a set of intuitive rules, as is the case for many of the kinds of real-world situations that "social justice" theories try to make statements about, there are two basic ways to arrive at a logically consistent belief structure: we can start from broad general axioms and reason forward to more specific rules (as you did with your friend), or we can start from our intuitions about specific cases and reason backward to general principles.
IME, when I try to reason-forward in such domains, I end up with a more simplistic, less workable understanding of the domain than when I try to reason-backward. The primary value to me of reasoning-forward for these domains is if I distrust my intuitive mechanism for endorsing examples, and want to justify rejecting some of those intuitions.
With respect to your anecdote... if your friend's experience aligns with mine, it may be that they therefore understood you to be trying to justify rejecting some of their intuitions about social mores in the name of logical consistency, and were consequently outraged (defending social mores from challenge is basically what outrage is for, after all).
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-02-06T16:01:21.500Z · LW(p) · GW(p)
It occurs to me that I can express this thought more concisely in local jargon by saying that any system which seeks to optimize a domain for a set of fixed values that do not align with what humans collectively value today is unFriendly.
comment by B_For_Bandana · 2013-02-06T23:34:38.538Z · LW(p) · GW(p)
It seems possible that when your friend said, in effect, that there can never be any axioms for social justice, what they really meant was simply, "I don't know the axioms either." That would indeed be a map/territory confusion on their part, but it's a pretty common and understandable one. The statement, "Flying machines are impossible" is not equivalent to "I don't know how to build a flying machine," but in the short term they are making the same prediction: no one is flying anywhere today.
Actually, and I don't know if you've thought of it this way, but in asking for the axioms of social justice theory, weren't you in effect asking for something close to the solution to the Friendly AI problem? No wonder your friend couldn't come up with a good answer on the spot!
Replies from: mszegedy↑ comment by mszegedy · 2013-02-07T00:08:18.529Z · LW(p) · GW(p)
It seems possible that when your friend said, in effect, that there can never be any axioms for social justice, what they really meant was simply, "I don't know the axioms either." That would indeed be a map/territory confusion on their part, but it's a pretty common and understandable one. The statement, "Flying machines are impossible" is not equivalent to "I don't know how to build a flying machine," but in the short term they are making a similar prediction: no one is flying anywhere today.
They seemed to be saying both things.
Actually, and I don't know if you've thought of it this way, but in asking for the axioms of social justice theory, weren't you in effect asking for something close to the solution to the Friendly AI problem? No wonder your friend couldn't come up with a good answer on the spot!
Hah, that's true! I didn't think of it that way. I don't know that much about the Friendly AI problem, so I wouldn't know anyway. I've been able to reduce my entire morality to two axioms, though (which probably aren't especially suitable for AI or a 100% rational person, because there's no possibility at all that I've actually found a solution to a problem I know nothing about that has been considered by many educated people for long periods of time), so I thought that maybe you could find something similar for social justice (I was having trouble deciding on what to feel about certain fringe cases).
Replies from: B_For_Bandana↑ comment by B_For_Bandana · 2013-02-07T00:15:21.945Z · LW(p) · GW(p)
They seemed to be saying both things.
My point was that they probably did think they meant both things, because the distinction between "it's impossible" and "I don't know how" is not really clear in their mind. But that is not as alarming as it would be coming from someone who did know the difference, and insisted that they really did mean "impossible."
I've been able to reduce my entire morality to two axioms...
Okay, I'll bite. What are they?
Replies from: mszegedy↑ comment by mszegedy · 2013-02-07T00:46:30.795Z · LW(p) · GW(p)
My point was that they probably did mean both things, because the distinction between "it's impossible" and "I don't know how" is not really clear in their mind. But that is not as alarming as it would be coming from someone who did know the difference, and insisted that they really did mean "impossible."
Hmm, I agree, but I don't think that it adequately explains the entire picture. I think it might have been two different ideas coming from two different sources. I can imagine that my friend had absorbed "applying formalized reason to society is bad" from popular culture, whereas "I don't know what the founding propositions of social justice are", and subsequently "there might not be able to be such things" (like you talked about), came from their own internal evaluations.
Okay, I'll bite. What are they?
I kinda wanted to avoid this because social approval etc., also brevity, but okay:
- Everybody is completely, equally, and infinitely entitled to life, positive feelings, and a lack of negative feelings.
- One must forfeit gratification of axiom 1 to help others to achieve it. (This might be badly worded. What I mean is that in one's actions one also has to consider others' entitlement to the things in axiom 1, and while others do not have those things, one should be helping them get them, not oneself.)
I know it loses a lot of nuance this way (to what extent must you help others? well, so that it works out optimally for everyone; but what exactly is optimal? the sum of everyone's life/positive feelings/lack of negative feelings? that's left undefined), but it works for me, at least.
Replies from: Qiaochu_Yuan, Richard_Kennaway, Eugine_Nier↑ comment by Qiaochu_Yuan · 2013-02-09T18:56:17.730Z · LW(p) · GW(p)
I think it is deeply misleading to label these "axioms." At best these are summaries of heuristics that you use (or believe you use) to make moral decisions. You couldn't feed these axioms into a computer and get moral behavior back out. Have you read the posts orbiting around Fake Fake Utility Functions?
↑ comment by Richard_Kennaway · 2013-02-07T14:12:18.866Z · LW(p) · GW(p)
Okay, I'll bite. What are they?
I kinda wanted to avoid this because social approval etc., also brevity, but okay:
(axioms omitted)
I don't see any mathematics there, and making them into mathematics looks to me like an AI-complete problem. What do you do with these axioms?
↑ comment by Eugine_Nier · 2013-02-07T04:11:27.430Z · LW(p) · GW(p)
What do you mean by "positive feelings"? For example, would you support wireheading everyone?
Replies from: mszegedy↑ comment by mszegedy · 2013-02-07T04:50:49.051Z · LW(p) · GW(p)
That's exactly what I can't make my mind up about, and forces me to default to nihilism on things like that. Maybe it really is irrelevant where the pleasure comes from? If we did wirehead everyone for eternity, then would it be sad if everyone spontaneously disappeared at some point? Those are questions that I can't answer. My morality is only good for today's society, not tomorrow's. I guess strictly morally, yes, wireheading is a solution, but philosophically, there are arguments to be made against it. (Not from a nihilistic point of view, though, which I am not comfortable with. I guess, philosophically, I can adopt two axioms: "Life requires meaning," and "meaning must be created." And then arises the question, "What is meaning?", at which point I leave it to people with real degrees in philosophy. If you asked me, I'd try to relate it to the entropy of the universe somehow. But I feel that I'm really out of my depth at that point.)
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2013-02-09T19:03:51.610Z · LW(p) · GW(p)
I think you're giving up too early. Have you read the metaethics sequence?
comment by passive_fist · 2013-02-06T07:42:55.106Z · LW(p) · GW(p)
Perhaps I'm mistaken about this, but isn't a far stronger argument in favor of a consistent belief system the fact that with inconsistent axioms you can derive any result you want? In an inconsistent belief system you can rationalize away any act you intend to take, and in fact this has often been seen throughout history.
Replies from: Kawoomba, Larks, Desrtopa↑ comment by Kawoomba · 2013-02-06T10:22:19.882Z · LW(p) · GW(p)
In theory, yes. In practice, ... maybe. Like saying "a human can implement a bounded TM and can in principle, without tools other than paper&pencil, compute a prime number with a million digits".
It depends on how inconsistent the axioms are in practice. If the contradictions are minor, before leveraging that contradiction to derive arbitrary results, the hu-man may die of old age.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2013-02-07T03:35:13.811Z · LW(p) · GW(p)
It depends on how inconsistent the axioms are in practice. If the contradictions are minor, before leveraging that contradiction to derive arbitrary results, the hu-man may die of old age.
Of course, if the belief system in question becomes popular, one of his disciples may wind up doing this.
↑ comment by Desrtopa · 2013-02-06T20:03:56.956Z · LW(p) · GW(p)
The friend in question wouldn't buy that argument though, because rather than accepting as a premise that they hold inconsistent axioms, they would assert that they don't apply things like axioms to their reasoning about social justice.
Plus, it's not likely to reflect their impression of their own actions. They're probably not trying to logically derive conclusions from a set of conflicting premises so much as they're following their native moral instincts, which may be internally inconsistent, but certainly do not have unlimited elasticity of output. You can get an ordinary person to respond to the same moral dilemma in different ways by framing it differently, but there are some conclusions that they cannot be convinced to draw, and others that they will uphold consistently, so if they're told that their belief system can derive any result, their response is likely to be "What? No it can't."
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2013-02-07T03:37:58.223Z · LW(p) · GW(p)
In practice this tends to manifest as being able to rationalize any result.
Replies from: Desrtopa↑ comment by Desrtopa · 2013-02-07T03:43:59.881Z · LW(p) · GW(p)
They'll tend to rationalize whatever results they output, but that doesn't mean that they'll output just any result.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2013-02-07T04:14:47.223Z · LW(p) · GW(p)
Unfortunately the results they output tend to resemble this.
comment by alfredmacdonald · 2013-02-08T05:53:46.672Z · LW(p) · GW(p)
I really liked this post, and I think a lot of people aren't giving you enough credit. I've felt similarly before -- not to the point of suicide, and I think you might want to find someone you can confide those anxieties in -- but about being angered at someone's dismissal of rationalist methodology. Because ultimately, it's the methodology that makes someone a rationalist, not necessarily a set of beliefs. The categorizing of emotions as being in opposition to logic, for example, is something I've been frustrated with for quite some time, because emotions aren't anti-logical so much as they are alogical. (In my personal life, I'm an archetype of someone who gets emotional about logical issues.)
What I suspect was going on is that you felt that this person was being dismissive of the methodology and that the person did not believe reason to be an arbiter of disagreements. This reads to me like saying "I'm not truth-seeking, and I think my gut perception of reality is more important than the truth" -- a reading that sounds to me both arrogant and immoral. I've run across people like this too, and every time I feel like someone is de-prioritizing the truth in favor of their kneejerk reaction, it's extremely insulting. Perhaps that's what you felt?
comment by christina · 2013-02-07T07:11:11.948Z · LW(p) · GW(p)
I am confused why your friend thought good social justice arguments do not use logic to defend their claims. Good arguments of any kind use logic to defend their claims. Ergo, all the good social justice arguments are using logic to defend their claims. Why did you not say this to your friend?
EDIT: Also confused about your focus on axioms. Axioms, though essential, are the least interesting part of any logical argument. If you do not accept the same axioms as your debate partner, the argument is over. Axioms are by definition not mathematically demonstrable. In your post, you stated that axioms could be derived from other fundamental axioms, which is incorrect. Could you clarify your thinking on this?
comment by [deleted] · 2013-02-06T21:04:24.279Z · LW(p) · GW(p)
Did I miss any sort of deeper reasons I could be using for this?
"That one guy I know and the stuff they say" is usually not a great proxy for some system of belief in general; therefore, to the extent you care about the same stuff they care about when they say "social justice", a knee-jerk rejection of thinking about that stuff systematically or in terms of axioms and soforth should probably come off to you as self-defeating or shortsighted.
comment by Giles · 2013-02-06T19:19:38.123Z · LW(p) · GW(p)
2 sounds wrong to me - like you're trying to explain why having a consistent internal belief structure is important to someone who already believes that.
The things which would occur to me are:
- If both of you are having reactions like this then you're dealing with status, in-group and out-group stuff, taking offense, etc. If you can make it not be about that and be about the philosophical issues - if you can both get curious - then that's great. But I don't know how to make that happen.
- Does your friend actually have any contradictory beliefs? Do they believe that they do?
- You could escalate - point out every time your friend applies a math thing to social justice. "2000 people? That's counting. You're applying a math thing there." "You think this is better than that? That's called a partial ordering and it's a math thing". I'm not sure I'd recommend this approach though.
comment by MrMind · 2013-02-07T15:51:47.421Z · LW(p) · GW(p)
Your friend's refusal to axiomatize the theory of social justice doesn't necessarily imply that he believes social justice can be governed by incoherence ("theory" here is used in its model-theoretic meaning: a set of true propositions). Under an (admittedly somewhat stretched) charitable reading, it may just mean that your friend believes it's incompressible: the complexity of the axioms is as great as the complexity of the facts the axioms would want to explain.
It's just like the set of all arithmetical truths: you cannot axiomatize it, but it's for sure not inconsistent.
↑ comment by Qiaochu_Yuan · 2013-02-10T19:26:38.515Z · LW(p) · GW(p)
It's just like the set of all arithmetical truths: you cannot axiomatize it, but it's for sure not inconsistent.
Mega-nitpicks: 1) it is possible to axiomatize the set of all arithmetical truths by taking as your axioms the set of all arithmetical truths. The problem with this axiomatization is that you can't tell what is and isn't an axiom, which is why Gödel's theorem is about recursively enumerable axiomatizations instead of arbitrary ones, and 2) it is very likely that Peano arithmetic is consistent, but this isn't a proposition I would assign probability 1.
Replies from: MrMind↑ comment by MrMind · 2013-02-11T10:22:44.467Z · LW(p) · GW(p)
it is possible to axiomatize the set of all arithmetical truths by taking as your axioms the set of all arithmetical truths.
Yes, I thought about adding "recursively" to the original statement, but I felt that the word "axiomatize" in the OP carried the meaning of somehow reducing the number of statements, so I decided not to write it. But of course the trivial axiomatization is always possible; you're totally correct.
it is very likely that Peano arithmetic is consistent, but this isn't a proposition I would assign probability 1.
Heh, things get murky really quickly in this field. It's true that you can prove arithmetic consistent inside a stronger model, and it's true that there are non-standard submodels that think they are inconsistent while being consistent in the outer model. There are also models (paraconsistent in the meta-logic) that can prove their own consistency, avoiding Gödel's theorem(s). This means that semantically, from a formal point of view, we cannot hope to really prove anything about some true consistency. I admittedly took a platonist view in my reply.
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2013-02-11T17:05:40.116Z · LW(p) · GW(p)
we cannot hope to really prove anything about some true consistency.
Sure we can. If we found a contradiction in Peano arithmetic, we'd prove that Peano arithmetic is inconsistent.
comment by ChristianKl · 2013-02-17T21:06:55.592Z · LW(p) · GW(p)
Therefore, assuming the first few statements, having an internally consistent belief system is desirable!
I think that's a straw man. Nobody denies that it's advantageous to have a consistent belief system. People rather argue that consistency isn't the only criterion on which to judge belief systems.
It's pretty easy to make up a belief system that's internally consistent but that leads to predictions about reality that are wrong.
A good example would be the problem of hidden Markov models. There are different algorithms for generating a path. The Viterbi algorithm creates a path that is internally consistent. The forward-backward algorithm creates a path that's not necessarily internally consistent but is more robust against error. In my bioinformatics class we learned that for practical problems the forward-backward algorithm is often better than the Viterbi algorithm.
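A toy sketch of this contrast in Python (not from the original comment; the model, probabilities, and observation sequence are invented purely for illustration):

```python
# Toy illustration: Viterbi decoding (the single most probable, internally
# consistent state path) versus forward-backward / posterior decoding (the most
# probable state at each position, which need not form a consistent path).
# All numbers below are made up for the example.
import numpy as np

states = ["Fair", "Loaded"]          # hypothetical hidden states of a die
obs = [0, 5, 5, 5, 0, 1]             # observed faces, 0-indexed (0 -> "1", 5 -> "6")

start = np.array([0.5, 0.5])
trans = np.array([[0.9, 0.1],        # Fair   -> Fair / Loaded
                  [0.2, 0.8]])       # Loaded -> Fair / Loaded
emit = np.vstack([np.full(6, 1/6),                 # fair die: uniform emissions
                  np.array([0.1]*5 + [0.5])])      # loaded die: "6" half the time

def viterbi(obs):
    """Most probable single state path (globally consistent)."""
    T, N = len(obs), len(states)
    logv = np.log(start) + np.log(emit[:, obs[0]])
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = logv[:, None] + np.log(trans)     # scores[i, j]: best via prev state i
        back[t] = scores.argmax(axis=0)
        logv = scores.max(axis=0) + np.log(emit[:, obs[t]])
    path = [int(logv.argmax())]
    for t in range(T - 1, 0, -1):                  # backtrack through the pointers
        path.append(int(back[t, path[-1]]))
    return path[::-1]

def posterior_decode(obs):
    """Most probable state at each position (forward-backward), taken independently."""
    T, N = len(obs), len(states)
    fwd = np.zeros((T, N)); bwd = np.zeros((T, N))
    fwd[0] = start * emit[:, obs[0]]
    for t in range(1, T):
        fwd[t] = (fwd[t - 1] @ trans) * emit[:, obs[t]]
    bwd[-1] = 1.0
    for t in range(T - 2, -1, -1):
        bwd[t] = trans @ (emit[:, obs[t + 1]] * bwd[t + 1])
    post = fwd * bwd
    post /= post.sum(axis=1, keepdims=True)        # per-position state posteriors
    return post.argmax(axis=1).tolist()

print("Viterbi path:   ", [states[i] for i in viterbi(obs)])
print("Posterior path: ", [states[i] for i in posterior_decode(obs)])
```

Depending on the numbers, the two decodings may or may not agree; when they disagree, the posterior path picks the most probable state at each position separately and need not form a single coherent trajectory through the model, which is exactly the trade-off between internal consistency and per-position robustness described above.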
Fischer Black and Myron Scholes did work that gave financial traders an internally consistent way to measure risk. Nassim Taleb argues in his books that as a result the two reduced the resilience of financial models against errors and wrecked the financial system. Too many people believed in the importance of consistency, and as a result bad things happened.
If someone tells me there's global warming I'm much more interested in the question: "Is your model robust against error?" than the question "Is your model internally consistent?"
A lot of current social justice theory comes out of post-modernism, which doesn't see being consistent as the prime value to which one should aspire. Political totalitarianism, which presumes that the moral values of the population should be consistent, got rejected. Multiculturalism assumes that it's good to have people with values that aren't consistent with one another living together.
Making progress in accomplishing goals is a desirable thing. An inconsistent belief system will generate actions that are oriented towards non-constant goals, and interfere destructively with each other, and not make much progress. A consistent belief system will generate many actions oriented towards the same goal, and so will make much progress.
If you are wrong and focus on making progress as fast as possible, that's pretty dangerous. It's prudent to think a bit about minimizing the damage that you cause when you are wrong.
If you are the social elite and are wrong, you do damage by forcing your values onto a minority that's right.
Often it's more important to focus on minimizing the cost of being wrong than to focus on progressing as fast as possible towards some goal.
comment by JQuinton · 2013-02-13T16:36:58.877Z · LW(p) · GW(p)
I don't have anything to add, other than to say that I've had similar frustrations with people. It was mainly in my heyday of debating theists on the Internet. I quite often would encounter the same exact dismissal of logic when presenting a logical argument against the existence of god; literally, they would say something like "you can't use that logic stuff on god" (check out stuff like the presuppositional argument for the existence of god if you want to suffer a similar apoplexy). Eventually, I just started to find it comical.
Replies from: ChristianKl↑ comment by ChristianKl · 2013-02-17T22:07:05.672Z · LW(p) · GW(p)
I don't have anything to add, other than to say that I've had similar frustrations with people. It was mainly in my heyday of debating theists on the Internet. I quite often would encounter the same exact dismissal of logic when presenting a logical argument against the existence of god; literally, they would say something like "you can't use that logic stuff on god"
Did a theist really get you to contemplate suicide by making that argument in an internet discussion? If not, then I don't think you felt a frustration similar to what the guy in the original post felt, and you read something into the post that isn't there.
comment by Qiaochu_Yuan · 2013-02-06T18:12:18.897Z · LW(p) · GW(p)
Also, a general comment. Suppose you think that the optimal algorithm for solving a problem is X. It does not follow that making your algorithm look more like X will make it a better algorithm. X may have many essential parts, and making your algorithm look more like X by imitating some but not all of its essential parts may make it much worse than it was initially. In fact, a reasonably efficient algorithm which is reasonably good at solving the problem may look nothing like X.
This is to say that at the end of the day, the main way you should be criticizing your friend's approach to social justice is based on its results, not based on aesthetic opinions you have about its structure.
comment by Kawoomba · 2013-02-06T10:17:49.715Z · LW(p) · GW(p)
In order to make a rationalist extremely aggravated, you can tell them that you don't think that belief structures should be internally logically consistent.
There are ways to argue for that too. Both the aggravated rationalist and your friend have inconsistent belief systems; as you say, the difference is just that the aggravated rationalist would like that to change, while your friend is fine with it.
The point is this: you can value keeping the "is"-state, and not want to change the as-is for some optimized but partly different / differently weighted preferences.
If I were asked "You wanna take this pill and become Super-Woomba, with a newly modified and now consistent preference system?", I'd be inclined to decline. Such self-modifications, while in some cases desirable, should not change me beyond recognition; otherwise I, as I exist now, would for all practical purposes stop existing.