Judging the intent of others favorably

post by lessdazed · 2011-08-09T17:10:10.888Z

I would like LW to be an environment in which we can learn by having honest and productive conversations. Fortunately, it is already substantially such a place, but we can do better.

I would like to make a post about judging others favorably in the near future. To this end, I think a useful mechanism would be to encourage people to post, as comments, scenarios in which they made erroneous assumptions about others' intent, hiding from view the conclusion in which they learned of their error until the reader has performed the exercise of considering what the innocuous actual explanation might be.

The purpose would be to make a repository of stories in which people could read the scenario, try and perhaps fail to think of how the situation could be resolved, and then see the resolution in the previously hidden conclusion. Each bias involved in misjudgment - thinking one's enemies innately evil, believing one's own argument from ignorance about what the best possible explanation could be, and so forth - would be identified.

I don't know that hiding the conclusions of stories would be technically easy. One hack would be to have people post the conclusion as a reply to the comment in which they laid out the story. People could then downvote the child comment and upvote the parent. However, not everyone has the hiding threshold set at -3, the first people to see the comment would see the conclusion anyway, not everyone has unlimited downvotes, etc.

Alternatively, the conclusion to each story could be in rot13.
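Rot13 shifts each letter thirteen places, so applying it twice recovers the original; readers see the conclusion only when they choose to decode it. A minimal sketch in Python (the example conclusion here is invented):

```python
import codecs

# An invented example conclusion, for illustration only.
conclusion = "She was not ignoring me; she had lost her hearing aid that morning."

hidden = codecs.encode(conclusion, "rot13")  # obscure the spoiler
print(hidden)                                # gibberish at a glance
print(codecs.decode(hidden, "rot13"))        # decoding recovers the original
```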

 

As one protocol, analogous to how people are discouraged from quoting themselves, I would limit posts about occasions when others misjudged the author's intent to a maximum ratio, perhaps one for every two submissions in which the author reports having misjudged the intent of others.

As another protocol, posts in which others on LW misjudged one's intent would be off-limits.

Comments are encouraged, whether on my proposed protocols, on formatting, or on anything else.

9 comments


comment by [deleted] · 2011-08-10T23:03:41.828Z

Hm. I tried to think of a couple last night:

I thought my friend was mad at me because I apologized for coming to a social event late and she brusquely said "Ok." Somewhat later, I saw another friend of hers give her a small gift; she again responded by brusquely saying "ok." It turns out this is how she responds to most things when she's preoccupied.

Replies from: lessdazed
comment by lessdazed · 2011-08-10T23:08:10.950Z

It's easier to think of when others have misunderstood your intentions than when you have misunderstood others' intentions.

comment by Kaj_Sotala · 2011-08-10T14:39:33.436Z

Can you give an example of such a scenario?

Replies from: lessdazed
comment by lessdazed · 2011-08-10T15:31:48.621Z

I raise a concern about someone or something, and someone else is dismissive. I think it's because they dismiss the concern and are implying my feelings aren't valid. Really, they dismiss every concern not supported with citations, of which my expressed concern was one example.

Someone says they "just looked up" something, and I think they're implying I'm lazy or negligent for not looking it up myself. They meant "just" in the sense of recently, as in "just now".

I am being sarcastic, and someone does not realize it.

I tell someone who only writes long comments that I enjoy their writing and that they should write more short comments; they think I'm trying to politely tell them to stop writing long comments.

Someone says "anything we can do to X we can do to Y". I mistakenly think they are implying that only things that can be done to X can be done to Y.

I stumble shakily about the subway station clutching my groin, eyes unfocused and face contorted oddly. People think I am a drunken pervert, in reality I just had a mole removed high up on my inner thigh.

Since every action is the result of multiple influences, any thought in which "just because" springs to mind is immediately highly suspect. E.g. Mormons are "just" on here to proselytize.

Replies from: None
comment by [deleted] · 2015-10-22T22:27:13.037Z

I stumble shakily about the subway station clutching my groin, eyes unfocused and face contorted oddly. People think I am a drunken pervert, in reality I just had a mole removed high up on my inner thigh.

Haha, that's extremely specific. Did that really happen? I'd reckon you'd just been injured in the balls (owwie!) or had an orgasm from a really good thigh-kissing experience (indeed, an experience every good woman should experience at least once in her life), the memory of which you were reliving.

comment by Jayson_Virissimo · 2011-08-11T05:51:58.532Z

Is your plan to harness motivated cognition to correct for a cognitive bias?

Replies from: lessdazed
comment by lessdazed · 2011-08-11T15:25:35.832Z

Excellent question. This is importantly not my plan.

I am not one who advocates fighting irrationality with irrationality. Once one stops trying to be accurate, and summons all one's power to push against one's current bias, one's success becomes a matter of chance and circumstance. For example, if someone tries to counter their bias of favoring ingroups over outgroups by using motivated cognition to see flaws in (groups labeled) ingroups and excuse everything done by (groups labeled) outgroups, one ends up like Chris Hedges. It will be a miracle if one ends up thinking straight after summoning demons of stupidity to fight the demons of stupidity already possessing one.

Neither can we transpose the methodology of how we consider our own actions onto how we think of others'. That won't work because we aren't good at introspection. If our motivated cognition that we use to exculpate ourselves actually consistently found the truth, then we could practice excusing others as we do ourselves, but we excuse ourselves too much.

Rather, I think the best goal is to be well calibrated, with a healthy respect for unknown unknowns. We are enjoined that "If the iron approaches your face, and you believe it is hot, and it is cool, the Way opposes your fear. If the iron approaches your face, and you believe it is cool, and it is hot, the Way opposes your calm." However,

Many psychological experiments were conducted in the late 1950s and early 1960s in which subjects were asked to predict the outcome of an event that had a random component but yet had base-rate predictability - for example, subjects were asked to predict whether the next card the experimenter turned over would be red or blue in a context in which 70% of the cards were blue, but in which the sequence of red and blue cards was totally random.

In such a situation, the strategy that will yield the highest proportion of success is to predict the more common event. For example, if 70% of the cards are blue, then predicting blue on every trial yields a 70% success rate.

What subjects tended to do instead, however, was match probabilities - that is, predict the more probable event with the relative frequency with which it occurred...

...In the dilemma of the blue and red cards, our partial knowledge tells us - on each and every round - that the best bet is blue. This advice of our partial knowledge is the same on each and every round. If 30% of the time we go against our partial knowledge and bet on red instead, then we will do worse thereby - because now we're being outright stupid, betting on what we know is the less probable outcome.

If you bet on red every round, you would do as badly as you could possibly do; you would be 100% stupid. If you bet on red 30% of the time, faced with 30% red cards, then you're making yourself 30% stupid. Given incomplete information, the optimal betting strategy does not resemble a typical sequence of cards.
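A quick simulation makes the gap concrete (a sketch in Python; the 70/30 split comes from the quoted description, the rest is illustrative):

```python
import random

random.seed(0)
P_BLUE, ROUNDS = 0.7, 100_000
cards = [random.random() < P_BLUE for _ in range(ROUNDS)]  # True means blue

# Maximizing: always predict the more common color, blue.
always_blue = sum(cards) / ROUNDS

# Probability matching: predict blue 70% of the time and red 30%,
# independently of the actual card sequence.
matching = sum(card == (random.random() < P_BLUE) for card in cards) / ROUNDS

print(f"always blue: {always_blue:.3f}")  # ~0.700
print(f"matching:    {matching:.3f}")     # ~0.580, i.e. 0.7*0.7 + 0.3*0.3
```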

In the experiment, the payoff was the same for each correct guess, whether a guess of red or blue. However, that isn't the case with the iron. Mistakenly thinking the iron is cool has a greater cost than mistakenly thinking the iron is hot.

If red paid 8 cents to blue's 3 cents, it would be right to bet on red every time. It would also be right to expect blue with a probability of (nearly) .7.
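Spelling out the expected values behind that claim, using the 8-cent and 3-cent payoffs from the sentence above:

```python
p_blue, p_red = 0.7, 0.3
pay_blue, pay_red = 3, 8  # cents per correct guess

ev_blue = p_blue * pay_blue  # 2.1 cents per round from always betting blue
ev_red = p_red * pay_red     # 2.4 cents per round from always betting red

# Betting red every round earns more, even though any individual card
# is more likely to be blue.
print(round(ev_blue, 2), round(ev_red, 2))
```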

In our communications, we are often right about what others intend. Even when others mean to be malicious, we may be right more often than not in identifying it. Nonetheless, the cost of attributing ill intent when none was meant is sufficiently high that in many contexts it is virtually never the right thing to do.

I hope to show that people misinterpret others sufficiently often that anger and similar emotions, as well as counter-accusatory responses, are wrong responses until significant clarification is sought - not because they are usually wrong, but because their cost, multiplied by how often they are wrong, is so great.

This is so unless some feature removes a circumstance from this general class, though be warned - humans overestimate how often it is appropriate to do more than apply the actuarial tables to a situation, even when they know of this bias!

So, I think I detect people responding to, say, comments they judge 50% likely to mean something evil as if the comments were 50% evil. The work is first to show that people misread and misspeak so often that a feeling of 50% corresponds to evil intent only 25% of the time or so, and second to establish solidly within ourselves that a comment we suspect of evil intent - one we adjudge 50% or 25% likely to be evil - ought to be treated very nearly the same as, if not totally the same as, a comment .01% likely to have been made maliciously.
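To show the structure of that argument with numbers (the 50%-feeling/25%-actual calibration is from the paragraph above; the costs are invented placeholders, not measurements):

```python
p_actual = 0.25  # calibrated probability the comment is actually malicious,
                 # given that it *feels* 50% likely to be

# Invented costs, chosen only to exhibit the asymmetry described above.
COST_FALSE_ACCUSATION = 10.0  # responding angrily to a benign comment
COST_MISSED_MALICE = 1.0      # responding charitably to a malicious one

expected_cost_anger = (1 - p_actual) * COST_FALSE_ACCUSATION  # 7.5
expected_cost_charity = p_actual * COST_MISSED_MALICE         # 0.25

# With costs this lopsided, seeking clarification beats anger even when
# malice is quite probable.
print(expected_cost_anger, expected_cost_charity)
```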

The anecdotes with hidden conclusions are meant to improve our skill at estimating the chance that a situation has justifications we simply aren't imagining, not to delude us into thinking the chance is 100%. We will also examine the biases that held us back from thinking of the justification - for our map should reflect the territory - and improve our skill at considering the problem without letting premature answers push away other possibilities.

Subsequently, examining the costs of false positives and false negatives for each hypothetical situation makes those costs mentally available, so that they will hopefully be with us when we consider the causes of things in our daily lives.

Replies from: AdeleneDawner
comment by AdeleneDawner · 2011-08-11T18:56:13.230Z

Nitpick:

If red paid 4 cents to blue's 7 cents, it would be right to bet on red every time.

Nope.

Consider a deck of 10 cards; 7 blue and 3 red. You write down a list of 10 predictions, in order, and the cards are dealt out; for every card that matches the corresponding prediction you get a reward based on the card color.

If you make 10 blue predictions, and blue cards are worth 7 points, you get 7 correct answers and 49 points.

If you make 10 red predictions, you get 3 correct answers, and red cards have to be worth 16.33 for you to break even.
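The same arithmetic in code, for anyone who wants to check (deck and point values as given above):

```python
blue_cards, red_cards = 7, 3  # the 10-card deck
blue_points = 7               # blue's payoff in the quoted version

all_blue_score = blue_cards * blue_points   # 7 correct * 7 points = 49
breakeven_red = all_blue_score / red_cards  # 49 / 3 = 16.33...

print(all_blue_score)           # 49
print(round(breakeven_red, 2))  # 16.33
```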

Replies from: lessdazed
comment by lessdazed · 2011-08-11T19:54:43.793Z

Fixed. I meant to make the payoffs fair and then add a cent to red, but I accidentally set it up so that the .7-probability color got the seven cents rather than the three it would get in the fair case.