Mapping the Social Mind (Buttons)

post by Bound_up · 2017-12-29T20:33:41.034Z · score: 20 (9 votes) · LW · GW · 6 comments

I said before that normal people were like a vast wall of buttons. Each button triggers a response: out pops a slip of paper with a rehearsed mini-speech, or an idea about what to do. Very much like cached responses, each one being triggered as its corresponding concept is activated by a stimulus.

I say "minimum wage." That is the stimulus.

The brain of a normal person resolves the sensory data and activates a concept. The "minimum wage" concept in the brain lights up. That is the pushing of the button.

The person says "Oh, yes, did you hear about the effects observed in ____land? Just goes to show that minimum wage is good/bad, doesn't it!" That is the cached response, the slip of paper that comes out.
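The stimulus-button-slip-of-paper model above can be caricatured as a lookup table. This is a toy sketch, not anything from the post itself; the stimuli, responses, and fallback line are all invented for illustration:

```python
# Toy model of the "wall of buttons": each stimulus keys a cached response.
# All entries here are invented for illustration.
cached_responses = {
    "minimum wage": "Did you hear about the effects observed in ____land?",
    "family values": "Family is the most important thing.",
}

def press_button(stimulus):
    """Return the rehearsed slip of paper for a stimulus, if one exists."""
    return cached_responses.get(stimulus, "Huh, never thought about that.")
```

The key property the post is pointing at: the table's values are independent entries. Nothing in the data structure forces `cached_responses["family values"]` to be consistent with any other entry.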

One of the mistakes I made before I understood this system is that I expected people to be less hypocritical. Better said (for they're not really being hypocrites, at least, not in the important sense of the word), I expected their expressed beliefs to give me clues to how they would behave.

For example, respectable people in American society will tell you that "family is the most important thing."

Ah, you might think (subconsciously). I shall hereafter expect their revealed preferences to favor family very highly. If their brother comes looking for some help, you might predict that help will gladly be given.

Hahaha, yeah, that's pretty silly, amiright?

Think of it this way. The brother asking for help is like one button being pushed, the brother-wants-help button. Mentioning "family values" pushes another button, the recite-respectable-mantra-about-family-values button. What happens when you push those buttons? The deeper question behind that is, What determines what gets written on the little slips of paper? What determines which thoughts get cached and which don't?

For normal humans, the answer is that their social experiences decide what will be on the papers. You can predictably expect to find written on their papers whatever response they have, so far, found to give them the most prestige in their in-group. Responses that win the adoration and adulation of their audience are kept, and refinements are tested over time and made permanent if found favorable.

Now, it is clear, yes? Why would the slip of paper from one button tell you anything about the slip of paper of another button? They're completely unrelated, don't you see? The papers are not compared to each other; the one is not used to write the other, and so, the decision to help the brother or not is not determined by what the shiniest answer to "family values" questions is!

In reality, it's a bit more complicated. After all, you can use their answer to "family values" to infer their ingroup, then call to mind the list of that ingroup's values, and then use that to deduce how they might behave when the brother asks for help. If the ingroup both says to recite line X in response to "family values" stimuli and says to perform act Z in response to a brother, then you might indeed use the one answer to infer the other. But the point is that the connection between the two papers does NOT come from trying to build a consistent philosophy to live by. X doesn't actually tell you anything about Z! The causal connection is less direct, because the real drivers of all the acts and opinions, and of whether or not they end up connected, are social status and ingroup norms.
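The indirect inference chain described here (observed answer X, then inferred ingroup, then that group's norms, then predicted act Z) could be caricatured like this. The groups, norms, and stimulus names are all invented for illustration:

```python
# Hypothetical ingroups, each prescribing both verbal lines and behaviors.
ingroup_norms = {
    "group_A": {"family values": "recite line X", "brother asks": "perform act Z"},
    "group_B": {"family values": "recite line Y", "brother asks": "decline"},
}

def infer_ingroup(stimulus, observed_response):
    """Infer which ingroup someone belongs to from one cached response."""
    for group, norms in ingroup_norms.items():
        if norms.get(stimulus) == observed_response:
            return group
    return None

def predict(stimulus, observed_response, new_stimulus):
    """Predict behavior on a new stimulus via the inferred ingroup's norms,
    not via any logical content of the observed response."""
    group = infer_ingroup(stimulus, observed_response)
    return ingroup_norms[group].get(new_stimulus) if group else None
```

Note that the prediction routes entirely through group membership: line X carries no information about act Z except insofar as both are norms of the same group.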

As such, their expressed "opinion" on the importance of family is no more useful for predicting their behavior toward their family, and in no more direct a way, than their manner of dress is. Both are equally connected to that behavior, which is to say, not very. And the connection is that both indicate which ingroup they identify with, which is useful for predicting general behaviors. The connection is not that the one answer directly tells you something about the other on the mere and unimportant basis that the one is the abstract representation of the other.

In short, the normal, non-nerdy, social, political animal that is the human does not use abstract thoughts like nerds do; abstractions are not used to tell you something true about all the members of the category the abstraction refers to (well, doesn't refer to, but seems to under a nerdy interpretation). Rather, they, like all other behaviors, are used to gain social status. It's perfectly consistent when seen from this angle, but also very inconsistent when judged from the angle of... let's see, how to say...

If you take their abstract statements, and you interpret them naively, interpret them as if they were meant to describe the qualities of a certain category of things...if you interpret them that way, then they say something which is not true at all, does not accurately describe the members of the category in the slightest.

But I feel dumb for ever having interpreted them that way! Couldn't I see that the illusion of contradiction was because I was taking a perfectly natural piece of social signaling and filtering it through a weird interpretive system that it was never meant for? It was really all my fault; I wasn't hearing what they were saying. They often said that I was misrepresenting them, but I never understood why, since I could repeat what they said almost verbatim and they'd agree that that was their stance.

Well, now I get it. My ears heard signals, but my mind heard abstract descriptions of reality. I thought in terms of describing reality, not garnering brownie points. It's weird to me, really, but so is quantum mechanics. This is how humans naturally behave, so I try not to get in the way of my own understanding by calling it weird.


Comments sorted by top scores.

comment by zulupineapple · 2018-01-01T17:41:37.050Z · score: 7 (3 votes) · LW · GW

This might shock you, but I think you're one of the button-people. You're asked about "minimum wage" and, without thinking, the most defensible claim you know comes out of your mouth (by defensible, I mean either easy to prove or hard to falsify). But why? Surely, you rationally understand that social connections have value, and that talking to people is a social activity. Yet your behaviors don't reflect that understanding.

Seriously though, I find your dichotomy quite bad. It's true that some people worry about the consistency of their beliefs more than others. That's because enforcing consistency takes effort. You think that your mind works in distinctly different ways, while in reality you're merely wasting your efforts on unimportant things that most people know not to bother with. That's probably because you've put yourself in the "rational" social group, and that's just what "rational" people do.

Another issue is that "family values" and not helping one's brother don't need to contradict each other. It sounds like you build trivial models of people, observe the models fail, and deduce that no reasonable models exist. This is not to say that people never contradict themselves, of course. And I'm willing to imagine that you're talking about a real person, whose circumstances and values are well known to you, and who truly is contradicting themselves. However, the text does not suggest this convincingly.

comment by elizabeth (pktechgirl) · 2017-12-31T21:24:16.794Z · score: 7 (2 votes) · LW · GW

moved to front page

comment by Aaron Teetor (aaron-teetor) · 2018-01-01T00:56:36.882Z · score: 6 (3 votes) · LW · GW

I don't necessarily disagree that this dichotomy exists; but this way of looking at it feels exaggerated. People tend to do what gets rewarded. We get rewarded for saying factually correct things, so we build maps around what is factually correct. Other people get rewarded for saying emotionally correct things, so they build maps around what is emotionally correct. It is still a map though. I'm not necessarily certain it's a bad map. It makes them happy, it can build a good sense of community, and generally matches something humans have loved and chased since before we even evolved into humans. I'd bet on about 3:2 odds they average happier than us.

It is useful to have a post detailing how some people use their maps to describe territories other than factuality along with how to talk to those people. This has about two sentences about those interactions and a lot of "did you know the outgroup can't even think abstractly?"

comment by Bound_up · 2018-01-01T14:23:27.293Z · score: 6 (3 votes) · LW · GW

Insofar as we are "overthinking things," they seem to agree that they think less in certain ways. That's purely descriptive, which was my whole purpose. Normal people tend to use System 2 and abstraction less, near as I can tell.

If I were to get prescriptive, I'd agree that nerds tend to use System 2 at some times when they should use System 1. Neither system is unequivocally superior, though. Since it's a spectrum, I wonder if there are some lucky souls whose dispositions land at the sweet spot in the middle.

As for calling the normal person's system of caching thoughts according to social status a "map": it doesn't seem very map-like to me. I mean, you can call it a map if you like, but the key distinction to understand is that at one extreme, we have an attempt to accurately describe the universe, and at the other extreme, an attempt to maximize social status, with real people falling somewhere in the middle, and the minority who are strongly biased towards accurately describing the universe called "nerds."

comment by Bound_up · 2018-01-01T14:26:10.016Z · score: 6 (3 votes) · LW · GW

Oh, and the difficult part is realizing that the status-maximizing answers resemble descriptions of reality, so you have to be careful about interpretation, and remember to consider the status-maximizing hypothesis when you hear someone give logically contradictory answers without caring to fix the contradictions when they find them.

comment by Error · 2018-04-28T22:41:05.317Z · score: 4 (1 votes) · LW · GW

This seems related to something I've been thinking about recently: That the concept of "belief" would benefit from an analysis along the lines of How an Algorithm Feels from the Inside [LW · GW]. What we describe as our "beliefs" are sometimes a map of the world (in the beliefs-paying-rent sense), and sometimes a signal to our social group that we share their map of the world, and sometimes a declaration of values, and probably sometimes other (often contradictory) things as well. But we act as if there's a single mental concept underlying them. The ambiguities are hard to shake out, I think because the signal version is only useful if it pretends to be the map version.

(I feel sour about human nature whenever I start thinking about this, because it leaves me feeling like almost all communication is either speaking in bad faith, or displaying a complete lack of intellectual integrity, or both.)