Generalizing From One Example & Evolutionary Game Theory

post by multifoliaterose · 2011-05-31T23:23:32.998Z · LW · GW · Legacy · 7 comments

Back in April 2010, Robin Hanson wrote a post titled Homo Hypocritus Signals. Hal Finney wrote a comment:

This reasoning does offer an explanation for why big brains might have evolved, to help walk the line between acceptable and unacceptable behavior. Still it seems like the basic puzzle remains: why is this hypocrisy unconscious? Why do our conscious minds remain unaware of our subconscious signaling?

to which Vladimir M responded. This post is a short addendum to the discussion.

In Generalizing From One Example, Yvain wrote:

There's some evidence that the usual method of interacting with people involves something sorta like emulating them within our own brain. We think about how we would react, adjust for the other person's differences, and then assume the other person would react that way.

It's plausible that the evolutionary pathway to developing an internal model of other people's minds involved bootstrapping from one's awareness of one's own mind. This would work well to the extent that there was psychological unity of humankind. In our evolutionary environment, the people who interacted with each other were more similar to one another than interacting people are today.
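As a toy illustration of the heuristic Yvain describes, consider the following sketch: copy your own reaction profile, then overwrite whatever differences you are consciously aware of. The traits, agents, and "known differences" below are invented for this sketch, not drawn from the post; the point is only that prediction error grows with the differences you fail to notice, which is why psychological similarity matters.

```python
# Toy sketch of the "simulate yourself, then adjust" heuristic.
# All traits and agents here are hypothetical illustrations.

def predict_reaction(self_model: dict, known_differences: dict) -> dict:
    """Predict another mind by starting from your own reactions and
    patching in the differences you are consciously aware of."""
    prediction = dict(self_model)          # start from your own mind
    prediction.update(known_differences)   # adjust for known differences
    return prediction

def prediction_error(prediction: dict, actual: dict) -> int:
    """Count the traits the prediction got wrong."""
    return sum(prediction[k] != actual[k] for k in actual)

my_mind      = {"cooperates": True, "risk_averse": True, "status_seeking": True}
similar_mind = {"cooperates": True, "risk_averse": False, "status_seeking": True}
alien_mind   = {"cooperates": False, "risk_averse": False, "status_seeking": False}

known_diffs = {"risk_averse": False}  # the one difference I've noticed

print(prediction_error(predict_reaction(my_mind, known_diffs), similar_mind))  # 0
print(prediction_error(predict_reaction(my_mind, known_diffs), alien_mind))    # 2
```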

I don't understand many of the decision theory posts on Less Wrong, but my impression is that the settings in which one is better off with timeless decision theory or updateless decision theory than with causal decision theory are those in which the other agents have a good model of one's own internal wiring.

This, together with one's model of others being based on one's model of one's own mind and the psychological unity of humankind, would push the conscious mind toward adopting something like timeless/updateless decision theory, based around cooperating with others. But the unconscious mind would then have free rein to push in the direction of defection (say, in one-shot prisoners' dilemma situations), because others would not have conscious access to their own tendency toward defection and consequently would not properly emulate that tendency in their model of the other person.
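A minimal sketch of the game-theoretic pull being described, using the conventional textbook prisoner's dilemma payoffs (the numbers are the standard ones, not from the post): causal reasoning defects because defection dominates move-for-move, yet the single best outcome goes to an agent whom others model as a cooperator but who in fact defects.

```python
# Standard one-shot prisoner's dilemma payoffs (row player's payoff shown).
# Payoff values are the conventional textbook ones, assumed for illustration.
C, D = "cooperate", "defect"
payoff = {
    (C, C): 3, (C, D): 0,   # mutual cooperation beats being exploited
    (D, C): 5, (D, D): 1,   # but defection dominates whatever the other does
}

# Causal reasoning: holding the other player's move fixed, defecting is
# strictly better, so a causal decision theorist defects.
for their_move in (C, D):
    assert payoff[(D, their_move)] > payoff[(C, their_move)]

# The hypothesized ideal: be modeled by others as a cooperator (so they
# cooperate with you) while an unconscious tendency defects.
print(payoff[(D, C)])  # 5: the hypocrite's payoff, best of all outcomes
print(payoff[(C, C)])  # 3: what transparent cooperators get
```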

The analysis given here is overly simplistic; for example, quoting myself:

The conscious vs. unconscious division is not binary but gradualist. There are aspects of one's thinking that one is very aware of, aspects that one is somewhat aware of, aspects that one is obliquely aware of, aspects that one could be aware of if one was willing to pay attention to them, and aspects that one has no access to.

but it rings true to me in some measure.

7 comments


comment by atucker · 2011-06-01T03:34:18.948Z · LW(p) · GW(p)

Interesting post.

I think that the extent to which dealing with other people gets you into Newcomb-style problems also comes from how your body language and whatnot helps other people predict what you're thinking. For instance, my unconscious mind signals my conscious thought processes to other people.

However, I'm not sure if there's any particular evidence that suggests that your model of others based on yourself is based only on your conscious mind.

Replies from: multifoliaterose
comment by multifoliaterose · 2011-06-01T04:47:48.974Z · LW(p) · GW(p)

Thanks for your feedback. I agree with:

I'm not sure if there's any particular evidence that suggests that your model of others based on yourself is based only on your conscious mind.

In fact, I suspect that the model is partially based on unconscious processes. I guess I'm just throwing out a hypothetical: the tendency toward unconscious hypocrisy could come from game-theoretic considerations, together with the conscious mind playing a greater role than the unconscious mind in modeling other people on one's own mind.

comment by timtyler · 2011-06-14T23:57:47.254Z · LW(p) · GW(p)

Why do our conscious minds remain unaware of our subconscious signaling?

Because it is best if the PR department doesn't know about the lies and deceptions.

If it did, it would have to lie about them, and humans are passable lie detectors.

comment by timtyler · 2011-06-01T15:41:34.742Z · LW(p) · GW(p)

It's plausible that the evolutionary pathway to developing an internal model of other people's minds involved bootstrapping from one's awareness of one's own mind. This would work well to the extent that there was psychological unity of humankind.

Psychological unity is not really needed. Much the same mental apparatus works reasonably well with cats, dogs, and other goal-directed agents, such as chess computers. The apparatus is accustomed to the idea that other agents may not be exactly like you, and accepts parameters to help deal with such cases.

Replies from: multifoliaterose
comment by multifoliaterose · 2011-06-01T21:34:11.449Z · LW(p) · GW(p)

The apparatus is accustomed to the idea that other agents may not be exactly like you, and accepts parameters to help deal with such cases.

Right, but the more similar the other agents are to you, the more immediate and intuitive the apparatus is; the more different they are from you, the cruder and more error-prone the approximation. Hence the relevance of the psychological unity of humankind.

comment by XiXiDu · 2011-06-01T08:57:42.548Z · LW(p) · GW(p)

The game-theoretic and decision-theoretic models being discussed here and elsewhere are often overly simplistic. Take, for example, the prisoner's dilemma you mentioned. In such a game-theoretic setting, where all agents care solely about reducing their prison sentence, it is perfectly rational to defect. This isn't the case in real life, where the situation is often much more complex and people care about more than the sentence they will receive.

Similarly with Newcomb-style problems, where one agent is the giver and the other is on the receiving side. Either you care about the prize, in which case it is perfectly rational to adopt the decision theory that makes you predictably precommit to one-boxing, or you don't, in which case you'll simply ignore the game.
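To make this point concrete, here is a small expected-value calculation under the standard $1,000/$1,000,000 Newcomb setup (the payoff amounts and accuracy values below are the conventional ones, assumed for illustration):

```python
# Expected payoffs in Newcomb's problem as a function of predictor
# accuracy p, using the standard $1,000 / $1,000,000 box amounts.

def expected_value(one_box: bool, p: float) -> float:
    """p = probability the predictor correctly anticipates your choice."""
    if one_box:
        # Predictor right -> the opaque box contains $1,000,000.
        return p * 1_000_000
    # Predictor right -> the opaque box is empty; you keep only the $1,000.
    return p * 1_000 + (1 - p) * (1_000_000 + 1_000)

for p in (0.5, 0.9, 0.99):
    print(p, expected_value(True, p), expected_value(False, p))
# Once p exceeds ~0.5005, the predictably precommitted one-boxer comes
# out ahead: if you care about the prize, you adopt the one-boxing policy.
```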

The problem I see is that it is mistaken to adopt such models universally just because they work for simple thought experiments. Doing so leads to all kinds of idiotic decisions, like walking into death camps if doing so decreases the chance of being blackmailed. It means losing by preemptively turning yourself into something you don't want to be, doing something you don't want to do, just because some abstract mathematical model suggests that this is how you reach an equilibrium with other agents. That is not what humans want. Humans want to win in a certain way, or die trying, whatever the consequences.

Many theories ignore that humans discount arbitrarily and are not consistent, and that humans can assign arbitrary amounts of utility to certain decisions. Just because we would die for our family does not mean you can extrapolate that we would die for a trillion humans, or precommit to becoming an asshole to win an amount of money that supposedly outweighs being an asshole. Some things can't be outweighed by an even bigger amount of utility; that's not how humans work.

We do not want to be blackmailed, but we also do not want to become assholes. If avoiding blackmail means becoming an asshole, it is perfectly rational to choose to be blackmailed, even if that means being turned into an even bigger asshole when you don't give in to the blackmail. That's human!

Replies from: multifoliaterose
comment by multifoliaterose · 2011-06-01T21:30:27.855Z · LW(p) · GW(p)

I think that some of the points that you make here are valid, but they seem oblique to the thrust of my post, which is about (hypothetically) why humans evolved to be the way they are.