Narcissism vs. social signalling

post by Chris_Leong · 2019-05-12T03:26:31.552Z · 18 comments

The main thrust of Robin Hanson's work is that much of human behaviour is the result of social signalling; that is, the attempt to convince others that we possess good qualities. On the other hand, The Last Psychiatrist presents narcissism as an alternative theory; that is, much of our behaviour is an attempt to convince ourselves that we possess good qualities. I haven't read enough of The Last Psychiatrist to be able to provide a good summary of their ideas, so this will just be a short post to raise awareness that an alternative theory exists and to discuss how these hypotheses relate.

The separation between these two theories isn't as clear as it may first appear. After all, we may attempt to convince other people of our goodness in order to ultimately convince ourselves, or we may attempt to convince ourselves of our goodness so that we can more persuasively convince other people. For the latter, imagine someone prepping for a teaching job interview: they're aware that their knowledge of the material isn't as strong as they'd like, but they believe they might get hired if they seem confident.

Trying to disambiguate using revealed preferences may be misleading - someone may spend most of their time trying to impress other people, but that may simply be the strategy they've adopted for convincing themselves that they are worthy, and they may drop it as soon as they learn other strategies. Alternatively, trying to use people's ultimate goals to disambiguate is tricky when, according to both theories, we often don't know why we do what we do. And indeed, when we talk about ultimate goals, are we referring to the ultimate goal that is represented in the brain, or are we allowed to reference evolutionary-psychology reasons for behaviour? Then of course there's the issue that, most of the time, both theories will be correct to a certain extent. I'll admit that the practical consequences of adopting one theory over another aren't always immediately clear, but I expect that adopting one model instead of the other would necessarily result in some differences in predictions.

18 comments

Comments sorted by top scores.

comment by jessicata (jessica.liu.taylor) · 2019-05-12T06:09:27.337Z

Signalling implies an evaluator trying to guess the truth. At equilibrium, a signaller reveals as much information as is cheap to reveal. Not revealing cheap-to-reveal information is a bad sign; if the info reflected well on you, you'd have revealed it, and so at equilibrium, evaluators literally assume the worst about non-revealed but cheap-to-reveal info (see: market for lemons).

This is stage 1 signalling. Stage 2 signalling is this but with convincing lies, which actually are enough to convince a Bayesian evaluator (who may be aware of the adversarial dynamic, and audit sometimes).

At stage 3, the evaluators are no longer attempting to discern the truth, but are instead discerning "good performances", the meaning of which shifts over time, but which initially bears resemblance to stage 2's convincing lies.

Narcissism is stage 3, which is very importantly different from stage 1 signalling (maximal revealing of information and truth-discernment) and stage 2 lying (maximizing impressions via convincing lies).
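A minimal sketch of the stage 1 unraveling, in Python; the discrete quality ladder and the best-response iteration are illustrative assumptions, not part of the comment above:

```python
# Stage 1 disclosure game: revealing your true quality is free, and the
# evaluator assumes non-disclosers have the average quality of the silent pool.
qualities = [1, 2, 3, 4, 5]
disclosers = set()
for _ in range(10):  # iterate best responses until a fixed point
    silent = [q for q in qualities if q not in disclosers]
    assumed = sum(silent) / len(silent) if silent else 0
    # disclose iff your true quality beats what silence would get you
    disclosers = {q for q in qualities if q > assumed}
print(sorted(disclosers))  # [2, 3, 4, 5]
```

At the fixed point everyone but the worst type discloses, so staying silent is equivalent to admitting the worst: the "assume the worst about non-revealed info" equilibrium.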

Replies from: Benquo, Wei_Dai
comment by Benquo · 2019-05-12T13:39:12.541Z
This is stage 1 signalling. Stage 2 signalling is this but with convincing lies, which actually are enough to convince a Bayesian evaluator (who may be aware of the adversarial dynamic, and audit sometimes).

The theory of costly signaling is specifically about stage 1 strategies in an environment where stage 2 exists - sometimes a false signal is much more expensive than a true signal of the same thing.
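In the standard formulation (notation mine, not Benquo's), a signal $s$ supports honest stage 1 behaviour when its cost straddles the benefit $b$ of being believed:

$$c_{\text{high}}(s) < b < c_{\text{low}}(s)$$

High-quality types find $s$ worth sending and low-quality types don't, so observing $s$ really is evidence of quality.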

comment by Wei Dai (Wei_Dai) · 2019-05-12T11:46:01.037Z

Stage 2 signalling is this but with convincing lies, which actually are enough to convince a Bayesian evaluator (who may be aware of the adversarial dynamic, and audit sometimes).

Do you have a formal (e.g., game theoretic) model of this in mind, or see an approach to creating a formal model for it?

On the one hand, I don't want to Goodhart on excess formality / mathematization or not take advantage of informal models where available, but on the other hand, I'm not sure if long-term intellectual progress is possible without using formal models, since informal models seem very lossy in transmission and it seems very easy to talk past each other when using informal models (e.g., two people think they're discussing one model but actually have two different models in mind). I'm thinking of writing a Question Post about this. If the answer to the above question is "no", would you mind if I used this as an example in my post?

Replies from: Benquo, jessica.liu.taylor
comment by Benquo · 2019-05-12T13:52:30.490Z

It seems to me like the first two stages are simple enough that Jessica's treatment is an adequate formalization, insofar as the "market for lemons" model is well-understood. Can you say a bit more about how you'd expect additional formalization to help here?

It's in the transition from stage 2 to 3 and 4 that some modeling specific to this framework seems needed, to me.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2019-05-12T20:23:06.257Z

It seems to me like the first two stages are simple enough that Jessica’s treatment is an adequate formalization, insofar as the “market for lemons” model is well-understood. Can you say a bit more about how you’d expect additional formalization to help here?

In the original "market for lemons" game there was no signaling. Instead the possibility of "lemons" in the market just drives out "peaches" until the whole market collapses.
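For concreteness, the textbook version of that collapse (the uniform quality distribution and the 3/2 markup are the standard illustrative choices, not something from this thread): sellers know their car's quality $q \sim U[0,1]$ and value it at $q$, while buyers value it at $\frac{3}{2}q$ but can only observe the price. At any price $p$, only sellers with $q \le p$ are willing to sell, so a buyer's expected value is

$$\mathbb{E}\!\left[\tfrac{3}{2}q \,\middle|\, q \le p\right] = \tfrac{3p}{4} < p \quad \text{for all } p > 0,$$

and no positive price sustains trade, even though every car is worth more to a buyer than to its seller.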

As I mentioned in my reply to Jessica, the actual model for stage 2 she had in mind seems more complex than any formal model in the literature that I can easily find. I was unsure from her short verbal description in the original comment what model she had in mind (in particular I wasn't sure how to interpret "convincing lies"), and am still unsure whether the math would actually work out the way she thinks (although I grant that it seems intuitively plausible). I was also unsure whether she is assuming standard unbounded rationality or something else.

It’s in the transition from stage 2 to 3 and 4 that some modeling specific to this framework seems needed, to me.

I was confused/uncertain about stage 2 already, but sure, I'd be interested in thoughts about how to model the higher stages too.

comment by jessicata (jessica.liu.taylor) · 2019-05-12T14:43:48.735Z

We can imagine a world where job applicants can cheaply reveal information about themselves (e.g. programming ability), and can more expensively generate fake information that looks like true information (e.g. cheating on the programming ability test, making it look like they're good at programming). The employer, meanwhile, is doing a Bayesian evaluation of likely features given the revealed info (which may contain lies), to estimate the applicant's expected quality. We could also give the employer audit powers (paying some amount to see the ground truth of some applicant's trait).

This forms a game; each player's optimal strategy depends on the other's, and in particular the evaluator's Bayesian probabilities depend on the applicant's strategy (if they are likely to lie, then the info is less trustworthy, and it's more profitable to audit).
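A minimal sketch of how the math might work out for a stripped-down binary version of this game (skilled vs. not, pass vs. fail); every parameter below is an illustrative assumption, not something from the comment. Solving the one-shot version for its mixed equilibrium, where the employer audits just often enough that faking doesn't pay:

```python
# One-shot signalling-with-audits game, solved for its mixed equilibrium.
P_GOOD = 0.5      # prior that an applicant is genuinely skilled
FAKE_COST = 0.3   # unskilled applicant's cost of faking a passing test
AUDIT_COST = 0.2  # employer's cost of checking ground truth
WAGE = 1.0        # applicant's payoff from being hired
V_GOOD, V_BAD = 1.5, -1.0   # employer's value from hiring each type

# The employer audits with probability a that makes a faker indifferent:
# (1 - a) * WAGE - FAKE_COST = 0  (a caught faker isn't hired).
audit_prob = 1 - FAKE_COST / WAGE

# Unskilled applicants fake with probability f that makes the employer
# indifferent between trusting the signal and paying for an audit. With
# posterior q = P(skilled | passed) = P_GOOD / (P_GOOD + (1 - P_GOOD) * f):
#   trust: q * V_GOOD + (1 - q) * V_BAD
#   audit: q * V_GOOD - AUDIT_COST
# these are equal when (1 - q) * (-V_BAD) = AUDIT_COST.
q_star = 1 - AUDIT_COST / -V_BAD
fake_prob = P_GOOD * (1 - q_star) / ((1 - P_GOOD) * q_star)

print(f"audit prob {audit_prob:.2f}, fake prob {fake_prob:.2f}")
# -> audit prob 0.70, fake prob 0.25
```

With these numbers the employer audits 70% of the time and a quarter of the unskilled applicants fake, which matches the informal picture of an evaluator who is aware of the adversarial dynamic and audits sometimes.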

I would not be surprised if this model is already in the literature somewhere. Ben mentioned the costly signalling literature, which seems relevant.

Fine to refer to this in a question, in any case.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2019-05-12T20:10:00.556Z

I would not be surprised if this model is already in the literature somewhere.

I couldn't find one after doing a quick search. According to http://www.rasmusen.org/GI/chapters/chap11_signalling.pdf there are separate classes of Audit Games and Signaling Games in the literature. It would seem natural to combine auditing and signaling into a single model but I'm not sure anyone has done so, or how the math would work out.

comment by Benquo · 2019-05-12T03:49:42.307Z

The temporal aspect seems important in distinguishing the two models - TLP says something changed in 20th Century American culture to make narcissism much more common.

Replies from: Chris_Leong
comment by Chris_Leong · 2019-05-12T04:23:00.893Z

What did he believe changed?

Replies from: Benquo
comment by Benquo · 2019-05-12T13:36:48.344Z

I can't think of a clear thing to point to in the text - I think he's more concerned with describing what's happening than modeling its historical causes.

I can guess on my own account - I think the commodification of human life, rapid pace of change with respect to economic roles, and rise of mass-media advertising in the mid 20C accelerated a force already latent in American culture. But that's my guess. TLP is more empirical.

comment by jessicata (jessica.liu.taylor) · 2019-05-12T06:30:37.316Z

There's no such thing as "convincing yourself" if you're an agent, due to conservation of expected evidence. What people describe as "convincing yourself" is creating conditions under which a certain character-level belief is defensible to adopt, and then (character-level) adopting it. It's an act, a simulacrum of having a belief.
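(For reference, conservation of expected evidence is the identity that a Bayesian's expected posterior equals its prior:

$$\mathbb{E}_e\big[P(H \mid e)\big] = \sum_e P(e)\,P(H \mid e) = \sum_e P(H, e) = P(H)$$

so no agent can run a procedure that it expects in advance to raise its credence that it has good qualities; anything that reliably does so isn't evidence-gathering.)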

(Narcissism is distinct from virtue ethics, which is the pursuit of actual good qualities rather than defensible character-level beliefs of having good qualities)

Replies from: Benquo, mr-hire, ChristianKl, Chris_Leong
comment by Benquo · 2019-05-12T13:42:30.988Z

Since all three comments so far seem to have had the same basic objection, I'm going to reply to the parent.

It seems like the claim in your first paragraph is implicitly disjunctive: IF your beliefs are "about the world" (i.e. you're modeling yourself as an agent with a truth-seeking epistemology), THEN "convincing yourself" isn't a thing. So IF you're "convincing yourself", THEN the relevant "beliefs" aren't a sincere attempt to represent the world.

comment by Matt Goldenberg (mr-hire) · 2019-05-12T08:30:42.651Z
There's no such thing as "convincing yourself" if you're an agent, due to conservation of expected evidence.

Is your claim that the actual way the brain works is close enough to Bayesian updating that this is true?

Replies from: jessica.liu.taylor
comment by ChristianKl · 2019-05-12T11:51:06.803Z

But humans are badly modeled as single agents. Our behavior is rather the result of multiple agents acting together. It seems to me that some of those agents do try to convince others.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2019-05-12T14:58:05.986Z

I don't believe humans are badly modeled as single agents. Rather, they are single agents that have communicative and performative aspects to their cognition and behavior. See: The Elephant In The Brain, Player vs Character.

If you have strong reason to think "single agent communicating and doing performances" is a bad model, that would be interesting.

In this case, "convincing yourself" is clearly motivated. It doesn't make sense as a random interaction between two subagents (otherwise, why aren't people just as likely to try to convince themselves they have bad qualities?); whatever interaction there is has been orchestrated by some agentic process. Look at the result, and ask who wanted it.

comment by Chris_Leong · 2019-05-12T09:23:13.608Z

Do you have any empirical evidence?

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2019-05-12T14:46:16.830Z

The academic term for the Bayesian part is Bayesian Brain. Also see The Elephant In The Brain. The model itself (humans as singular agents doing performances) has some amount of empirical evidence (note, revealed preference models deductively imply performativity), and is (in my view) the most parsimonious. I haven't seen empirical evidence specific to its application to narcissism, though.