What determines the balance between intelligence signaling and virtue signaling?
post by Wei_Dai
This is a question post.
Lately I've come to think of human civilization as largely built on the backs of intelligence and virtue signaling. In other words, civilization depends very much on the positive side effects of (not necessarily conscious [LW · GW]) intelligence and virtue signaling, as channeled by various institutions. As evolutionary psychologist Geoffrey Miller says [LW · GW], "it’s all signaling all the way down."
A question I'm trying to figure out now is, what determines the relative proportions of intelligence vs virtue signaling? (Miller argued that intelligence signaling can be considered a kind of virtue signaling, but that seems debatable to me, and in any case, for ease of discussion I'll use "virtue signaling" to mean "other kinds of virtue signaling besides intelligence signaling".) It seems that if you get too much of one type of signaling versus the other, things can go horribly wrong (the link is to Gwern's awesome review/summary of a book about the Cultural Revolution). We're seeing this more and more in Western societies, in places like journalism, academia, government, education, and even business. But what's causing this?
One theory is that Twitter with its character limit, and social media and shorter attention spans in general, have made it much easier to do virtue signaling relative to intelligence signaling. But this seems too simplistic and there has to be more to it, even if it is part of the explanation.
Another idea is that intelligence is valued more when a society feels threatened by an outside force, against which it needs competent people to protect itself. US policy changes after Sputnik are a good example of this. This may also explain why intelligence signaling continues to dominate, or at least is not dominated by virtue signaling, in the rationalist and EA communities (i.e., we're really worried about the threat from Unfriendly AI).
Does anyone have other ideas, or have seen more systematic research into this question?
Once we understand the above, here are some followup questions: Is the trend towards more virtue signaling at the expense of intelligence signaling likely to reverse itself? How bad can things get, realistically, if it doesn't? Is there anything we can or should do about the problem? How can we at least protect our own communities from runaway virtue signaling? (The recent calls against appeals to consequences [LW · GW] make more sense to me now, given this framing, but I still think they may err too much in the other direction.)
PS, it was interesting to read this in Miller's latest book Virtue Signaling:
Where does the term ‘virtue signaling’ come from? Some say it goes back to 2015, when British journalist/author James Bartholomew wrote a brilliant piece for The Spectator called ‘The awful rise of ‘virtue signaling.’’ Some say it goes back to the Rationalist blog ‘LessWrong,’ which was using the term at least as far back as 2013. Even before that, many folks in the Rationalist and Effective Altruism subcultures were aware of how signaling theory explains a lot of ideological behavior, and how signaling can undermine the rationality of political discussion.
I didn't know that "virtue signaling" was first coined (or at least used in writing) on LessWrong. Unfortunately, from a search, it doesn't seem like there was substantial discussion around this term. Signaling in general was much discussed on LessWrong and OvercomingBias, but I find myself still updating towards it being more important than I had realized.
answer by Jacob Falkovich (Jacobian)
Another idea is that intelligence is valued more when a society feels threatened by an outside force, against which it needs competent people to protect itself.
Building on this, virtue is valued more when a society is threatened from the inside. If people are worried about being betrayed or undermined by those who appear to be part of their tribe, they will look for virtue signals. We see this in the high correlation of virtue signaling with signals of ingroup loyalty, while intelligence signaling often takes the shape of disagreeing with the group.
In general, an outside threat or goal allows people to measure themselves against it. Status is set by the number of enemy scalps one collects, for example. But without an external measuring stick, people will jockey for relative status by showing loyalty and virtue.
↑ comment by Wei_Dai ·
2019-12-17T05:17:01.897Z · LW(p) · GW(p)
Building on this, virtue is valued more when a society is threatened from the inside.
Right, and unfortunately the relevant thing here isn't how much society is objectively threatened from the inside, but people's perceptions of the threat, which can differ wildly from reality, because of propaganda, revenue-driven news media, preexisting ideology, or any number of other things. To quote a particularly extreme and tragic instance of this from Gwern's review of The Cultural Revolution: A People's History, 1962-1976:
The disappointment generates dissonance: many people genuinely believed that the solutions had been found and that the promises could be kept and the goals were realistic, but somehow it came out all wrong. ("We wanted the best, but it turned out like always.") Why? It can't be that the ideology is wrong, that is unthinkable; the ideology has been proven correct. Nor is it the great leader's fault, of course. Nor are there any enemies close at hand: they were all killed or exiled. The cargo cult keeps implementing the revolution and waving the flags, but the cargo of First World countries stubbornly refuses to land.
The paranoid yet logical answer is that there must be invisible enemies: saboteurs, counter-revolutionaries, and society remaining 'structurally' anti-ideological. No matter that victory was total, the failure of their policies proves that the enemies are still everywhere.
↑ comment by Jacob Falkovich (Jacobian) ·
2019-12-17T14:49:40.184Z · LW(p) · GW(p)
This is a great example. During the Cultural Revolution and similar periods (e.g., Stalinist Russia) you not only wanted to signal virtue above intelligence, you actively wanted to signal *lack* of intelligence as vigorously as you could. The intelligentsia are always suspect.
answer by FactorialCode
I have a boring hypothesis, similar to the social media hypothesis: one signals virtue or IQ based on how much other people confer status for either of those things. In the early days of the internet, when barriers to entry prevented people from participating online, the internet was populated by people who disproportionately valued IQ over virtue. As a result, to gain status in old-school online communities, you needed to signal IQ. However, as the barriers to entry were lowered, a more representative sample of the population began to emerge online. The general population does not confer status based on signals of IQ; if anything, it's the opposite. Nerds have always been low status in the general population. Thus, the parts of the internet that conferred status for IQ have become insignificant compared to those that confer status for virtue. So people respond to the new incentives and signal virtue. And since our online views are connected to our real personas, this virtue signalling leaks out into reality, with observable phenomena such as people being fired from their jobs due to internet flash-mobs.
If this hypothesis is correct, then there isn't much we can do about it. Maybe we can make people aware of the dangers of conferring status for fake virtue in places with barriers to entry that are strong enough to keep out the people who would displace you for speaking about those dangers.
In short, we're all becoming part of the global village, and the village is chock full of people who "manipulate the social world, the world of popularity and offense and status, with the same ease that you manipulate the world of nature. But not to the same end. There is no goal for them, nothing to be maintained, just the endless twittering of I’m-better-than-you and how-dare-you-say-that."
↑ comment by frontier64 ·
2019-12-13T20:01:52.689Z · LW(p) · GW(p)
So people respond to the new incentives and signal virtue
If this were the case we would observe that people who previously signaled IQ would switch by and large to signaling virtue. This does not seem to be the case. Rather, the IQ signalers still exist and still signal IQ but have been marginalized/drowned-out by the superior number of virtue signalers.
This is apparent when looking at old reddit accounts. On reddit it used to be that most popular comments signaled IQ and most popular commenters would serially signal IQ. Look at historical posts, find an account that historically signaled IQ, and more often than not that account is either currently abandoned or used less often. Rarely will you find that the commenter has done a 180 and now consistently virtue signals.
I think this is a case where the one-step-removed hypothesis (new internet commenters value virtue over IQ, so internet commenters switched to virtue signaling over IQ signaling) is less accurate than the direct hypothesis (new internet commenters virtue signal more, and there are more of them). The Eternal September is caused by the old guard being unable to enforce their norms on the uncultured new users. It's not caused by the new users changing the value system of the old guard.
I don't think this casts much doubt on your hypothesis that the shift of internet values towards virtue signaling over IQ signaling leaked out into the real world. Virtue signaling leaked from the real world into the internet and then leaked back out strengthened and renewed. It's a positive feedback loop.
Maybe the solution is focusing on reverting the internet values back towards IQ signaling or away from virtue signaling. This may then stop the positive feedback loop. This could be done by making the internet barrier to entry higher again. Canceling the internet as a whole would also achieve the ultimate goal.
answer by Wei_Dai
Internet search engines and the resulting lack of privacy/forgetting also incentivize more virtue signaling than in the past, because being "called out" or "canceled" for insufficient virtue signaling has much worse consequences than it used to. (This applies to companies too, which then enforce/incentivize more virtue signaling on their employees.) I think people who say we should just be braver are not acknowledging the real changes in our social environment.
answer by Wei_Dai
A couple more ideas:
- The decline of religion paradoxically caused virtue signaling to increase. [LW(p) · GW(p)] (I wrote that earlier and forgot to mention it in the post.)
- Access to mass (one-to-many) media used to be restricted to elites, but is now open to ordinary people, who, compared to the elites, perhaps tend to favor virtue signaling over intelligence signaling, and this somehow feeds back into elite culture (e.g., journalism and academia). (This can be seen as another way of stating the "social media" hypothesis mentioned in the post. I feel like I still don't have a clear model of what happened / is happening, though.)
answer by Kerry
I'm not exactly sure how the two differ...it seems like two definitions of "signalling" are getting conflated.
"Virtue-signalling" is generally understood to be a conscious effort to draw the attention of others to one's own "virtuous" behavior, which may be totally disconnected from any desire to do virtuous deeds. It is co-opting a signal for one's own purposes, and can even be more like fabricating a signal. While virtuous behavior has actual value, virtuous people don't necessarily broadcast their deeds or portray them in the way signalers do. It's translated into social status terms. In this sense, "Intelligence-signalling" would be the equivalent of "virtue-signalling" when it becomes about looking cool for one's social group, and can be totally disconnected from real intelligence---knowing the latest "cool beliefs," "owning" the out-group, coming up with jargon, etc.
Miller is talking about virtues "evolved in both sexes through mutual mate choice to advertise good genetic quality, parenting abilities, and/or partner traits," implying a natural correspondence, though of course not one totally reliable. People value kindness and quasi-moral behavior like seeming mentally stable because it tends to correlate with being a "successful" parent and member of the community. Even the ability to fake these things has value. Same with intelligence. Signalling in this sense is closer to demonstrating the virtue than a conscious choice to get attention, as actually improving the space program would be, even if the motivation for doing so was to demonstrate superiority to the Soviets--the goal was to actually be superior to them, not just look good. The opposite of virtue-signalling is actual virtue, not another type of signalling.
So, demonstrating capability, which often means intelligence, seems like it should be classified differently. It will become more popular when there are needs to be met that offer the opportunity to take real action. So, yes, times of war or external threat will have people looking for someone who can actually take on the enemy. If there isn't much to do, people who can be satisfied by "signalling" whatever quality will get in-group approval will be forced to turn to internal social status concerns. The ability to signal and get feedback is closely tied to things like mass or social media, or certain social arrangements, so I would expect it to be way up under modern conditions. But for most of human history, you lived in a community where people knew you well, and you couldn't appeal to huge swaths of humanity very easily, so the audience dynamic would be different. The most common kinds of signalling for recognition from others, which are similar to things like etiquette and social standards, would be more formalized.
Modern life has also tended to work against individuals seizing control in response to crises, as we have extensive institutions and norms in place, and tend to standardize how people express their capabilities. But in any type of real emergency or situation where there are any stakes, people will value someone who seems to know what they are doing. It can be hard to assess these situations accurately in the modern world, though, since people are far away from it and the media/government will hype or minimize the stakes as needed and try to direct who gets credit.
Comments sorted by top scores.
comment by Wei_Dai ·
2019-12-11T18:56:17.526Z · LW(p) · GW(p)
So how bad can things get? Am I crazy to worry about a future Cultural-Revolution-like virtue signaling dystopia, but even worse because it will be tech-enhanced / AI-assisted? For example during the Cultural Revolution almost everyone who kept a diary (including my own parents) either burned theirs or had their diaries become evidence for various thoughtcrimes (i.e., any past or current thoughts contradicting the current party line, which changes constantly so nobody is immune). But doing the equivalent of burning one's diary will be impossible for a lot of people in the next "Cultural Revolution". Also, during the Cultural Revolution, people eventually became exhausted from the extreme virtue signaling, Mao died, and common sense finally prevailed again. But with AI assistance, none of these things might happen in the next "Cultural Revolution".
On the other side, I was going to say that it seems unlikely that too much intelligence signaling can cause anything as bad to happen, but then I realized that AI risk is actually a good example of this, because a lot of research interest in AI is driven at least in part by intellectual curiosity, and evolution probably gave us that to better signal intelligence. The whole FAI / AI alignment movement can be seen as people trying to inject more virtue signaling into the AI field! (It's pretty crazy how much of a blind spot we have about this. I'm only having this thought now, even though I've known about signaling and AI risk for at least two decades.)
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) ·
2020-01-14T20:22:23.801Z · LW(p) · GW(p)
I don't think you are crazy; I worry about this too. I think I should go read a book about the Cultural Revolution to learn more about how it happened--it can't have been just Mao's doing, because e.g. Barack Obama couldn't make the same thing happen in the USA right now (or even in a deep-blue part of the USA!) no matter how hard he tried. Some conditions must have been different.*
*Off the top of my head, some factors that seem relevant: Material deprivation. Overton window so narrow and extreme that it doesn't overlap with everyday reality. Lack of outgroup that is close enough to blame for everything yet also powerful enough to not be crushed swiftly.
I don't think it could happen in the USA now, but I think maybe in 20 years it could if trends continue and/or get worse.
Then there are the milder forms, which don't involve actually killing anybody but just involve getting people fired, harassed, shamed, discriminated against, etc. That seems much more likely to me--it already happens in very small, very ideologically extreme subcultures/communities--but also much less scary. (Then again, from the perspective of reducing AI risk, this scenario might be almost as bad: if the AI safety community undergoes a "soft cultural revolution" like this, it might seriously undermine our effectiveness.)
comment by Wei_Dai ·
2019-12-09T12:08:12.533Z · LW(p) · GW(p)
Geoffrey Miller explained in a talk about Virtue signaling and effective altruism (which I saw after writing this post) how things can go wrong when there is too much intelligence signaling:
↑ comment by FactorialCode ·
2019-12-11T13:25:15.007Z · LW(p) · GW(p)
With the exception of a nootropics arms race, I don't think runaway IQ signalling looks like anything that was mentioned. Runaway IQ signalling might start off like that, but as time goes on, people's views on all of the above will start to flip-flop as they try to distinguish themselves from their peers at a similar level of IQ. Leaning too hard on any of the above-mentioned signalling mechanisms exposes you to arguments against them, allowing someone to signal that they're smarter than you. But if you over-correct, you become exposed to counter-arguments, allowing someone else to signal that they're smarter than you. I think EY captured the basic idea in this post on meta-contrarianism. [LW · GW]
I think a better model might be that it looks a lot more like old-school internet arguments: several people trying hard to out-manoeuvre each other in a long series of comments in a thread, mailing list, or debate, each saying some version of "Ah, this is true, but you've forgotten to account for..." in order to prove that they are Right and everyone else is Wrong. Or mathematicians trying to prove difficult and well-recognized theorems, since those are solid benchmarks for demonstrating intelligence.
↑ comment by lump1 ·
2020-02-20T17:17:22.646Z · LW(p) · GW(p)
If you want to see what runaway intelligence signaling looks like, go to grad school in analytic philosophy. You will find amazingly creative counterexamples, papers full of symbolic logic, and speakers who get attacked mid-talk with refutations from the audience and then, sometimes, deftly parry the killing blow with a clever metaphor, taking the questioner down a peg...
It's not too much of a stretch to see philosophers as IQ-signaling athletes. Tennis has its ATP ladder, and everybody gets a rank. In philosophy it's slightly less blatant, partly because even the task of scorekeeping in the IQ-signaling game requires you to be very smart. Nonetheless, there is always a broad consensus about who the top players are and which departments employ them.
Unlike tennis players, though, philosophers play their game without a real audience, apart from themselves. The winners get comfortable jobs and some worldly esteem, but their main achievement is just winning. Some have huge impact inside the game, but because nobody else is watching, that impact is almost never transmitted to the world outside the game. They're not using their intelligence to improve the world. They're using their intelligence to demonstrate their intelligence.
↑ comment by johnswentworth ·
2019-12-10T19:45:11.811Z · LW(p) · GW(p)
Could you expand a bit on why you expect a trade-off between intelligence/virtue signalling, as opposed to two independent axes? I can sort of see a case where intelligence is the "cost" part of "costly virtue signalling", and virtue is the "cost" part of "costly intelligence signalling", like the examples in toxoplasma of rage. On the other hand, looking at those examples of the dangers of runaway IQ signalling, they generally don't seem to trade off against virtue.
↑ comment by Wei_Dai ·
2019-12-11T03:08:00.705Z · LW(p) · GW(p)
Could you expand a bit on why you expect a trade-off between intelligence/virtue signalling, as opposed to two independent axes?
They are two independent axes, but when you're at the Pareto frontier (which I think a lot of people are at), doing more of one requires doing less of the other. For virtue signaling in particular, to signal effectively you often have to parrot a very narrow party line or orthodoxy, which leaves very few degrees of freedom for intelligence signaling. For example, if there are errors in the party line or orthodoxy, you'd ordinarily get "intelligence points" for finding and pointing them out, but in a virtue-signaling environment you'd get shamed/censored/punished for it.
What started this whole line of thought was this statement (linked to in the OP), which I saw someone quote in a completely serious way.
↑ comment by johnswentworth ·
2019-12-11T04:17:46.753Z · LW(p) · GW(p)
It seems like a lot of examples of virtue signalling require sacrificing intelligence, but sacrificing virtue seems like a less common requirement for signalling intelligence. So one possible model would be that, rather than a Pareto frontier on which the two trade off symmetrically, intelligent decisions are an input that is destructively consumed to produce virtue signals, like trees are consumed to produce paper.
↑ comment by Viliam ·
2019-12-16T21:21:40.780Z · LW(p) · GW(p)
Sometimes you can sacrifice a bit of virtue to signal intelligence. For example, when people talk in real life, interrupting other people may give you an opportunity to say something clever first. Or you can make a funny joke that shows how smart and quick you are, even if you know that this will derail the debate.
Then there is contrarianism for signalling's sake. You disagree with people not because you truly believe they are wrong, but to show that they are unthinking sheep and you are the brave one who dares to oppose the popular opinion (even if you actually believe the popular opinion to be correct, and the thing you said is just an exercise in finding clever excuses for what is most likely the wrong answer). This can cause actual harm, when people convinced by your speech do the wrong thing instead of the right one.
comment by romeostevensit ·
2019-12-09T03:57:20.437Z · LW(p) · GW(p)
Related [LW · GW]
I like to think of signaling as dialects that communities use to communicate social coordination information (who should be paid attention to, who should receive praise or blame, etc.). I think about them in terms of the Buddhist realms:
- Who is good/bad? Victims and oppressors; hell realm.
- Who controls resources? Territory; animal realm.
- Who deserves resources? Zero-sum competitions; hungry ghost realm.
- Which achievements are laudable? Prestige; titan realm.
- Which sorts of enjoyments are available/acceptable/admired? God realm.
- Which models hold sway over the group's decision making? Understanding and intellect; human realm.
Side note: the earliest uses of the term "virtue signaling" I'm aware of are from the PUA community, circa 2011.
comment by johnswentworth ·
2020-12-13T19:27:50.631Z · LW(p) · GW(p)
This is an interesting frame which is orthogonal to most of my other frames of the topic and seems to capture something which those other frames miss (or at least deemphasize).
comment by Wei_Dai ·
2019-12-09T04:26:38.319Z · LW(p) · GW(p)
Lately I’ve come to think of human civilization as largely built on the backs of intelligence and virtue signaling.
In case some people are not convinced of this, Geoffrey Miller argued in How did language evolve? that language itself evolved to allow our ancestors to signal intelligence:
This makes human language look puzzling from a Darwinian viewpoint. Why do we bother to say anything remotely true, interesting, or relevant to anybody who is not closely related to us? In answering this question, we have to play by the evolutionary rules. We can't just say language is for the good of the group or the species. No trait in any other species has ever been shown to be for the benefit of unrelated group members. [...]

Burling's theory also has the same trouble explaining content as Dunbar's theory. I think this problem can be solved by thinking about what a big-brained species would want to advertise during sexual courtship. If intelligence is important for survival and social life, then it would be a good idea to choose sexual partners for their intelligence. Language makes a particularly good intelligence-indicator precisely because it has rich content. We put our thoughts and feelings into words, so when we talk to a potential mate, they can assess our thoughts and feelings. We can read each other's minds through language, so we can choose mates for their minds, not just their bodies or songs. No other species can do this.
↑ comment by interstice ·
2019-12-09T06:39:56.107Z · LW(p) · GW(p)
Do you agree that signalling intelligence is the main explanation for the evolution of language? To me, it seems like coalition-building is a more fundamental driving force (after all, being attracted to intelligence only makes sense if intelligence is already valuable in some contexts, and coalition politics seems like an especially important domain). Miller has also argued that sexual signalling is a main explanation of art and music, which Will Buckingham has a good critique of here.
↑ comment by Wei_Dai ·
2019-12-09T23:47:12.356Z · LW(p) · GW(p)
I think signaling loyalty (which is very important for coalition-building) and other virtues is probably comparable in importance, as a function of language, to signaling intelligence. So Miller does seem to over-emphasize the latter as an explanation, and he also seems to have a tendency to over-emphasize sexual selection.
↑ comment by FactorialCode ·
2019-12-11T20:57:18.334Z · LW(p) · GW(p)
I think this explanation misses something very important: language lets small groups of agents coordinate for their collective gain. The richer the language and the higher the bandwidth, the more effectively the agents can work together and the more complicated the tasks they can solve. Agents that can work together will mop the floor with agents that can't. It's easy to construct tasks that can only be achieved by letting agents communicate with each other, and I suspect the ancestral environment provided plenty of challenges that were easier to solve with communication. I wouldn't be surprised if a large amount of the sophistication of our language comes from its ability to let us jockey for status or deceive others to more effectively propagate our genes, but I don't think we should discount that language vastly increases the power of agents that are willing to cooperate.
↑ comment by interstice ·
2019-12-12T07:11:29.702Z · LW(p) · GW(p)
Ability to cooperate is important, but I think that status-jockeying is a more 'fundamental' advantage because it gives an advantage to individuals, not just groups. Any adaptation that aids groups must first be useful enough to individuals to reach fixation (or near-fixation) in some groups.
↑ comment by Vaniver ·
2019-12-12T04:32:46.842Z · LW(p) · GW(p)
There are primates with proto-language, which I think lets them communicate well enough to do these sorts of things. The question then becomes "why go from a four-grunt language to the full variety of human speech?", and it seems like runaway dynamics make more sense here (in a way that rhymes with the Deutsch-style "humans developed causal reasoning as part of figuring out how to do ritual-style mimicry better" arguments).
↑ comment by FactorialCode ·
2019-12-12T17:36:02.742Z · LW(p) · GW(p)
why go from a four-grunt language to the full variety of human speech?
Bandwidth. Four grunts let you communicate 2 bits of information per grunt; n grunts let you communicate log2(n) bits per grunt. Moreover, without a code or compositional language, that's the most information you can communicate. Even the simple agents in the OpenAI link were developing a binary code to communicate, because 2 bits wasn't enough:
The first problem we ran into was the agents’ tendency to create a single utterance and intersperse it with spaces to create meaning.
In my model, the marginal utility of extra bandwidth and a more expressive code is large and positive when cooperating. This holds up to the information-processing limits of the brain, at which point further bandwidth is probably less beneficial. I think we don't talk as fast as Marshall Mathers simply because our brains can't keep up. Evolution is just following the gradient.
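A minimal sketch of the arithmetic behind this bandwidth point (the function names are my own, and it assumes equiprobable, independent symbols, which is the best case for an uncoded signal):

```python
import math

def bits_per_symbol(alphabet_size: int) -> float:
    """Maximum information one symbol can carry, assuming all symbols
    are equally likely and independent (no compositional code)."""
    return math.log2(alphabet_size)

def max_bit_rate(alphabet_size: int, symbols_per_second: float) -> float:
    """Upper bound on information throughput, in bits per second."""
    return symbols_per_second * bits_per_symbol(alphabet_size)

# A four-grunt repertoire tops out at 2 bits per grunt:
assert bits_per_symbol(4) == 2.0
# A vocabulary of ~10,000 words carries only ~13.3 bits per word;
# bigger inventories raise the ceiling just logarithmically, so
# composition (stringing symbols into sequences) does the real work.
print(round(bits_per_symbol(10_000), 1))
```

Note that composition changes the picture entirely: a sequence of k symbols from an n-symbol alphabet can distinguish n^k messages, which is why a compositional code beats any fixed repertoire of atomic signals.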
The main reason I don't think runaway dynamics are a major factor is simply that language is very grounded. Most of our language is dedicated to referencing reality. If language evolved because of a signalling spiral, especially an IQ-signalling spiral, I'd expect language to look like a game, something like verbal chess. Sometimes it does look like that, but that's the exception, not the rule. Social signalling seems to be mediated through other communication mechanisms, such as body language and tone, or things like vibing. [LW · GW] In all cases, the actual content of the language is mostly irrelevant, and signalling doesn't need the expressive, grounded, and compositional machinery of language to fulfill its purpose.
↑ comment by Roger Allen Sweeny (roger-allen-sweeny) ·
2020-02-20T17:26:55.265Z · LW(p) · GW(p)
Richard Wrangham begins The Goodness Paradox with the observation that 200 people will peacefully board an aircraft and sit peacefully through a six-hour flight, while 200 chimps would get into innumerable fights in the terminal. On the other hand, no chimps would conspire together to blow up the plane. Why the difference? His answer is "self-domestication": proto-humans with a lot of "reactive aggression" were killed or exiled by their group-mates. But doing so required feeling out your group-mates. How much did the asshole's behavior bother him, him, and her? What did they think of ...? What if ...? Language was helpful for putting together a conspiracy and then carrying out the plan. People who could do so were better able to survive, because their groups could do more (more cooperators and fewer defectors).
comment by cousin_it ·
2019-12-17T12:22:53.964Z · LW(p) · GW(p)
Interesting idea, but not sure it cuts reality at the joints: 1) left philosophy is full of intelligence signaling, 2) the Russian revolution was opposed to traditional virtues like family.
I think civilization mainly depends on the idea that punishing people for disagreement is not ok, and that's the idea we should try to reinforce.
comment by Wei_Dai ·
2019-12-09T07:34:37.487Z · LW(p) · GW(p)
I'm not sure the intelligence signaling vs virtue signaling framing is the best one to use, but it came to me while reading Miller's book and seeing a version of this around the same time:
if a marginalized group tells you that a word or phrase is harmful/toxic towards them and they wish you’d stop using it, it’s not an opportunity for you to flex your fucking debate skills