Motive Ambiguity
post by Zvi · 2020-12-15T18:10:01.372Z · LW · GW · 58 comments
Central theme in: Immoral Mazes Sequence, but this generalizes.
When looking to succeed, pain is not the unit of effort [LW · GW], and money is a, if not the, unit of caring [LW · GW].
One is not always looking to succeed.
Here is a common type of problem.
You are married, and want to take your spouse out to a romantic dinner. You can choose the place your spouse loves best, or the place you love best.
A middle manager is working their way up the corporate ladder, and must choose how to improve the factory's production of widgets. He can choose a policy that improperly maintains the factory and likely eventually poisons the water supply, or a policy that would prevent that, but at additional cost.
A politician can choose between a bill that helps the general population, or a bill that helps their biggest campaign contributor.
A start-up founder can choose between building a quality product without technical debt, or creating a hockey stick graph that will appeal to investors.
You can choose to make a gift yourself. This would be expensive in terms of your time and be lower quality, but be more thoughtful and cheaper. Or you could buy one in the store, which would be higher quality and take less time, but feel generic and cost more money.
You are cold. You can buy a cheap scarf, or a better but more expensive scarf.
These are trade-offs. Sometimes one choice will be made, sometimes the other.
Now consider another type of problem.
You are married, and want to take your spouse out to a romantic dinner. You could choose a place you both love, or a place that only they love. You choose the place you don’t love, so they will know how much you love them. After all, you didn’t come here for the food.
A middle manager must choose how to improve widget production. He can choose a policy that improperly maintains the factory and likely eventually poisons the water supply, or a policy that would prevent that at no additional cost. He knows that when he is up for promotion, management will want to know the higher ups can count on him to make the quarterly numbers look good and not concern himself with long term issues or what consequences might fall on others. If he cared about not poisoning the water supply, he would not be a reliable political ally. Thus, he chooses the neglectful policy.
A politician can choose between two messages that affirm their loyalty: Advocating a beneficial policy, or advocating a useless and wasteful policy. They choose useless, because the motive behind advocating a beneficial policy is ambiguous. Maybe they wanted people to benefit!
A start-up founder can choose between building a quality product without technical debt and creating a hockey stick graph with it, or building a superficially similar low-quality product with technical debt and using that. Both are equally likely to create the necessary graph, and both take about the same amount of effort, time and money. They choose the low-quality product, so the venture capitalists can appreciate their devotion to creating a hockey stick graph.
You can choose between making a gift and buying a gift. You choose to make a gift, because you are rich and buying something from a store would be meaningless. Or you are poor, so you buy something from a store, because a handmade gift wouldn’t show you care.
Old joke: One Russian oligarch says, “Look at my scarf! I bought it for ten thousand rubles.” The other says, “That’s nothing, I bought the same scarf for twenty thousand rubles.”
What these examples have in common is that there is a strictly better action and a strictly worse action, in terms of physical consequences. In each case, the protagonist chooses the worse action because it is worse.
This choice is made as a costly signal. In particular, to avoid motive ambiguity.
If you choose something better over something worse, you will be suspected of doing so because it was better rather than worse.
If you choose something worse over something better, not only do you show how little you care about making the world better, you show that you care more about people noticing and trusting this lack of caring. It shows your values and loyalties.
In the first example, you care more about your spouse’s view of how much you care about their experience than you care about your own experience.
In the second example, you care more about being seen as focused on your own success than you care about outcomes you won’t be responsible for.
In the third example, you care more about being seen as loyal than about improving the world by being helpful.
In the fourth example, you care about those making decisions over your fate believing that you will focus on the things they believe the next person deciding your fate will care about, so they can turn a profit. They don’t want you distracted by things like product quality.
In the old joke, the oligarchs want to show they have money to burn, and that they care a lot about showing they have lots of money to burn. That they actively want to Get Got to show they don’t care. If someone thought the scarf was bought for mundane utility, that wouldn’t do at all.
One highly effective way to get many people to spend money is to give them a choice to either spend the money, or be slightly socially awkward and admit that they care about not spending the money. Don’t ask what the wine costs, it would ruin the evening.
The warning of Out to Get You is insufficiently cynical. The motive is often not to get your resources, and is instead purely to make your life worse.
Conflict theorists are often insufficiently cynical. We hope the war is about whether to enrich the wealthy or help the people. Often the war is over whether to aim to destroy the wealthy, or aim to hurt the people.
In simulacra terms, these effects are strongest when one desires to be seen as motivated on level three, but these dynamics are potentially present to an important extent for motivations at all levels. Note also that one is not motivated by this dynamic to destroy something unless you might plausibly favor it. If and only if everybody knows you don’t care about poisoning the river, it is safe to not poison it.
This generalizes to time, to pain, to every preference. Hence anything that wants your loyalty will do its best to ask you to sacrifice and destroy everything you hold dear, because you care about it, to demonstrate you care more about other things.
Worst of all, none of this assumes a zero-sum mentality. At all.
Such behavior doesn’t even need one.
If one has a true zero-sum mentality, as many do, or one maps all results onto a zero-sum social dynamic, all of this is overthinking. All becomes simple. Your loss is my gain, so I want to cause you as much loss as possible.
Pain need not be the unit of effort if it is the unit of scoring.
The world would be better if people treated more situations like the first set of problems, and fewer situations like the second set of problems. How to do that?
58 comments
Comments sorted by top scores.
comment by AnnaSalamon · 2020-12-19T18:20:27.076Z · LW(p) · GW(p)
I tried looking for situations that have many of the same formal features, but that I am glad exist (whereas I intuitively dislike the examples in the OP and wish they happened less). I got:
-
Some kids set out to spend the night outdoors somewhere. They consider spending it in a known part of the woods, or in an extra scary/risky-seeming part of the woods. They choose the latter because it is risky. (And because they care more about demonstrating to themselves and each other that they can tolerate risk, than about safety.)
-
A boy wants to show a girl that he cares about her, as he asks her on a first date. He has no idea which flowers she does/doesn’t like. He considers getting her some common (easy to gather) flowers, or seeking out and giving her some rare flowers. (Everybody in town knows which flowers are common and which are rare.) He decides on the rare flowers, specifically because it’ll cause her to know that he spent more time gathering them, which will send a louder “hey I’m interested in you” signal. (This is maybe the same as your gift-giving example, which I feel differently good/bad about depending on its context.)
-
Critch used to recommend (probably still does) that if a person has e.g. just found out they’re allergic to chocolate, and is planning to try to give up chocolate but expecting to find this tricky, that they go buy unusually fancy/expensive/delicious chocolate packages, open them up, smell them, and then visibly throw them away without taking a bite. (Thus creating more ability to throw out similar things later, if they are e.g. given fancy chocolates as a gift.) For this exercise, the more expensive/fancy/good the chocolates are (and thus, the larger the waste in buying them to throw away), the better.
-
Some jugglers get interested in a new, slippery kind of ball that is particularly difficult to juggle. It is not any more visually beautiful to watch a person juggle (at least, if you’re an untrained novice who doesn’t know how the slippery balls work) — it is just harder. Many of the jugglers, when faced with the choice between training on an easier kind of ball or training on the difficult slippery ones, choose to train on the difficult slippery ones, specifically because it is harder (i.e., specifically because it’ll take more time/effort/attention to learn to successfully juggle them).
-
My anonymous friend Alex was bullied a lot in school. Then, at age 18, Alex was in a large group house with me and… made cookies… all through the afternoon. A huge batch of cookies. Twelve hungry people, including me, sat around smelling the cookies, expecting the cookies. (Food-sharing was ubiquitous in this group house.) Then, when the cookies were made and we showed up in the kitchen to ask for them, Alex… said they were the boss of the cookies (having made them) and that no one could have any! (Or at least, not for several hours.) A riot practically broke out. I was pretty upset! Partly in myself, and partly because lots of other people were pretty upset! But later Alex said they were still glad they did this, basically to show themselves that they didn’t always have to lose conflicts with other people, or have to be powerless in social contexts. And I still know Alex, and haven’t known them to do anything similar again. So I think I in fact feel positively about this all things considered. And the costs/value-destruction was pretty intrinsic to how Alex’s self-demonstration worked — Alex had no other reason to prefer “making everybody wait hours without cookies” to “letting people eat the cookies”, besides that people strongly dis-prefered waiting. (This is a true story.)
(I don’t have a specific thesis in sharing these. It’s just a step for me in trying to puzzle out what the dynamics in the OP’s examples actually are, and I want to share my scratch work. More scratch work coming probably.)
Replies from: Yoav Ravid, rudi-c
↑ comment by Yoav Ravid · 2020-12-19T21:15:19.128Z · LW(p) · GW(p)
two themes i see in your examples that i don't think were prevalent in the OP are challenge and effort, especially in 1, 2 and 3. but 3 and 5 are also about overcoming a challenge; in one it's the challenge of giving up chocolate, and in the other it's standing up to people.
in almost none of these does "the protagonist chooses the worse action because it is worse". sleeping in a more risky part of the forest isn't strictly worse, there are benefits to it. spending time finding a rare flower isn't worse than using a common flower since a rare flower is likely to have more value.
The example that seems closest is the 5th, though as i understand it he tries to prove something more to himself than to anybody else.
Replies from: AnnaSalamon, Viliam
↑ comment by AnnaSalamon · 2020-12-20T17:54:05.489Z · LW(p) · GW(p)
Yoav, I think there might be a difference like the one you’re gesturing at, but if so, I don’t think Zvi’s formalism quite captures it. If someone can find a formalism that does capture it, I’m interested. (Making that need for a fuller formalism explicit, is sort of what I’m hoping for with the examples.)
For example, I disagree, if I reason formally/rigidly, with “in almost none of these does "the protagonist chooses the worse action because it is worse". sleeping in a more risky part of the forest isn't strictly worse, there are benefits to it. spending time finding a rare flower isn't worse than using a common flower since a rare flower is likely to have more value.”
Re: the flowers, I can well imagine a situation where the boy chooses the [takes more work and has more opportunity cost to gather (“rarer”)] flower because it [visibly takes more cost to gather it], and “because it has more costs” is IMO pretty clearly an example of “because it is worse” in the sense in the OP (similar to: “because it costs more rubles to buy this scarf-that-is-not-better”). To make a pure example: It’s true that the flower’s rarity itself makes it more valuable to look at (since folks have seen it less) — but we can imagine a case where it is slightly uglier and slightly less pretty-smelling, to at least mostly offset this, so that a naive observer from a different region who did not know what was rare/common would generally prefer the other. Anyhow, in that scenario it still seems pretty plausible that the boy’s romantic gesture would work better with the rarer flower, as the girl says to her gossipy girlfriends “He somehow brought me 50 [rareflowers]!” And they’re like “What? Where did he possibly get those from?” And she’s like “Yeah. I don’t even like [rareflowertype] all that much, but still, gathering those! I guess he must really like me!” (I.e., in this scenario, the boy having spent hours of his time roving around seeking flowers, which seems naively/formally like a cost, is itself a thing that the girl is appreciating.)
Similarly, “riskier part of the forest” means “part of the forest with less safety” — and while, yes, the forest-part surely has other positive features, I can well imagine a context where the “has less safety” is itself the main attraction to the kids (so they can demonstrate their daring). (And “has less safety / has greater risk of injury” seems formally like an example of “worse”. If it isn’t, I need better models of what “worse” means here.)
If these are actually disanalogous, maybe you could spell out the disanalogy more? I realize I didn’t engage here with your point about “challenge” and “effort” (which seem like costs on some reckoning, but costs that we sometimes give a positive-affect term to, and for a reason).
↑ comment by Viliam · 2020-12-20T20:43:04.190Z · LW(p) · GW(p)
You could apply some positive spin to Zvi's examples, too. For example, how difficult it is to establish trust and cooperation between people, and how necessary it is for the survival of civilization, and therefore a manager who sacrifices something valuable in order to signal loyalty is actually the good guy.
↑ comment by Rudi C (rudi-c) · 2021-02-15T20:01:52.055Z · LW(p) · GW(p)
I dislike all of your examples, too.
Replies from: Raemon
↑ comment by Raemon · 2021-02-15T21:31:49.456Z · LW(p) · GW(p)
What do you dislike about them?
Replies from: rudi-c
↑ comment by Rudi C (rudi-c) · 2021-02-18T22:30:57.661Z · LW(p) · GW(p)
They aren't that different from the examples Zvi has mentioned. They all burn value to achieve an outcome that would be achievable via honesty and/or self-control (i.e., they burn value to coordinate externally or internally), which has always felt very bad to me (even if it is the best available option, I feel strong disgust towards it). What is more annoying is when the people involved do not seem to appreciate the burned value as a bad thing and instead "romanticize" it. The examples that feel the worst are the ones where they can actually focus the cost of the signal on something (more) useful. For example, the jugglers could juggle more balls or exotically-shaped balls, instead of wasting their energy on just increasing the difficulty.
Replies from: Raemon, PoignardAzur
↑ comment by PoignardAzur · 2021-02-21T15:27:25.812Z · LW(p) · GW(p)
What is more annoying is when the people involved do not seem to appreciate the burned value as a bad thing and instead "romanticize" it
Nailed it.
I think people on this forum all share some variation of the same experience, where they observe that everyone around them is used to doing something inefficient, get told by their peers "no, don't do the efficient thing, people will think less of you", they do the efficient thing, and their life gets straightforwardly easier and nobody notices or cares.
This is especially the case for social norms, when you can get your social circle to buy in. Eg people have really silly ideas about romance and gender roles and patriarchal ideals (eg the girl has to shave and put on makeup, the guy has to pay everything, everyone must be coy and never communicate), but if you and the person you date agree to communicate openly and respect each other and don't do that crap... well, in my limited experience, it's just easier and more fun?
My point is, it's amazing how much value you can not-burn when you stop romanticizing burning value.
comment by newcom · 2020-12-16T12:53:59.300Z · LW(p) · GW(p)
I remember encountering this same idea in Orwell's '1984':
‘How does one man assert his power over another, Winston?’
Winston thought. ‘By making him suffer,’ he said.
‘Exactly. By making him suffer. Obedience is not enough.
Unless he is suffering, how can you be sure that he is obeying your will and not his own?'
comment by ADifferentAnonymous · 2020-12-15T23:36:03.242Z · LW(p) · GW(p)
One (admittedly idealistic) solution would be to spread awareness of this dynamic and its toxicity. You can't totally expunge it that way, but you could make it less prevalent (i.e. upper-middle managers probably can't be saved, but it might get hard to find enough somewhat-competent lower-middle managers who will play along).
What would it look like to achieve an actually-meaningful level of awareness? I would say "there is a widely-known and negative-affect-laden term for the behavior of making strictly-worse choices to prove loyalty".
Writing this, I realized that the central example of "negative-sum behavior to prove loyalty" is hazing. (I think some forms of hazing involve useful menial labor, but classic frat-style hazing is unpleasant for the pledges with no tangible benefit to anyone else). It seems conceivable to get the term self-hazing into circulation to describe cases like the one in OP, to the point that someone might notice when they're being expected to self-haze and question whether they really want to go down that road.
comment by Ericf · 2020-12-15T20:24:59.954Z · LW(p) · GW(p)
This is obvious, but, if you are in a position of power, however small, seek to reward people who are making the first kind of decisions rather than the second. Even (or especially?) if the object-level decision is one you disagree with.
A crucial example that is both high leverage and ubiquitous is being a parent and what behavior you reward in your kids.
Replies from: romeostevensit
↑ comment by romeostevensit · 2020-12-17T04:30:46.672Z · LW(p) · GW(p)
This suggests a great compression. Your parents and teachers incentivized things that made their lives easy. This explains some of the idiosyncrasy of the things that seem at odds with material that would have actually helped you prepare for life.
comment by Unreal · 2020-12-27T19:17:23.724Z · LW(p) · GW(p)
I tried to directly respond to the points in this post. But the framing of this post is so off-kilter from mine that it's confusing to try to "meet" your frame while maintaining my own.
I'm just going to have to give my own take, and let people be confused how the two integrate.
//
I'm the middle manager with the widget factory. I imagine ~two possible scenarios:
- My higher-ups actually would want me to make the saner choice, but I am personally very confused, or there's been a lot of miscommunication / lack of clarity. Maybe my ability to read their signals of what they'd want is very wrong, or they just don't give much feedback at all. (I've seen this happen lots. This seems totally plausible to me.) In MY world, the higher-ups need me to demonstrate loyalty by showing an ability to make the worse decision. I am desperate for their approval and am confused about how to get it. In REALITY, the game of dishing out approval is not something the company optimizes for, and so the higher-ups haven't a clue about my internal drama. If they could read my mind, they'd pity me. They assume that I'm just learning the ropes, and they're willing to eat the cost of some middle manager making mistakes, and they don't have time to fix all the errors. They let it go without comment (perhaps a sign of the problem). I am twiddling my fingers in anxiety, hoping they like me / don't fire me.
- My higher-ups will in fact promote me for making the insane choice, and my read about that is totally correct. In this world, lots of systems and people are corrupt. Bribery, cheating, scams, embezzling, etc. are prevalent. There isn't much rhyme or reason to choice-making because people can NOT be expected to be rewarded for doing good work; the only reward is having the right connections. If you don't have the right connections, you're probably fucked. Think the USSR under Stalin. In this world, basically everyone is insane. They've let integrity go out the window. "This is why we can't have nice things." Clean effort does not result in reward. People resort to other means.
I honestly don't see the example making any sense outside of something similar to the above scenarios, unless you remove information from the system (e.g. I don't know that the water-poisoning factory costs the SAME as the not-water-poisoning factory) or there's info left out ("no additional cost" isn't taking into account things like legibility or robustness or something).
//
I'm the spouse planning dinner. I can imagine the following scenarios, which carry some element of insanity:
- I have some core belief that Love = Suffering or Love = Sacrifice. ("Core belief" is a technical term here.) This leads me to doing some insane things like always doing the thing I don't want to do, whenever I get a sense my partner wants that something, with the expectation that this is "how love works" or something. My partner does not want me to do this, but I'm kind of stuck / can't get distance from the pattern.
- My partner is stuck in a zero-sum mentality about romantic relationships. They get upset when I don't make grand gestures or display active self-sacrifice. They feel insecure in the relationship. When I seem happy at their "expense", they assume I don't love them / care about them. I feel obligated to pick places they like even when I don't like them, and I am carrying some slight resentment about it. It doesn't feel worth rocking the boat. In fact, they do seem more relaxed when I seem "less openly excited or happy"—because to them, this means I need them more, and they feel less likely to be abandoned or rejected. (In this case, let's say that this is the wrong assumption in this particular relationship, but hasn't been wrong in past relationships, and they are dealing with trauma in the area.)
- Or, as is all too common, both my partner and I are carrying some kind of trauma-based insanity about relationships. We're codependent and playing out a weird stereotypical trope of sacrificing our own preferences for the sake of the other. We don't actually see a problem with this, if you asked us, but we're both suffering more than otherwise.
I can imagine the following scenarios, which are not insane:
- I enjoy giving my spouse the gift of taking them to their beloved restaurant, regardless of my own preferences. I see this as practicing generosity. I put my preference aside, but this leaves no negative residue. I'm genuinely happy to take them to a restaurant they love. In our relationship, we don't prioritize "having good experiences" as much as we do giving / building / quality attention / etc.
- I am practicing relinquishing my preferences because I want to be able to enjoy myself regardless of particular external circumstances. I believe it's good to take each moment as it is and appreciate the present, over necessarily trying to make myself experience particular things. Giving my spouse a nice dinner is an excellent bonus.
- If we always went to the restaurant we both love, we'd get less variety of restaurant choices for our romantic dinners. So sometimes I pick the restaurant they love, and sometimes they pick the restaurant I love, and sometimes we pick the restaurant we both love. Overall, this is value-positive in the long term.
//
These examples are outputs from my model of how reality works, from what I can tell.
Replies from: Unreal
↑ comment by Unreal · 2020-12-27T19:44:01.380Z · LW(p) · GW(p)
Oh, hm, I think I am noticing something:
I don't like what the post is trying to reify because I think it predicts reality less well than whatever I am using to predict reality
Or
Maybe it predicts reality "okay" but I feel it adds an unnecessary layer of being bitter / cynical / paranoid when this is not particularly healthy or useful.
The latter thing feels like a serious cost to me.
I'm not trying to promote naive optimism either.
But the world this post paints feels "dark" in a way that seems less accurate than available alternatives.
And also seems a bit more likely to lead to adversarial dynamics / Game A / finite games / giving up on oneself and others / less love / less faith / less goodwill / less trying. That is a serious cost to me.
I'm guessing that the counterpoint is... NOT seeing the world this way will lead to getting taken advantage of, good guys losing, needless loss, value degradation, etc. ?
comment by DirectedEvolution (AllAmericanBreakfast) · 2021-12-24T20:03:30.935Z · LW(p) · GW(p)
This post is based on Moral Mazes, a 1988 book describing "the way bureaucracy shapes moral consciousness" in US corporate managers. The central point is that it's possible to imagine relationship and organization structures in which unnecessarily destructive behavior, to self or others, is used as a costly signal of loyalty or status.
Zvi titles the post after what he says these behaviors are trying to avoid, motive ambiguity. He doesn't label the dynamic itself, so I'll refer to it here as "disambiguating destruction" (DD). Before proceeding, I want to emphasize that DD is referring to truly pointless destruction for the exclusive purpose of signaling a specific motive, and not to an unavoidable tradeoff.
This raises several questions, which the post doesn't answer.
1. Do pointlessly destructive behaviors typically succeed at reducing or eliminating motive ambiguity?
2. Do they do a better job of reducing motive ambiguity than alternatives?
3. How common is DD in particular types of institutions, such as relationships, cultures, businesses, and governments?
4. How do people manage to avoid feeling pressured into DD?
5. What exactly are the components of DD, so that we can know what to look for when deciding whether to enter into a certain organization or relationship?
6. Are there other explanations for the components of DD, and how would we distinguish between DD and other possible interpretations of the component behaviors?
We might resort to a couple explanations for (4), the question of how to avoid DD. One is the conjunction of empathy and act utilitarianism. My girlfriend says she wouldn't want to go to a restaurant only she loves, even if the purpose was to show I love her. Part of her enjoyment is my enjoyment of the experience. If she loved the restaurant only she loves so much that she was desperate to go, then she could go with someone else. She finds the whole idea of destructive disambiguation of love to be distinctly unappealing. The more aware she is of a DD dynamic, the more distasteful she finds it.
Another explanation for (4) is constitutional theory. In a state of nature, people would tend to form communities in which all had agreed not to pressure each other into DD dynamics. So rejecting DD behavior is a way of defending the social contract, which supersedes whatever signaling benefit the DD behavior was supposed to contribute to in particular cases.
As such, for a DD dynamic to exist consistently, it probably needs to be in a low-empathy situation, in which there is little to no ability to enforce a social contract, where the value of motive disambiguation is very high, and where there are destructive acts that can successfully reduce ambiguity. It could also be the result of stupidity: people falsely believing that DD will accomplish what it is described as accomplishing here and bring them some selfish benefit. As such, a description of DD might constitute an infohazard of sorts - though it seems to me to be very far from anything like sharing the genome of smallpox or the nuclear launch codes.
It seems challenging to successfully disambiguate motives with destructive behavior, because it exposes the person enacting DD to perceptions of incompetence. Maybe they poisoned the water supply because they wanted to show loyalty, or maybe they did it because they're too incompetent to know how to maintain the factory without causing pollution. Maybe they took you to a restaurant they hate because they love you, or maybe it's because they're insecure or trying to use it as some sort of a bargaining chip for future negotiations.
All that said, I can imagine scenarios in which a person makes a correct judgment that DD will work as described, brings them the promised benefits, and provides supporting evidence in favor of DD as an effective strategy for acquiring status. This does indeed seem bad. One way to explain how this could be done is the idea of a cover story, a reasonable-sounding explanation for the behavior that all involved know is false, and serves simultaneously as evidence to external parties that the behavior was reasonable and evidence to internal parties that the behavior ought to be interpreted as DD.
But we also need to explain why DD is not only the best way to affirm loyalty, but the best overall way to affirm the things that loyalty is meant to accomplish. For example, loyalty is often meant to contribute to group survival, such as among soldiers. Even if DD is the best way to display a soldier's loyalty, it could be that it has side effects that diminish the health of the group, such as diminishing the appeal of military service to potential recruits.
Band of Brothers is a dramatic reenactment of the true story of Easy Company, paratroopers in WWII. Their captain, Herbert Sobel, put them through all kinds of hazing rituals. Examples include offering the company a big spaghetti dinner only to surprise them in the middle by forcing them to run up a mountain, causing the soldiers to vomit halfway up; or forcing the soldiers to inflict cruel punishments on each other.
Ultimately, Sobel loses the loyalty of his troops, not due to his strictness, but due to his incompetence in making command decisions in training exercises. They mutiny, and Sobel is replaced. Despite their dislike of Sobel, some soldiers think he did cause the soldiers to become particularly loyal to each other, though there are also many other mechanisms by which the soldiers were both selected for loyalty and had opportunities to demonstrate it. It's not at all clear that Sobel's pointlessly harsh treatment was overall beneficial to the military, though his rigor as a trainer does seem to have been appreciated.
This suggests another explanation for DD, which is that the person enacting it may find the capacity to be strict and punitive to be useful in other contexts, and just not have the discernment to distinguish between appropriate and inappropriate contexts. Or DD might be a form of training in order to enable the perpetrator to enact strictness in non-destructive contexts. To really work as a motive disambiguation, these also need to be ruled out.
Taken all together, we have lots of reasons to think that DD ought to be rare.
- We can create constitutions, explicit or implicit, that bar DD. These constitutions can be on many group levels: a management group, corporation, and whole industry might create multi-layered anti-DD constitutions, on the level of explicit contracts or, more likely, implicit or informal norms.
- DD needs to affirm loyalty without creating overall negative side effects for the group or for the person sending the signal.
- DD needs to reduce ambiguity on net, and destructive behaviors invite explanations other than loyalty signals.
All of these suggest to me that we should have a low prior that any given act is best explained as an example of DD. First, we'd want to resort to explanations such as tradeoffs, incompetence, negative outcomes from risky decisions, value differences, an attempt to "rescue" a mistake or example of incompetence and reframe it as a signal of loyalty post-hoc, and our own lack of information. These are common causes of destructiveness. Loyalty-based relationships are extremely common, so destructive behavior will often be associated with a loyalty-based relationship, and test those bonds of loyalty. There are so many plausible alternative explanations that we should require some extraordinary evidence that a particular behavior is a central case of destructive disambiguation.
I think a counterargument here is that DD is the "cover story" hypothesis I referred to earlier. If we are supposing that DD is common enough to be a serious problem in our society, perhaps we are also assuming that cover stories are effective enough that it will be very hard to find examples that are obvious to outsiders. It's a little like sex in a stereotypical "Victorian" society: it's obviously happening (we see the children), but everybody's taking pains to disguise it, and if you didn't know that sex existed, you might never figure it out and it would sound very implausible if explained to you.
Of course, with the sex analogy, even Victorians would eventually figure out that sex existed. Likewise, if DD is happening all the time, then people ought to be able to consult their lived experience to find ready examples. I personally find it to be alien to my experience, and others seem to feel the same way in the comment section here. My girlfriend goes further, feeling that it's not only alien, but a repugnant concept to even discuss. I can recall conversations with my uncle, who spent his career in the wine industry, and he says his company took a strong stand against doing any business in corrupt countries, even if there were profits to be made. These are just anecdotes, but I think it's necessary to start by resorting to them in this case.
If DD has been operationalized and subjected to scientific study, I would be interested to read the studies. But I would subject them to scrutiny along the lines I've outlined here. It would be a disturbing finding if robust evidence led us to conclude that DD is pervasive, but I suspect that we'd find out that the disturbing features of human behavior have alternative explanations.
comment by AnnaSalamon · 2020-12-19T01:55:30.348Z · LW(p) · GW(p)
Thanks for writing this; I really appreciated getting to read this post, especially the examples, which seem helpful for trying to bring something into focus.
comment by AnnaSalamon · 2020-12-21T00:15:37.176Z · LW(p) · GW(p)
I wish I had a better model of how common it is to actually have people destroying large amounts of value on purpose, for reasons such as those in the OP. And if it is common, I wish I had a clearer model of why. (I suspect it’s common. I’ve suspected this since reading Atlas Shrugged ~a year ago. But I’m not sure, and I don’t have a good mechanistic model, and in my ignorance of the ‘true names’ of this stuff it seems really easy to blatantly misunderstand.)
To try to pinpoint what I don’t understand:
- I agree that we care about each others’ motives. And that we infer these from actions. And that we care about others’ models of our motives, and that this creates funny second-order incentives for our actions.
- I also agree that there are scenarios, such as those in the OP, where these second-order incentives can lead a person to destroy a bunch of value (by failing to not-poison the river, etc.)
- I’m uncertain of the frequency and distribution of such incentive-impacts. Are these second-order incentives mostly toward “actions with worse physical consequences”, or are they neutral or positive in expectation (while still negative in some instances)? (I agree there are straight-forward examples where they’re toward worse, and that Zvi lists some of these. But there are also examples the other way. Like, in Zvi’s widget-factory example, I could imagine the middle manager choosing the policy that will avoid poisoning the water (whether or not he directly cares about it) so that other people will think he is the sort of person who cares about good things, and will be more likely to ally with him in contexts where you want someone who cares about good things (marriage; friendships; some jobs).)
- If the distribution does have a large number of cases where second-order incentives push toward destroying value — why, exactly?
Differently put: in the OP, Zvi writes “none of this assumes a zero-sum mentality. At all.” But, if we aren’t assuming a zero-sum mentality, why would the middle manager’s boss (in the widgets example) want to make sure he doesn’t care about the environment? Like, one possibility is that the boss thinks things are zero-sum, and expects a tradeoff between “is this guy liable to worry about random non-company stuff in future situations” and “is this guy doing what’ll help the company” in future cases. But that seems like an example of the boss expecting zero-sum situations, or at least of expecting tradeoffs. And Zvi is saying that this isn’t the thing.
(And one possibility for why such dynamics would be common, if they are, is if it is common to have zero-sum situations where “putting effort toward increasing X” would interfere with a person’s ability to increase Y. But I think this isn’t quite what Zvi is positing.)
Replies from: AnnaSalamon, daig
↑ comment by AnnaSalamon · 2020-12-21T00:20:27.158Z · LW(p) · GW(p)
Here is a different model (besides the zero-sum effort tradeoffs model) of why value-losses such as those in the OP might be common and large. The different model is something like “compartmentalization has large upsides for coordination/predictability/simplicity, and is also easier than most other ways of getting control/predictability”. Or in more detail: having components of a (person/organization/anything) that act on anything unexpected means having components of a (person/organization/anything) that are harder to control, which decreases the (person/organization/etc.)’s ability to pull off maneuvers that require predictability, and is costly. (I think this might be Zvi’s model from not-this-post, but I’m not sure, and I’d like to elaborate it in my own words regardless.)
Under this model, real value is actually created via enforcing this kind of predictability (at least, if the organization is being used to make value at all), although at real cost.
Examples/analogies that (correctly or not) are parts of why I find this “compartmentalization/simplicity has major upsides” model plausible:
A. I read/heard somewhere that most of our muscles are used to selectively inhibit other muscles, so as to be able to do fine motor coordination. And that this is one of the differences between humans and chimps, where humans have piles of muscles inhibiting each other to allow fine motor skill, and chimps went more for uncoordinated strength. (Can someone confirm whether this is true?) (The connection may be opaque here. But it seems to me that most of our psychologies are a bit like this — we could’ve had simple, strongly felt, drives and creative impulses, but civilized humans are instead bunches of macros that selectively track and inhibit other macros; and this seems to me to have been becoming more and more true across the last few thousand years in the West.)
B. If I imagine hiring someone for CFAR who has a history of activism along the lines of [redacted, sorry I’m a coward but at least it’s better than omitting the example entirely], I feel pause, not because of “what if the new staff member puts some of their effort into that instead of about CFAR’s goals” but because of “what it makes it more difficult and higher-overhead to coordinate within CFAR, and leaves us with a bunch of, um, what shows up on my internal radar as ‘messes we have to navigate’ all the time, where I have to somehow trick them into going along with the program, and the overhead of this makes it harder to think and talk and get things done together.” (To be clear, parts of this seem bad to me, and this isn’t how I would try to strategize toward me and CFAR doing things; in particular it seems to highlight some flaw in my current ontology to parse divergent opinions as ‘messes I have to navigate, to trick them into going along with the program’. I, um, do not want you to think I am endorsing this and to get to blame or attack me for it, but I do want to get to talk about it.)
C. I think a surgeon would typically be advised not to try to operate on their own child, because it is somehow harder to have steady hands and mind (highly predictable-to-oneself and coordinated behavior) if a strong desire/fear is activated (even one as aligned with “do good surgery on my child” as the desire/fear for one’s child’s life). (Is this true? I haven’t fact-checked it. I have heard poker players say that it’s harder to play well for high stakes. Also the book “The inner game of tennis” claims that wanting to win at tennis impairs most adults’ ability to learn tennis.)
D. In the OP’s “don’t ask what the wine costs, it would ruin the evening” example: it seems to me that there really is a dynamic where asking what the wine costs can at least mildly harm my own experience of the evening, and that for me (and I imagine quite a few others), the harm is not that asking the wine’s price reveals a stable, persistent fact that the asker cares about money. Rather, the harm is asking it breaks the compartmentalization that was part of how I knew how to be “in flow” for the evening. Like, after the asking, I’m thinking about money, or thinking about others thinking about money, and I’m somehow less good at candlelight and music and being with my and others’ experiences when that is happening. (This is why Zvi describes it as “slightly socially awkward” — awkwardness is what it feels like when a flow is disrupted.) (We can tell that the thing that’s up here in my experience of the evening isn’t about longer-term money-indicators, partly because I have no aversion to hearing the same people talk about caring about money in most other contexts.) (I’m sure straight money-signaling, as in Zvi’s interpretation, also happens with some people about the wine. But the different “compartmentalization is better for romantic evenings” dynamic I’m describing can happen too.)
E. This is the example I care most about, and am least likely to do justice to. Um: there’s a lot of pain/caring that I find myself dissociating from, most of the time. (Though I can only see it in flashes.) For example, it’s hard for me to think about death. Or about AI risk, probably because of the “death” part. Or about how much I love people. Or how I hope I have a good life, and how much I want children. I can think words about these things, but I tend to control my breathing while doing so, to become analytic, to look at things a bit from a distance, to sort of emulate the thoughts rather than have them.
It seems to me my dissociating here is driven less by raw pain/caring being unpleasant (although sometimes it is), and more by the fact that when I am experiencing raw pain/caring it is harder to predict/plan/control my own behavior, and that lack of predictability is at least somewhat scary and risky. Plus it is somehow tricky for other people to be around, such that I would usually feel impolite doing it and avoid visibly caring in certain ways for that reason. (See the example F.)
F. [Kind of like E, but as an interpersonal dynamic] When other people show raw caring, it’s hard for me to avoid dissociating. Especially if it’s to do with something where… the feeling inside my head is something like “I want this, I am this, but I can’t have this. It isn’t mine. Fear. Maybe I’m [inadequate/embarrassing/unable forever]?” Example: a couple days ago, some friends and I watched “It’s a Wonderful Life”, which I hadn’t seen before. And afterward a friend and I were raw and talking, and my friend was, I can’t remember, but talking about wanting to be warm and human or something. And it was really hard for me not to just dissociate — I kept having all kinds of nonsense arguments pop into my head for why I should think about my laundry, why I get analytic-and-in-control-of-the-conversation, why I should interrupt him. And later on, my friend was “triggered” about a different thing, and I noticed it was the same [fear/blankness/tendency-to-want-to-dissociate] in me, in response to those other active currents. And I commented on it to my friend, and we noticed that the thing I was instinctively doing in response to that fear in me, was kind of sending my friend “this is weird/bad what you’re doing” signals. So. Um. Maybe there’s a thing where, once people start keeping raw pain/caring/love/anything at distance, if they run into other people who aren’t, they send those people “you’re being crazy/bad” signals whenever those other people aren’t keeping their own raw at a distance. And so we socialize each other to dissociate.
(This connects still to the compartmentalization-as-an-aid-to-predictability thesis, because part of the trouble with e.g. somebody else talking about death, or being raw, or triggered, is that it makes it harder for me to dissociate, and so makes me less predictable/controllable to me.)
G. This brings me to an alternate possible mechanics of Zvi’s “carefully not going out of one’s way not to poison the river with the widget factory” example. If lots of people at WidgetCorp want to contribute to (the environment / good things broadly), but are dissociated from their desire, it might mess with their dissociation (and, thus, their control and predictability-to-themselves of their own behavior, plus WidgetCorp’s ability to predict and control them) if anybody else visibly cares about the river (or even, visibly does a thing one could mistake as caring about the river). And so we get the pressure that Zvi mentions, here and in his “moral mazes” sequence. (And we can analogously derive a pressure not to be a “goody two-shoes” among kids who kind of want to be good still, but also kind of want to be free from that wanting. And the pressure not to be too vulnerably sincere in one’s romantic/sexual encounters, and to instead aspire to cynicism. And more generally (in the extreme, at least) to attack anyone who acts from intact caring. Sort of like an anti-epistemology [? · GW], but more exactly like an anti-caring.)
Replies from: AnnaSalamon, AnnaSalamon, Pongo
↑ comment by AnnaSalamon · 2020-12-21T01:11:20.862Z · LW(p) · GW(p)
Extending the E-F-G thing: perhaps we could say “every cause/movement/organization wants to become a pile of defanged pica and ostentatious normalcy (think: Rowling's Dursleys) that won’t be disruptive to anyone”, as a complementary/slightly-contrasting description to “every cause wants to be a cult”.
In the extreme, this “removing of all impulses that’ll interfere with predictability-and-control” is clearly not useful for anything. But in medium-sized amounts, I think predictability/controllability-via-compartmentalization can actually help with creating physical goods, as with the surgeon or poker player or tennis player who has an easier time when they are not in touch with an intense desire for a particular outcome. And I think we see it sometimes in large amounts — large enough that they are net-detrimental to the original goal of the person/cause/business/etc.
Maybe it’s something like:
-
Being able to predict and control one’s own actions, or one’s organization’s actions, is in fact useful. You can use this to e.g. take three coordinated actions in sequence that will collectively but not individually move you toward a desired outcome, such as putting on your shoes in order to walk to the store in order to be able to buy pasta in order to be able to cook it for dinner. (I do not think one can do this kind of multi-step action nearly as well without prediction-and-control of one’s behavior.)
-
Because it is useful, we build apparatuses that support it. (“Egos” within individual humans; structures of management and deferral and conformity within organizations and businesses and social movements.)
-
Even though prediction-and-control is genuinely useful, a central planning entity doing prediction-and-control will tend to overestimate the usefulness of its having more prediction-and-control, and to underestimate the usefulness of aspects of behavior that it does not control. This is because it can see what it’s trying to do, and can’t see what other people are trying to do. Also, its actions are specifically those that its own map says will help, and others’ actions are those which their own maps say will help, which will bring in winner’s curse-type dynamics. So central planning will tend to over-invest in increasing its own control, and to under-invest in allowing unpredictability/disruption/alternate pulls on behavior.
-
… ? [I think the above three bullet points are probably a real thing that happens. But it doesn’t seem to take my imagination all the way to full-on moral mazes (for organizations), or to individuals who are full-on trying to prop up their ego at the expense of everything. Maybe it does and I’m underestimating it? Or maybe there are added steps after my third bullet point of some sort?]
↑ comment by AnnaSalamon · 2020-12-21T04:59:48.975Z · LW(p) · GW(p)
[Epistemic status: I’m not confident of any of this; I just want better models and am trying to articulate mine in case that helps. Also, all of my comments on this post are as much a response to the book “Moral Mazes” as to the OP.]
Let’s say that A is good, and that B is also good. (E.g. equality and freedom, or diversity and families, or current lives saved and rationality, or any of a huge number of things.) Let’s consider how the desire-for-A and the desire-for-B might avoid having their plans/goal-achievement disrupted by one another.
In principle, you could build a larger model that explains how to trade off between A and B — a model that subsumes A and B as special cases of a more general good. And then the A-desire and the B-desire could peacefully co-exist and share influence within this larger structure, without disrupting each others’ ability to predict-and-control, or to achieve their goals. (And thereby, they could both stably remain part of your psyche. Or part of your organization. Or part of your subcultural movement. Or part of your overarching civilization’s sense of moral decency. Or whatever. Without one part of your civilization’s sense of moral decency (or etc.) straining to pitch another part of that same sense of moral decency overboard.)
Building a larger model subsuming both the A-is-good and B-is-good models is hard, though. It requires a bunch of knowledge/wisdom/culture to find a workable model of that sort. Especially if you want everybody to coordinate within the same larger model (so that the predict-and-control thing can keep working). A simpler thing you could attempt instead is to just ban desire B. Then desire-for-B won’t get in the way of your attempt to coordinate around achieving desire A. (Or, in more degenerate cases, it won’t get in the way of your attempt to coordinate around you-the-coordinator staying coordinating, with all specific goals mostly forgotten about.) This “just abolish desire B” thing is much simpler to design. So this simpler strategy (“disown and dissociate from one of the good things”) can be reinvented even in ignorance, and can also be shared/evangelized for pretty easily, without needing to share a whole culture.
Separately: once upon a time, there used to be a shared deep culture that gave all humans in a given tribe a whole bunch of shared assumptions about how everything fit together. In that context, it was easier to create/remember/invoke common scaffolds allowing A-desire and B-desire to work together without disrupting each others’ ability to do predictability-and-control. You did not have to build such scaffolds from scratch.
Printing presses and cities and travel/commerce/conversation between many different tribes, and individuals acquiring more tools for creating new thoughts/patterns/associations, and… social media… later made different people assume different things, or fewer things. It became extra-hard to create shared templates in which A-desire and B-desire can coordinate. And so we more often saw social movements / culture wars in which the teams (which each have some memory of some fragment of what’s good) are bent on destroying one another, lest one another destroy their ability to do prediction-and-control in preservation of their own fragment of what’s good. “Humpty Dumpty sat on a wall…”
(Because the ability to do the simpler “dissociate from desire B, ban desire B” move does not break down as quickly, with increasing cultural diversity/fragmentation, as the ability to do the more difficult “assimilate A and B into a common larger good” move.)
↑ comment by AnnaSalamon · 2020-12-21T00:26:40.582Z · LW(p) · GW(p)
Also: it seems to me that “G” might be the generator of the thing Zvi calls “Moloch’s Army.” Zvi writes: [? · GW]
Moloch’s Army …. I still can’t find a way into this without sounding crazy. The result of this is that the sequence talks about maze behaviors and mazes as if their creation and operation are motivated by self-interest. That’s far from the whole picture.
There is a mindset that instinctively and unselfishly opposes everything of value. This mindset is not only not doing calculations to see what it would prefer or might accomplish. It does not even believe in the concept of calculation (or numbers, or logic, or reason) at all. It cares about virtues and emotional resonances, not consequences. To do this is to have the maze nature. This mindset instinctively promotes others that share the mindset, and is much more common and impactful among the powerful than one would think. Among other things, the actions of those with this mindset are vital to the creation, support and strengthening of mazes.
Until a proper description of that is finished, my job is not done. So far, it continues to elude me. I am not giving up.
For whatever it’s worth, I am also inclined to think that something like “Moloch’s Army” describes something important in the world. As sort-of-mentioned, Atlas Shrugged more or less convinced me of this by highlighting a bunch of psychological dynamics that, once highlighted, I seemed to see in myself and others. But I am still confused about it (whether it’s real; what it’s made of insofar as there is a real thing like that). And G is my best current attempt to derive it.
↑ comment by Pongo · 2020-12-21T01:43:31.347Z · LW(p) · GW(p)
Reminds me of The Costs of Reliability
Replies from: AnnaSalamon
↑ comment by AnnaSalamon · 2020-12-21T05:13:52.954Z · LW(p) · GW(p)
Oh man; that article is excellent and I hadn't seen it. If anyone's wondering whether to click the link: highly recommend.
Replies from: Benito
↑ comment by Ben Pace (Benito) · 2020-12-21T05:26:36.471Z · LW(p) · GW(p)
It's currently up for review [LW · GW] if anyone wants to write a review :)
↑ comment by daig · 2020-12-21T09:32:30.054Z · LW(p) · GW(p)
Here is my take:
Value is a function of the entire state space, and can't be neatly decomposed as a sum of subgames.
Rather (dually), value on ("quotient") subgames must be confluent with the total value on the joint game.
Eg, there's an "enjoying the restaurant food" game, and a "making your spouse happy" game, but the joint game of "enjoying a restaurant with your spouse" has more moves available, and more value terms that don't show up in either game, like "be a committed couple".
"Confluence" here means that what you need to forget to zoom in on the "enjoying the restaurant food" subgame causes your value judgement of "enjoying the restaurant food" and "enjoying a restaurant with your spouse, ignoring everything except food" to agree.
The individual subgames aren't "closed", they were never closed, their value only makes sense in a larger context, because the primitives used to define that value refer to the larger context. From the perspective of the larger game, no value is "destroyed", it only appears that way when projecting into the subgames, which were only ever virtual.
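For what it's worth, here is a minimal toy sketch of this framing in Python. All the state variables, numbers, and the exact form of the value functions below are made-up assumptions for illustration (not anything from the post or this comment): the joint game's value contains a "committed couple" term that no subgame sees, yet the food subgame's value is confluent with "joint value, ignoring everything except food".

```python
# Toy illustration: a joint "restaurant with spouse" state has more
# dimensions than either subgame, so its value doesn't decompose.
from dataclasses import dataclass

@dataclass
class JointState:
    food_quality: float      # visible to the "enjoying the food" subgame
    spouse_happiness: float  # visible to the "making your spouse happy" subgame
    shared_ritual: float     # only exists in the joint game ("being a committed couple")

def value_food(food_quality: float) -> float:
    return food_quality

def value_spouse(spouse_happiness: float) -> float:
    return spouse_happiness

def value_joint(s: JointState) -> float:
    # Not a sum of subgame values: the last term only exists at the joint level.
    return s.food_quality + s.spouse_happiness + 2.0 * s.shared_ritual

s = JointState(food_quality=0.3, spouse_happiness=0.9, shared_ritual=0.8)

# Confluence: projecting the joint state onto the food subgame and evaluating
# there agrees with "joint value, ignoring everything except food".
assert value_food(s.food_quality) == s.food_quality

# Non-decomposability: the joint value exceeds the sum of the subgame values,
# so nothing is "destroyed" at the joint level even if a subgame looks worse.
assert value_joint(s) > value_food(s.food_quality) + value_spouse(s.spouse_happiness)
```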
↑ comment by Rudi C (rudi-c) · 2021-02-15T20:28:25.669Z · LW(p) · GW(p)
This is just saying the coordination that results from the destruction of value is more valuable than the value destroyed, externalities disregarded. The post is about finding cheaper coordination strategies and internalizing more of the externalities.
comment by Unnamed · 2020-12-20T04:50:48.591Z · LW(p) · GW(p)
I notice that many of these examples involve something like vice signalling - the person is destroying value in order to demonstrate that they have a quality which I (and most LWers) consider to be undesirable. It seems bad for the middle manager, politician, and start-up founder to aim for the shallow thing that they're prioritizing. And then they take the extra step of destroying something I do value in order to accentuate that. It's a combination that feels real icky.
The romantic dinner and the handmade gift examples don't have that feature. And those two cases feel more ambiguous - I can imagine versions of these where it seems good that the person is doing these things, and versions where it seems bad. I can picture a friend telling me "I took my partner out for their birthday to a restaurant that I don't really care for, but they just adore" and it being a heartwarming story, where it seems like something good is happening for my friend and their relationship.
Katja's recent post on Opposite Attractions [LW · GW] points to one thing that seems good about taking your spouse to a restaurant that only they love - your spouse's life is full of things that you both like, and perhaps starved of certain styles of things that they like and you don't, and they could be getting something out of drawing from that latter category even if there's some sense in which they don't like it any more than a thing in the "you both like it" category. And there's something good about them getting some of those things within the relationship, of having the ground that the relationship covers not be limited to the intersection of "the things you like" and "the things your spouse likes" - your relationship mostly takes advantage of that part of the territory but sometimes it's good to explore other parts of it together. And I could imagine you bringing an attitude to the meal where you're tuned in to your spouse's experience, trying to take pleasure in how much they enjoy the meal, rather than being focused on your own food. And (this is the part where paying a cost to resolve motive ambiguity comes in directly) going to a restaurant that they love and you don't like seems like it can help set the context for this kind of thing - putting the information in common knowledge between you two that this is a special occasion, and what sort of special occasion it's trying to be. It seems harder to hit some of these notes in a context where both people love the food.
(There are also versions of the one-sided romantic dinner which seem worse, and good relationships where this version doesn't fit or isn't necessary.)
comment by jimmy · 2020-12-16T06:58:37.757Z · LW(p) · GW(p)
The world would be better if people treated more situations like the first set of problems, and less situations like the second set of problems. How to do that?
It sounds like the question is essentially "How to do hard mode?".
On a small scale, it's not super intimidating. Just do the right thing and take your spouse to the place you both like. Be someone who cares about finding good outcomes for both of you, and marry someone who sees it. There are real gains here, and with the annoyance you save yourself by not sacrificing for the sake of showing sacrifice, you can maintain motivation to sacrifice when the payoff is actually worth it -- and to find opportunities to do so. When you can see that you don't actually need to display that costly signal, it's usually a pretty easy choice to make.
Forging a deeper and more efficient connection does require allowing potential for conflict so that you can distinguish yourself from the person who is only doing things for shallow/selfish reasons. Distinguish yourself by showing willingness to entertain such accusations, knowing that the truth will show through. Invite those conflicts when you have enough slack to turn it into play, and keep enough slack that you can. "Does this dress make my ass look fat?" -- can you pull off "The *dress* doesn't, no" and get a laugh, or are you stuck where there's only one acceptable answer? If you can, demonstrate that it's okay to suggest the "unthinkable" and keep poking until you can find the edge of the envelope. If not, or when you've reached the point where you can't, then stop and ask why. Address the problem. Rinse and repeat with the next harder thing, as you become ready to.
On a larger scale, it gets a lot harder. You can no longer afford to just walk away from anyone who doesn't already mostly get it, and you don't have so much time and attention to work with. There are things you can do, and I don't want to suggest that it's "not doable". You can start to presuppose the framings that you've worked hard to create and justify in the past, using stories from past experience and social proof to support them in the cases where you're challenged -- which might be less than you think, since the ability to presuppose such things without preemptively flinching defensively can be powerful subcommunication. You can start to build social groups/communities/institutions to scale these principles, and spread to the extent that your extra ability to direct motivation towards good outcomes allows you to out-compete the alternatives.
I just don't get the impression that there's any "easy" answer. If you want people to donate to your political campaign even though you won't play favorites like the other guy will, I think you genuinely have to be able to expect that your donors will be more personally rewarded by the larger total pie and the recognition of doing the right thing than they would be in the alternative where they donate to have someone fight to give them more of a smaller pie -- and are perceived however you let that be perceived.
↑ comment by ChristianKl · 2020-12-16T13:22:20.696Z · LW(p) · GW(p)
If you want people to donate to your political campaign even though you won't play favorites like the other guy will
The problem here is not about whether or not you play favorites but how you can demonstrate that you are likely going to play favorites in the future. A politician who has a lunch where they chat with their largest donor and then does what the donor tells them to do is also demonstrating loyalty.
You only need to signal loyalty via symbolic action when you can't provide value directly.
comment by ChristianKl · 2020-12-15T21:55:35.802Z · LW(p) · GW(p)
In the first example, you care more about your spouse’s view of how much you care about their experience than you care about your own experience.
This is an interesting example. It assumes that your spouse doesn't care about you caring about your own experience.
There's the discourse that asserts that women don't like nice guys, and I think it applies here. For a majority of women that behavior isn't attractive.
↑ comment by supposedlyfun · 2020-12-15T22:11:07.686Z · LW(p) · GW(p)
Re your last claim, can you provide evidence other than the existence of the discourse? If we're just comparing firsthand experience, mine has been the exact opposite of "For a majority of women that behavior isn't attractive."
↑ comment by ChristianKl · 2020-12-15T22:39:45.326Z · LW(p) · GW(p)
I don't think this is the place to delve deeper into what makes behavior sexually attractive.
↑ comment by Baisius · 2020-12-16T15:26:05.585Z · LW(p) · GW(p)
I don't think this is the place for unsupported theses.
↑ comment by ChristianKl · 2020-12-16T16:43:54.502Z · LW(p) · GW(p)
The thesis that the discourse exists is not unsupported. There's no reason to discuss every topic at every depth.
Your comment might be a good practical example of motive ambiguity. There's no useful value in censoring theses in cases where there are reasons not to discuss them in depth. At the same time, speaking up to censor serves as signaling.
I would expect Zvi here to be not pro-censorship but pro-speaking in cases where there are barriers to speaking.
↑ comment by Baisius · 2020-12-17T17:40:08.642Z · LW(p) · GW(p)
I wasn't trying to suggest that the discourse doesn't exist. I agree its existence is self-evident. Nor was I trying to censor you. I think your first comment was a good one.
My point was that you made a controversial statement ("For a majority of women that behavior isn't attractive.") for which the only evidence you offered was the existence of a controversy surrounding it. Then when someone told you that didn't match their experience (a comment that was upvoted significantly, indicating this is likely true for many people, as it is for me) and asked you to support that claim, you declined to offer any more evidence of your original claim. That is the behavior that I don't think belongs here, and I stand by that.
↑ comment by ChristianKl · 2020-12-17T18:47:18.176Z · LW(p) · GW(p)
Having a community norm that says that when someone provides evidence against a claim you are making, you are obligated to spend more time making the claim in more depth, is bad. It's generally better if people spend their time arguing in a way they believe to be productive and good for LessWrong.
You find plenty of cases where people don't spend more time and effort when challenged by other people. The difference in this case is that I explained why I made that decision explicitly.
A position that it's bad to do that explicitly, instead of just not replying, is one for censorship.
↑ comment by Baisius · 2020-12-17T19:17:13.498Z · LW(p) · GW(p)
I think if you don't want to debate or defend controversial statements it's probably best to just not make them in the first place.
↑ comment by ChristianKl · 2020-12-17T20:06:36.081Z · LW(p) · GW(p)
That's the kind of thinking that drove Eliezer from posting on LessWrong. I think it's pretty harmful for our community.
↑ comment by Baisius · 2020-12-17T20:36:04.112Z · LW(p) · GW(p)
That's interesting. Do you have specific examples? I'd be interested in the context where he said that. I do agree that if that reduced Eliezer's contribution, that was a significant negative impact.
My concern is more rooted in status. LW is already associated enough with fringe ideas, I don't think it does us well to be seen endorsing low-status things without evidence. Imagine (as an extreme example, I'm not trying to equate the two) if I said something about Flat Earth Theory and then if I was challenged on it said that I didn't think it was an appropriate place to discuss it. That's... not a good look.
↑ comment by ChristianKl · 2020-12-17T21:46:32.990Z · LW(p) · GW(p)
My concern is more rooted in status.
There are two issues here. First, being too concerned with signaling status is exactly what this sequence challenges.
Second, even if status is your core concern, moving a discussion that's about an abstract principle to one that's about personal romantic experience is a low-status move.
The key question of Zvi's post is "The world would be better if people treated more situations like the first set of problems, and less situations like the second set of problems. How to do that?".
I gave an example of how to think about one of the examples in the second set to move it towards the first. I pointed to the way out of the maze. Yes, going out of the maze is low status, but that's the point. Thinking well through the example takes a bit of a Straussian perspective.
↑ comment by wslafleur · 2022-04-11T15:55:26.594Z · LW(p) · GW(p)
This is incorrect. The example only assumes that your only consideration was your spouse's view of how much you care about their experience. It makes no assumptions about what your spouse actually cares about.
Your claim, that for the majority of women that behavior isn't attractive, is just superfluous editorializing and I support Baisius's attempt to pressure you into more constructive discourse.
comment by Luke Allen (luke-allen) · 2020-12-15T20:58:40.213Z · LW(p) · GW(p)
I posit four basic categories of value: resources, experiences, esteem, and agency. You've listed a group of esteem games.
In the first example, let's assume your spouse likes the other restaurant significantly better than the one you both like. You deny yourself a specific potential positive experience by using your agency to grant her a more positive experience, and in doing so, you obtain the esteem of the sacrificial as well as the esteem of the generous in your spouse's eyes.
If it's a healthy relationship, that esteem is a side benefit which gets folded into the gestalt benefit of relational harmony enhanced through generosity. But if the esteem is the main goal, the sacrificer is exhibiting unhealthy codependent behavior. Alternatively, if your spouse likes both restaurants equally well, the esteem is the only benefit; gaming that system is more obvious and may negate the granting of any esteem.
I won't go through the other examples, but in each case, your actions are a gamble, a statement about yourself that pays off with esteem from someone whose esteem you value.
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2020-12-19T22:50:17.039Z · LW(p) · GW(p)
This is a nice counterpoint to Zvi's equally insightful OP. Your spouse might enjoy both the mutually-enjoyed restaurant and the one only she likes. Yet the fact that only she likes it means that she may rarely get to go. If your preferences were irrelevant, she might go to each of them half the time.
Variety is the spice of life: she's lost something by only going to the one you both like.
Making a "sacrifice" to go to the restaurant only she likes isn't just about loyalty. Not only does it give her the extra pleasure of variety, it also displays flexibility and willingness to compromise, which might be helpful in a future decision where you want your preferences to be prioritized.
↑ comment by AnnaSalamon · 2020-12-20T19:49:27.114Z · LW(p) · GW(p)
Okay, this seems true to me (i.e., it seems true to me that some real value is being created by displaying flexibility, willingness to compromise, etc.). (And thanks; I missed it when reading Luke's post above, but it clicked better when reading your reply.)
The thing is, there's somehow a confusing set of games that get played sometimes in cases like the restaurant example that are not about these esteem benefits, but are instead some sort of pica [? · GW] of "look how much I'm sacrificing; now clearly I love you hugely, and I am the victim here unless you give me something similar really a lot, and you owe me" or "look how hard we are working on the start-up; clearly it won't be our fault when the deadline is missed" or various other weird games that seem bad. I guess Luke is referring to this with his phrase about "but if the esteem is the main goal the sacrificer is exhibiting unhealthy codependent behavior." But what is that, exactly?
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2020-12-20T21:11:10.096Z · LW(p) · GW(p)
Pica seems like it's Goodhart's Law, with the added failure that the metric being maximized isn't even clearly related to the problem you're trying to solve.
- Evaluate your startup by the sheer effort you're putting in? That's Goodhart's Law. Evaluate it by how cool the office looks? That's pica.
- Evaluate your relationship by the sheer amount of physical affection? That's Goodhart's Law. Evaluate it by how much misery you put each other through "for love?" That's pica.
I think our culture is starting to produce a suite of relationship metrics that more directly resemble relationship success and failure, such as "the five love languages" or "the four horsemen of the relationship apocalypse." This lets people upgrade from pica to Goodhart's Law.
“More touch, gifts, quality time...” and “less stonewalling, defensiveness, criticism, contempt” make problems too if done mindlessly. When can I take time for myself? What if my partner's annoying me? People have to think about when those metrics stop being helpful. But it's a better place to start than pica metrics.
↑ comment by Yoav Ravid · 2021-03-21T07:41:51.945Z · LW(p) · GW(p)
Really liked this connection, I added it to the pica tag [? · GW]
comment by romeostevensit · 2020-12-15T20:08:19.550Z · LW(p) · GW(p)
Different but related: kakonomics
comment by AnnaSalamon · 2021-01-22T05:01:02.336Z · LW(p) · GW(p)
You know how there's standard advice to frame desires / recommendations / policy proposals / etc. in positive rather than negative terms? (E.g., to say "It's good to X" and not "It's bad to Y"?)
I bet this is actually good advice, and that it's significantly about reducing the "doing costly things just to visibly not be like Y" dynamic Zvi is talking about. I bet it happens both between multiple people (as in Zvi's examples) and within a person (as in e.g. the examples in "pain is not the unit of effort [LW · GW]").
comment by ChristianKl · 2020-12-15T22:27:53.682Z · LW(p) · GW(p)
A politician can choose between two messages that affirm their loyalty: Advocating a beneficial policy, or advocating a useless and wasteful policy. They choose useless, because the motive behind advocating a beneficial policy is ambiguous. Maybe they wanted people to benefit!
I think this is a bad description. If we take the EMA's decision to approve the COVID-19 vaccine later than other agencies, they had the choice between a beneficial policy (early approval) and a useless policy (late approval). They chose the late approval to signal that they care strongly about safety, displaying their loyalty to the ideal of safety.
I don't think anybody who's the target of the signal is supposed to think "The EMA didn't care about benefiting people".
Politicians signal loyalty to lobbyists by doing exactly what the lobbyists tell them. If a politician takes the amendment that a lobbyist gives them and adds his own words to it that make it a more wasteful policy for the general population, that's not a sign of loyalty towards the lobbyist. It's rather a sign if he pushes the amendment without changing any words. And maybe not asking boring questions such as "what would be the effect if this amendment makes it into law?"
I can see politicians making laws to punish the outgroup and signaling tribal loyalty with that, but when it comes to that I doubt any of the target audience is supposed to think "the politician doesn't want the people to benefit".
Do you have an example where "Maybe they wanted people to benefit!" would actually be an important signal in Western politics?
comment by philh · 2021-12-25T22:48:21.885Z · LW(p) · GW(p)
Initial reaction: I like this post a lot. It's short, to the point. It has examples relating its concept to several different areas of life: relationships, business, politics, fashion. It demonstrates a fucky dynamic that in hindsight obviously game-theoretically exists, and gives me an "oh shit" reaction.
Meditating a bit on an itch I had: what this post doesn't tell me is how common this dynamic is, or how to detect when it's happening.
While writing this review: hm, is this dynamic meaningfully different from the idea of a costly signal?
Thinking about the examples:
You are married, and want to take your spouse out to a romantic dinner. You could choose a place you both love, or a place that only they love. You choose the place you don’t love, so they will know how much you love them. After all, you didn’t come here for the food. ... you care more about your spouse’s view of how much you care about their experience than you care about your own experience.
Seems believably common, but also kind of like a fairly normal not-very-fucky costly signal.
(And, your spouse probably wants to come to both restaurants sometimes. And they probably come to the place-only-they-love less than they ideally would, because you don't love it.)
Gets fuckier if they'd enjoy the place-you-both-love more than the place-only-they-love. Then the cost isn't just your own enjoyment; it's a signal you care about something, at the cost of the thing you supposedly care about. Still believable, but feels less common.
A middle manager must choose how to improve widget production. He can choose a policy that improperly maintains the factory and likely eventually poisons the water supply, or a policy that would prevent that at no additional cost. He knows that when he is up for promotion, management will want to know the higher ups can count on him to make the quarterly numbers look good and not concern himself with long term issues or what consequences might fall on others. If he cared about not poisoning the water supply, he would not be a reliable political ally. Thus, he chooses the neglectful policy. ... you care more about being seen as focused on your own success than you care about outcomes you won’t be responsible for.
(Didn't work for me, but I think the link is supposed to highlight the following phrase in the second paragraph: "The Immoral Mazes sequence is an exploration of what causes that hell, and how and why it has spread so widely in our society. Its thesis is that this is the result of a vicious cycle arising from competitive pressures among those competing for their own organizational advancement.")
Ah, so I think this has an important difference from a normal "costly signal", in that the cost is to other people.
I read the immoral mazes sequence at the time, but don't remember in depth. I could believe that it reliably attests that this sort of thing happens a lot.
A politician can choose between two messages that affirm their loyalty [to their biggest campaign contributor]: Advocating a beneficial [to the general public] policy, or advocating a useless and wasteful policy. They choose useless, because the motive behind advocating a beneficial policy is ambiguous. Maybe they wanted people to benefit! ... you care more about being seen as loyal than about improving the world by being helpful.
Again, the cost is to other people, not the politician.
How often do politicians get choices like this? I think a more concrete example would be helpful for me here, even if it was fictional. (But non-fiction would be better, and if it's difficult to find one... that doesn't mean it doesn't happen, real-world complications might mean we don't know about them and/or can't be sure that's what's going on with them. But still, if it's difficult to find a real-world example of this, that says something bad.)
(Is the point of the link to the pledge of allegiance simply that that's a signal of loyalty that costs a bit of time? I'm not American and didn't read the article in depth, I could be missing something.)
A start-up founder can choose between building a quality product without technical debt and creating a hockey stick graph with it, or building a superficially similar low-quality product with technical debt and using that. Both are equally likely to create the necessary graph, and both take about the same amount of effort, time and money. They choose the low-quality product, so the venture capitalists can appreciate their devotion to creating a hockey stick graph. ... you care about those making decisions over your fate believing that you will focus on the things they believe the next person deciding your fate will care about, so they can turn a profit. They don’t want you distracted by things like product quality.
The claimed effect here is: the more investors know about your code quality, the more incentive you have to write bad code. I could tell an opposite story, where if they know you have bad code they expect that to hurt your odds of maintaining a hockey-stick graph.
So this example only seems to hold up if
- Investors know about your code quality.
- They care about what your code quality says about loyalty to their interests, more than what it says about your ability to satisfy their interests.
I'm not convinced on either point.
If it does hold up... yeah, it's fucky in that it's another case of "signal you care about a thing by damaging the thing".
You can choose between making a gift and buying a gift. You choose to make a gift, because you are rich and buying something from a store would be meaningless. Or you are poor, so you buy something from a store, because a handmade gift wouldn’t show you care.
Mostly just seems like another fairly straightforward costly signal? Not very fucky.
Old joke: One Russian oligarch says, “Look at my scarf! I bought it for ten thousand rubles.” The other says, “That’s nothing, I bought the same scarf for twenty thousand rubles.” ... the oligarchs want to show they have money to burn, and that they care a lot about showing they have lots of money to burn. That they actively want to Get Got to show they don’t care. If someone thought the scarf was bought for mundane utility, that wouldn’t do at all.
Same as above. I think this is what economists call a Veblen good.
I don't think the article convinces me to be significantly more concerned about regular costly signals than I currently am, where the cost is entirely on the person sending the signal. That's two and a half of the examples, and they seem like the least-fucky. I think... if I'm supposed to care about those, I'd probably want an article that specifically focuses on them, rather than mixing them with fuckier things.
The ones where (some of) the cost is to other people, or to the thing-supposedly-cared-about, are more worrying. But there's also not enough detail here to convince me those are very common. And the example closest to my area of expertise is the one I don't believe. I feel like a lot of my worry about these, in my initial reading, came from association with the others, that having separated out I find more-believable and less-fucky. I don't think it's deliberate, but I feel vaguely Motte-and-Baileyed.
Writing this has moved me from positive on the review to slightly negative. (Strictly speaking I didn't cast a vote before, but I was expecting it would have been positive.) But I won't be shocked if something I've missed brings me back to positive.
comment by Jasnah Kholin (Jasnah_Kholin) · 2021-11-21T08:37:19.116Z · LW(p) · GW(p)
What is cool about this post is its self-demonstrating nature. Like the maze, it gives an explanation that yields a less precise map of the world, with less predictive power than more standard models. And it gives a more pessimistic and cynical explanation. You trade off your precision and predictive power to be cynical and pessimistic!
And now I can formalize what I didn't like about this branch of rationality: it looks like cynicism is their bottom line. They are already convinced in the depths of their hearts that the most pessimistic theory is true, and now they are ready to bite the bitter bullet.
But from the outside, I see no supporting evidence. This is not how people behave. The predictions created by such theories are wrong. It's such a strange thing to write on the bottom line! But being unpleasant does not make something true, any more than being pleasant makes it true.
And as the Wizard's First Rule says, people believe things they want to believe or things they are afraid to believe...
comment by Amir Bolous (amir-gamil) · 2021-01-23T14:00:35.628Z · LW(p) · GW(p)
I think the reason situations of the second type arise is misaligned incentives. When you care more about pleasing some other party, the best action is not necessarily the one that does the most good, but the one that best pleases the other person.
The cost incurred from doing so is then paid either by you (e.g. I pay the price of choosing a restaurant I hate) or by society (in the factory example, the water is poisoned because of your choice).